Hello Ed Pegg, Benjamin Chaffin, Sven Kahrkling, Tomas Rokicki, Mike Paterson and John Dethridge, (Tomas, can you please forward this to John Dethridge?)

Thank you all for your fascinating results on Paterson's Worm! I had no idea that work had resumed on this until Ed Pegg mentioned his MAA article. As I replied to Ed, the recent results are intriguing, amazing, and gratifying.

I sometimes wonder what a great inventor or scientist in years past would think if they could return today and see the fruits of their work. What would, say, Edison think of today's power grid, halogen lights and LEDs? It seems sad that some great discoverers did not live to see the acceptance or application of their work. Not to imply that I or Paterson's Worm are great stuff, but I did yearn to know what happens with those "uncertain" worms. As time went by, it seemed unlikely that I would ever know. I appreciate your efforts and results not only for their own sake, but also as very satisfying answers to the questions raised 30 years ago.

The results are amazing. I would never have guessed that worms would terminate after such long paths as were computed recently. I am awed that such a simple set of rules leads to such intricate behavior and such huge paths. The images are wonderful.

I'll try to clear up questions about my old work, and then I have some questions about the new efforts.

Sven Kahrkling is correct that there can be 2^20 * 3^10 * 5 = 309,586,821,120 distinct rule codes for a Paterson-type worm on a grid with arbitrary segments already eaten. That complexity makes it very difficult to say what would happen on a grid with boundaries, or with holes, or with multiple worms. That is why I used the simplification that the grid is infinite and completely uneaten. With that simplification, the behavior (gentle or sharp bend) at a virgin node determines what situations a given worm can encounter when it meets its own path, that is, when it arrives at a node with 2 segments eaten.
A gentle-bend worm cannot encounter a sharp bend, and vice versa. So each of the four arrival directions into a gentle bend can be paired with (an arbitrary) one of the four arrival directions into a sharp bend. The rule code then needs to designate an exit choice for each of those four sets, not for all eight path-meeting situations. That is the motivation for the grouping.

Extending the grouping idea, a worm can return to the origin for the first time at most once. Therefore, each of the (four) choices at one of the (five) situations of the first return to the origin can be grouped with a choice for the other four situations. This creates just 4 rules for the first return to the origin. Similarly, the second return to the origin can occur at most once, so its situations can be grouped as well, leaving just 2 possible rules for the second return to the origin.

In all, there are 7 sets of situations: 1 for no segments eaten, 4 for path meeting, 1 for the first return to the origin, and 1 for the second return to the origin. The grouping forms the sets so that only one situation in a set can occur in the life of a given worm. By construction of the sets, a Paterson-like worm (behavior dependent only on the eaten/uneaten pattern at the current node, normalized for entry direction) on an infinite virgin isometric grid cannot require any rules outside the grouping restrictions of the sets.

I bit-coded the choice for each set into a field of an octal number, so each rule code was 4 octal digits. The correspondence to Gardner's notation is simple and one-to-one.

For the path-meeting sets, it is easy to tell which of the two situations in a set applies, because in each set one situation is for gentle-bend worms and one is for sharp-bend worms. In the sets for the first, and especially the second, return to the origin, it is not immediately obvious which (if any) of the situations occurs.
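Both counts can be checked with a few lines of arithmetic. The grouped factorization below is my own reading of the sets just described (a trivial straight-ahead choice excluded at the virgin node, and 3 uneaten exits in each path-meeting set), offered as a sketch rather than a quotation from the memo:

```python
# Rule codes for a grid with arbitrary segments already eaten,
# as counted above.
full = 2**20 * 3**10 * 5
assert full == 309_586_821_120

# One reading of the grouped count: 2 bend choices at a virgin node
# (straight ahead gives a trivial worm), 3 exit choices in each of the
# 4 path-meeting sets, 4 rules for the first return to the origin,
# and 2 for the second.
grouped = 2 * 3**4 * 4 * 2
assert grouped == 1296
```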
The worm's path must be calculated until it returns to the origin the first or second time, to know which situation occurs. In this respect, the grouping into sets is like Sven Kahrkling's dynamic definition of rules.

Like Sven, I wanted a concise definition of a worm's rule code. However, I wanted the ability to write down the complete set of all the rules that a given worm might need to apply, before starting to compute the path. I believe I saw the issue as two separate problems: enumerate all possible rules, and then analyze each of them. I was content with the reduction from 3*10^11 to 1296, which seemed a tractable number. Only after computing results did I look at redundancy among the 1296.

Sven's approach of dynamically defined rule codes is a good way to enumerate the distinct worms. It is not simple, though, to translate one of Sven's dynamic rule codes into the set of rules that the worm will employ: to evaluate the meaning of the Nth digit, the path must be computed up to the point where the Nth new situation arises. Fortunately, Sven shows the correspondence between his codes and Gardner's codes on his web site.

Another drawback of the dynamic rule codes is that for a worm of unknown length with fewer than 7 digits in its rule code so far, it is unknown whether the rule code is complete, and therefore whether it is one code or (implicitly) more than one. Tomas Rokicki found a case of this for 104202.

In fact, Tomas's results on 104202 seem to answer a question I had: what is the greatest length at which any worm exercises one of its rules for the first time? This is the same as asking for the greatest length at which a digit is needed in Sven's dynamic rule code. The only other candidates are the 3 remaining uncertain-length worms, and Sven lists their full 7-digit rule codes at lengths well below the one at which 104202 needs its final (seventh) rule.

Considering the usage of rules leads to a simple observation.
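To make the dynamic-rule-code bookkeeping concrete, here is a toy simulator. It is a sketch of my own, not anyone's actual program; the axial coordinates, the scan order of exits, and the example choice function are all assumptions. The point is only that each genuinely new situation appends one "digit" to the code, exactly when it is first needed, and that the same list records when each rule is first employed:

```python
# Six direction steps on the isometric (triangular) grid, in axial
# coordinates; direction k and direction (k + 3) % 6 are opposites.
DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def segment(node, d):
    """Undirected grid segment leaving `node` in direction d, plus its far end."""
    nbr = (node[0] + DIRS[d][0], node[1] + DIRS[d][1])
    return frozenset((node, nbr)), nbr

def run_worm(choose, max_steps=1000):
    """Simulate a Paterson-like worm. `choose(pattern)` picks a relative
    turn 1..5 whose segment is uneaten; pattern[k] says whether the
    segment at relative direction k (measured from the reversed entry
    direction) is already eaten, so pattern[0] is always the segment
    just traversed. Returns the path length and the dynamic rule code:
    (situation, chosen turn) pairs in order of first occurrence."""
    eaten = set()
    rules = {}   # situation (pattern) -> chosen relative turn
    code = []    # rules in the order they were first needed
    node, heading = (0, 0), 0
    for step in range(max_steps):
        seg, node = segment(node, heading)
        eaten.add(seg)
        back = (heading + 3) % 6   # relative direction 0 = entry segment
        pattern = tuple(segment(node, (back + k) % 6)[0] in eaten
                        for k in range(6))
        if all(pattern):           # every incident segment eaten: worm dies
            return step + 1, code
        if pattern not in rules:   # a new situation: one more rule "digit"
            rules[pattern] = choose(pattern)
            code.append((pattern, rules[pattern]))
        heading = (back + rules[pattern]) % 6
    return max_steps, code

# Example: always take the first uneaten exit in a fixed scan order
# (an arbitrary worm, chosen only to exercise the machinery).
length, code = run_worm(lambda p: next(k for k in range(1, 6) if not p[k]))
```

Because behavior depends only on the eaten pattern at the current node, normalized for entry direction, `rules` is consulted before `choose`, so each situation contributes exactly one entry to `code` at the moment it first arises.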
A program can be instrumented to record when it first employs each rule. If the worm dies without employing one or more rules, it is one of a class of worms with identical tracks. This is an alternative to my after-the-fact analysis, and to Sven's tree-search enumeration of unique rule codes.

I hear that Benjamin Chaffin made the improvement of using 1 bit to represent 1 segment of the grid, plus clever disk mapping and swapping. Even those improvements seem insufficient to explain Tomas Rokicki's recent results. I hope Tomas will share with us how he computed the path lengths. Was it perhaps an analysis of meta-structure? I suspected way back in 1973 that progress would require a "theoretical" approach. I had in mind something like: "The worm eats a large trapezoidal area, limited by bumping into a previously eaten area or falling off an edge of an existing area. It finishes the area with position, orientation and edge parameters so-and-so, exiting that area in such-and-such a way." Then this trapezoidal area is added to the previously eaten areas, and the computation continues with the next area, so the path grows by large, parameterized areas instead of by individual segments. Did you use something like that, Tomas?

For reference, my paper, "Paterson's Worm", AI Memo 290, is on the web, thanks to the good folks at MIT. http://www.ai.mit.edu/research/publications/ has links to lots of AI Lab literature. For the worm paper in PDF format, see ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-290.pdf

The original version had a microfiche of the paper and all unique worm paths. Unfortunately, the fiche is long out of print, but Sven's web site shows each path nicely.

URLs of your recent work:
http://www.mathpuzzle.com/MAA/Worms.html
http://www.maa.org/editorial/mathgames/mathgames_10_24_03.html
http://www.accessv.com/~sven/worms/
http://wso.williams.edu/~bchaffin/patersons_worms/
http://tomas.rokicki.com/worms.html

Best regards to all,

-- Mike Beeler