Turing Tumble Community

Proof of Turing Completeness?

Two constructions are shown above, one in the middle and one at the end.
I thought that Turing completeness could be satisfied without loops, so I only considered the middle construction.

For the gear bit chains, counting from the top, I call the 3-part chain X, the 11-part chain Y, and the 8-part chain Z.

In the assumed sequence, one cycle corresponds to two balls passing.
In this case, the top phase is always X=1, and the bottom phase is always X=0.
For each bit pattern, I summarize the state change after two balls have passed.
X governs the phase: it is 1 during the top phase (the first, third, and so on) and 0 during the bottom phase (the second, fourth, and so on).
The three inputs of Rule 110 (l, m, n) correspond to the bits of Y and Z and the color of the ball.
I guessed the rule and filled in the bit pattern table.
For example, the top one (Blue, X=1, Y=0, Z=0) corresponds to (l=0, m=0, n=0), and its output, i.e. Z’ after one cycle, is 0.
The remaining input combinations (ball, Y, Z) and their Z’ outputs can be summarized in the same way, and they correspond to Rule 110.
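
In Python terms, a minimal sketch of the Rule 110 lookup being described might look like this; the function name and the Wolfram-code encoding are illustrative, not part of the board construction:

```python
# Rule 110 as a lookup: bit k of the number 110 is the output for the
# 3-bit neighborhood whose value is k.
RULE = 110

def rule110(l: int, m: int, n: int) -> int:
    """Output of Rule 110 for the neighborhood (l, m, n)."""
    return (RULE >> ((l << 2) | (m << 1) | n)) & 1

# The example above: (Blue, X=1, Y=0, Z=0) maps to (l=0, m=0, n=0), output 0.
assert rule110(0, 0, 0) == 0
# The only other neighborhoods with output 0 are (1, 1, 1) and (1, 0, 0).
assert rule110(1, 1, 1) == 0 and rule110(1, 0, 0) == 0
```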

(Red, X=1, Y=0, Z=1) and (Red, X=1, Y=1, Z=1) give different Z’ bits (i.e. different Rule 110 outputs) depending on the color of the ball in the bottom phase.
That color is determined by the unit one above.
Since neighboring units share some bit information, if the appropriate ball is output, the choice above will be the correct one.
I have not been able to confirm this exactly.

In any case, I agree that such a configuration is possible in principle, and I think this pattern is very promising.

Can anyone point me to something that might help bridge a gap in my understanding about Rule 110 and Turing completeness? Wikipedia etc. give a good description of Rule 110, and I think I can see how to implement Rule 110 in TT. But one thing I haven’t grasped is how it halts (or appears not to halt). Perhaps put another way: how can you tell when to take a reading? (I am assuming the idea is to start with a complex input pattern and end up with a (probably more) complex output pattern which then has to be interpreted/deciphered. Is that correct?) Isn’t halting a key part of being able to imitate a Turing machine? I sense I am missing some fundamental understanding, but I’m not sure what it is!

I’d have to look at it but I’m traveling at the moment… But I think it extremely likely that the simulation of a TM by rule 110 would reach stasis (i.e. the pattern would stop changing) in the event that the simulated TM came to a halt, but otherwise would continue to change with each next generation.

So, the “reading” would just be noticing when the represented pattern on the Turing Tumble stopped changing. Depending on how the Turing Tumble simulation of rule 110 is implemented, that might mean observing the pattern of bits on the board every time a new ball is released to see if at that moment, anything has changed since the previous ball was released. This assumes that each falling ball implements one step of rule 110, i.e., one generation.
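
To make the “reading” idea concrete, here is a rough Python sketch of my own, not the TT implementation: evolve a finite Rule 110 row one generation per released ball (with fixed-0 cells beyond the edges as a simplifying assumption) and stop when a generation leaves the row unchanged.

```python
RULE = 110  # Wolfram code for Rule 110

def step(row):
    """One generation of Rule 110 on a finite row (0s assumed beyond the edges)."""
    n = len(row)
    out = []
    for i in range(n):
        left = row[i - 1] if i > 0 else 0
        right = row[i + 1] if i < n - 1 else 0
        out.append((RULE >> ((left << 2) | (row[i] << 1) | right)) & 1)
    return out

def run_until_stasis(row, max_generations=10_000):
    """Release 'balls' (generations) until the pattern stops changing."""
    for gen in range(max_generations):
        nxt = step(row)
        if nxt == row:   # take the reading: nothing changed since the last ball
            return row, gen
        row = nxt
    return row, max_generations  # no stasis observed within the budget
```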

It would be lovely if the simulation were able to notice this itself, and send balls into interceptors, stopping the release of additional balls. But I don’t think this is required.

Ok - I just looked; I’d forgotten that the proof of universality of Rule 110 was not by simulating a TM directly, but by simulating a “cyclic tag system”, which in turn simulated “m-tag-systems”, which were shown to be able to simulate TM computations. We’d need to look at how the halting of a simulated TM manifested in the tag system, what that corresponded to in the cyclic tag system, and what that in turn corresponded to in the repeated application of Rule 110. Based on what little skimming I’ve done, I do still think it is a good bet that the halting behavior of a simulated TM would, after tracing it through the various simulations, correspond to an unchanging pattern for Rule 110. Perhaps a particular unchanging pattern.

So, going back to your original question, if you wanted to simulate a TM computation using TT, you’d set up an initial configuration for Rule 110 that corresponds to the cyclic tag system… that corresponds to the “m-tag-system”… that corresponds to the universal TM simulated in the original proof, and you’d wait for the pattern of bits on the board to stop changing when taking readings just before each ball is released.

Dear fellow Turing-Tumble Enthusiasts,

After a lot of thought, I believe I’ve got a direct simulation of a Turing Machine by a natural extension of the Turing Tumble game, and a proof that the simulation is correct. Thus, I believe this proves that Turing Tumble is Turing-complete.

The “natural extension” is just to allow a finitely wide but infinitely long board (as we’ve all discussed) to represent and hold the contents of the Turing machine’s tape.

Because there is no bottom of the board, the three different types of triggers (for the three ball colors used; this can likely be pared down to two with some work) can be placed anywhere. (The current construction uses an infinite number of triggers, but a more complicated and much less natural construction uses just finitely many of them, in fact just two.)

Here https://docs.google.com/document/d/1-2qzJL158Ipfpk-WaeUmlRIYFvZTuBZhvNHZ0cs7i4w/ is a link to a draft of a paper containing a detailed description of the construction and the proof. I’d very much appreciate your keen eyes and suggestions. (It seems imprudent to allow anybody with the link to comment, the internet being what it is and all, but I’d appreciate discussion in reply to this post, or via email to lennypitt@gmail.com.) Please don’t message me on this forum - I tend to miss those.

After suitable modification, I’ll look for an appropriate recreational or other journal to submit the paper to, so that it gets “properly” peer-reviewed. I’ve used quotes because I trust that the audience here will understand the construction better than a random reviewer, especially one who hasn’t played extensively with Turing Tumble. Why submit it then? I’d like to have the proof also carefully reviewed. Often one finds errors in a method or solution only when one tries to prove it correct. This happened a few times in this case: I thought I had everything down pat, then was half-way through a proof when I said “hmmm - but I need to show thus-and-so”. But alas, “thus-and-so” was not true for some subtle reason, and it was back to the drawing board…

So, I think the exposition will benefit enormously from having different audiences look it over. (Suggestions on where to send it to would be welcome.)

Of course, there may be subtle (or not-so-subtle) flaws in the construction, or the proof, or both. So, I ask you to help me determine whether that is the case, or otherwise to improve the exposition, and, if this doesn’t in fact work, to figure out how it can be modified so that it will.

While there are a lot of details and nuance, the high level ideas are not hard. I thought I’d share them here in a narrative to give an idea of what was done, without having to delve into minutiae. I hope the following makes sense - I’ve lost a lot of perspective, and this is partly stream-of-consciousness. The paper itself hopefully contains a lot of this intuition.

A finite control sits at the top of the board, and is responsible for keeping track of the current state of the TM, and executing the state-changing function.

This is fairly routine: a ball falls down a branching tree to determine what state the TM is in (stored in a gear bit), then based on the current symbol read, it changes the state by updating the gear bits holding the state information. Of course, how does it know the current symbol being scanned by the TM, especially when that symbol may be arbitrarily far away down the board? More on that below.

The “tape” consists of an infinite sequence of “3-cells”, each of which represents one tape cell of the TM. A 3-cell contains 3 bits of information: The current symbol S on the tape at the cell (0 or 1), whether or not the tape head is scanning the cell (call this bit h (= 0 no, 1 yes)), and also a bit p that keeps track of whether or not the tape head is scanning the PREVIOUS cell. The point of p is to “defy gravity”…

It should be intuitive that if the TM moves right (down), the simulation shouldn’t be so hard, because balls travel downward, and we can use a ball to, say, change the h-bit of 3-cell n to 0 while then changing the h-bit of 3-cell n+1 to 1, thus asserting that the scanning head has moved from tape cell n to tape cell n+1. But how to pass that information upward? That is what the third bit p is for. The bit p is linked via a gear chain back up to h of the previous 3-cell. This allows us to “move” the head back up one cell. Here, as well as everywhere, a challenge lurks in avoiding intersecting gear chains.
(In an earlier construction, each 3-cell also contained an “n” bit, which kept track of whether or not the tape head was in the next cell.)
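
To fix ideas, here is a hedged Python sketch of the 3-cell as a plain data model; the field names S, h, p follow the description above, but the physical gear chains themselves are not modeled, and the helper name is mine.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ThreeCell:
    S: int = 0  # symbol currently written in this tape cell (0 or 1)
    h: int = 0  # 1 iff the tape head is scanning this cell
    p: int = 0  # 1 iff the tape head is scanning the PREVIOUS cell

def scanned_index(tape: List[ThreeCell]) -> int:
    """Index of the unique cell with h = 1 (the cell a falling ball 'finds')."""
    return next(i for i, cell in enumerate(tape) if cell.h == 1)
```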

The key question though that stumped me for a long time is how does the finite control get information to and from the tape? In particular, passing information from the tape UP to the finite control seems near impossible without infinite gear chains. Then I had the insight that now seems like “well, duh”: A triggered ball could pass information. Rather, WHICH ball that is triggered is information itself. So, a 3-cell can “tell” the finite control that the symbol being scanned at that cell is a 0 or 1, by triggering the 0-ball or the 1-ball. (There are three “colors” of balls: 0, 1, and Q (for “query”)).

Thus, which hopper a ball comes from tells the finite control what symbol is being scanned, allowing it to correctly change its state.

How to get that to happen was solved by having 2 balls fall per step of the TM. The first is a Q-ball (query), which goes down the tape looking for an h=1 bit in a 3-cell. When it is encountered, that tells that 3-cell to trigger a 0-ball or a 1-ball depending on what the current symbol at that 3-cell is.
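
In the toy ThreeCell model sketched above, the net effect of the query step is just this (the color of the triggered ball stands in for the return value):

```python
def query(tape: List[ThreeCell]) -> int:
    """The Q-ball finds the scanned 3-cell; that cell triggers a 0-ball or a
    1-ball according to its symbol. Here the triggered color is simply returned."""
    return tape[scanned_index(tape)].S
```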

Then the finite control can take over, and change the state, but now it also needs to tell that cell below what symbol to write, and whether to move the tape head left (up) or right (down).

The second key insight is that there are really only four possible messages to be passed down to the cell:

L0: Write 0, move head to the left
L1: Write 1, move head to the left
R0: Write 0, move head to the right
R1: Write 1, move head to the right.

So, the finite control sends the ball down one of four “lines” in the “bus”, corresponding to these four messages.

These lines pass down alongside the 3-cell tape, “looking” at the h-bits. When h=1 is encountered, the ball diverts “into” the 3-cell, where the S bit is written as 0 or 1, and where either the p bit is changed (setting the previous cell’s h-bit to 1, indicating that the head moves up), or the ball is sent down a ways to change the h bit of the next 3-cell.
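
Continuing the same toy model, the net effect of one bus message (L0, L1, R0, R1) on the tape might be sketched as follows; the actual board does this with ramps and gear bits, and only the resulting state change is modeled here.

```python
# Continues the ThreeCell / scanned_index sketch above.
def deliver(tape: List[ThreeCell], symbol: int, move: str) -> None:
    """Apply one of the four messages: write `symbol`, then move the head."""
    i = scanned_index(tape)   # the message ball diverts into the h = 1 cell
    tape[i].S = symbol        # write 0 or 1
    tape[i].h = 0
    if move == "L":
        tape[i].p = 1         # p is geared to the previous cell's h bit,
        tape[i - 1].h = 1     # so flipping it moves the head up one cell
    else:                     # move == "R": continue down and set the next h
        tape[i + 1].h = 1

# e.g. the message R1 corresponds to deliver(tape, 1, "R")
```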

There are a bunch of other details, but these are the main ideas.

I hope this made some kind of sense. I’ve been thinking about this a lot, immersed in the writing and proof, and don’t trust what sounds coherent or incoherent at this point.

I look forward to hearing your comments.

Edit - one day later - I’ve added a new section to the paper that gives the high level ideas on how instead of using an infinite number of triggers, a variation of the construction can use just a single one, not far from the top hopper, but it uses two infinite gear chains running down the board to allow the tape cells to communicate with the finite control. Not sure which is preferable. (Yet another variation in an older version uses finitely many triggers, and no infinite gear chains, but seems like cheating in that each cell encodes the finite control of the TM.)

Edit (sixteen days later) - one can eliminate the infinite gear chains by having information move upward via interlocking “c” shapes, one step at a time for each ball that falls. This way, a computation can send a signal upward that eventually arrives arbitrarily far above.

The paper, now at tinyurl.com/ttistcomplete, is mostly done.

Finally - is this thread dead? I was expecting some discussion arising from my post, but perhaps people have moved on after three years :slight_smile:

I went through a slightly older version of the paper that was uploaded.
In the paper, from Chapter 4 on, a pattern is proposed that constitutes a Turing machine.

The TM needs a head and a tape to store information, plus a decoding table.

Those are realized by the STM (State Transition Module) part and the repeating tape part.
The STM part consists of N gear bit chains and can hold up to N states,
because exactly one gear bit chain, the one corresponding to the current state, is set active.
Each state has a circuit that decodes the ball according to a predetermined pattern each time a ball passes through it.
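
A hedged sketch of that one-hot state encoding (my own abstraction of the description above, not code from the paper):

```python
def current_state(chains):
    """Exactly one of the N state chains is set; its index names the state."""
    assert sum(chains) == 1, "exactly one state chain should be active"
    return chains.index(1)

def set_state(chains, next_state):
    """Deactivate the old chain and activate the chain for next_state."""
    return [1 if i == next_state else 0 for i in range(len(chains))]
```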

Each tape unit consists of two independent gear bit chains: a symbol S representing the written information, and H (~P) representing the head position.
Among the tape units, exactly one has H=1, which marks the position of the head.

The operation sequence is a set of two balls, one odd-numbered and one even-numbered.
For the odd-numbered balls (1, 3, 5, …), the STM decodes the ball according to its color (the information read from the tape), sets the next state Q, and then branches according to the information to be written on the tape {0, 1} and the direction of travel {L, R}.
In addition, the ball clears H and sets the H (~P) corresponding to the next step; in other words, the head moves.
Then it writes the new information S on the tape.
The even-numbered balls (2, 4, …) read the tape at the specified address and reflect the result in the color of the next ball.
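
In terms of the toy query/deliver sketches earlier in the thread, one odd/even cycle might look like this; the transition table `delta` and the simple integer state are illustrative assumptions, not the paper’s encoding.

```python
# Continues the ThreeCell / query / deliver sketches above.
def one_cycle(tape, state, color, delta):
    """One set of two balls. `color` is the symbol the previous even ball read;
    `delta` maps (state, symbol) -> (next_state, write_symbol, move)."""
    state, write, move = delta[(state, color)]  # odd ball: STM decodes and branches
    deliver(tape, write, move)                  # ...then writes S and moves the head
    color = query(tape)                         # even ball: read the new head cell
    return state, color
```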

Since the balls come in three colors, the current board would need to be extended. However, this is not an essential problem, since it is easy to change the routing for odd and even balls by adding one gear bit chain.
In any case, this behavior replicates a TM, and I think it works as it should.

Yep - I think you got it.
The newer version of the paper, in Section 6, now has a sketch of an improvement which uses only one ball color, and only one trigger, and no infinite chains.
I think the technique is of some interest - instead of a gear chain sending information up to the top, or communicating via ball colors released, the information is moved up one cell at a time by a ball that passes periodically, and when the information finally reaches the top, a new computing step is executed.
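
As a purely abstract picture of that upward movement (the interlocking “c” shapes themselves are not modeled, and the column representation is an assumption), one could imagine a column in which each passing ball shifts any pending signal one cell toward the top:

```python
def tick(column):
    """One passing ball: every pending signal moves up one position.
    Index 0 is the top of the column; a 0 enters at the bottom."""
    return column[1:] + [0]

# A signal planted four cells down reaches the top after four balls.
col = [0, 0, 0, 0, 1]
for _ in range(4):
    col = tick(col)
assert col[0] == 1
```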

Thanks for the feedback!


Hi @lennypitt - the thread isn’t dead, but I for one would need to carve out a big chunk of time to have any chance at all of making a meaningful evaluation or contribution to your work. Perhaps one day soon that might happen! And thank you for your contributions :slight_smile:


I am pleased to announce that the paper “Turing Tumble is Turing-Complete” has (finally) been accepted for publication in Theoretical Computer Science. Here is the arXiv almost-final version.


Wow!

Lenny Pitt, “Turing Tumble is Turing-complete”, Theoretical Computer Science, Volume 948, Article 113734.

Here is the full article:
https://drive.google.com/file/d/1k0Pu9M89EvT3uk7KTYxnwaZbA212o55k/view?usp=sharing
