How to simulate any computer with finitely many pieces


#1

Note - this is not about Turing-completeness per se, but about how to simulate a (finite) computer. This construction is actually buried in pieces in an earlier post on how to compute an arbitrary Boolean function. I’m extracting it into this new post, explaining it better, and incorporating some pictures. Apologies for the repetition - but I’d appreciate it if others could verify that this works, or point out any problems with the construction.

This construction borrows ideas from @jcross’s ingenious cellular automaton simulation, and will hopefully help people understand and appreciate that more complex construction. Assuming it works (I think it does), it shows that TT is Turing-complete (allowing unbounded computation), provided a ball can drop through infinitely many gates to trigger another ball.

So, how do we simulate any computation on a finite computer? Any computer can be represented via a finite number of bits, giving the contents of the memory, the registers, the instruction counter, etc. Suppose there are N such bits total, and that they are stacked into a single register on the TT board. (So, for example, if the computer had eight 16-bit registers, we’d have a stack of 8 × 16 = 128 bits at the top representing their values. Below that might be the finite memory contents (the program to be executed), the program location counter, etc. All one tall stack of bits on the board.)

Then after execution of one “step” of a program, or one cycle of the machine (depending on the level of resolution at which we view the algorithmic process), the bits arrive at a different state. Hence, a single step of computation on such a computer can be viewed as a function f from N bits to N bits, and an entire computation is the result of applying that function repeatedly - i.e., composing the function with itself until a fixed point is reached (the program has halted and no more changes are occurring).
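To make the “iterate f to a fixed point” idea concrete, here is a minimal Python sketch. The step function below is a toy stand-in of my own (a saturating 3-bit counter), not anything specified on the TT board:

```python
# Iterating a step function f on an N-bit state until nothing changes.
# The machine here is a toy 3-bit saturating counter, purely illustrative.

def run(f, state):
    """Apply f repeatedly until the state stops changing (program halted)."""
    while True:
        nxt = f(state)
        if nxt == state:
            return state
        state = nxt

def f(bits):
    # One "step": increment the 3-bit value, saturating at 7.
    n = int(bits, 2)
    return format(min(n + 1, 7), "03b")

print(run(f, "000"))  # -> "111", the fixed point where the program halts
```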

We can show that for any number of bits N, and any computation function f from N bits to N bits, a finite number of TT pieces can be arranged so that a single ball drop will cause a stack of N gear bits to change state so as to reflect the application of f. By triggering more balls, the device can apply f repeatedly, walking through the state changes that describe the trajectory of the computation.

Instead of representing the computer with a single stack of N bits, though, we’ll allow lots of duplication, and represent it in a binary tree structure with N levels made up of gear bits. The k-th level will have 2^k gear bits, all geared together so that they always show the same value. Hence, there are effectively only N different functional gear bits, and exactly 2^N configurations of these gear bits, although the total number of gear bits is much larger (2^N - 1). If the gear bits are packed too closely, there may be as few as N+1 exit points from the tree, but if they are sufficiently far apart, we can ensure that there are exactly 2^N distinct exit points. We can also extend the 2^N paths from the bottommost gear bits so that the ball arrives at one of 2^N distinct locations, each separated by at least, say, 10 spots, which allows subsequent gates to be placed below each path without interfering with gates placed below other paths.

Below is an example of a tree of height N=3, with 2^3 = 8 paths extending from the bottom of the tree. Due to space constraints, I’ve shown it with only a little separation between the 8 paths at the bottom, and this wouldn’t work with the rest of the construction. But by making the tree larger (with the same number of levels, N = 3), we could have any separation we’d like.
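As a sanity check on the counts above, here is a small Python sketch (my own notation, not part of the construction) of an N-level tree: level k holds 2^k gear bits that all share one value, so the ball’s exit path simply spells out the current state:

```python
# Level k of the tree holds 2^k gear bits, all geared to show one value,
# so only N values are functional even though 2^N - 1 pieces are used.

def tree_stats(N):
    total_bits = sum(2**k for k in range(N))  # 1 + 2 + ... + 2^(N-1)
    return total_bits, 2**N                   # gear bits, distinct exits

def route(level_values):
    # Whichever subtree the ball is in, it reads level k's shared value,
    # so the exit path is exactly the current N-bit state.
    return "".join(level_values)

print(tree_stats(3))  # -> (7, 8): seven gear bits but eight exit paths
print(route("101"))   # -> "101": the ball exits along path "101"
```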

Assume a tree with N levels, and some particular setting of those bits corresponding to a snapshot of a computation. Then a ball routed to the top of the tree will be directed along one of the 2^N distinct paths. From there each path continues straight down, and a ball so routed passes through N latches (puzzle 28), each of which effects a possible change in the gear bits of the binary tree as follows: by choosing to place either a “write-0” or a “write-1” latch in the path, we can have the ball set the bits across a given level of the tree to 0 or 1, respectively, via a connected chain of gears. Here is an example of a ball about to set the bottom layer of the tree to the value 1, in a tree with N=2.

(The astute observer will note that the location of the blue ball is inconsistent with the gear bit settings above it, but this is for illustration purposes only.)

In order to avoid having gear chains cross paths, it is necessary to set the bits in the binary tree in reverse order: after the ball exits the tree at the bottom (the Nth level), the first write it encounters will affect the Nth level of the tree, the next will affect the (N-1)st level, and so on, so that the last write encountered affects the very top gear bit in the tree. This forms a collection of nested “C” shapes, as in @jcross’s cellular automaton construction. Refer to the picture above.

In the picture, f(00) = 01, since if the tree bits are both set to 0, the rightmost path is chosen, and then the ball passes through a write-1 to set the last bit to 1, and then through a write-0, to set the first bit to 0.

Because any sequence of write-0 and write-1 units we’d like may be placed along each of the 2^N paths, any computation function f can be specified. After one pass of a ball through a tree with bit pattern x, the ball exits along the path corresponding to x; with the write units on that path programmed according to the desired value f(x), by the time the ball hits the lever to trigger another ball, the tree pattern will show f(x) - i.e., one step of computation will have been executed. To stop the computation, particular output paths can terminate with an interceptor instead of continuing on to trigger another step of computation.
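Putting the read and write phases together, one ball pass amounts to a table lookup. The `program` wiring below is hypothetical - the post only specifies f(00) = 01 for the pictured N=2 tree; I’ve filled in the remaining entries as a 2-bit counter for illustration:

```python
# One full ball pass: the tree routes the ball along path x (the read),
# and the write latches under that exit set the levels to f(x) (the write).
# `program` maps each exit path to the pattern its latches write.

def step(program, state):
    path = state            # read phase: exit path equals the state x
    return program[path]    # write phase: latches leave the tree at f(x)

# Hypothetical wiring extending the post's f(00) = 01 to a 2-bit counter.
program = {"00": "01", "01": "10", "10": "11", "11": "00"}

s = "00"
for _ in range(4):
    s = step(program, s)    # each triggered ball performs one step of f
print(s)  # -> "00": four steps of the 2-bit counter return to the start
```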


#2

Oh I have so much to learn. Thank you for posting.


#3

This is a great concept! Much simpler to understand than the CA emulator. When I read

assuming a ball can drop through infinitely many gates to trigger another ball

I realized that with the CA system, depending on how you want to interpret the TT architecture, you actually don’t need a ball to run through infinitely many gates before triggering the next one. Since all causes/effects of the ball are local, extending just a few cells in either direction, there’s nothing to stop you from sending multiple balls at a fixed interval. For instance, if you mounted one or more levers N cells down from the top, and allowed the balls that trigger them to continue, you could in theory have balls running in parallel N cells apart without interference, and they can continue into the bottomless pit forever.

I don’t think this can be done with your finite state machine as described, because the ball must pass through the writer of the final bit before you can send another without interference. On the other hand, it’s meant to be finite so it’s not really an issue. Maybe there are also ways to parallelize it to some extent, perhaps with interlocking C shapes coming from the other side. I guess I just really like the idea of many balls falling at the same time, like the thunderous sound of Magnus running :).


#4

I agree - putting just a single lever sufficiently far down to trigger another ball would ensure that any write update to a cell would have been completed before the next generation’s read would occur. So, you’re right - no infinite drop needed, no sliding infinite tape (a different model that was suggested). Having an infinite number of parts is of course necessary, but completely “fair” in that the parts are placed only to allow for the computation to expand as needed. At this point, it smells like a proof that TT is Turing-complete! It is probably worth looking at the proof that rule 110 is Turing-complete to make sure that it doesn’t somehow rely on a two-way infinite array of cells.
(But, even if it did, I think there would probably be a way around it, by just shifting the pattern down one space as needed to make more room at the beginning of the pattern.)


#5

@lennypitt I set up a finite state machine with your design here: https://bit.ly/2OAIdx6 which is configured as just a simple 3-bit counter. It seems to work quite well.
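If I’m reading the layout right, the counter corresponds to the step function f(x) = x + 1 mod 8 on the three gear-bit levels. A quick sketch of the state sequence (my interpretation, not verified against the linked layout):

```python
# Presumed step function of the linked 3-bit counter: increment mod 8.

def counter_step(bits):
    return format((int(bits, 2) + 1) % 8, "03b")

s = "000"
seen = [s]
for _ in range(8):
    s = counter_step(s)   # one ball pass = one increment
    seen.append(s)

print(seen[:4])  # -> ['000', '001', '010', '011']
print(seen[8])   # -> '000', wrapping around after eight steps
```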

Note the three gear bits in a horizontal line on the middle left, which act as a more friendly readout.


#6

I LOVE the simulator - this is beautifully done!
The schematic notation is much easier to understand (especially gear bits).
The ability to size it, the icons, even the rotating gear while the page is loading - so much thought had to have been put into this. Thanks for creating it!!


#7

Thanks! Yeah I got a little obsessed, and it helped distract me from a bad cold. It also has a “realistic” mode, but imo schematic mode is the only way for larger layouts. I hope it allows us to test and share more ambitious creations.


#8

I was playing with the realistic mode - that is even more impressive! I can’t begin to imagine the work involved. But yes, the schematic is easier to work with. If you haven’t yet, you should make a separate post to the bboard announcing its availability. I’m not sure how many people are paying attention to this thread.