Intuitively, it makes sense that gear bits (and gears) are necessary for Turing Tumble to be Turing Complete. I’m not quite seeing if and why they are sufficient. (Granted, I have yet to solve all of the puzzles, so there are certainly more-complex logic constructs I have not seen yet.)

Simulating a Turing Machine seems infeasible given that the tape head moves in that model whereas the onramps in Turing Tumble are fixed (even if they can be arbitrarily positioned). A Counter Machine seems like a plausible mechanism to simulate to prove it’s TC.

Is there already a proof that TT is TC? (Fingers crossed it’s not at the end of the book!)

Good question! No, I don’t have a formal proof. I’d love to see one, though. I think it would be very interesting - especially if it’s found to not be strictly Turing complete. Then we’d know what would be necessary to get it there. A new part? A redesigned board?

That said, I’m pretty confident that it is Turing-complete. You can build NAND gates and connect them together however you like, and although I haven’t done it, I’m sure one could build a Fibonacci sequence generator if the board were a little larger.

I should add this as a challenge. I’d do it, but I have zero experience writing formal proofs and I really doubt my proof would hold water.

I think I’m gonna have to work through the rest of the puzzles before I can have any confidence in developing a proof that TT is TC. I think collections of NAND gates actually fall under combinational logic and generating Fibonacci numbers can be done with finite state machines, both of which are less powerful than Turing Machines or their equivalents. I think the gear bits do make TT more powerful than finite state machines.

Edit: looks like there’s some potentially useful information in this question on the Computer Science Stack Exchange about the connection between NAND gates and Turing Completeness. In particular, this claim in the second answer caught my eye:

Flip-flops make the circuit Turing-complete.

Now that’s something that sounds like an existing challenge that I just haven’t seen yet.

Edit 2: Actually, the Fibonacci sequence might require a pushdown automaton, which is one step below a Turing machine.

If we want to think about simulating a TM using a Turing Tumble (TT), I think we need to first formalize what a TT model of computation is. The finite control is clearly the pieces on the game board. But what and where is the input? And how is output indicated? To simplify things, suppose that we are going to compute only functions from N to N, expressed in unary; let the input be the number of blue marbles, and let the output be the number of red marbles eventually released before a marble is intercepted (assume an unbounded supply of red marbles). Note, however, that if there are k bits (regular or gear) on the board and a marble isn’t intercepted after 2^k marbles are released, then the machine is in an infinite loop. So, no TT as formulated can compute a function that grows faster than 2^n. This is not surprising, because what we have is in essence a finite state machine.
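The 2^k bound is just the pigeonhole principle over board configurations. Here is a minimal sketch (the step/intercepted functions are stand-ins for a real board, not actual TT mechanics):

```python
# Sketch of the 2^k bound: a board with k two-state bits has at most
# 2**k configurations, so if no marble is intercepted within 2**k
# releases, some configuration must repeat and the run loops forever.
# The "board" here is an abstract toy, not real TT physics.

def runs_forever(step, k, start, intercepted):
    """step: configuration -> configuration (one marble release);
    intercepted: configuration -> bool. Returns True iff the run loops."""
    config = start
    for _ in range(2 ** k):
        if intercepted(config):
            return False          # a marble was intercepted: the run halts
        config = step(config)
    return True                   # pigeonhole: a configuration repeated

# Example: a 3-bit ripple counter that never intercepts -> infinite loop.
step = lambda c: (c + 1) % 8
print(runs_forever(step, 3, 0, lambda c: False))   # True
print(runs_forever(step, 3, 0, lambda c: c == 5))  # False (halts at 5)
```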

If we’re looking for a proof of universality, perhaps the question we really want to ask instead is whether for each pair of numbers n and m, and for each function f from {0,1}^n to {0,1}^m, there is a layout that when the top row of n “input” bits is set to indicate an input x, then the bottom row of m “output” bits is left correctly indicating the output f(x). This in essence would show that the TT model is as powerful as the family of circuits, which seems quite likely. So perhaps we should start there.

I think it would actually be better to focus on specific bits, geared or otherwise. The balls falling through are the driving force and while information can be encoded in their output, I think that working only with input and output in terms of the amount of colored balls is unnecessarily restrictive.

Yes, I agree, provided that we want to prove that any function computed by a circuit can be computed using TT. But the question of whether TT is Turing-complete can’t make any sense unless we allow the device somehow to take an input of arbitrary size, which was the reason I proposed letting the number of balls indicate the input.

Ah, I was actually thinking of making the board infinite. Infinite extent for the board, input arbitrarily located, and infinite pieces. The color of the next ball to be released would be determined by whether the current ball passed below all pieces on the left or right side of the input. (And there would be an infinite number of balls of either color, of course.)

Actually, technically, if we’re only using the balls as a driving force, we only need balls of one color.

Yes, infinite board is fine, and infinite supply of balls lined up as needed, but the number of pieces on the board must be finite (although any number is available), otherwise it is not clear what a computation is, or what a program is. What I was trying to point out was that under these assumptions, when you have on the board only a finite number of pieces, the size of the output computed is bounded by 2^n (n = number of pieces), so there are clearly TM-computable functions that can’t be computed. I can’t at the moment think of a natural way around this. So, I think it is enough just to try to prove that any circuit can be simulated.

Good point, lennypitt. I think I understand what you’re saying - that in the strictest sense, a Turing machine has an infinitely long tape. Therefore it has infinite memory: it can have infinitely long input and infinitely long output. Turing Tumble isn’t Turing-complete because it doesn’t have infinite memory.

I guess I’m interested in a more practical definition of Turing-completeness (I think the term is, “linear bounded automaton complete”). After all, even a desktop computer doesn’t strictly qualify as Turing-complete because it has finite memory. So to be more precise, I’m curious if Turing Tumble could be proven to be linear bounded automaton complete. In other words, can we prove that with a big enough board, it could do anything a regular computer can do? Could we build a processor with it?

I think we have confusion because we haven’t agreed on what we mean by input and output… i.e., we haven’t clearly defined the model of computation.
If the input and output are the settings of bits on the board, then we get only finite functions, so we don’t have a general computation, but more something like a circuit.
Again, I think this is an interesting question, and probably the one you want to know… can we build an arbitrary circuit? I don’t think this is so obvious because of the way the computation is done. But, we don’t have a notion of completeness (but have one of universality in being able to compute any Boolean function).

If we really want to prove (any meaningful type of) completeness, we need to think about arbitrary input sizes… and I don’t see how to do that other than letting the input be the number of balls (perhaps the pair (num blue, num red), although that would complicate things). And, we seem limited to this unary input because there are no sensors for blue vs red balls in an input stream.

Let’s move away from computing functions, and look at accepting languages, which is what we’ll want to do to prove completeness anyway. Here is a possible definition.
Say the TT accepts an input number of blue balls if after running it puts a red ball in one interceptor, and rejects if it puts a red ball in a different interceptor. Then we can ask the question whether any unary language accepted by an LBA is accepted by some TT configuration.

Here is one which is not: Numbers of the form 2^n, written in unary. An LBA can repeatedly halve its input in place by marking off symbols, and accept iff it ends up at 1. But a TT program with B bits on the board cannot accept an input exceeding 2^B, because it will be in an infinite loop if it hasn’t intercepted a ball by then.

Perhaps the model of acceptance is wrong. Instead of an interceptor, let’s say it accepts if it ever releases a red ball. But then, by arguments similar to the pumping lemma for regular languages, either it doesn’t accept any number greater than 2^B, or, if it does, it will accept many more that form an arithmetic progression. In that case, it can’t accept exactly the inputs of the form 2^n either.

I suspect rather strongly that the class of languages accepted in this model (where acceptance is by releasing, not intercepting, a red ball) by TTs is exactly the semi-linear sets, which are sets of numbers made up from the union of a finite number of arithmetic sequences. (Example: accept iff the input n is of the form 3i+7 or of the form 6i+11 or of the form 5i+2. ) This is the class of unary languages accepted by finite state machines.
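To make the semilinear claim concrete, here is a toy sketch of my own (with a made-up 5-state machine): a unary language accepted by a finite state machine is eventually periodic, because the state sequence on inputs of length 0, 1, 2, … must enter a cycle.

```python
# A unary FSM's accepted lengths form a semilinear set: the states
# visited on inputs of length 0, 1, 2, ... must eventually cycle.
# Hypothetical example: a 5-state machine accepting lengths = 2 mod 5.

def accepted_lengths(delta, q0, accepting, limit):
    """Lengths n < limit for which the machine accepts the unary input."""
    out, q = [], q0
    for n in range(limit):
        if q in accepting:
            out.append(n)
        q = delta(q)              # consume one more input symbol
    return out

lengths = accepted_lengths(lambda q: (q + 1) % 5, 0, {2}, 30)
print(lengths)  # [2, 7, 12, 17, 22, 27]: an arithmetic progression
```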

Sorry for the long post/reply. I think I got carried away.

I’d definitely say the input and output of our system would have to be the settings of the bits on the board, just like in a computer it would be the settings of the flip-flops in memory. Maybe we should start there, because I don’t understand why that limits our ability to compute. It seems to me that using the number of balls as input and output would be massively limiting - it would allow only one input value and only one output value. I guess I’m not interested in accepting infinite inputs, just like a regular computer can’t accept infinite input. I’m just interested in whether Turing Tumble can do what a regular computer can do.

I suppose one way to prove that TT is Turing-complete (or at least linear bounded automaton complete) is to actually build a working processor. (Ha ha - I almost typed microprocessor.) Would that do the trick?

You have a good point: if we’re treating the infinite TT as a physical object where falling balls have to pass through the levers, then yes, numbers cannot be infinite in extent, because otherwise there would be an infinite expanse between two points in space, which is not possible here. However, I think there is a useful shortcut: if there are no pieces below a falling ball, it instantly reaches the end.

Now we can actually treat numbers as infinite and we can offset registers diagonally to one side so it’s possible to access an infinite number of registers. I think in this way, a counter machine can be constructed.
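For reference, the counter-machine target is easy to pin down in a few lines. This is a minimal Minsky-machine interpreter of my own (the instruction names are made up), just to fix the model being proposed:

```python
# Minimal counter-machine (Minsky machine) interpreter. A program is a
# list of instructions: ("inc", r, next_pc) increments counter r, and
# ("decjz", r, next_pc, pc_if_zero) decrements r or jumps if r is zero.
# Two counters plus this instruction set already suffice for
# Turing-completeness, which is why it is a plausible simulation target.

def run_counter_machine(program, counters, pc=0):
    while pc < len(program):
        instr = program[pc]
        if instr[0] == "inc":
            _, r, pc_next = instr
            counters[r] += 1
            pc = pc_next
        else:  # "decjz": decrement, or jump if the counter is zero
            _, r, pc_next, pc_zero = instr
            if counters[r] == 0:
                pc = pc_zero
            else:
                counters[r] -= 1
                pc = pc_next
    return counters

# Example: add counter 0 into counter 1, emptying counter 0.
prog = [("decjz", 0, 1, 2),   # 0: if c0 == 0 jump past the end (halt)
        ("inc", 1, 0)]        # 1: c1 += 1, loop back to 0
print(run_counter_machine(prog, {0: 3, 1: 4}))  # {0: 0, 1: 7}
```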

Edit: @paul, attempting to build a working processor would be very painful and frustrating if TT is not TC. When I and the others did Tetris in GoL, we already knew that GoL was TC. It’s better to prove TC first and then build a processor.

I’m so excited about this discussion! I wrote my thesis about proving the completeness of Turing Tumble last year. The summary of my thesis is available on this page (please click the second “here” on the page; my paper starts at page proc-25). I’m not sure whether this counts as a formal proof, but I think it is one way to prove the completeness of Turing Tumble.

In my proof, I considered Reversible Logic Elements (RLEs) instead of logic gates (e.g. NAND), and I think RLEs are a good fit for Turing Tumble. An RLE is an element with input lines and output lines, and it communicates with other elements by exchanging tokens on those lines. So Turing Tumble is similar to RLE-based circuits, and it has been proven that RLEs can be used to construct a Turing machine (ref.). I think the most difficult part is the direction of ball movement: because balls can move only downwards, it is difficult to send information to elements above. I believe this is why gear bits are needed - they can be used for that purpose.

This blog post is also interesting. It is about the completeness of the Digi-Comp II, a toy computer similar to Turing Tumble. The post concludes that the Digi-Comp II is not universal. That may be helpful for us.

The focus on infinite I/O is not merely a distraction I think.

On a PC we implement a Turing Machine application. Then if we use hard drives as the “tape” in our TM app we can swap drives in/out as needed to provide more tape. The limit is that there aren’t enough atoms in the universe to provide an infinite number of hard drives. If we assume an infinite supply of hard drives is available, a regular computer is very much Turing Complete. The TM/PC has finite memory because the universe is finite, not because of an inherent property of the machine.

I’m not so sure about TT, though. As there is no external tape to read/write, the TT is bounded by its initial configuration. Setting aside problems of having a TT board of infinite size, you’d never finish putting the pieces on the board.

But seriously, if you create an area of storage bits on the TT board it must grow horizontally or vertically… As the storage area grows horizontally the ball drop moves up to allow for the balls to go left and right far enough to reach the left-most and right-most bits. This means the ball drop is infinitely far away from the entire storage bits area… game over. If the storage area grows vertically, you might be able to get around the physical constraint of the ball never reaching the bottom by making the assumption that any ball that goes into free space immediately reaches the bottom. But to me that feels like a cheat.

What I’d like to see for Turing Tumble is an expansion pack where we can actually use a gear to move a portion of the board itself up or down (or left/right). Then you can define your TM in TT around a fixed head location and move sections of the board containing only bits (aka the “tape”) up and down. Having only a portion of the board that needs to be infinite gets around the problem of the balls needing to fall all the way to the bottom of an infinite space.

The other way to get around the infinity issue with TT is to allow for a human operator to be involved. One can certainly build a finite TT board that has a set of I/O registers and interrupts that would allow the operator to consult an infinite tape, set the bits appropriately, and restart the machine. On the next interrupt, the operator would write the values of the registers to the tape, move the tape left or right accordingly, and restart the machine. Ad infinitum. Of course, the operator may grow old and die before the machine stops… but I don’t think this should be considered a limitation.

[disclaimer: I’m not a computer scientist by any stretch of the imagination, I’m just having a fantastic time thinking about this stuff from a layperson’s perspective]

There are two limitations to showing any type of “completeness”…

1. Turing completeness, and any notion of “linear bounded automaton completeness”, will only be possible for models of computation which can recognize infinite languages (not infinite-length strings, but strings of any arbitrary finite length). (By “language”, for the non-CS people, I mean a subset of bitstrings out of all possible bitstrings. For example, consider the set of bitstrings that are the binary representations of a prime number. This is an infinite language. Or bitstrings with an odd number of 1s - also an infinite language, though each string we consider is finite. The latter can of course be recognized by a finite state machine with two states.)

2. Even ignoring point 1, a model of computation that is in essence a finite device with a finite number of parts, each with a finite number of states, like Turing Tumble (TT), and without any external extensible memory, cannot do anything more than what a finite state machine can do. As Michael points out, computers as we have them today are in essence finite state machines, unless we incorporate the possibility that they have additional memory, as much as needed, to read from and write to.
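As a concrete version of the “odd number of 1s” example above, the whole recognizer needs only two states - here is a minimal sketch:

```python
# Two-state finite automaton for "bitstrings with an odd number of 1s":
# the entire machine state is one bit of parity.

def odd_ones(bits):
    state = 0                      # 0 = even parity seen, 1 = odd parity
    for b in bits:
        if b == "1":
            state ^= 1             # each 1 flips the parity state
    return state == 1              # accept iff we end in the odd state

print(odd_ones("1101"))  # True  (three 1s)
print(odd_ones("1001"))  # False (two 1s)
```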

It doesn’t make sense to ask the question “Is TT Turing-complete?” until we define formally what a TT computation is. And any way I see of formulating what a TT computation is seems to fail at least on point 2, and perhaps on point 1. (Even if we allow an infinite board, and use of the board as memory - it is not at all clear how the machine can decide to add more bits to its memory. Who puts these pieces on the board? Or does it start off with an infinite array of bits? There are other issues of reachability of gates that Michael alluded to. I can think of some ways to extend TT to possibly accommodate a “tape”, but then it is not TT anymore.)

In light of the above, I think the right thing to be asking instead of about completeness is the question of universality - whether any arbitrary Boolean function of n inputs can be computed with TT. It is easier to think about this than about simulating a microprocessor. And if the answer is “yes”, then TT can compute anything a computer can (if we take the theoretically justified but practically flawed view that a computer can only compute boolean functions). I’d suggest we start by trying to show that we can evaluate a function whose disjunctive normal form has, say, two terms. For example, if the inputs are x1, x2, x3, x4, x5, then output “1” (intercept a blue ball, or, set an output bit to the right) if the bits are flipped to either satisfy (x1 and x2 set to 1 and x4 set to 0), OR (x1 and x3 set to 0, and x5 set to 1). Otherwise, intercept a red ball, (or set an output bit to the left). Note, just because we can build AND, OR, and NOT gates, doesn’t make this trivial; it is not clear to me at least how you compose these on the TT board, given that a computation happens with one “electron” moving through the “circuit” at a time.
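For concreteness, the two-term DNF described above behaves like this (a plain sketch of the target Boolean function, not of a TT layout):

```python
# The two-term DNF from the post: output 1 iff
# (x1 = 1 and x2 = 1 and x4 = 0) OR (x1 = 0 and x3 = 0 and x5 = 1).

def dnf(x1, x2, x3, x4, x5):
    term_a = x1 == 1 and x2 == 1 and x4 == 0
    term_b = x1 == 0 and x3 == 0 and x5 == 1
    return 1 if (term_a or term_b) else 0

print(dnf(1, 1, 0, 0, 0))  # 1 (first term satisfied)
print(dnf(0, 1, 0, 0, 1))  # 1 (second term satisfied)
print(dnf(1, 0, 1, 1, 0))  # 0 (neither term satisfied)
```

The challenge, of course, is realizing this table with one ball at a time on the board, not writing it down.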

I consider Turing completeness to be a property of the theoretical model under consideration, unhindered by practical constraints. You can simplify Turing Tumble to a model where, aside from the ramps, crossovers, bits, gears, and gear bits, there is one ball drop and either balls in free space drop immediately to the “bottom” or one may place an infinite number of flippers. The important thing in that last bit is that the next ball is released when the previous one is “done”.

As long as it doesn’t take an infinite amount of time or space to do any finite operation or computation, then that’s fine, even if the constructs involved are infinite in extent. As an example, a register of infinitely many bits can still be used to add 1 indefinitely - all balls going through will exit on the right side eventually, which can then trigger the next ball.
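The “add 1 indefinitely” behavior is just binary increment with a ripple carry. A sketch under the stated assumptions (a ball flips each bit it meets; a bit that was 1 becomes 0 and passes the ball on, a bit that was 0 becomes 1 and ejects the ball to the right):

```python
# Sketch of incrementing an unbounded register of TT-style bits.
# Each ball ripples through the 1s (flipping them to 0, i.e. carrying)
# until it hits a 0, flips it to 1, and exits - always in finite time,
# even though the register is conceptually infinite.

def increment(register):
    """register: list of bits, lowest-order first; grows as needed."""
    i = 0
    while i < len(register) and register[i] == 1:
        register[i] = 0           # carry: flip 1 -> 0, ball falls onward
        i += 1
    if i == len(register):
        register.append(0)        # stand-in for the infinite tail of 0s
    register[i] = 1               # flip 0 -> 1, ball exits to the right
    return register

reg = [0]
for _ in range(6):
    increment(reg)
print(reg)  # [0, 1, 1] = binary 6, low-order bit first
```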

@OUDON: I’ve read a little bit of your paper so far, and I’m having difficulty understanding it due to my lack of knowledge and experience in that particular area. Do you think you would be able to write up an explanation suitable for a layman?

That paper was indeed fairly interesting. It’s very much worth noting, however, that the author explicitly excluded switches, which are equivalent to TT’s bits. Also worth noting that Digi-Comp II did not have gear bits, so far as I could tell, so that’s another way that TT has more functionality.

Or it can grow diagonally. I do not see any reason why one cannot have an infinite progression of columns to one side and use an insane amount of ramps and crossovers to direct the output of bits to the appropriate columns. I don’t yet have a solution though, methinks I’ll post it as a theoretical challenge.

I was considering using bits as the switching mechanism to direct the ball to a particular column, but the problem is that either 1) some columns will be hit more often than others, which may be okay depending on the design of the rest, or 2) balls have to pass through an infinite number of bits, which is forbidden. Gear bits may provide a solution, however, given the ability to “reset” specific gear bits.

As long as any given column can be reached in finite time, there is no problem to having an infinite number of columns.

CSS + HTML is Turing complete…if you consider a human operator clicking particular boxes as part of the system. Also, the way you’ve described it, the operator is doing the vast majority of the work in simulating a Turing machine, so I don’t think that really counts either.

I’m not a computer scientist either, just someone that knows a little about this stuff.

Unfortunately, I never got the hang of working with language-based models of computation. It’s harder for me to think about that than to explore mechanical constructs needed for specific operations. Not to say that you shouldn’t; just saying that I’ll have a hard time following without more explanation and elaboration.

Hence, we shouldn’t think about physical devices. The real Turing Tumble must be the inspiration for a theoretical model.

An infinite board must have an infinite array of bits placed by some rule or algorithm.

For me personally, this seems much easier and less interesting. You have a point in that it’s non-trivial given that we only have one ball going through the system at a time, but I have a gut feeling that this is not a huge obstacle.

As an aside, bits must be used for input and/or output in the general case of any computation. Consider the problem of calculating 3 * 5 using only balls. I’m pretty sure it’s effectively impossible (or at least, very very difficult without extra external information or more colors etc) to distinguish input/output balls from the ones required to drive the machine.

A potential candidate for proving TC of TT is to simulate Rule 110. The main difficulty there is covering all of the columns needed for any given computation without iterating through all of them, unless we handwave it and say that we assume unneeded columns are not computed.
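For anyone unfamiliar with it, one step of Rule 110 looks like this (a toy sketch of my own on a finite window, with the “unneeded columns stay 0” handwave built in at the edges):

```python
# One step of the Rule 110 cellular automaton. The rule is a lookup
# table from each cell's neighborhood (left, center, right) to its
# next value; Rule 110 is known to be Turing-complete.

RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    # Cells outside the finite window are treated as 0 (the handwave).
    padded = [0] + cells + [0]
    return [RULE_110[tuple(padded[i - 1:i + 2])]
            for i in range(1, len(padded) - 1)]

row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    row = step(row)
print(row)  # [1, 1, 0, 1, 0, 0, 0]
```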

That’s cool - but I don’t understand how this can be used to simulate reading/writing from a tape. Even if we assume the layout begins with an infinite diagonal line of these devices, the program itself - the (finite) parts layout that implements the finite control - must be placed somewhere. Once it is placed, how does it have direct read/write/access to the memory elements that are far away? Perhaps I am not seeing something, but I think this problem applies to any layout where we don’t allow relative motion between the finite control and the memory units.

I think the model that has the most promise is the one @Michael proposed: we have part of the board that can move via gears. Let me explore this a bit further - here is a concrete simple model which I believe is in the spirit of TT, and about which we can inquire whether any TM can be simulated:

There is a horizontal strip of bits that extends indefinitely to the right and left. This strip is at the top of the board, just below a ball drop.
The finite initial input to the TM is encoded on the bits, starting with the bit below the blue drop, and extending to the right. Bits past the encoding in each direction are all initially set to 0. [there is a question as to how the machine determines where the encoding ends, but there are well-known coding tricks so that this can be dealt with].

The “program” or finite control is placed somewhere below - a finite number of traditional pieces, with the exception of two new pieces: a gear that when a ball rolls over it, makes the read/write strip move one bit to the left, and one which makes it move one bit to the right. At the bottom, below the finite control pieces, the user may place a lever to trigger more balls. (There is no need to have both red and blue balls - since the balls are being used only to drive the computation.)

There are a number of possible generalizations that we might allow… for example, the horizontal sliding bit memory strip might be placed below some of the program pieces instead of at the top - which would allow the machine to potentially “read” as far to the left and right as the finite number of program pieces extend. This would be comparable to a TM that could read a finite number of adjacent cells at once. This doesn’t extend the power of a TM, but might make it easier to program the TT to simulate one.

Given this model, perhaps a next question is whether somebody can think of a computation that this device can perform that makes use of the memory and could not have been performed with the TT as it comes in the box.

Here is one of the simplest things I can think of: unary addition. The input has m consecutive 1s, followed by a 0, followed by n consecutive 1s. When the computation is done, the tape must show m+n consecutive 1s. Example input: 11110111. Output: 1111111. A trivial (two-state) TM solution is to replace the first 1 with a 0, then scan right and replace the first 0 with a 1. Challenge: is there a TT-with-sliding-memory-strip layout that can do this computation?
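For what it’s worth, the two-state TM sketched above can be traced in a few lines (my own sketch; note the resulting tape reads a 0 followed by the m+n consecutive 1s):

```python
# The two-state TM for unary addition: state A erases the first 1,
# state B scans right and overwrites the separator 0 with a 1.

def unary_add(tape):
    tape = list(tape)
    pos = 0
    # State A: find the first 1 and erase it.
    while tape[pos] != "1":
        pos += 1
    tape[pos] = "0"
    # State B: scan right; overwrite the first 0 (the separator) with 1.
    pos += 1
    while tape[pos] != "0":
        pos += 1
    tape[pos] = "1"
    return "".join(tape)

print(unary_add("11110111"))  # "01111111": seven consecutive 1s = 4 + 3
```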