A Physically Universal Cellular Automaton

It’s been understood for decades that, if you take a simple discrete rule—say, a cellular automaton like Conway’s Game of Life—and iterate it over and over, you can very easily get the capacity for universal computation.  In other words, your cellular automaton becomes able to implement any desired sequence of AND, OR, and NOT gates, store and retrieve bits in a memory, and even (in principle) run Windows or Linux, albeit probably veerrryyy sloowwllyyy, using a complicated contraption of thousands or millions of cells to represent each bit of the desired computation.  If I’m not mistaken, a guy named Wolfram even wrote an entire 1200-page-long book about this phenomenon (see here for my 2002 review).

But suppose we want more than mere computational universality.  Suppose we want “physical” universality: that is, the ability to implement any transformation whatsoever on any finite region of the cellular automaton’s state, by suitably initializing the complement of that region.  So for example, suppose that, given some 1000×1000 square of cells, we’d like to replace every “0” cell within that square by a “1” cell, and vice versa.  Then physical universality would mean that we could do that, eventually, by some “machine” we could build outside the 1000×1000 square of interest.

You might wonder: are we really asking for more here than just ordinary computational universality?  Indeed we are.  To see this, consider Conway’s famous Game of Life.  Even though Life has been proved to be computationally universal, it’s not physically universal in the above sense.  The reason is simply that Life’s evolution rule is not time-reversible.  So if, for example, there were a lone “1” cell deep inside the 1000×1000 square, surrounded by a sea of “0” cells, then that “1” cell would immediately disappear without a trace, and no amount of machinery outside the square could possibly detect that it was ever there.

Furthermore, even cellular automata that are both time-reversible and computationally universal could fail to be physically universal.  Suppose, for example, that our CA allowed for the construction of “impenetrable walls,” through which no signal could pass.  And suppose that our 1000×1000 region contained a hollow box built out of these impenetrable walls.  Then, by definition, no amount of machinery that we built outside the region could ever detect whether there was a particle bouncing around inside the box.

So, in summary, we now face a genuinely new question:

Does there exist a physically universal cellular automaton, or not?

This question had sort of vaguely bounced around in my head (and probably other people’s) for years.  But as far as I know, it was first asked, clearly and explicitly, in a lovely 2010 preprint by Dominik Janzing.

Today, I’m proud to report that Luke Schaeffer, a first-year PhD student in my group, has answered Janzing’s question in the affirmative, by constructing the first cellular automaton (again, to the best of our knowledge) that’s been proved to be physically universal.  Click here for Luke’s beautifully-written preprint about his construction, and click here for a webpage that he’s prepared, explaining the details of the construction using color figures and videos.  Even if you don’t have time to get into the nitty-gritty, the videos on the webpage should give you a sense for the intricacy of what he accomplished.

Very briefly, Luke first defines a reversible, two-dimensional CA involving particles that move diagonally across a square lattice, in one of four possible directions (northeast, northwest, southeast, or southwest).  The number of particles is always conserved.  The only interesting behavior occurs when three of the particles “collide” in a single 2×2 square, and Luke gives rules (symmetric under rotations and reflections) that specify what happens then.
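To make the flavor of such rules concrete, here is a minimal toy sketch in Python. To be clear, these are not Luke's actual rules (his interesting collisions involve exactly three particles in a 2×2 square, with specific rotation- and reflection-symmetric outcomes); this toy version just rotates the directions of any two or more particles sharing a 2×2 block, which is enough to make the dynamics exactly reversible and particle-conserving:

```python
# Toy reversible particle CA (illustrative only -- NOT Luke's actual rules).
# Particles move one cell diagonally per step; whenever a 2x2 block of the
# (Margolus-style, parity-shifted) partition contains two or more particles,
# every direction in that block is rotated a quarter turn.  Rotation is a
# bijection on the four directions, so each step has an exact inverse.

DIRS = [(1, 1), (1, -1), (-1, -1), (-1, 1)]  # NE, SE, SW, NW

def rotate(d, k):
    """Rotate a diagonal direction by k quarter-turns."""
    return DIRS[(DIRS.index(d) + k) % 4]

def blocks(particles, t):
    """Group particle indices by the 2x2 block containing them; the
    partition is offset by one cell on odd time steps."""
    off = t % 2
    groups = {}
    for i, (x, y, d) in enumerate(particles):
        groups.setdefault(((x + off) // 2, (y + off) // 2), []).append(i)
    return groups

def step(particles, t):
    """One forward step: move all particles, then rotate directions inside
    any crowded block.  Particle count is trivially conserved."""
    moved = [(x + d[0], y + d[1], d) for (x, y, d) in particles]
    for idxs in blocks(moved, t).values():
        if len(idxs) >= 2:                      # a "collision"
            for i in idxs:
                x, y, d = moved[i]
                moved[i] = (x, y, rotate(d, 1))
    return moved

def unstep(particles, t):
    """Exact inverse of step(particles, t): un-rotate, then move back."""
    undone = list(particles)
    for idxs in blocks(undone, t).values():
        if len(idxs) >= 2:
            for i in idxs:
                x, y, d = undone[i]
                undone[i] = (x, y, rotate(d, -1))
    return [(x - d[0], y - d[1], d) for (x, y, d) in undone]
```

Running any configuration forward and then applying `unstep` with the same time indices in reverse order recovers the initial state exactly; this reversibility is precisely the property that the Game of Life lacks.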

Given these rules, it’s possible to prove that any configuration whatsoever of finitely many particles will “diffuse,” after not too many time steps, into four unchanging clouds of particles, which thereafter simply move away from each other in the four diagonal directions for all eternity.  This has the interesting consequence that Luke’s CA, when initialized with finitely many particles, cannot be capable of universal computation in Turing’s sense.  In other words, there’s no way, using n initial particles confined to an n×n box, to set up a computation that continues to do something interesting after 2^n or 2^(2^n) time steps, let alone forever. On the other hand, using finitely many particles, one can also prove that the CA can perform universal computation in the Boolean circuit sense.  In other words, we can implement AND, OR, and NOT gates, and by chaining them together, can compute any Boolean function that we like on any fixed number of input bits (with the number of input bits generally much smaller than the number of particles).  And this “circuit universality,” rather than Turing-machine universality, is all that’s implied anyway by physical universality in Janzing’s sense.  (As a side note, the distinction between circuit and Turing-machine universality seems to deserve much more attention than it usually gets.)
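Circuit universality in this sense is a finite, truth-table-level notion. As a minimal illustration (plain Python, nothing specific to the CA), here is the sum-of-products construction showing that AND, OR, and NOT suffice to compute any Boolean function of fixed arity:

```python
# Circuit universality in miniature: any Boolean function on a fixed
# number of bits is computable by a fixed circuit of AND/OR/NOT gates.
# Here we evaluate the sum-of-products (DNF) circuit built straight from
# the function's truth table -- one AND-term per satisfying assignment.
from itertools import product

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def evaluate_dnf(f, bits):
    """Evaluate f on `bits` using only AND, OR, and NOT gates."""
    out = 0
    for assign in product([0, 1], repeat=len(bits)):
        if f(*assign):                      # this row of the truth table is 1
            term = 1
            for b, a in zip(bits, assign):  # match the input against the row
                term = AND(term, b if a else NOT(b))
            out = OR(out, term)
    return out
```

Note this is circuit universality, not Turing universality: once the arity is fixed, the gate count is fixed too (growing like 2^n), and nothing here runs forever.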

Anyway, while the “diffusion into four clouds” aspect of Luke’s CA might seem annoying, it turns out to be extremely useful for proving physical universality.  For it has the consequence that, no matter what the initial state was inside the square we cared about, that state will before too long be encoded into the states of four clouds headed away from the square.  So then, “all” we need to do is engineer some additional clouds of particles, initially outside the square, that

  1. intercept the four escaping clouds,
  2. “decode” the contents of those clouds into a flat sequence of bits,
  3. apply an arbitrary Boolean circuit to that bit sequence, and then
  4. convert the output bits of the Boolean circuit into new clouds of particles converging back onto the square.
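Schematically, the four steps compose as follows. This is a purely illustrative sketch: in the real construction every one of these stages is itself implemented by engineered particle collisions, and the decode/encode maps below are simple stand-ins.

```python
# The four steps above as a schematic pipeline (stand-in functions only;
# the actual construction realizes each stage with particle collisions).

def decode(clouds):
    """Step 2: flatten the four escaping clouds into one bit sequence."""
    return [b for cloud in clouds for b in cloud]

def encode(bits):
    """Step 4: split the circuit's output into four converging clouds."""
    k = (len(bits) + 3) // 4
    return [bits[i:i + k] for i in range(0, len(bits), k)]

def transform_region(clouds, circuit):
    """Steps 1-4: intercept the clouds, decode, apply the circuit, re-encode."""
    return encode(circuit(decode(clouds)))
```

For instance, the 0↔1 inversion example from earlier in the post corresponds to `circuit = lambda bits: [1 - b for b in bits]`.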

So, well … that’s exactly what Luke did.  And just in case there’s any doubt about the correctness of the end result, Luke actually implemented his construction in the cellular-automaton simulator Golly, where you can try it out yourself (he explains how on his webpage).

So far, of course, I’ve skirted past the obvious question of “why.”  Who cares that we now know that there exists a physically-universal CA?  Apart from the sheer intrinsic coolness, a second reason is that I’ve been interested for years in how to make finer (but still computer-sciencey) distinctions, among various “candidate laws of physics,” than simply saying that some laws are computationally universal and others aren’t, or some are easy to simulate on a standard Turing machine and others hard.  For ironically, the very pervasiveness of computational universality (the thing Wolfram goes on and on about) makes it of limited usefulness in distinguishing physical laws: almost any sufficiently-interesting set of laws will turn out to be computationally universal, at least in the circuit sense if not the Turing-machine one!

On the other hand, many of these laws will be computationally universal only because of extremely convoluted constructions, which fall apart if even the tiniest error is introduced.  And in other cases, we’ll be able to build a universal computer, all right, but that computer will be relatively impotent to obtain interesting input about its physical environment, or to make its output affect the gross features of the CA’s physical state.  If you like, we’ll have a recipe for creating a universe full of ivory-tower, eggheaded nerds, who can search for counterexamples to Goldbach’s Conjecture but can’t build a shelter to protect themselves from a hail of “1” bits, or even learn whether such a hail is present or not, or decide which other part of the CA to travel to.

As I see it, Janzing’s notion of physical universality is directly addressing this “egghead” problem, by asking whether we can build not merely a universal computer but a particularly powerful kind of robot: one that can effect a completely arbitrary transformation (given enough time, of course) on any part of its physical environment.  And the answer turns out to be that, at least in a weird CA consisting of clouds of diagonally-moving particles, we can indeed do that.  The question of whether we can also achieve physical universality in more natural CAs remains open (and in his Future Work section, Luke discusses several ways of formalizing what we mean by “more natural”).

As Luke mentions in his introduction, there’s at least a loose connection here to David Deutsch’s recent notion of constructor theory (see also this followup paper by Deutsch and Chiara Marletto).  Basically, Deutsch and Marletto want to reconstruct all of physics taking what can and can’t be constructed (i.e., what kinds of transformations are possible) as the most primitive concept, rather than (as in ordinary physics) what will or won’t happen (i.e., how the universe’s state evolves with time).  The hope is that, once physics has been reconstructed in this way, we could then (for example) state and answer the question of whether or not scalable quantum computers can be built as a principled question of physics, rather than as a “mere” question of engineering.

Now, regardless of what you think about these audacious goals, or about Deutsch and Marletto’s progress (or lack of progress?) so far toward achieving them, it’s certainly a worthwhile project to study what sorts of machines can and can’t be constructed, as a matter of principle, both in the real physical world and in other, hypothetical worlds that capture various aspects of our world.  Indeed, one could say that that’s what many of us in quantum information and theoretical computer science have been trying to do for decades!  However, Janzing’s “physical universality” problem hints at a different way to approach the project: starting with some far-reaching desire (say, to be able to implement any transformation whatsoever on any finite region), can we engineer laws of physics that make that desire possible?  If so, then how close can we make those laws to “our” laws?

Luke has now taken a first stab at answering these questions.  Whether his result ends up merely being a fun, recreational “terminal branch” on the tree of science, or a trunk leading to something more, probably just depends on how interested people get.  I have no doubt that our laws of physics permit the creation of additional papers on this topic, but whether they do or don’t is (as far as I can see) merely a question of contingency and human will, not a constructor-theoretic question.

87 Responses to “A Physically Universal Cellular Automaton”

  1. Patrick Says:

    I’ve always thought of universal computers built in CAs as an abstraction built on top of the medium of the CA. So it seems like asking whether you can achieve a box of zeros in a CA is sort of like asking if a MIPS processor can still do addition with a stuck bit in its instruction register.

    Am I missing something?

  2. Scott Says:

    Patrick #1: If you like, the question we’re asking is whether it’s possible, within a given CA, not only to build up higher-level abstractions capable of universal computing, but also have those higher-level abstractions interface with the low-level rules of the CA in a desired way. So it’s much more general than asking if a chip can still do addition with a stuck bit. A better analogy might be: it’s like asking whether it’s possible to build a machine that takes apart any physical object, then reassembles it exactly how it was, except with any lead in the object replaced by an equal amount of gold. Or, for maybe a more interesting example, whether it’s possible to scan a human brain so accurately that you can then make multiple perfect copies of it. It’s clear that you’ll want a universal computer, at the least, for either of these tasks, but also clear that a universal computer won’t suffice by itself: you also need to be able to manipulate the physical world in the relevant ways.

  3. Patrick Says:

    Scott #2

    Thanks for the response. I think I understand better. But this wouldn’t necessarily be specific to CAs then? Almost no traditional computers would be physically universal, because they have complicated encoding mechanisms and they only work in a very restrictive subset of their full state space.

    And in that context, the rules in your student’s paper look an awful lot like elastic collisions, with the clouds of particles being something like thermal losses or entropy generated by the calculation.

  4. Scott Says:

    Patrick #3: No traditional computers (without exception) are physically universal. A “physically universal device,” in our universe, would be something able to rearrange matter, and even the geometry of spacetime at the Planck scale, in completely arbitrary ways consistent with the laws of physics.

    Yes, Luke’s rules basically are for elastic collisions (except that, if I’m not mistaken, while “energy” is conserved in his collisions, the conservation of “momentum” can be violated). And yes, the clouds of particles are a lot like thermal losses—except that, for Luke’s purposes, he can’t just write them off as losses! Instead, he needs to carefully intercept the clouds and decode every detail of their state, in order to work backwards and determine the initial state of the CA that gave rise to those clouds.

  5. Patrick Says:

    Scott #4

    I’m trying to understand, but I guess I see a little bit of a contradiction between “completely arbitrary” and “consistent with the laws of physics”?

    Like the Pauli exclusion principle saying you can’t have two up spin electrons in the same orbital. Isn’t that sorta like saying the laws of physics can’t create a 2×1 area containing two ones?

  6. Scott Says:

    Patrick: Sorry, I meant “completely arbitrary subject to the constraint that you can’t violate the laws of physics”—for example, by violating the uncertainty principle and perfectly distinguishing non-orthogonal quantum states, or by outputting a state that’s not even in the physical state space, like in your example.

  7. Patrick Says:

    I see what you’re saying. But if the physical laws both define the state space and are analogous to the transformation rules of the automaton, isn’t it a little bit circular?

    If you can say “two same spin electrons in the same orbital aren’t in the state space of the universe, but of course, you can encode 2’b11 in lots of other ways.”

    Can’t I say a hollow block of 1000×1000 isn’t in the state space of a Turing complete Rule 110 automaton, but I can encode a hollow block of 1000 x 1000 in a configuration of spaceships and then do whatever I want with the encoding?

  8. Jay Says:


  9. asdf Says:

    In the part about impenetrable walls it sounded like you were thinking of black holes. Is the idea that the actual physical universe is not physically universal?

  10. rrtucci Says:

    “As Luke mentions in his introduction, there’s at least a loose connection here to David Deutsch’s recent notion of constructor theory”

    Very, very, very loose. This is a classical model and nature is quantum. Deutsch’s constructor theory reminds me of Principia Mathematica, except for physics instead of mathematics and orders of magnitude vaguer, but also doomed to fail.

  11. Liron Says:


    Great idea. Checking CAs for properties like Turing Universality and Physical Universality feels like the beginning of a new field, “Universe Science”, at the intersection of Physics and CS.

    It strikes me that the distinction between Turing Universality and Physical Universality might be a quantitative one. There might be a single notion of “finite universality” for a machine in a CA, defined as “capable of effecting all mappings from a specified finite domain to a specified finite codomain”.

    Obviously the codomain has to be a subset of physical state-space, but it doesn’t have to be a natural/dense subset like the complete inside of a rectangle in GoL. It could be a “coarse-grained” subset like the subset of rectangles that can be decomposed into single-color 2×2 blocks, or the subset of rectangles that can be decomposed into a union of black L shapes (the grains). So the interesting notion would be “g-universality”, where g is some measure or description of granularity that the universality is with respect to.

    Physical Universality would be the finest-grained type, “0-universality”, and Turing Universality would be the coarsest-grained type, “G-universality”, where capital-G is a property of the CA universe saying how coarse the grains have to be before you can build a Turing Machine. Cool right?

    Btw in your post, I don’t know what “the complement of that region” is.

  12. Scott Says:

    Patrick #7: No, I don’t think it’s circular. From the mere fact that a CA has a certain set of states, it doesn’t follow that there exist mechanisms to transform the state of a finite region in a completely arbitrary way, the way Luke’s result does. As a loose analogy, Turing’s discovery of universal Turing machines might seem circular at first glance, but it isn’t. From the fact that all these different functions are computable, it doesn’t follow that there exists a single machine that computes all of them, when fed an appropriate code as input.

    (Speaking of which, I believe it ought to be possible to modify Luke’s construction to produce a “universal state-transformer”: that is, a single “machine” that does whatever you want to the finite region of interest when fed an appropriate sequence of bits as input.)

  13. Scott Says:

    Jay #8: Luke’s construction, as it stands, is wildly sensitive to the tiniest amount of noise. Constructing a physically-universal CA that’s also fault-tolerant is left as an exercise for the reader. 🙂

  14. Scott Says:

    asdf #9: The actual physical universe might or might not be physically universal (I don’t know, and the answer will presumably depend on how you choose to formalize that concept in a quantum, general-relativistic world). But if it isn’t, then it’s not because black holes act like impenetrable boxes. Indeed, today there’s almost a consensus that black holes don’t act like impenetrable boxes: not only do they eventually evaporate in Hawking radiation, but the Hawking radiation should contain the infalling information in a highly scrambled form. If this is true, then dropping something into a black hole shouldn’t be conceptually all that different from burning it: yes, you make its original state almost impossible to reconstruct “as a practical matter,” but you don’t do anything to challenge the reversibility of physical law.

  15. Scott Says:

    Liron #11: The “complement” of a region just means all the cells in the CA that aren’t in that region.

  16. Itai Says:

    It seems that most of the famous physicists who take an interest in computing
    come to the point of the “universe as a computer” and CAs
    (Wolfram, Toffoli, Fredkin, Seth Lloyd, David Deutsch, etc.),
    and have the idea that the laws of physics underlie some sort of computation (but not in the theoretical CS way of looking at this, with notions like universal, algorithm, input and output).

    Toffoli thought about the “principle of least action” as the law that underlies the universe’s CA rules,
    and considered action as a measure of the universe’s amount of computation.

    Relating computation to action seems logical to me, even from a CS student’s perspective,
    because the “computational step” in theoretical CS actually involves both time and energy, and we should want to minimize both for physically more effective computation.

    About the pervasiveness of computational universality:
    while it is important when designing personal computers,
    for more exotic computations I think in some sense it is true.
    I don’t know why the brain/mind is considered as some sort of computer, or compared to one in AI, when it is most obviously not a universal one (do you see yourself doing most of the computation your computer does?).

  17. Pablo Says:

    Dear Scott,
    Unless I misunderstood your notion of physical universality, it has been considered in the literature under the name of “intrinsic universality” of Cellular Automata. Intrinsically universal CA have been constructed in many flavours (non-exhaustive list; non-reversible: Theyssier; reversible: Durand-Lose, Kari, Ollinger; quantum: Arrighi, Fargetton, Wang, Grattage). In the quantum case, its significance in terms of the search for a minimal, universal quantum physical interaction law is discussed in http://arxiv.org/abs/0907.3827. A 2D construction is given. An even simpler 3D construction is: http://arxiv.org/abs/1010.3120. I am also convinced that the topic is of central importance and quite happy that more researchers look at it!

  18. Scott Says:

    Pablo #17: Thanks for the links, which we didn’t know about! However, the notion of “intrinsic universality” discussed in the abstract of arXiv:0907.3827 is not at all the same thing as physical universality. Intrinsic universality seems to mean: having a single 2D CA that can simulate any other 2D CA, possibly slowly, and using a large square for each square of the original CA, but in a way that respects the original CA’s geometry. By contrast, physical universality makes no reference to other CA’s; it means being able to implement any transformation on a finite region of a given CA, by appropriately initializing the complement of that region. One way to see the difference is that, while (if I understand the definition correctly) it’s perfectly possible for an irreversible CA to be intrinsically universal, it’s not possible for an irreversible CA to be physically universal.

  19. fred Says:

    That’s very exciting!
    After Scott’s intro about “impenetrable walls” (btw, do we know if building such walls is feasible with any TR/CU CA?) I was dreading that the answer would be “no!” or some sort of confusing conclusion (requiring 3 PhDs in mathematics).
    It’s great to see a “yes!” once in a while 🙂
    Reminds me when I read about the Von Neumann self replicating CA for the first time.

  20. fred Says:

    Scott #15
    “The “complement” of a region, just means all the cells in the CA that aren’t in that region.”

    but I’m guessing that, practically, the region (in the complement) that acts on the inner grid has to be finite as well?
    So that itself could be controlled by another outer region?

  21. Scott Says:

    fred #19:

      It’s great to see a “yes!” once in a while

    LOL! To paraphrase Michael Sipser, “yes” answers are extremely important, if for no other reason than that they’re sometimes significant barriers in the quest to prove “no” answers. 😉

  22. domenico Says:

    It is interesting; but if this cellular automaton worked with a spin in each cell, instead of a bit, then it would have the power of a quantum computer, with only local interactions (only interactions with the neighbors).
    If there exists a bijective Boolean function between little blocks of cells, then each dynamic is reversible, and the dynamics can be produced with digital components in each block; the number of dynamics is equal to the number of possible bijective Boolean functions on the block.

  23. Scott Says:

    fred #20: Yes, the “control machinery” in Luke’s construction itself only involves a finite number of cells. (And this can be seen to be without loss of generality, as follows: if the transformation of the region is supposed to happen after T time steps, then by locality, only cells at distance at most T from the region can have affected it by the time the transformation is over.)

    And yes, you raise an extremely amusing point that hadn’t occurred to me. Namely, after a finite number of steps, the control machinery itself will diffuse away into four gas clouds (by Luke’s results). But, by again appealing to Luke’s results, we could “resurrect” the control machinery—or make it into new machinery that does the exact opposite of what it did before, or whatever—using a second layer of control machinery, which intercepts the cloud residue of the first layer of machinery after the first layer falls apart! And then after the second layer fell apart into gas clouds, we could resurrect it using a third layer of machinery, etc. etc.
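The locality argument above (only cells within distance T of the region can affect it within T steps) is easy to make concrete. A minimal sketch, under the assumption that particles move one cell diagonally per step, so the relevant metric is Chebyshev distance:

```python
# Light-cone locality in a CA whose particles move one cell diagonally per
# step: a cell can influence an axis-aligned region within T steps only if
# its Chebyshev distance to the region is at most T.

def can_influence(cell, region, T):
    """region = (x0, y0, x1, y1), inclusive bounds."""
    x, y = cell
    x0, y0, x1, y1 = region
    dx = max(x0 - x, 0, x - x1)   # horizontal distance to the region
    dy = max(y0 - y, 0, y - y1)   # vertical distance to the region
    return max(dx, dy) <= T
```

So the control machinery can, without loss of generality, be taken to lie within a finite band of width T around the region of interest.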

  24. Lambert Says:

    Itai 16 :

    ‘most obviously not a universal one’
    The brain can simulate a Turing machine (pen and paper will help, but it would be possible to memorise the state of the tape); the failure of the brain at many tasks at which computers excel is due to the brain being slower at those tasks.
    possible != feasible

  25. Scott Says:

    domenico #22: I didn’t understand all of your comment, but I should point out that Luke’s construction is only of a classical physically-universal CA. Constructing a quantum physically-universal CA is given explicitly as one of his open problems. (My guess is that it would be very complicated, even more so than the classical case, but doable.)

  26. Scott Says:

    fred #19, one other thing:

      btw, do we know if building such walls is feasible with any TR/CU CA?

    Yes. Luke mentions in the introduction to his paper that Margolus’s CA is both time-reversible and circuit-universal, but allows for the construction of impenetrable walls. That’s why he needed to modify Margolus’s CA in order to get his physically-universal CA.

  27. fred Says:

    Scott #23
    Is there anything interesting added if the CA world grid is wrapping on itself? (i.e. the grid is forming a doughnut/toroid) So that signals that drift away eventually converges back. Maybe one could setup a sort of ping-pong configuration between two sub-grids… that’s probably a known thing in CAs in general (such as Game of Life).

  28. Vadim Says:

    Would a 3D CA, where the third dimension is used to keep a full history of the CA’s evolution, be physically universal? The transformation rules, of course, would be able to make use of this entire history.

  29. Luke Says:

    Jay #8 and Scott #13

    The early examples of Turing complete cellular automata were also hopelessly sensitive to noise, but there are now fault-tolerant CAs (see http://arxiv.org/abs/math/0003117), so it’s possible we’ll see some kind of fault-tolerant physically universal CAs, given enough time.

    On the other hand, there could be perturbations in the input cells before we can even “look” at them. It would be impossible to detect these errors (let alone correct them) because we can always run the CA backwards (exploiting reversibility) to find an input which would evolve (without perturbations) to the perturbed state.

  30. Itai Says:

    Lambert 24
    Well, if the mind needs pen and paper to do that sort of stuff, then alone it is not universal.
    Also, I doubt anyone could do the many simple computations a computer does, on paper, with no mistakes.
    What I’m trying to say is that it is not designed to do some of that stuff, although it is designed to do stuff that computers are not designed for.

  31. Scott Says:

    Vadim #28: That’s maybe the germ of an idea, but you’d have to explain in much more detail how it worked. In particular, one immediate problem I can see is that, for a 3D cellular automaton, the right definition of physical universality shouldn’t only involve being able to implement any desired transformation on a 2D “slice.” Rather, we should be able to implement whatever transformation we want on a full 3D subcube! And in order to store the full history of a 3D subcube in the way you suggest, you’d presumably need a fourth dimension, and so on ad infinitum.

    But even if it weren’t for that problem, you’d still have a lot more work to do in explaining how the CA’s evolution rule both causes the history to be stored, and allows you to exploit the history to get physical universality. And in any case, we know from (e.g.) Luke’s result that explicitly storing an entire history isn’t necessary for physical universality: all you need is reversibility plus complete access to the current state; then you can work backwards to determine the earlier states.

  32. Scott Says:

    fred #27:

      Is there anything interesting added if the CA world grid is wrapping on itself? (i.e. the grid is forming a doughnut/toroid)

    That’s a very interesting question! At the moment, I can only make a couple trivial observations:

    1. If the torus wasn’t big enough compared to the region of interest, then it’s possible you wouldn’t have enough room to fit a device that implements your desired transformation on the region.

    2. On the other hand, it’s also conceivable that we could exploit the wrapping-around to our advantage. For it would no longer be the case that all particles “escape to infinity” unless we intercept them with other particles (which can at most delay their escape to infinity). Instead, we could simply wait for particles carrying important bits to loop back around and catch them then. But it would be nice to be able to pinpoint some concrete problem with Luke’s construction, that can be fixed using this idea.

  33. Luke Says:

    Liron #11 and Pablo #17

    It sounds like you’re talking about something like this (https://www.youtube.com/watch?v=xP5-iIeKXE8, maybe turn off your sound before you watch), where a grid of blocks simulates a cellular automaton. This is not the same as physical universality, but we are interested in finding a physically universal CA with this property for efficiency reasons.

    Suppose you have a CA which can transform any finite region into a grid of blocks (efficiently), simulate any CA on those blocks (with a constant factor slowdown), and then transform it back (again, efficiently). Then your CA is physically universal if any of the CAs it can simulate are physically universal. Moreover, it is at most a constant factor slower than any CA it can simulate, so it is “optimal”.

  34. Aaron Sheldon Says:

    This kind of begs the question: given a sufficiently coarse grain, can a computationally universal CA behave as a physically universal CA on that coarse grain? That is, would the patterns that cannot be reached by a computationally universal CA be of finer detail than the scale of the physically universal CA being simulated on it?

  35. Aaron Sheldon Says:

    The other thing that is bothering me is more practical.

    Given an initial state, how do you know when it is done computing? First past the post? Can you create a countdown clock outside the region?

  36. Aaron Sheldon Says:

    So to answer the previous practical question you need to be able to answer this one:

    Can any transformation be coded in such a way that the transformation is complete exactly when there are no automata in the region between the boundary of the I/O region and the initial boundary of the computational region?

  37. Scott Says:

    Aaron #35: That’s a good question.  Luke dealt with this simply by declaring that, given an n×n grid and a transformation with circuit complexity s, the machine is finished after exactly f(n,s) time steps, for some function f that’s specified in advance.  So then, you (“looking in from the outside”) can just count down until the pre-specified deadline is reached, at which time you’ll see that the transformation has occurred.

    Furthermore, it wouldn’t be hard to extend Luke’s construction so that some special “flag,” within the automaton itself, went off only at the exact moment the transformation was finished. E.g., we could arrange for some particular cell to be occupied with a particle after exactly f(n,s) time steps, but not occupied before or after.

    Having said that, I agree it is weird that, after the f(n,s) time steps have elapsed and the goal is achieved, the particles just continue right on diffusing, as if nothing special just happened! It would somehow feel more satisfying if everything stopped the moment the goal was achieved. But then, thinking about it further, if the CA’s evolution rule caused all the particles to stop once the goal was achieved, then how would that same rule, applied earlier in time, have caused the particles to start moving? And if the particles never moved, then how would we ever have gotten physical universality?

    The same incessant motion that starts destroying our beautiful end state the moment after it’s reached, was needed to reach the end state in the first place. Without destruction, there is no possibility of creation. (Insert gong sound)

  38. Aaron Sheldon Says:

    Yup, reversibility requires that the automaton keep on moving after the computation is finished. What I find interesting is that, as a corollary to the diffusion theorem, for an n×n I/O region surrounded by an m×m computation region with m >> n, the last possible nontrivial computation occurs at time step (m-n)/2. After that the whole system undergoes trivial diffusion.

  39. Itai Bar-Natan Says:

    (Not to be confused with the other Itai.)

    Two thoughts:

    1. I notice Schaeffer’s cellular automaton has a scaling symmetry, in that for any configuration it is possible to spread out the particles more thinly by some integer constant, and they will collide and interact the same way, only more slowly. I think this could be used to make a more efficient decoding circuit for the diffused particles, as follows: rescale the escaping gas by redirecting its particles and shoot it back into a proportionally larger box. The particles will reverse the original diffusion process until they’re in a rescaled model of their original configuration. Right then you redirect them again before they have a chance to continue interacting. This is possible since after the rescaling there is enough space to put additional particles to interact with the input particles.

    2. It seems like it should be possible to make a physically universal cellular automaton that’s not endlessly diffusive the way this one is. Here’s one general schema I imagine by which this could be done: The cellular automaton has stable stationary objects, but they can always be destroyed. Moreover, for any configuration it is possible to apply “heat” to it, so that with sufficient heat and sufficient time any configuration will diffuse and turn into a spray of particles. This spray is collected and recorded like in Schaeffer’s model, and after a computation everything is reversed like in this model.

    If it turns out that many pre-existing simple reversible cellular automata (not that there really are that many) are physically universal, it is likely that this would be the mechanism by which they are universal. This mechanism might also be applicable in our own laws of physics, though that’s probably not a cellular automaton.

  40. fred Says:

    Aaron #38
    One nice thing I like about CAs is that the dynamics of computation are explicit: you need room to expand (memory), and the computation speed (one cell per tick) is built in as well.

  41. Liron Says:

    I’ll take another shot at communicating my “granularity” intuition from #11, since that comment doesn’t make much sense in retrospect.

    Physical Universality pits the workings of machines in the CA against the lowest-level laws of the CA. But between the operation of sophisticated machines and the lowest-level CA laws, there are bound to be many layers of abstraction that account for the machine’s behavior.

    My intuition says that pitting the highest abstraction layer against the lowest abstraction layer is the crudest of many possible “abstraction layer mashup” analyses.

    Has anyone thought about how to get “shades of gray” with Physical Universality? E.g. characterizing the fraction of n*n-cell configurations that are arbitrarily-transformable? (It would be 100% for Luke’s universe, and less than 100% for GoL but I wouldn’t know how to characterize that set.)

    Or for that matter, are there even interesting shades of gray with Turing Completeness in CAs? (I was going to say “for Turing Completeness we have the Chomsky hierarchy”, but I don’t know if there are any interesting Turing-incomplete CAs that can nonetheless contain Finite Automata.)

  42. Itai Says:

    Scott #37
    “Without destruction, there is no possibility of creation.”
    If you were one of those “universe as CA” supporters,
    it means you also support the cyclic universe model.

  43. Scott Says:

    Itai #42: Nope! I’m only “one of those universe as CA supporters” if you stretch the definition of CA to encompass quantum mechanics, GR, and the rest of modern physics. But regardless of that, while you arguably can’t have creation without destruction, you certainly can have destruction without creation, and more concretely, there exist CAs (for example, irreversible ones) whose evolution with various initial conditions doesn’t display cyclic behavior at all.

  44. gentzen Says:

    While browsing through your review of Wolfram’s “New Kind of Science” again, I wondered whether improving the efficiency of universal computations by cellular automata would count as progress. The results surveyed by Turlough Neary and Damien Woods, which reduce an exponential slowdown to a merely polynomial slowdown, might be relevant here. So since 2006, CAs no longer compute veerrrryyyyyyyy slloooowwwwwwwwllllllllllllllllyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy, but only veerrryyy sloowwllyyy.

    But how significant or important is such progress really? Is it on par with Luke’s results on physical universality?

  45. Itai Says:

    Scott #43
    “can’t have creation without destruction”
    Well, this probably contradicts the stable universe model, which people once thought had always existed (no big bang at all).
    But I think it is now considered physically impossible as well.

    So, if we compare a CA to the universe,
    both should start with initial conditions (destruction),
    and should expand forever or display cyclic behavior.
    Both are legal models of the universe according to GR.

    It doesn’t say the universe is a CA, but it is a suspicious similarity.

  46. fred Says:

    Well if the CA grid wraps on itself, some configurations may diffuse from an initial cluster, then condense back eventually at the other side of the grid, into something similar to a big bang/big crunch cycle?

  47. Ben Standeven Says:

    @Liron, #41:
    Sure, there’s different grades of “Turing completeness” for CA. The highest level usually considered is the existence of a “universal constructor” which can produce any computational device desired, given a description of the device in terms of gadgets that can be built in the CA (for the game of Life, this would mean glider streams, eaters, guns, and so on).
    Then there’s “strong universality,” which means that the CA can simulate some self-expanding model of computation, building new gadgets as needed. So in practice, this is usually proved as a corollary of the universal constructor. But sometimes, there will be a way to “pull” and “push” an object, but only along one direction. This might be usable to make a register machine, but not a universal constructor.
    The “next” level down is “intrinsic universality;” as was pointed out above, this means that the CA can simulate any other CA of appropriate dimension (or any other computational model of the appropriate power) with only constant slowdown. But the gadgets to do this may need to be laid out in advance. Technically this doesn’t follow from strong universality, but I don’t know of any case where we have the latter without the former. Note that context-sensitive languages and 1d CAs can simulate each other easily, so this corresponds to a level of the Chomsky hierarchy. Both of the upper levels would correspond to the unrestricted languages.
    Then there’s “weak universality;” the CA can simulate any computational model, but may produce a polynomial blowup in time and space consumption. If the machine is simulating a nonterminating computation, we insist that a finite area with periodic boundary conditions be enough to supply whatever the computation needs. These boundary conditions may include an infinite array of gadgets that allow the computer to work, or infinitely many streams of particles that trigger transformations of the data.
    I don’t know the name, if any, but a still weaker property has been considered (it was used in Alex Smith’s universality proof: http://en.wikipedia.org/wiki/Wolfram%27s_2-state_3-symbol_Turing_machine); we could allow any computable mapping of nonterminating computations to infinite patterns (conceivably, even of terminating computations to infinite patterns, although Smith’s proof doesn’t do this).

  48. Bram Cohen Says:

    Time-reversibility is a very well-defined and interesting concept. I’ve long suspected that many of Wolfram’s automata are time-reversible or ‘nearly’ time-reversible, where there are multiple ancestors of some states but all but one of those ancestors are dead ends.

  49. domenico Says:

    Scott #25
    I don’t know if all of this is right, but each cellular automaton could be a Boolean function of the neighboring cells, so that the n-th power of the function covers ever larger blocks; it is then complex to obtain an invertible function, because it is necessary to obtain the n-th root of the function.
    I had thought of a different idea (to obtain invertible dynamics, like physical time inversion): write the Boolean function for a block of cells instead of the function of the central cell, using a general Boolean function that connects all the cells in the block (a different invertible cellular automaton). The only possibility to obtain a dynamic is to use an input block that is different from the output block, with the same dimension and some overlap, and the Boolean function could be a simple lookup table that is a permutation of the inputs, so that invertibility is assured.

  50. D. Eppstein Says:

    Bram #48: The closest property I know to being “nearly reversible but not reversible” for cellular automata is the property of being locally injective (that is, there are no two finite patterns that can be substituted freely for each other with no future consequences for the substitution). For instance, rules based on exclusive or, such as the one-dimensional Rule 90, have this property. In Rule 90, every state has exactly four predecessors.

    But this property is equivalent to not having any Gardens of Eden (this is the Garden of Eden theorem) so it’s not the same as your idea that all but one line of predecessors leads to a dead end: the dead ends are Gardens of Eden and they can’t exist in a locally injective CA.

    If we try to model your version of near-reversibility more closely by looking for an automaton with the property that there exists a finite bound k such that every state has exactly one line of predecessors longer than k, then this is equivalent to a fully reversible automaton: one that performs k steps of the near-reversible automaton at a time. So I don’t know what advantage is to be gained by the generalization to near-reversibility.
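
    The four-predecessor property of Rule 90 is easy to sanity-check by brute force on a small periodic ring (my own sketch, using a finite even ring as a stand-in for the infinite line; note that on a finite ring, unlike the infinite line, most states then turn out to be unreachable):

```python
from collections import Counter
from itertools import product

N = 8  # ring size (even)

def rule90(state):
    """One Rule 90 step on a cyclic ring: each cell becomes the
    XOR of its two neighbours."""
    return tuple(state[(i - 1) % N] ^ state[(i + 1) % N] for i in range(N))

# Map every one of the 2^N states forward and count preimages.
preimages = Counter(rule90(s) for s in product((0, 1), repeat=N))

# Every reachable state has exactly four predecessors...
assert set(preimages.values()) == {4}
# ...so 3/4 of the finite ring's states are unreachable
# (finite-size Gardens of Eden; the infinite line has none).
assert len(preimages) == 2 ** N // 4
```

    The four preimages come from the kernel of the linear map: adding any configuration in which all even-indexed cells agree and all odd-indexed cells agree leaves the successor unchanged.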

  51. Alexander Vlasov Says:

    In universal circuits, a “fork gate” A -> A A should be implemented together with the NOR gate. I could not find it; how is that implemented?

  52. Cristóbal Camarero Says:

    Is it possible to implement a Maxwell’s demon in this CA?

  53. Scott Says:

    Cristóbal #52: Good question! What Luke implemented could be seen as a sort of “Maxwell’s demon,” in that it painstakingly reverses whatever scrambling happened to the region of interest, exploiting the reversibility of the CA and its complete access to the region’s state.

    On the other hand, Luke’s construction also generates a huge amount of “entropy”—that is, it produces gas clouds even bigger than the ones it reverse-engineered. So arguably, it doesn’t do anything that wouldn’t be possible in principle in our universe as well: given complete knowledge of the initial state and hyper-precise nanotechnology, our physical laws too allow for the unscrambling of an egg.

    Luke’s construction takes advantage of something that we never have “in practice” in the real world, even though nothing in our physical laws seems to prohibit it: namely, omniscience and omnipotence regarding the initial microstate of some finite region (or a quantum version thereof).

  54. Scott Says:

    Alexander #51: Luke explicitly implements a “copy” gate—see page 11 of his paper, under the heading “Deflection.”

  55. Alexander Vlasov Says:

    Scott #54: OK, indeed. (BTW, I would recommend the MCell program, which I used to check; it works directly with the Margolus neighborhood, though Golly may be more convenient for huge patterns.)
    There are also questions about synchronization that are almost not discussed in the paper. E.g., I know an example of a CA that may fail to be universal just because the exchange of signals needs an odd number of steps, but interaction an even number.

  56. Scott Says:

    Alexander #55: You can email Luke (or wait for him to check this thread again) if you have detailed questions—but the fact that he actually implemented the construction in Golly, and it works, is strong evidence that he sufficiently thought through the synchronization issues. 🙂

  57. Alexander Vlasov Says:

    Scott #56: Sorry, I am asking because the result is important, but there are only a few examples; e.g., I would like to see operations on a small set of small patterns, say 4×4.

  58. Luke Says:

    Itai Bar-Natan #39

    I like these ideas.

    1. I hadn’t considered this approach. You could certainly redirect the particles to space them out, and it is trivial to redirect the particles in a single cell so they won’t interact (put them on slightly different diagonals so they miss, or point all of them northeast). But it may be tricky (though not necessarily impossible) to do both at the same time. That is, I worry that the extra particles in each macrocell will interfere with each other, or with the input.

    2. This is certainly plausible. I can imagine a CA with mirrors which deflect incoming particles from one direction, but collapse if hit from any other direction. So the mirrors can be used to keep particles from escaping, but can always be disassembled if they’re in the input. Or you could have a mirror which absorbs the incoming heat particle and radiates it back after a couple timesteps, but is destroyed if it is hit by another heat particle before it can radiate the first one. The price you pay for walls like this is that extracting the input bits is more difficult.

  59. Luke Says:

    Alexander #55

    In the absence of other particles and signals, you can redirect a signal anywhere you like given enough time. In the Margolus neighbourhood version of this CA, for any position (x,y) there exists a time t0 such that you can redirect the signal to position (x,y) in any time t >= t0. The idea is to use deflections to do the actual work of getting the signal into position, then waste as much time as you like with a pair of reflections. You should try to convince yourself of this fact.

    If there is some kind of parity issue, and a particle arrives at a position on an odd timestep instead of an even timestep, then we can stall (using the argument above) for an extra timestep to fix the problem.

    That said, I did run into a parity issue in the last step of my construction, taking a signal from the column of output bits to its final destination in the output region. I could have added more reflection/deflection operations to fix this, but I decided to manipulate the output bits instead. The output signals are not actually consecutive, but spaced out so that each one has the right parity.

    You can see where I add spaces at the top left of the “computation triangle”. The line of particles there corresponds to places where I took the “NOR” of a single bit, known to be 1, to create a 0 output bit. This is not the most efficient way to space out the output bits, but it was the easiest at the time.

    I used to use MCell, but I had some trouble getting it to run on Windows 8, and it doesn’t seem to be actively maintained (most recent version is Sept 2001?). I also doubt it could match Golly’s speed, which matters for large patterns.

  60. Luke Says:

    Liron #41

    The number of distinguishable configurations (and similarly the number of possible output configurations) of an n*n region is going to be exponentially smaller than 2^(n^2).

    For instance, we cannot distinguish between an empty 3×3 block and a 3×3 block with one live cell in the middle, since both blocks become empty after one timestep. In other words, there are at most 2^9-1=511 distinguishable configurations of a 3×3 block. If we divide the entire region into 3×3 blocks, then there are n^2/9 blocks and hence at most 511^(n^2/9) ≈ 1.9996^(n^2) distinguishable configurations of the entire region.

    On the other hand, there is probably some way to pack Omega(n^2) bits into an n*n square in the Game of Life, such that we can perform an arbitrary transformation on those bits.

    You might be able to define the number of “arbitrarily manipulable bits” you can pack into an n*n region. By the arguments above, this should be less than log_2(1.9996) n^2, but at least Omega(n^2). Presumably the number of bits is asymptotically cn^2 + o(n^2), but the constant c would be hopelessly difficult to estimate, possibly even uncomputable.
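
    The 3×3 collapse, and the resulting base of about 1.9996, can be checked with a few lines of Python (a sketch of my own, not code from Luke's paper):

```python
from collections import Counter

def life_step(cells):
    """One Game of Life step; `cells` is the set of live (x, y) cells."""
    counts = Counter()
    for (x, y) in cells:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    counts[(x + dx, y + dy)] += 1
    # Alive next step: exactly 3 neighbours, or 2 neighbours and alive now.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# The empty 3x3 block and the block with one live centre cell collapse
# to the same successor (a lone cell dies of underpopulation):
assert life_step(set()) == set()
assert life_step({(1, 1)}) == set()

# So at most 2^9 - 1 = 511 distinguishable 3x3 blocks, and
# 511^(n^2/9) = (511^(1/9))^(n^2), with base:
print(round(511 ** (1 / 9), 4))  # 1.9996
```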

  61. Alexander Vlasov Says:

    Luke #59: Thank you for the reply.
    I am mainly troubled at the moment by step 3 in Scott’s classification, i.e., it is not obvious to me that the CA is universal. An example of the problem: if a row of cells has no distance between the cells, then an exchange takes one step, but any delay for synchronization must be even. So it is better to use a row with even distances between the cells. On the other hand, I found it problematic to reorder an n×n square into such a row.
    PS. I also had a problem with MCell on Windows 7, but it was resolved using Windows XP compatibility mode.

  62. Alexander Vlasov Says:

    Luke #59

    If there is some kind of parity issue, and a particle arrives at a position on an odd timestep instead of an even timestep, then we can stall (using the argument above) for an extra timestep to fix the problem.

    I would like to see an example of introducing an extra timestep (or, in general, an odd number of timesteps). I am failing to do that myself with MCell, and the Golly implementation hides too much; e.g., in your example all cells in the 5×5 square have even positions, with spaces between them.

  63. Alexander Vlasov Says:

    PS. I guess we are considering different tasks; I meant the situation where all cells of the region may be used, not only the quarter of them whose coordinates are both even.

  64. J Says:

    Are there algebraic ways to think about or study CAs?

  65. Luke Says:

    Alexander #62

    I think you may be confused about the “sparse” Golly implementation, in which cells are empty unless both coordinates have the same parity as the timestep. I describe it as a performance optimization (compared to the dense representation) on the webpage, but you can also look at it as a simulation of the Margolus neighbourhood CA.

    Take the Margolus neighbourhood and “collapse” each 2×2 block into the top left corner. That cell will have 16 states corresponding to the 16 configurations of the block. Only cells with both coordinates even (top left corners) are nonempty. In the next timestep, the blocks shift, so the top left corner of each block has odd coordinates. Take a look at
    http://web.mit.edu/lrs/www/physCA/sparseexample.png and http://web.mit.edu/lrs/www/physCA/binarycoloured.png to see the relationship between the sparse representation and the Margolus representation.

    I’m not sure where you get the idea that exchanging signals necessarily takes an odd number of timesteps. Here’s an example: http://web.mit.edu/lrs/www/physCA/exchange.mc
    The gray cells should be initially empty, or contain a particle moving northeast (for simplicity, we forbid the other 14 possible states). Then the configuration above will swap the contents of the gray cells in 16 steps.
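
    For anyone trying to reproduce this, the alternating block partition itself is simple to implement. Here is a minimal sketch of Margolus-neighbourhood stepping in Python; the block rule used (a plain clockwise rotation, which is trivially reversible) is an illustrative placeholder, not Schaeffer's actual particle rule:

```python
def margolus_step(grid, t):
    """One step of a toy Margolus-neighbourhood CA on an n-by-n grid
    (n even, wrapping at the edges). The 2x2 block partition sits at
    offset (0,0) on even timesteps and (1,1) on odd ones; each block
    is rotated clockwise (reversible, but NOT Schaeffer's rule)."""
    n = len(grid)
    new = [row[:] for row in grid]
    off = t % 2
    for bi in range(off, n + off, 2):
        for bj in range(off, n + off, 2):
            i, j = bi % n, bj % n
            i2, j2 = (i + 1) % n, (j + 1) % n
            a, b = grid[i][j], grid[i][j2]
            c, d = grid[i2][j], grid[i2][j2]
            # rotate the block [[a, b], [c, d]] clockwise
            new[i][j], new[i][j2] = c, a
            new[i2][j], new[i2][j2] = d, b
    return new

# Particle number is conserved, as in any permutation-based block rule:
g = [[0] * 4 for _ in range(4)]
g[0][0] = 1
for t in range(10):
    g = margolus_step(g, t)
assert sum(map(sum, g)) == 1
```

    The same alternation is what makes the “sparse” and “dense” views above equivalent: collapsing each 2×2 block to one 16-state cell just trades the shifting partition for shifting cell positions.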

  66. Luke Says:

    J #64

    I’m not an expert, but here are some mathematical results I’ve come across.

    1. I would say the Curtis-Hedlund-Lyndon theorem (http://en.wikipedia.org/wiki/Curtis%E2%80%93Hedlund%E2%80%93Lyndon_theorem) is an algebraic statement about cellular automata.
    2. “Linear cellular automata” are cellular automata where the states are vectors in some finite vector space, and the update rule is a linear map. There are a lot of papers on LCAs. For instance, here is a characterization of the “columns” (i.e., the history of a single cell) in a 1 dimensional LCA. http://thales.math.uqam.ca/~rowland/papers/A_characterization_of_p-automatic_sequences_as_columns_of_linear_cellular_automata.pdf
    3. Evaluating a cellular automaton is comonadic. http://blog.sigfpe.com/2006/12/evaluating-cellular-automata-is.html
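
    To illustrate point 2, Rule 90 is a linear cellular automaton over GF(2): its update rule new[i] = old[i-1] XOR old[i+1] commutes with XOR of configurations. A minimal check (my own sketch, on a cyclic ring):

```python
import random

N = 16  # ring size

def step(s):
    """Rule 90 on a cyclic ring: new[i] = old[i-1] XOR old[i+1]."""
    return [s[(i - 1) % N] ^ s[(i + 1) % N] for i in range(N)]

def xor(a, b):
    """Cellwise sum over GF(2)."""
    return [x ^ y for x, y in zip(a, b)]

u = [random.randint(0, 1) for _ in range(N)]
v = [random.randint(0, 1) for _ in range(N)]

# Linearity: evolving a sum of configurations equals the sum
# of the evolved configurations.
assert step(xor(u, v)) == xor(step(u), step(v))
```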

  67. Matthew O Says:

    I’m missing something… can someone please explain what Luke’s construction or physically universal CAs have to do with physics? Is the analogy that the arbitrary region of interest corresponds to the universe, the complement of this region to a view from outside the universe (whatever that means), and Luke’s CA rules to the “laws” of physics? Has Luke created a “toy universe”? Does quantum mechanics have to be able to perform any transformation on a given region of the universe from outside that region? Confused.

  68. Scott Says:

    Matthew #67: CAs are precisely “toy universes”—that is, models that capture some aspects of the laws of physics, though obviously not other aspects. Their simplicity and discreteness make them very convenient if one wants to study things like computational universality, the rise and fall of complex behavior, or other phenomena that aren’t necessarily specific to “our” physical laws.

    Luke’s CA, in particular, captures the following aspects of the real laws of physics:

    – Spatial locality
    – Reversibility
    – A finite “speed of light”
    – Translational symmetry
    – Rotational symmetry (albeit only fourfold)
    – Particles that propagate across space at a constant velocity, unless perhaps they collide with other particles
    – Conservation of “energy”

    Meanwhile, his CA fails to capture the following aspects of real physics, among others:

    – Galilean and Lorentz symmetry
    – Quantum mechanics
    – The existence of gravity (or any other gauge force)
    – General relativity (i.e., spacetime being dynamical)
    – Conservation of “momentum”
    – Meaningful notions of mass or of angular momentum
    – The number of spatial dimensions (it’s 2 rather than 3)

    Now, the purpose of Luke’s construction is to prove the conceptual point that it’s possible to have laws of physics that implement “physical universality,” in the sense of the ability to implement any mapping between physical states whatsoever within any finite region, by appropriately initializing the state outside of that region. It’s possible that “our” laws of physics implement (a quantum analogue of) physical universality, but it’s not obvious; indeed, it’s not even obvious precisely what one should mean by that. But I think it would definitely be interesting to know! And now, at least we know that physical universality holds in some semi-reasonable toy universe, which is more than we knew about the matter before Luke’s work.

  69. Alexander Vlasov Says:

    Luke #65

    I’m not sure where you get the idea that exchanging signals necessarily takes an odd number of timesteps. Here’s an example: http://web.mit.edu/lrs/www/physCA/exchange.mc
    The gray cells should be initially empty, or contain a particle moving northeast (for simplicity, we forbid the other 14 possible states). Then the configuration above will swap the contents of the gray cells in 16 steps.

    Sorry if I was unclear; I mean that an exchange in a row of cells with odd distances between them causes an odd delay relative to evolution without the exchange. I checked your example; the delay here is odd. Just make a copy of the layer, clear all cells around the given pair, and compare the evolution on both layers. I hope I did not miss something. BTW, you should synchronize both cells after the exchange.

  70. Alexander Vlasov Says:

    PS. I mean the delay with respect to a diagonal row. In your example the distances between cells are even, and so the exchange is OK.

  71. Alexander Vlasov Says:

    PPS. But one of my initial questions is still valid: can we rearrange an n×n square into an evenly spaced row?

  72. Alexander Vlasov Says:

    OK, I see: indeed, if we consider the field as a chessboard, then parallel motion is always along cells of the same color, and so distances are even. It is possible to rearrange an n×n square because a deflection introduces an odd delay for cells of the “wrong” chessboard color. So the parity problem is resolved thanks to a property of the CA.
    I think that, as the usual sanity check, it is enough to demonstrate a Fredkin gate to be (almost absolutely) sure about universality.

  73. rrtucci Says:

    “Now, the purpose of Luke’s construction is to prove the conceptual point that it’s possible to have laws of physics that implement “physical universality,” in the sense of the ability to implement any mapping between physical states whatsoever within any finite region, by appropriately initializing the state outside of that region. It’s possible that “our” laws of physics implement (a quantum analogue of) physical universality, but it’s not obvious; indeed, it’s not even obvious precisely what one should mean by that. But I think it would definitely be interesting to know! ”

    Crazy idea (aren’t all my ideas crazy)
    your “physical universality” reminds me of String Theory Holography. Maybe you should call it holographic universality

  74. Alexander Vlasov Says:

    It seems the CA can also make an arbitrary number of copies of a region, with arbitrary functions applied to each one. Yet they would all exist for only one time step; after that, each such copy may again be replicated with modifications using arbitrary functions … again to exist for only one moment of time. Such a sad CA story.

  75. Luke Says:

    Alexander #71

    I rearrange a square into an evenly spaced column in the first eight hundred steps of this Golly file: http://web.mit.edu/lrs/www/physCA/transformation.mc. Remember that the square and the column are as compact as possible (except for the big hole in the middle of the column, but that is not essential).

    Alexander #74

    Yes, the output only exists for a single time step. In any physically universal CA there must be some configuration which is not static (i.e., changes after one timestep) and since we may be required to output such a configuration, the output can only last for one step, in general.

  76. Alexander Vlasov Says:

    Luke #75

    I rearrange a square into an even spaced column in the first eight hundred steps of this Golly file http://web.mit.edu/lrs/www/physCA/transformation.mc.

    Thanks. Sorry, I realized that myself after posting, and wrote in #72 why it is possible in principle: a useful property of this particular CA.
    Still, if you build a Fredkin gate sometime, please let me know. I have the impression that with a Fredkin gate the constructions of further logical operations are much simpler, but the construction of the Fredkin gate itself is an “AI-complete” task.

  77. Alexander Vlasov Says:

    Luke #75

    Yes, the output only exists for a single time step. In any physically universal CA there must be some configuration which is not static (i.e., changes after one timestep) and since we may be required to output such a configuration, the output can only last for one step, in general.

    I see two principal ways of resolving the “one-step problem”:
    (1) The CA may have the ability to conserve a pattern using some output configuration. A weaker version, relevant to the CA under consideration, is finding the smallest T with the property that the pattern is reproduced after that period, due to interaction with the cells outside. If T could be one, the pattern would be stable.
    (2) The state of the CA is represented as a pair (s,m), where the second element is an analogue of momentum, and we ask only about operations on regions with states (s,0).

  78. Henry Minsky Says:

    Have you seen any of Fredkin and Miller’s recent work on reversible 3D cellular automata?


    Circular Motion of Strings in Cellular Automata, and Other Surprises (arxiv)

    Two-state, Reversible, Universal Cellular Automata In Three Dimensions

    The “SALT model” framework was developed by Fredkin to try to design in analogs of some of the basic conservation laws known in physics, and to preserve CPT symmetry (time reversibility, parity, and something like charge although that is a work in progress).

    Miller has a nice implementation of a simple rule developed for the SALT model, which he called “BusyBox” which exhibits some very thought provoking behaviors http://busyboxes.org/?hash=PeciXwDuTA0

    Among the surprises were an asymptotically circular motion path of a particular glider configuration, and some behaviors that have parallels perhaps to nuclear decay.

    They also found interesting wave-like gliders which seem to have harmonics of oscillation.

    I did some experiments I call a “desktop particle collider”, colliding the circular and linear gliders in the BusyBox model. I saw some interesting behaviors that are perhaps precursors to models of momentum transfer and photon-electron interactions (where two gliders collide, form an orbit for a fixed number of cycles, and then decay off again into gliders).

    I wrote some notes on those ‘collider’ experiments at http://www.digitalphilosophy.org/wp-content/uploads/2014/02/GlidersWithCircularandLinearMotion-2.pdf

    I recently did a reimplementation of Miller’s BusyBox system in Java, to see if I could get some speedup, and to make it a little easier to experiment with new rules. (It turns out Java is not much faster than Javascript these days!)


    Just as an experiment, I ran Miller’s BusyBox rule, starting from a mostly symmetric ‘cube’ of cells with a small asymmetry inside, and it produced an interesting ‘diffusion’-like ‘big-bang’ pattern similar to the ones described in Luke’s paper.


  79. Henry Minsky Says:

    In answer to Alexander’s question about ‘single time step’, and momentum; The Fredkin SALT model is a ‘2nd order’ system, in that it has two interdigitated planes, so that you can get wave-like and momentum-like behaviors.

    Time is divided into phases, so that in one phase, the cells in the ‘real’ plane will cause the cells in the ‘imaginary’ plane to change state, and then in the next phase, the imaginary cells cause the real cells to transition. This follows Fredkin’s principle of reversibility, whereby the ‘present’ transforms the ‘past’ into the ‘future’. He has a nice description of building a reversible formulation of the Schroedinger wave equation in this manner.

    In the SALT model, these planes are implemented as the set of ‘even’ cells (those whose coordinates add to an even number) and ‘odd’ cells. In the 3D SALT model, these cells interdigitate like the atoms of a Sodium-Chloride crystal. He then divides time into six phases, and in Miller’s BusyBox rule, each plane (XY, XZ, YZ) is processed with a 2D rule. This provides a three dimensional rule which provides time and parity reversal. (see the reference above).

  80. Alexander Vlasov Says:

    Henry #79, I don’t quite understand — I supposed that SALT uses a Margolus neighborhood. I consider such a scheme an additional disadvantage (not listed in Scott #68) for modeling physics, due to the low-level space/time inhomogeneity (the chessboard partition and the odd/even time steps). On the other hand, second-order CAs lack this problem. For some CAs, such as Q2R, the chessboard scheme may be converted into a 2nd-order one, but I did not know of such a thing for SALT.

  81. Norm Margolus Says:

    I’m a big fan of the idea of studying CA’s as toy universes, and asking questions about what properties of real physics can be captured in these simple models. I would quibble with using the term “physical universality” for the property you’ve defined, but it is a crisp abstraction of a physics-like property of state control, applicable in a system without a clear separation of a macroscopic scale from a microscopic one. So a well-defined problem, and a nice result!

    I agree that computational universality by itself is no guarantee that a CA universe will be interesting except on a tiny subset of initial states: those that simulate more interesting systems! So I think a very good question in general is, What additional properties can we include in simple spatial models, to capture more of the richness of real physical dynamics?

    Of course classical lattice gas models have been used in statistical mechanics since the dawn of QM, and dynamical versions of these are interesting. Some of the earliest physical properties captured in dynamical lattice gases were conservation laws: this led to the field of lattice gas hydrodynamics. With just a few discrete conservations and a few discrete values of microscopic momentum, realistic sound propagation and fluid flow are recovered in the macroscopic limit. Add in a few more properties and we also get thermodynamics and chemistry. With a bit more elaboration, crystallization and elastic materials. Invertible long-range forces are an open problem.

    Since much of the interesting complexity in our world comes from evolution, you might ask specifically what physics-like properties in a CA are essential for a robust Darwinian evolution? One answer to this seems to be: special relativity. You have to be able to set complex large-scale structures in motion, without disrupting their internal chemistry and dynamics. Large-scale dynamics can be made independent of inertial motion by just incorporating appropriate discrete relativistic conservations into the microscopic lattice gas rule.

    Then there are questions about hierarchical structures and separation of macroscopic and microscopic dynamics. To address the original problem we started with, about whether we can model macroscopic control of microscopic dynamics in a CA, we should really use a CA that naturally develops multiple levels of hierarchical structure. I think this is tremendously interesting because much of the apparatus of physics is designed to deal with this situation (Hamiltonians, thermodynamics, etc.). It’s worth pointing out, in this context, that classical lattice gases are special cases of quantum lattice gases, and so can be analyzed using the quantum formalism.

    If you believe that the universe can be viewed as just a big quantum computation, then fundamental physical and computational quantities and concepts are really the same thing. One might learn a lot from studying classical special cases of spatially organized computation.

  82. Scott Says:

    Norm #81: Thanks so much for the fascinating comment! Just as a quick point of information, we didn’t introduce the term “physical universality”; it was coined by Janzing.

  83. Ed Fredkin Says:

    To communicate better with each other (and understand more) about possible Cellular Automaton models of physics, we need to deal with the problem of language. That issue was one of the main subjects of the paper: “Discrete Theoretical Processes”. In order to read the paper, you can go to:

    Essays and Articles
    Discrete Theoretical Processes

  84. Alexander Vlasov Says:

    Maybe I confused the “chessboard” and 2×2 block partitions in #80, but either way the inhomogeneity in both cases produces some problems for my intuition about physical models. This inhomogeneity often makes the implementation of some properties very simple, but it can be a kind of “streetlight effect”. Maybe homogeneous reversible CAs deserve more attention.

  85. Ed Fredkin Says:

    While it may be interesting to think about the idea of a CA that has the property of “physical universality”, it is also doubtful that that property is useful in thinking about CA models of physics. What we want in a realistic CA model of physics is that it produces physics, and nothing else. The problem is not “…can we create any configuration?” The problem is getting a CA that only produces the particles that happen to exist in the physics of the real world. What’s important is that the combination of the CA rule and the initial conditions should be unable to produce states that physics is also unable to produce.

  86. Scott Says:

    Ed #85: Ironically, it’s a lot easier to produce CA models with specific functionalities that are (or might be) fundamentally beyond what the real laws of physics provide, than it is to produce CA models that yield “physics and nothing else”! (For one thing, I personally don’t think there’s any hope of the latter unless your CA incorporates both quantum mechanics and dynamical causal structure from the outset, but maybe we disagree about that.)

  87. Alexander Vlasov Says:

    Maybe #77 (2) is also relevant to the suggestions in #85, #86? — i.e., a representation of the state as a pair (a,b), where only a should be used for universal manipulations.