Does fermion doubling make the universe not a computer?

Unrelated Announcement: The Call for Papers for the 2024 Conference on Computational Complexity is now out! Submission deadline is Friday February 16.


Every month or so, someone asks my opinion on the simulation hypothesis. Every month I give some variant on the same answer:

  1. As long as it remains a metaphysical question, with no empirical consequences for those of us inside the universe, I don’t care.
  2. On the other hand, as soon as someone asserts there are (or could be) empirical consequences—for example, that our simulation might get shut down, or we might find a bug or a memory overflow or a floating point error or whatever—well then, of course I care. So far, however, none of the claimed empirical consequences has impressed me: either they’re things physicists would’ve noticed long ago if they were real (e.g., spacetime “pixels” that would manifestly violate Lorentz and rotational symmetry), or the claim staggeringly fails to grapple with profound features of reality (such as quantum mechanics) by treating them as if they were defects in programming, or (most often) the claim is simply so resistant to falsification as to enter the realm of conspiracy theories, which I find boring.

Recently, though, I learned a new twist on this tired discussion, when a commenter asked me to respond to the quantum field theorist David Tong, who gave a lecture arguing against the simulation hypothesis on an unusually specific and technical ground. This ground is the fermion doubling problem: an issue known since the 1970s with simulating certain quantum field theories on computers. The issue is specific to chiral QFTs—those whose fermions distinguish left from right, and clockwise from counterclockwise. The Standard Model is famously an example of such a chiral QFT: recall that, in her studies of the weak nuclear force in 1956, Chien-Shiung Wu proved that the force acts preferentially on left-handed particles and right-handed antiparticles.

I can’t do justice to the fermion doubling problem in this post (for details, see Tong’s lecture, or this old paper by Eichten and Preskill). Suffice it to say that, when you put a fermionic quantum field on a lattice, a brand-new symmetry shows up, which forces there to be an identical left-handed particle for every right-handed particle and vice versa, thereby ruining the chirality. Furthermore, this symmetry just stays there, no matter how small you take the lattice spacing to be. This doubling problem is the main reason why Jordan, Lee, and Preskill, in their important papers on simulating interacting quantum field theories efficiently on a quantum computer (in BQP), have so far been unable to handle the full Standard Model.
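
To give just a cartoon of where the doublers come from in the first place (a standard textbook observation, nothing specific to Tong’s argument): if you naively discretize a free Dirac fermion on a lattice with spacing a, the momentum p_μ in the Dirac operator gets replaced by sin(p_μ a)/a. Over the Brillouin zone, though, sin(p_μ a) vanishes not only at p_μ = 0 but also at p_μ = π/a, so in 4 spacetime dimensions you get 2^4 = 16 fermion species instead of 1, and they pair up left-handed with right-handed, which is exactly the doubling that ruins the chirality.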

But this isn’t merely an issue of calculational efficiency: it’s a conceptual issue with mathematically defining the Standard Model at all. In that respect it’s related to, though not the same as, other longstanding open problems around making nontrivial QFTs mathematically rigorous, such as the Yang-Mills existence and mass gap problem that carries a $1 million prize from the Clay Math Institute.

So then, does fermion doubling present a fundamental obstruction to simulating QFT on a lattice … and therefore, to simulating physics on a computer at all?

Briefly: no, it almost certainly doesn’t. If you don’t believe me, just listen to Tong’s own lecture! (Really, I recommend it; it’s a masterpiece of clarity.) Tong quickly admits that his claim to refute the simulation hypothesis is just “clickbait”—i.e., an excuse to talk about the fermion doubling problem—and that his “true” argument against the simulation hypothesis is simply that Elon Musk takes the hypothesis seriously (!).

It turns out that, for as long as there’s been a fermion doubling problem, there have been known methods to deal with it, though (as is often the case with QFT) no proof that any of the methods always work. Indeed, Tong himself has been one of the leaders in developing these methods, and because of his and others’ work, some experts I talked to were optimistic that a lattice simulation of the full Standard Model, with “good enough” justification for its correctness, might be within reach. Just to give you a flavor, apparently some of the methods involve adding an extra dimension to space, in such a way that the boundaries of the higher-dimensional theory approximate the chiral theory you’re trying to simulate (better and better, as the boundaries get further and further apart), even while the higher-dimensional theory itself remains non-chiral. It’s yet another example of the general lesson that you don’t get to call an aspect of physics “noncomputable,” just because the first method you thought of for simulating it on a computer didn’t work.
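
And to give a cartoon of the extra-dimension trick too, with the caveat that this is my non-expert summary of Kaplan’s “domain wall” construction: you take a fermion in 4+1 dimensions whose mass flips sign across a wall in the extra dimension, m(s) = M·sign(s). The 5D Dirac equation then has a normalizable zero mode stuck to the wall, falling off like e^{-M|s|}, and that zero mode is a single 4D fermion of definite chirality. Its opposite-chirality partner lives on a second, faraway wall (or boundary), and the two decouple as the walls are pulled apart, which is the sense in which the boundary theory “becomes” chiral even though the 5D bulk theory isn’t.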


I wanted to make a deeper point. Even if the fermion doubling problem had been a fundamental obstruction to simulating Nature on a Turing machine, rather than (as it now seems) a technical problem with technical solutions, it still might not have refuted the version of the simulation hypothesis that people care about. We should really distinguish at least three questions:

  1. Can currently-known physics be simulated on computers using currently-known approaches?
  2. Is the Physical Church-Turing Thesis true? That is: can any physical process be simulated on a Turing machine to any desired accuracy (at least probabilistically), given enough information about its initial state?
  3. Is our whole observed universe a “simulation” being run in a different, larger universe?

Crucially, each of these three questions has only a tenuous connection to the other two! As far as I can see, there aren’t even nontrivial implications among them. For example, even if it turned out that lattice methods couldn’t properly simulate the Standard Model, that would say little about whether any computational methods could do so—or even more important, whether any computational methods could simulate the ultimate quantum theory of gravity. A priori, simulating quantum gravity might be harder than “merely” simulating the Standard Model (if, e.g., Roger Penrose’s microtubule theory turned out to be right), but it might also be easier: for example, because of the finiteness of the Bekenstein-Hawking entropy, and perhaps the Hilbert space dimension, of any bounded region of space.

But I claim that there also isn’t a nontrivial implication between questions 2 and 3. Even if our laws of physics were computable in the Turing sense, that still wouldn’t mean that anyone or anything external was computing them. (By analogy, presumably we all accept that our spacetime can be curved without there being a higher-dimensional flat spacetime for it to curve in.) And conversely: even if Penrose was right, and our laws of physics were Turing-uncomputable—well, if you still want to believe the simulation hypothesis, why not knock yourself out? Why shouldn’t whoever’s simulating us inhabit a universe full of post-Turing hypercomputers, for which the halting problem is mere child’s play?

In conclusion, I should probably spend more of my time blogging about fun things like this, rather than endlessly reading about world events in news and social media and getting depressed.

(Note: I’m grateful to John Preskill and Jacques Distler for helpful discussions of the fermion doubling problem, but I take 300% of the blame for whatever errors surely remain in my understanding of it.)

104 Responses to “Does fermion doubling make the universe not a computer?”

  1. Moshe Says:

    Two quick comments:

    I think the concept is fundamentally extra-scientific, very much theological in nature. All the examples of empirical consequences you give are not predictable using the usual tools of science, even in principle. Like believing in a god interfering arbitrarily in the mechanisms of nature, accepting that hypothesis is rejecting the idea that you can have a mechanical and scientific description of nature. This is why I view this as a traditional religious viewpoint dressed up in technological lingo.

    I also want to point out that what scientists call a simulation in this context is very much different from what people have in mind here. When you simulate the strong interactions for example you run many simulations of the situation at hand, each one of them gives a qualitatively different result than what occurs in nature. You then have a very sophisticated post-simulation analysis to tease out a very small signal which is the physical answer. This is how you get for example Lorentz invariant answers from lattice models, and any solution to fermion doubling will only increase that overhead that you have to filter out to get a physical answer. There is never a single computer simulation that is literally identical to what occurs in nature, which we can then imagine existing in.
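
    To caricature that workflow with a toy that has nothing to do with QCD (a one-dimensional Ising chain, with parameters chosen arbitrarily): no single Monte Carlo configuration looks anything like the physical answer, only the ensemble statistics do, and even the naive error bar below is not to be trusted until you account for autocorrelations. A minimal sketch:

      import numpy as np

      rng = np.random.default_rng(0)
      L, beta = 64, 0.7                      # chain length, inverse temperature (arbitrary)
      spins = rng.choice([-1, 1], size=L)    # one "configuration" of the lattice

      def sweep(s):
          # One Metropolis sweep over the chain (periodic boundary conditions).
          for i in range(L):
              dE = 2 * s[i] * (s[(i - 1) % L] + s[(i + 1) % L])
              if dE <= 0 or rng.random() < np.exp(-beta * dE):
                  s[i] = -s[i]

      def corr(s):
          # Nearest-neighbour correlation <s_i s_{i+1}> measured on one configuration.
          return np.mean(s * np.roll(s, 1))

      for _ in range(500):                   # thermalization sweeps, thrown away
          sweep(spins)
      samples = []
      for _ in range(2000):                  # accumulate measurements over many configurations
          sweep(spins)
          samples.append(corr(spins))

      print("MC estimate:", np.mean(samples), "+/-", np.std(samples) / np.sqrt(len(samples)))
      print("exact (infinite chain):", np.tanh(beta))

    Any single configuration is just a noisy string of ±1’s; the physics (here, the exact answer tanh(β) for the infinite chain) only emerges from the averaging and error analysis done afterwards.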

  2. Doug S. Says:

    If spacetime is quantized at the Planck length, would that be a lattice model? I only had one semester of undergraduate modern physics…

  3. Scott Says:

    Doug S. #2: No, that would be discreteness at a fundamental level. In lattice models, by contrast, you impose a lattice spacing, and then consider the limit as that spacing is taken to zero (and you hope that the limit exists!).

  4. Scott Says:

    Moshe #1: I think I disagree on two points.

    First, if we want to say that science discovered all these mathematical regularities in Nature, then we have to say that it could have turned out otherwise, that science could have instead found that natural phenomena like eclipses and hurricanes really were (as the ancients believed) ordered to reward or punish humans or to send them messages. To take the obvious: if you heard the voice of God from the booming thundercloud or the burning bush — granted that you didn’t — would that not be an empirical fact that you’d then need to (massively) update on?

    Second, the question of what kind of simulation is necessary or sufficient to “bring something into actual physical existence” is complicated and disputed! If God has to run a bunch of Monte Carlo trials to estimate the expectation values of the observables of our world, who are we to say that that’s not just an implementation detail?

  5. Mark Spinelli Says:

    Thanks! Your paragraph beginning:

    > I can’t do justice to the fermion doubling problem in this post (for details, see Tong’s lecture, or this paper by Preskill).

    does link to Tong’s YouTube lecture but does not seem to have a link to any particular paper by Preskill?

  6. Oleg S Says:

    Hi Scott,
    Sorry for off-topic (although not too far),
    but is there an N such that busy beaver BB(BB(N)) < BB(N+1)? What about BB(N+100)?

  7. Moshe Says:

    Scott, I agree with both points but I think it does not change things much. I still think the simulation hypothesis is a thinly disguised religious hypothesis. Sure, you could imagine that god exists and one day you will be presented with incontrovertible evidence, but until that day comes all evidence is that either one of these isomorphic hypotheses is not particularly useful or interesting. Yeah, maybe in some sense with enough caveats it is possible, but infinitely many other things are possible and our time to think is finite.

    As for the second point, I just think most popular presentations of the computer simulation idea are much more simple-minded than what is done in practice when you simulate nature. If you want to extend the hypothesis such that we come into being not in the matrix itself but as a faint signal in some complicated post-processing, you can; I don’t think that will make for great science fiction though.

  8. Scott Says:

    Mark Spinelli #5: Thanks and sorry about that, fixed now!

  9. Scott Says:

    Moshe #7: I mean, I just try once every few years to counter bad philosophy around the simulation hypothesis with better philosophy, so that you and your friends can continue to ignore the whole topic and do physics. You’re welcome! 😀

  10. Scott Says:

    Oleg S #6:

      is there an N such that busy beaver BB(BB(N)) < BB(N+1)? What about BB(N+100)?

    Sure, take N=1. Then BB(BB(1)) = BB(1) = 1 < BB(2) = 6 <<< BB(101). 😀

  11. Ted Says:

    Minor quibble with your question “Does fermion doubling present a fundamental obstruction to simulating QFT on a lattice … and therefore, to simulating physics on a computer at all?”:

    Even if there were an airtight no-go theorem that completely ruled out the ability to locally simulate the Standard Model on a lattice – which appears not to be the case – then that still wouldn’t imply that the Standard Model was uncomputable or un-simulateable. There could still be some other computable non-lattice method for extracting observables (e.g. a more mathematically rigorous version of existing analytical methods like perturbation theory). So you wouldn’t necessarily even need to bring in any beyond-SM physics to rescue the computability of the laws of physics.

    It isn’t obvious to me that there could even conceivably exist a no-go theorem that ruled out every possible computable simulation technique for any theory of physics that even vaguely resembles our universe (e.g. without “hard-coding” an oracle for the halting problem right into the laws of physics themselves, which would make for a very different physical theory than our current ones).

  12. Moshe Says:

    Scott, thanks!

    (I don’t mind so much the metaphysical baggage, there is a long and distinguished tradition of trying to provide proofs of god’s existence, some of these things can be interesting. I just wish that people were a little more upfront that this is what they are doing.)

  13. Scott Says:

    Ted #11:

      Even if there were an airtight no-go theorem that completely ruled out the ability to locally simulate the Standard Model on a lattice – which appears not to be the case – then that still wouldn’t imply that the Standard Model was uncomputable or un-simulateable.

    Not only do I 100% agree with you, but a major purpose of this post was to make that very point! 🙂

      It isn’t obvious to me that there could even conceivably exist a no-go theorem that ruled out every possible computable simulation technique for any theory of physics that even vaguely resembles our universe (e.g. without “hard-coding” an oracle for the halting problem right into the laws of physics themselves, which would make for a very different physical theory than our current ones).

    I mean, many problems have been shown to be uncomputable that don’t explicitly “hard-code” oracles for the halting problem—e.g., solving Diophantine equations, tiling the plane, matrix mortality, deciding whether a Hamiltonian is gapped or gapless…—because it turns out, nontrivially, that the halting problem can be reduced to them. A priori, it’s conceivable that simulating physics could be another such problem (and that’s basically what Roger Penrose believes). But I agree that any physical laws supporting such a reduction would have to look radically different from the laws that we believe are true! (At least, if we leave aside “reductions” that manipulate physical quantities to infinite precision, or use unbounded energy, or otherwise make physically ridiculous assumptions.)

  14. Tobias Maassen Says:

    This is not the simulation hypothesis people care about. People care about simulations of complete humans, like the NPCs in any video game. The fundamental object of the simulation is the Person, with a few hundred possible character traits, and a special case to forge physics experiments. This is sometimes called the “Ancestor Simulation”, because only your ancestors are worth being simulated, and obviously the simulators want their simulated ancestors (us) to have exactly the experiences the real ancestors had. These people would never understand xkcd 505. Whenever someone asks about the simulation hypothesis, ask them to clarify the level of simulation they’re talking about.

  15. Scott Says:

    Tobias Maassen #14: Right, but given that we can do physics experiments and observe their results (and make astronomical observations, etc.), it would seem that by far the most elegant way to simulate our experience would be to simulate the laws of physics of the entire universe. I agree that numerous shortcuts are possible to save on computational power. But if so, we’re right back to the dichotomy with which I opened the post:

    (1) If the computational shortcuts don’t lead to any observable consequences, then I don’t care about them.
    (2) If someone says they do lead to observable consequences, that person should “put up or shut up”! 🙂

  16. zyezek Says:

    I think you’re 100% right IF the Physical Church-Turing Thesis (PCTT) of (2) really is true. However, I think the argument breaks down if it isn’t.

    The whole premise of the Simulation Hypothesis (SH) is that we’re actually software being executed by some universal Turing machine (or a strictly more powerful “hyper-computer”). I would argue that at the most fundamental level, the difference in (2) vs. (3) is simply this: Is our reality fully simulatable on a Turing machine (2) or is it literally such a simulation (code) running on some universal Turing machine (3)? Both cases share a critical core premise: All physical reality can be modeled on a Turing machine, to arbitrary precision. It is the PCTT that makes the SH both feasible and hard to falsify, because it is extremely difficult to distinguish a reality that operates as if it were a computer program from one that literally IS a computer program.

    However, the PCTT itself is in principle a scientific, testable hypothesis. For example, General Relativity’s mathematics allows for configurations of space-time, like Closed Time-Like curves, that if observed in the real world would allow for hyper-computation. Various unified theories trying to combine GR with Quantum field theory may also allow for systems that share this ‘hyper-computational complexity’ property. Observing or creating such a system would then disprove the PCTT and require a post-Turing model of computation.

    Now, my understanding is that the vast majority of complexity theory and its results implicitly applies only to (quantum) Turing machines. And the universality properties of Turing machines don’t necessarily extend to hyper-computers. So if new physics implies the universe contains “Type U” hyper-computers, and there IS NO “universal Type U hyper-computer” in the same way there is a universal Turing machine, then such a universe can’t be a simulation in the same sense the SH is proposing.

    I get that there would still be loopholes- a modified SH could claim the ‘higher’ reality has vastly different laws of physics with correspondingly more powerful machines, etc. The most generic form of it is probably unfalsifiable in principle. But any specific variant, especially the common “reality is The Matrix” ones, could in principle be falsified by identifying the minimally powerful model of computation necessary to simulate physical reality & then seeing if the model actually permits such simulation. Turing machines clearly do, which is why the familiar SH is likely unfalsifiable should the PCTT be true.

  17. fred Says:

    Does it count if our universe is being “simulated” on a quantum computer, which can be seen to be somewhere between a digital/discrete (bits and lattices) and an analog/continuous (wave functions) machine?

  18. Adam Treat Says:

    Scott says,

    “…it still might not have refuted the version of the simulation hypothesis that people care about…”

    But my eagle eyes note that you didn’t specify which of the three you believe people care about. Scratch that. I want to know which you think others generally care about and which *you* care most about. Guessing you care most about #2?

  19. A1987dM Says:

    “(By analogy, presumably we all accept that our spacetime can be curved without there being a higher-dimensional flat spacetime for it to curve in.)”

    Well, actually…

    https://en.wikipedia.org/wiki/Nash_embedding_theorems
    https://doi.org/10.1007/BF02106973

    🙂

  20. Doug S. Says:

      If God has to run a bunch of Monte Carlo trials to estimate the expectation values of the observables of our world, who are we to say that that’s not just an implementation detail?

    On that note, if “God’s dice” were actually a (very good) classically computable pseudorandom number generator instead of “truly” random, would there be a way to figure that out?

  21. Ted Says:

    Scott #13: Yes, I apologize for belaboring the obvious. I just wanted to point out that one can slightly strengthen your observation that “Even if it turns out that lattice methods can’t properly simulate the Standard Model, that tells us little about whether any computational methods could simulate the ultimate quantum theory of gravity”: In fact, that wouldn’t even tell us whether computational methods could simulate the SM itself, let alone gravity.

  22. fred Says:

    Dreams are clearly simulations (in the sense that when we dream, we accept things as real, most of the time, and whatever happens in there is generated entirely within our own brain).
    And, even more fundamentally, the “objective reality” we supposedly live in is itself “simulated” inside our brain from perception data.

    When we look at a “table”, all there is is an apparition in consciousness (built from electrical signals coming from the optic nerves.. all those words referring to other types of “tables”), supposedly highly correlated with something else “out there”, of a totally mysterious nature, which we call “matter”, “quantum fields”, ….

    We only know that dreams are dreams after we wake up, when the brain restores a higher degree of scrutiny for self-consistency. But even “objective reality” fails that sense of self-consistency at some level, one example being the impossibility humans have to “understand” QM (impossibility to make sense of dual particle/wave nature, the subjective/murky nature of measurement, etc).

  23. Scott Says:

    fred #17: Why wouldn’t it “count,” whichever of questions 1-3 you care about? Keep in mind that any simulation a QC can do, a classical computer can do as well, albeit possibly exponentially slower.

  24. Scott Says:

    Adam Treat #18: I definitely care more about #2 than about #1. I’m not sure whether #3 even has any possible consequences for anything in our experience, but supposing it did, I’d care about it most of all. I’d imagine most people have a similar ordering?

  25. Scott Says:

    A1987dM #19: I’m familiar with Nash’s embedding theorem (as well as Whitney’s embedding theorem, which is much easier to prove). But the whole reason why those theorems have content, is that it’s also possible to talk “intrinsically” about a curved manifold, without specifying what it’s curved in if anything. In much the same way, it’s possible to talk “intrinsically” about a computable universe without specifying what if anything is computing it.

  26. Scott Says:

    Doug S. #20:

      On that note, if “God’s dice” were actually a (very good) classically computable pseudorandom number generator instead of “truly” random, would there be a way to figure that out?

    Plausibly yes! Just for starters, Bell’s Theorem would then tell you that superluminal communication was needed to coordinate the pseudorandom measurement outcomes between faraway entangled particles, which would break most of physics, which most of us regard as an extremely strong argument that the randomness of quantum measurement is real. For more on this point, see e.g. my American Scientist article.
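
    If you want to see the relevant numbers, here’s a tiny sanity check with the standard textbook CHSH angles (nothing about it is specific to the pseudorandomness question; it just exhibits the correlations that any locally pre-scripted answers, pseudorandom or otherwise, would have to fake without communication):

      import numpy as np

      X = np.array([[0, 1], [1, 0]])
      Z = np.array([[1, 0], [0, -1]])

      # Singlet state (|01> - |10>)/sqrt(2)
      psi = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)

      def spin(theta):
          # Spin measurement along an axis at angle theta in the X-Z plane.
          return np.cos(theta) * Z + np.sin(theta) * X

      def E(a, b):
          # Quantum correlation <A(a) (x) B(b)> in the singlet state.
          return psi @ np.kron(spin(a), spin(b)) @ psi

      a1, a2, b1, b2 = 0, np.pi / 2, np.pi / 4, -np.pi / 4
      S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
      print(f"|S| = {abs(S):.4f}  (any local hidden-variable model: |S| <= 2)")

    The output is 2√2 ≈ 2.83, and Bell’s theorem is precisely the statement that no local assignment of pre-determined (or pre-computed) outcomes can get beyond 2.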

  27. fred Says:

    Another crucial point is the nature of computation.
    If we assume “computation” can give rise to reality, then it can give rise to consciousness.
    But computation is itself “in the eye of the beholder”: the only computations we (humans) are directly aware of are extensions of our consciousness. What this means is that we associate computation with some hardware that has memory and registers, and moves step by step. But at what point of that process would “consciousness” arise? When the registers are loaded? When the internal clock advances? When the current micro-instruction is executed?… The problem is that we can realize the very same computation by using a bunch of trained monkeys marking 0s and 1s on big sheets of paper. When would the consciousness arise in that equivalent situation? When the monkeys write down a fresh batch of ones and zeros? When they look up some rule book to implement the equivalent of an ALU, to “compute” the next state?
    And this entire setup can again be reduced even further by just having a giant stack of sheets of paper marking all the possible states of the computer, and just picking from one to the next following some rules. At what point would consciousness be instantiated in that case? When the next sheet of paper is being “picked”? But we don’t need to pick each state… just by virtue of its own existence, the stack of sheets “implements” all the possible computations at once, therefore all possible realizations of consciousness just exist at once as well?
    And why even bother with the sheets of paper and symbols? Each machine state can be encoded in an integer, therefore the existence of the natural numbers is enough to instantiate all the possible conscious experiences at once?
    This seems to lead to the idea that reality is mathematical in nature, and the mere existence of numbers and logic leads to the creation of consciousness and all the possible perceived realities. And there’s no such thing as “matter”.

    The alternative is that computation just can’t instantiate consciousness, and consciousness would be some fundamental thing on which everything else is built.

  28. fred Says:

    Scott #23

    I brought up QC because I didn’t understand where that concept of lattice came from.
    It seemed like someone’s making the assumption that simulating our universe would have to happen on a digital computer, in the realm of discrete spaces, so, if our universe is itself a simulation, fundamental laws of physics would reveal that discrete nature.
    But, if reality is “continuous” in nature, only “analog” machines could truly simulate it, and QC are analog in that sense.
    A qubit = a |0> + b |1>, but a and b are complex numbers, therefore could encode infinite information. Something a discrete digital computer can’t do (with finite resources) and only approximate.

  29. fred Says:

    I think the idea of “reality is a simulation” has less to do with the Church-Turing hypothesis than with the philosophical musings popularized by The Matrix.
    And it’s getting more salient by the day as Virtual Reality hardware keeps improving.
    Eventually Virtual Reality won’t be about wearing goggles but directly injecting digital data into the perception channels of the brain, controlling not just vision, audio, but also sense of acceleration, smell, etc. At that point humans will likely spend more time inside virtual universes than in “raw” reality. And at that point it will be pretty obvious to such humans that they live inside simulations and that ‘reality is a simulation’, and hardly anyone will give a shit whether the “raw” boring zero level reality is itself a simulation (everyone will assume it is but won’t care).

    As an analogy, it’s like asking a person from the 12th century whether a painting could be indistinguishable from reality. And then you put them in front of an 8K 100 inch wide HDR Oled TV, and they’d probably lose their mind trying to understand what the fuck they’re looking at.

  30. Scott Says:

    fred #28: No, even a QC is made up of a discrete set of qubits (each of which can have a continuum of amplitudes—an extremely different kind of continuum). For that reason, putting everything on a lattice is one of the basic techniques even if you’re only trying to simulate QFTs on a QC, as in the Jordan-Lee-Preskill papers.

  31. fred Says:

    Imagine a future where “raw” zero level reality is entirely managed by machines/AIs, and human brains are plugged at birth into a virtual reality system.
    Newborn brains are first exposed only to ‘traditional realities’ as they grow/mature, and then later on exposed to more powerful/complex levels of reality, based on their own preferences.
    Not only would the system be able to totally simulate all aspects of “raw” reality, i.e. you’d feel like you have an actual body and be able to run on a beach, feeling sunlight on your skin, holding hands with your soul mate,… but the plasticity of the brain means you’d be able to go beyond the limitations of the organic senses: you’d be able to see in 360 degrees, learn to fly or have 6 limbs, sleep for arbitrary amounts of time for some computation to finish, learn to live in 4 dimensional worlds, etc.
    If you think that’s crazy, imagine explaining to a 12th century person that some people in the 21st century make a living by sitting still in a chair all day long, playing “video games”.

  32. Joe Says:

    Hi Scott,

    I don’t fully grasp this topic but I find it fascinating. I’m curious about your thoughts on a slightly different, but related matter: simulating special relativity, and in particular, time dilation. Would that be a candidate for arguing that we are not in a simulation? Pardon my naivety, but it seems impossible to me to run the simulation in a particular reference frame.

  33. Scott Says:

    Joe #32: I don’t think so. You can always just pick a reference frame and run the simulation in it, which is exactly what people do in lattice QFT (or even just simulating Maxwell’s equations). Just because your theory has some symmetry doesn’t mean that a simulation needs to exploit it!

  34. fred Says:

    Joe #32

    Special and general relativity imply a static, deterministic “block” universe in spacetime, where the realities perceived by different reference frames are defined by different ways of slicing this block universe.
    But I have no idea how “easily” that block universe can be computed (at least by an “outside” observer, the simulator).

  35. Adam Treat Says:

    Scott #24,

    I’m with you that none of the above matter if they don’t result in well defined consequences. #2 for me is the most well defined so I guess I care about this the most. My inner sci-fi teenage nerd cares about #3 I guess, but this is the one I have the least hope would actually result in well defined consequences.

    I’ve stated this before, but I’m on record in email saying I think #2 is true and I hope that we’ll have real world consequences in my lifetime. My best guess for how this would play out relies on the notion of uncomputable Kolmogorov complexity. A real world consequence would be finding some physical process that compressed information “perfectly” in the sense of Kolmogorov complexity. It is hand wavy, but I think this is what is going on inside black holes: perfect compression of information. And what better place to find an uncomputable process than the singularity of a black hole?

    Now, how we could devise an experiment (even theoretically) to indicate this is way beyond me, and I already stipulate that if any such experiment could only yield a real world consequence at the heat death of the universe, when the radiation has all evaporated out, then it would not matter and I would acknowledge I was wrong about #2 being true.

  36. JimV Says:

    Fred at #17 refers to an “analog/continuous (wave functions) machine”. As I have mentioned probably too many times, I know of no such (actual vs. nominal) machines. Analog computers use electricity which is composed of discrete electrons. Waves in a fluid or gas are composed of discrete molecules. Light is composed of photons. Slide rules are made of discrete molecules. We use continuous mathematics as a good approximation to such things, but I’m with Zeno. (It’s an illusion, like the motion in a movie.) The universe’s speed limit c = dx/dt, where dx is the minimum spatial increment and dt is the minimum time increment. Half of c is composed of equal numbers of dx increments and zero increments. The increments are too small for us to detect, even in Lorentz Transformation effects. (I could be wrong, but that makes the most sense to me.)

  37. Oleg S Says:

    Scott #10: “Sure, take N=1. Then BB(BB(1)) = BB(1) = 1 < BB(2) = 6 <<< BB(101).”

    What I meant was: for an ordinary fast-growing function f, there is an X such that for all N > X, f(f(N)) > f(N + 100). However, BB is such a crazy fast-growing function that there is a Y > X such that if X < N < Y then BB(BB(N)) > BB(N + 100), but once N > Y, BB(BB(N)) < BB(N + 100).

  38. Lane Says:

    Regarding #3, what thought have you given to what that universe might look like to make simulation of our universe relatively trivial?

  39. Adam Says:

    Oleg #37: Your comment is hard to make sense of but maybe this will help:

    BB is strictly monotonic, i.e. BB(N) < BB(M) if and only if N < M.

    So BB(BB(N)) < BB(N + 100) would imply that BB(N) < N + 100.

    That wouldn't be a "crazy fast-growing" function but rather a sublinear one.

    But in fact BB is much faster than linear, so this won't happen.

    (As a matter of fact, I once made a very similar mistake in a Scott-inspired "who can name the biggest number" contest on a now-dead forum. So I think I’m familiar with the intuition you’re drawing on — A function that grows even faster than recursion! — but unfortunately it can't work.)
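
    To spell it out with the known small values (step-count convention, as in Scott’s #10, so BB(1)=1, BB(2)=6, BB(3)=21, BB(4)=107): since BB is strictly increasing on the integers, BB(N) ≥ BB(4) + (N − 4) = N + 103 > N + 100 for every N ≥ 4, and hence BB(BB(N)) ≥ BB(N + 103) > BB(N + 100). So the inequality BB(BB(N)) < BB(N + 100) holds exactly for N = 1, 2, 3 (where BB(N) is 1, 6, 21, each less than N + 100) and fails for every larger N, which is the opposite of “eventually true.”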

  40. Danylo Yakymenko Says:

    Question 3 is clearly metaphysical and can’t be answered, because we can’t know the “true” laws of the outer universe.

    Nevertheless, the belief about what the answer is has a real impact on our world (our simulation, if you like). If people believe that everything they see isn’t “that real,” then it changes their behavior in life, and for the worse. They could become less certain about what they perceive, not value established facts, be more apathetic and less empathetic to the lives of others (like to NPCs in games). It’s much easier to manipulate and control people who don’t believe in anything certain. I think this is why people like Musk push this idea to the masses. They profit from that.

    Of course, there is an inherent ability of humans to believe in what they like more instead of facts. But the simulation hypothesis amplifies this.

  41. fred Says:

    JimV #36

    what I meant is that, with a traditional digital computer, the state evolution is determined by transformation matrices that only have 0s and 1s in them (combination of NAND gates is enough to represent any circuit).

    With a QC, in all generality, the evolution of the state is determined by transformations represented by multiplications with continuous unitary matrices (terms are complex numbers), and the state rotates in a continuous space.
    Now, it’s likely that unitary transformations can only be approximated in practice, leading to inaccuracies (and maybe that’s also taken care of by the many physical qubits to one logical qubit “magic”). And a universal quantum gate requires the ability to “rotate” by an arbitrary angle.

  42. Clint Says:

    Scott, thank you, couldn’t agree or encourage you more in this direction …

      In conclusion, I should probably spend more of my time blogging about fun things like this, rather than endlessly reading about world events in news and social media and getting depressed.

    It is interesting (to me anyway) that evolution produced a lattice-amplitude-interference-based computational model for simulating space in mammals that so uncannily resembles the lattice-amplitude-interference-based computational model “running” the universe. Of course, shouldn’t we expect (successful) evolution to produce a simulation machine that matches the universe’s class of computation as nearly as possible?

    Anytime someone says “Man, it’s like we live in a simulation!” I say, “Yes, and it’s running in the computational machinery between your ears.”

  43. James Cross Says:

    “apparently some of the methods involve adding an extra dimension to space”

    Would that be the same as the Kaluza-Klein theory? Or, some other kind of extra dimension?

    It still would present a problem proving it, wouldn’t it?

    The idea that our world is a facade of some other reality goes back to the Vedic concept of maya.

    from Wikipedia

    In later Vedic texts, māyā connotes a “magic show, an illusion where things appear to be present but are not what they seem”; the principle which shows “attributeless Absolute” as having “attributes”

  44. Scott Says:

    Clint #42:

      Of course, shouldn’t we expect (successful) evolution to produce a simulation machine that matches the universe’s class of computation as nearly as possible?

    I’d say: we should expect the brain to simulate the external universe faithfully, insofar as doing so would’ve been both feasible and directly useful in the ancestral environment. So for example, it should be able to represent solid objects as having positions and velocities in 3D space, as being able to hit and occlude each other and to feel the earth’s gravity, etc. On the other hand, I see no good reason for the brain to have “hardwired” any information about quantum mechanics or general relativity, or even electromagnetism or the subtler parts of Newtonian mechanics, and I also see no evidence that it has.

  45. Scott Says:

    James Cross #43: No, I don’t think it’s closely related to Kaluza-Klein, except insofar as they both involve hidden extra dimensions. Unlike with Kaluza-Klein, here the extra dimension isn’t being posited as actually physically existing: it’s “just” a calculational tool introduced to break an unwanted symmetry. But maybe one of the physicists reading this can give you a more technical answer.

  46. James Cross Says:

    Clint #42

    But are the grid cells really “computing” or are they just “imitating” or “mimicking”?

    I think it’s more likely the latter, because of the lack of precision in judging things like distances.

    For example

    “judging distances and depth may be trickier than it seems. A recent study, published Oct. 23 in the Journal of Neuroscience, finds that people’s depth perception depends on their perception of their arm’s length. Trick someone into thinking their arm is shorter or longer, and you can influence how they perceive distances between two objects.”

    https://www.scientificamerican.com/article/judging-distances-and-depth-perception-change-with-arm-length/

    We wouldn’t expect such lack of precision from a computer.

  47. Joe Says:

    Scott #33: You are absolutely right! It turns out what I had in mind was a Matrix like game, where “players” still lived in the real world. In that case it’s indeed not possible to simulate time dilation. But in a pure simulation, one can simply simulate each observer in their own frame of reference, like you suggested.

    This was a revelation to me as it made me realize the difference between The Matrix and a pure simulation where consciousness is just an emergent property of it. Anyway, thanks for setting me straight!

  48. Adam Treat Says:

    James Cross #46,

    You might anticipate something like this from a neural net, though, where the variables encoding arm length are highly correlated with those encoding distance functions. Given that arm length is relatively stable timewise compared to the distance measurements we necessarily must make on a much faster timescale (being chased by a predator), it seems reasonable that a neural net might encode it this way.

    I’m not sure anyone has said that neural nets are fundamentally different from computation nor can I imagine such a distinction could be made in any reasonable way.

    In conclusion, a computer simulating a neural net might have just that kind of lack of precision and employ a quite useful heuristic.

  49. James Cross Says:

    Scott #45

    That’s what I guessed.

    However, it’s interesting that the question of whether a dimension is “real” comes up in the context of whether we live in a simulation.

    If an extra unreal dimension is required to make the calculations work, wouldn’t that be indirect evidence we are living in a simulation? All of the dimensions could be unreal.

    Or, maybe all of the dimensions (and more) are real, but we only think the 3+1 dimensions are real because that is all that is “directly useful in the ancestral environment”.

  50. Scott Says:

    James Cross #49: Ah, who among us can say which elements of our theories are “real,” and which are mere calculational conveniences? ‘Tis a rabbit-hole that stretches all the way back to the dawn of modern physics. Truly, ’tis. 😀

    Having said that, I believe there’s at least the following crucial distinction: in Kaluza-Klein theories, and in the modern string theories that build on them, if only we could do experiments at sufficiently extreme energies, the compactified extra dimension(s) would appear just as “real” to us as the 3+1 large dimensions of everyday life. With this solution to the fermion doubling problem, by contrast, the extra dimension would presumably remain empirically inaccessible no matter the energy of our probes. (Though it’s an interesting technical question whether, if you took the solution seriously as physics rather than just as a calculational device, the extra dimension would become accessible from the boundary given high enough energies…)

  51. James Cross Says:

    Adam Treat #48

    Computer neural nets do a good job simulating what the brain does. And they are getting better at it.

    But keep in mind, the grid cells in the hippocampus are actual physically laid out cells in a spatial pattern. They are not values on an algebraic matrix.

  52. Ajit R. Jadhav Says:

    Dear Scott,

    Thanks for this post, and also for pointing out Prof. Tong’s talk.

    I’ve read a bit about QFT, and am at a beginning stage of taking more systematic notes on QED. (I am not interested in other QFTs.) After reading your post (and watching Tong’s talk) I was wondering about the following couple of points…

    – – –

    1. What the Nielsen-Ninomiya theorem seems to be really saying is this:

    Wilson’s ideas (the renormalization group flow-based explanation) cannot properly resolve the issue of the infinities in QED. In other words, the theorem seems to be dealing a blow to the very idea that we do have an effective field theory that’s based on the current QED.

    [Note, consistency would require a perturbative series that behaves well up to arbitrarily high order, not “just” in the first few terms with which predictions have been made so far.]

    2. Tong says: “No one knows how to simulate the laws of physics on a computer.”

    Isn’t it rather because the “laws of physics” themselves aren’t consistent (in particular, QED is not consistent), and so, there can’t be any surprise that they aren’t “simulatable” either?

    I was wondering whether a QFT expert could shed light on these two points.

    Best,
    –Ajit

  53. Scott Says:

    Ajit #52: Those are complicated questions! As far as I understand, by far the best candidate we currently have for a rigorous mathematical definition of most QFTs is “whatever thing you get by putting the QFT in question onto a lattice and then taking the lattice spacing to be very small—with appropriate fixes to deal with chiral fermions and so forth.” The issue is just that that “definition” leaves some key questions unanswered, for example:

    – Does the limit as the lattice spacing goes to zero exist, as a well-defined mathematical object? If it exists, does it have the basic properties physicists want (like a mass gap)?

    – Does the QFT, so defined, continue to make sense if we allow arbitrarily high energies? And if it doesn’t, is that a serious problem, or is it OK because we expect the QFT to be supplanted at those energies by a better theory anyway?

  54. fred Says:

    I don’t quite get the assumption that the “machine” that simulates the world would itself be made of that same “stuff” it simulates (at this moment, we model “stuff” as quantum fields). Why would the limits implied by Church/Turing or QM even apply to the “machine” that simulates our world?
    For all we know the “higher” reality is not subjected at all to the same limitations on their models of computations.

    Two thoughts/questions:

    1) Imagine that computer science was a thing before the discovery of QM (somehow Babbage was way more productive and successful). Would such computer scientists ever come up (entirely on their own) with the concept of quantum computers, as an imaginary but possible model of computation?

    2) Imagine we’d stick advanced AIs in a totally isolated sandbox world that’s like a gigantic instance of the video game Grand Theft Auto. Possibly they could be evolved entirely within the rules of this world, and so these AIs would only perceive the simplified physics of this video game world (smooth 3D objects with rigid body physics, and whatever arbitrary limitations the game physics engine imposes).. would such AIs ever be able to realize or prove that they’re inside a world that’s simulated on some external “hardware”? If so, would they have any reason to think that this hardware isn’t also running in a world ruled by the same limited set of physics they perceive, or would they eventually deduce that this hardware has to run on QM+general relativity physics?

  55. fred Says:

    It’s also clear that the assumption behind the “world is a simulation” hypothesis is that there would be some conscious beings (in the same way we are conscious) that run the damn thing, so that their world would be somewhat similar to our world, i.e. a new modern twist on the concept of god.
    Otherwise there would be no point to the idea, because the “modern” view of the world is already that of a dumb blind mechanical construct that can’t feel and doesn’t care… whether “computation” as we think of it would be involved or not wouldn’t bring anything new or interesting to the picture.

  56. gerben Says:

    I asked you this question before (https://scottaaronson.blog/?p=3208). However, aside from things like computability/hypercomputability, it seems to me that there is a very interesting question, namely: is our universe finite or infinite? The chiral anomaly points to an intrinsically infinite Hilbert space, with the gauge anomaly producing measurable low-energy phenomena that cannot be constructed consistently.

  57. Dimitris Papadimitriou Says:

    fred # 34

    The term “static” is a technical one in GR, so it’s better not to use it for philosophical concepts like the “block universe” (which are useless anyway, as they don’t say anything that we don’t already know about SR or GR).
    Moreover, people who talk about “block universes” usually forget that there are solutions in GR that are non globally hyperbolic (many of them are “indeterministic” in a much stronger sense than QM is. The latter gives us probabilistic predictions, but there are many solutions in GR that, although they are locally deterministic, fail to be globally so. These are unpredictable in a much stronger sense, as they usually have only partial Cauchy hypersurfaces).
    For example, a spacetime that has a timelike (“naked”) singularity is non globally hyperbolic, thus unpredictable.
    Another common example is from the semi-classical theory (combining curved spacetime from GR with QFT), where you have Hawking-evaporating black holes that completely disappear in the end, leaving locally flat spacetime.
    These are also non globally hyperbolic: the semi-classical version of the information loss problem.
    The “block universe” idea is completely useless for such cases that have Cauchy horizons.
    Sometimes extra boundary conditions can fix the problem of the loss of global predictability, but problems with instabilities usually remain.

    There are difficulties when people try to simulate highly complicated and nonlinear GR phenomena like black hole collisions (especially the interiors), let alone a whole realistic-looking universe.

  58. Prasanna Says:

    Scott,

    Glad you are back on home turf and enjoying the deep ride into the nature of reality.

    There seems to be another front opening up, this one even closer to home. The theory proposed by Jonathan Oppenheim on “gravitationally induced decoherence” seems to be getting some traction recently. Does this not have direct implications for quantum computing? Though Penrose has been proposing similar ideas, this one seems to have some mathematical grounding. Of course the broader claims are extraordinary and it’s still an open problem, but wouldn’t the implications for QC be profound even if this theory is merely in the right direction?

    https://www.nature.com/articles/s41467-023-43348-2

  59. Pascal Says:

    Fred #27 makes a good point. Notwithstanding Scott #15, one crucial difference between us and the characters in a video game is that we have experiences (or “qualia”, as the philosophers put it). That is, we can feel pain and joy, the redness of the color red and the smell of a rose.
    The characters in a video game do not have any of that, or else it would be unethical to shoot ’em up!

  60. Scott Says:

    Prasanna #58: I know and like Jonathan, and I’m glad he has the freedom to pursue a heterodox, “out-there” proposal in which spacetime would somehow stay classical in spite of QM. So far, though, the popular interest seems like 10,000x greater than expert interest. If expert interest ticks up, maybe that will motivate me to spend some time to understand the proposal and what implications (if any) it would have for quantum computing.

  61. fred Says:

    At a higher level, if it’s assumed the world is a simulation, it’s implied the world is deterministic (so all questions about consciousness can be ignored) and the question is whether one can write a mix of typical program and data (the simulation) that eventually finds evidence of the particulars of its hardware, and possibly hacks it.
    On one level it seems there’s the typical limitation that a subsystem can’t itself entirely simulate the system it belongs to (because it would be recursive and need infinite resources)… but, on the other hand, it’s also common for programs to crash, and then find evidence of their “previous lives” inside log files or memory dumps, which would give them clues about their parent system and their own nature.
    But if the program is well isolated from the rest of the system, this may not be a possibility. So it all depends on how “secure” the machine that simulates our universe has been designed (e.g. if written in classic C, pointers can give access to unprotected memory, etc).

  62. Nico Says:

    It might be interesting to note that Tong, in his lectures on gauge theory (http://www.damtp.cam.ac.uk/user/tong/gaugetheory/gt.pdf), makes a point similar to the one Scott made. I’ll quote the passage (pp. 232-233) in full:

    “There is currently no fully satisfactory way of evading the Nielsen-Ninomiya theorem. This means that there is no way to put the Standard Model on a lattice. On a practical level, this is not a particularly pressing problem. It is the weak sector of the Standard Model which is chiral, and here perturbative methods work perfectly well. In contrast, the strong coupling sector of QCD is a vector-like theory and this is where most effort on the lattice has gone. However, on a philosophical level, the lack of lattice regularisation is rather disturbing. People will bang on endlessly about whether or not we live “the matrix”, seemingly unaware that there are serious obstacles to writing down a discrete version of the known laws of physics, obstacles which, to date, no one has overcome.”

    The “philosophical” problem is that we don’t know how to connect the continuum formulation of QCD with its lattice version. And even more disturbing: lattice QCD seems to be very successful at predicting phenomena like the anomalous magnetic moment of the muon. So if one believes, like Naturalism does, that our most successful physical theories should tell us which entities exist, then we should believe in the lattice. But that conflicts with Nielsen-Ninomiya. It really is a philosophical problem because lattice fermion approximations work just fine.
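
    (For reference, and experts should correct me if I’m garbling it: the Nielsen-Ninomiya theorem is usually stated as saying that any lattice fermion action that is local, translation invariant, Hermitian, and exactly chirally symmetric must have its fermions come in left/right pairs with the same quantum numbers, i.e. doublers. The standard ways around it, such as Wilson terms or the domain-wall and overlap constructions, work by giving up or modifying one of those assumptions.)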

  63. MaxM Says:

    What is the (hyper)complexity class of classical universe? Infinite light speed, real computation?

    Could we formulate QM and Relativity as time and space resource constraints for a computer in a classical universe?

  64. Clint Says:

    Thanks again for the post and link to Tong, Scott!

    Scott #44: The question may be better phrased as “Should we expect the evolution of a computational device to approach that of the underlying class of computation of its universe/environment?” Maybe on complexity grounds of wanting to be best equipped to “decode” or “simulate” the device’s environment? But … evolution can get stuck in a local basin of attraction and not have enough opportunity/time/energy to escape to search over other parts of the complexity landscape. Or … maybe it is evolutionarily a “best/better fit” to evolve a model in a class of computation that is strictly contained within the universe’s class because of energy/configuration reasons – meaning, a lesser class could be cheaper/easier to run … maybe trying to build a QC teaches us something here … The question then is if there is some “law of attraction” in the evolution of a computational device that it should approach (or seek to approach) the underlying class of computation of its universe/environment … or if there is something like an energy/entropy bound that makes that less than a best fit … again, good reason to try to build QCs 🙂

    James #46: Good question. I’m a simple doofus so … I would say if a configuration of physical components successfully mimics a computer … then it is a computer (!) Also, a “computer” can be a really, really, really dumb/slow/stupid/simple thing. Being a “computer” is a much much lower bar than we may expect given our daily exposure to our highly developed electronic devices …
    The best (full) explanation I’ve found to the “what really is computing?” question is given in Moore and Mertens’ The Nature of Computation. It is possible to build a universal computer out of just about anything … and the bar is extremely low … meaning “computing” doesn’t require some high level of “precision” or speed or memory.
    And I would add here that the same goes for any class of computation – for example, to be a QC a device only needs to satisfy the 4 postulates of QM.

    Fred #54 (1): See Scott’s QCSD Lecture 9: “Quantum mechanics is what you would inevitably come up with if you started from probability theory, and then said, let’s try to generalize it so that the numbers we used to call “probabilities” can be negative numbers. As such, the theory could have been invented by mathematicians in the 19th century without any input from experiment. It wasn’t, but it could have been.” So, yes, “quantum theory” as an alternative “probability theory” could have been invented/discovered LONG before physicists “discovered” it. I like to imagine what it would have been CALLED in that alternate timeline: “Negative Probability Theory”?, “Complex Probability Theory”? (complex numbers / amplitudes being required) I also think we would then be much less inclined to be hung up on thinking it is “about” atomic physics …

    But to circle back around to the point I was trying to make in my post above … The “Do we live in a simulation?” discussion seems to always be directed at the external world/universe. And I agree with the viewpoint that such a question is unanswerable/unknowable/metaphysical/religious in nature … because the only way to answer it is to be able to “see/find” something/evidence that is by definition “beyond/extra/super” the computational boundaries of the algorithm that “you” are running in. To me, a much more interesting question is the one we KNOW starts with a true premise: “We KNOW that our brains are running a simulation of the outside world AND a simulation of our self (so system(s)/observer), therefore, does this put us in the same position as the “external simulation” question, namely, that we can NEVER know the true nature of external reality because we are locked into a simulation (in our brain’s computational model/class) and everything we know/conceive MUST be experienced/conceived in the model of that simulation, OR, is it possible to know the nature (computational class/model) of our external reality/universe through something like the scientific method? I mean … this impacts how we ACTUALLY DEFINE the context of our experiments (a non-trivial step in quantum mechanics (!!) ) For example, I think that QBism goes right up to the cliff edge of this question – that QM is just “all in our heads” or at least “all in our mathematical modeling”.

    Are these external/internal simulation questions ultimately then the same question/paradox? Why or why not?

  65. Scott Says:

    Nico #62: From a modern, Wilsonian perspective, all known QFTs are low-energy effective theories, not theories that we should’ve ever expected to tell us what the “fundamental entities of reality” are—presumably we need a quantum theory of gravity for the latter. It’s that fact that makes all the issues around the continuum limits of lattice QFTs, such as fermion doubling, strike me as more technical than philosophical, which is good news for anyone hoping to make progress on them!

  66. Scott Says:

    Clint #64: As far as anyone knows, the complexity class that captures which decision problems are efficiently solvable in our universe seems to be BQP. But I don’t see any “law of attraction” that brought the human neocortex any closer to doing BQP in its million or so years of existence … except insofar as humans today are indeed trying to build scalable quantum computers! 😀 This is kind of humanity’s thing, that we can (sometimes) use science and technology to extract ourselves from the local optima that our evolution steered us into.

  67. Scott Says:

    MaxM #63:

      What is the (hyper)complexity class of classical universe? Infinite light speed, real computation?

    The answer to that question strongly, strongly depends on what someone thinks the ultimate laws of physics “would have been” if quantum mechanics had been false: for example, would the fundamental building blocks have been discrete or continuous? (Incidentally, we can of course have a finite speed of light in classical physics too.)

      Could we formulate QM and Relativity as time and space resource constraints for a computer in a classical universe?

    I personally don’t see how to understand either of them that way. Relativity of course limits the speed of information transmission, and quantum mechanics limits our ability to measure and copy information, but they both have content that goes well beyond those limitations.

  68. fred Says:

    “Incidentally, we can of course have a finite speed of light in classical physics too.”

    Taking video games as a reference for humans coming up with “classical” worlds, the treatment of speed is wildly inconsistent.
    Some shooters realistically limit bullet speed so that players have to learn the necessary skill to “lead” based on distance, but others just use infinite bullet speed (laser like) so that players basically hit what they’re aiming at.
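
    (The “leading” those shooters teach is just a constant-velocity intercept. A rough sketch, with a hypothetical helper not taken from any particular engine: solve |p + v*t| = s*t for the earliest positive t and aim at the target’s future position.)

        import numpy as np

        def lead_point(p, v, s):
            """Aim point for a bullet of speed s fired from the origin at a
            target currently at position p moving with constant velocity v.
            Returns None if no intercept exists."""
            a = v.dot(v) - s**2
            b = 2 * p.dot(v)
            c = p.dot(p)
            if abs(a) < 1e-12:                  # bullet and target equally fast
                ts = [-c / b] if abs(b) > 1e-12 else []
            else:
                disc = b * b - 4 * a * c
                if disc < 0:
                    return None
                root = np.sqrt(disc)
                ts = [(-b - root) / (2 * a), (-b + root) / (2 * a)]
            ts = [t for t in ts if t > 0]
            return p + v * min(ts) if ts else None

        # Target 100 m away, moving sideways at 10 m/s; bullet speed 50 m/s.
        print(lead_point(np.array([100.0, 0.0]), np.array([0.0, 10.0]), 50.0))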
    Space “simulators” typically model flight in space as though the ships are traveling in a fixed medium (like ships on water), so that there is a maximum speed (and after a while applying more acceleration is useless). Most of them don’t even model speed as a vector, but as a scalar.
    As for time relativity, in single-player games it’s common to slow time down as a special ability, to simulate heightened perception (in order to be more accurate during movement). The only way to do it in a multiplayer context is to create a sort of limited bubble around the player who activated the ability: everyone who’s in that bubble and in the line of sight of that player is forced to move more slowly, while everyone outside the bubble still moves at normal speed.

  69. Sandro Says:

    (e.g., spacetime “pixels” that would manifestly violate Lorentz and rotational symmetry)

    Firstly, all of these measurements have limited precision, so maybe violations just haven’t been seen yet. Secondly, it’s a common misconception that discrete and lattice theories must violate symmetries:

    A Noether Theorem for discrete Covariant Mechanics, https://arxiv.org/abs/1902.08997

    A Discrete Analog of General Covariance — Part 2: Despite what you’ve heard, a perfectly Lorentzian lattice theory, https://arxiv.org/abs/2205.07701

    and that his “true” argument against the simulation hypothesis is simply that Elon Musk takes the hypothesis seriously (!).

    I don’t get the Elon hate. Do people seriously discount his intellect simply because he likes shitposting on Twitter?

  70. Mike Says:

    Scott please do do (ugh) more posts on stuff like this! Talking of chiral QFTs … Tong has just released an awesome looking set of notes on the Standard Model!

    http://www.damtp.cam.ac.uk/user/tong/standardmodel.html

  71. K Says:

    Scott #0.
    To me the answer #1 (As long as it remains a metaphysical question, with no empirical consequences for those of us inside the universe, I don’t care.) looks like a double standard. One would assume this means it’s your position on ANY question that has no empirical consequences for those of us inside the universe, not just this particular one.

    On the other hand, you care (and rightly so!) about an awful lot of questions that seem to have no more empirical consequences than the simulation hypothesis. I mean, for example, what real consequences could there be if the value of BB(6) is this or that, or if the value of BB(100) can be proved in ZFC?
    Of course, one could argue that the methods developed for answering these kinds of questions could turn out to be useful for answering other questions as well (perhaps some even with real consequences), but wouldn’t the same be true if we could prove that we are or aren’t in a simulation (or at least deduce some interesting consequences for either case)?

    Wouldn’t it be FUN to know that we are at most in the 27th level of simulation? Or that, say, it is (much) more probable that we are in the 4th level than in the 3rd? Or that the higher levels need not always have larger entropy (or some other measure) than the lower ones? Or that at the top level, things with high probability evolved by natural selection? Or any other nontrivial fact that has no obvious immediate empirical consequences for us? Not saying that we are anywhere near answering these.

  72. Scott Says:

    Sandro #69: I don’t claim that a theory of discrete spacetime would necessarily violate known symmetries, just that the obvious ways of implementing the idea do. And of course, the more success one can have getting all the required symmetries out of a discrete model, the harder it is to extract any testable predictions of the discreteness hypothesis.

    As for Musk, as you’re surely aware, the current narrative is that he’s a human embodiment of everything wrong with the world—a selfish, arrogant, unfunny superbillionaire who comes from a family of apartheid racists and who oppresses his workers, forces them to sleep in the office, and fires them in temper tantrums and who sires a bunch of kids with his female employees and who promotes technical quick-fixes to problems like climate change requiring social transformation and who shitposts offensive things to “own the libs” and who’s somehow antisemitic and pro-Zionist at the same time and who’s unleashed the hounds of misinformation-hell on Twitter and thereby endangered the world.

    Regardless of what percentage of these charges are really justified—certainly more than 0% and less than 100%—this narrative has made jokes about the badness of Elon as standard, popular, and non-eyebrow-raising in academia as jokes about the badness of Trump.

  73. Scott Says:

    K #71: Note that, if we ever learned whether the value of BB(100) was or wasn’t independent of ZFC, we’d learn it via a proof and/or computation that fit inside the universe and that could presumably be digested and understood by beings inside the universe. For example, an independence proof might explicitly construct a 100-state TM with Gödelian behavior.

    Yes, if there were similarly a convincing scientific or mathematical argument, verifiable within this universe, that we did or didn’t live in a simulation, I would be exceedingly interested in that argument! But what breaks your analogy with BB is that I don’t understand what such an argument could look like even in principle, or what we could possibly do to make progress toward one.

  74. Mitchell Porter Says:

    Partly inspired by Nico #62:

    Not sure what to think about Tong’s attempt to extract philosophical significance from all this. Theorems like Nielsen-Ninomiya are about one stage in the process of calculation, regularization. This is where you “cut off” part of the theory, e.g. if you use a lattice, you are cutting off high frequency behaviors that involve details less than a lattice spacing. Then in renormalization, you add something (e.g. “counterterms”) which can be adjusted to cancel out the effects of regularization. The result is a renormalized theory whose predictions are independent of the cutoff.
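
    (To make the regularization step concrete with the standard textbook toy example, not anything specific to Nielsen-Ninomiya: putting a 1D fermion on a lattice with the naive symmetric difference turns the continuum dispersion E(k) = k into E(k) = sin(ka)/a, which vanishes at k = ±π/a as well as at k = 0. That extra low-energy zero is the “doubler,” and it survives no matter how small you make a.)

        import numpy as np

        # Naive 1D lattice fermion: the symmetric-difference derivative gives
        # the dispersion E(k) = sin(k*a)/a instead of the continuum E(k) = k.
        a = 1.0                                    # lattice spacing
        k = np.linspace(-np.pi / a, np.pi / a, 9)  # momenta in the Brillouin zone

        for ki, El in zip(k, np.sin(k * a) / a):
            print(f"k = {ki:+.3f}   continuum E = {ki:+.3f}   lattice E = {El:+.3f}")

        # The lattice dispersion vanishes at k = 0 *and* at k = +/- pi/a:
        # an unwanted second low-energy mode -- the "doubler."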

    The issue at stake in Nielsen-Ninomiya etc is the behavior of symmetries in the regularized theory. A slightly more general discussion can be found in https://arxiv.org/abs/1306.3992 (which leads via https://arxiv.org/abs/1905.08943 to recent work on “generalized symmetries”). So OK, it’s a relevant topic in QFT calculation.

    But Tong is hinting that it might have *ontological* significance, e.g. that it suggests that reality isn’t “discrete”. I guess I have trouble with this idea, given the unfinished mathematical foundations of gauge theory in general. Recall that the Millennium Prize is for defining pure gauge theory (no fermions) and then proving a mass gap (i.e. confinement). In the part where you’re defining gauge theory rigorously, you want to do something like showing that a family of regularizations of a gauge theory, and their continuum limit, all make sense in terms of a single Hilbert space. I think this has been done for some theories in lower dimensions, but not in 4 dimensions.

    My perspective is that if even a *non-chiral* theory like “QCD with vector fermions” hasn’t been rigorously grounded in a single Hilbert space, it’s a bit early to deduce very much from the technical problems of regularizing chiral gauge theory. I like visionary speculation, but let’s be aware of how many layers of speculation are involved… But I remain open to hearing any informed defense of Tong’s views.

    (To put it more concisely: are the difficulties of regularizing chiral gauge theory, any better evidence against discrete physics, than the lack of continuous rotational symmetry in a discrete grid?)

  75. Scott Says:

    Mitchell Porter #74: I endorse every word.

  76. matt Says:

    Simulation proponents like to argue that we have gotten better over time at simulating the physical universe, so surely at some point we will be simulating other life forms and eventually the simulated life will outnumber us. However, Aharonov’s “no fast forwarding” arguments imply some lower bound on the complexity of actually simulating a physical, quantum-mechanical universe. So, if others are simulating us, then either BQP=PSPACE, or somehow we are governed by some special non-generic Hamiltonian, or they are cheating and not simulating full quantum mechanics for us, or the resources they are using to simulate us are such that their simulation takes them roughly 1 Earth’s worth of resources running for 1 day to simulate the Earth for a day (ok, maybe they save an O(1) or even polynomial factor), or what?
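
    (Tangential to the no-fast-forwarding bound, but a quick back-of-the-envelope for why “not simulating full quantum mechanics” would be the tempting shortcut: just *storing* a generic n-qubit state classically takes about 2^n complex amplitudes.)

        # ~16 bytes per double-precision complex amplitude, 2**n amplitudes.
        for n in (30, 50, 100, 300):
            print(f"n = {n:3d} qubits -> {16 * 2**n:.3e} bytes")
        # By n ~ 300 this already dwarfs the ~10**80 atoms in the observable
        # universe, even at one byte per atom.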

  77. E.G. Daylight Says:

    Scott writes (#73): “Yes, if there were similarly a convincing scientific or mathematical argument, verifiable within this universe, that we did or didn’t live in a simulation, I would be exceedingly interested in that argument!”

    In my understanding, Alan Turing almost gave such an argument, from the perspective of mathematical monism. For him, the ‘real world,’ comprising tangible computers, unfolded as a seamless continuum. In contrast, the Turing machine served as a digital abstraction — a conceptual representation or approximation — of this physical reality. That digital abstraction was useful in connection with the Hilbert program, not per se with regard to studying physics in itself. In this regard, Turing’s reception of Gödel 1931 is intriguing: https://www.dijkstrascry.com/lecture3

    The term “physical Church-Turing thesis” is a neo-Russellian expression, diverging significantly from the perspectives that Church and Turing held regarding the interplay between reality and mathematics. The term “Church-Turing thesis” says something about Church and the Princeton school, not Turing. All this, no doubt, in my very limited understanding of the sources.

    Carl Petri’s advocacy for digital physics and Turing machines (1962) aligns more closely with the contents of this blog post. However, he eschewed actual infinity in his mathematical approach to faithfully model physics. His account notably lacks consideration for concepts such as asymptotic complexity and undecidability. So here, we find another way to mathematize some of the philosophical views held by some/many of us. (Personally, I appreciate exploring a diverse range of intellectual viewpoints.)

  78. Scott Says:

    E. G. Daylight #77: I’m genuinely grateful for the erudition, but I’m still trying to piece together the argument that you say Turing “almost” gave, presumably for why we can’t be living in a simulation.

    Even if the best laws of physics that we can possibly discover involve a genuine continuum, that still wouldn’t settle the question of whether that continuum can be digitally simulated so well that no one in the simulated world could ever tell the difference. And moreover, Turing would surely have understood that.

  79. Scott Says:

    matt #76: Yeah, while this is orthogonal to what I wanted to discuss in the post, I could’ve added that I’ve never been a fan of the argument for the Simulation Hypothesis that appeals to how many simulations our descendants are likely to create.

    This argument, as you point out, has always had a strong aspect of “sawing off the branch it sits on.” Like, presumably our descendants’ simulations will be of a universe less complicated than this one—which suggests that if we’re being simulated, then it’s by a more complicated universe than the one we see, and therefore not by our own descendants.

    And yes, there are many possible ways around that conclusion, but they’re all confusing, which means that the simplicity of the original intuition for why “most people who will ever exist are sims” has been undermined.

  80. James Cross Says:

    Interestingly, there are a number of neurological syndromes where an individual believes there is a lack of reality to something.

    With the Capgras delusion, an individual believes someone they know has been replaced by an imposter. In a variation, a person believes a location has been duplicated.

    More directly related is the Truman Show delusion, in which a person believes they are living in a reality show.

    https://en.wikipedia.org/wiki/Truman_Show_delusion

    These syndromes are a sign of brain damage or early schizophrenia.

    But it does point out how much our neurology controls our sense of reality.

    Perhaps a strong belief that we are living in a simulation is simply a new variation. Maybe we could call it the Elon Musk syndrome; it frequently includes the belief that whatever we do doesn’t matter because it’s all a simulation anyway. Or, maybe whatever we do only matters to the simulators, so let’s try to impress them as much as possible.

  81. fred Says:

    Turns out Elon Musk ain’t John Galt after all.
    I’m gutted.

  82. James Cross Says:

    Just when I thought we had retired the extra-dimension discussion, Quanta Magazine comes out with an article about physicists proposing two extra dimensions. One, I assume, is a super-small Kaluza-Klein dimension, but the other is relatively large and is proposed as a container for dark matter.

    https://www.quantamagazine.org/in-a-dark-dimension-physicists-search-for-missing-matter-20240201/

    Didn’t mention this earlier, but some research on neural patterns found that, when the temporal firing patterns are mapped, geometrical forms emerge in up to seven dimensions. These are not actual physical dimensions but mathematical dimensions required to express the complexity of the patterns.

  83. fred Says:

    Alan Watts on the nature of reality

  84. Job Says:

    The brain is basically an organic simulation platform.

    I mean, it’s plausible that evolution figured out that simulating an entity within a modeled universe is useful and has been optimizing for a strong sense of self and immersion.

    I guess it’s not a simulation one enters voluntarily, but how could it be if there is no prior self to begin with? It has to be manufactured first.

    If we’re in a simulation, isn’t it more likely to be an organic rather than a synthetic one? The former is of a practical, essential nature; the latter is theatrical and likely sourced from an organic simulation to begin with.

  85. John Lawrence Aspden Says:

    So nice to see you writing light-hearted philosophy again!

    I’m not even sure that the simulator could hurt us by turning off the simulation or even modifying it to make the sun go nova or something like that.

    It seems to me that the r-pentomino and its evolution in some sense exists whether or not anyone is currently running a game-of-life with that starting state. It’s something fundamental in the structure of maths itself, and I’d expect aliens to have found it and thought about it.
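
    (For concreteness, here’s the whole thing: the standard Life rule plus those five starting cells. Nothing here is specific to my argument; it’s just to show how little data the r-pentomino is.)

        from collections import Counter

        def step(cells):
            """One Game of Life generation; cells is a set of live (x, y) pairs."""
            counts = Counter((x + dx, y + dy)
                             for (x, y) in cells
                             for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                             if (dx, dy) != (0, 0))
            return {c for c, n in counts.items()
                    if n == 3 or (n == 2 and c in cells)}

        # The r-pentomino: five cells that churn for ~1103 generations before settling.
        cells = {(1, 0), (2, 0), (0, 1), (1, 1), (1, 2)}
        for gen in range(6):
            print(f"generation {gen}: {len(cells)} live cells")
            cells = step(cells)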

    And we know that the life universe can support computation.

    So there are presumably games-of-life that contain thinking beings.

    Do those beings only exist if simulated? If I run two separate simulations of one does that make it more real? If I turn one off, or pause it, does the being cease to exist?

    There’s a program which runs all programs. Is that what the universe is?

    A totally unfalsifiable idea. It makes no predictions. And yet I find it intriguing, as, I imagine, do some of the other thinking beings embedded in the structure of mathematics.

  86. mls Says:

    Clint #42 #64

    Everything in discourse has presuppositions. So, too, does computing. I would not compare grid cells to computation in the manner you seem to pursue. I believe you will find the elementary system of Spekkens toy model,

    https://en.m.wikipedia.org/wiki/Spekkens_toy_model#Elementary_systems

    more fruitful.

    The article will speak of the fourfold configuration as minimally useful for a quantum-like system. Relative to this are six questions and an existence criterion.

    The smallest affine geometry has 4 points. It has 6 lines. The questions may be seen as line labels.

    It may be compared with a labeled tetrahedron. The six questions, then, may be seen as edge labels. The three question pairs which are orthogonal to one another may be compared with skew edges. There are reasons for comparison with tetrahedra. One comes from Bachmann’s investigation of tetrahedra and hexagons (Fundamentals of Mathematics, Vol 2). Hexagons are the midpoint figures of tetrahedra. Bachmann provides a classification of hexagons. One, the AC type, relates to 4-dimensionality.

    In contrast with logicist excluded middle using unary negation and inclusive disjunction, Spekkens questions rely upon exclusive disjunction. As every AI researcher knows, exclusive disjunction is not linearly separable. For any particular problem domain, the XOR problem can be solved. Generally, however, a linearly inseparable switching function can only be approximated by linearly separable functions. Exactness would presume “digital infinity.”
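
    (To make the separability point concrete, here is the standard toy construction, nothing specific to Spekkens: no single linear threshold gate computes XOR on {0,1}^2, but two hidden threshold gates feeding a third do, which is the “can be solved for a particular problem domain” part.)

        import itertools

        def threshold(weights, bias, inputs):
            """A single linear threshold gate: fires iff w.x + b > 0."""
            return int(sum(w * x for w, x in zip(weights, inputs)) + bias > 0)

        def xor_via_hidden_layer(x, y):
            """XOR as AND(OR(x, y), NAND(x, y)): a one-hidden-layer circuit."""
            h_or = threshold((1, 1), -0.5, (x, y))      # fires if x or y
            h_nand = threshold((-1, -1), 1.5, (x, y))   # fires unless both
            return threshold((1, 1), -1.5, (h_or, h_nand))

        for x, y in itertools.product((0, 1), repeat=2):
            print(x, y, "->", xor_via_hidden_layer(x, y))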

    The interest of Spekkens toy model for physicists leads to using a Cartesian product to combine “qubits.” I would suggest something different.

    Spekkens’ “decidability” depends upon 2 questions being answered. If his questions are organized as vertex labels of a hexagon, the associated complete graph has 15 edges. Those edges may be seen as “question pairs.”

    Whereas 2 qubits in Spekkens’ model is studied with a 4×4 grid, the complete graph on 6 points also bears relationship to the 4×4 grid. It is described by Assmus and Salwach,

    https://eudml.org/doc/44829

    The geometry of interest, here, is often called a Kummer configuration. Its continuous manifestation, the Kummer surface, has been of interest in quantum gravity (and quantum encryption, I believe).

    In turn, they will refer to Kibler’s paper on which groups yield

    https://www.sciencedirect.com/science/article/pii/0097316578900316

    One of those groups is “quantified” (abstract groups are instantiated with “quantities”) by the 4-dimensional vector space over the 2-element Galois field.

    Admittedly, 4-vectors of 0’s and 1’s do not provide for switching functions. However, the lambda calculus is similar in some respects to category theory (function-based), and the lambda calculus admits self-application. In Section 3 of the paper,

    http://boole.stanford.edu/pub/PrattParikh.pdf

    Vaughan Pratt describes self-application in systems of Boolean vectors. For the case of

    m=n=2

    you obtain 4096 “axioms” implementing compositionality for the 16 basic Boolean functions.

    Among those functions is exclusive disjunction.
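
    (A small enumeration, just to pin down the counts; I am only guessing at how Pratt arrives at 4096, so treat the last line as numerology rather than his construction.)

        import itertools

        # The 16 Boolean functions of two variables, each identified by its
        # truth table over the inputs (0,0), (0,1), (1,0), (1,1).
        inputs = list(itertools.product((0, 1), repeat=2))
        functions = {bits: dict(zip(inputs, bits))
                     for bits in itertools.product((0, 1), repeat=4)}

        print(len(functions))             # 16
        print((0, 1, 1, 0) in functions)  # True: exclusive disjunction is among them
        print(16 ** 3)                    # 4096 -- ordered triples of such functions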

    I began by pointing out a relationship with labeled tetrahedra precisely because labeled tetrahedra have orientations. Metaphysics may require “true” and “false,” but the metamathematical investigation of logic requires only distinctness.

    Spekkens’ existence criterion actually corresponds to a class of finite geometries called Steiner quadruple systems. The smallest of these consists of four triples over a 4-set. These would be the 3 empty cells for any definite configuration of Spekkens’ existence criterion.

    Do grid cells “compute”? I doubt it.

    Are grid cells related to geometric reasoning? Probably.

    And, for what this is worth, they have been shown to participate with conceptual reasoning in the visual field. So, a less-than-platonic understanding of “mathematics” and “computation” involves a little bit of syntactic “self-booting.” This does not, however, eliminate “digital infinity.” Pratt’s axioms involve self-application, whence there is an infinite regress.

    This translates into an upward increase of recursively generated finite geometries.

    Spekkens’ toy model is not quantum mechanics. But, it can serve to undermine a great deal of bad philosophy.

    Most people who study “logic” are addicted to the word “not” and its linear separability.

    In 1999 Pavicic and Megill demonstrated that truth table semantics for classical logic is not categorical. This has, for the most part, been ignored by academic communities.

    I find it difficult to read these blogs and try not to comment too much simply because I see things so differently. I hope, however, that this gives you a different view of how grid cells might relate to computation.

  87. Lou Scheffer Says:

    If you get rid of the fancy words, the no-simulation conclusion fails high school logic. It shows a particular computational approach X can never reproduce real-world feature Y. From this it concludes that no computational approach can reproduce feature Y.

    In any other field, the (correct) conclusion would be that no-Y is a known limitation of method X.

  88. Jan 2024 – Hermit, Hibernate and Build Something – Dalliance Says:

    […] * Does fermion doubling make the universe not a computer? = Shtetl-Optimized […]

  89. Scott P. Says:

    Lou #87,

    Not at all. Rather, the argument goes that for the simulation hypothesis to be true, we would have to show (among many other things) that Y is computable. Since that proposition has not been demonstrated, at least as of right now, we have failed to reject the null hypothesis.

  90. Clint Says:

    mls #86:

    Hello, thank you for the thoughtful comments! Unfortunately, I do not have the expertise to generate a coherent reply to Spekkens’ model – other than to say, “Cool”. 🙂

    In using the word “computing” to describe what grid cells are (or the brain is) doing in the entorhinal cortex I am relying on the use of that word by neuroscientists (described here in the 2014 Nobel prize award).

    Also, I am relying on the concept of universal computing used by Moore and Mertens: The Nature of Computation. This concept allows for a very broad range of devices/systems to be “computing”. I own any misunderstanding.

    My education is graduate school computer engineering. So, I think of the brain as somewhat along the lines of programmable hardware like FPGA/FPAA/PSoC devices.

    However, the model of computation in the brain, while resembling “configurable hardware,” does not resemble a classical model – most crucially in the fundamental feature of interference of positive and negative amplitudes (complex numbers) in the dendrites (the computational gates/devices). Nor does it resemble an analog model, because of discrete outcomes. Again, I’ll cite the neuroscientists themselves in discussing the possible role of interference, and here’s an interesting recent paper for a photonic device inspired by the dendritic computational gates.

    Neuroscientists have known for a long time that the brain encodes information as positive or negative complex numbers (amplitudes) at the inputs to the dendritic gates, not as classical bits. Also, see Koch for many examples of sophisticated computational gates realized by dendrites – not just (as Koch says) “moronic” threshold sum operators.

    Thank you again!

  91. nihao Says:

    Comment seems to have been deleted, so I’ll write it again.

    A few years ago, I asked you a question about the effects of cosmic radiation on QC. Thank you for your answer at that time.

    I have another question for you now. A new problem has been raised that could be critical for quantum computers. I would like to ask you if this is a serious problem.

    problem

    quote

    While the issue isn’t exactly pressing, our ability to grow systems based on quantum operations from backroom prototypes into practical number-crunching behemoths will depend on how well we can reliably dissect the days into ever finer portions. This is a feat the researchers say will become increasingly more challenging.

    Whether you’re counting the seconds with whispers of Mississippi or dividing them up with the pendulum-swing of an electron in atomic confinement, the measure of time is bound by the limits of physics itself.

    One of these limits involves the resolution with which time can be split. Measures of any event shorter than 5.39 × 10^-44 seconds, for example, run afoul of theories on the basic functions of the Universe. They just don’t make any sense, in other words.

    Yet even before we get to that hard line in the sands of time, physicists think there is a toll to be paid that could prevent us from continuing to measure ever smaller units.

    Sooner or later, every clock winds down. The pendulum slows, the battery dies, the atomic laser needs resetting. This isn’t merely an engineering challenge – the march of time itself is a feature of the Universe’s progress from a highly ordered state to an entangled, chaotic mess in what is known as entropy.

    “Time measurement always has to do with entropy,” says senior author Marcus Huber, a systems engineer who leads a research group in the intersection of Quantum Information and Quantum Thermodynamics at the Vienna University of Technology.

    In their recently published theorem, Huber and his team lay out the logic that connects entropy as a thermodynamic phenomenon with resolution, demonstrating that unless you’ve got infinite energy at your fingertips, your fast-ticking clock will eventually run into precision problems.

    Or as the study’s first author, theoretical physicist Florian Meier puts it, “That means: Either the clock works quickly or it works precisely – both are not possible at the same time.”

    This might not be a major problem if you want to count out seconds that won’t deviate over the lifetime of our Universe. But for technologies like quantum computing, which rely on the temperamental nature of particles hovering on the edge of existence, timing is everything.

  92. Scott Says:

    nihao #91: No, I don’t think that the precision of clocks per se is a major bottleneck for QC. As you point out, we already have atomic clocks that would barely be off after a billion years, which seems more than good enough. Having said that, there are closely-related engineering issues that are extremely pertinent: in optical QC, for example, building single photon sources that each generate one photon deterministically and at the same time remains one of the central challenges.

  93. Shtetl-Optimized » Blog Archive » On whether we’re living in a simulation Says:

    […] The Blog of Scott Aaronson If you take nothing else from this blog: quantum computers won't solve hard problems instantly by just trying all solutions in parallel. And also: deliberately gunning down Jewish (or any) children is wrong. « Does fermion doubling make the universe not a computer? […]

  94. mls Says:

    clint #90

    Thank you for these links. The Moore and Mertens book comes highly recommended by some guy named Scott Aaronson. A must read? No doubt. I will have to look into it.

    Undoubtedly the meaning of the word “computation” has changed over a century. Computation in the sense of the Church-Turing thesis is syntactic. (Although, the neuroscientists also reference Turing’s interest in “artificial life.” So, the meaning has always been vague, then.) And, it is this syntactic origin that leads to fundamental questions in mathematics.

    On that count, the introduction to “The Hippocampus as a Cognitive Map,”

    https://repository.arizona.edu/handle/10150/620894

    will provide a particularly lucid history of what kind of issues arise in that domain.

    This speaks to the nature of “science” itself.

    That is one reason I had emphasized that Spekkens’ model is not quantum mechanics. It is often the case that my questions are perceived as anti-science or postmodernism. However, as sections of that introduction indicate, it is quite plausible that cognition corresponds with a Kantian perspective on mathematics. In turn, however, these views have been terribly ostracized by a philosophical community fully intending to declare “science” as truth.

    It would be reasonable to characterize science in terms of abductive reasoning conditioned by non-monotonic defeasibility. But, abduction inverts the directionality of logical entailment. That is, scientists make guesses about causes. Guesses are guesses — they are not the “revealed knowledge” written at the site of burning bushes.

    So, the “correctness” of science must have an intrinsic circularity where the deep time of evolutionary biology meets physics. When Popper initially introduced “falsifiability” he knew it was a “logical principle.” And, in view of current scientific practices, it has not stood the test of time. Initially, Popper had denied evolutionary biology as being “science.”

    What mathematicians have been studying in relation to arguments over what may or may not be true are the morass of opinions from people who use mathematics. Truth is incredibly problematic.

    Scientists and engineers, however, do not need these details. The fact that there is no general solution for a quintic polynomial is unimportant when methods from numerical analysis are sufficient. That XOR is not linearly separable is unimportant when one can introduce hidden layers for a particular application.

    An important aspect of physics is that it introduces a language for “energy” in relation to “actions.” This is essential for describing witnessed phenomena that can be measured. An epsilon-delta proof does not require a “real number.” By proving that no interlocutor can produce a counterexample, a theory depending on measurement is defensible.

    It is when guesses go beyond what can be measured that physics finds itself confronted with its dependence on algebra.

    When neuroscientists speak of “computation,” I think this is influenced by a platonism generally found in mathematics. The spiking of neurons is an electrical signal. The “energy” language of physics applies — as does all of the attendant algebra.

    So, how shall we think of this without undermining physics with social constructivism?

    Currently, grid cells are being modeled with the idea of chaotic attractors. In so far as that is stable, it coincides with “a priori” form spoken of in the foundations of mathematics. The demands for finiteness which led to Hilbert’s metamathematics may relate to this cognitive structure through a 16-set organized as a finite geometry.

    For what this is worth, by messing with the 4×4 array in Assmus and Salwach, you will find that the 6-sets extend to 7-sets using a toroidal rook’s graph. Each symbol connects to 6 others. As for thinking about Boolean vectors situated in such an array, there is an entire site devoted to fascinating symmetries,

    http://finitegeometry.org/sc/16/geometry.html

    All one needs to get switching functions (syntactically) is Pratt’s method.

    Since you mentioned educational background, I have an undergraduate degree in mathematics. Unfortunately, life prevented completion of higher goals. I have studied the continuum hypothesis for 35 years. If one wishes to adopt a particular paradigm, one can say that this problem is resolved. Otherwise, one must learn to navigate a plethora of paradigms claiming to demarcate what mathematics may or may not be.

    I have studied these problems by writing axiom systems to sort out the different paradigms. Unfortunately, this has left me somewhat deficient in methods which take the real number system for granted.

    One thing you might consider is that the theory of real closed fields is decidable (Tarski). However, by Richardson’s theorem,

    https://en.m.wikipedia.org/wiki/Richardson%27s_theorem

    you cannot extend real closed fields arbitrarily with trigonometric functions. Thinking about the continuum hypothesis across paradigms is essentially looking for undecidability wherever it may appear. This is one reason for my particular interest in linear inseparability as a limitation on Boolean polynomials.

    Again, thank you for your links!

  95. Godric Says:

    Scott, thanks for your post on this. It revealed to me the full difficulty of what physicists who are trying to simulate physics on quantum computers are struggling with, and why you were dismissive to me a few months ago when I asked some of these questions from what is now clear was a naive viewpoint.

    I think that Lenny Susskind and his fellow black-holenauts are slowly learning theoretical computer science, but have probably gone a bit overboard for these exact same simulation reasons. A QC simulated black hole has a relationship to a real black hole that is not obvious, and will be a fruitful and growing point of contact between physics and theoretical computer science.

  96. nihao Says:

    Scott #92
    Thank you, Scott.
    Does this mean this could be bad news for optical quantum computers?

  97. On whether we’re residing in a simulation – TOP HACKER™ Says:

    […] the heels of my post on the fermion doubling disaster, I’m sorry to exhaust even more time on the simulation hypothesis. I promise this might […]

  98. Douglas Knight Says:

    Scott 53:
    Don’t we know that the limit does not exist for most QFTs, e.g., QED or Higgs?

    What is the difference between your two bullet points? Isn’t small lattice spacing the same as high energy?

    Can’t we say something stronger, not just that lattice QED fails, but that no theory with the asymptotic expansion of QED can exist?

    (I’m largely repeating what Ajit said, but I’m not sure that an effective field theory has to be consistent.)

  99. mls Says:

    clint #90

    Just thought I would let you know that I have purchased the Moore and Mertens book. Thanks.

    I also looked up the “artificial life” musings by Turing in Copeland’s book. Apparently, he began thinking about such things after beginning work on electronic implementations of his symbol manipulations.

    With regard to chaotic attractors, there is significant mathematical complexity related to topology and fixed point theorems. But, when reduced to something more amenable to syntactic methods, one gets something closely related to the theory of recursive functions. This can be seen in the Wikipedia link,

    https://en.m.wikipedia.org/wiki/Logistic_map
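
    A minimal sketch of that map, just to make the recursion concrete (parameters chosen by me, nothing special about them):

        # The logistic map x -> r*x*(1-x). At r = 4 two nearby starting points
        # diverge within a few dozen steps (sensitive dependence).
        def orbit(x0, r, steps):
            xs = [x0]
            for _ in range(steps):
                xs.append(r * xs[-1] * (1 - xs[-1]))
            return xs

        for n, (xa, xb) in enumerate(zip(orbit(0.200000, 4.0, 20),
                                         orbit(0.200001, 4.0, 20))):
            print(n, round(xa, 6), round(xb, 6))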

    Recursion does not work like first-order logic. Each current value is “ontologically dependent” upon prior values. The stipulation of logical domains as collections of individuals arises from the idea that “truth” is “ontologically dependent” on individuals.

    This arises in the distinction between “semantic foundations” and “combinatorial foundations” (proof theory) with Herbrand logic,

    https://www.cs.uic.edu/~hinrichs/herbrand/html/index.html

    Herbrand logic abandons naive conceptions of truth, whence semantics reduces to whatever an interlocutor says his words mean. Lewis Carroll made fun of this with Humpty Dumpty in “Through the Looking-Glass.”

    The syntax of first-order logic can be reduced in the sense of Backus-Naur form precisely because it is applied to Herbrand logic.

    I have read that Tarski had been a nominalist. I do not know if that is the case. When he algebraized first-order logic, his transitivity axiom for equality had taken on an interesting form,

    AxAy( x=y <-> Ez( x=z /\ z=y ) )

    This can be seen in the second definition for cylindric algebra,

    https://en.m.wikipedia.org/wiki/Cylindric_algebra#Definition_of_a_cylindric_algebra

    As can be seen, the variables ‘x’ and ‘y’ occur on both sides of the biconditional. If one constructs a free algebra over this syntax, one will have to treat each generation of symbols as “distinct objects.” So, equality is “realized” in the sense that every “individual” is the root of a complete binary tree.

    And, the generation of the free algebra is comparable to the method of iteration underlying chaos theory and logical maps.

    One reason I do not accept “the philosophy of formal systems” as a foundation for mathematics is because there is no algebraic proof of the fundamental theorem of algebra. That theorem “proves” the complex numbers to be an algebraically closed field. The relationship between complex numbers and trigonometry speaks to the importance of Richardson’s theorem.

    It would appear — and, it is my opinion — that “mathematics” cannot be subsumed by “algebra” or the linguistic analyses which form the practice of formalizing natural language with “encodings.”

    Even Turing had acknowledged that computational models would depend upon analysis to the extent that electronic computing relies upon finite resources.

    With regard to grid cells, physics becomes applicable to the biology because cell walls introduce a measurable voltage to sustain the biochemistry of life within the cell. Consequently, the mathematics of signal flow graphs and Shannon’s theory of communication become applicable.

    With regard to Tarski’s axiom, one need only formulate inference rules respecting Markov’s constructive mathematics — Markov introduces “strengthened implication” interpreted according to “givenness.” The first step is to treat the universal quantifier as a restricted quantifier whose restriction is based upon reflexive equality statements.

    Ax( phi(x) )

    becomes

    a=a -> phi(a)

    I know this can be done because I have done it and used it to formulate “counting arithmetic” in the sense of Thoralf Skolem.

    So, the same kind of thing happens with Dr. Pratt’s use of category theory to generate axioms for Boolean spaces of dimension n. As with the free algebra method applied to Tarski’s axiom, the Boolean systems will get “crazy big,” “crazy fast,” and infinitely deep (digital infinity).

    This is all “the same thing” as chaotic attractors, but within the kind of syntactic domain studied by formal systems.

    Anyway, as I said before, there is a reason I should not post to blogs. Thanks again for the links.

    And, thank you to Dr. Aaronson for his wonderful blog.

  100. Stam Nicolis Says:

    Fermion doubling does not mean that it isn’t possible to describe chiral fermions beyond perturbation theory; it just means that the regularization doesn’t respect the symmetries of the theory, which raises the computational cost, since it means keeping track of more terms than would be necessary if the regularization respected the symmetries of the theory. It doesn’t mean that it’s not known how to deal with these terms in principle. It’s necessary to distinguish issues of principle from issues of practice.

    Regularization of a field theory is a means to an end, not an end in itself: what has physical significance is the theory in the “scaling limit,” which is independent of the regularization.

  101. Gustaf Says:

    Maybe I misunderstand something, and I admit that I haven’t read all the comments, but surely there is no reason to believe that someone simulating our reality would be bound by our physical laws? The opposite would seem far more likely.

  102. David Kaplan Says:

    Hi Scott — your reference to chiral symmetry getting better and better as the surfaces of an extra dimension get further apart is a reference to an old idea from the 1990s that works very well for approximating global chiral symmetries in theories describing the electromagnetic or strong interactions. However, it isn’t sufficient for the Standard Model to be computable, which requires an exact chiral symmetry that one can gauge on a finite-sized extra dimension. A recent proposal has suggested that this too is possible when the extra-dimensional theory has a single connected boundary, such as a solid 5d cylinder or ball (https://arxiv.org/abs/2312.01494, to appear in Phys. Rev. Lett.). It is too early to claim that this proposal works, but it looks promising. While the extra dimension is being added as a computational trick, the fact that it is the only trick that seems to work to compute properties of chiral fermions, and that chirality is central to how our world works, makes you wonder whether we shouldn’t just take the hint that that is actually how the universe works. (And to some of the commenters: it is not like Kaluza-Klein, where the extra dimension is rolled up tight — instead we are creatures inhabiting its surface.) As a fun intellectual connection — these theories are closely related to the topological insulators condensed matter physicists now create and study in the lab routinely (although not in 5d!).

  103. Tyler Says:

    I’m wondering if this problem is at all related to approximating the infinite-dimensional (anti-)commutation relations in finite dimensions,

    Stone-Von Neumann Theorem

    In that same article they discuss the finite-dimensional matrices that satisfy a version of the commutation relation. They use a “braiding relation” where swapping the order results in multiplication by a global phase that is proportional to the canonical commutation relations.
    I know global phases are irrelevant quantum mechanically, but is there some way to treat this as a kind of “relative phase” that would be useful for a quantum simulation? Is this at all related to fermion doubling?
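
    (For what it’s worth, the finite-dimensional “braiding” pair is easy to write down explicitly; this is the standard clock-and-shift construction, and I don’t know how directly it bears on fermion doubling.)

        import numpy as np

        d = 4
        omega = np.exp(2j * np.pi / d)

        clock = np.diag([omega**j for j in range(d)])   # "Z": diagonal phases
        shift = np.roll(np.eye(d), 1, axis=0)           # "X": cyclic shift

        # They don't commute; instead Z X = omega * X Z (a "braiding" up to a phase).
        print(np.allclose(clock @ shift, omega * (shift @ clock)))   # True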

  104. Rob Says:

    I think Pascal #59 can only say that if he is making substantial assumptions in philosophy of mind, and then for only a subset of simulation hypotheses. Let’s get the non-included subset out of the way first. On an avatar model of a simulation, of course, we can have player characters whose conscious experience is not wholly accounted for in terms of what is simulated. They would, so to speak, bring consciousness with them into the simulation. That would leave room for wholly simulated non-player characters (NPCs) that would presumably not be unethical to shoot up. There would, however, be the puzzle, even on an avatar model, of how much ethical concern to extend to others when we can’t tell which characters in the simulation are avatars and which are NPCs.

    On a Sims model, where beings in the simulation are wholly simulated, Pascal has to be assuming that the requirements for having real experiences or qualia are either extremely low (so even what we would call NPCs in a contemporary video game are already conscious — and we should worry about them getting shot in first-person-shooter games) or such that they cannot be realized at all within a simulation. An obvious alternative is that some (presumably high) level of functioning within a simulation is sufficient for real consciousness. That alternative could underwrite both of two plausible ideas: that we (whether in a simulation or not) can recognize other conscious beings, subject to fuzzy boundaries about which we don’t have to be completely sure, and that wherever the boundaries are, NPCs in contemporary video games don’t have the kind/level of functioning needed to support conscious experience.
