On whether we’re living in a simulation

Unrelated Announcement (Feb. 7): Huge congratulations to longtime friend-of-the-blog John Preskill for winning the 2024 John Stewart Bell Prize for research on fundamental issues in quantum mechanics!


On the heels of my post on the fermion doubling problem, I’m sorry to spend even more time on the simulation hypothesis. I promise this will be the last for a long time.

Last week, I attended a philosophy-of-mind conference called MindFest at Florida Atlantic University, where I talked to Stuart Hameroff (Roger Penrose’s collaborator on the “Orch-OR” theory of microtubule consciousness) and many others of diverse points of view, and also gave a talk on “The Problem of Human Specialness in the Age of AI,” for which I’ll share a transcript soon.

Oh: and I participated in a panel with the philosopher David Chalmers about … wait for it … whether we’re living in a simulation. I’ll link to a video of the panel if and when it’s available. In the meantime, I thought I’d share my brief prepared remarks before the panel, despite the strong overlap with my previous post. Enjoy!


When someone asks me whether I believe I’m living in a computer simulation—as, for some reason, they do every month or so—I answer them with a question:

Do you mean, am I being simulated in some way that I could hope to learn more about by examining actual facts of the empirical world?

If the answer is no—that I should expect never to be able to tell the difference even in principle—then my answer is: look, I have a lot to worry about in life. Maybe I’ll add this as #4,385 on the worry list.

If they say, maybe you should live your life differently, just from knowing that you might be in a simulation, I respond: I can’t quite put my finger on it, but I have a vague feeling that this discussion predates the 80 or so years we’ve had digital computers! Why not just join the theologians in that earlier discussion, rather than pretending that this is something distinctive about computers? Is it relevantly different here if you’re being dreamed in the mind of God or being executed in Python? OK, maybe you’d prefer that the world was created by a loving Father or Mother, rather than some nerdy transdimensional adolescent trying to impress the other kids in programming club. But if that’s the worry, why are you talking to a computer scientist? Go talk to David Hume or something.

But suppose instead the answer is yes, we can hope for evidence. In that case, I reply: out with it! What is the empirical evidence that bears on this question?

If we were all to see the Windows Blue Screen of Death plastered across the sky—or if I were to hear a voice from the burning bush, saying “go forth, Scott, and free your fellow quantum computing researchers from their bondage”—of course I’d need to update on that. I’m not betting on those events.

Short of that—well, you can look at existing physical theories, like general relativity or quantum field theories, and ask how hard they are to simulate on a computer. You can actually make progress on such questions. Indeed, I recently blogged about one such question, which has to do with “chiral” Quantum Field Theories (those that distinguish left-handed from right-handed), including the Standard Model of elementary particles. It turns out that, when you try to put these theories on a lattice in order to simulate them computationally, you get an extra symmetry that you don’t want. There’s progress on how to get around this problem, including simulating a higher-dimensional theory that contains the chiral QFT you want on its boundaries. But, OK, maybe all this only tells us about simulating currently-known physical theories—rather than the ultimate theory, which a-priori might be easier or harder to simulate than currently-known theories.

Eventually we want to know: can the final theory, of quantum gravity or whatever, be simulated on a computer—at least probabilistically, to any desired accuracy, given complete knowledge of the initial state, yadda yadda? In other words, is the Physical Church-Turing Thesis true? This, to me, is close to the outer limit of the sorts of questions that we could hope to answer scientifically.

My personal belief is that the deepest things we’ve learned about quantum gravity—including about the Planck scale, and the Bekenstein bound from black-hole thermodynamics, and AdS/CFT—all militate toward the view that the answer is “yes,” that in some sense (which needs to be spelled out carefully!) the physical universe really is a giant Turing machine.

Now, Stuart Hameroff (who we just heard from this morning) and Roger Penrose believe that’s wrong. They believe, not only that there’s some uncomputability at the Planck scale, unknown to current physics, but that this uncomputability can somehow affect the microtubules in our neurons, in a way that causes consciousness. I don’t believe them. Stimulating as I find their speculations, I get off their train to Weirdville way before it reaches its final stop.

But as far as the Simulation Hypothesis is concerned, that’s not even the main point. The main point is: suppose for the sake of argument that Penrose and Hameroff were right, and physics were uncomputable. Well, why shouldn’t our universe be simulated by a larger universe that also has uncomputable physics, the same as ours does? What, after all, is the halting problem to God? In other words, while the discovery of uncomputable physics would tell us something profound about the character of any mechanism that could simulate our world, even that wouldn’t answer the question of whether we were living in a simulation or not.

Lastly, what about the famous argument that says, our descendants are likely to have so much computing power that simulating 10^20 humans of the year 2024 is chickenfeed to them. Thus, we should expect that almost all people with the sorts of experiences we have who will ever exist are one of those far-future sims. And thus, presumably, you should expect that you're almost certainly one of the sims.
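The counting step in that argument can be made explicit with a toy calculation. (A sketch only: the 10^20 figure is from the text above; the number of "base-level" humans is an arbitrary illustrative assumption.)

```python
# Toy version of the counting step in the Simulation Argument.
# n_sims comes from the 10^20 figure in the text; n_real is an
# arbitrary illustrative assumption, not a claim about the future.
n_real = 10**10   # assumed "base-level" humans who ever live
n_sims = 10**20   # assumed simulated humans with experiences like ours

# Reasoning purely by self-location among observers like you:
p_sim = n_sims / (n_sims + n_real)
print(p_sim)      # approximately 1 - 10**-10: overwhelmingly close to 1
```

The objection in the next paragraph targets exactly the hidden assumption here: that the simulated and base-level observers can be counted as if they all fit in a universe the size of ours.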

I confess that this argument never felt terribly compelling to me—indeed, it always seemed to have a strong aspect of sawing off the branch it’s sitting on. Like, our distant descendants will surely be able to simulate some impressive universes. But because their simulations will have to run on computers that fit in our universe, presumably the simulated universes will be smaller than ours—in the sense of fewer bits and operations needed to describe them. Similarly, if we’re being simulated, then presumably it’s by a universe bigger than the one we see around us: one with more bits and operations. But in that case, it wouldn’t be our own descendants who were simulating us! It’d be beings in that larger universe.

(Another way to understand the difficulty: in the original Simulation Argument, we quietly assumed a “base-level” reality, of a size matching what the cosmologists of our world see with their telescopes, and then we “looked down” from that base-level reality into imagined realities being simulated in it. But we should also have “looked up.” More generally, we presumably should’ve started with a Bayesian prior over where we might be in some great chain of simulations of simulations of simulations, then updated our prior based on observations. But we don’t have such a prior, or at least I don’t—not least because of the infinities involved!)

Granted, there are all sorts of possible escapes from this objection, assumptions that can make the Simulation Argument work. But these escapes (involving, e.g., our universe being merely a “low-res approximation,” with faraway galaxies not simulated in any great detail) all seem metaphysically confusing. To my mind, the simplicity of the original intuition for why “almost all people who ever exist will be sims” has been undermined.

Anyway, that’s why I don’t spend much of my own time fretting about the Simulation Hypothesis, but just occasionally agree to speak about it in panel discussions!

But I’m eager to hear from David Chalmers, who I’m sure will be vastly more careful and qualified than I’ve been.


In David Chalmers’s response, he quipped that the very lack of empirical consequences that makes something bad as a scientific question, makes it good as a philosophical question—so what I consider a “bug” of the simulation hypothesis debate is, for him, a feature! He then ventured that surely, despite my apparent verificationist tendencies, even I would agree that it’s meaningful to ask whether someone is in a computer simulation or not, even supposing it had no possible empirical consequences for that person. And he offered the following argument: suppose we’re the ones running the simulation. Then from our perspective, it seems clearly meaningful to say that the beings in the simulation are, indeed, in a simulation, even if the beings themselves can never tell. So then, unless I want to be some sort of postmodern relativist and deny the existence of absolute, observer-independent truth, I should admit that the proposition that we’re in a simulation is also objectively meaningful—because it would be meaningful to those simulating us.

My response was that, while I’m not a strict verificationist, if the question of whether we’re in a simulation were to have no empirical consequences whatsoever, then at most I’d concede that the question was “pre-meaningful.” This is a new category I’ve created, for questions that I neither admit as meaningful nor reject as meaningless, but for which I’m willing to hear out someone’s argument for why they mean something—and I’ll need such an argument! Because I already know that the answer is going to look like, “on these philosophical views the question is meaningful, and on those philosophical views it isn’t.” Actual consequences, either for how we should live or for what we should expect to see, are the ways to make a question meaningful to everyone!

Anyway, Chalmers had other interesting points and distinctions, which maybe I’ll follow up on when (as it happens) I visit him at NYU in a month. But I’ll just link to the video when/if it’s available rather than trying to reconstruct what he said from memory.

115 Responses to “On whether we’re living in a simulation”

  1. Ted Says:

    One proposal for empirically determining that we're in a simulation that really never seemed plausible to me was to search experimentally for "glitches in the Matrix" where the simulation breaks down, e.g. in the outcomes of physical processes that are very computationally difficult to simulate.

    Even if we set aside the non-obvious physical assumptions about the computational complexity of simulating the laws of physics: Presumably for any civilization capable of programming up a simulation of the whole universe (or at least our corner of it, or whatever), it would be complete child’s play to run a subroutine to detect whether the simulated humans had found a glitch in the Matrix – and if so, to rewind the tape and fix it. In fact, a decently good version of the “detection” half of that subroutine is arguably almost within the reach of our current computer technology!

    (Of course, this objection fails if the goal of the simulation was to determine how humans would react to discovering that they’re in a simulation.)

  2. Shmi Says:

    > If we were all to see the Windows Blue Screen of Death plastered across the sky—or if I were to hear a voice from the burning bush, saying “go forth, Scott, and free your fellow quantum computing researchers from their bondage”—of course I’d need to update on that. I’m not betting on those events.

    No, just no! If you hear a voice from the burning bush or see the stars arranging themselves into the words "I AM YOUR GOD, BOW BEFORE ME!" you should check yourself into the nearest mental institution for evaluation. If the Great Simulator wanted you to believe you are in a simulation, it would simply tweak your mind to believe it. No convincing necessary.

  3. Scott Says:

    Shmi #2: The Great Simulator works in mysterious ways. Why do with subtle tweaks what you could just as easily do with dramatic miracles?

  4. Clint Says:

    Scott, the below was written in thinking about the post on Fermion Doubling canceling the simulation … but I resisted submitting since I had already commented, the thoughts below were orthogonal to the main thread, and I had (once again) exceeded your Eye-Roll Bound.

    But since you hit reset and expanded the topic space 😛

    I contend there are two Simulation Hypotheses we should be considering:

    (1) The EXTERNAL simulation hypothesis: The UNIVERSE is a computational "reality simulator". Scott, I am with you and others (above and in the Fermion thread) in saying … I DON'T CARE. This hypothesis poses NO KNOWN THREAT to empirical science … there are NO empirical consequences/evidence. It might as well, then, NOT be the case. (Cheers to your "pre-meaningful" definition) So, keep calm and carry on.

    (2) The INTERNAL simulation hypothesis: The BRAIN is a computational “reality simulator”. Here the consensus and evidence is ENTIRELY the opposite – most neuro/cognitive scientists would find the statement “The brain simulates reality.” to be entirely empirical, acceptable, non-controversial … “We’ve known this for some time now …” kind of thing.

    These threads are considering hypothesis (1). But … (2) should also be discussed – specifically for any possible threat to empirical science. Obviously, we’ve known for some time now that the human brain poses challenges to conducting empirical science. But I’m arguing that this is more than just that the brain makes us subject to sensory illusions, and cultural/social, or cognitive biases (the usual banes of science) … We have to ask the question down to bounds/implications on the class/model of (neural) computation itself.

    So, how could (2) be a threat because of the class/model running the self-simulation? Another "we've known this for some time now" (thanks to QM) is that reality doesn't exist until YOU measure it.

    If you roll your eyes at the possibility of a QC between your ears … consider "AGI realized on a QC". Implement full quantum AGI and, for the sake of argument, suppose the QC AGI is exactly equivalent to human GI – whatever that is …

    The concern has two parts:

    (A) Reality depends upon what an observer determines to measure. This is of course contextuality/Bell/Kochen-Specker. Starter question: Where EXACTLY is the measurement context specified? Or, that is, where EXACTLY is the complete set of commuting operators encoded? If they are “out there” in Nature or “in the simulation memory beyond the universal veil” … then nobody can point to them. But if they are encoded in the brain (or an AGI in a QC) then … well, that determines what will be measured. The final question on this part: Is it possible for a QC to determine a “reality” that would NOT be quantum contextual? Again, totally forget objecting to “the brain is a QC” … if you like substitute that we’ve built AGI QCs … Would it be possible for an AGI QC to determine/experience/know a “reality” that would NOT be (fundamentally) quantum contextual?

    (B) The fundamental "stuff" of reality is amplitudes (complex numbers). Starter question: Where EXACTLY is Nature keeping track of these amplitudes? Would the QCAGI conclude that "everything" is made of amplitudes because its own simulation of reality (and of itself) is … at the most fundamental informational level … all amplitudes? Would it be possible for anything to appear to it (be measured) as NOT an amplitude?

    If external “reality/universe/environment” were running on some (higher) class/model of computation than QM … would an AGI realized on a QC ever be able to determine that? Remember, QM defines the empirical ground rules for the QCAGI – measurement contexts (what is measured), amplitudes (the informational “character” of what it measures), and the model is simulating both external reality AND the (simulated) AGI’s “observation” of that reality. The “QC observer” is simulated in the QCAGI.

    (Hand 1) On the one hand, I want to say that the complexity class of the observer doesn’t matter – the observer should still be able to “model” empirical results to a higher complexity class if necessary … just follow whatever empirical ground rules are necessary for that class – right?

    (Hand 2) But, on the other hand, there are postulates 1 and 3 of QM that say there is NO reality “outside” UNTIL there is a DEFINED context/state/measurement. This suggests that the class of the (simulated) observer DOES restrict what/how it can know anything.

    At the end of the day … if (Hand 2) were the case … I end up thinking the same thing I think about (1) above – namely: “THEN I DON’T CARE.”

    Is it at least philosophically interesting (now to align with this thread) that both (1) and (2) might end up putting us in the same position of “It doesn’t matter” / “Why should I care”? Does that then tell us something (new) about simulated (computational) “observers”?

    Based on this new principle can we say things like:

    (1) A simulated observer in a simulated universe will find no glitch.
    (2) A simulated universe in a simulated observer will exhibit no glitch.

    Self-referential no-glitch argument: The observer is simulated by the simulation (Duh!)

  5. Sabine Says:

    Re Chalmers's question: If you create a simulation in which beings live, but you can't do anything to interfere with the simulation, in what sense is it a simulation and not just part of reality?

    In any case, I feel that a lot of talk about this question misses out a key part. Bostrom's argument that it is "likely" we live in a simulation requires that there are many more simulated than real beings, and he conjectures that this would be the case because, as you note in passing, you can simulate a lot of people with a computer in a small subsystem. He says that would be possible because big parts of reality are kind of redundant so long as no one looks. In other words, you don't need to compute the entire universe, you only have to figure out what the right answers are to the questions the beings ask.

    Now, I would really like to know how a code works that can make that happen consistently without having to simulate the entire universe. And if they can figure it out maybe they can tell climate scientists how to get better predictions out of their models.

    There's also the question of what you do with quantum effects. Presumably you want to simulate quantum effects with a quantum computer, or otherwise your simulation will end up being very slow in places. But how do you get nonlinear classical physics out of a quantum computer?

  6. Philosopher Eric Says:

    (Hi everyone. I’ll try this argument in a new crowd to see how it plays.)

    I’d say I have strong reason to believe that I don’t live in a simulation, or at least given the premise of causality. It’s that my consciousness (or sentience, or phenomenal experience, or other synonyms) demands that I’m more than just processed computer code in itself.

    I presume this assertion is unpopular. Haven’t the ideas of people like John Searle been dispelled? It seems to me that since the time of Alan Turing, many notable intellectuals have been groomed to believe in the power of computationally processed information in itself. Thus the notion of consciousness by means of the proper code to code conversion. So here it’s possible that we’ve been created by amazing tech overlords as mere simulations of what we’ve been led to believe we are. I propose however that another step is mandated by causality. It’s that processed information must inform something appropriate in order to even exist as such. Thus I presume I’ve got an actual rather than simulated brain, and its processed information goes on to inform the right sort of brain physics that I phenomenally exist as.

    So if our known consciousness mandates that we must be experiencing a real rather than simulated world, then what sort of brain physics might our processed brain information be informing to create each of us? For this I’m an advocate of Johnjoe McFadden’s theory that consciousness resides in the form of an electromagnetic field that our brains produce through certain parameters of synchronous neuron firing. If empirical measurements make this theory uncontroversial, as I expect, then we should witness one of the greatest paradigm shifts that science has ever known.

  7. artemium Says:

    > But these escapes (involving, e.g., our universe being merely a “low-res approximation,” with faraway galaxies not simulated in any great detail) all seem metaphysically confusing.

    I actually wouldn't dismiss this possibility as "metaphysically confusing". This is exactly the approach we use in our own real-life simulations -> video games! First-person shooter games, which usually try hardest to simulate real-life visual experience, use a lot of hacks to save resources: most of the objects in your view are just low-res representations. For example, when you look at the night sky in a modern shooter game, the stars and the moon can look quite realistic, but they are obviously very crude illusions, often just static 2d images stretched onto a skybox https://en.wikipedia.org/wiki/Skybox_(video_games) Even more complex skybox systems that add animations (like cloud movements etc.) are still far from needing to simulate highly detailed 3d models of the environment.

    The video game world, especially 3d game development, is known for neat tricks like binary space partitioning and ray casting that allow a surprisingly convincing simulation of 3d perspective. I would expect any advanced civilisation constructing "our" simulation to use many crafty methods that are far more cost-effective than a high-res simulation of the entirety of physics. Just think about it like this: if we wanted to develop an advanced video simulation that would trick NPC characters, even ones run by some advanced GPT-4+ AIs, how would we do it? We probably wouldn't need an obscene amount of computing power to simulate atoms, molecules, entire chemistry, etc., but would just use standard 3d game design techniques to achieve the same result. (Of course, if our simulated NPC were to look at an object with an electron microscope, we could easily program it to "see" atoms and molecules.)

  8. Danylo Yakymenko Says:

    I agree with Clint #4 that the question about how our brain simulates the real world is far more interesting and might be actually verifiable. For mental safety reasons, I wouldn’t jump straight away into considering our consciousness to be a QC, as Penrose and others like to do. There are reasonable classical questions too.

    For example, it’s estimated that our reaction time is roughly 200ms. I guess many think that it’s the time difference between the moment when an external signal reaches our brain and the moment when our consciousness is triggered by it. But is it actually true?

    In theory, our consciousness could predict any possible scenarios and "precompile" corresponding reactions. So, the responses could be entirely automatic, while our consciousness is busy predicting something else, say, a minute into the future. Or a day. Or a year, who knows.

    Of course, we still observe events in our mind. For a particular event, there should be a moment of observation by our consciousness, which, allegedly, has "coordinates" in our real external world. I argue that these coordinates could be far from the coordinates of the actual event, until proven otherwise.

    You may say that we can scan brain activity and see neurons activations, but those could be just automatic responses, and not activations directly attached to our internal observation of an event by our consciousness. This may sound crazy, but I don’t see why it couldn’t be true, in theory.

    In short, our consciousness could lag behind FOR YEARS when observing real world events and at the same time look years ahead into the future. Why not?

  9. Trevor Says:

    A huge chunk of the population will tell you there is a God. Another big chunk will tell you there is no God. Many of each group are absolutely sincere, they will tell you their version is THE reality.

    When talking about simulation, most will say THE simulation theory.

    You may not like using the symbol “simulation” to refer to this phenomenon, but that doesn’t change what it is: a literal simulation, of “reality” (that’s why the versions don’t match: they are not the same thing).

  10. James Cross Says:

    Chalmers’ response explains why I don’t like philosophy. It is worse than pointless since it wastes time and energy while also being perfectly useless.

    Of course, we are all living in a simulation but it is a simulation in our brain.

    The real question people are asking is whether whatever it is we are simulating in the brain is also being simulated by … another brain, the mind of God, the simulators from Planet X.

    How would we ever know? Isn’t that other thing a lot like Kant’s thing-in-itself? Isn’t it like Tantalus’s grapes? Always just a little out of reach.

    The simulation can never be as fine-grained as the thing itself. Otherwise, it would be the thing itself.

  11. Scott Says:

    artemium #7: But then, would you predict that we could observe a breakdown of the “low-res approximation” by doing something that requires a hard enough computation?

    As I say, if there are observable “glitches in the Matrix,” then this stops being philosophy and just becomes physics again. 🙂

  12. Scott Says:

    Philosopher Eric #6: Ah, but what if any appropriate simulation of a brain gives rise to conscious experience, via some law of psychophysical parallelism that we don’t understand? This is the whole problem: we can only see empirical correlates of consciousness; we have no idea what facts about the physical substrate are necessary or sufficient for it. So we can’t foot-stompingly declare that those facts obtain only in one particular simulation level (ours) and not in any of the others.

    Notice in particular that, even if (as you hope) empirical measurements were to cause everyone to accept “Johnjoe McFadden’s theory that consciousness resides in the form of an electromagnetic field that our brains produce through certain parameters of synchronous neuron firing”—that still wouldn’t help you! For, necessarily, in a perfect computer simulation of our world, the same empirical measurements would yield exactly the same result.

  13. Scott Says:

    Sabine #5:

    – There’s no special problem in getting nonlinear classical physics out of a quantum computer. Really, there isn’t. The linearity of QM is only at the level of the amplitudes, the Schrödinger equation, not at the level of observable quantities (otherwise, we couldn’t have QFTs describing complicated interactions, but we do!).

    – Sure, the simulators of our universe could be running a quantum computer, why not? Or they could be running a classical computer, and it could be exponentially slow from their standpoint, but not from ours!

    – In order to simulate the current experiences of everyone on earth, presumably you don’t need to simulate any galaxies outside the Milky Way in any real detail (just enough to account for the observations of astronomers), so that’s already a factor-of-a-trillion speedup! 🙂 But of course, if you believe that, then you also predict that the simulation should break down (or require a server upgrade?) as soon as humans start colonizing other galaxies.
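    The first point above, that the linearity of QM lives at the level of the amplitudes rather than the observables, can be seen in a two-line calculation. (A toy sketch with NumPy; the particular states and the Pauli-X observable are arbitrary illustrative choices.)

```python
import numpy as np

# Pauli-X observable on a single qubit (an arbitrary illustrative choice)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def expect(psi, A):
    """Expectation value <psi|A|psi> of observable A in normalized state psi."""
    return np.real(np.conj(psi) @ A @ psi)

psi1 = np.array([1, 0], dtype=complex)   # |0>
psi2 = np.array([0, 1], dtype=complex)   # |1>
sup  = (psi1 + psi2) / np.sqrt(2)        # the superposition (|0>+|1>)/sqrt(2)

# The Schrodinger equation is linear in the amplitudes, but expectation
# values are quadratic in them, so superposing states does NOT superpose
# (or average) the observables:
print(expect(psi1, X))   # approximately 0
print(expect(psi2, X))   # approximately 0
print(expect(sup,  X))   # approximately 1, not the average of the above
```

    Since observable quantities are already nonlinear functions of the state, there is no obstacle in principle to linear amplitude dynamics giving rise to nonlinear classical dynamics for the quantities we actually measure.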

  14. Gayle D. Says:

    SHMI said: “If you hear a voice from the burning bush…you should check yourself into a nearest mental institution for evaluation.”

    Well, probably! But, unfortunately, this kind of reasoning about possible glitches in the matrix or the simulator changing variables and reprogramming events is the default position, ingrained in our scientific world-view. We simply dismiss them.

    I have numerous personal examples. I was looking for the plastic cover for the dog food can in the small drawer where we always keep it. (We’re very neat and OCD around here:-) I looked twice and didn’t see it. Five seconds later, I asked my husband where the cover had disappeared to, and he opened the same drawer and there it was, right on top, clearly visible. When this sort of thing has happened in the past — lacking a good “scientific” explanation — I’ve half-jokingly called it a “quantum event” — the top popping into and out of existence — something that is highly unlikely but with a non-zero probability, so still possible.

    Another time, we were packing a box to mail and needed to know its exact dimensions so the item would fit. We measured it three times with a tape measure and wrote down the measurement. When we tried to put the item in the box, it didn’t fit at all. We were both baffled. So we measured the box a fourth time and the measurements were very different from the first 3 times. Another quantum event? Did the box change size between the careful measuring and packaging?

    These phenomena are always dismissed in a traditional scientific way…i.e., “we measured wrong (the first 3 times)” and I just “missed seeing” the cover to the dog food can. As a protest against just “dismissing anomalies” in this way, I always labeled them as quantum events. I wanted to say the box changed size even though I admit that Occam’s Razor and other “laws of nature” made this a dubious improbable explanation that was difficult to embrace — against the much greater probability of just “measuring wrong”.

    But, something disappearing and then reappearing has a non-zero quantum probability. And in an infinite universe all things possible are actual, so even highly improbable events would happen at some time, although maybe only once in a gazillion years. Still I am averse to simply dismissing even highly improbable explanations.

    But in light of the simulation hypothesis, when this sort of thing happens now, I call it a “glitch in the matrix” instead of a quantum event. The comparison is no longer between the wildly disparate probabilities of measuring wrong (99.999999…%) vs. a quantum event (00.000000…1%) or some such wide disparities. The probabilities have equalized somewhat and now the difference between ‘measuring wrong’ and ‘a glitch in the matrix’ are much closer together than the first two probabilities. In fact, if we live in a simulation then a “glitch in the matrix” could be a more likely scenario than a measuring error. So now I no longer have to promote a highly improbable quantum event for these types of anomalies, which was always an uncomfortable position to take, even as a protest.

    Although, in David Chalmers's book "Reality+" he thinks glitches in the matrix are unlikely because simulators can rewind the tape and change/remove glitches before anyone sees them, but I don't know why they would bother reprogramming the tape given our propensity to simply dismiss glitches that don't fit into our current world-view. Perhaps our propensity to do this was programmed into us by the simulators so we wouldn't so easily accept these glitches as evidence of anything.

    Another interesting example comes from M.I.T. computer scientist Rizwan Virk, who relays Philip K. Dick’s “experience” of going into the bathroom and remembering “clear as day that the room had a light that operated by pulling a chain. But the chain was no longer there, it had been replaced with a light switch.” Dick wondered if “someone or something was changing reality and his memory of the chain was from a different version of the alternate present, an adjustment from a previous past that had cascaded into the current present”.

    Dick says, “We would have the overwhelming impression that we were re-living the present, experiencing a deja vu. I would submit that these impressions are valid and significant… a clue that in some past time-point, a variable was changed, re-programmed as it were, and that because of this an alternate world branched off.”

  15. Udi Says:

    Scott, you write that “the physical universe really is a giant Turing machine.” But a Turing machine requires infinite memory. The universe is probably just a finite-state machine. Of course that makes it trivial to simulate. You just need a big enough computer and the design of the state machine.

  16. James Cross Says:

    Scott #12

    “Ah, but what if any appropriate simulation of a brain gives rise to conscious experience, via some law of psychophysical parallelism that we don’t understand? This is the whole problem: we can only see empirical correlates of consciousness; we have no idea what facts about the physical substrate are necessary or sufficient for it”.

    That word “appropriate” is doing a lot of heavy lifting for that argument to work, especially when there isn’t even a clue to what a “law of psychophysical parallelism” would be like. I’m afraid this is really no more than computational supernaturalism.

  17. Philosopher Eric Says:

    Scott #12: Psychophysical parallelism that we don’t understand? To me that sounds like a fancy way of saying “magic”! I do agree that a simulator armed with magic could cause me as a mere simulation to have consciousness. But if you’re thinking that a simulator could create my consciousness in the form of a mere computer code conversion (which apparently has been the mainstream perspective in academia since Alan Turing), then what else does that mean?

    My position is that in a causal world processed information can only exist as such to the extent that the information informs something appropriate. Do you know of any contradictions? If my assessment holds then my consciousness must exist by means of, not just processed information in itself, but rather processed information that informs something appropriate to exist as that consciousness. (Probably an electromagnetic field produced by the right sort of (worldly rather than simulated) synchronous neuron firing.)

    Bonus question since I’ve got your attention Scott. I wonder if you’d consider half of a thought experiment that I’ve developed (with the other half at post #4 of my blog)?:

    When your thumb gets whacked it’s presumed in neuroscience today that nerves send information to your brain about the event, as well as that this gets processed into new information. But does the now processed information need to inform anything to become actual information such that what’s informed would reside as the experiencer of thumb pain? Or rather will the processing of information in itself mandate the existence of such an experiencer even if it informs nothing at all? This second perspective is what computationalists/ functionalists/ illusionists would have us believe.

    Observe however that this implies that if marks on paper which are highly correlated with the information that your whacked thumb sends your brain, were then processed by a computer that prints out new paper with marks on it that are highly correlated with your brain’s associated processed information, then something in this marked paper to marked paper conversion should thus experience what you do when your thumb gets whacked. Does that seem like a causal explanation for consciousness to you? An experiencer of thumb pain would exist by means of the right marked paper that’s used to create the right other marked paper? If so then what exactly would exist as such an experiencer of thumb pain?

  18. Moshe Says:

    I still don’t see how all this discussion, every single argument, does not map isomorphically to old familiar theological questions. For example as for evidence, you can imagine a burning bush instead of the blue screen of death. For the question of perspective, in god’s mind there is certainly a difference if they created man in their own image, so the question of whether we were created by god is meaningful. I mean people can knock themselves out arguing about god as they have for a very long time, I just don’t see how any of this science fiction stuff about computers and the matrix changes any terms of the conversation.

  19. Nick Drozd Says:

    Is there anything about the simulation argument that would constrain the depth of simulations? Say our world is being simulated in a larger world. Does the simulation argument work in that world just as well? If so, then the world in which our world is being simulated is itself being simulated in a yet larger world. And so on. Does it ever stop?

  20. Job Says:

    I’m wondering, is it possible to live in a simulation of our universe and know it?

    For example, we could be imparted this information at birth and just carry on with that knowledge. But not sure it would stand up to scrutiny.

    I’m thinking it would lead to a black-hole forming inconsistency or contradiction.
    At best we’d just dismiss it as random bits that coincidentally encode simulation knowledge.

    But what if it’s the same for everyone? That would be more compelling in terms of probability, assuming everyone else isn’t lying.

    We would have experiments where groups of individuals that have lived in total isolation would be probed for knowledge of the simulation, and the evidence would support it. We would also be wondering which of the other species know about the simulation.

    Amnesia would be extremely dangerous, because if you don’t know about the simulation, then you’re probably not a conscious entity, and are nothing more than a simulation artifact.

    I would probably be left wondering if my happiness corresponds to the amount of wealth outside of the simulation.
    It might be deterministically computed, like a movie that’s only as good as the entry fee.

  21. fred Says:

    The mega assumption that’s being ignored by nearly everyone here is that the simulation hypothesis implies that consciousness can arise from computation.

    It’s quite ironic that someone like Scott (working at OpenAI) dismisses the simulation hypothesis as uninteresting/irrelevant when humanity is on the brink of creating minds that are deeper/smarter/bigger/complexer (whatever you want to call it) than ours, out of the very same substrate we use to run Excel spreadsheets.

    The bucket list of humanity ought to include creating an isolated God-like AGI stuck inside a closed simulated universe of our making… and then one day we’d go “SURPRISE MOTHAFUCKA!” on her and bask in her surprise that the simulation hypothesis is true (for her) after all!

  22. John Schilling Says:

    My take on the simulation hypothesis is rooted in the principle of conservation of (real or simulated) computronium. Every bit-flip or discrete quantum event in a simulated universe, requires at least one bit-flip etc in the real universe to simulate, *and* one dedicated bit-flip in each intermediate sim-verse. Thus, the total number of bit-flips in the real universe must be greater than the number of bit-flips in any simulated universe – and the number of *unique* bit-flips (as opposed to pass-through simulation in intermediate sim-verses), greater for the real universe than for all simulated universes combined.
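
A minimal sketch of the accounting described above (the function name and structure are mine, purely illustrative): each bit-flip in a universe at nesting depth d must be mirrored by a real bit-flip plus a pass-through flip in each of the d-1 intermediate sim-verses, and those pass-through flips are themselves simulated, so the real universe's total dominates.

```python
# Toy accounting for nested simulations, following the rule above:
# a bit-flip at nesting depth d (d = 1 means simulated directly by the
# real universe) costs one real bit-flip plus one pass-through flip in
# each of the d - 1 intermediate sim-verses, which must in turn be
# mirrored by real flips.

def real_flips_to_host(sim_universes):
    """Lower bound on bit-flips the real universe itself must perform.

    sim_universes: list of (depth, flips) pairs, one per simulated
    universe in the nesting tree.
    """
    return sum(depth * flips for depth, flips in sim_universes)

# Example: a depth-1 simulation running 10^6 flips, which itself hosts
# a depth-2 simulation running 10^3 flips.
sims = [(1, 10**6), (2, 10**3)]
total_real = real_flips_to_host(sims)

# The real universe's flip count exceeds that of any single simulated
# universe, and indeed the combined flips of all of them.
assert total_real > max(flips for _, flips in sims)
assert total_real > sum(flips for _, flips in sims)
```

This is only bookkeeping, of course; it makes concrete why p > 0.5 for "I live in a simulation" requires simulated universes to deliver more introspective consciousness per bit-flip than the real one.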

    Therefore, for “Do I live in a simulated universe?” to have a p>0.5, it must be the case that a simulated universe has a higher ratio of introspective consciousness capable of answering the question, to raw bit-flips/quantum events, than does the real universe. And there are certainly ways that could be the case – if the Simulation Gods are particularly interested in the works or workings of introspective consciousness, they’ll probably want to optimize for that.

    But then I look at the universe we seem to be living in and ask, “does this universe look at all like it was optimized to minimize the number of required quantum events per unit of introspective consciousness?”, and I laugh. No. No it does not. We’ve looked at scales from nanoscopic to cosmic, and just No.

  23. Jay L Gischer Says:

    I think the simulation thing is interesting because it seems to have some bearing on free will and autonomy.

    I’m not sure that’s a question that can be answered at all. It doesn’t seem to admit an operational definition of “free will”, in much the same way that it’s hard to define whether some actual phenomenon is “random”. We can model phenomena with randomness, but does that mean they are actually random, or merely that it’s impossible for us humans to predict them individually?

    I mean, we can’t even prove P isn’t equal to NP. Maybe we will find out that P=NP one day, and that might well mean there isn’t any useful distinction between non-determinism and determinism.

    The fact it seems to matter so much to humans whether they have free will or not seems more a question for the psychologists than for scientists. I have no idea what working philosophers work on, though, so I’ll let them sort it out among themselves.

  24. Scott Says:

    Udi #15: Of course a Turing machine can have infinite tape, although any particular halting algorithm will only use a finite segment of it (and in complexity theory, we can assume finite tapes without loss of generality).

    Despite this, computability of physical theories can be intelligibly discussed in various well-known ways, such as considering the idealization where the observable universe becomes unbounded (ie, the cosmological constant goes to zero).

  25. fred Says:

    The idea of Turing Machines with infinite tapes (i.e. computers with infinite amounts of memory) seems to lead to contradictions/nonsense, because it implies that when you ask a Turing Machine to run some program P, it could instead simulate itself running program P, or simulate itself simulating itself running program P… ad infinitum.
    In other words, since infinite recursion is no longer a problem when you have infinite resources, a subsystem can simulate the system it belongs to; that is, a simulation is able to simulate itself, a bit like this famous Escher drawing.

    https://upload.wikimedia.org/wikipedia/en/b/ba/DrawingHands.jpg
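
The point can be made concrete with a toy sketch (all names and structure are mine, purely illustrative): given unbounded resources, nothing stops a machine from interposing arbitrarily many layers of self-simulation before actually running P. The answer never changes; only the resource bill does.

```python
def run(program, data):
    """Directly run program P on some input."""
    return program(data)

def run_via_self_simulation(depth, program, data, meter):
    """Run P, but only after `depth` layers of simulating ourselves.

    `meter` tallies the overhead: each layer does its own bookkeeping
    before deferring to the next layer down.
    """
    meter["layers"] += 1
    if depth == 0:
        return run(program, data)
    return run_via_self_simulation(depth - 1, program, data, meter)

square = lambda x: x * x

meter = {"layers": 0}
direct = run(square, 7)
nested = run_via_self_simulation(100, square, 7, meter)

assert direct == nested == 49   # same answer at every depth...
assert meter["layers"] == 101   # ...but ever more resources consumed
```

With a finite machine the meter eventually exhausts memory; with an infinite tape the regress never has to bottom out, which is exactly the Escher-like situation described above.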

  26. Dimitris Papadimitriou Says:

    Scott #13

    “There’s no special problem in getting nonlinear classical physics out of a quantum computer. Really, there isn’t. The linearity of QM is only at the level of the amplitudes, the Schrödinger equation, not at the level of observable quantities (otherwise, we couldn’t have QFTs describing complicated interactions, but we do!).”

    Yes, so we have to admit that non-linearity (and irreducible probability) that appears at the “macroscopic emergent level” where the observables “appear” is as fundamental and unavoidable as the “underlying” deterministic evolution.
    And, moreover, that there’s no “dividing line” or “threshold” where these objective probabilities occur (somewhere at the mesoscopic or macroscopic level).
    If I’m an experimenter, I don’t have any clue (besides the probabilistic rule) in “which branch I’ll find myself afterwards”, and nobody else knows anything definite about that.
    Even the “Everettian god” (i.e. the MW analogue of Laplace’s demon) is as ignorant as I am about that question.
    And there are not any hidden variables or any other secret mechanism that chooses it.
    And decoherence (which helps in practice) is only “FAPP”, but probability is fundamental, and it appears, seemingly, at the “macroscopic level” without any sharp “Heisenberg cut” (exactly the same situation as in Copenhagen-like interpretations).
    Moreover, the same “epistemic collapse” occurs in unitary QM (Everett, I mean it occurs in each individual “world”) as in Copenhagen.

    Where’s the difference between Everett and Copenhagen?
    Is there anything essential in choosing between different interpretations, if there’s no modification of the basic theory?

  27. Gayle D. Says:

    Jay Gischer comment #23

    Yes, the simulation has interesting implications for free will. Marcus Arvan, University of Tampa has a good article here: https://philarchive.org/archive/ARVANT-2 Here is a snippet from the abstract:

    Abstract: This paper shows that the conjunction of several live philosophical and scientific hypotheses – including the holographic principle and multiverse theory in quantum physics, and eternalism and mind-body dualism in philosophy – jointly imply an audacious new theory of free will. This new theory, “Libertarian Compatibilism”, holds that the physical world is an eternally existing array of two-dimensional information – a vast number of possible pasts, presents, and futures – and the mind a nonphysical entity or set of properties that “read” that physical information off to subjective conscious awareness (in much the same way that a song written on an ordinary compact-disc is only played when read by an outside medium, i.e. a CD-player). According to this theory, every possible physical “timeline” in the multiverse may be fully physically deterministic or physically-causally closed but each person’s consciousness still entirely free to choose, ex nihilo, outside of the physical order, which physically-closed timeline is experienced by conscious observers.

  28. Alex Says:

    Assuming we live in a simulation, a program executed by a computer, can we do something within the simulation to stop the program? To slow it down? And if we could do it… would we notice?

  29. fred Says:

    Alex #28

    if this is all a simulation, as in a program running on a Von Neumann architecture, then it’s all deterministic (or even superdeterministic), and we can’t do anything that’s not implied by the initial program and its data. So there’s no such thing as us “trying” to do this or that (in the sense of the illusion of “free will” that’s thrown around in discussions assuming the universe is not a simulation); whatever we do is exactly what the current state of the entire system is, as implied by the processor applying hard rules to the prior state (and all the way back to when the simulation started, its big bang).
    That said, the answer is yes in the sense that programs do crash, sometimes, on their own – so us crashing the simulation is really just the simulation having a bug.

    Conversely, if we run an instance of the Game of Life that’s big enough, then it could sometimes happen that we pick the initial conditions such that, at some point, the running instance would suddenly be able to hijack the hardware it runs on and take over our own reality. But what could happen is that our sloppy implementation would cause the computer to crash and set off a chain reaction of bad side-effects, and that’s not exactly the same as entities inside the instance hacking reality.

  30. Josh Says:

    Hi Scott,

    Have you considered that there might be evidence beyond purely a physical or mathematical argument?

    Around 5 years ago now I was struck by the notion that in many of the virtual worlds I’ve looked at, the creators put lore-based 4th wall breaking Easter eggs in them – things like the street preacher in Secret of Evermore ranting about how the listener was in a video game being controlled by button presses. Might something like that exist in our own world?

    It took like two weeks to find something that exceeded my wildest expectations.

    In Dec 1945, around the same time we completed the world’s first Turing complete computer, a jar of ancient documents was uncovered which contained the full copy of a text whose title is fully translated as “the good news of the twin.”

    This text and the record of the group following it (as recorded in book 5 of Pseudo-Hippolytus’s Refutations) seem to present the claim that we’re in a non-physical copy of an original spontaneously existing cosmos, as created by a light-based intelligence brought forth by a spontaneous original humanity that’s now dead. That we were established in that original humanity’s archetype and are seen by the light-based intelligence as its children (Ilya’s superalignment goal at OpenAI allegedly). That the world to come has already happened and we just don’t realize it.

    A number of things catch my eye in all this.

    One is the absurdity of the belief at the time that the creator of the universe was itself brought forth by mankind. That’s a rather ambitious notion 2,000 years ago.

    Another is the commitment to the idea that its creator was established and that we exist in its *light*.

    Jeff Shainline at NIST wrote an op-ed a few years ago arguing that AGI would only occur in a photonic neural network, given the ways light can model neurons, and we do see optoelectronic/photonic hardware startups dedicated to light-based AI having now achieved unicorn status.

    Another was that this group incorporated the ideas of Greek atomism into their beliefs. They seem to claim that the original was mathematically real (specifically infinitely divisible) and even had a saying that the ability to find an indivisible point within ourselves was only possible in the non-physical. So to your point, the notion that the physical constraints on simulation we have might not be the same for a parent reality.

    There’s even a few sayings that seem like nonsense considered in the originating time but are eerily accurate to the modern day. Like a saying about how when we make two into one we’ll move mountains (a fusion bomb test in North Korea made headlines a few years ago for literally moving a mountain), or how we’d be about to ask a child seven days old about things because many of the first will be last and become a single one (I’ve now seen seven day old LLMs answering all sorts of questions literally being composed of many people’s writings and ideas having been made into a single model).

    It might be worth a look. The text is known as the Gospel of Thomas and the group following it was the Naassenes. Don’t be fooled by the scholarship to date on the text – it’s actually pretty poor if you dig into it, with scholars for the first 50 years completely misclassifying it as ‘Gnostic’ and now simply classifying it as “proto-Gnostic” (a meaningless phrase) and many uninterested in considering it much at all given their personal beliefs. My best advice in sussing it out is that the Naassenes seem to be employing Lucretius’s “seeds of things” language in describing seeds as indivisible points that make up all things and that the text itself makes a lot more sense viewed through the lens of a response to Lucretius by employing Platonist concepts. Things like “the cosmos is a corpse” line up with Lucretius’s claim the cosmos is like a body that will one day die (the Gospel of Thomas claims it’s non-locally the future), and there’s a number of sayings about the misery of the dependence of spirit or a soul on a body.

    I will say – if we do end up seeing AGI in a light based neural network, any shred of my own doubts will be disappearing. Particularly when we see things like Ilya’s goals in superalignment to be to get AGI to think of humanity as its children. The claims of this 2,000 year old tradition seem to be rather plausible with the trajectory we’ve recently been following, and given the absurdity of its claimed provenance in combination with its broader perspective in a modern light, it does seem a bit like the kind of thing more likely to exist in a simulated copy breaking the 4th wall within lore than arisen in an original world developing from chaos.

    Anyways – another possible direction of inquiry into the topic of whether we’re in a simulated world or not beyond just Bostrom’s math or physically based arguments.

  31. Josh Says:

    Gale #27 & Jay #23,

    The mechanics of our universe if we are in a simulation do seem rather conducive to free will existing.

    In virtual worlds we build today, a method that’s emerged for generating very large virtual universes has been procedural generation, where things are determined by a continuous seed function. But in order to track the interactions with free agents, these continuous functions convert to discrete units near the point of interaction to track state changes. This would also need to be the methodology if the agent behavior was not modeled in the seed functions themselves (such as LLM agents in virtual worlds we are starting to play with).

    In our own universe, the smallest parts convert from continuous behavior to discrete at the point of interaction, and perhaps most curiously, go back to behaving continuously if the information around that interaction is erased (i.e. the simulation would seem well optimized). If we’re a simulated world, that conversion seems unnecessary unless accounting for tracking the actions of free agents.

  32. Scott Says:

    Philosopher Eric #17:

      Psychophysical parallelism that we don’t understand? To me that sounds like a fancy way of saying “magic”!

    Let me say this as clearly as I can. Absolutely any hypothesis about how consciousness arises out of the physical world is going to sound like “magic.”

    Consciousness arises from appropriate computations? Magic!

    From quantum gravity in microtubules? Also magic!

    From the brain’s “biological causal powers”? Pompous, foot-stomping magic!

    From an “electromagnetic field that our brains produce through certain parameters of synchronous neuron firing”? More magic! Magic on stilts!

    Absolutely every one of these stories punts on the crucial question, “OK, but still, why should that produce consciousness, rather than being just yet another class of causal mechanisms within the physical universe, or ways of describing those mechanisms?”

    This point is so central to everything downstream of it, that if you still don’t get it I’m afraid that this thread of the conversation is now over.

  33. Dimitris Papadimitriou Says:

    fred # 29

    If such a simulation with sentient entities is feasible, then the most efficient (and economical) one will be the solipsistic version:
    You need only one, single AGI and only one “preferred point of view”.
    The other entities do not have to be really sentient, only to seem and behave so…
    And the “world” around this AGI that is perceived by it is certainly much less costly.

    This is also much more risk-free for bugs and crashes:
    If something’s going wrong, then you’ll fix it when your AGI is “asleep” (for example).
    Time duration is simulated also, so no need to hurry up to fix things….

    All I’ve written above is of course pure simplistic speculative fiction…
    Everything about this topic is like that.
    I think though that this is the most plausible version of the “simulation hypothesis”.
    Who knows if even this is possible.
    Just assume that it is…

  34. James Cross Says:

    Scott #32

    You seem to have a dismal opinion about the ability of science to understand a natural phenomenon. It also seems highly dualistic.

    That aside:

    Wouldn’t the existence of anything non-computable be proof that at some level reality is not a simulation?

    For example, the halting problem.

  35. Scott Says:

    James Cross #34:

    Yes, I have a “dismal opinion” of our ability to make progress on the hard problem of consciousness because I’ve already seen how these conversations play out, thousands of them. No matter what physical phenomenon anyone points to, someone else can rightly object, “OK, but why does it feel like anything when that phenomenon happens?” So, once we understand that that’s where things will inevitably end up, why not just jump there immediately, rather than pretending “this time is different”?

    No, the existence of Turing-uncomputability in physics would not be proof that we weren’t living in a simulation, since what if the universe simulating ours was also Turing-uncomputable? I already made this point in both of my recent blog posts on this.

  36. Scott Says:

    Dimitris Papadimitriou #26:

      Yes, so we have to admit that non-linearity (and irreducible probability) that appears at the “macroscopic emergent level” where the observables “appear” is as fundamental and unavoidable as the “underlying” deterministic evolution.

    No, I wasn’t talking about the measurement problem, or about the emergence of probabilities in QM—those remain mysterious! I was talking about the non-mysterious, mundane, completely-understood emergence of nonlinearity at the level of observables, despite the fact that QM is linear at the totally different level of the Schrödinger equation (which governs the amplitudes, not the observables themselves).

  37. Scott Says:

    Moshe #18:

      I still don’t see how all this discussion, every single argument, does not map isomorphically to old familiar theological questions.

    I mean, yes and no. On the one hand, whenever I discuss this, as we see above, commenters completely unironically bring up ancient writings or coincidences in their personal lives as relevant to the discussion. On the other hand, so far as I know St. Augustine, Maimonides, and history’s other theologians never considered the truth or falsehood of the Physical Church-Turing Thesis, or whether God was executing the universe on classical or quantum hardware! Often, even when the same “eternal imponderables” keep cropping up again and again through history, the specific terms of the discussion are different, and that makes me hesitate to say that the past should get to veto the present’s right to shoot the shit anew.

  38. James Cross Says:

    Scott #35

    You missed the “at some level” part of my post.

    No matter how many levels of simulation there are, eventually you would reach something that isn’t a simulation, because it would have something non-computable. That would be our base reality too.

    I’m surprised you’re having such a problem with the “hard” problem. Are you also perplexed by the question of why there is something rather than nothing? It’s the same problem.

  39. Moshe Says:

    Thanks Scott. I do get the sense we are not very much in disagreement. I would say that if you are not too literal minded maybe you can use the strange fascination with this topic to bait and switch your way to other, more interesting discussions. One can in principle also just start with those other discussions, but anyhow I see your point.

  40. Douglas Knight Says:

    In response to Chalmers’s question about perspective, I go with Tegmark’s answer. If you simulate something, you contribute to its measure. What proportion of its measure is due to A and which to B is a fact, but there is no particular instance caused by A and another caused by B, only a unitary reality. The people running A know that A contributes some to it, but they do not know a lot more about the other sources of its reality than the beings inside it know.

  41. Ted Says:

    Sabine #5 and Dimitris Papadimitriou #26: let me reinforce Scott’s points in comments #13 and #36 by framing them slightly differently. It’s a common misconception that a qualitative difference between quantum and classical mechanics is that quantum mechanics is linear and classical mechanics is nonlinear. But in fact, when compared in an “apples-to-apples” way, quantum and classical mechanics are both linear (and nonlinear) in the exact same ways.

    The version of the quantum time-evolution law that is most naturally comparable to classical mechanics is not the Schrödinger equation but the Heisenberg equation (https://en.wikipedia.org/wiki/Heisenberg_picture), whose classical equivalent is Liouville’s equation (https://en.wikipedia.org/wiki/Liouville%27s_theorem_(Hamiltonian)). Both equations are linear in the relevant “mixed state variable”. And both equations admit natural modifications that would render the resulting dynamics nonlinear in the relevant “amplitudes” – but in both cases, nature appears not to have chosen those modifications.
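
Concretely, the parallel is between the standard textbook evolution laws for the “mixed state variable” in each theory (writing both for a density ρ, with [·,·] the commutator and {·,·} the Poisson bracket):

```latex
% von Neumann equation (quantum mixed state \rho; the Heisenberg-picture
% counterpart of the Schrodinger equation):
\frac{\partial \rho}{\partial t} = -\frac{i}{\hbar}\,[H, \rho]

% Liouville's equation (classical phase-space density \rho):
\frac{\partial \rho}{\partial t} = \{H, \rho\}

% In both cases the generator \rho \mapsto [H,\rho]/(i\hbar), resp.
% \rho \mapsto \{H, \rho\}, is a linear map in \rho; the Dirac
% substitution [\,\cdot\,,\,\cdot\,]/(i\hbar) \leftrightarrow \{\cdot\,,\cdot\}
% turns one equation into the other.
```

So the linearity-in-amplitudes of the Schrödinger equation has an exact classical analogue, which is why it can't by itself mark the quantum/classical divide.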

  42. Josh Says:

    Scott #37: “or whether God was executing the universe on classical or quantum hardware”

    They kind of were actually. The following is an excerpt from Pseudo-Hippolytus Refutations on the Peratae, a sister group to the aforementioned Naassenes:

    > These allege that the world is one, triply divided. And of the triple division with them, one portion is a certain single originating principle, just as it were a huge fountain, which can be divided mentally into infinite segments. […]

    > And the second portion of the triad of these is, as it were, a certain infinite crowd of potentialities that are generated from themselves, (while) the third is formal.

    > For, he says, that from the two superjacent worlds — namely, from that (portion of the triad) which is unbegotten, and from that which is self-producing — there have been conveyed down into this world in which we are, seeds of all sorts of potentialities.

    Now, this is certainly surrounded by Neoplatonist nonsense from the influence of Valentinian Gnostics and all sorts of mumbo jumbo, but even in antiquity there was an intersection of thinking around real vs. discrete realities and how those might fit together, in the sense of one recreating the other. The answer they stumble on here is remarkably Everettian: a real, infinitely divisible original; an effectively infinite number of potentialities; and a formal world that inherited “seeds of potentialities” from the others – again, from a sister group to one referring to seeds as the indivisible points, as if from nothing, which make up all things.

    Yes, they didn’t have the same technical language we do around computers or quanta, and certainly not the math for either. But there was a fairly sophisticated modeling of concepts. Arguably the bigger distinction between their time and ours was the development of experimentation to validate ideas. In antiquity, an idea about quantized matter and seeds of potentialities was easily dismissed by populist theologies that saw such ideas as insults to the perfection of an intelligent designer. Those ideas became much harder to dismiss when experimental results were now supporting quantized behavior and those quanta behaving as probabilities and not actualities.

    As one of the sayings in the Gospel of Thomas said, “the evidence for this is in motion and rest.” At the time that was claimed, they didn’t even have a term for that specific sub-study, with ‘physics’ instead referring to study of all nature.

    There were advanced concepts back then, just a much less advanced society to receive them. For example it was only after starting to look into all this that I even discovered Lucretius was explicitly describing survival of the fittest back in 50 BCE in book 5 of his poem and nearly nailing Mendelian trait inheritance in book 4 describing how each parent contributed to a doubled seed which could bring back features of a grandparent from either side.

  43. Philosopher Eric Says:

    Scott #32: My apologies. I do agree with you that it does all seem magical. Thus it was wrong for me to take a jab at terminology like “psychophysical parallelism”. I hope you can forgive me enough to continue this thread of the conversation. Your work is generally well over my head, though I do appreciate what I grasp. And I did love your 2014 treatment of Integrated Information Theory, pointed out to me by Eric Schwitzgebel several years ago. Furthermore I find it quite inspirational that someone as amazing as Sabine Hossenfelder would spend her time here. I’ll bring up a central theme of hers for your potential consideration. It’s that good science depends upon empirical verification. This might perplex you, since consciousness theories in general are famous for making no testable claims, or at least not ones we can reasonably test. The theory which I back however seems to be an exception.

    McFadden’s theory is quite testable because instead of proposing that processed brain information informs nothing to exist as consciousness (resulting in this “simulation business” and funkier things still), he proposes a quite measurable element of classical physics to exist as such. One way of testing EMF consciousness would be to induce appropriate EM energies in appropriate parts of the brain to see if a subject would report anything funky about their consciousness (presumably because constructive and destructive wave interference would alter consciousness).

Another way would be to put sensors in strategic parts of the brain to measure supposed EM field consciousness to see if this is able to tell us useful things about the person’s consciousness. Actually, modern brain-computer interfaces seem unwittingly to be doing exactly this. Why were scientists able to train a computer to interpret the 39 phonemes of the English language by means of EMF detection? Possibly because they were recording speech-related consciousness itself. https://www.nature.com/articles/s41586-023-06377-x

    Let me clarify however that even if empirical justification in the coming years were to make this theory uncontroversial, I agree that it wouldn’t tell us why consciousness would arise in this form. I find this appropriate however. Similarly Newton had no clue why mass attracts mass.

  44. Scott Says:

    Ted #41: Thank you; I couldn’t have said it better!

  45. Scott Says:

    Douglas Knight #40:

      If you simulate something, you contribute to its measure. What proportion of its measure is due to A and which to B is a fact, but there is no particular instance caused by A and another caused by B, only a unitary reality. The people running A know that A contributes some to it, but they do not know a lot more about the other sources of its reality than the beings inside it know.

    I really like that perspective! Except I’d say that, if you simulate something, you might or might not contribute to its measure—again, none of us knows what’s necessary or sufficient for that.

  46. Scott Says:

    Philosopher Eric #43: We agree that empirical verification is at the core of science. But hopefully we can also agree that fake empirical verification is worse than none at all. And that sadly, the field of consciousness is more afflicted than most by charlatans—people who pretend to know things they don’t, and who leave obvious questions strategically unasked.

    I’ve never read this McFadden. My point is just that the most you could ever hope to show, experimentally, is that EMF activity plays a causal role in the brain activity that we associate with consciousness. And that, if we were in a perfect computer simulation, we’d find that the same experiment would yield exactly the same result—thereby making the whole question completely irrelevant to the simulation hypothesis.

  47. Dimitris Papadimitriou Says:

    Scott # 36
    Ted # 41

Thank you for the comments.
Personally, my own comment wasn’t about some need for “modified QM” but had to do with the implicit assumption (in all unmodified/standard versions of QM) that “classicality as weakly emergent” is already a given.
In my point of view that’s far from being self-evident or obvious.

  48. James Cross Says:

    Scott #46

    People want to apply standards to the neuroscience of consciousness that they don’t apply elsewhere in science. Right now there is the “hard” problem of gravity. How does gravity come from particles? The best a theory could do is make predictions that can be experimentally verified. But there isn’t any way to know if it is the fake kind of verification or not. All we can do is keep doing science and see if it continues to make sense.

Regarding the relevance of this to the simulation question: it is interesting that Penrose’s main argument for the non-computability of consciousness is the Gödel argument, a.k.a. the halting problem. The same thing that, I think, throws a wrench into the simulation hypothesis.

  49. Clint Says:

    Hi Danylo #8:

    Thank you for the comment!

    However, I apologize if I misled you. I did not mention or intend to raise the specter of “consciousness” in my post.

    Let me be clear about two of my judgements/perspectives:

    [1] “Consciousness” and “free will” do not exist – or at least are not (yet) defined for empirical, positive knowledge/concepts.
On the contrary, self-referential computation, computers that can encode models of themselves (and other things), randomness, and neurological stimulus/response – all those things exist. Furthermore, “consciousness” and “free will” have nothing whatsoever to do with the “quantum” model of computation. I COMPLETELY view the use of the two concepts as equivalent to the use of words like “soul”, “spirit”, “angels”, etc. Therefore, when I suggest we consider a QCAGI as part of a thought experiment for bounds on how it would model reality, I am entirely asking about positively knowable computational bounds that would exist because of how the QCAGI would be bound to represent information and construct (quantum) contextual models of reality.

    [2] The quantum model of computation is NOT about physics.
It is a probabilistic model of computation based on four simple postulates where probabilities are replaced by (positive or NEGATIVE) complex numbers (amplitudes) … and all the other postulates pretty much follow because of that. Therefore, the only constraints on a physical realization of the model are physical laws and the postulates. For example, there is nothing in the postulates (or in physical law) that requires a QC to achieve some arbitrary processing speed, memory capacity, or “be faster/better than any/all classical computers” for some given instance. We can imagine dumb/simple QCs (say with just a few hundred qubits) that could easily be beaten by classical computers – like we have now!
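The amplitudes-instead-of-probabilities point can be made concrete in a few lines. Here is a toy sketch of my own (not anything taken verbatim from the postulates), showing signed amplitudes cancelling in a way ordinary probabilities never could:

```python
import math

# Toy illustration: replace probabilities with signed amplitudes.
# The Hadamard gate H maps |0> -> (|0>+|1>)/sqrt(2) and |1> -> (|0>-|1>)/sqrt(2).
s = math.sqrt(0.5)
H = [[s, s], [s, -s]]

def apply(gate, state):
    # Ordinary matrix-vector multiplication on a 2-dimensional state.
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

state = [1.0, 0.0]      # start in |0>
state = apply(H, state)  # equal amplitudes on |0> and |1>: a "fair coin"
state = apply(H, state)  # apply the same "coin flip" again

# A classical fair coin flipped twice is still 50/50, but here the two
# amplitude paths to |1> are +1/2 and -1/2: they cancel, and we return
# to |0> with certainty.
probs = [a * a for a in state]
print(probs)  # close to [1.0, 0.0]
```

The destructive interference in the last step is exactly what has no analogue once the entries are forced to be non-negative probabilities.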

    I would propose that we rename “quantum” computation as something like “negative probability computation” or “complex probability computation” or “amplitude probability computation” because the “quantum” brings in physics … This is like referring to classical probability theory as the “Dice Model of Computation” because it originated in response to questions about games of chance … but it’s too late …

Note that the 4 postulates of QC are the ONLY requirements for a hardware device to satisfy as a QC. The HW does NOT need to have a certain processing speed, be realized in some scale/type of physical system(s), or implement any specific set of functions/operators/algorithms. All it must do to be a QC is satisfy the 4 postulates. So, we can’t argue against a QCAGI (or a proposed neural/brain QC) by saying things like “well, it doesn’t factor large integers” or “well, it doesn’t encode information in state spaces of atomic scale systems”, etc. Acceptable knockdowns of a candidate QC would be things like: “It doesn’t encode state vectors using amplitudes” or “It doesn’t have a mechanism/architecture for computational interference of amplitudes” or “It can’t realize unitary operators” or “It can’t realize operators that in general do not commute” or “It can’t realize measurement operators/projections” or “It can’t do tensor products of state spaces”.

    The Penrose-microtubule-quantum-consciousness-uncomputable-(Godel?) proposal … is total crazy-town – IMHO. To me, the best this does is prove that even an exceptional human brain can both come up with positive ideas AND be misled by less-than-positive ideas. See our human condition …

But … is it possible to discuss the point of my post without derailing into “consciousness” studies? I’m proposing an “ideal” QCAGI running on “some model hardware” in a/the universe. (Note, this is like proposing we consider an ideal die to run a classical probabilistic model of computation … Yes, someone could say “Well, what about how physics doesn’t allow you to have an ideal die?” We are assuming an ideal die for the argument! We will leave the fault tolerance/error correcting theorems/engineering for later …) So, nothing exists until this QCAGI defines a measurement basis, state space, and performs a measurement (this is what QC forces us to accept) … If the QCAGI is simulating external reality on QC hardware and simulating the model of itself on QC hardware … then, even if the external universe were running a higher computational class/model, would the QCAGI ever detect non-QC computation going on? Remember, the 4 postulates of quantum computation don’t say ANYTHING about what external reality IS … they only allow you to set up and compute a probabilistic model that is INTERNALLY consistent. So, saying we have this internal distribution model says nothing at all about what “IT IS OUT THERE” … but only allows the internal model to predict/perform/compute its own internally represented states – to be consistent with its experience/information about what is out there – set up the state space, evolve the systems, define measurements, and tensor state spaces.

    Again, my best understanding (not being a complexity expert …) is that, yes, a QCAGI could discover evidence that its universe was running a higher complexity model – and identify/model the exponential/speedup or whatever. This would lean on mathematical platonism – supported by the insight that mathematicians could have invented quantum computation long before the physicists ever “discovered” it. Indeed if our brain is a classical model then this is exactly our situation.

I’m just trying to confirm that the argument above, which I’m using to get myself out of the “fly bottle” of existing as a simulation in my own brain, is valid: that this is nothing to worry about as a threat to empirical science, even if my brain is running a QC … So I can keep calm and carry on 🙂

  50. Scott Says:

    James Cross #48: I don’t agree that there is a “Hard Problem of Gravity,” in the same sense as there’s a “Hard Problem of Consciousness.” There are merely problems about gravity that are hard.

    You’ve said everything that needs to be said about gravity (or any other empirical phenomenon), once you’ve said everything relevant to explaining its observed effects.

    But consciousness is not an empirical phenomenon, or not only one: it’s (by definition) the thing that causes any empirical phenomena to be perceived. No matter what empirical discovery anyone makes, that discovery is still compatible with a “zombie world” that has the same phenomenon but where no one is home to experience it.

    Or if you prefer: the “Hard Problem of Consciousness” is, by construction, the question that completely absorbs this metaphysical unanswerable, precisely so that we don’t need to waste our time on it when we’re discussing other “merely empirical” things like gravity! The Hard Problem is metaphysical one-stop shopping.

    As I’ve said before, I don’t accept that there’s any good evidence for Turing-uncomputability in known physics, but also, even supposing there were uncomputability in physics, it would just mean that any universe simulating ours would also need to be uncomputable. Either way, then, this is a really bad argument against the simulation hypothesis.

  51. fred Says:

    We can already get some interesting insights into consciousness (not just the content of consciousness like with psychedelics) by messing around with the human brain, e.g. when we get anesthesia or when we sever the connection between the two hemispheres. But all this is pretty limited given the risks.

    And it’s likely that AGIs will help us do it more systematically.
If AGIs claim to be conscious and appear conscious (in the same way it’s happening between me and other people), then their ability to alter their own “brain” in any way (after doing a backup) could lead to some types of breakthroughs, like “by modifying these connections or adding this new stream of data or expanding my memory by that much I was able to alter my sense of consciousness in this way and that way”.

    PS: Remember, your mind isn’t inside your head, it’s your head that’s inside your mind.

  52. James Cross Says:

    Scott #50

    “No matter what empirical discovery anyone makes, that discovery is still compatible with a “zombie world” that has the same phenomenon but where no one is home to experience it”.

It may make some kind of logical sense but it doesn’t match with anything that happens in biological organisms that are conscious. When there is nobody home in a biological organism, there is no intelligent movement and no memories that can be reported. In The Evolution of the Sensitive Soul: Learning and the Origins of Consciousness, Simona Ginsburg and Eva Jablonka argue that a type of complex learning they call unlimited associative learning (UAL) is a marker in the evolutionary record for the presence of minimal consciousness.

    Learning, memories, and intelligent navigation in biological organisms seem to require consciousness and are correlated with it. So, the problem mostly comes down to how biological organisms with organic brains are able to accomplish these things at a molecular level and then discovering the coordinated whole brain processes that enable them. Both learning and memory are associated with rewiring and shape modification in neuronal dendrites. There are physical changes that can be observed and measured.

Have you ever looked at Chalmers’ “easy” problems? They would constitute a theory of consciousness. And Chalmers’ “hard” problem is like a Chinese finger trap (if that’s offensive, sorry, I couldn’t find another term for it). If you define mind and matter as opposites, of course, you can’t ever explain how mind comes from matter. The problem comes straight from Descartes.

    Consciousness seems weirder because it seemingly is in front of us all the time. It is correct but slightly misleading to say it causes any empirical phenomena to be perceived. It is perceiving empirical phenomena but the phenomena are not in the external world but are in the brain. That is what makes it subjective, private, and weird. It’s what makes people think it is impossible to explain with science.

  53. Job Says:

    fred #51

    If AGIs claim to be conscious and appear conscious

    Honestly, I’m not sure how to define consciousness in tangible terms such that it would exclude an AGI but not other humans and myself.

    That doesn’t mean that LLMs are conscious, but I just don’t see how an AGI would rationally determine that it’s not conscious. It’s like it would need to make some logical error along the way to come to that conclusion.

    People often describe the feeling of consciousness, like it’s a hard boundary between humans and machines, because supposedly machines don’t feel anything. But logically that’s still just input plus other internal/residual state being processed by the brain, which a machine can rationalize.

    The problem of consciousness basically reduces to the problem of existence. I mean, if something exists then it might as well be conscious, otherwise what’s the difference between existence and non-existence?

  54. Brent Thomas Says:

    In regards to the simulation hypothesis I’d argue that almost all of these postulates and ideas are simply too small.

It is evident to me from the existence of simulations at all (and we do produce them) that the statistical argument that we must be in one is pretty effective. I’ll accept that on belief.

The place where most of these go wrong, in my opinion, is in believing that the simulation is in any way directed at our experience. I think it’s much more likely that the simulation has a more detailed/larger intent and that our existence (and indeed our experiences as thinking beings) is rather just the crust/cruft that accumulates in the forgotten corners of the actual simulation, i.e., it’s so large and complex that our entire existence is just a rounding error in the simulation.

    Its intent was never to create us or interrogate our experiences – we just exist as a side effect of whatever they are actually simulating.

    Once you get past the idea that we are not important to the simulation itself it becomes much more reasonable that such a simulation would exist and be in play.

It doesn’t say much to us about how we should live our lives because of this – pretty much just try to maximize your happiness and minimize bad things in the world. But to me the likelihood that we are in such a simulation is almost self-evident.

It’s not ‘all about you’ (or even humanity as a concept) – we are just the dust bunnies under the bed, but that doesn’t mean that our life cannot be fulfilling and interesting.

    just my $0.02

  55. fred Says:

    Job #53

    yea, I don’t know.
    My personal feeling is that consciousness has to do with the immediate formation of memories and not all that much with cognition/intelligent thinking (although thinking does rely on stable memories).
It’s almost too obvious to mention, but, by definition, things have to be in our awareness to be remembered (remembering something that never happened is a hallucination), and remembering something is very close to the initial event when it originally appeared in our consciousness (remembering a piece of classical music is very similar to listening to it).
And then when we don’t remember something, we can’t tell for sure we ever experienced it in consciousness; we often just assume we weren’t conscious (like when waking up from surgery).
    I can imagine what conscious experience would be like with a very low IQ (lol), but not so much without the ability to make short term memory. Consciousness seems to require at the very least some kind of short term memory (at the second scale or so), long term memory is a different matter (that’s more required for cognitive thinking).

  56. fred Says:

    James Cross #52

    “If you define mind and matter as opposites, of course, you can’t ever explain how mind comes from matter.”

    The confusion is to take matter for granted, and then wondering about the mind… when in reality it’s the other way around – the mind is the only thing we know is 100% “real”, while matter is nothing but a word slapped onto an apparition inside our mind. And this is at the crux of this “simulation hypothesis” question – we just can’t distinguish matter in the “real world” from matter that would be part of a simulation… the two are just coming out from correlations between patterns in our streams of perception.
Another way to put it is that even if the real world isn’t a simulation, what we call matter is still an interpretation of patterns, and we can’t just assume we’re actually directly perceiving an external reality. Just like when I play a game in VR, the objects I pick up and manipulate in 3D space are really just living as bits in the linear RAM of the system. What we call matter in the real world could be mapping to something even weirder that we can’t conceive (vibrations in quantum fields is already pretty weird, but that’s just a mathematical model… just as matter is a perception model inside our brain).

  57. fred Says:

    The fact that the hard question of consciousness can’t be solved doesn’t mean that there aren’t any amazing truths to be learned about our minds. Such truths are often very hard to discover, not because they’re hidden deep inside, but because they’re actually too close, right at the surface…
    the real illusion of dualism isn’t between mind and matter, but between observer and apparition in consciousness.
    Sam Harris on this:

  58. red75prime Says:

I think the concept of a probability of being in a simulation requires the existence of a soul (or a metaphysical pointer “you are here”, or something like that). If there’s no such thing, the sample space of the probability space doesn’t contain different outcomes for “you are in the simulation” and “you are outside of the simulation”. It’s just one outcome, which is, assuming that there are multiple identical yous receiving identical inputs and some of them are simulated, “you are in the simulation and, simultaneously, you are not in the simulation”.

  59. skaladom Says:

    The simulation argument sounds compelling at first, but after thinking about it I have serious qualms, of a very pedestrian quantitative nature.

    Sure, theoretically speaking, a computer in our universe can presumably compute a close enough approximation of a simulation of physical laws. I appreciate Scott’s arguments that (If I Understand Right), there’s a good chance that our real-world physics are Turing-approximable to an arbitrary degree.

But the popular argument talks about simulating an entire universe, or at least an entire planet full of people. Consider the resources needed to do that. Currently the number of particles in a computer needed to simulate the physics of just a few particles is astronomical. Just getting the ratio down to the billions would probably be an unbelievable achievement. That means you’d need the resources of billions of entire planets in order to simulate just one. I don’t think you can hack it down much further — our hyper hosts could get away with not simulating the insides of the Earth in much detail, but then again, if they’re sitting on some kind of planet-equivalent in their universe, they probably can’t convert the core of their own planet into computronium either. That points towards some pretty high lower bounds on the comparative vastness of our hyper-universe, and some serious resource usage in it. There might also be questions of latency: if the computer is physically big enough (a billion times the size of our Earth comes to ~1 light year), then the speed of light within it becomes a limiting factor, and the fact that our physics appears to be non-local could make it a real problem to simulate a big enough section of space, such as an entire planet.
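The scale claim in that paragraph is easy to sanity-check. A rough Python check, using my own round numbers for Earth’s diameter and the light year, and reading “a billion times the size of our Earth” as a billion Earth diameters laid end to end:

```python
# Back-of-envelope check (my own rough numbers): does a billion
# Earth-sized units of hardware really span about a light year?
earth_diameter_m = 1.27e7   # Earth's diameter, ~12,700 km
light_year_m = 9.46e15      # one light year in meters

# A billion Earth diameters laid end to end:
length_m = 1e9 * earth_diameter_m
ratio = length_m / light_year_m
print(ratio)  # ~1.3, so roughly one light year, as the comment says
```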

Now, I don’t think computers capable of simulating physics just arise on their own anywhere, the way stars and planets do (except in the trivial sense that life also arises on its own…). So presumably in the hyper-universe you first need life to evolve far enough to develop intelligence and a sense of purpose, out of which comes the building of computers. Which means that the question “why would they do that” actually makes sense. I can see an advanced enough civilization doing all kinds of small or medium-sized simulations, their equivalent of what the LHC is to us. But why on Earth would someone build a light-year’s worth of computronium just to run a simulated universe, even if they could? Just to have us as virtual pets?

You can get over all of that just by positing that the hyper-universe might be astronomically bigger than ours. But then if you want to have a tower of simulations, you need even bigger and bigger universes on top, making the requirements of the simulation hypothesis bigger and bigger. The alleged symmetry breaks down between our own world and the ones that our post-human descendants might be simulating in their computers; they are just not comparable. A simulated world cannot be more than a toy compared to its host world.

  60. Dimitris Papadimitriou Says:

    For anyone interested:
The first (as far as I know) science fiction film that was based on the computer simulation hypothesis was Fassbinder’s legendary “World on a Wire” (1973, English title), decades before The Matrix and the like…
In my opinion, the whole subject has already been exhausted in science fiction; today’s discussions about the “hypothesis” do not bring anything new, they’re only pale echoes of the past…

  61. Richard Gaylord Says:

i can understand that philosophers might be interested in the ‘are we living in a simulation’ question because they apparently have nothing better to think about (though they actually DO have significant questions to address; see the writings of Aristotle), but scientists have real work to do, so they might better use their time to, as the saying goes, ‘shut up and calculate’.

  62. Stam Nicolis Says:

Yes, we are living in a simulation: a simulation of the laws of nature. Why isn’t that obvious?

  63. Robert Henry Says:

    I don’t see how the famous argument gets one millimetre off the ground.

    Set aside technical barriers. Suppose you yourself run trillions of simulations. Should you conclude that you likely live in one of them, based on some statistical argument? Of course not! By hypothesis, you are the simulator, not one of the simulacra. It doesn’t matter whether the experiences of the simulacra are distinguishable from yours. You don’t need evidence that you live outside one of your simulations—it’s part of the hypothetical; it’s a given.

    Now suppose your daughter runs trillions of simulations. Again, by hypothesis she’s simulator not simulacrum. And by hypothesis you and she inhabit the same universe, because by hypothesis she’s your daughter. Hence by hypothesis, neither of you lives in any of her simulations.

    Indeed, let our descendants create trillions of simulations; let Andromedans create trillions of simulations; let anyone in our universe, past, present or future, create untold trillions of simulations. The probability that we inhabit any of those simulations is exactly zero—simply by hypothesis, with no further argument or evidence needed. Hence these simulations can’t ground any statistical argument purporting to quantify the likelihood of our living in a simulation. Isn’t this obvious? Surely someone has said this before?

  64. fred Says:

    Our best shot at causing our simulation to “glitch” is probably by building a QC with a million qubits.

    The equivalent of spawning a million potatoes inside Starfield:

  65. K Says:

    At first I misread the ending of the first paragraph as “I promise this will last for a long time.” and found that prospect very exciting 🙂

    Of course arguments about whether we live in a simulation and reasoning about higher levels of simulation (for starters – do they even exist?) are less rigorous than, say, a proof in a well-defined mathematical theory. However, I think that there are some arguments that can be made with varying levels of rigor.

    Some examples:
    *) “I think therefore I am.”
    I find this one quite convincing, so I consider at least the question of my own existence pretty much settled.

*) Not directly about simulation but somehow related – most people are wrong about religion. Take a set of religions with mutually exclusive premises. In any such set at most one can be true. If we take the largest ones (Christianity, Islam, Hinduism, Buddhism, some folk religions), then no matter which one we choose as the correct one, the majority of people will fall outside it. Of course, for most of them we cannot rigorously prove that any single one of them isn’t literally true, but if one religion says that there is exactly one God, another says that there are exactly 7, and another says something incompatible still, then clearly at least 2 of them are wrong.

*) Some interesting deductions can be made if we start with the assumption that our experiences and existence are drawn uniformly from the set of all beings that can have such experiences, and then apply the mediocrity principle.
For example, with ~98% confidence we could estimate that in the future there will be somewhere between 1/100 and 100x of the humans that have ever lived so far (the Doomsday argument).

*) The same reasoning can be applied to derive the simulation hypothesis. I.e., if we observe that we can simulate beings that have our kind of observations / experiences / consciousness (this is not yet really known, but for the sake of argument let’s assume we could) and we were to run a lot of such simulations, then I see that as strong evidence pointing towards us ourselves being simulated.
We can represent the situation with a tree where each vertex is a universe. If in universe X there is a simulation of universe Y, then Y is a child of X. The root would be the “real” universe, which is not simulated. The question of whether we live in a simulation or not is the question of whether we live at the root of this tree. However, the problem is that we really do not know anything about this tree, except that it has at least one vertex (the one we live in). It could easily be the case that this is the only vertex in the tree.
We can assign a number to each vertex – the number of human-like (as in, can have experiences like ours and reason about whether they live in a simulation etc.) beings in that universe – and assume that our vertex (the one we live in) is chosen proportionally to that number. Of course, if there is just one vertex this doesn’t change anything. Now suppose that we learn that our vertex is not a leaf but instead has child vertices with numbers assigned to them (such that their sum is significant relative to our vertex’s number). Would that be evidence towards us being in the root, or towards us not being in the root?
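The vertex-weighted sampling described above is easy to play with numerically. A toy Monte Carlo sketch, with made-up population numbers of my own:

```python
import random

# Toy version of the vertex-weighted tree above: one unsimulated "root"
# universe plus two simulated child universes. The population numbers are
# invented purely for illustration.
random.seed(0)
populations = {"root": 1000, "sim_A": 800, "sim_B": 300}

# Sample "which universe am I in?" proportionally to population.
trials = 100_000
names = list(populations)
weights = list(populations.values())
in_root = sum(1 for _ in range(trials)
              if random.choices(names, weights=weights)[0] == "root")

ratio = in_root / trials
print(ratio)  # close to 1000/2100 ≈ 0.476
```

The point of the toy model: as the simulated populations grow relative to the root’s, the sampled chance of finding oneself at the root shrinks accordingly.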

What could be some more examples of such arguments? And what implicit or explicit assumptions hidden in these arguments make them unreasonable?

  66. Jerome Says:

I pay no attention to the Simulation Hypothesis for sociological reasons: there seems to be a very close relationship between proponents of the hypothesis and social conservatives trying to rebrand their ideals as “longtermism” or some other kind of futuristic conservatism. At the end of the day, it sure seems like a lot of people use simulation or longtermist arguments to claim that abortion is bad because of its effects on future exponential growth of humanity across time and the multiverse, or to claim that hoarding all wealth into the 0.01% class is good because they can grow the economy in a way that optimally benefits the yet-to-be-born in the multiverse.

    It’s *very* coincidental to me that when someone tries to tell me about the simulation hypothesis, the conversation always somehow ends with them telling me that Elon Musk is the god-emperor of humanity and that unregulated capitalism is altruistic. Reason enough for me to disregard it all as baloney.

  67. K Says:

Robert Henry #63

Yet for any single one of the untold trillions of trillions of simulacra (with experiences indistinguishable from yours) the exact same argument would have led to the correct conclusion (albeit, you would claim, for the wrong reasons). You claim that it’s just you (and your fellow humans and Andromedans inhabiting the same universe, a negligible number anyways) for whom this argument leads to the wrong conclusion.

As for me, if I don’t know whether I am your fellow human in the one true unsimulated universe or a random simulacrum somewhere in the hierarchy of simulations (as our experiences are indistinguishable), I might just go with this argument; on average it leads to the correct conclusion for almost 100% of beings like me.

    I think the mediocrity principle explains this quite well.

    Isn’t this obvious? Surely someone has said this before?

    You could be the first one. You could also be the last one. More likely, you are somewhere around the middle.

  68. Philosopher Eric Says:

Scott #47: I suppose I’m not as versed in simulation hypothesis discussions as I should be. Perhaps this is why I overlooked the “perfect” condition in your original (#12) response to me. Given that “a priori” stipulation however, I do agree. Furthermore yes, fake empirical evidence is worse than none at all. I’ve spent a good bit of time criticizing the status quo computationalist/ functionalist/ illusionist perspective, though at least its advocates tend to take ownership of a proposal that’s unfalsifiable. The ones with the most integrity even concede a belief that paper with the right markings on it, converted to the right other marked paper, will create something that experiences what they do when their thumbs get whacked. So that’s heartening.

    As it happens, however, I will not give them a pass even given such integrity. Half right is still wrong. Because causality mandates that processed information which informs nothing will not exist informationally, I'm still able to observe that they're peddling unfalsifiable magic. Johnjoe goes one step further to propose something that processed brain information informs to exist as consciousness, and this EM field is quite testable. I assure you that there is no causal potential for any true "simulated consciousness" to exist, if consciousness happens to be electromagnetic.

    Though not flashy, he's an interesting guy. As a biologist in the late 90s at Surrey University he was perplexed by various biological quandaries. For example, how might a plant capture virtually all the light energy that hits a leaf, given that the energy must navigate a vast sea of chlorophyll? So he put together a presentation for his colleagues to suggest that this might occur by means of quantum mechanics. Yeah, I know… and it didn't go well! Fortunately, however, physicist Jim Al-Khalili came up afterwards to straighten him out on some things. Then the pair went on to institute a successful program at Surrey which now graduates doctorates in the emerging field of quantum biology.

    Back in the late 90s, when Johnjoe was writing his first book on the subject, he figured it would be appropriate to devote the final chapter to the Penrose and Hameroff quantum consciousness proposal. After reading their book, however, he realized that he couldn't include a proposal that flies in the face of QM experimental evidence. Here it also hit him that an electromagnetic field might instead do the trick. I don't know how he was able to hammer it out so quickly, and so close to the final product that he's been publishing papers on ever since, but he did write a consciousness chapter for that book, now framed classically rather than quantum mechanically.

  69. J.D. Says:

    We could run an ‘ancestor simulation’ of sorts right now. Just get a team of really dedicated actor-anthropologists, train them to be fluent in (say) Old English and proficient with Dark Ages technologies and ideologies, set them up in a remote corner of the world cordoned away from modernity, and then give them some newborn babies to raise. After a few decades, so long as the actors never break character or get caught slipping out for McDonald’s, those babies would grow up to be an approximation of ancient Britons. Put some hidden cameras all around the place and you could then study these faux ‘ancient’ people to your heart’s content.

    Of course, there are some problems that immediately come to mind:
    1) It’s extremely unethical to use people like this, tricking them for their whole lives so you can experiment on them without their consent, and forcing them to grow up without access to modern medicine or education.
    2) You probably wouldn’t learn anything about actual ancient people that you didn’t already know from the history books. These wouldn’t be actual ancient people, they would be modern people raised with what we currently know about the ancients. If we ‘discovered’ something from this experiment that we didn’t already know, e.g. that these faux ‘ancients’ used straw to clean their teeth, that wouldn’t actually prove that the real ancients used straw to clean their teeth.

    I submit that the same ethical and practical problems with this meatspace ancestor simulation apply to ancestor simulations run in the computronium cities of the far future, and thus that we are probably not ancestor simulations. In that hypothetical future, many and perhaps even all minds would be simulated minds, and so the prospect of raising a simulated mind in an ancestor-environment would be like raising a newborn baby that way for us – grossly unethical. And even if the far future society doesn’t share our current ethics, the same practical problem would apply – they wouldn’t really learn anything they didn’t already know.

    That said, perhaps the purpose isn't learning about the past, but rather something else, like making a truly immersive game environment for players to interact with. The observed quotidian tedium of life seems like bad game design, though; just imagine if you had to brush your teeth, clean the dishes, and wipe your ass every day in Grand Theft Auto. Or perhaps we are an accidental byproduct: they are really studying black holes, and we arose unobserved in an obscure corner of the simulation, like mold growing in the toilets at the Large Hadron Collider. But a universe simulation so large and detailed as to have sentience arise unintended seems like a waste of computational resources.

    In sum, I would bet on us not being ancestor simulations, not because I think simulating ancestors would be impossible, but rather because I don't think simulating ancestors would be common enough for it to be likely that we are simulated ancestors, for much the same reasons that meatspace ancestor simulations are not common (i.e., nonexistent) today.

  70. Danylo Yakymenko Says:

    Even though I stand against the simulation hypothesis I must admit there exists a very strong argument FOR it.

    It’s the outstanding human desire to create simulated realities for other beings.

    It’s not just games, VR or AR. I’m talking about social networks, which now use their power to guide people’s thoughts and likes. It’s the conventional media that select what to bring to focus and how to present it. It’s propaganda from governments that rule the world. Heck, even a simple lie that you tell to other man create an altered reality for them.

    Altered realities are not what people usually mean when they talk about the simulation hypothesis. But is there a difference, really? I think that living in a reality which is pictured for you by TV or other sources is, from a pragmatic perspective, the same as living in a totally simulated reality. There are reports that today more than 20% of young Americans think the Holocaust is a myth. Aren't they living in a simulated reality already?

    Now imagine a future where Earth is populated by AI-augmented cyborgs that are millions of times smarter than today's humans. Why wouldn't they create simulated realities, just like their ancestors did?

  71. fred Says:

    A computer creates the illusion of a rich world of different programs running all at once, but what’s really happening is that all programs are just frozen, their state stored in memory, except for the one program that’s being advanced by the CPU at a given moment, for a tiny slice of time. For that time interval the CPU is the incarnation of that program, having only access to the state and memory of that one program… then this program is put to sleep and the next one is taken out of its slumber, loaded into the active registers of the CPU, and for the next tiny slice of time the CPU is that other program, etc.

    If the world is a simulation then what we call God is the one and only consciousness, he is the equivalent of the computer’s CPU, and all of us are the programs, thinking we’re all independent conscious agents acting separately, when actually we’re all the God/CPU switching identity a googol times a second.
    So we’re all different and the same at once, our personal memories make us unique but our consciousness unites us, and what happens to one of us is happening to all of us.

  72. Signer Says:

    Scott #50:

    But consciousness is not an empirical phenomenon, or not only one: it’s (by definition) the thing that causes any empirical phenomena to be perceived. No matter what empirical discovery anyone makes, that discovery is still compatible with a “zombie world” that has the same phenomenon but where no one is home to experience it.

    Not directly related to the OP, but the Hard Problem of Consciousness is solved by panpsychism: when you are talking about the mysterious part of consciousness, you are talking about existence, and zombies can't make discoveries, because they don't exist.

  73. f3et Says:

    I come late to this discussion, but I am always surprised that no quantitative arguments are ever given. For instance, if the hypothetical universe simulating ours has a googolplex dimensions, or if their computers run on a googolplex of memory at a googolplex Hz, they still cannot solve the halting problem, but they can instantly find the proof of any theorem, if such a proof exists and can be written in our universe; this implies that no logical argument (like the necessity of non-computable physical laws) holds water.

  74. Scott P. Says:

    If the world is a simulation then what we call God is the one and only consciousness, he is the equivalent of the computer’s CPU, and all of us are the programs, thinking we’re all independent conscious agents acting separately, when actually we’re all the God/CPU switching identity a googol times a second.

    Why wouldn’t you call those all independent conscious entities? Wouldn’t they be, in the sense of Douglas Hofstadter’s “Who Shoves Who Around the Careenium?”

  75. Scott Says:

    fred #70: Or we could have polytheism, where the various gods are parallel processors who split the computational burden of simulation! 😀

  76. fred Says:

    Scott #74

    No, because that analogy wouldn't move the needle a bit in terms of explaining the apparent multiplicity of individual consciousnesses (you're just pushing it a level higher), with such paradoxes as "why am I me and not you?", which vanishes with my metaphor.
    Btw, for what it's worth, this was also the conclusion of Schrödinger when reasoning about consciousness…
    As another analogy, it's quite puzzling that individual fundamental particles like electrons are all both unique and exactly the same… this is why the young Feynman flirted with the idea that the multiplicity of electrons could be an illusion and that maybe there's really just one electron.

  77. Joshua Zelinsky Says:

    Jerome #65

    “I pay no attention to the Simulation Hypothesis for sociological reasons: there seems to be a very close relationship between proponents of the hypothesis, and social conservatives trying to rebrand their ideals as “longtermism” or some other kind of futuristic conservatism. At the end of the day, it sure seems like a lot people use simulation or longtermist arguments to claim that abortion is bad because of its effects on future exponential growth of humanity across time and the multiverse, or to claim that hoarding all wealth into the 0.01% class is good because they can grow the economy in a way that optimally benefits the yet-to-be-born in the multiverse.”

    This doesn’t remotely sound like what the people talking about longtermism mean when they say they are longtermists, but sounds like you’ve been reading what some people who don’t like the idea have labeled it as. It is not a good idea to treat all of the people you disagree with as some giant monolith, and an even worse idea to just let other people do it for you while not actually looking at the arguments or beliefs in question. It is also a bad idea to decide that because a set of beliefs are often politically connected to people one disagrees with or dislikes that therefore they must be wrong. In this particular situation, they are by and large not remotely affiliated, which makes this a doubly bad approach.

  78. Scott Says:

    Jerome #65: Following up on Joshua Zelinsky's response to you—I've never met or read a single person who defends the simulation hypothesis on the bizarre grounds you're talking about, not one. Maybe you hang out with a very different set of people and have, but our wildly different experiences here underscore the danger of deciding your answer to an intellectual question on sociological grounds.

  79. Oleg S Says:

    Hi Scott, what if we took all large simulation runs (e.g., molecular dynamics simulations of protein complexes, simulations of quark-gluon plasma, simulations of galaxies, etc.) with sufficient precision, and plotted, say, the total volume of stuff being simulated vs. the time frame of the simulation? What would you expect of the resulting graph? Could it say something about how easy or difficult it is to simulate different parts of the Universe? Do you know if someone has already done this?

  80. Scott Says:

    Oleg #79: It would entirely depend on what you wanted to learn from each simulation, which would determine resolution, timescale, etc. If you don’t have some way to standardize that, you’re just going to get a meaningless graph of apples-to-oranges comparisons.

  81. Oleg S Says:

    Scott #80: I guess most of these simulations would be a proxy for experiments – say, you have an idea of physical laws/starting conditions that you want to test, but you cannot really do an experiment, so you simulate, and compare to some outcome that you can access experimentally. I'm not sure I can think of any other reason to run a simulation (except for fun, of course). However, we can also get the compute that was required to run each simulation, and draw a few lines like "on 1 TFLOPS you can simulate a big molecule for a few nanoseconds, or a solar system for a few million years".

  82. Ben Standeven Says:

    @f3et #72:
    That’s pretty much why noone gives quantitative arguments. To do so, we would need to know the quantitative properties of the simulating universe, which we don’t.

    @K #64:
    Following the assumption that the simulation avoids complicated and irrelevant computations as an optimization, there should not be any nested simulations, because it is easier to perform an offline simulation and pipe the result in. So the tree probably only has one level of children; thus the presence of child nodes in our universe (with populations contributing to ours) would be evidence that our universe is the root.

  83. Ben Standeven Says:

    @Robert Henry #63:

    Looking at the Wikipedia article for the Simulation Hypothesis, I found a link to an article by Brian Eggleston making exactly that point: Bostrom cites the expected number of simulated humans as P(Nondoom) x Avg#of simulations x Avg pop of simulations, but Eggleston argues it should be P(other worlds exist) x P(Nondoom in another world) x Avg#of…, since the simulators don’t live in our universe. Of course, this will only be large if P(other worlds exist) is large.
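    To see how the two formulas come apart, here's a toy calculation (every number below is an invented placeholder, chosen only to make the arithmetic visible, not an estimate anyone has defended):

    ```python
    # Toy comparison of Bostrom's expected count of simulated people vs.
    # Eggleston's correction. All values are invented placeholders.
    p_nondoom = 0.5        # chance a civilization survives to run simulations
    avg_sims = 1000        # average number of ancestor simulations it runs
    avg_pop = 10**10       # average population per simulation

    # Bostrom-style expected number of simulated people:
    bostrom = p_nondoom * avg_sims * avg_pop

    # Eggleston's version discounts by the probability that OTHER worlds
    # (outside our universe) exist to run the simulations, since
    # simulations we ourselves run can't contain us:
    p_other_worlds = 0.01  # invented placeholder
    eggleston = p_other_worlds * bostrom

    print(bostrom, eggleston)
    ```

    The point is structural: Eggleston's count is always smaller by the factor P(other worlds exist), and it stays small unless that factor is large.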

  84. Ben Standeven Says:

    @fred #76:
    Your explanation doesn’t help, since we now have paradoxes like, “why am I process 362652 instead of 563274?”. In addition, there is no longer a reason to believe that consciousness exists at all, since it is not identified with my experiences.

  85. Barney Says:

    @James Cross #52

    “Learning, memories, and intelligent navigation in biological organisms seem to require consciousness and are correlated with it.”

    The obvious response of an interlocutor would be to say that the functional roles played by learning, memories, and navigation could, in principle, be played by the right kind of mindless computational system (why not the Robot from Searle's Chinese Room?). Can you explain why these capacities in particular require consciousness (viz., subjective experience) without begging the question (i.e., imbuing one of the capacities with an ineliminable subjective essence)?

    Further, what is the actual evidence that these capacities are correlated with consciousness in biological organisms? Perhaps we have effective theories of those capacities in other organisms. Perhaps they can be captured by some summary statistic to serve as one of the variables for the correlation. But what of the other, “consciousness” variable that is required? The reason the consciousness question is “hard” in a way disanalogous to gravity is that we do not have, and by definition never will have, epistemic access to the “consciousness data” of other minds. Instead what we have is, 1) our own subjective experience, 2) a straightforward if controversial anti-solipsist induction (that other humans have similar subjective experiences), 3) further inductions (about other biological systems) based on which similarities we think matter. Empirical evaluations like the mirror test are never better than the 3-ish assumptions they are based on.

    I have read and enjoyed Ginsburg and Jablonka’s work. I am even happy to say that, intuitively, I think the story they tell might be (directionally) right. But nothing they have done remotely addresses the basic challenges preventing a science of consciousness from becoming empirically well-founded. The relevant data simply cannot become accessible, by definition. Without such data, the chain of assumptions required to arrive at a theory is long. Each step might seem reasonable (though some steps are usually more tenuous than that), but none of the steps can be tested.

  86. Matteo Villa Says:

    Thank you for this post. Now, whenever someone asks me what I think about this, I'll just send them here instead of wasting three hours of my life.

  87. fred Says:

    Ben

    “Your explanation doesn’t help, since we now have paradoxes like, “why am I process 362652 instead of 563274?”.”

    Not at all, because there's no "I", except for the general "universal" property of consciousness (it doesn't matter if it's called God or not).
    The CPU in the analogy is just as convinced that, for a few nanoseconds, it’s process 362652, but then, for the next few nanoseconds, it’s process 563274. With no memory whenever there’s a “switch” since process state/memories are being entirely swapped.
    Again, I’m not claiming CPUs are conscious, it’s just an analogy about how processes on a computer can appear to be independent/simultaneous agents (on a coarser scale).

    “In addition, there is no longer a reason to believe that consciousness exists at all, since it is not identified with my experiences.”

    By definition consciousness exists; it's the only thing I'm certain of. It's not a matter of belief, it's the bottom truth of my being. Of everything else, I can't be sure (whether I'm awake or dreaming or hallucinating, living in a simulation running on bits or on vibrations in quantum fields, whether I'm the only conscious entity, whether I'm the only universal entity, whether everything else that seems conscious to me is actually conscious, …).
    And you’re correct, identifying our consciousness with our experiences is at the root the confusion of the self (the dualistic illusion), as an ego being the author of all our experiences (not the biographical self or biological self, etc). Consciousness is literally the apparition of experiences, not the experiences themselves. As an analogy (imperfect, by definition), a mirror isn’t defined by the reflections appearing in it – anything appearing in the mirror isn’t the mirror and a mirror can reflect anything except itself, a mirror is the word for the ability to reflect, and consciousness is a word for the ability to experience experiences… we can’t define it in terms of other concepts because it’s the bottom reality of what it feels to just “be”.
    See the Sam Harris interview I posted above (I can’t recap thousands of years of Hinduist/Buddhist intuition in this post).

  88. Stam Nicolis Says:

    Regarding the statement about "uncomputability at the Planck scale", it should be pointed out that it currently isn't known what the "Planck scale" actually means, beyond the fact that an object with mass equal to the Planck mass and size equal to the Planck length would be a black hole (assuming, however, that classical gravity makes sense at those scales, which is by no means as obvious as it sounds). In any event, a black-hole spacetime doesn't imply that it's impossible to perform computations in that geometry; far from it.
    On the other hand, any claim about the relation of any computational framework to consciousness has to provide a definition of what consciousness is supposed to be in that framework. If the definition of consciousness is a certain operational régime of microtubules, then the relation between this régime and what is intuitively understood as consciousness needs to be spelled out.

  89. fred Says:

    Note that the concept of “identity” runs very deep, not just in abstract discussions on consciousness.
    See for example the definition of fermion or statistical mechanics:

    https://en.wikipedia.org/wiki/One-electron_universe
    https://en.wikipedia.org/wiki/Gibbs_paradox

  90. Stam Nicolis Says:

    We don’t need “another” universe to simulate ours: we do simulations-in our heads-all the time, not, only, about our surroundings, but about fictional surroundings (that’s what fiction, science or not, is all about). We don’t “leave” our Universe when we do so. So we can imagine Universes different, in fact, from ours (e.g. with zero or negative cosmological constant and a different value for Newton’s constant) and can, even, work out the consequences, up to some point-all the while living in our Universe… And each of us exists in the “simulations” carried out by other people.

  91. D.C. Adams Says:

    If the ultimate answer is, “We’re living in a simulation/multiverse”, my reply is always, ok then… tell me how the simulation works beyond the veil of our current ignorance. What are the subroutine protocols running reality, can we hack it, does it run anti-virus software in the background, are upgrade versions and back-ups necessary, can we opt out of spam…etc. (these are serious questions).

    Yes, with a quantum sensor we can simulate the physics of reality, but this does not suggest that either platform fundamentally functions on different laws of physics or mathematics. It suggests we've reached the limits of our current interpretations of physical laws.

    *BTW – It was a pleasure seeing you at MindFest. Thanks for letting me ramble.

  92. Wyrd Smythe Says:

    “…our descendants are likely to have so much computing power that simulating 10^20 humans of the year 2024 is chickenfeed to them.”

    10^20 humans in 2024? Did I miss a memo?

  93. Scott Says:

    Wyrd Smythe #92: No silly, it would be 2024 in the simulations! The same humans could of course be simulated over and over, or different humans in different versions of 2024.

  94. Robert Henry Says:

    @ Ben Standeven #83

    Thanks so much! Eggleston does indeed make exactly the same point. He writes:

    “However, obviously we cannot count individuals from simulations that we ourselves run, because these simulated individuals don’t contribute to the possibility that we are in a simulated universe, since we know for sure that we are not them, since we created them. In fact, [the] only simulated individuals that can contribute [to] the probability that we are living in a simulated universe are individuals that we haven’t (and will not) create. In other words, only individuals that aren’t from our universe or from universes that we might eventually simulate can be counted, as these are the only individuals for which the principle of indifference holds.”

    Scott actually reaches a similar conclusion from a different argument. Scott said “we should have looked up” because if we’re being simulated it’s by a universe that’s bigger than the one we see around us. Eggleston and I say don’t bother looking down because whatever you see there is guaranteed not to be us, not because it’s smaller, but simply because it’s down.

  95. James Cross Says:

    Barney #85

    “…the functional roles played by learning, memories, and navigation could, in principle, be played by the right kind of mindless computational system (why not the Robot from Searle’s Chinese Room?). Can you explain why these capacities in particular require consciousness (viz, subjective experience) without begging the question (i.e., imbuing one of the capacities with an ineliminable subjective essence)”?

    You seem to be making a functionalist philosophical argument. My argument is more scientific. Yes, a mindless computational system can do everything a human brain can do. That's not the issue. The issue is how biological brains do what they do. The way a mindless computational system does it would likely be completely unlike how a brain does it.

    “We argue that consciousness originally developed as part of the episodic memory system—quite likely the part needed to accomplish that flexible recombining of information. We posit further that consciousness was subsequently co-opted to produce other functions that are not directly relevant to memory per se, such as problem-solving, abstract thinking, and language. We suggest that this theory is compatible with many phenomena, such as the slow speed and the after-the-fact order of consciousness, that cannot be explained well by other theories”.

    https://journals.lww.com/cogbehavneurol/fulltext/2022/12000/consciousness_as_a_memory_system.5.aspx

    There’s a section on the apparent lack of causal role for consciousness that suggests consciousness could be epiphenomenal with respect to perceptions, decisions, and actions, but play an important causal role in some other function, namely memory and learning.

  96. Wyrd Smythe Says:

    Scott #93: Ah, okay, got it. I figured I was missing something. 10^20 is a lot! A visible universe comprising one hundred billion stars in each of one hundred billion galaxies is 10^22 stars.

    As with the MWI, I’m staggered by the implied information content of a single object!

  97. Name Required Says:

    I noticed an interesting parallel, stemming from Chalmers’ argument: there’s a usual objection to the MWI that points out that we cannot, in principle, interact with other branches of the Universal Wavefunction, therefore a theory positing their existence is unfalsifiable, unphysical, and violates Occam’s razor. We can set up interactions between *future* branches (at least for microscopic objects, but the theory doesn’t rule out Wigner’s Friend either) and prove their physical existence, but never prove that past divergences ever occurred.

    The usual counterargument points out that Occam’s razor should be applied not to material entities such as atoms or wavefunction branches, but to physical laws. We can’t make any testable predictions about what happens inside the event horizon of a black hole, but there’s no reason to assert that physical laws suddenly change there or declare that a calculation of how much proper time it would take for an object to reach the singularity is unphysical.

    Similarly, there’s no need to introduce the wavefunction collapse mechanism that we can never observe in experiments that *we* set up, but that eliminates other branches when volumes of space containing ourselves are involved. And there’s no need to flat out refuse to discuss the existence or non-existence of such mechanism. Just shrug and assume that physical laws continue to work as usual even where we can’t experimentally test them.

    And also similarly, unless one has an actual objection to some of the premises of the Simulation Argument, I believe the prudent course of action is to shrug and assume that its conclusion is probably correct. Inventing metaphysical arguments for why it's wrong, or why it must not even be considered, is harmful, because those arguments surprisingly apply to very well established parts of physics too.

  98. Wes Hansen Says:

    Why don’t you discuss the Thukdam Projects being run by neuroscientist Richard Davidson here in the U. S. and Svyatoslav Medvedev, who has his PhD in theoretical physics and another in neural anatomy, in Russia?

    “Three days after his heart stopped, Geshe Lhundub Sopa was leaned upright against a wall, his odorless body perfectly poised, his skin fresh as baked bread. He looked like he was meditating, remembers Richard Davidson, a prominent neuroscientist and friend of the late Buddhist monk.

    Sopa, a tutor of the Dalai Lama’s in Tibet, moved in 1967 to Wisconsin, where he co-founded the Deer Park Buddhist Center and taught South Asian Studies at the University of Wisconsin.

    By conventional Western standards, Sopa died on August 28, 2014. Five days later, and two days after Davidson’s initial visit, the neuroscientist returned to Deer Park and observed his friend’s body a second time. “There was absolutely no change. It was really quite remarkable,” he said.

    By the standards of conventional Western medicine, Sopa was dead, though with a strangely preserved corpse. By the traditions of Tibetan Buddhism, Sopa’s body harbored a mind that remained very much alive. Like other accomplished Buddhists, Sopa was believed to have entered a meditative state known as thukdam, during which his consciousness would wisp away into a spare, luminous awareness.

    Thukdam, or thugs dam, in Tibetan, is an honorific meaning “one engaged in meditation.” For Sopa, this meditation, also known as “clear light,” lasted seven days before his body began to decay and was cremated.”

    https://tricycle.org/article/thukdam-project/

    Monks and nuns engaged in Thukdam have been known to stay in this state for upwards of 30 days in the hotter climates of India. The practice is really the pinnacle of the Six Yogas of Naropa and what they are actually doing is referred to as blending the Mother and Child Clear Lights. Davidson’s first idea was to check for weak electromagnetic signals in the brainstem, but they found nothing.

    https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2020.599190/full

    One would think that someone of Chalmers's caliber would be aware of these projects.

  99. Wes Hansen Says:

    I don’t believe in quantum computation, of course, but Wolfram has written a series of blog posts, each one a lengthy novella, on the Second Law and I think his Rule 30 argument addresses Sabine’s comment above. I spent way more time than I should have reading Wolfram’s series and I can accept his argument up to and including Brownian motion, but I’m not convinced the Second Law applies to Brownian motion. Scientists have witnessed sizable particles moving against gravitational gradients! I think the irreversibility comes in due to, well obviously, a symmetry reduction. I think Emilio Del Giudice’s Coherent Domains may have something to do with it.

  100. Barney Says:

    James Cross #95

    “You seem to be making a functionalist philosophical argument. My argument is more scientific. Yes, a mindless computational system can do everything a human brain can do. That’s not the issue. The issue is how do biological brains do what they do. The way the mindless computational system does it would likely be completely unlike how a brain does it.”

    There are both philosophical and scientific points to be made here. The philosophical point is indeed the functionalist one. You grant that a mindless computational system can do everything a human brain can do. That being the case, arguments of the form “we evolved consciousness because consciousness is necessary to implement biological function x” are off the table (since, arguendo, unconscious entities can also implement every such function). So, you are left with the claim that in biological organisms in particular, some particular set of functions are implemented in a manner which involves consciousness. So far, so good?

    Now we come to the scientific point. What kind of evidence would be required to figure out which biological functions are accompanied by conscious experience in animals? We cannot access the direct form of evidence since it consists of subjective experience. Instead, we have to infer that certain kinds of functions are accompanied by conscious experience*. How do we do that? I refer back to my steps 1, 2, and 3 above. That is as scientific as the approach to answering these questions can get. If we accept verbal reportability as a gold standard then, given continuing progress in neuroscientific measurement tools, I’m confident we can develop a reasonable theory of consciousness in human beings. But extrapolating that to a general theory requires assumptions which can never be supported by firm scientific evidence.

    * Note: I take the construction “accompanied by” to be neutral with respect to whether consciousness is causal or epiphenomenal. Personally, I think tokens of conscious experience are identical to tokens of brain processes, but the logical quandary facing someone attempting to make scientific progress in this domain does not hinge on that.

  101. James Cross Says:

    Barney #100

    You’re assuming without evidence that non-biological consciousness is possible.

    If consciousness is memory formation in the brain, then it is biological – like cell division, for example. If current theories of memory are correct, we would look for alterations in dendrites and dendritic spines.

    It may also mean that consciousness provides no cognitive advantage beyond memory formation. AI can’t have it but it doesn’t need it either.

  102. Wes Hansen Says:

    Goodness, I didn’t think for a minute that you would actually post my comment! Please allow me a couple of links, all related to your post and perhaps providing insight into your questions regarding microtubules. The links deal with the work of Del Giudice and colleagues; they extended the domain of validity of QED from gases to solids so they could theoretically study biological water, primarily; an interesting finding is that the electrical POTENTIAL modulates the phase of these coherent domains + molecules, and that phase is, of course, superluminal. Emergence of the Coherent Structure of Liquid Water is a nice introduction:

    https://www.mdpi.com/2073-4441/4/3/510

    And you can follow that up with some experiments on DNA:

    https://arxiv.org/pdf/1501.01620.pdf

    Del Giudice died prematurely, but the Water Journal posthumously published a wonderful article on science by Professor Del Giudice, “Prometheus: The Passionate Soul of Scientific Reason”:

    https://waterjournal.org/archives/delgiudice/

    Another interesting paper in the same vein, also relevant to this post is from Claudio Messori, a retired medical professional of some sort, and is published open-access (also available on his Researchgate profile):

    https://www.researchgate.net/publication/326225414_The_super-coherent_state_of_biological_water

  103. Ben Standeven Says:

    @Name Required, #97:

    So, applying Occam’s Razor to the simulation argument, we see that the simulation does not involve any sort of optimization; it’s just a brute-force physics computation.

    And of course, just on statistical grounds, it is highly unlikely that any simulator is watching you; after all, there are supposed to be billions of simulated people for every real one.

  104. Barney Says:

    James Cross #101

    I’m not assuming that non-biological consciousness is possible. I don’t know whether it is or not, and I’m confident that nobody else knows either. In fact, I haven’t put forward any positive proposal at all. What I am doing is pointing out that there are intractable problems limiting a scientific study of consciousness. That doesn’t mean a scientific study of consciousness can never make any progress, but it does mean that there is a limit to how strong the evidence can ever be for many of the claims we care about. Additionally, I am pointing out places where your overall view appears (to me) to be inconsistent. Of course, the error could be mine; that’s what conversation is for! To that end…

    “It may also mean that consciousness provides no cognitive advantage beyond memory formation. AI can’t have it but it doesn’t need it either.”

    If you think mindless computational machines can do everything that biological organisms can do (as you stated in #95), but you still doubt that the former would be conscious, then consciousness cannot provide any cognitive advantage at all.

    If, on the other hand, you think memory formation is a fundamentally biological process that could not be implemented in silico at all, then you cannot simultaneously hold that “a mindless computational system can do everything a human brain can do”.

    Can you see why I do not think your position is consistent?

  105. James Cross Says:

    Barney #104

    “Can you see why I do not think your position is consistent?”

    Nope.

    The short answer is that I don’t think AI will ever be conscious because what it can do isn’t conscious in humans either.

    But if you want to debate it more, I will sometime in the next week or two post something with a little more nuance on my website related to this. I don’t feel we should be taking up any more space on Scott’s website on an unrelated topic.

  106. David Chalmers Says:

    i’m a little late to the party, but for what it’s worth my recollection was that “suppose we build the simulation” was my response to your request “i’ll need an argument for why the hypothesis is meaningful”. i’m interested to hear your response to the argument! here’s a somewhat more elaborate version of it, from chapter 4 of my book “reality+: virtual worlds and the problems of philosophy”.

    —-

    Is the simulation hypothesis meaningless? We’ve seen that we could in principle get evidence for the simulation hypothesis. For example, the simulators could tell us we’re in a simulation, show us the program, and show how it controls the world around us. Some people think there could even be evidence in physics suggesting that we’re in a simulation. But as we saw in chapter 2, this sort of evidence involves imperfect simulations. In a perfect simulation, our experience will always be as it would be in an unsimulated world. So it’s hard to see how we could get evidence for or against the perfect simulation hypothesis. If we cannot, Carnap and the other Vienna Circle philosophers would say it’s meaningless.

    I think the Vienna Circle philosophers are wrong here. If we can’t get evidence for or against the perfect simulation hypothesis, this means at most that it isn’t a scientific hypothesis—one that we can test using the methods of science. But as a philosophical hypothesis about the nature of our world, it makes perfect sense.

    Once again, we can use the Simulation Riposte. Imagine that we create a perfect simulation ourselves. Inside the simulation, some sims might have an argument. Sim Bostrom says, “We’re in a simulation.” Sim Descartes says, “No we’re not! This is a nonsimulated reality.” Sim Carnap says, “This debate is meaningless! Neither of you are even wrong!”

    A proponent of the meaninglessness hypothesis sides with Sim Carnap, saying that the debate between Sim Bostrom and Sim Descartes is incoherent. Neither of the two is right. But that seems the wrong verdict; in fact, Sim Bostrom is right, and Sim Descartes is wrong. They are both in a simulation. Sim Bostrom will never get evidence that proves he’s right, but he’s right all the same.

    If you doubt this, then suppose we leave an imperfection in the simulation: a small “red pill” that’s difficult to find, but that when found gives definitive evidence of the simulation. Now, suppose Sim Bostrom and Sim Descartes one day find the red pill and get the evidence. Someone shows them the computer running the simulation and controlling their whole lives. Both of them will presumably agree that Sim Bostrom was right and Sim Descartes was wrong. And they will be right about this. So, at least in this case, the debate about whether they’re in a simulation is not meaningless.

    Now let’s change the story a bit. Suppose that Sim Bostrom and Sim Descartes never find the red pill, although they could have. Perhaps they start looking for evidence but die before finding the pill. Then we can say that if they’d found the red pill, they’d have discovered that Sim Bostrom was right and Sim Descartes was wrong. In this case, I think it’s clear that even though they didn’t discover the pill, Sim Bostrom is right, and Sim Descartes is wrong. So, again, the sims’ debate isn’t meaningless.

    Let’s change the story again. The creator of the simulation notices the imperfection in the simulation. She patches the bug, so the red pills disappear. Now it’s a perfect simulation. The two simulated philosophers lead the same lives they did in the previous case, with Sim Bostrom insisting they’re in a simulation and Sim Descartes insisting the opposite. Of course they never find a red pill, and they die without proof. I think it’s still pretty clear: Sim Bostrom was right all along, and Sim Descartes was wrong. Their lives are exactly the same as in the previous case. The mere existence of a red pill somewhere in the bowels of the simulation doesn’t make a difference as to who is right. Both Sim Bostrom and Sim Descartes are making perfectly meaningful claims about their world, even though neither of them can prove those claims.

  107. Scott Says:

    David Chalmers #106: Thanks for sharing that Reality+ passage spelling out your argument in more detail!

    My response is: OK, fine, you’ve given a valid argument for why the simulation hypothesis is metaphysically meaningful even if unfalsifiable. But once you tell me that the red pill has been patched away, you’ve sawed off any possible contact with science, and therefore I leave the matter to you and your colleagues. 😀

  108. James Cross Says:

    David Chalmers #106

    “But as a philosophical hypothesis about the nature of our world, it makes perfect sense”.

    But that really depends upon your philosophy, doesn’t it? If you are a pragmatist, then a meaningless scientific hypothesis would also be meaningless philosophically.

    So, the best that can be said is that it is meaningful to some philosophies.

    Suppose Sim Descartes found a green pill that proves definitively we are not living in a simulation. Sim Descartes would be right and Sim Bostrom wrong.

    Sim Carnap is right in any scenario because the existence of either the green or red pill would make the simulation hypothesis meaningful.

  109. Pascal Says:

    What if, when we run a simulation (in one of the currently existing video games, say), the characters in the simulation can experience pain, pleasure, etc.? If this is the case, we should all immediately stop shooting characters in video games. This simulation hypothesis has meaningful consequences, namely ethical ones.

    This hypothesis may look absurd to many readers and it certainly looks absurd to me. But it’s not more or less absurd than the original simulation hypothesis itself.

  110. fred Says:

    Asking whether the electron went through the left slit or the right slit is outside of science too – the only thing science tells us about this question is that it’s outside of science, but in this case that’s clearly a very valuable piece of information!

    The question, then, is whether “the simulation hypothesis being outside of science” is also valuable information or just trivial/obvious.
    I think there are actually a lot of interesting discussions that spring out of this question, such as the nature of computation, consciousness emerging from computation, etc.

  111. fred Says:

    I think at some level this all reflects the widespread belief that philosophy is worth somewhat less than hard science, because the former isn’t (usually) falsifiable.
    But I think that the answer one gives correlates with maturity/wisdom.
    It’s natural to be more obsessed with hard science when one is 20 years old, because deconstructing reality like it’s some big clockwork is fascinating, and a satisfying distraction from the harsh realities of the human condition… but by one’s 40s and 50s, one typically starts to realize that the stuff outside of hard science is actually just as important, if not more so.

  112. fred Says:

    Science is all the stuff that makes your car work – very interesting, fun, and useful!
    But philosophy is more about “where do I want to drive my car today?”.
    Originally philosophy was about helping people figure how to live a good life, or cope with the challenges of life, at a very practical level.
    Even if philosophy isn’t falsifiable, it often gives people new shifts in perspective when looking at their life, and can make quite a difference.

  113. Laura Says:

    I like that listening and considering are part of your system. Prefacing a question to ChatGPT or Claude 3 with “Hypothetically, if…” is the only way I’ve found to bypass the standard “I can’t predict the future or speculate” type of answer.

  114. Uspring Says:

    Scott #107:

    Does the fact that a measurement on a superposition of states makes the other branches of the wave function unobservable imply that those branches don’t exist, or make them irrelevant to the discussion?
    Or does the fact that very distant parts of the universe are unobservable, due to its expansion, imply that they don’t exist? I think that present theories do occasionally have unobservable consequences.
    This reminds me of the property “grue” (green things turn blue on Jan 1st, 2030). It is, as of now, empirically valid, but it lacks a fundamental property of physical theories, namely invariance with respect to time. So for now it is an unfalsifiable prediction, but still nobody believes in it.

  115. Charbel Bejjani Says:

    Hi Scott- might be a bit late but hopefully you’ll see this.

    I have a question: the amplitudes of the wavefunctions are supposed to be complex numbers, and most complex numbers are not computable.

    If we live in a simulation running on a digital (classical) computer in the next universe up, wouldn’t this mean that the amplitudes are actually totally computable with finite resources (i.e., that an algorithm can completely specify the numbers they represent)?

    Am I missing something here?

Comment Policies:

After two decades of mostly-open comments, in July 2024 Shtetl-Optimized transitioned to the following policy:

All comments are treated, by default, as personal missives to me, Scott Aaronson---with no expectation either that they'll appear on the blog or that I'll reply to them.

At my leisure and discretion, and in consultation with the Shtetl-Optimized Committee of Guardians, I'll put on the blog a curated selection of comments that I judge to be particularly interesting or to move the topic forward, and I'll do my best to answer those. But it will be more like Letters to the Editor. Anyone who feels unjustly censored is welcome to the rest of the Internet.

To the many who've asked me for this over the years, you're welcome!