Author’s Note: Below is the prepared version of a talk that I gave two weeks ago at the workshop Quantum Foundations of a Classical Universe, which was held at IBM’s TJ Watson Research Center in Yorktown Heights, NY. My talk is for entertainment purposes only; it should not be taken seriously by anyone. If you reply in a way that makes clear you did take it seriously (“I’m shocked and outraged that someone who dares to call himself a scientist would … [blah blah]”), I will log your IP address, hunt you down at night, and force you to put forward an account of consciousness and decoherence that deals with all the paradoxes discussed below—and then reply at length to all criticisms of your account.
If you’d like to see titles, abstracts, and slides for all the talks from the workshop—including by Charles Bennett, Sean Carroll, James Hartle, Adrian Kent, Stefan Leichenauer, Ken Olum, Don Page, Jason Pollack, Jess Riedel, Mark Srednicki, Wojciech Zurek, and Michael Zwolak—click here. You’re also welcome to discuss these other nice talks in the comments section, though I might or might not be able to answer questions about them. Videos of all the talks should be available before long (update: Jess Riedel has announced that they’re now available).
(Note that, as is probably true for other talks as well, the video of my talk differs substantially from the prepared version—it mostly just consists of interruptions and my responses to them! On the other hand, I did try to work some of the more salient points from the discussion into the text below.)
Thanks so much to Charles Bennett and Jess Riedel for organizing the workshop, and to all the participants for great discussions.
I didn’t prepare slides for this talk—given the topic, what slides would I use exactly? “Spoiler alert”: I don’t have any rigorous results about the possibility of sentient quantum computers, to state and prove on slides. I thought of giving a technical talk on quantum computing theory, but then I realized that I don’t really have technical results that bear directly on the subject of the workshop, which is how the classical world we experience emerges from the quantum laws of physics. So, given the choice between a technical talk that doesn’t really address the questions we’re supposed to be discussing, or a handwavy philosophical talk that at least tries to address them, I opted for the latter, so help me God.
Let me start with a story that John Preskill told me years ago. In the far future, humans have solved not only the problem of building scalable quantum computers, but also the problem of human-level AI. They’ve built a Turing-Test-passing quantum computer. The first thing they do, to make sure this is actually a quantum computer, is ask it to use Shor’s algorithm to factor a 10,000-digit number. So the quantum computer factors the number. Then they ask it, “while you were factoring that number, what did it feel like? did you feel yourself branching into lots of parallel copies, which then recohered? or did you remain a single consciousness—a ‘unitary’ consciousness, as it were? can you tell us from introspection which interpretation of quantum mechanics is the true one?” The quantum computer ponders this for a while and then finally says, “you know, I might’ve known before, but now I just … can’t remember.”
I like to tell this story when people ask me whether the interpretation of quantum mechanics has any empirical consequences.
Look, I understand the impulse to say “let’s discuss the measure problem, or the measurement problem, or derivations of the Born rule, or Boltzmann brains, or observer-counting, or whatever, but let’s take consciousness off the table.” (Compare: “let’s debate this state law in Nebraska that says that, before getting an abortion, a woman has to be shown pictures of cute babies. But let’s take the question of whether or not fetuses have human consciousness—i.e., the actual thing that’s driving our disagreement about that and every other subsidiary question—off the table, since that one is too hard.”) The problem, of course, is that even after you’ve taken the elephant off the table (to mix metaphors), it keeps climbing back onto the table, often in disguises. So, for better or worse, my impulse tends to be the opposite: to confront the elephant directly.
Having said that, I still need to defend the claim that (a) the questions we’re discussing, centered around quantum mechanics, Many Worlds, and decoherence, and (b) the question of which physical systems should be considered “conscious,” have anything to do with each other. Many people would say that the connection doesn’t go any deeper than: “quantum mechanics is mysterious, consciousness is also mysterious, ergo maybe they’re related somehow.” But I’m not sure that’s entirely true. One thing that crystallized my thinking about this was a remark made in a lecture by Peter Byrne, who wrote a biography of Hugh Everett. Byrne was discussing the question, why did it take so many decades for Everett’s Many-Worlds Interpretation to become popular? Of course, there are people who deny quantum mechanics itself, or who have basic misunderstandings about it, but let’s leave those people aside. Why did people like Bohr and Heisenberg dismiss Everett? More broadly: why wasn’t it just obvious to physicists from the beginning that “branching worlds” is a picture that the math militates toward, probably the simplest, easiest story one can tell around the Schrödinger equation? Even if early quantum physicists rejected the Many-Worlds picture, why didn’t they at least discuss and debate it?
Here was Byrne’s answer: he said, before you can really be on board with Everett, you first need to be on board with Daniel Dennett (the philosopher). He meant: you first need to accept that a “mind” is just some particular computational process. At the bottom of everything is the physical state of the universe, evolving via the equations of physics, and if you want to know where consciousness is, you need to go into that state, and look for where computations are taking place that are sufficiently complicated, or globally-integrated, or self-referential, or … something, and that’s where the consciousness resides. And crucially, if following the equations tells you that after a decoherence event, one computation splits up into two computations, in different branches of the wavefunction, that thereafter don’t interact—congratulations! You’ve now got two consciousnesses.
And if everything above strikes you as so obvious as not to be worth stating … well, that’s a sign of how much things changed in the latter half of the 20th century. Before then, many thinkers would’ve been more likely to say, with Descartes: no, my starting point is not the physical world. I don’t even know a priori that there is a physical world. My starting point is my own consciousness, which is the one thing besides math that I can be certain about. And the point of a scientific theory is to explain features of my experience—ultimately, if you like, to predict the probability that I’m going to see X or Y if I do A or B. (If I don’t have prescientific knowledge of myself, as a single, unified entity that persists in time, makes choices, and later observes their consequences, then I can’t even get started doing science.) I’m happy to postulate a world external to myself, filled with unseen entities like electrons behaving in arbitrarily unfamiliar ways, if it will help me understand my experience—but postulating other versions of me is, at best, irrelevant metaphysics. This is a viewpoint that could lead you to Copenhagenism, or to its newer variants like quantum Bayesianism.
I’m guessing that many people in this room side with Dennett, and (not coincidentally, I’d say) also with Everett. I certainly have sympathies in that direction too. In fact, I spent seven or eight years of my life as a Dennett/Everett hardcore believer. But, while I don’t want to talk anyone out of the Dennett/Everett view, I’d like to take you on a tour of what I see as some of the extremely interesting questions that that view leaves unanswered. I’m not talking about “deep questions of meaning,” but about something much more straightforward: what exactly does a computational process have to do to qualify as “conscious”?
Of course, there are already tremendous difficulties here, even if we ignore quantum mechanics entirely. Ken Olum covered much of this ground in his talk yesterday (see here for a relevant paper by Davenport and Olum). You’ve all heard the ones about: would you agree to be painlessly euthanized, provided that a complete description of your brain would be sent to Mars as an email attachment, and a “perfect copy” of you would be reconstituted there? Would you demand that the copy on Mars be up and running before the original was euthanized? But what do we mean by “before”—in whose frame of reference?
Some people say: sure, none of this is a problem! If I’d been brought up since childhood taking family vacations where we all emailed ourselves to Mars and had our original bodies euthanized, I wouldn’t think anything of it. But the philosophers of mind are barely getting started.
There’s this old chestnut: what if each person on earth simulated one neuron of your brain, by passing pieces of paper around, and it took them several years just to simulate a single second of your thought processes? Would that bring your subjectivity into being? Would you accept it as a replacement for your current body? If so, then what if your brain were simulated, not neuron-by-neuron, but by a gigantic lookup table? That is, what if there were a huge database, much larger than the observable universe (but let’s not worry about that), that hardwired what your brain’s response was to every sequence of stimuli that your sense-organs could possibly receive? Would that bring about your consciousness? Let’s keep pushing: if it would, would it make a difference if anyone actually consulted the lookup table? Why can’t it bring about your consciousness just by sitting there doing nothing?
To these standard thought experiments, we can add more. Let’s suppose that, purely for error-correction purposes, the computer that’s simulating your brain runs the code three times, and takes the majority vote of the outcomes. Would that bring three “copies” of your consciousness into being? Does it make a difference if the three copies are widely separated in space or time—say, on different planets, or in different centuries? Is it possible that the massive redundancy taking place in your brain right now is bringing multiple copies of you into being?
Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker. In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry. What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point. So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc. But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.
You can probably see where this is going. What if we homomorphically encrypted a simulation of your brain? And what if we hid the only copy of the decryption key, let’s say in another galaxy? Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?
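(To give a concrete, if heavily simplified, sense of what “computing on data you can’t read” looks like, here’s a minimal Python sketch. It is not FHE—Gentry-style schemes handle arbitrary circuits and are vastly more involved—just textbook RSA used as a toy multiplicatively-homomorphic cipher, with made-up toy parameters.)

```python
# Toy illustration only: NOT Gentry-style FHE (which supports arbitrary
# circuits) and NOT secure. Unpadded "textbook" RSA happens to be
# multiplicatively homomorphic, so someone holding only ciphertexts can
# multiply the underlying plaintexts without ever decrypting them.

p, q = 61, 53                      # toy primes, far too small for real use
n = p * q                          # public modulus
e = 17                             # public exponent (coprime to (p-1)(q-1))
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+ modular inverse)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
ca, cb = encrypt(a), encrypt(b)

# To anyone without d, ca and cb look like meaningless residues mod n,
# yet they can still compute a valid ciphertext of the product a*b:
c_prod = (ca * cb) % n

assert decrypt(c_prod) == (a * b) % n    # 42: only the key-holder recovers the answer
```

The point of real FHE, of course, is that this sort of trick works not just for multiplication but for arbitrary computations—which is what makes the encrypted-brain scenario coherent in the first place.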
When we consider the possibility of a conscious quantum computer, in some sense we inherit all the previous puzzles about conscious classical computers, but then also add a few new ones. So, let’s say I run a quantum subroutine that simulates your brain, by applying some unitary transformation U. But then, of course, I want to “uncompute” to get rid of garbage (and thereby enable interference between different branches), so I apply U⁻¹. Question: when I apply U⁻¹, does your simulated brain experience the same thoughts and feelings a second time? Is the second experience “the same as” the first, or does it differ somehow, by virtue of being reversed in time? Or, since U⁻¹U is just a convoluted implementation of the identity function, are there no experiences at all here?
Here’s a better one: many of you have heard of the Vaidman bomb. This is a famous thought experiment in quantum mechanics where there’s a package, and we’d like to “query” it to find out whether it contains a bomb—but if we query it and there is a bomb, it will explode, killing everyone in the room. What’s the solution? Well, suppose we could go into a superposition of querying the bomb and not querying it, with only ε amplitude on querying the bomb, and √(1−ε²) amplitude on not querying it. And suppose we repeat this over and over—each time, moving ε amplitude onto the “query the bomb” state if there’s no bomb there, but moving ε² probability onto the “query the bomb” state if there is a bomb (since the explosion decoheres the superposition). Then after 1/ε repetitions, we’ll have order-1 probability of being in the “query the bomb” state if there’s no bomb. By contrast, if there is a bomb, then the total probability we’ve ever entered that state is (1/ε)×ε² = ε. So, either way, we learn whether there’s a bomb, and the probability that we set the bomb off can be made arbitrarily small. (Incidentally, this is extremely closely related to how Grover’s algorithm works.)
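(For concreteness, here’s a minimal numerical sketch of that protocol—one qubit whose basis states are “don’t query” and “query,” written in Python with NumPy. In this toy model the rotation angle ε plays the role of the small amplitude above, and the bomb is modeled as a measurement of the “query” component after each round.)

```python
import numpy as np

def bomb_test(eps: float, bomb_present: bool):
    """One run of the repeated-small-rotation bomb test described above."""
    n_rounds = int(round(np.pi / (2 * eps)))        # roughly 1/eps repetitions
    rot = np.array([[np.cos(eps), -np.sin(eps)],    # moves ~eps amplitude onto |query>
                    [np.sin(eps),  np.cos(eps)]])
    state = np.array([1.0, 0.0])                    # start in |don't query>
    p_exploded = 0.0                                # prob. of ever triggering a live bomb
    for _ in range(n_rounds):
        state = rot @ state
        if bomb_present:
            p_query = state[1] ** 2                 # ~eps^2 per round
            p_exploded += (1.0 - p_exploded) * p_query
            state = np.array([1.0, 0.0])            # bomb didn't go off: projected back
    return state[1] ** 2, p_exploded                # (final prob. of |query>, prob. of explosion)

eps = 0.01
print(bomb_test(eps, bomb_present=False))   # ~(1.0, 0.0): we end in |query>, nothing exploded
print(bomb_test(eps, bomb_present=True))    # ~(0.0, 0.016): explosion prob. is only O(eps)
```

Either way, the final state tells us whether a bomb was there, while the chance of having set it off shrinks like ε.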
OK, now how about the Vaidman brain? We’ve got a quantum subroutine simulating your brain, and we want to ask it a yes-or-no question. We do so by querying that subroutine with ε amplitude 1/ε times, in such a way that if your answer is “yes,” then we’ve only ever activated the subroutine with total probability ε. Yet you still manage to communicate your “yes” answer to the outside world. So, should we say that you were conscious only in the ε fraction of the wavefunction where the simulation happened, or that the entire system was conscious? (The answer could matter a lot for anthropic purposes.)
You might say, sure, maybe these questions are puzzling, but what’s the alternative? Either we have to say that consciousness is a byproduct of any computation of the right complexity, or integration, or recursiveness (or something) happening anywhere in the wavefunction of the universe, or else we’re back to saying that beings like us are conscious, and all these other things aren’t, because God gave the souls to us, so na-na-na. Or I suppose we could say, like the philosopher John Searle, that we’re conscious, and the lookup table and homomorphically-encrypted brain and Vaidman brain and all these other apparitions aren’t, because we alone have “biological causal powers.” And what do those causal powers consist of? Hey, you’re not supposed to ask that! Just accept that we have them. Or we could say, like Roger Penrose, that we’re conscious and the other things aren’t because we alone have microtubules that are sensitive to uncomputable effects from quantum gravity. But neither of those two options ever struck me as much of an improvement.
Yet I submit to you that, between these extremes, there’s another position we can stake out—one that I certainly don’t know to be correct, but that would solve so many different puzzles if it were correct that, for that reason alone, it seems to me to merit more attention than it usually receives. (In an effort to give the view that attention, a couple years ago I wrote an 85-page essay called The Ghost in the Quantum Turing Machine, which one or two people told me they actually read all the way through.) If, after a lifetime of worrying (on weekends) about stuff like whether a giant lookup table would be conscious, I now seem to be arguing for this particular view, it’s less out of conviction in its truth than out of a sense of intellectual obligation: to whatever extent people care about these slippery questions at all, to whatever extent they think various alternative views deserve a hearing, I believe this one does as well.
The intermediate position that I’d like to explore says the following. Yes, consciousness is a property of any suitably-organized chunk of matter. But, in addition to performing complex computations, or passing the Turing Test, or other information-theoretic conditions that I don’t know (and don’t claim to know), there’s at least one crucial further thing that a chunk of matter has to do before we should consider it conscious. Namely, it has to participate fully in the Arrow of Time. More specifically, it has to produce irreversible decoherence as an intrinsic part of its operation. It has to be continually taking microscopic fluctuations, and irreversibly amplifying them into stable, copyable, macroscopic classical records.
Before I go further, let me be extremely clear about what this view is not saying. Firstly, it’s not saying that the brain is a quantum computer, in any interesting sense—let alone a quantum-gravitational computer, like Roger Penrose wants! Indeed, I see no evidence, from neuroscience or any other field, that the cognitive information processing done by the brain is anything but classical. The view I’m discussing doesn’t challenge conventional neuroscience on that account.
Secondly, this view doesn’t say that consciousness is in any sense necessary for decoherence, or for the emergence of a classical world. I’ve never understood how one could hold such a belief, while still being a scientific realist. After all, there are trillions of decoherence events happening every second in stars and asteroids and uninhabited planets. Do those events not “count as real” until a human registers them? (Or at least a frog, or an AI?) The view I’m discussing only asserts the converse: that decoherence is necessary for consciousness. (By analogy, presumably everyone agrees that some amount of computation is necessary for an interesting consciousness, but that doesn’t mean consciousness is necessary for computation.)
Thirdly, the view I’m discussing doesn’t say that “quantum magic” is the explanation for consciousness. It’s silent on the explanation for consciousness (to whatever extent that question makes sense); it seeks only to draw a defensible line between the systems we want to regard as conscious and the systems we don’t—to address what I recently called the Pretty-Hard Problem. And the (partial) answer it suggests doesn’t seem any more “magical” to me than any other proposed answer to the same question. For example, if one said that consciousness arises from any computation that’s sufficiently “integrated” (or something), I could reply: what’s the “magical force” that imbues those particular computations with consciousness, and not other computations I can specify? Or if one said (like Searle) that consciousness arises from the biology of the brain, I could reply: so what’s the “magic” of carbon-based biology, that could never be replicated in silicon? Or even if one threw up one’s hands and said everything was conscious, I could reply: what’s the magical power that imbues my stapler with a mind? Each of these views, along with the view that stresses the importance of decoherence and the arrow of time, is worth considering. In my opinion, each should be judged according to how well it holds up under the most grueling battery of paradigm-cases, thought experiments, and reductios ad absurdum we can devise.
So, why might one conjecture that decoherence, and participation in the arrow of time, were necessary conditions for consciousness? I suppose I could offer some argument about our subjective experience of the passage of time being a crucial component of our consciousness, and the passage of time being bound up with the Second Law. Truthfully, though, I don’t have any a-priori argument that I find convincing. All I can do is show you how many apparent paradoxes get resolved if you make this one speculative leap.
For starters, if you think about exactly how our chunk of matter is going to amplify microscopic fluctuations, it could depend on details like the precise spin orientations of various subatomic particles in the chunk. But that has an interesting consequence: if you’re an outside observer who doesn’t know the chunk’s quantum state, it might be difficult or impossible for you to predict what the chunk is going to do next—even just to give decent statistical predictions, like you can for a hydrogen atom. And of course, you can’t in general perform a measurement that will tell you the chunk’s quantum state, without violating the No-Cloning Theorem. For the same reason, there’s in general no physical procedure that you can apply to the chunk to duplicate it exactly: that is, to produce a second chunk that you can be confident will behave identically (or almost identically) to the first, even just in a statistical sense. (Again, this isn’t assuming any long-range quantum coherence in the chunk: only microscopic coherence that then gets amplified.)
It might be objected that there are all sorts of physical systems that “amplify microscopic fluctuations,” but that aren’t anything like what I described, at least not in any interesting sense: for example, a Geiger counter, or a photodetector, or any sort of quantum-mechanical random-number generator. You can make, if not an exact copy of a Geiger counter, surely one that’s close enough for practical purposes. And, even though the two counters will record different sequences of clicks when pointed at identical sources, the statistical distribution of clicks will be the same (and precisely calculable), and surely that’s all that matters. So, what separates these examples from the sorts of examples I want to discuss?
What separates them is the undisputed existence of what I’ll call a clean digital abstraction layer. By that, I mean a macroscopic approximation to a physical system that an external observer can produce, in principle, without destroying the system; that can be used to predict what the system will do to excellent accuracy (given knowledge of the environment); and that “sees” quantum-mechanical uncertainty—to whatever extent it does—as just a well-characterized source of random noise. If a system has such an abstraction layer, then we can regard any quantum noise as simply part of the “environment” that the system observes, rather than part of the system itself. I’ll take it as clear that such clean abstraction layers exist for a Geiger counter, a photodetector, or a computer with a quantum random number generator. By contrast, for (say) an animal brain, I regard it as currently an open question whether such an abstraction layer exists or not. If, someday, it becomes routine for nanobots to swarm through people’s brains and make exact copies of them—after which the “original” brains can be superbly predicted in all circumstances, except for some niggling differences that are traceable back to different quantum-mechanical dice rolls—at that point, perhaps educated opinion will have shifted to the point where we all agree the brain does have a clean digital abstraction layer. But from where we stand today, it seems entirely possible to agree that the brain is a physical system obeying the laws of physics, while doubting that the nanobots would work as advertised. It seems possible that—as speculated by Bohr, Compton, Eddington, and even Alan Turing—if you want to get it right you’ll need more than just the neural wiring graph, the synaptic strengths, and the approximate neurotransmitter levels. Maybe you also need (e.g.) the internal states of the neurons, the configurations of sodium-ion channels, or other data that you simply can’t get without irreparably damaging the original brain—not only as a contingent matter of technology but as a fundamental matter of physics.
(As a side note, I should stress that obviously, even without invasive nanobots, our brains are constantly changing, but we normally don’t say as a result that we become completely different people at each instant! To my way of thinking, though, this transtemporal identity is fundamentally different from a hypothetical identity between different “copies” of you, in the sense we’re talking about. For one thing, all your transtemporal doppelgängers are connected by a single, linear chain of causation. For another, outside movies like Bill and Ted’s Excellent Adventure, you can’t meet your transtemporal doppelgängers and have a conversation with them, nor can scientists do experiments on some of them, then apply what they learned to others that remained unaffected by their experiments.)
So, on this view, a conscious chunk of matter would be one that not only acts irreversibly, but that might well be unclonable for fundamental physical reasons. If so, that would neatly resolve many of the puzzles that I discussed before. So for example, there’s now a straightforward reason why you shouldn’t consent to being killed, while your copy gets recreated on Mars from an email attachment. Namely, that copy will have a microstate with no direct causal link to your “original” microstate—so while it might behave similarly to you in many ways, you shouldn’t expect that your consciousness will “transfer” to it. If you wanted to get your exact microstate to Mars, you could do that in principle using quantum teleportation—but as we all know, quantum teleportation inherently destroys the original copy, so there’s no longer any philosophical problem! (Or, of course, you could just get on a spaceship bound for Mars: from a philosophical standpoint, it amounts to the same thing.)
Similarly, in the case where the simulation of your brain was run three times for error-correcting purposes: that could bring about three consciousnesses if, and only if, the three simulations were tied to different sets of decoherence events. The giant lookup table and the Earth-sized brain simulation wouldn’t bring about any consciousness, unless they were implemented in such a way that they no longer had a clean digital abstraction layer. What about the homomorphically-encrypted brain simulation? That might no longer work, simply because we can’t assume that the microscopic fluctuations that get amplified are homomorphically encrypted. Those are “in the clear,” which inevitably leaks information. As for the quantum computer that simulates your thought processes and then perfectly reverses the simulation, or that queries you like a Vaidman bomb—in order to implement such things, we’d of course need to use quantum fault-tolerance, so that the simulation of you stayed in an encoded subspace and didn’t decohere. But under our assumption, that would mean the simulation wasn’t conscious.
Now, it might seem to some of you like I’m suggesting something deeply immoral. After all, the view I’m considering implies that, even if a system passed the Turing Test, and behaved identically to a human, even if it eloquently pleaded for its life, if it wasn’t irreversibly decohering microscopic events then it wouldn’t be conscious, so it would be fine to kill it, torture it, whatever you want.
But wait a minute: if a system isn’t doing anything irreversible, then what exactly does it mean to “kill” it? If it’s a classical computation, then at least in principle, you could always just restore from backup. You could even rewind and not only erase the memories of, but “uncompute” (“untorture”?) whatever tortures you had performed. If it’s a quantum computation, you could always invert the unitary transformation U that corresponded to killing the thing (then reapply U and invert it again for good measure, if you wanted). Only for irreversible systems are there moral acts with irreversible consequences.
This is related to something that’s bothered me for years in quantum foundations. When people discuss Schrödinger’s cat, they always—always—insert some joke about, “obviously, this experiment wouldn’t pass the Ethical Review Board. Nowadays, we try to avoid animal cruelty in our quantum gedankenexperiments.” But actually, I claim that there’s no animal cruelty at all in the Schrödinger’s cat experiment. And here’s why: in order to prove that the cat was ever in a coherent superposition of |Alive〉 and |Dead〉, you need to be able to measure it in a basis like {|Alive〉+|Dead〉, |Alive〉−|Dead〉}. But if you can do that, you must have such precise control over all the cat’s degrees of freedom that you can also rotate unitarily between the |Alive〉 and |Dead〉 states. (To see this, let U be the unitary that you applied to the |Alive〉 branch, and V the unitary that you applied to the |Dead〉 branch, to bring them into coherence with each other; then consider applying U⁻¹V.) But if you can do that, then in what sense should we say that the cat in the |Dead〉 state was ever “dead” at all? Normally, when we speak of “killing,” we mean doing something irreversible—not rotating to some point in a Hilbert space that we could just as easily rotate away from.
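(To spell out that parenthetical step a bit more explicitly—a sketch, under the simplifying assumption that the interference measurement works by mapping both branches onto one and the same pointer state, up to a phase:

$$ U|\mathrm{Alive}\rangle \;=\; V|\mathrm{Dead}\rangle \quad\Longrightarrow\quad |\mathrm{Alive}\rangle \;=\; U^{-1}V\,|\mathrm{Dead}\rangle, $$

so U⁻¹V is itself a unitary you could physically apply, and it rotates the “dead” cat back into the “alive” one.)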
(There followed discussion among some audience members about the question of whether, if you destroyed all records of some terrible atrocity, like the Holocaust, everywhere in the physical world, you would thereby cause the atrocity “never to have happened.” Many people seemed surprised by my willingness to accept that implication of what I was saying. By way of explaining, I tried to stress just how far our everyday, intuitive notion of “destroying all records of something” falls short of what would actually be involved here: when we think of “destroying records,” we think about burning books, destroying the artifacts in museums, silencing witnesses, etc. But even if all those things were done and many others, still the exact configurations of the air, the soil, and photons heading away from the earth at the speed of light would retain their silent testimony to the Holocaust’s reality. “Erasing all records” in the physics sense would be something almost unimaginably more extreme: it would mean inverting the entire physical evolution in the vicinity of the earth, stopping time’s arrow and running history itself backwards. Such ‘unhappening’ of what’s happened is something that we lack any experience of, at least outside of certain quantum interference experiments—though in the case of the Holocaust, one could be forgiven for wishing it were possible.)
OK, so much for philosophy of mind and morality; what about the interpretation of quantum mechanics? If we think about consciousness in the way I’ve suggested, then who’s right: the Copenhagenists or the Many-Worlders? You could make a case for either. The Many-Worlders would be right that we could always, if we chose, think of decoherence events as “splitting” our universe into multiple branches, each with different versions of ourselves, that thereafter don’t interact. On the other hand, the Copenhagenists would be right that, even in principle, we could never do any experiment where this “splitting” of our minds would have any empirical consequence. On this view, if you can control a system well enough that you can actually observe interference between the different branches, then it follows that you shouldn’t regard the system as conscious, because it’s not doing anything irreversible.
In my essay, the implication that concerned me the most was the one for “free will.” If being conscious entails amplifying microscopic events in an irreversible and unclonable way, then someone looking at a conscious system from the outside might not, in general, be able to predict what it’s going to do next, not even probabilistically. In other words, its decisions might be subject to at least some “Knightian uncertainty”: uncertainty that we can’t even quantify in a mutually-agreed way using probabilities, in the same sense that we can quantify our uncertainty about (say) the time of a radioactive decay. And personally, this is actually the sort of “freedom” that interests me the most. I don’t really care if my choices are predictable by God, or by a hypothetical Laplace demon: that is, if they would be predictable (at least probabilistically), given complete knowledge of the microstate of the universe. By definition, there’s essentially no way for my choices not to be predictable in that weak and unempirical sense! On the other hand, I’d prefer that my choices not be completely predictable by other people. If someone could put some sheets of paper into a sealed envelope, then I spoke extemporaneously for an hour, and then the person opened the envelope to reveal an exact transcript of everything I said, that’s the sort of thing that really would cause me to doubt in what sense “I” existed as a locus of thought. But you’d have to actually do the experiment (or convince me that it could be done): it doesn’t count just to talk about it, or to extrapolate from fMRI experiments that predict which of two buttons a subject is going to press with 60% accuracy a few seconds in advance.
But since we’ve got some cosmologists in the house, let me now turn to discussing the implications of this view for Boltzmann brains.
(For those tuning in from home: a Boltzmann brain is a hypothetical chance fluctuation in the late universe, which would include a conscious observer with all the perceptions that a human being—say, you—is having right now, right down to false memories and false beliefs of having arisen via Darwinian evolution. On statistical grounds, the overwhelming majority of Boltzmann brains last just long enough to have a single thought—like, say, the one you’re having right now—before they encounter the vacuum and freeze to death. If you measured some part of the vacuum state toward which our universe seems to be heading, asking “is there a Boltzmann brain here?,” quantum mechanics predicts that the probability would be ridiculously astronomically small, but nonzero. But, so the argument goes, if the vacuum lasts for infinite time, then as long as the probability is nonzero, it doesn’t matter how tiny it is: you’ll still get infinitely many Boltzmann brains indistinguishable from any given observer; and for that reason, any observer should consider herself infinitely likelier to be a Boltzmann brain than to be the “real,” original version. For the record, even among the strange people at the IBM workshop, no one actually worried about being a Boltzmann brain. The question, rather, is whether, if a cosmological model predicts Boltzmann brains, then that’s reason enough to reject the model, or whether we can live with such a prediction, since we have independent grounds for knowing that we can’t be Boltzmann brains.)
At this point, you can probably guess where this is going. If decoherence, entropy production, and full participation in the arrow of time are necessary conditions for consciousness, then it would follow, in particular, that a Boltzmann brain is not conscious. So we certainly wouldn’t be Boltzmann brains, even under a cosmological model that predicts infinitely more of them than of us. We can dust off our hands; the problem is solved!
I find it extremely interesting that, in their recent work, Kim Boddy, Sean Carroll, and Jason Pollack reached a similar conclusion, but from a completely different starting point. They said: look, under reasonable assumptions, the late universe is just going to stay forever in an energy eigenstate—just sitting there doing nothing. It’s true that, if someone came along and measured the energy eigenstate, asking “is there a Boltzmann brain here?,” then with a tiny but nonzero probability the answer would be yes. But since no one is there measuring, what licenses us to interpret the nonzero overlap in amplitude with the Boltzmann brain state as a nonzero probability of there being a Boltzmann brain? I think they, too, are implicitly suggesting: if there’s no decoherence, no arrow of time, then we’re not authorized to say that anything is happening that “counts” for anthropic purposes.
Let me now mention an obvious objection. (In fact, when I gave the talk, this objection was raised much earlier.) You might say, “look, if you really think irreversible decoherence is a necessary condition for consciousness, then you might find yourself forced to say that there’s no consciousness, because there might not be any such thing as irreversible decoherence! Imagine that our entire solar system were enclosed by an anti-de Sitter (AdS) boundary, like in Greg Egan’s science-fiction novel Quarantine. Inside the box, there would just be unitary evolution in some Hilbert space: maybe even a finite-dimensional Hilbert space. In which case, all these ‘irreversible amplifications’ that you lay so much stress on wouldn’t be irreversible at all: eventually all the Everett branches would recohere; in fact they’d decohere and recohere infinitely many times. So by your lights, how could anything be conscious inside the box?”
My response to this involves one last speculation. I speculate that the fact that we don’t appear to live in AdS space—that we appear to live in (something evolving toward) a de Sitter space, with a positive cosmological constant—might be deep and important and relevant. I speculate that, in our universe, “irreversible decoherence” means: the records of what you did are now heading toward our de Sitter horizon at the speed of light, and for that reason alone—even if for no others—you can’t put Humpty Dumpty back together again. (Here I should point out, as several workshop attendees did to me, that Bousso and Susskind explored something similar in their paper The Multiverse Interpretation of Quantum Mechanics.)
Does this mean that, if cosmologists discover tomorrow that the cosmological constant is negative, or will become negative, then it will turn out that none of us were ever conscious? No, that’s stupid. What it would suggest is that the attempt I’m now making on the Pretty-Hard Problem had smacked into a wall (an AdS wall?), so that I, and anyone else who stressed in-principle irreversibility, should go back to the drawing board. (By analogy, if some prescription for getting rid of Boltzmann brains fails, that doesn’t mean we are Boltzmann brains; it just means we need a new prescription. Tempting as it is to skewer our opponents’ positions with these sorts of strawman inferences, I hope we can give each other the courtesy of presuming a bare minimum of sense.)
Another question: am I saying that, in order to be absolutely certain of whether some entity satisfied the postulated precondition for consciousness, one might, in general, need to look billions of years into the future, to see whether the “decoherence” produced by the entity was really irreversible? Yes (pause to gulp bullet). I am saying that. On the other hand, I don’t think it’s nearly as bad as it sounds. After all, the category of “consciousness” might be morally relevant, or relevant for anthropic reasoning, but presumably we all agree that it’s unlikely to play any causal role in the fundamental laws of physics. So it’s not as if we’ve introduced any teleology into the laws of physics by this move.
Let me end by pointing out what I’ll call the “Tegmarkian slippery slope.” It feels scientific and rational—from the perspective of many of us, even banal—to say that, if we’re conscious, then any sufficiently-accurate computer simulation of us would also be. But I tried to convince you that this view depends, for its aura of obviousness, on our agreeing not to probe too closely exactly what would count as a “sufficiently-accurate” simulation. E.g., does it count if the simulation is done in heavily-encrypted form, or encoded as a giant lookup table? Does it matter if anyone actually runs the simulation, or consults the lookup table? Now, all the way at the bottom of the slope is Max Tegmark, who asks: to produce consciousness, what does it matter if the simulation is physically instantiated at all? Why isn’t it enough for the simulation to “exist” mathematically? Or, better yet: if you’re worried about your infinitely-many Boltzmann brain copies, then why not worry equally about the infinitely many descriptions of your life history that are presumably encoded in the decimal expansion of π? Why not hold workshops about how to avoid the prediction that we’re infinitely likelier to be “living in π” than to be our “real” selves?
From this extreme, even most scientific rationalists recoil. They say, no, even if we don’t yet know exactly what’s meant by “physical instantiation,” we agree that you only get consciousness if the computer program is physically instantiated somehow. But now I have the opening I want. I can say: once we agree that physical existence is a prerequisite for consciousness, why not participation in the Arrow of Time? After all, our ordinary ways of talking about sentient beings—outside of quantum mechanics, cosmology, and maybe theology—don’t even distinguish between the concepts “exists” and “exists and participates in the Arrow of Time.” And to say we have no experience of reversible, clonable, coherently-executable, atemporal consciousnesses is a massive understatement.
Of course, we should avoid the sort of arbitrary prejudice that Turing warned against in Computing Machinery and Intelligence. Just because we lack experience with extraterrestrial consciousnesses, doesn’t mean it would be OK to murder an intelligent extraterrestrial if we met one tomorrow. In just the same way, just because we lack experience with clonable, atemporal consciousnesses, doesn’t mean it would be OK to … wait! As we said before, clonability, and aloofness from time’s arrow, call severely into question what it even means to “murder” something. So maybe this case isn’t as straightforward as the extraterrestrials after all.
At this point, I’ve probably laid out enough craziness, so let me stop and open things up for discussion.