“Did Einstein Kill Schrödinger’s Cat? A Quantum State of Mind”

No, I didn’t invent that title.  And no, I don’t know of any interesting sense in which “Einstein killed Schrödinger’s cat,” though arguably there are senses in which Schrödinger’s cat killed Einstein.

The above was, however, the title given to a fun panel discussion that Daniel Harlow, Brian Swingle, and I participated in on Wednesday evening, at the spectacular facility of the New York Academy of Sciences on the 40th floor of 7 World Trade Center in lower Manhattan.  The moderator was George Musser of Scientific American.  About 200 people showed up, some of whom we got to meet at the reception afterward.

(The link will take you to streaming video of the event, though you’ll need to scroll to 6:30 or so for the thing to start.)

The subject of the panel was the surprising recent connections between quantum information and quantum gravity, something that Daniel, Brian, and I all talked about different aspects of.  I admitted at the outset that, not only was I not a real expert on the topic (as Daniel and Brian are), I wasn’t even a physicist, just a computer science humor mercenary or whatever the hell I am.  I then proceeded, ironically, to explain the Harlow-Hayden argument for the computational hardness of creating a firewall, despite Harlow sitting right next to me (he chose to focus on something else).  I was planning also to discuss Lenny Susskind’s conjecture relating the circuit complexity of quantum states to the AdS/CFT correspondence, but I ran out of time.

Thanks so much to my fellow participants, to George for moderating, and especially to Jennifer Costley, Crystal Ocampo, and everyone else at NYAS for organizing the event.

29 Responses to ““Did Einstein Kill Schrödinger’s Cat? A Quantum State of Mind””

  1. jonas Says:

    Is there a transcript?

  2. Scott Says:

    jonas #1: Sorry, no! When I post a transcript, people complain that there’s no video; this time there’s video but no transcript. Anyway, thanks for commenting!

  3. Ian Says:

    It never occurred to me to actually think about why the term “firewall” was being used in this context (black hole firewall paradox). I thought your explanation was very clear, and I now appreciate the importance of this question a lot more. Does the involvement of those special one-way functions you described in the possible resolution of the paradox (making the computations take inordinate amounts of time) change how you feel about the possible existence of said functions?

  4. Scott Says:

    Ian #3: No, I was and remain extremely confident that OWFs exist. From a pure theoretical computer science perspective (i.e., ignoring the black hole aspect), this is simply yet another hardness construction based on OWFs, very far from the most surprising or dramatic one.
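    (To give a flavor of what "hardness based on OWFs" means in practice, here is a generic toy sketch in Python: my own illustration, not anything from the Harlow-Hayden construction itself. A one-way function is cheap to evaluate, while the only generic way to invert it is brute-force search over all inputs.)

```python
import hashlib

# A generic toy sketch (illustrative only, not the Harlow-Hayden
# construction): a cryptographic hash behaves like a one-way function.
# Evaluating f(x) takes one hash call; inverting it generically requires
# exhaustive search, whose cost grows as 2^n for n-bit inputs.

N_BITS = 17  # kept tiny so the demo finishes in about a second

def f(x: int) -> bytes:
    """Easy direction: a single hash evaluation."""
    return hashlib.sha256(x.to_bytes(8, "big")).digest()

def invert(target: bytes) -> int:
    """Hard direction: brute-force search over all 2^N_BITS inputs."""
    for x in range(1 << N_BITS):
        if f(x) == target:
            return x
    raise ValueError("no preimage found in range")

secret = 54_321
assert invert(f(secret)) == secret  # feasible only because N_BITS is tiny
```

    (Scaling N_BITS up to, say, 256 makes the same search astronomically infeasible, which is the sense in which hardness constructions "based on OWFs" get their strength.)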

  5. Job Says:

    I watched the discussion and enjoyed it well enough.

    I would normally pass on black-hole thought experiments, but the computational argument in there is intriguing.

    I imagine that we can’t just go from computationally-hard to physically-impossible, though the paper’s abstract suggests that it was viable in this case:

    We argue that the quantum computations required to do these experiments take a time which is exponential in the entropy of the black hole under study, and we show that for a wide variety of black holes this prevents the experiments from being done.

    How did they manage that? (No, I don’t want to read the paper)

  6. Scott Says:

    Job #5: In most cases, by arguing that the black hole would’ve already evaporated anyway, long before the experiment was complete.
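    (To see the scale mismatch concretely, here is a back-of-the-envelope sketch under illustrative assumptions of my own, not figures quoted from the paper: a solar-mass black hole has Bekenstein-Hawking entropy S on the order of 10^77, its Hawking evaporation time grows roughly like S^(3/2) in Planck times, while the decoding computation is argued to take on the order of 2^S steps.)

```python
import math

# Back-of-the-envelope comparison (illustrative assumptions, not figures
# quoted from the Harlow-Hayden paper): a solar-mass black hole has
# entropy S ~ 10^77; its evaporation time scales like S^(3/2) in Planck
# times; the decoding computation is argued to take ~2^S steps.
S = 1e77

log10_evaporation = 1.5 * math.log10(S)  # log10 of S^(3/2)
log10_decoding = S * math.log10(2)       # log10 of 2^S

print(f"evaporation time ~ 10^{log10_evaporation:.1f} Planck times")
print(f"decoding time    ~ 10^(10^{math.log10(log10_decoding):.1f}) steps")

# The decoder outlives the black hole by an absurd margin.
assert log10_decoding > log10_evaporation
```

    (The numbers have to be compared in logarithms, since 2^S itself is far too large to represent directly.)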

  7. fred Says:

    Scott #6

    Maybe we could also start to rely more on the argument that “the human race will have vanished anyway, long before experiment xyz is complete.”

  8. adamt Says:

    Finally able to take a little bit of time and watch this video after the 4th of July weekend, and glad I did 🙂

    I really enjoyed Scott’s characteristic humor on display, with Harlow cracking up throughout the talk. Scott being an expert in “the field of what we couldn’t do even with computers we still don’t have,” haha.

    One thing that strikes me is that appealing to computational complexity for why the firewall isn’t a problem after all seems, at base, an appeal to the non-existence of counterfactual definiteness: we don’t have to worry about whether spacetime breaks down at the singularity or at the event horizon, because even in principle this can never be *observed*, and the underlying assumption is that things that cannot in principle be observed are “not a problem.”

    When it comes to the multiverse and other in-principle non-observables, a lot of scientists wave their hands and say that this isn’t worth the time even thinking about… but here we have something in our own universe where lots of folks intuitively believe in counterfactual definiteness, and computational complexity is swooping in to put the firewall in the realm of the non-existent because it’s not observable.

    I don’t believe in counterfactual definiteness, but I wonder whether many of the physicists/computer scientists who hold that the firewall is “not a problem” have introspected on how they are relying upon the non-existence of counterfactual definiteness.

  9. Job Says:

    >In most cases, by arguing that the black hole would’ve already evaporated anyway, long before the experiment was complete.

    Don’t you have to exclude the possibility that a given instance might turn out to be efficiently solvable even though the problem is computationally difficult?

    Also, if the problem is in NP you could still try solutions at random. The odds of success would be extremely low, but non-zero, so there would still be a minute possibility of being able to carry out the experiment.

    What would happen then?

  10. Jay Says:

    In the same vein as Job #9: what if we construct specific black holes designed such that the odds of success would be high? Should we imagine that firewalls appear if and only if the instance turns out to be feasible?

  11. adamt Says:

    A question I’ve been pondering: is there some minimal change to the laws of physics (e.g., changes to the speed of light or the gravitational constant) that would make the computational complexity of Alice’s job commensurate with her jumping into the black hole before it evaporates, and thus either running into the firewall or not?

    Kinda like #9 and #10, but different in that if the answer to the above were affirmative, then this would imply that the computational complexity “answer” to the firewall problem is just an anthropic argument in disguise, no?

  12. Scott Says:

    Job #9 and Jay #10: A 10^-10^70 probability of creating a firewall is something no one really worries about, since quantum mechanics gives similar probabilities for a cheetah materializing in your living room and all sorts of other such events.

    And yes, we can’t rule out the possibility that the laws of quantum gravity would have some miraculous symmetry that made the “real” problem easy with high probability, even though the general case was hard. But if you look at the problem, it looks more likely that the opposite would be true—i.e. that the “real” problem would be even HARDER than the special cases for which we’re able to give hardness reductions! I’ll have lecture notes online very soon that explain it in more detail.

  13. adamt Says:

    Scott #12, that doesn’t sound very satisfying because cheetahs and firewalls are very different: one is commensurate with known laws of physics while the other seems to present a paradox asking us to give up some of our assumptions. Infinitesimal probability does not the paradox fix.

  14. Scott Says:

    Adam #11: There are changes to the laws of physics (e.g. turning off gravity) that would give you no black holes, let alone no firewalls. But I don’t typically regard statements about black holes as anthropic preconditions for my own existence!

    Adam #13: No, they’re totally commensurate. I’ll even make a stronger statement: given what we know about QFT and GR, there’s a 10^-10^70 probability or whatever that a firewall (that is, a breakdown of the geometry of spacetime) will occur right in front of you, right now.

  15. adamt Says:

    Scott #14, neither do I, which is why saying it is anthropic is for me another way of saying it isn’t a very satisfying answer 😉

    If that is indeed what our current knowledge of QFT and GR are saying, then I’d regard this as pretty good evidence that our current knowledge is… flawed. I can’t abide that the physical laws allow paradoxical solutions.

  16. Scott Says:

    Adam #15: A firewall isn’t literally a paradox. It’s just something that you wouldn’t expect to find at an event horizon, having only made low-energy probes, on a conventional understanding of GR and QFT. But ultimately, quantum gravity sets the rules.

    (If you accept the Harlow-Hayden argument, it means that the difference between quantum gravity and GR/QFT for this sort of experiment might ultimately not be observable anyway, for reasons of complexity. But even without that, still, no literal paradox.)

  17. Jay Says:

    Scott #12

    >the “real” problem would be even HARDER than the special cases for which we’re able to give hardness reductions! I’ll have lecture notes online very soon that explain it in more detail.

    Cool! But what about “artificial” problems? For example, suppose we create a black hole out of a large bunch of heralded photons, plus one unknown quantum state. Wouldn’t that simplify the computation enough so that the paradox appears with high probability?

  18. adamt Says:

    Scott #16,

    If the non-existence of a firewall isn’t a paradox, then what is the problem that Harlow-Hayden are trying to answer?

    From the talk, it seems the motivation for this whole problem is that GR/QFT in the context of a black hole seems to be pointing to where the laws of nature are not time reversible. Here is how I would summarize the history that I gleaned from the talk…

    The famous bet that Hawking conceded seemed to indicate that (most?) everyone agreed that the laws are time reversible and … black hole complementarity or something.

    Along came this thought experiment with Alice and the firewall and people looked again and said, “maybe we don’t have an answer to this challenge to the time reversibility of the laws of nature!”

    Ok, so the firewall is an invention – in that it IS NOT predicted or revealed by the underlying mathematics that the breakdown of spacetime should be at the event horizon as opposed to the singularity – to keep the laws of QFT from being violated, ie a paradox or inconsistency in QFT. The firewall is not a literal paradox, but the non-existence of a firewall WOULD BE a literal paradox, right? And there is nothing in the mathematics of the theory of GR that would point to the event horizon as having this breakdown of spacetime == firewall, right?

    But, this “firewall solution” to the paradox doesn’t satisfy anyone, because why should we be making up such ad hoc and cumbersome constructions that are not revealed by the mathematics of the theory, but are added ex post facto to overcome seeming inconsistencies, right?

    So along comes Harlow-Hayden saying, “Hey, we don’t need to invent any firewall, because the computational complexity in actually carrying out this thought experiment is such that it can never be done even in principle!”

    Now, if this is true, then I can’t see how Job #9 and Jay #10 are wrong in that the Harlow-Hayden answer ISN’T SATISFYING because the very paradox/inconsistency in QFT that was revealed by the thought experiment could be revealed by chance even if that chance is infinitesimally unlikely. Saying that in this small infinitesimally unlikely event that we’ll just forgo Harlow-Hayden and rely again upon the firewall non-answer just doesn’t seem satisfying. In this way Harlow-Hayden doesn’t seem up to the task of replacing the firewall, right?

    What have I got wrong here? Also, while the firewall thought experiment has disabused many of the notion that the time-reversibility challenge has been sufficiently answered… I’m not sure in what way the seeming inconsistency revealed by the thought experiment is related to the time-reversibility challenge of information destruction in black hole evaporation. Could you elucidate on that?

  19. Scott Says:

    Jay #17: It takes a lot more work than you said to create an artificial black hole with a firewall—in particular, it seems likely to require exponential pre-computation time—but yes, Oppenheim and Unruh wrote a paper precisely to argue that it could in principle be done. My own preferred response is to shrug and say, OK then, if you do that crazy pre-processing, then you can have your black hole with a firewall! But at any rate, Harlow-Hayden has done its job of explaining why we’re not going to see firewalls for ordinary, astrophysical black holes (and even for specially engineered black holes, it might take 10^10^70 years of preparation time before we can see a firewall).

  20. Scott Says:

    adamt #18: The question Harlow and Hayden are trying to answer is why ordinary black holes have interiors (i.e., why the AMPS argument isn’t relevant to them).

    Again: if ordinary black holes lacked interiors, that wouldn’t be a “paradox.” So they’d lack interiors, deal with it. What it would be, is a huge upheaval in our understanding of black holes. In science, the best solution is usually the most conservative one: the one that preserves as many past successes as possible, while still demarcating what the extreme situations are where the past successes would cease to apply. In this case, that means preserving the detailed picture of black hole physics that we had from GR and QFT, while clarifying what sorts of things (e.g., hypothetical exponential computational speedups) could cause that picture to break down.

    And, again, a ~10^-10^70 probability of a firewall, or some other crazy thing, is not only a complete non-issue, but completely expected: we had that even in ordinary QM, since 1926, before we said anything about the black hole information problem. I’m sorry if it “doesn’t seem satisfying” to you, but welcome to QM! 🙂

  21. adamt Says:

    Scott #20, so it seems that your position is basically that even had Harlow-Hayden turned out to be the reverse – that the computational complexity wasn’t a problem – then the firewall solution was a perfectly acceptable answer and that AMPS was no real problem at all. In other words, even if the probability were 1 rather than ~10^-10^70 for ordinary black holes not having interiors we should just take it as the answer to AMPS.

  22. Scott Says:

    adamt #21: Acceptable vs. unacceptable is not binary. For each problem, you go for the least crazy explanation available.

    If the Harlow-Hayden argument were unavailable, I think the next best option would be to say that yes, there’s some crazy processing you can do that would create a firewall, and yes, the processing could even in principle be done in polynomial time, but it’s still much less likely to happen by chance than an egg unscrambling itself. So for “natural” black holes, we still don’t expect to see firewalls. (I.e., the problem would still be thermodynamics-hard, even if not exponentially hard.)

    And if even that argument were unavailable … well, sure, one can conceive of a hypothetical universe, different from our universe, where no black holes had interiors, and where “all event horizons are firewalls” was the correct answer. To paraphrase the Yiddish proverb: if my grandmother had balls, her actually being my grandfather might indeed become the most parsimonious explanation. 🙂

  23. adamt Says:

    Scott #22, haha, I like the proverb 🙂 However, I think you forgot another option: admit we just don’t know, rather than affirmatively believe in crazy notions even if they are the least crazy we can imagine 🙂

  24. Scott Says:

    adamt #23: Admitting you don’t know—i.e., that you might need to change your mind later when better ideas or evidence become available—is sort of the price of admission to science, and certainly to a part of science like quantum gravity! So far, no one has even observed Hawking radiation, let alone Hawking radiation that encodes the infalling information, let alone a firewall. Arguably, it’s only in the last 6 months, with LIGO, that the existence of black holes themselves became established beyond all reasonable doubt.

    And yet, it’s still true that the main way we make progress, when direct experiments aren’t available, is by fearlessly working out the logical implications of the principles we’re already pretty sure about (like reversibility, linear quantum mechanics, the existence of one-way functions…). At least that way we learn where we might need to look for a breakdown of the principles, and we have a semi-coherent web of beliefs that lets us say just how surprising a new discovery is or would be. To me, the firewall discussion seems like an excellent example of this “radical conservatism” in action—i.e., of physicists improving their understanding by applying standard QM, GR, and QFT to some of the most extreme situations imaginable.

  25. Jay Says:

    Scott #19

    Thanks for the link!

    >even for specially engineered black holes, it might take 10^10^70 years of preparation time

    I’m not sure I get your point here. If we assume complexity is the actual reason why ordinary black holes don’t have firewalls or Alice-like paradoxes, then surely we should say:

    “it MUST take some crazy long precomputation time (or crazy tiny probability) for ALL black holes”

    …otherwise there would be some reason for some special black holes to escape the paradox, and then the complexity argument would appear supplementary rather than necessary, even for “ordinary” black holes.

    So, are you suggesting that, indeed, there’s no way to form these special black holes, or are you saying it may be possible but you don’t see any problem with that?

  26. Scott Says:

    Jay #25: I’m saying it might be possible, but if so I don’t necessarily see a problem with it. In particular, it wouldn’t preclude computational complexity being a good explanation for why firewalls can’t be created for ordinary, astrophysical black holes. As an analogy, the Second Law of Thermodynamics is an excellent explanation for why ordinary eggs don’t unscramble themselves, despite the fact that one could in principle engineer a special egg that did unscramble itself (and this would be many orders of magnitude easier than creating a black hole with a firewall!).

  27. Jay Says:

    Excellent analogy, thanks!

    PS: amusingly, it’s sort of possible to unscramble eggs.


  28. v Says:

    Well, if it means anything, the only parts of the video I sort of understood were your explanations, Scott, and nothing of what the other dudes said.

    I still do not get why people think there can’t be an entanglement between B and H, though… The way I understand it, the entanglement that existed between B and R was already destroyed by the measurement that established it. So now B is free to be entangled with H instead. Sort of like in quantum teleportation, perhaps: R got teleported to H or something.

    If there’s one thing I know about QM from my layman no-math approach, it’s that even with entanglement and apparent non-local correlations, no measurement ever actually visibly changes anything non-local. Not with quantum teleportation, not with entanglement swapping, not with quantum eraser experiments or anything else can there be a non-local effect physically observable by Bob that lets him know whether Alice did or did not do some specific measurement.

    So the notion of Alice’s measurements forcing spacetime in the BH to be destroyed just does not make sense. If it were true and Bob jumped into the BH with (or slightly before) Alice, he’d learn whether she did her firewall-causing measurement from what he finds inside. I cannot accept that this could be true, and I cannot believe serious scientists are even contemplating the possibility.

  29. Igor Pikovski Says:

    Hi Scott,

    I enjoy reading your blog, even though I never left a comment. But I think I can help to demystify the title they chose, as it relates to my own work (http://arxiv.org/abs/1311.1095): We studied decoherence due to gravitational time dilation, and several popular science articles reported on it under that title. Fittingly, in other places it ran as “Einstein saves Schroedinger’s cat”. Clearly this has nothing to do with the topic of your panel, but I assume someone liked this headline.