## Firewalls

Updates (Aug. 29): John Preskill now has a very nice post summarizing the different views on offer at the firewall workshop, thereby alleviating my guilt for giving you only the mess below.  Thanks, John!

And if you check out John’s Twitter feed (which you should), you’ll find another, unrelated gem: a phenomenal TEDx talk on quantum computing by my friend, coauthor, and hero, the Lowerboundsman of Latvia, Andris Ambainis.  (Once again, when offered a feast of insight to dispel their misconceptions and ennoble their souls, the YouTube commenters are distinguishing themselves by focusing on the speaker’s voice.  Been there, man, been there.)

So, last week I was at the Fuzzorfire workshop at the Kavli Institute for Theoretical Physics in Santa Barbara, devoted to the black hole firewall paradox.  (The workshop is still going on this week, but I needed to get back early.)  For some background:

I had fantasies of writing a long, witty blog post that would set out my thoughts about firewalls, full of detailed responses to everything I’d heard at the conference, as well as ruminations about Harlow and Hayden’s striking argument that computational complexity might provide a key to resolving the paradox.  But the truth is, I’m recovering from a nasty stomach virus, am feeling “firewalled out,” and wish to use my few remaining non-childcare hours before the semester starts to finish writing papers.  So I decided that better than nothing would be a hastily-assembled pastiche of links.

First and most important, you can watch all the talks online.  In no particular order:

Here’s my own attempt to summarize what’s at stake, adapted from a comment on Peter Woit’s blog (see also a rapid response by Lubos):

As I understand it, the issue is actually pretty simple. Do you agree that
(1) the Hawking evaporation process should be unitary, and
(2) the laws of physics should describe the experiences of an infalling observer, not just those of an observer who stays outside the horizon?
If so, then you seem forced to accept
(3) the interior degrees of freedom should just be some sort of scrambled re-encoding of the exterior degrees, rather than living in a separate subfactor of Hilbert space (since otherwise we’d violate unitarity).
But then we get
(4) by applying a suitable unitary transformation to the Hawking radiation of an old enough black hole before you jump into it, someone ought to be able, in principle, to completely modify what you experience when you do jump in.  Moreover, that person could be far away from you—an apparent gross violation of locality.

So, there are a few options: you could reject either (1) or (2). You could bite the bullet and accept (4). You could say that the “experience of an infalling observer” should just be to die immediately at the horizon (firewalls). You could argue that for some reason (e.g., gravitational backreaction, or computational complexity), the unitary transformations required in (4) are impossible to implement even in principle. Or you could go the “Lubosian route,” and simply assert that the lack of any real difficulty is so obvious that, if you admit to being confused, then that just proves you’re an idiot.  AdS/CFT is clearly relevant, but as Polchinski pointed out, it does surprisingly little to solve the problem.

Now, what Almheiri et al. (AMPS) added to the simple logical argument above was really to make the consequence (4) more “concrete” and “vivid”—by describing something that, in principle, someone could actually do to the Hawking radiation before jumping in, such that after you jumped in, if there wasn’t anything dramatic that happened—something violating local QFT and the equivalence principle—then you’d apparently observe a violation of the monogamy of entanglement, a basic principle of quantum mechanics.  I’m sure the bare logic (1)-(4) was known to many people before AMPS: I certainly knew it, but I didn’t call it a “paradox,” I just called it “I don’t understand black hole complementarity”!

At any rate, thinking about the “Hawking radiation decoding problem” already led me to some very nice questions in quantum computing theory, which remain interesting even if you remove the black hole motivation entirely. And that helped convince me that something new and worthwhile might indeed come out of this business, despite how much fun it is. (Hopefully whatever does come out won’t be as garbled as Hawking radiation.)

For continuing live updates from the workshop, check out John Preskill’s Twitter feed.

Or you can ask me to expand on various things in the comments, and I’ll do my best.  (As I said in my talk, while I’m not sure that the correct quantum description of the black hole interior is within anyone’s professional expertise, it’s certainly outside of mine!  But I do find this sort of thing fun to think about—how could I not?)

Unrelated, but also of interest: check out an excellent article in Quanta by Erica Klarreich, about the recent breakthroughs by Reichardt-Unger-Vazirani, Vazirani-Vidick, and others on classical command of quantum systems.

### 111 Responses to “Firewalls”

1. Two Cultures | Not Even Wrong Says:

[…] Update: Scott Aaronson now has a blog posting about the KITP workshop here. […]

2. Nex Says:

Is information really conserved in the real world?

Take radioactive decays, for example: if they are indeed truly random, they create information all the time. Let’s rig a digital timer to a Geiger counter and place it next to a radioactive sample. The first decay starts the timer and the next one stops it. We end up with a string of bits representing the time between two decays – new information. If you believe information should be conserved, then can you tell me where it came from in this case?

3. jd Says:

I repeat my comment on Dr. Woit’s site.

Are there relevant ideas in PRL 110, 101301(2013)?

4. Scott Says:

Nex #2: Good point. The nature of measurement in quantum mechanics (i.e., where, when, and why do the amplitudes get converted to probabilities) might indeed be mysterious. But I think it’s really orthogonal (har, har) to what people mean by “unitarity” in the black-hole context, and it’s useful to keep the two issues separate. In the context of the black-hole information puzzle, a “breakdown of unitarity” would mean a breakdown that happens even for an isolated system (e.g., a black hole) that doesn’t include you, the observer. Crucially, that would entail a flat-out revision to the evolution rules of quantum mechanics, regardless of which interpretation you prefer.

In other words, everyone already knows that unitarity appears to break down when you get involved with your pesky Geiger counters—though some people (the many-worlders and decoherentists) insist that the breakdown is only apparent, and that the evolution remains strictly unitary from a “larger” perspective! The question that interests us here is whether a breakdown can also occur for an isolated physical system “doing its own thing.”

5. Scott Says:

jd #3: At the least, you could provide a link. 🙂

6. Scott Says:

(1) I should’ve given credit to Yaoyun Shi, who improved my original Ω(N^(1/5)) lower bound for the collision problem to Ω(N^(1/3)). I apologize for the oversight.

(2) I mishandled Eva Silverstein’s question about 20 minutes in. I was so ready for a question about why the quantum circuit model captures the full generality of what black hole physics can do, that I got thrown off track when Eva instead asked why circuits capture the specificity of black hole physics. (The answer, of course, being that they don’t—but that it seems black hole physics would need to be “exquisitely special” in order to evade hardness arguments like Harlow-Hayden’s or mine. Certainly, if one adopts the view that a black hole emitting Hawking radiation is conceptually little different from a burning lump of coal, then one would “generically” expect to get a hard problem. And I did eventually say as much, but it took me a long time and I wasn’t very clear.)

7. jd Says:

http://prl.aps.org/pdf/PRL/v110/i10/e101301

8. Michael Welford Says:

Scott

You want an isolated system showing irreversible and therefore nonunitary change? How about a perfectly insulated cavity radiator? I put some metal balls in – one of them very very hot. Through a tiny peephole I watch with amusement as details of the interior of the cavity gradually vanish from sight. Oh, no radiation is allowed in.

After shutting the peephole and allowing the interior of the cavity to reach equilibrium, I invite you in. I open the peephole and ask you to identify the metal ball that was so hot. Your reply is something like: “That’s an unfair question. I can’t see any metal balls. All I see is cavity radiation!”

I trust you to make the black hole analogies on your own.

9. Daniel Freeman Says:

To what extent can you alter the experience of an observer that’s fallen into the black hole, given 4?

Can you, for example, cause them to observe the event horizon dissolving?

10. Jay Says:

Scott,

1) would you accept as sound a physical TOE that is mathematically incoherent, iff the incoherences are too hard to compute?

2) If no, could you explain why Harlow and Hayden’s argument is not of this sort? If yes, what would you conjecture as lower and upper bound?

3) would the Harlow and Hayden’s argument hold in the case of two+ colliding black holes?

4) suppose Harlow and Hayden are right, shouldn’t we expect an infalling observer to gain huge computational power? (e.g. they have direct access to the result of a computation too hard to be solved)

11. Scott Says:

Michael Welford #8: On much the same basis, someone might object to Newton’s laws, on the ground that their form suggests the existence of perfectly reversible systems (pendulums that keep swinging forever, etc.), but we never see those in practice.

I think most people realize that the question here is not about what I could or couldn’t do in practice, but about the form of the laws of physics relevant to black holes: will the correct equations be reversible ones, in the same sense that the correct equations for a lump of burning coal (or for your metal balls in a cavity) are reversible? Or is there irreversibility, not only in practice, but even in the equations?

Now, you might object: if we only care about the form of the equations, and not about practical limitations, then why should we care about computational intractability a la Harlow and Hayden? That’s a good question. A glib answer would be: because computational intractability, while less “exalted” than outright physical impossibility, is nevertheless more exalted than mere practical difficulty. A better answer will have to await my response to Jay #10 below.

12. Scott Says:

Daniel Freeman #9: Well, if the black hole interior is “constructed out of” the exterior degrees of freedom (as the complementarity philosophy says), then by performing a suitable unitary transformation on the Hawking radiation of an old enough black hole, you ought to be able to effect a huge number of different unitary transformations on the interior. Probably not every possible unitary transformation on the interior Hilbert space: after all, some of the exterior degrees of freedom that encode the black hole interior should still be stuck near the event horizon, where you can’t access them (at least not semiclassically). What AMPS basically argued was that, after the black hole has evaporated at least half its mass as Hawking radiation, operating on the Hawking radiation should (“in principle”! 🙂 ) give you enough control over the interior to wreak havoc for an infalling observer: to cause that observer to experience, not the standard vacuum state of QFT, but a completely different vacuum state that would cause the observer to dissolve instantaneously.

Now, your question was about causing the infalling observer to “observe the event horizon dissolving.” So then I guess there are two questions: (1) what exactly you mean by that, and (2) whether there’s an argument, analogous to AMPS, that whatever you mean should be included in the set of unitary transformations that can be applied by acting on the Hawking radiation. (Of course, even if there’s no such argument, a suitable unitary might “happen” to belong to the set by chance.)

Perhaps not surprisingly, I have no idea how to answer (2), even assuming an answer to (1)! But if someone who knows the physics better is reading this comments section, maybe they’d like to chime in.

13. Scott Says:

Jay #10:

1) No, absolutely not. I don’t accept mathematical incoherence anywhere, for any reason. 😉

2) I already answered a similar question by Peter Shor over on Peter Woit’s blog, so let me repost my answer here.

> Nature is inconsistent, but because we only have limited computational power, we will never be able to catch Her in an inconsistency.

(Peter Shor’s summary of Harlow-Hayden)

No, I don’t think that’s what Harlow and Hayden are saying at all. It’s more like: “yes, semiclassical field theory might have to break down and get replaced by a consistent quantum theory of gravity, even in a low-energy regime where physicists thought that QFT would work fine. But if the breakdown would take something like ~2^(10^60) years to reveal, then maybe we don’t have to worry all that much, if our goal is to reassure ourselves that we already more-or-less understood what happened at low energies.”

3) Why wouldn’t it? I.e., what’s the argument for why colliding black holes should change the situation?

4) That’s an extremely interesting question, and one that I raised explicitly at the end of my talk (inspired by conversations with Lenny Susskind). However, what I can tell you for sure is that Harlow-Hayden doesn’t say anything, one way or the other, about any computational abilities gained by the infalling observer.

Harlow-Hayden (and my small improvement to it) are strictly about the computational problem: how do you produce a certain kind of quantum state in the Hawking radiation, by acting unitarily on the Hawking radiation alone? So you could say that the technical content of Harlow-Hayden never “reaches inside the black hole” at all, not even conjecturally. Of course, a main motivation for their result comes from what it might suggest about theories that would also describe the interior—but that’s all speculation at this point.

It would be extremely amusing if you could indeed invert any injective one-way function in the physical universe, but you had to really want to know the answer—enough to sacrifice your life by jumping into a black hole! And then you could know the decoding of some particular cryptographic message (or whatever) in the final, doomed seconds of your life, but you couldn’t tell anyone else about it. And the idea that the black hole interior is a “scrambled re-encoding” of the exterior degrees of freedom certainly encourages such speculations. However, I have to reiterate that nothing in Harlow-Hayden gives evidence that the speculation is actually true: as I said, the argument simply never talks about the interior at all.

14. Michael Bacon Says:

. . . jumping into a black
hole! And then
you could know
the decoding of some particular
cryptographic message . . .
in the final, doomed seconds
but you couldn’t tell any
one else . . . .

Scott,

Did you compose this dark and mysterious imagery simply because you weren’t feeling well? Or is it a more fundamental issue that you’re pondering? 😉

15. Scott Says:

Michael #14: LOL! I’ve posted things on this blog that had intense emotion behind them (too much of it, actually), but that wasn’t one of them. I really was just trying to make vivid what this particular speculative scenario would mean for computation.

16. wolfgang Says:

@Scott #13

I did not follow your response to Peter Shor (and I wrote a response on P.W.’s blog, which you probably missed).

The “uncertainty relationship” does not “hide an inconsistency”, because the inconsistency arises only if one uses a classical description in the first place: If one uses a classical description where x and p exist simultaneously then the uncertainty relation “hides an inconsistency” and this is how Heisenberg’s thought experiment with the light microscope worked. However, in the final theory there are no inconsistencies.

The Harlow-Hayden paper shows imho that computational complexity “hides an inconsistency” but this is only because we do not have the final theory of quantum gravity yet.
However, I think the result is much more fundamental than you seem to think, because it points to the final theory in the same sense Heisenberg’s light microscope does.

And I think it is important that the H-H result shows that the calculation is different for Alice (who needs to jump into the b.h. in a finite time) than for Charlie, who remains outside and sees no problem with unitarity.

17. Igor Khavkine Says:

The talk of Bill Unruh should certainly be supplemented by the talk of Bob Wald who underscores the same conclusion in a somewhat complementary way.

18. Scott Says:

wolfgang #16: I think we’re saying similar things. As far as I can see, nothing is gained by even raising the possibility of an “inconsistency in the laws of physics”: that’s just a notion for people who like to confuse themselves and others. What physics does provide, over and over, are phenomena that would lead to inconsistencies in a crude or effective or earlier theory, but that are easily seen to do no such thing once we have a better theory. The point is now that, any time we encounter such phenomena, we also encounter the following metaquestion:

Why did the crude / effective / earlier theory work as well as it did? Why did we have to work as hard as we did, in order to see its inconsistency?

Past answers have included: because the new theory is extremely well-approximated by the old one (despite the old one’s inconsistency!) in the limit of small velocities, or large numbers of particles, or low energies, etc. If HH were right, then we could add a new kind of answer to this list: because the new theory is well-approximated by the old one unless we spend exponential computation time to distinguish the two.

Anyway, while I appreciate the vote of confidence :-), I don’t think one can claim that Harlow-Hayden is “fundamental” until we have the actual new theory for which HH is the thing that prevents you from seeing violations of the older, effective theory. For now, I’d say HH is nothing more (or less!) than a tantalizing possible link between black-hole physics and computational complexity.

19. wolfgang Says:

>> a tantalizing possible link between black-hole physics and computational complexity

the connection between cryptography and black hole physics is just so mind boggling that it has to be right … otherwise the designer of The Matrix overlooked a truly amazing punchline imho.

20. s.vik Says:

These theories are math. models which most likely do not match reality at all, so who cares??

Is there any way of testing if they are true or not??
By observing in-falling matter on to black holes??

If the pure-math. people can prove that the assumptions are inconsistent for just GR or just SM/QFT or other frameworks, what does that mean??

Perhaps one can prove that our way of looking at the world is wrong!!

21. Scott Says:

s.vik #20: Well, I’d say that for the past ~350 years, the main way physics has advanced has been by people looking for places where the current theories contradict each other, give absurd predictions, etc., and then trying to resolve the “paradoxes.” So, that’s what people are trying to do now with black holes, which are interesting as the main “conceptual laboratories” (besides the Big Bang, of course) where QM, QFT, GR, and thermodynamics all come together.

Testing is indeed a serious problem! If we could create small black holes in particle accelerators and watch them evaporate, we could at least confirm that Hawking radiation is a real phenomenon, and maybe even that the evaporation process is unitary. But it doesn’t look like nature will be so kind. And the chances of ever observing Hawking radiation from an astrophysical black hole, let alone manipulating the radiation in the ways AMPS, Harlow-Hayden, et al. are speculating about, seem pretty bleak.

So then, what’s the point? Well, physicists have already gotten quite far in understanding black holes over the last ~50 years, purely from the demand that the various theories they already know play nicely together. So it seems reasonable to expect that they could get further. And maybe the resulting theories would then make predictions about other things (like CMB anisotropies?) that we could go out and test.

Speaking for myself, though, I’m grateful not to have a dog in this fight! I’m happy enough if thinking about the Harlow-Hayden decoding problem leads to interesting new questions in quantum computing theory (questions having nothing to do with black holes), which indeed it has.

22. Michael Welford Says:

Scott #11

I need to learn to be less oracular in my comments. My main points are:
(1) It’s quite possible to have irreversible change in an isolated system (like the interior of the cavity).
(2) Pure thermal radiation can’t tell you anything about its source except for the temperature of that source (and the fact that the source has a well-defined temperature).
This applies to cavity radiation and to Hawking radiation. (Although Hawking seems to think otherwise.)

Anyway, if you’re trying to reconstruct a black hole’s history from Hawking radiation, you’re too late. The informative time was when the hole was coming to equilibrium.

You mention Newtonian physics. Let’s compare Newtonian with quantum physics.

The Newtonian rules tell us that if we know the forces on an object we can figure out its motion. The formulas for friction are ad hoc and ugly and approximate, but they give forces. We end up sacrificing the time symmetry that’s in the fundamental equations, but friction and other kinds of irreversibility still fit into the framework.

The traditional model of quantum mechanics involves intervals of unitary evolution interrupted by wave function collapse. The unitary part looks mathematically like a complicated rotation. The nonunitary collapse part looks like a geometric projection. It feels like we have two sets of rules. Furthermore, wave function collapse brings in stuff that physicists find to be not so esthetically pleasing. Things like time asymmetry, irreversibilty, information loss, nonlocality. It’s more relaxing to insist that all change is unitary, than it is to face so much unpleasantness.
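That contrast between the two kinds of change can be made concrete in a few lines of NumPy (a toy one-qubit sketch, nothing specific to black holes): unitary evolution is an invertible rotation of the state vector, while projective collapse is a singular matrix with no inverse.

```python
import numpy as np

rng = np.random.default_rng(42)

# A random pure state of one qubit
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Unitary evolution: a rotation of the state vector, always invertible
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
evolved = U @ psi
assert np.allclose(U.conj().T @ evolved, psi)  # U-dagger exactly undoes U

# "Collapse" onto |0>: a projection, which is singular and cannot be undone
P0 = np.array([[1.0, 0.0],
               [0.0, 0.0]])
collapsed = P0 @ psi
print(abs(np.linalg.det(U)))   # magnitude 1: rotations preserve all information
print(np.linalg.det(P0))       # 0: the |1> component of psi is lost for good
```

Whether that projection is fundamental or only effective is, of course, exactly what’s at issue in this thread.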

23. Jay Says:

Scott #13

3) well, sorry if it’s too naive, but as the computation takes about e^MM for an isolated black hole, I was thinking that maybe the computation for n colliding micro black holes should take n·e^MM, i.e. a time that may be shorter than the evaporation time once the micro black holes had collided.

1,2,4) thx – this and your dialogue with Wolfgang helps a lot

24. Scott Says:

Michael Welford #22: Now you seem to be simply denying, without argument, one of the most profound, well-established insights of 19th-century physics! Namely, that the reversible laws governing our universe can (and do) give rise to “irreversible-looking” phenomena, ultimately because of the specialness of our universe’s initial state. Furthermore, far from being overturned by quantum mechanics, this insight is dramatically upheld by it—something that I think the Many-Worlders and the Neo-Copenhagenists actually agree about. For anyone interested in more detailed exploration of these issues, I strongly recommend Sean Carroll’s From Eternity to Here.

25. Michael Welford Says:

Scott #24

Yes, I agree that the observable universe started from a special state. However, entropy-increasing phenomena aren’t just irreversible-looking. They’re irreversible, since we can’t bring the entropy of the universe back down.

I was trying to say that, in the traditional description of quantum mechanics, irreversible change, characterized by wave function collapse, looks mathematically different from reversible change, characterized by unitary evolution. This creates a temptation to (incorrectly) decide that all change is unitary.

Somehow, with all my obscurity, my central point got lost. Thermal radiation is so noisy that all it can tell you is the temperature of its source. So Hawking radiation is uninformative even in principle. This principle is easily testable with cavity radiation. Keep close enough to thermal equilibrium and you won’t be able to tell a cavity with an ‘X’ carved into the back from a cavity with an ‘O’. (Outside illumination is cheating, since it violates thermal equilibrium.)

SAFETY TIP: If anyone is actually going to do this experiment, I suggest buying an infrared camera and maintaining thermal equilibrium with warm water.

26. Scott Says:

Michael #25: No, I don’t think lack of clarity is the problem. You were perfectly clear; the trouble is just that (without further elaboration) you’re wrong! 🙂

The great insight of the statistical interpretation of thermodynamics was that, in theory, you could look at so-called “thermal” radiation, and work backwards to learn the original source of the heat. In other words, that there exists a clear sense in which “thermal” is just a word for our own ignorance. Our inability to reverse the heat diffusion (or unscramble the omelette, or unshatter the broken glass, whatever) is “just” a practical limitation, having to do with our failure to keep track of all the relevant microscopic details (or alternatively, to control the entire relevant Hamiltonian so that we could “set it in reverse”).

Decreasing the “entropy of the entire universe” is a different matter, since there’s no one able to stand outside the universe to pump heat out of it. But if you wait long enough for a Poincaré recurrence, then sure, you’ll even see the entropy of the universe (however you choose to define it) go down as well.

Now, I don’t claim that this picture is sacrosanct. Maybe someone, someday, will give a convincing dynamical account of the origin of quantum measurement, in which case you really would have “genuine irreversibility.” (Until that happens, most physicists will continue to think about quantum measurements as “just another instance of the Second Law,” reversible in principle and only irreversible in practice.) Or maybe you want to use cosmology to argue that, once some of the information needed to unscramble your omelette is rushing away from you at the speed of light, your omelette is now unscramblable in principle, and not only in practice—assuming, of course, that you live in a de Sitter universe, and not an AdS universe! I even toyed with that idea myself, in my “Ghost in the Quantum Turing Machine” essay.

My point is simply that those are cases that have to be made explicitly, if they’re made at all—clearly indicating how and why you’re choosing to depart from the conventional picture. In other words, irreversibility can’t just be asserted; you need to state the mechanism for it!

27. asdf Says:

Scott, what’s this monogamy of entanglement thing, that any particle can be entangled with just one other particle? That’s the opposite of what I’d been hearing for all these years, that there’s some amount of entanglement between everything, that there’s really just fields rather than particles, etc. Is monogamy an actual theorem of old-fashioned QM? Thanks.

28. Mike Says:

asdf@27,

I think it means that if two qubits A and B are maximally entangled, they cannot be correlated at all with a third qubit C. There is a trade-off between the amount of entanglement between two qubits and the extent to which correlation can be achieved with a third qubit. The key, I think, is the words “maximally” and “amount”, but please correct me someone if I’m wrong.

29. Scott Says:

asdf: Yes, monogamy of entanglement is an actual theorem of old-fashioned QM.  The easiest way to understand it is to consider what happens if you try to entangle a single qubit, A, with two other qubits B and C.  You can certainly produce what’s called the GHZ state:

(|0⟩A|0⟩B|0⟩C + |1⟩A|1⟩B|1⟩C)/√2

But then—darn! If you look only at A and B, you see only classical correlation, not entanglement, because the entanglement with C “decoheres” or “screens off” the entanglement between A and B. Likewise you see only classical correlation if you look only at A and C, since B decoheres their entanglement. Now that you’ve created this love triangle, the only way to see that they’re entangled is to do a measurement on all 3 of the qubits.

In other words, the “monogamy of entanglement” might better be called the monogamy of pairwise entanglement. As you correctly pointed out, there are huge amounts of entanglement all over the place—but the catch is that you can only detect the entanglement between A and B by measuring them alone if A and B are not also entangled with anything else!

Crucially, this is completely different than the situation with classical correlation. If A, B, and C are maximally classically correlated with each other, then if you look at any two of them (A-B, A-C, or B-C) and ignore the third, you’ll still see maximal correlation.
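If anyone wants to see this on a computer, here’s a little NumPy sketch (my own toy illustration, nothing from the workshop): it builds the GHZ state, traces out qubit C, and applies the Peres-Horodecki partial-transpose test to verify that the A-B marginal is classically correlated but unentangled, while a bare Bell pair on A and B fails the same test.

```python
import numpy as np

# GHZ state (|000> + |111>)/sqrt(2) on qubits A, B, C
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho_abc = np.outer(ghz, ghz)

# Trace out qubit C: index as (a, b, c, a', b', c') and contract c with c'
rho_ab = rho_abc.reshape(2, 2, 2, 2, 2, 2).trace(axis1=2, axis2=5).reshape(4, 4)
# The A-B marginal is the classical mixture (|00><00| + |11><11|)/2:
# perfectly correlated bits, with no off-diagonal (coherence) terms
print(np.round(rho_ab, 3))

def min_pt_eigenvalue(rho):
    """Smallest eigenvalue of the partial transpose (transposing qubit B).
    For two qubits, the state is entangled iff this is negative."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(pt).min()

print(min_pt_eigenvalue(rho_ab))  # non-negative: A and B alone are unentangled

# Contrast: a bare Bell pair (|00> + |11>)/sqrt(2) on A and B
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
print(min_pt_eigenvalue(np.outer(bell, bell)))  # negative (about -0.5): entangled
```

(For two qubits the partial-transpose test is known to be conclusive; for larger systems a non-negative value is only a one-way indicator.)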

30. s.vik Says:

Scott :Comment #21
Sounds promising. What about other assumptions which led to axions, susy, inflation. Are they on the table too??

How does AMPS affect merging large black holes? Some cases of these have been simulated just 2 or 3 years ago. Could we see the effects after a merger of galactic-sized black holes?? This should have happened by now many times.

How far do the black holes get inside each other before they know??? (dumb question, sorry, 🙂 ).

s.vik

31. Scott Says:

s.vik #30: I don’t think axions, susy, or inflation are “absurd possibilities” such that if your theory predicts them, then one of your assumptions must have self-evidently been false. Sure, any one of those three ideas might be wrong, but that’s very different from being absurd! Moreover, in the case of inflation, today there’s decent observational evidence that something like it is probably true.

The question about AMPS and two merging black holes (do the black holes “feel each other’s firewalls”? 🙂 ) is interesting—anyone who knows more physics than I do care to comment on it? It could be that, once you had a quantum description of a single black hole, giving a quantum description of two merging black holes would just be a “glorified homework exercise”—or it could be that the latter raises some genuinely new issue. I honestly don’t know!

32. Peter W. Shor Says:

We have numerical general relativity, which gives us a very good idea about how two merging black holes look from the outside in classical GR. If your theory of quantum gravity predicts you would observe anything other than the results of these simulations, I would say that there is an extremely good chance that your theory of quantum gravity is wrong.

In other words, observations of black hole behavior on a macroscopic scale tell us very little about quantum gravity.

On the other hand, a real theory of firewalls should be able to say what happens to the two firewalls when black holes merge. I don’t know whether any of the firewall theories people have are yet at this stage.

33. Zoran Says:

One (obvious?) possibility is that black holes are true fractals. E.g. a Cantor Cube has zero density (at the limit), but does encode its own description/information (computing its entropy is trivial).

34. wolfgang Says:

@ Scott

>> The question about AMPS and two merging black holes

I think the question about the backreaction of the firewall on the b.h. geometry would need to be solved first.
Does anybody have an idea what the stress-energy tensor looks like for a firewall (perhaps idealized as a null fluid)?

35. Daniel Harlow Says:

Nice post Scott! I’m glad to have somebody who actually knows complexity theory thinking about these things… The second week was also fun, although more of the fun happened outside of the lecture hall and thus unfortunately isn’t online. I’m looking forward to seeing how much the story has changed by November…

36. Luboš Motl Says:

Dear Scott,

your lists of “pieces” of topics that may play a role and some logical connections between them are ordered and sometimes correct and they indicate some knowledge but with your anti-black-hole-complementarity rants, you are showing that you don’t really have any clue which statements hold in quantum gravity and which don’t. You’re not as deluded as e.g. Peter Shor but you are still heavily deluded.

No mistake has been found in the black hole complementarity arguments and conclusions. Instead, Joe et al. suddenly presented a picture that violates not just complementarity but also about 10 additional important insights about quantum gravity that were made in recent decades.

In particular, he incorrectly believes that the metric tensor is a good variable up to arbitrarily short length scales, so that one may define operators background-independently. It ain’t so.

http://motls.blogspot.com/2013/08/one-cant-background-independently.html?m=1

He believes that the unitary evolution operator may be block-diagonalized to blocks that are linked to “decoherent” or “classically visualizable” subspaces of the Hilbert space. As made clear especially in papers by Hsu and by Nomura et al. in particular, it ain’t so.

http://motls.blogspot.com/2013/08/light-dark-matter-in-nmssm-and-non.html?m=1

This is related to the incorrect assumption that the black hole interior is a “tensor factor” of the full Hilbert space; you copied this meme as a part of one of your points. Because of the entanglement/correlation between the interior and the rest of the Hilbert space – entanglement arising from imposing the condition that a black hole of a given size exists somewhere – this splitting doesn’t hold.

He believes that the acts by Alice, before she falls into a black hole, aren’t allowed to influence her future while she is in the black hole. Of course they can, and there is no paradox and not even a conflict with causality. In fact, in the Einstein-Rosen-bridge-equipped picture of spacetime, locality isn’t violated at all.

http://motls.blogspot.com/2013/08/insiders-and-outsiders-debate-fuzz-or.html?m=1

His followers believe that the black hole interior has to be empty, as seen on the memes of the “frozen vacuum” and a privileged role that N=0 plays in various arguments by the firewall advocates. In reality, the N=0 occupation number for the infalling field modes is just the most likely, but otherwise equally good, value for an occupation number inside a black hole. Objects may fall inside the black hole for a while and some microstates – a low but nonzero percentage of them – describe black holes in such states.

http://motls.blogspot.com/2013/08/boussos-pseudoarguments-against-erepr.html?m=1

In general, Joe is treating the superpositions of macroscopically distinct microstates – Schrödinger cat states – incorrectly.

http://motls.blogspot.com/search?q=hsu&m=1&by-date=true

Quite generally, he always *assumes* that complementarity can’t be correct by assuming that the interior and exterior degrees of freedom can’t overlap.

And so on, and so on. It seems impossible to meaningfully argue about any of these things because not just one step but most of the steps made by AMPS and their advocates are wrong and even if one convinced Joe or others that they’re making a critical mistake, they would jump and rely on another mistake. It really looks like they must have been living in a different galaxy in the last 20 years.

Cheers
LM

37. Scott Says:

Lubos:

> You’re not as deluded as e.g. Peter Shor but you are still
> heavily deluded.

I wouldn’t mind having that for an epitaph on my gravestone. 🙂

38. Luboš Motl Says:

No problem, Scott. If I live long enough which is unlikely but not impossible, I will spray it on your gravestone. “He was always proud to be #2 in delusion right after the scary fat jerk from the MIT.” Is that OK? 😉

39. Roald Dahl Fan Says:

Just a question for the community here: Does Lubos do any actual research anymore and is he attached to any university, or is he just another crank shooting his mouth off after being banished to Eastern Europe? I take it from his arrogance that he hates himself and feels deeply insecure about the fact that people like Scott have tenured positions at top research schools and he doesn’t.

40. Luboš Motl Says:

Dear Roald, I don’t need to be attached to any university to do research, most of them are full of left-wing assholes, anyway.

Czech Republic isn’t Eastern Europe. It’s the very center of Europe by any measure. It’s Central Europe, it’s my home, and it has always been my home.

I left Harvard University mainly as a protest against the feminist sluts’ and professional blacks’ unrestricted terror against then Harvard president Lawrence Summers whom I respected and respect, unlike them.

41. Scott Says:

Roald Dahl Fan #39: Well, a hep-th search shows that Lubos hasn’t posted anything there since 2006. On the other hand, he’s an unbelievably prolific blogger and commenter, and as a fellow blogger, it’s hard for me to understand how he could blog at the rate he does were he not spending the majority of his time on it.

You know, after you’ve been reading Lubos for long enough (8+ years in my case), something interesting happens. You completely lose the ability to be shocked or offended by anything he says—no matter how hateful (or even misogynistic, racist, or threatening) it might be. It would feel like getting offended by the temper tantrums of a 2-year-old, or the outbursts of a Tourette’s-syndrome sufferer. Instead you just think, “oh, there goes our excitable Eastern … oops, I mean Central European friend again!”

So from that point forward, you can just strip-mine what he writes for whatever is valuable or insightful in it—which remains a nonzero fraction!—and ignore the rest. It’s in that spirit that I’ve chosen to leave Lubos’s comments here up, unless Peter asks me to remove them.

42. Bram Cohen Says:

Hey Scott, have you seen this paper? http://arxiv.org/abs/1210.1847 (My apologies if you’ve already been sent that link 100 times). It sounds like the authors are basically saying that the thing in the universe which would require the most computational power to simulate is the production of cosmic rays, so maybe there are subtle anomalies in those due to god’s computer getting overloaded. I’m guessing that you’ll find the god’s computer part of this outrageous – even I do, in fact, but anything which proposes more reasons to put money into cosmic ray detectors is fine by me.

43. Rahul Says:

Lubos #40:

Out of curiosity: Any links to your publications from recent years? Or have you stopped publishing too since journal editorial boards are left wing conspiracies too?

44. Scott Says:

Bram: No, I hadn’t seen that paper before. To me, the idea that the universe is not only a computer simulation, but that its simulated nature would reveal itself through the same sorts of errors that tend to occur with human beings’ present-day QCD simulations, is good fodder for an xkcd strip, or maybe even a hard sf novel (Greg Egan could do it). As a scientific hypothesis to be tested, it might need to wait its turn after the Easter Bunny and Sasquatch.

(For one thing, whoever programmed the universe presumably coded in quantum gravity! So right there, we know the simulation can handle enormously higher energies than lattice QCD codes are designed for. Unless you believe Peter Shor’s tongue-in-cheek suggestion, that trying to probe the energies where quantum gravity matters would simply cause the universe to crash.)

As for the notion that even totally ungrounded speculations are OK, if they lead to more funding for experiments that you like for other reasons … well, I always worry that that sort of thing will backfire. What happens after society funds the experiment, there’s no sign whatsoever of the phenomenon that never had any reason to be there in the first place, and then you have to request funding for the next experiment?

45. Peter w. Shor Says:

The idea that God would have an inadequate computer strikes me as somewhat blasphemous.

On the other hand, it’s possible that discreteness on the Planck scale could have the same effects on the laws of physics that lattice discreteness has on simulations of lattice QCD.

46. Luboš Motl Says:

Dear Scott and others, concerning the Central vs Eastern Europe, a similar irritation to mine was voiced as recently as a week ago by Václav Klaus, the Czech president until early 2013. See his London speech on mostly this topic:

http://www.klaus.cz/clanky/3396

47. Luboš Motl Says:

One more thing, Scott. Elsewhere, you say about Hawking’s bet concession:

“Though an obvious issue is that [Hawking] doesn’t say how large the amplitude [that the black hole doesn’t form in the first place] is: if it were nonzero but exponentially small, that wouldn’t seem to help much.”

I agree that he didn’t say how large it was. On the other hand, the second part of your sentence is – perhaps very unexpectedly – wrong.

It is actually sufficient to correct – through the contribution of black-hole-free intermediate states – matrix entries of the density matrix by exponentially small entries not greater than exp(-S) – totally, nonperturbatively tiny corrections – and a maximally mixed density matrix may be changed to a pure one and vice versa. See
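Lubos’s numerical claim here is easy to check in a toy model. The following sketch (my illustration, not taken from his posts) uses a Hilbert space of dimension N = e^S, so that exp(-S) = 1/N, and compares the maximally mixed state with the uniform-superposition pure state entry by entry:

```python
import numpy as np

# Toy check (my illustration, not from the linked posts).  Take N = e^S,
# so exp(-S) = 1/N, and compare the maximally mixed state with the pure
# uniform-superposition state entry by entry.
N = 1024                        # Hilbert-space dimension; S = ln(N)
rho_mixed = np.eye(N) / N       # maximally mixed: purity 1/N
psi = np.ones(N) / np.sqrt(N)   # uniform superposition
rho_pure = np.outer(psi, psi)   # pure state: purity 1

# No entry of the two matrices differs by more than 1/N = exp(-S):
max_correction = np.max(np.abs(rho_pure - rho_mixed))
print(max_correction)                     # 1/N
print(np.trace(rho_mixed @ rho_mixed))    # purity 1/N
print(np.trace(rho_pure @ rho_pure))      # purity 1
```

So entrywise corrections of size exp(-S), applied across the whole matrix, can indeed carry a maximally mixed density matrix all the way to a pure one.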

48. Scott Says:

Lubos #47: Thanks, but doesn’t your argument require exponentially-small modifications to lots and lots of off-diagonal density matrix entries, all over the matrix? In which case, sure, it’s clear that you can change a mixed state to a pure state. By contrast, my (possibly-mistaken) understanding of what Hawking was saying was that the total state is a sum of an exp(-S)-sized component (call it ρ_NBH) where no black hole forms, plus a 1-exp(-S)-sized component (call it ρ_BH) where a black hole does form. Moreover, the process involving ρ_BH is badly non-information-preserving. However, purely because of ρ_NBH—and not because of (as in your argument) any changes to the off-diagonal entries of ρ_BH—the overall process is supposed to become information-preserving. And that’s the part that I couldn’t map onto anything in quantum information that I know about.
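The distinction Scott draws can be made concrete in a toy model (my own illustration, with ε = 1/N standing in for exp(-S)): adding a tiny-weight pure component to a maximally mixed component, without touching the mixed component’s off-diagonal entries, leaves the total state essentially as mixed as before.

```python
import numpy as np

# Toy contrast (my illustration): an exp(-S)-weight "no black hole" pure
# component added to a maximally mixed "black hole" component, with the
# mixed component's off-diagonal entries left untouched.
N = 1024
eps = 1.0 / N                     # stand-in for exp(-S)
rho_BH = np.eye(N) / N            # badly information-losing branch
psi = np.zeros(N)
psi[0] = 1.0
rho_NBH = np.outer(psi, psi)      # pure no-black-hole branch
rho = (1 - eps) * rho_BH + eps * rho_NBH

# The overall purity is still ~1/N: the tiny pure component alone does
# not make the total state anywhere near pure.
print(np.trace(rho @ rho))
```

This is the sense in which an exp(-S)-sized no-black-hole branch, by itself, "wouldn’t seem to help much," in contrast to the global entrywise corrections of the previous comment.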

49. Luboš Motl Says:

Yes, Scott, your extra comments are mostly right. See the new text

http://motls.blogspot.cz/2013/09/an-apologia-for-ideas-from-hawkings-bh.html?m=1

to figure out what I mean a bit more accurately.

50. Joshua Zelinsky Says:

> If we could create small black holes in particle accelerators and watch them evaporate, we could at least confirm that Hawking radiation is a real phenomenon, and maybe even that the evaporation process is unitary.

How would one go about testing the second claim?

51. Joshua Zelinsky Says:

Err, messed up the quotes on the last comment. The first paragraph should be block quoted.

52. Robert Rehbock Says:

Lubos Motl’s #36 included links to his blog posts that discuss AMPS and firewalls. He is persuasive, I think.
I hope someone will address the merits of his reasoning instead of the personal questions and remarks.

53. Robert Rehbock Says:

52 Oops – the comments between LM and Scott just above had been off my browser page and not noticed.

54. Peter w. Shor Says:

@Robert #52: I don’t understand Lubos Motl’s arguments at all. It’s possible that I’m too stupid and know too little physics to understand them, but it’s also possible that these arguments are incoherent. Lubos seems to be the only physicist who doesn’t believe that AMPS shows there was a big hole in the idea of complementarity (let me note that this doesn’t mean it couldn’t be patched). Given the caliber of physicists who were at the workshop, and given that they all believe that AMPS is a reasonable argument, who do you think you should believe?

55. T H Ray Says:

@ Scott

“4) by applying a suitable unitary transformation to the Hawking radiation of an old enough black hole before you jump into it, someone ought to be able, in principle, to completely modify what you experience when you do jump in. Moreover, that person could be far away from you—an apparent gross violation of locality.”

It’s no violation of locality at all. Because t = 0 at the event horizon, any distant observer sees only a frozen image. Proper time is still local between interacting observers at the horizon.

Tom

56. Scott Says:

Joshua #50:

> How would one go about testing the second claim?

Well, I suppose you would check that, when the microscopic black hole is prepared by two perfectly-distinguishable procedures (i.e., particles with different energies, or colliding from different angles, etc.), the decay products that the black hole radiates into are also perfectly distinguishable from each other. This would not be a very easy test! For it to be feasible, I suppose the black hole would need to be extremely microscopic, and you would also need access to all or most of the decay products.

One question to which the answer must be known, and which would give us insight, is the following. How would one check that the collisions at (say) the LHC are unitary, if one didn’t already believe that? (Again, I can give an “in-principle” answer, but would be interested in what people actually do or would do experimentally.)
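The "in-principle" answer can be sketched numerically. In this toy model (mine, not an actual collider protocol), distinguishability is measured by trace distance: any unitary evolution preserves it exactly, while an information-losing channel (here a strong depolarizing channel standing in for "thermalization") nearly destroys it.

```python
import numpy as np

# Toy sketch (mine, not a real collider protocol).  Trace distance is
# 1 for perfectly distinguishable states, 0 for identical ones.
def trace_dist(a, b):
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(a - b)))

def depolarize(rho, p):
    # information-losing channel: with probability p, forget the input
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

rng = np.random.default_rng(0)
d = 8
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U, _ = np.linalg.qr(A)            # a random unitary "evolution"

rho1 = np.zeros((d, d), complex); rho1[0, 0] = 1   # preparation 1
rho2 = np.zeros((d, d), complex); rho2[1, 1] = 1   # preparation 2

# Unitary evolution preserves distinguishability exactly:
print(trace_dist(U @ rho1 @ U.conj().T, U @ rho2 @ U.conj().T))  # 1.0
# A strongly information-losing process nearly destroys it:
print(trace_dist(depolarize(rho1, 0.9), depolarize(rho2, 0.9)))  # 0.1
```

The experimental difficulty Scott notes corresponds to the first line: certifying that the trace distance of the decay-product states really is 1 requires access to (essentially) all of the outgoing degrees of freedom.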

57. GP Burdell Says:

Peter Shor #45 & Scott,
As I have mentioned elsewhere, my favorite reason that the universe is the way it is is not that there was an all-powerful “creator”, but that there was a *committee* of them. And that supernatural super-committee probably wrote the equivalent of a Requirements document and handed it off to other Supreme Beings to implement. I claim that this is the reason the universe — rather than being a finely honed, completely logical mechanism — is, in actuality, an illogical, incomprehensibly complicated mishmash of ideas and principles with irrational details thrown in just to “make it work”. (Anyone who has had to try to interpret and implement the mandates of such a committee will know exactly what I am talking about, and that this reflects the way real computing systems actually work!) I claim that there is more experimental evidence to support this view than there is for any multiverse. And it has exactly the same explanatory power — i.e., none at all.
🙂

58. Daniel Harlow Says:

I know I’m way behind in usual blog time, but I thought I’d add that I think what Wolfgang and Scott say in comments 16 and 18 is a good representation of how we’re thinking about things.

I’ll add that at the conference after Scott left, there were some interesting discussions about how computational complexity may provide an interesting split between observers living “within” the bulk and observers living “outside” of it, such as a quantum mechanic simulating N=4 super Yang-Mills theory on a quantum computer and occasionally sending in signals.

59. Scott Says:

GP Burdell #57: So, at what point in this committee’s bureaucratic deliberation process were (say) quantum mechanics or Lorentz-invariance invented? I can’t help thinking that there must’ve been someone on the committee who had some aesthetic sense…

60. GP Burdell Says:

Scott #59: Hey, these were Supreme Beings, not dummies. They just all probably had their own ideas of beauty and elegance and the practical needs of a universe – like any committee. Hence the mishmash, with chunks of it not quite fitting with other chunks, and the implementers having to patch over the gaps… 🙂

61. Luboš Motl Says:

Peter Shor, your comment doesn’t contain a glimpse of physics, just some vague sociological and ad hominem arguments and they’re completely untrue, too.

A majority of the Fuzz Or Fire workshop believes that the AMPS argument is wrong and when done right, it isolates no mistakes or paradoxes or holes (in the sense of contradictions, I don’t mean incompleteness – there’s of course some incompleteness in what we would like to know) in the state-of-the-art understanding of black holes in quantum gravity.

This list of mine includes Susskind, Maldacena, Raju, Nomura, Raamsdonk, Sanford, Mathur, Bena, Hawking, Jacobson, Verlinde (squared?), Banks, and others. If some excessive politeness of those people made you believe that they agree that AMPS is a crucial argument, that’s too bad, but they don’t really believe it.

Joe Polchinski has explicitly said repeatedly that “none of us” really believes that firewalls exist, too.

62. Scott Says:

GP Burdell #60: I see, then you’re back to the Greek or Roman gods model.

I’d simply remind you of two things:

(1) However dysfunctional this committee of gods may have been, they did ship a product that’s run for 13.7 billion years so far without crashing once! (Maybe there was a strong Zeus figure in charge?)

(2) In the past, things that have looked like gaps, mishmash, etc. in the laws of physics, have over and over again turned out to be merely gaps and mishmash in our own effective theories, and not at all in Nature. That doesn’t prove that there can’t be any mishmash at the bottommost level, but it certainly cautions against drawing that conclusion too hastily.

63. Peter w. Shor Says:

Hi Lubos,

I would try to address your argument that AMPS is wrong, but I don’t understand it. You haven’t presented it in enough detail. What you say is “Joe Polchinski believes X” and “Joe Polchinski believes Y”. But you don’t actually say where X and Y are crucial to the AMPS argument.

If you want somebody to argue physics with you, you have to actually tell us which step in the AMPS argument relies crucially on belief in X.

64. Wormhole Inside Blackhole – Quantum Bot Says:

[…] … Black hole firewalls … firewalls again and again … Einstein-Rosen bridges vs Einstein-Podolsky-Rosen paradoxes … Finally, I recalled one […]

65. Luboš Motl Says:

Dear Peter Shor, it’s you who is constantly pushing this debate towards annoying ad hominem debates and attacks. I have written 10 articles worth of arguments why AMPS is wrong, most of which are linked to in this very thread. So I offer all the science and you offer all the junk. It’s totally incredible that you have the chutzpah to pretend that the reality is upside down.

66. Jay Says:

Scott, if you persist in wanting Motl on your blog, could you please give him a dedicated post? It seems you missed ten of his recent papers on AMPS; he could start with that.

67. Scott Says:

Jay: When Lubos refers to “articles,” he means his blog posts. And I did read them—pretty much all of them. And I’d describe them as “classic Lubos.” Here’s my personal summary:

“Many of the world’s most prominent physicists claim to be puzzled about how to reconcile such-and-such list of principles. That’s because they’re idiots—or more charitably, because they agree with me but are too polite to say so. I’m not the slightest bit puzzled, and I haven’t been puzzled since I was 3 years old. Indeed, the solution is trivial: you simply need to repeat, but louder, that the principles are all perfectly consistent with one another! Unitarity is fine, and black hole complementarity is fine, and the equivalence principle is fine, and the low-energy QFT description of the event horizon is fine—they’re all fine! And if you think there’s some sort of contradiction between them, then that’s your problem, not Nature’s! So, OK, what would happen if you did the experiment that AMPS argue would reveal a paradox? Eh, a wormhole would form, if you like, between the black hole exterior and interior, to communicate information from the former to the latter. Or something like that. Nothing paradoxical or even strange about it—not even a violation of locality (after all, everything is still ‘local’ in the wormhole-containing spacetime). Or are you one of the imbeciles who can’t even wrap his mind around wormholes? Oh, you say that wormholes are OK, but you want a more detailed theory of them, that would reassure you they don’t lead to the most ‘obvious’ kinds of locality violations? In that case, we once again come back to the issue of your own mental limitations. Of course we don’t yet have the detailed theory that tells us exactly what happens—but that’s not the point! The point is that whatever that detailed theory might be, it’s necessarily free of paradoxes—end of discussion!”

Anyone who’s read Lubos’s posts is welcome to correct me wherever I got him wrong. 🙂

68. Peter W. Shor Says:

Jay:

The “articles”=”blog posts” have not been peer reviewed, and are difficult to understand. You have just seen what happens when you ask Lubos for clarification because you don’t understand them.

69. Luboš Motl Says:

Dear Peter Shor, if you have problems to understand sort of crystal clear articles at a technical but popularly formulated level, maybe you are (or you were, years ago) getting too old, if I kindly avoid the term “senile”.

Maybe you should have checked Scott’s comment before you wrote yours because you would have noticed that some younger information and complexity experts have… ehm… less severe problems to understand these issues than you seem to face (and boast). 😉

70. Luboš Motl Says:

Dear Scott,

I think that your summary is accurate enough – you have summarized all the important conclusions and insights that you may understand given your IQ’s limitations. So thanks for that, the same summary could be made accessible even to Peter Shor if you omitted the more complicated and technical 1/2 of your summary.

Cheers
LM

71. wolfgang Says:

@Scott @Peter Shor

Lubos’ “articles” usually reference some paper(s) and in this case one is from Stephen Hsu.
See infoproc.blogspot.com/2013/08/factorization-of-unitarity-and-black.html for the abstract and a link to his paper.

72. Vitruvius Says:

On page 184 of his book “The Great Equations”, Robert Crease recounts this anecdote (and I quote):

In September, 1946, in New York City, at one of the first postwar annual meetings of the American Physical Society, the presentation by the Dutch theorist Abraham Pais, who was struggling to explain the strange behaviour of a puzzling, recently discovered new particle, was interrupted by Felix Ehrenhaft, an elderly Viennese physicist. Ever since 1910, Ehrenhaft had been claiming to have evidence for the existence of “subelectrons”, charges whose values were smaller than the electron’s, and his efforts to advance his claims had long ago exhausted the patience of the physics community. Now approaching seventy, Ehrenhaft was still seeking an audience, and [he] approached the podium attempting to be heard.

A young physicist named Herbert Goldstein ~ who told me the story ~ was sitting next to his mentor and former colleague from the MIT Radiation Laboratory, Arnold Siegert. “Pais’s theory is far crazier than Ehrenhaft’s”, Goldstein asked Siegert, “Why do we call Pais a physicist and Ehrenhaft a nut?”

Siegert thought for a moment. “Because”, he said firmly, “Ehrenhaft believes his theory”.

The strength of Ehrenhaft’s convictions, Siegert meant, had interfered with the normally playful attitude that scientists require, an ability to risk and respond in carrying forward their dissatisfactions. (Conviction, Nietzsche said, is a greater enemy of truth than lies.) What makes a crackpot is not simply our prejudices, nor necessarily the claim, but our recognition of the disruptive effect of the author’s conviction. For conviction tends to wipe out not only the dissatisfaction but also the playfulness, the combination of which produces such a powerful driving force in science.

[–End Quote–]

73. Scott Says:

Vitruvius #72: I’m not sure what point (if any) you were trying to make in the context of the present discussion, but that’s a wonderful anecdote!

74. Peter w. Shor Says:

Wolfgang #71: I’ve now looked at Hsu’s paper, and I still don’t really understand which step of the AMPS argument this invalidates. As I understand it, you don’t need to consider late-time nearly evaporated holes in AMPS, and at early times, while this factorization is approximate, it should be very nearly exact.

75. wolfgang Says:

@Peter Shor

Stephen makes his argument with equ. (7) and the following:

“The version of complementarity proposed here is that an Alice who experiences falling through a (particular) horizon is by definition not sensitive to other d branches. She also, therefore, cannot determine whether the radiation is in a pure state, or whether B_d is ‘entangled with’ C_d.”

As I understand it, the claim is, similar to H-H, that Alice cannot really create a paradox; but the reason is different from H-H: this time, the fact that Alice only sees one decohered branch of the wave function prevents her from distinguishing a pure state from thermal radiation.

Stephen makes it clear in the conclusion that his proposal is not a complete description yet.

76. Luboš Motl Says:

The Hilbert space “approximately” factorizes but it can be demonstrated – and it has been demonstrated – that in this approximation, a unitary evolution operator will not look unitary.

So if AMPS assume that the unitarity should hold separately in the sector that looks like a classical background with small fluctuations – that looks like a “decohered branch” of the wave function, so to say – they’re just wrong.

They only show that a “small square submatrix” of the actual matrix isn’t unitary. But that’s not surprising at all. A submatrix of a unitary matrix just usually isn’t unitary.

I don’t think that there is any need to add vague comments that “this is just a proposal”. This is a rock-solid identification of a flaw in the AMPS “proof” of a paradox. There are actually many other flaws in AMPS, it is not a quality paper in any sense.
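The linear-algebra fact invoked here, that a submatrix of a unitary matrix usually isn’t unitary, is easy to verify directly (my minimal check, using a random unitary):

```python
import numpy as np

# Check: a square block of a unitary matrix is generically not unitary.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
U, _ = np.linalg.qr(A)              # an 8x8 unitary
B = U[:4, :4]                       # its top-left 4x4 block

# Unitarity would require every singular value to equal 1; for a block
# they only satisfy s <= 1, and are generically strictly smaller.
s = np.linalg.svd(B, compute_uv=False)
print(s)
print(np.allclose(B.conj().T @ B, np.eye(4)))   # False: not unitary
```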

77. T H Ray Says:

@Wolfgang #75

” … the fact that Alice only sees one decohered branch of the wave function prevents her from distinguishing a pure state from thermal radiation.”

This makes a great deal of sense to me, considering what I’ve learned from a 2011 Nature article (Zeeya Merali) on quantum discord. (Scott, I don’t know if I am allowed to publish links here; if I can, I will, next post.)

Quantum noise, including thermal noise I expect, can actually be an asset to computing a desired value. This allows a semi-classical approach to proving unitarity — maybe dual, if I can speculate, to the Hawking model of black hole radiation.

Tom

78. wolfgang Says:

@Tom @Lubos

Actually I am worried that Stephen’s argument proves too much. Unitarity is a property of the S-matrix and, if you will, of the Schrödinger evolution. But every real measurement involves a macroscopic device, and at the end we only experience one decohered branch of the wave function.
So does this mean we can never demonstrate unitarity in a real experiment?

79. steve hsu Says:

Wolfgang,

It is quite difficult to demonstrate unitary evolution in an experiment, because to do so one would have to detect decohered branches of the “many worlds” wavefunction. This requires either exponential sensitivity (of order exp(-S), where S is the number of dof of the measuring device), or the ability to prepare operators comprised of macroscopic superpositions.

This article clarifies the analogy between unitarity in BH evaporation and unitarity in ordinary QM measurements:

http://infoproc.blogspot.com/2009/03/black-holes-and-decoherence.html
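A toy model of the exponential sensitivity Steve describes (my sketch, not taken from his paper): once n environment degrees of freedom each record which branch the system is on, the interference signal that could reveal the other branch is suppressed by the overlap of the two environment records, which falls off exponentially in n.

```python
# Toy model (my sketch, not from the paper).  Each of n environment
# degrees of freedom records the branch with per-dof overlap
# c = <e0|e1>; the residual coherence of the system, i.e. the signal
# revealing the other branch, is then c**n.
c = 0.9
for n in (1, 10, 100):
    print(n, c ** n)        # coherence falls exponentially in n
# Already at n = 100 the signal is ~3e-5; for a macroscopic device with
# ~10^23 dof it is exp(-S)-small for any practical purpose.
```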

80. steve hsu Says:

Peter #74,

Here’s a brief statement describing the problem with the “single mode” firewall argument (from my paper):

We can also formulate this discussion in terms of single modes. In the notation of \cite{Almheiri:2013hfa}, let $b$ be a late-time Hawking mode, $\tilde{b}$ its interior partner, and $e_b$ an early time Hawking mode entangled with $b$. Roughly speaking, the unitarity assumption implies that there is some early time mode $e_b$ with the property that $( b \, e_b )$ form a pure state. But this conflicts with the (no drama, or equivalence principle) assumption that $(b \, \tilde{b})$ form a pure state. The resolution is that $( b \, \tilde{b} )$ are excitations relative to the vacuum of a particular $d$ spacetime, whereas unitarity only holds across all $d$ branches — there is no $( b \, e_b )$ pair residing solely on a particular $d$ branch. The two uses of $b$ are incommensurate without the assumption of factorization \cite{MP}.

The small square sub-matrix that Lubos refers to is the radiation state in a single d block. Without assuming unitarity in this subspace there is no firewall argument.

I’m happy to discuss further if you are interested … BTW, I’m Page House ’86 🙂

81. Peter w. Shor Says:

Steve Hsu #80: Wouldn’t this have to imply that when Alice falls into the black hole, she’s seeing a black hole that’s coherent across several d branches? This doesn’t seem correct.

82. steve hsu Says:

Peter #81,

I’m not sure I understand the question, but here goes. The breakup into d sectors is only defined with respect to a certain level of observational power. If we define it to be Alice’s ability to resolve, e.g., the location of the BH, then she sees one d branch as she falls through. The logical hole in AMPS that I claim exists only requires that the total number of d branches be much larger than one. If that is the case, then the global state of the radiation over all branches can be unitary without conflicting with Alice’s observation that the near-horizon modes (on her branch) are strongly entangled. Hope that helps!

83. Luboš Motl Says:

Dear Wolfgang #78,

you can surely not prove unitarity of the evolution operator (even in a very simple system) by a single repetition of the evolution, because the matrix elements of the evolution matrix aren’t observables – you only measure outcomes for observables, and the evolution matrix gives you the probability amplitudes for those outcomes.

If you repeat the evolution many times, you may measure/deduce all the matrix elements of the evolution matrix (up to an overall phase; with some statistical errors that become small for very many repetitions) from several probability distributions in a couple of cleverly designed experiments (each of which is repeated many times). You won’t need to use superpositions of “massively” macroscopically different microstates but you will of course need to measure some probability distributions for quantities that don’t commute with quantities from other repetitions of the experiment i.e. their eigenstates are general superpositions of eigenstates from another experiment (I hope it’s not shocking for you to learn that X,P don’t commute with each other and P eigenstates can’t be X eigenstates, i.e. localized, at the same moment).

Outside a black hole (unlike in the black hole interior where the technical possibilities are heavily constrained, and so is the time and space for your experiments), you may in principle build very (arbitrarily) sophisticated devices to detect macroscopic-superposition-like features of the evaporating black hole. In practice, of course, no one will ever distinguish the non-thermality or purity of the radiation coming from a black hole much heavier (and larger) than the Planck mass. All these problems are *obviously* just academic, to be solved mathematically assuming a certain theory for the black hole.

I don’t see why you think that Steve proved “too much”, except for obvious consequences of the uncertainty principle and other things.

Cheers
LM
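Lubos’s point here (single runs give outcomes, not matrix elements, but many repetitions let you reconstruct them) can be sketched as follows. This is my toy protocol, estimating only the magnitudes |U_ij|^2 of a 2x2 evolution from outcome frequencies:

```python
import numpy as np

# Toy protocol (my sketch): repeat "prepare basis state |j>, evolve by U,
# measure" many times; the outcome frequencies estimate |U_ij|^2.
rng = np.random.default_rng(2)
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # evolution to be inferred

shots = 200_000
counts = np.zeros((2, 2))
for j in range(2):
    probs = np.abs(U[:, j]) ** 2                   # Born-rule outcome probs
    outcomes = rng.choice(2, size=shots, p=probs)
    counts[:, j] = np.bincount(outcomes, minlength=2) / shots

print(counts)            # estimated |U_ij|^2
print(np.abs(U) ** 2)    # true values, agree to ~1/sqrt(shots)
```

The magnitudes alone leave the relative phases undetermined; fixing those requires preparing superposition inputs, i.e. measuring quantities that don’t commute with the preparation basis, which is exactly the non-commuting-observables point made above.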

84. Luboš Motl Says:

Dear Peter Shor #81,

it shouldn’t be surprising at all that Alice’s measurements in the black hole interior “collapse” (I hate this terminology) the state to a state that has support on many (well, all) of the decohered $d$ branches (e.g. slightly different locations of the black hole’s center-of-mass).

The reason why Alice’s interior measurements can’t “collapse” her physics into a system that lives on one decohered $d$ branch only is simply that the observables she is able to measure don’t commute with the observables describing the branches $d$, such as the black hole center-of-mass position (but also infinitely many occupation numbers for external field modes etc.).

To elaborate on the first example, the black hole center-of-mass position is one of the natural observables accessible to the observers who are outside. It has always been the point of black hole complementarity that the outside observables don’t exactly commute with the interior field modes etc. (In ER-EPR, it’s because they’re actually close to each other if connected through the ER bridges that are produced as a part of the Hawking radiation; the interior is really timelike separated and in the future of the Hawking radiation in this chronology.)

For example, the black hole center-of-mass position doesn’t commute with the occupation number in the interior that Alice is just measuring. This noncommutativity implies that the eigenstate of the occupation number Alice is measuring can’t possibly be an eigenstate of the center-of-mass position at the same moment.

One may add much more detailed proofs of why the commutator is actually nonzero (this may have been argued well before ER-EPR; the modes inside must be “adaptive” because they’re obliged to respect the shape and locus of the event horizon, which is determined by the outside modes as well), and of why the nonvanishing of the commutator implies all the non-block-diagonalization etc. that is neglected by AMPS, you, and others. But I won’t preemptively guess what you’re going to be unsatisfied with, so let me stop this comment here.

The texts below are just a particular mathematical formalization of the correction of a very general class of mistakes people often make – they treat many things in quantum mechanics classically (as commuting observables; decohered sectors that can be assumed to be “really settled” and which are enough to discuss unitarity etc.). In reality, general observable pairs refuse to commute with each other, superpositions of many – even “visually” very different – states are always equally allowed in quantum mechanics and important for unitarity; evolution operators only reduce to a block-diagonal form in a basis of eigenstates of an observable that commutes with the evolution operators. Field modes don’t commute with the evolution operator so there’s no reason why the evolution operator should block-diagonalize in the field modes’ eigenstate bases.
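A quick numerical sketch of the last point (an editor’s illustration, not from the thread): an evolution operator is (block-)diagonal in the eigenbasis of an observable that commutes with it, and shows no such structure in the eigenbasis of a generic, non-commuting observable.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(n):
    """A random Hermitian matrix, standing in for a generic observable."""
    h = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (h + h.conj().T) / 2

n = 6
H = rand_herm(n)                                    # generator of the evolution
evals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * evals)) @ V.conj().T   # U = exp(-iH)

def offdiag_in_eigenbasis(U, A):
    """Largest off-diagonal |U| element in the eigenbasis of observable A."""
    _, W = np.linalg.eigh(A)
    U_A = W.conj().T @ U @ W
    return np.abs(U_A - np.diag(np.diag(U_A))).max()

# H commutes with U, so U is diagonal in H's eigenbasis (up to rounding)
off_commuting = offdiag_in_eigenbasis(U, H)
# A generic observable (think: a field-mode occupation number) does not
# commute with U, so U has large off-diagonal pieces in its eigenbasis
off_generic = offdiag_in_eigenbasis(U, rand_herm(n))
print(off_commuting, off_generic)
```

The contrast between the two printed numbers is the content of the sentence above: there is no reason for the evolution operator to block-diagonalize in the eigenbasis of field modes that fail to commute with it.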

Best regards
Lubos

85. wolfgang Says:

@Lubos

>> If you repeat the evolution many times, you may
>> measure/deduce all the matrix elements of the
>> evolution matrix

Sure, but keep in mind that the configuration of the macroscopic measurement device is different for each measurement, and thus the geometry is (very slightly) different. This superposition of different geometries is the main ingredient of Stephen’s argument.
I am no longer sure that you can control all of this if the complete measurement has to be done in a finite amount of time, because the more measurement devices Alice uses, or the longer she uses them, the worse the problems from (small) changes to the geometry become …

86. Luboš Motl Says:

Sorry, Wolfgang, I don’t understand what you’re saying.

If you want to measure the evolution matrix elements, you need to have a perfect control over the initial microstate. You need to repeat the procedure for every initial microstate, too. With a fixed initial microstate, you may determine properties of “one column” (or row?) of the evolution operator.

If you don’t have a control or knowledge about the initial state, you can’t quantify the evolution operator and you can’t decide whether it’s unitary.

If you do, you still have to repeat the experiments many times, to get statistics. Every time, the result of the measurements is different. Indeed, this includes different locations of the black hole and subtle geometry fluctuations around it. It’s the point of quantum mechanics that the future isn’t uniquely, deterministically given by the past. But by repetitions of the same experiment with the same initial state, you may measure all the probability distributions. I have no idea what you feel uncomfortable with.

In practice, we won’t be able to control “all of this”. But if you imagine some “reasonable” size of the black hole (and in principle, it can be made arbitrarily large), like 100 Planck lengths in radius, it’s just some unstable particle whose behavior may be monitored by huge detectors surrounding it in detail, much like the properties of a W-boson. A black hole microstate *is* like the W-boson. All the final amplitudes may be measured up to an overall phase if you repeat the observations many times. The detectors surrounding the black hole may have to be vastly more massive and larger (perhaps exponentially) than the system you measure but this is just a technical limitation, not a conceptual one, because there’s no qualitative difference between a heavier black hole microstate and a W-boson.
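The “repeat the observations many times” step can be made concrete with a trivial simulation (an editor’s sketch with a made-up probability, not anyone’s actual proposal): each run of the experiment yields a single click, not an amplitude, and the Born probability is only recovered statistically, with error shrinking like 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(42)

# A qubit microstate a|0> + b|1> with |a|^2 = 0.3 (an arbitrary choice).
# One repetition gives one click; N repetitions estimate |a|^2 to ~ 1/sqrt(N).
p_true = 0.3

errors = {}
for n in (100, 10_000, 1_000_000):
    clicks = rng.random(n) < p_true      # n independent, identical preparations
    errors[n] = abs(clicks.mean() - p_true)

print(errors)   # the estimation error shrinks roughly as 1/sqrt(n)
```

This is why the matrix elements of the evolution operator are only accessible through many repetitions with a controlled initial microstate, as the comment says.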

87. steve hsu Says:

Re: #86, #85 and earlier,

I could be wrong, but I think Wolfgang is concerned about unitarity not just of (say) the particle collision under study, but that of the macroscopic *measuring device* as well (see #78). That is, I think he wants to be sure that even though we perceive only a single outcome (“saw spin up”), the true evolution of the entire state (spin + measuring device) has been unitary.

That can only be confirmed by a 3rd observer (“Wigner’s friend”) who looks at spin + device and then detects the *other* decohered branch to see that (non-unitary) von Neumann projection has not actually occurred. http://en.wikipedia.org/wiki/Wigner's_friend

I think it’s fair to draw the analogy between this question and BH evaporation because both the BH and measurement device are macroscopic. See the paper I linked to above for more detail.

Omnes wrote a book some time ago in which he does some rough calculations on the level of experimental sensitivity required to detect the “other” branch of a decohered 1 gm object. IIRC he concludes that there are not enough resources in the visible universe to make it possible. In any case, it’s quite challenging and even more so if you actually take gravity into account. I think this is the book; it’s probably referenced in my paper: http://www.amazon.com/Interpretation-Quantum-Mechanics-Roland-Omn%C3%A8s/dp/0691036691

88. Scott Says:

Lubos: I’m sure it’s just my low IQ, but I’m confused about how to reconcile all the different arguments you’ve endorsed for why AMPS is wrong. It’s fine to argue, as you have, that AMPS simply made lots of independent errors. The trouble is that there are many cases where you discuss both A and B, but if they had really erred in A, then it seems like there would be no reason even to discuss B. As one example, take

A = “a submatrix of a unitary matrix need not itself be unitary” (obvious), and

B = “apparently-nonlocal effects can be fine because spacetime, in our actual universe, can contain wormholes” (not obvious, to put it mildly 🙂 )

Are you claiming that A and B are actually two aspects of the same thing? If so, then that’s interesting! Forgive me for thinking that, if AMPS led directly to realizations like “ER=EPR” or “A=B” (as above), then it was probably a worthwhile paper to write.

89. T H Ray Says:

@ Peter Shor # 80

“Wouldn’t this have to imply that when Alice falls into the black hole, she’s seeing a black hole that’s coherent across several d branches? This doesn’t seem correct.”

Why wouldn’t it be, though? If it may be proven as a theorem that every classical observation has at least one quantum counterpart (I think it can be), then quantum discord (http://www.nature.com/news/2011/110601/full/474024a.html)
might tell us that, given some nonarbitrary condition at t = 0 — which surely holds for an observer outside the event horizon of a black hole — every observer orientation from that t = 0 origin experiences the same outcome.

Do you think that your Shor’s Algorithm can possibly survive giving up entanglement? Even Zurek, one of the fathers of the no-cloning theorem, reserves judgment in favor of an entanglement-discord complementarity.

Tom

90. Luboš Motl Says:

Dear Steve #87, agreed that a human can’t check the unitarity of her own evolution, that’s too ambitious and recursive a task. 😉 However, from a 3rd overlord observer’s viewpoint, even Alice and pals are parts of the degrees of freedom involved in the unitary evolution of the BH and its vicinity plus radiation.

Hi Scott #88, I completely agree with you that in a sane world, there would be no reason to discuss B because they made an error in A etc. All of this is a cannon used against a dead ant. However, there’s no natural or canonical “ordering” of the errors they made. That’s why I discuss all of the errors I am able to see (so far) because all of them may be important for other applications.

The AMPS theorem is like a theorem that Michelle Obama is destined to lead the Ku Klux Klan because she’s a white male racist. Well, I am saying she’s not white, she’s not male, and she has no reason to lead such an NGO but I am not sure which ordering of these objections is the best one. 😉

If you think that there’s an actual inconsistency in anything I wrote while disagreeing with AMPS, let me know and I will show you why you are wrong.

It’s not true that the AMPS fallacies “led” to the ER-EPR correspondence. The latter arose from research on entanglement in quantum gravity by Maldacena himself (like the eternal AdS black hole and other things with doubled CFTs etc.) and by Van Raamsdonk and by Ryu-Takayanagi, among others. The ER-EPR explanation why AMPS arguments fail was just one of the (fashionable) applications of the insights on ER-EPR, a correspondence that is clearly more general and valuable than and independent from AMPS. When the dust settles, the importance of the counter-argument against AMPS will clearly go down along with the importance of AMPS itself but ER-EPR will be here to stay.

It would also be untrue that ER-EPR is “needed” to show that AMPS isn’t valid. There are many correct papers written before ER-EPR that show why AMPS doesn’t hold. ER-EPR is just a specific geometric visualization of “what’s actually going on” with the information during the evolution and measurements.

91. Peter W. Shor Says:

As I understand ER-EPR, it claims that entanglement is related to wormholes. I don’t understand how this can be true. There seem to be consistent theories of physics (for example, quantum chromodynamics in Minkowski space) which have entanglement without wormholes, and consistent theories (for example, general relativity) that have wormholes without entanglement. So I don’t see why they should suddenly acquire this mysterious ER-EPR relation in string theory.

92. Luboš Motl Says:

Dear Peter Shor #91,

this is an unnecessary complaint of yours, really, but let me attempt to respond constructively. ER-EPR is meant to be a general law for all consistent quantum gravity theories – those that are quantum (respect postulates of quantum mechanics) and that contain gravity (in the Einsteinian sense, i.e. dynamical spacetime).

Of course, classical theories don’t even admit the term “entanglement”, and (totally) non-gravitational theories don’t allow space to be deformed to a wormhole.

However, if you embed classical and/or non-gravitational theories into quantum gravity, either fully or just by imagining doing so, it becomes legitimate to use ER=EPR again and everywhere.

Wormholes in classical GR or its extensions may still be thought of as some heavy correlation – classical artifact of entanglement – between two regions, those that are connected. What it precisely means may be figured out by taking the classical limit of the quantum description. Note that the black hole entropy is really infinite in the classical hbar=0 limit so the “amount of correlation/entanglement” becomes infinite (as a measure of information in bits) for a macroscopic wormhole in a classical theory.
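The divergence claimed in the last sentence follows directly from the Bekenstein–Hawking formula (a standard fact, not specific to ER=EPR), since the Planck area carries a factor of $\hbar$:

```latex
S_{BH} \;=\; \frac{k_B A}{4\,\ell_P^2} \;=\; \frac{k_B\, c^3 A}{4 G \hbar}
\;\xrightarrow{\;\hbar \to 0\;}\; \infty ,
```

so at fixed horizon area $A$, the entropy (and hence the “amount of correlation/entanglement” measured in bits) blows up in the classical limit.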

And the case of non-gravitational quantum theories? Well, AdS/CFT really implies that every CFT – and, with less firm foundations, every QFT – actually is dual to gravity. This has implications. For example, a quark and an antiquark in a QCD-like theory combined into a singlet state also have this wormhole – which may be seen in the bulk or in the world sheet of the fluxtube connecting the quarks (the fluxtube produces a 2D gravitational theory on the world sheet). See Karch-Jensen and followups:

http://arxiv.org/abs/arXiv:1307.1132

So in this quantum, seemingly non-gravitational case of gauge theories, you’re downright wrong. ER-EPR holds and implies important things for the geometry of the world sheet of the fluxtube and horizons on it. The entangled quark and antiquark are connected with a wormhole that may be seen on the world sheet and that is also embedded in the AdS bulk.

All the best
Lubos

93. wolfgang Says:

@Stephen

>> concerned about unitarity not just of (say) the particle
>> collision under study, but that of the macroscopic
>> *measuring device* as well

and I think you have to be. Assume Alice stays outside the b.h. and she wants to check unitarity of the b.h. evaporation.
Her detector has to absorb all the Hawking radiation, and its cumulative mass should be comparable to, and actually exceed, the mass of the b.h.

So if she cannot neglect the superposition of geometries due to the b.h. (if she jumps in), then I don’t see how she can neglect the same for her own detector imho.
I don’t see how she can control for that …

Now, if we bring in a 3rd observer (Wigner’s friend) it only gets more complicated imho, as long as we cannot neglect the mass of the detectors…

The only argument I can think of is that after the b.h. disappears she has all the time in the world to check her detector, but I don’t find this very convincing.

94. wolfgang Says:

@Lubos

>> The detectors surrounding the black hole may have to be
>> vastly more massive and larger (perhaps exponentially)
>> than the system you measure but this is just a technical
>> limitation

That is exactly what I don’t understand. You would basically surround the b.h. of mass M with a shell of mass M’ > M, and M’ would change randomly as it absorbs the Hawking radiation.
If Alice cannot neglect the random movement of the b.h. then she cannot neglect the random movement of her detector …
So it seems the Alice outside the b.h. has a similar problem to the Alice jumping in (except that she has perhaps more time to deal with it, but I would like to see a real calculation to know if she can do the measurement at least *in principle*).

95. wolfgang Says:

when I wrote ‘random movement’ above, it should have been ‘Schroedinger evolution including decoherence’ 😎

Btw the decoherence time for a massive detector would be very short and, as Stephen mentioned above, already for a 1 gm mass it is pretty much impossible (involving cosmological resources) to detect the other branches…

96. steve hsu Says:

Wolfgang, you seem sufficiently interested in this that I recommend you have a look at the paper I linked to earlier in the thread. The point I make is that to verify that BH evaporation is unitary (or radiation state is pure) you have to have experimental power similar to what is required to detect decohered Everett branches. I think in a gedanken sense it is possible, although not in practice.

97. Luboš Motl Says:

Dear Wolfgang, there is no sense in which unitarity holds for the infalling observer. Such an observer only has access to a very small subset of observables, i.e. not to all microstates – but access to all microstates is needed to see unitarity.

Unitarity of the black hole life only holds from the viewpoint of an infinitely long-lived observer at infinity. In principle, this observer may surround the black hole by very large and very distant detectors whose position hardly fluctuates at all, because they’re large, and just measure the black hole, Alice approaching the black hole, and the radiation coming from the black hole.

Of course, the system including these large detectors can’t check the unitarity of itself – an even bigger facility would be needed. It’s like (but in no way the same thing as) the Gödel discussions: you can only prove consistency within a more powerful axiomatic system than the system of interest.

98. wolfgang Says:

Steve,

I will read your paper because “experimental power similar to what is required to detect decohered Everett branches” seems quite close to “impossible” to me.

Btw I am not a believer in “many worlds” and I hope this does not mean that unitarity of b.h. evaporation depends on the interpretation (e.g. the Copenhagen “collapse” would not be reversible as far as I understand it) but this is another discussion…

99. steve hsu Says:

Wolfgang,

The BH information problem does not hinge on the many worlds interpretation (“no collapse”). However, in asking questions about the evolution of the full quantum state of a macroscopic object (ball of dust –> BH –> radiation) one is led to consider the same set of issues that led to MW. By asking whether evolution is unitary (note “collapse” is *not unitary*) one is at least adopting a *description* of the BH evolution which *looks like* many worlds (many decohered branches, on which Alice has qualitatively different experiences). How you “interpret” this state evolution is of course up to you ;^)

http://infoproc.blogspot.com/2012/08/gell-mann-feynman-everett.html

100. T H Ray Says:

@Steve Hsu #96

” … to verify that BH evaporation is unitary (or radiation state is pure) you have to have experimental power similar to what is required to detect decohered Everett branches. I think in a gedanken sense it is possible, although not in practice.”

I agree in full. It isn’t even possible in principle to reconstruct decohering branches of the Everett model without violating conservation of energy. In fact, though, this is actually an argument for unitarity, if the theorem “For every classical observation there exists at least one quantum state” can be proved. This gives quantum discord a primary role over quantum entanglement — guaranteeing that the last bit of classical information corresponds to the last coherent quantum state. Is that not what “unitary” actually means?

Good luck, Steve.

101. Luboš Motl Says:

Dear Steve Hsu, interestingly, it doesn’t “look” anything like “many worlds” to me. A physical system described by the wave function a*psi1+b*psi2 is *one* physical system that is in the state psi1 *or* psi2 with the appropriate probability amplitudes.

102. steve hsu Says:

Lubos,

I think even you sometimes refer to the subjective experience of one of the (many) Alices described by the wave function. Once you appreciate that Psi describes many possible experiences (“decoherent histories” or whatever), then it raises the question of whether you might be one of many Motls within a larger quantum state.

If you are comfortable with a pure state (single spin + Stern-Gerlach device) evolving into a mixed state — Prob 1/2 (“spin up” + “device registers spin up”) and Prob 1/2 (down outcome) — then why be worried about a pure state of dust evolving into a mixed state of Hawking radiation? (Please don’t reply Banks, Peskin, Susskind … 🙂)

In the BH case you demand that pure –> pure, but if you demand that in the spin measurement you get many worlds 😉
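The pure-to-mixed step being debated here fits in a few lines (a standard textbook toy model, an editor’s sketch rather than either commenter’s code): a spin measured by a pointer via an entangling unitary stays globally pure, but tracing out the pointer leaves the spin alone in a mixed state.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Spin in a superposition; pointer starts in the "ready" state |0>
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)
spin = a * ket0 + b * ket1

# "Measurement" = entangling unitary: CNOT copies the spin's basis state
# into the pointer, producing a|00> + b|11> -- still a pure state
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
state = CNOT @ np.kron(spin, ket0)

rho = np.outer(state, state.conj())
purity_global = np.trace(rho @ rho).real          # 1: global evolution is unitary

# Trace out the pointer: the spin's reduced state is mixed
rho_spin = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
purity_spin = np.trace(rho_spin @ rho_spin).real  # 1/2 for |a|^2 = |b|^2 = 1/2

print(purity_global, purity_spin)
```

Whether one then calls the reduced mixed state a “collapse”, “FAPP”, or “many worlds” is exactly the interpretational question the thread is circling.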

103. steve hsu Says:

PS Anyone interested in this topic should look at J.S. Bell’s pedagogical essay Against Measurement linked to at the page below.

http://infoproc.blogspot.com/2009/03/black-holes-and-decoherence.html

104. Luboš Motl Says:

Dear Steve,

if the state is a*psi1+b*psi2 and you describe it by a “larger state” with many Motls, well, then your description will be easily falsified because the right state is a*psi1+b*psi2 and it says that I am unique, among many other things it says. 😉

I never said that I was OK with a pure state evolving to a mixed state. I only wrote that a*psi1+b*psi2 allows you to calculate the probabilities of having properties indicated by eigenstates psi1, psi2. These probabilities are |a|^2 and |b|^2, respectively, even in the pure state. The pure state also allows one to compute the probabilities of other observables that don’t commute with this one – and the relative phase of a,b will influence this probability – but I didn’t write anything that would contradict that fact, either.

“If you demand pure-to-pure in spin measurements, you get many worlds.”

I definitely don’t get many worlds. You don’t get them, either. Only if you’re drunk, you may *think* that you see any evidence for many worlds.

Cheers
LM

105. steve hsu Says:

Lubos, we might be talking past each other a little bit. Perhaps you can have a look at the Bell article I mention above and tell me what you think of p.36, which describes what I call “passing from a pure to mixed state” in the usual description of quantum measurement. The treatment (even notation!) is similar to your discussion (of BHs) in your recent Apologia … Hawking blog post. As you point out, this step is not OK in the BH context (it would lead to the erroneous impression of non-unitarity), but appears in every standard treatment of measurement. Equivalently, following Bell, we could say that, FAPP, BHs destroy information!

http://motls.blogspot.cz/2013/09/an-apologia-for-ideas-from-hawkings-bh.html?m=1

106. Luboš Motl Says:

Dear Steve, I read page 36 of “Against Measurement” – although I suffered – but I just did it for you. All this rubbish that QM is dividing the world, and we need to define what a measurement is or isn’t, and so on. That’s just not true. Quantum mechanics answers well-defined mutually compatible questions by calculating probabilities of outcomes. That’s it. Every meaningful prediction of science can be reduced to that.

A normal measurement anywhere in this quantum world does involve a loss of coherence – decoherence – caused by tracing over many degrees of freedom that reinforce the classical character of the measured information. So it’s completely correct to describe a measurement in this way.

But there’s no contradiction. Once you measure something and “annihilate” all the non-realized parts of the wave function, of course you perform an operation that kills the unitarity. Unitarity refers to the unitarity of the evolution operator, whose matrix elements can’t be directly measured – they have a probabilistic interpretation. If someone tries to squeeze some approximations such as tracing over some degrees of freedom (e.g. decoherence) or collapses into the evolution, the state vector castrated in this way is of course no longer related to the initial state vector by the universal unitary transformation. It’s not even related by any universal transformation, because the result of the measurement is random. But that’s true not only in black holes. It’s true everywhere in quantum mechanics. So if you insanely decided that this means that unitarity is broken, the conclusion would apply to any quantum system, not just black holes.

But all of this would be an artifact of a wrong interpretation of WHAT should be unitary.

107. steve hsu Says:

Hi Lubos,

Thanks for taking the time to read what Bell wrote. Apologies if it was painful 🙁 At the risk of subjecting you to more pain I will write some more below, but feel free to ignore and end the discussion if you are getting bored 🙂

I’m still not exactly sure what your position is on all this. If standard QM is just an algorithm for computing probabilities, do you agree that (at least in principle, if not in practical situations) Wigner’s friend, watching you compute your probabilities, might say “Lubos thinks he has a definite outcome (saw spin up), but I can detect the other Lubos (saw spin down) who is just as sure about his result”? Because what you regard as a “normal measurement … loss of coherence … etc.” is described by Wigner’s friend as just a part of the overall unitary evolution in his description of the system. Sort of like comparing the various Alices’ experiences in the BH context. (Ordinarily one can just refuse to consider a Wigner friend watching a human experimenter, because it’s so far from practical. But in the BH information context you can’t punt on this question without getting the “wrong” answer — i.e., dropping some of Alice’s decoherent histories from the calculation.)

Which of the various observers in the discussion actually transforms the unitary Schrodinger calculation into a “real”, “classical” outcome (or probabilities of outcomes)? Can an observer be *mistaken* about having done this (as Alice might be)? Sometimes largely decohered branches can be re-interfered, in the lab, through exquisite control.

Bell’s discussion focuses on the “split” or “when pure becomes mixed (and probability enters)” or exactly when “FAPP” should be imposed. This is an old question which I feel has never been answered and is being pushed on all the time as lab experiments get better at controlling decoherence. If you give a precise criterion for when FAPP applies (e.g., “when overlap of decohered branches is less than exp(-1000)”), then, FAPP, you’ll conclude (sufficiently big) BH evaporation lets pure –> mixed (i.e., is non-unitary).
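The “overlap of decohered branches is less than exp(-1000)” criterion can be illustrated with a toy estimate (an editor’s sketch with randomly chosen environment states, not a real decoherence calculation): the overlap of two branch environments is a product of per-degree-of-freedom overlaps, so it falls exponentially with the number of environment degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(7)

def rand_qubit():
    """A random normalized single-qubit state."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

# Two "branches" whose environments differ qubit by qubit.  The total branch
# overlap is the product of single-qubit overlaps |<psi_i|phi_i>|, each < 1,
# so it decays roughly exponentially with the environment size n_env.
overlaps = []
for n_env in (1, 5, 10, 30):
    per_qubit = [abs(np.vdot(rand_qubit(), rand_qubit())) for _ in range(n_env)]
    overlaps.append(np.prod(per_qubit))

print(overlaps)   # shrinks fast; thousands of qubits would reach ~ exp(-1000)
```

This is the sense in which detecting the “other” branch of even a 1 gm object requires absurd experimental power, as in the Omnès estimate mentioned earlier in the thread.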

108. Luboš Motl Says:

Dear Steve,

Wigner’s friend not only “can” say “Lubos thinks he has a definite outcome (saw spin up), but I can detect the other Lubos (saw spin down) who is just as sure about his result”.

He *must* say that, otherwise he is not doing a perfectly accurate (or at least more accurate than I can do) calculation (which is the whole point of inserting him into the thought experiment). I am just another physical object, so of course I evolve into linear superpositions, including superpositions of macroscopically distinct states. If you have doubts about the superposition principle and linearity of QM evolution operators, you misunderstand lesson 2 of an undergraduate course. In that case, it’s nothing deep, it’s pure stupid.

Of course it’s an isomorphic set of facts and different perspectives, just as we encounter with Alice in the black hole. How could it not be? Both discussions are about quantum mechanics, after all. Different observers may and often do measure observables that don’t commute with each other. In the black hole case, this becomes extremely natural, and not just a Wigner’s-friend academic discussion, because the “decoherence” operating from the viewpoint of the exterior and interior field mode operators is completely different, having different “preferred observables” and their eigenstates’ bases, and so on.

You ask: “Which of the various observers in the discussion actually transforms the unitary Schrodinger calculation into a “real”, “classical” outcome (or probabilities of outcomes)?”

I have answered this question about 180 times in 75 places already, but you must have missed all of them. According to quantum mechanics, there is no “classical” or (in this sense) “real”, i.e. objective, reality. Quantum mechanics answers subjects’ questions, and the answers may depend, and generally do depend, on the subject. There is no rule in QM that would objectively say which questions must be asked or how many questions should be asked. I can ask about something at the time when I observe something – and QM calculates the probability.

But it’s not quite a set of perfectly consistent histories, because Wigner’s friend may have a more consistent set of consistent histories in which he realizes that I also evolve into linear superpositions, and they may recohere sometime in the future with a nonzero probability. So he will wisely avoid imagining that an objective single “classical” reality exists at the point of my measurement, and will continue to work with superpositions of my brain in different states up to the moment of his own, more accurate measurement, for which QM gives answers as well. No contradiction that would be “really weird” may ever occur in this setup, because quantum mechanics predicts correlations between the answers to various questions – so whenever the “small picture” and “big picture” observers ask the same question, they get the same answers.

But QM can’t say and doesn’t say that someone *must* treat the value of an observable at some moment as a classical fact. It never does. On the contrary, QM always insists on summing over all histories, considering all possible superpositions as the intermediate states, when one wants to calculate the most accurate answer to a question. “Collapsing” the wave function or “truncating” the set of histories in the intermediate states before the observation is always a mistake that may only be relatively harmless if classical physics is a good enough description for the collapsed degrees of freedom (whatever “good enough” means quantitatively). When it’s not, it’s just wrong to “collapse” prematurely. Always.

It’s sort of remarkable that you’re supposed to be on the side that *does* understand QM better than the other side but you too keep on emitting this breathtaking rubbish about “when quantum mechanics becomes classical”. It never becomes classical. Quantum mechanics fundamentally rejects the notion of objective reality seen in classical physics – because it is *not* classical physics – which emerges as an approximation, and only as an approximation to the laws of quantum mechanics. If I were your undergraduate or graduate QM instructor, I would probably give you the homework to write these elementary defining facts about QM 200 times on the blackboard or in a notebook. Quantum mechanics isn’t classical, stupid. Quantum mechanics isn’t classical, stupid. And so on.

“If you give a precise criterion for when FAPP applies…”

Haven’t I made it very clear that such a criterion doesn’t exist and all people who are looking for this criterion – for a moment when quantum mechanics becomes perfectly classical – are completely deluded? Quantum mechanics never becomes perfectly classical. What word do you need to be explained? Deluded?

Cheers
LM

109. steve hsu Says:

Based on your comments in #108 above it appears we don’t disagree on anything :^)

I had thought from earlier remarks that *you* believe there is an ultimate classical reality (which the QM formalism allows us to compute probabilities governing), etc. but apparently I was mistaken.

Cheers,
Steve

110. Luboš Motl Says:

Wow, that’s great news, Steve! I was just going to write a rant about the anti-quantum delusion of high-tier academic administrators in Michigan. 😉 Fortunately, I was busy and lazy.
