Interpretive cards (MWI, Bohm, Copenhagen: collect ’em all)

I’ve been way too distracted by actual research lately from my primary career as a nerd blogger—that’s what happens when you’re on sabbatical.  But now I’m sick, and in no condition to be thinking about research.  And this morning, in a thread that had turned to my views on the interpretation of quantum mechanics called “QBism,” regular commenter Atreat asked me the following pointed question:

Scott, what is your preferred interpretation of QM? I don’t think I’ve ever seen you put your cards on the table and lay out clearly what interpretation(s) you think are closest to the truth. I don’t think your ghost paper qualifies as an answer, BTW. I’ve heard you say you have deep skepticism about objective collapse theories and yet these would seemingly be right up your philosophical alley so to speak. If you had to bet on which interpretation was closest to the truth, which one would you go with?

Many people have asked me some variant of the same thing.  As it happens, I’d been toying since the summer with a huge post about my views on each major interpretation, but I never quite got it into a form I wanted.  By contrast, it took me only an hour to write out a reply to Atreat, and in the age of social media and attention spans measured in attoseconds, many readers will probably prefer that short reply to the huge post anyway.  So then I figured, why not promote it to a full post and be done with it?  So without further ado:


Dear Atreat,

It’s no coincidence that you haven’t seen me put my cards on the table with a favored interpretation of QM!

There are interpretations (like the “transactional interpretation”) that make no sense whatsoever to me.

There are “interpretations” like dynamical collapse that aren’t interpretations at all, but proposals for new physical theories.  By all means, let’s test QM on larger and larger systems, among other reasons because it could tell us that some such theory is true or—vastly more likely, I think—place new limits on it! (People are trying.)

Then there’s the de Broglie-Bohm theory, which does lay its cards on the table in a very interesting way, by proposing a specific evolution rule for hidden variables (chosen to match the predictions of QM), but which thereby opens itself up to the charge of non-uniqueness: why that rule, as opposed to a thousand other rules that someone could write down?  And if they all lead to the same predictions, then how could anyone ever know which rule was right?

And then there are dozens of interpretations that seem to differ from one of the “main” interpretations (Many-Worlds, Copenhagen, Bohm) mostly just in the verbal patter.

As for Copenhagen, I’ve described it as “shut-up-and-calculate, except without ever shutting up about it”!  I regard Bohr’s writings on the subject as barely comprehensible, and Copenhagen as less of an interpretation than a self-conscious anti-interpretation: a studied refusal to offer any account of the actual constituents of the world, and—most of all—an insistence that if you insist on such an account, then that just proves that you cling naïvely to a classical worldview, and haven’t grasped the enormity of the quantum revolution.

But the basic split between Many-Worlds and Copenhagen (or better: between Many-Worlds and “shut-up-and-calculate” / “QM needs no interpretation” / etc.), I regard as coming from two fundamentally different conceptions of what a scientific theory is supposed to do for you.  Is it supposed to posit an objective state for the universe, or be only a tool that you use to organize your experiences?

Also, are the ultimate equations that govern the universe “real,” while tables and chairs are “unreal” (in the sense of being no more than fuzzy approximate descriptions of certain solutions to the equations)?  Or are the tables and chairs “real,” while the equations are “unreal” (in the sense of being tools invented by humans to predict the behavior of tables and chairs and whatever else, while extraterrestrials might use other tools)?  Which level of reality do you care about / want to load with positive affect, and which level do you want to denigrate?

This is not like picking a race horse, in the sense that there might be no future discovery or event that will tell us who was closer to the truth.  I regard it as conceivable that superintelligent AIs will still argue about the interpretation of QM … or maybe that God and the angels argue about it now.

Indeed, about the only thing I can think of that might definitively settle the debate, would be the discovery of an even deeper level of description than QM—but such a discovery would “settle” the debate only by completely changing the terms of it.

I will say this, however, in favor of Many-Worlds: it’s clearly and unequivocally the best interpretation of QM, as long as we leave ourselves out of the picture!  I.e., as long as we say that the goal of physics is to give the simplest, cleanest possible mathematical description of the world that somewhere contains something that seems to correspond to observation, and we’re willing to shunt as much metaphysical weirdness as needed to those who worry themselves about details like “wait, so are we postulating the physical existence of a continuum of slightly different variants of me, or just an astronomically large finite number?” (Incidentally, Max Tegmark’s “mathematical multiverse” does even better than MWI by this standard.  Tegmark is the one waiting for you all the way at the bottom of the slippery slope of always preferring Occam’s Razor over trying to account for the specificity of the observed world.)  It’s no coincidence, I don’t think, that MWI is so popular among those who are also eliminativists about consciousness.

When I taught my undergrad Intro to Quantum Information course last spring—for which lecture notes are coming soon, by the way!—it was striking how often I needed to resort to an MWI-like way of speaking when students got confused about measurement and decoherence. (“So then we apply this unitary transformation U that entangles the system and environment, and we compute a partial trace over the environment qubits, and we see that it’s as if the system has been measured, though of course we could in principle reverse this by applying U⁻¹ … oh shoot, have I just conceded MWI?”)
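
To make that parenthetical concrete, here is a minimal numpy sketch of my own (with a CNOT standing in for the entangling unitary U—a toy choice for illustration, not anything specific from the course): entangling a qubit with a one-qubit “environment” kills the off-diagonal terms of its reduced density matrix, exactly as if it had been measured, and applying U⁻¹ brings them back.

    import numpy as np

    # System qubit in |+> = (|0>+|1>)/sqrt(2), "environment" qubit in |0>.
    plus = np.array([1, 1]) / np.sqrt(2)
    zero = np.array([1, 0])
    psi = np.kron(plus, zero)              # joint state, ordered as system (x) environment

    # Toy entangling unitary U: a CNOT with the system as control and the environment as target.
    U = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=complex)

    def reduced_system_state(state):
        """Partial trace over the environment qubit (the second tensor factor)."""
        rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
        return np.trace(rho, axis1=1, axis2=3)

    after_U = U @ psi
    print(reduced_system_state(after_U))   # ~[[0.5, 0], [0, 0.5]]: as if the system were measured

    undone = U.conj().T @ after_U          # apply U⁻¹ (here U happens to be its own inverse)
    print(reduced_system_state(undone))    # the 0.5 off-diagonals are back: nothing irreversible happened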

On the other hand, when (at the TAs’ insistence) we put an optional ungraded question on the final exam that asked students their favorite interpretation of QM, we found that there was no correlation whatsoever between interpretation and final exam score—except that students who said they didn’t believe any interpretation at all, or that the question was meaningless or didn’t matter, scored noticeably higher than everyone else.

Anyway, as I said, MWI is the best interpretation if we leave ourselves out of the picture.  But you object: “OK, and what if we don’t leave ourselves out of the picture?  If we dig deep enough on the interpretation of QM, aren’t we ultimately also asking about the ‘hard problem of consciousness,’ much as some people try to deny that? So for example, what would it be like to be maintained in a coherent superposition of thinking two different thoughts A and B, and then to get measured in the |A⟩+|B⟩, |A⟩-|B⟩ basis?  Would it even be like anything?  Or is there something about our consciousness that depends on decoherence, irreversibility, full participation in the arrow of time, not living in an enclosed little unitary box like AdS/CFT—something that we’d necessarily destroy if we tried to set up a large-scale interference experiment on our own brains, or any other conscious entities?  If so, then wouldn’t that point to a strange sort of reconciliation of Many-Worlds with Copenhagen—where as soon as we had a superposition involving different subjective experiences, for that very reason its being a superposition would be forevermore devoid of empirical consequences, and we could treat it as just a classical probability distribution?”

I’m not sure, but The Ghost in the Quantum Turing Machine will probably have to stand as my last word (or rather, last many words) on those questions for the time being.

323 Responses to “Interpretive cards (MWI, Bohm, Copenhagen: collect ’em all)”

  1. Jay Says:

    >what would it be like to be maintained in a coherent superposition of thinking two different thoughts A and B, and then to get measured in the |A⟩+|B⟩, |A⟩-|B⟩ basis?

    Do you see that as different from the problem of measuring in the |dead>+|alive>, |dead>-|alive> basis?

  2. Jay Says:

    Or maybe better: do you think it matters if the living cat is awake or sleeping?

  3. rossry Says:

    > there was no correlation whatsoever between interpretation and final exam score—except that students who said they didn’t believe any interpretation at all, or that the question was meaningless or didn’t matter, scored noticeably higher than everyone else.

    …made me involuntarily and uncontrollably laugh out loud (mildly inappropriate when one has apartmentmates still asleep…).

    Thanks for making my morning, Scott!

  4. Scott Says:

    Jay: Well, the form of the question is “what is it like?” Presumably it’s not like anything to be dead. But sure, one could also ask whether it’s like anything to be the live cat component of a state rotating unitarily through a 2-dimensional subspace that also includes a dead cat component, and if so how it differs from the normal experience of being a cat.

  5. Stephen Jordan Says:

    My sense is that you are possibly being overgenerous to the Copenhagenists. As I see it, the central difference between the Copenhagen cluster of interpretations and the Many-Worlds cluster is their stance on whether there exists a special category of interactions called measurements.

    My familiarity with the original sources is admittedly spotty but as far as I can tell the Copenhagen interpretation as originally formulated really did assert that there were some interactions that count as measurements, which collapse wave functions, and others that do not. This was supposed to be some kind of strict binary distinction built into fundamental physics, the delineation of which was left as an open problem.

    I think modern post-Everett scientists who identify as Copenhagen adherents have a tendency to assert that the original formulators couldn’t possibly have been so confused and really were advocating something more along the lines of a shut-up-and-calculate attitude. But as far as I can tell they really were that confused.

  6. Celestia Says:

    >(“So then we apply this unitary transformation U that entangles the system and environment, and we compute a partial trace over the environment qubits, and we see that it’s as if the system has been measured, though of course we could in principle reverse this by applying U⁻¹ … oh shoot, have I just conceded MWI?”)

    If you go to the ‘Church of the Larger Hilbert Space’ enough times eventually you start believing it. 🙂

  7. Tim Maudlin Says:

    Yipe! I suppose we have to wait on the longer post, but…come on Scott!

    There are logically only three options for dealing with Schrödinger’s cat, i.e. the “measurement problem”:

    1) There are real, physical collapses. These ought to be governed by some sharp equations. Standard example: GRW. Does this “change quantum mechanics”? Not in any substantive sense, since “standard” quantum mechanics is a vague and opportunistic mix of linear evolution and collapse evolution. That’s how von Neumann axiomatized it.

    2) Forego collapses and add additional variables (and please don’t call them “hidden” variables: they better be manifest!). Standard example: Bohmian mechanics. The laws are perfectly sharp and tables are perfectly sharp and cats are perfectly sharp, and at the end of a Schrödinger’s cat experiment there are exactly as many cats as there were at the beginning, viz. one. Either alive or dead. Just as in GRW, and in accord with what everybody actually believes about the physical world.

    3) Many worlds. Get rid of collapses (thus altering standard QM as codified by von Neumann). One cat in, many cats out, some alive and some dead. And now you have a world of trouble both accounting for what probability might mean in such a theory and, more importantly, reconciling that account of the world with the world we thought we live in.

    Are there many rules that, as a matter of logic, could be used for the “guidance equation” in Bohmian mechanics, the law governing the additional variables? Sure: you could add some stochasticity and end up with Nelson’s stochastic mechanics. But there are clearly two non-arbitrary choices for the degree of stochasticity: turn it up to infinity and get Bell’s version of many worlds, or turn it down to zero and get the deterministic rule of Bohm. The standard Bohmian guidance equation is not in the least arbitrary: it is the simplest thing by far to do. At last count there were around ten different motivated derivations of the guidance equation. Here’s one quick one: I have a scalar field (the wave function) and want a velocity (and hence vector) field. What is the simplest way to get one? Take the gradient! Now you want something empirically adequate? The imaginary part of the gradient works perfectly, and the imaginary part behaves properly under time-reversal since we time reverse the wave function by taking the complex conjugate. Any of your other rules just complicates the situation to no purpose.
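
    In symbols, for particle k the deterministic rule is the textbook guidance equation—the imaginary part of the gradient of log ψ, i.e. of ∇ψ/ψ, with the constants fixed by matching the quantum probability current:

        \frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\,\operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\!(Q_1,\dots,Q_N,t)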

    Your main problem as a computer scientist, I think, is that the whole measurement problem is swept under the rug, but in a very precise sweeping: just posit “measurement gates”! Without them, your quantum computations don’t have outcomes. So you need them. But that’s as principled as the “and then a miracle happens” in the famous cartoon. That’s where your collapses are, but since they come at the end of your computation the collapses never show up in the dynamics of the computation that precedes them. For the rest of the circuit you are using a no-collapse theory, and so you end up talking in many-worlds terms.

    Nothing I have said above has brought in “us” or the hard problem of consciousness in any way. What I did say is that the world that we grow up in appears to have cats that do not multiply or split all the time. That seems to be the way the physical world is, quite independently of us and our consciousness: there are planets, and stars and rocks and all sorts of non-conscious things, and (through experience) we know a certain amount about how they behave. Physics has the job of explaining that behavior. Leave us out of it: as material beings we are way too complicated to treat in terms of fundamental physics.

    The problem with Many Worlds, as you present it, is not that it leaves *us* out of the picture, it is that it leaves *the entire physical world as we take it to be* out of the picture! And without that input—a bunch of things we start out thinking we know about the physical world—physics is obviously impossible. You can, of course, question the veridicality of our initial impressions of the world: Descartes famously did. But then you are just not doing physics. That’s why when QBism collapses into quantum solipsism, it also ceases to be physics.

    We know you don’t like Bohmian mechanics for rather idiosyncratic and subjective reasons. As a quantum computation guy, you spend your life thinking about spin degrees of freedom, and not the position/momentum degrees that actually transport your qubits from one gate to the next. For you, that part of a physical quantum computer is boring and trivial and does not come into your abstract analysis at all. But those physical degrees of freedom with their infinite-dimensional Hilbert spaces are there nonetheless, and physics has to deal with them even if you don’t. Any foundational approach to quantum theory that is based purely in the features of spin is courting disaster when it has to account for the real world. So please step out of your specialization in computing when addressing the foundational problems and think about, oh I don’t know, actual physical cats. There’s a lot more to them than spin.

  8. Mateus Araújo Says:

    One issue that I think is crucial, and you haven’t mentioned, is the meaning of objective probabilities.

    I think the philosophers have understood very well what subjective probabilities are – they are nicely formalised in the Bayesian probability framework.

    The problem is that as far as we can tell the quantum mechanical probabilities are objective. But what are objective probabilities? I don’t think there is any satisfactory interpretation of probability that can make heads or tails of them.

    One should hope that one can understand them via the interpretation of quantum mechanics, but most interpretations are sorely disappointing in this sense: Copenhagen and Collapse models simply postulate that they somehow exist without trying to explain what they are, QBism just banishes them in favour of subjective probabilities, without trying to explain why they seem so objective, and Bohmian mechanics is the worst offender: not only are the probabilities not objective, but the subjective probabilities must be distributed downright conspiratorially in order to hide all the non-locality going on at the fundamental level.

    The only interpretation that tries to understand what objective probabilities are is the Many-Worlds interpretation, via the Deutsch-Wallace theorem.

  9. Scott Says:

    Stephen #5: For you, I just edited the post to add some wry comments on Copenhagen, as “shut up and calculate minus the shut up part” (actually I was planning to do so anyway). Hope you enjoy it! 🙂

    On the other hand, I also strongly believe that every interpretation needs to be judged by what its “modern” proponents (if there are any) say about it—or at the very least, that there needs to be a statute of limitations for being saddled with the misconceptions of the past.

  10. Bunsen Burner Says:

    I think any modern discussion of Quantum Foundations needs to consider a more comprehensive map of the interpretational landscape, as given here, for example:

    https://arxiv.org/abs/1509.04711

  11. cds Says:

    I’ve always been confused by MWI because, near as I can tell (and I am not an expert so I will put it out here to others to correct me if I misunderstand) it still requires an additional postulate that tells us how to measure probability. Whenever people try to explain it, they consider states like |1, measured 1> + |2, measured 2> and everything seems to make sense. But I’m concerned about 0.1 |1> + 0.9 |2>, where |2> is that much more likely than |1>. What tells me I know |2> is measured more often than |1> and how do I think about that in the MWI?

    With an additional postulate like this, it seems to be just another version of thinking of quantum mechanics as some complex version of actual probabilities.

  12. Bunsen Burner Says:

    Also, QBism is now passe, more thought is leading to the Participatory Realism paradigm.

    https://arxiv.org/abs/1601.04360

  13. Scott Says:

    Bunsen Burner #12: Unless interpretations of QM are like hair accessories, who cares which ones are or aren’t “passé”?

  14. Bunsen Burner Says:

    My point is that the label QBism is passe, read what Fuchs et al are talking about now. They view Participatory Realism as more indicative of their ideas and a better label than QBism.

  15. Mateus Araújo Says:

    cds #11:

    I think the best way to understand that is through the toy Many-Worlds theory introduced by Adrian Kent. In Kent’s world, people live in a classical deterministic computer simulation, and when a measurement is made on the state m|1> + n|2>, m successor worlds are created with result 1, and n successor worlds are created with result 2. The coefficients m and n are restricted to be positive integers.

    One can then show that if one takes m/(m+n) to be the “probability” of getting result 1, and n/(m+n) to be the “probability” of getting result 2, these “probabilities” behave as we would expect in regard to the law of large numbers and decision theory.
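
    If it helps, here is a little Python illustration of that counting rule (my own toy sketch, not Kent’s actual construction): follow a “typical” successor world—one chosen uniformly among the successors created at each measurement—and the observed frequency of result 1 settles down near m/(m+n), as the law of large numbers would suggest.

        import random

        def typical_world_outcomes(m, n, num_measurements):
            """At each measurement, m successor worlds see result 1 and n see result 2;
            following a uniformly random successor at each step traces out a 'typical' world."""
            successors = [1] * m + [2] * n
            return [random.choice(successors) for _ in range(num_measurements)]

        history = typical_world_outcomes(m=1, n=3, num_measurements=100_000)
        print(history.count(1) / len(history))   # close to m/(m+n) = 0.25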

  16. Scott Says:

    cds #11: Yes, you do need some postulate to the effect that absolute squares of amplitudes give the probabilities of experiences. For 60 years, starting with Everett himself, people have sought and often claimed to have derived that postulate from something simpler. But of course, whatever assumptions they use in deriving it are also open to question.

    In MWI’s defense, though, it seems to me that every interpretation—not just MWI—has the problem equally (if you consider it a problem) of explaining where the Born probabilities come from. Even de Broglie – Bohm needs to explain why those probabilities reigned at the beginning of time (if they’re assumed to hold exactly).

    Once you’ve agreed to stick in probabilities at all, though, I confess that the question “why the Born probabilities” is not one that I ever lost sleep over, because one can give about 20 different arguments for why they’re the only choice that makes internal mathematical sense (that is, once you’ve already fixed the unitary part of QM, and thereby picked out the 2-norm as your fundamental norm). But there’s maybe a philosophical question about why different branches of the wavefunction should correspond to different experiences at all, or what properties a wavefunction needs to have before that happens.

    MWI differs from other interpretations precisely in denying that the amplitudes just give you a different, 2-norm-based way to calculate probabilities of observations: it says that each branch of the wavefunction with you in it corresponds to an actual experience that some version of you has, although there are “more yous,” or something, in the higher-amplitude branches. (Incidentally, my phone kept aggressively autocorrecting the word “yous”: guess it’s not on board with MWI!) Or better: if an interpretation says something like this, then we call it a variant of MWI.

  17. Peter Morgan Says:

    You find that your “students who said they didn’t believe any interpretation at all, or that the question was meaningless or didn’t matter, scored noticeably higher than everyone else” (good to know!), but that /might/ be understood as “those students who could comfortably switch between interpretations as they seemed more or less useful as a way to think strategically and intuitively when first looking at a problem …”. So perhaps also ask next time something like “Do you switch between interpretations when solving problems?”

  18. Bunsen Burner Says:

    Also, a very comprehensive account of the transactional interpretation is provided by Ruth Kastner in “The transactional interpretation of quantum mechanics: The reality of possibility”

  19. Peter Morgan Says:

    QFT is easier to interpret than QM. I’ve linked to this before, https://arxiv.org/abs/1709.06711, which is akin to de Broglie-Bohm interpretations in noticing that there’s a way to construct a random field using just the resources available when we use the quantized EM field (and, differently, for the quantized Dirac spinor field case). A new class of such constructions, and indeed there’s no more uniqueness about it than there is about other de Broglie-Bohm class interpretations, but sometimes it’s good to know more stuff and know when to use it or not.

  20. Scott Says:

    Bunsen Burner #10: Classifying interpretations is well and good (I also have a predilection for classifying things in big tables 🙂 ), but if you’re just trying to figure out for yourself what’s true or reasonable about a given philosophical question, then needing to address every subtle variation on every alternative version of every possible answer that anyone ever cared enough to write a paper about or attach a catchy slogan to, is just as likely to be a hindrance as a help. After spending some time to acquaint yourself with the literature, why not thereafter mostly restrict your attention to those possible answers that are actually live options for you?

  21. Bunsen Burner Says:

    I’m not sure what your point is, Scott. The classification I provided represents how experts in Quantum Foundations are thinking about the conceptual links between the various interpretations. If these experts consider all of those options still available, then who am I, a complete dilettante, to say differently. That way lies Dunning-Kruger.

  22. Scott Says:

    Bunsen #21: No, it’s how Adán Cabello thinks about it, which is great! But I doubt that most physicists—even those who are interested in foundations of QM, publish papers on the subject, etc.—could give any clear account of most of the options he lists and how they differ from each other. That could of course mean they’re a bunch of ignoramuses, but it could also mean that spending too much time taxonomizing all the different things people have said about a question, can sometimes take you too far away from thinking about the original question itself.

  23. Bunsen Burner Says:

    Oh, of course. I don’t think Cabello’s classification is an end in itself. I just gave the link as an interesting talking point, and to show how the landscape around quantum interpretations has been shifting. My suspicion is that a lot of interpretations were formed in the past before physicists had developed a rich vocabulary for discussing QM. Hopefully your generation, by introducing relationships to information and computation, and with better conceptual awareness of ideas like contextuality, can provide a richer, more modern interpretation.

  24. Scott Says:

    Tim Maudlin #7: I know the issue with calling them “hidden” variables; I called them that only because Bohm himself did.

    Am I wrong in thinking that the “uniqueness” of the de Broglie – Bohm guiding equation, delivered by the various derivations you talk about, is extremely special to the case of a bunch of nonrelativistic point particles moving around in a potential?

    If so, then forget about quantum computation—dBB loses its uniqueness (and thus, to me, much of its appeal) even when we try to deal with quantum field theory, and the degrees of freedom that it posits as being much more fundamental than point particles.

    I’m aware, of course, that there are more complicated degrees of freedom in the world than the internal states of spin-1/2 particles, a.k.a. qubits (at least in currently-understood physics—in the ultimate quantum theory of gravity, who knows?). But it’s equally true that there are other degrees of freedom in the world besides the positions and momenta of particles (and indeed, that QFT doesn’t even regard those degrees of freedom as particularly fundamental).

    Say whatever else you like about it, but because of its abstractness and generality, MWI (for example) is able to deal with whatever the fundamental degrees of freedom turn out to be. All it wants as input are the basic postulates of QM, together with a Hamiltonian that gives rise to a branching structure.

    dBB, by contrast, seems to me to have the blessing and curse of being forever tied to one particular class of quantum systems (namely, point particles moving around in a potential)—which makes it pretty nice for those systems but less so for anything else. You can, of course, generalize dBB to other systems—or make up your own, brand-new “manifest variable” theories to order for those systems (I’ve done it)—but then you lose a lot of what made dBB appealing in the first place.

    Incidentally, not that it matters, but in the superconducting QCs that Google, IBM, Intel, and Rigetti are right now building, the qubits don’t move at all: they just sit there, and are acted on by pairwise nearest-neighbor Hamiltonians. (In other architectures, like trapped ions, the qubits do sometimes move—and of course in photonic QC, they never stop moving!)

    Certainly I’ve never seen any claim that dBB specifically (as opposed to, let’s say, the more general idea of “manifest variables”) provides any new insight whatsoever for quantum computing or information. I’m curious: do you think that dBB provides new or different insight about any issue that’s been of research interest in physics (outside of quantum foundations) for, say, the past 40 years? Or, knowing you, would your claim be closer to: no, it probably doesn’t provide such insight, but that’s just as well, because physics has gone so far off the rails for the last 40 years that it would be better off restoring its state to 1978 anyway? 🙂

  25. Howard Wiseman Says:

    If I got to heaven and found God and the angels still arguing about the interpretation of QM, I think I’d kill myself.

  26. Andrei Says:

    I was very excited about learning Bohmian mechanics primarily because I hoped it would somehow “explain” quantum-specific algorithms like Shor and Grover. Then I found out (from your papers primarily) that it only applies to continuous systems, and that the discrete analogues are kinda forced and don’t feel in any way like an explanation.

  27. Jon Tyson Says:

    The “who gives a shit about interpretations” interpretation needs a name so that people who want to discuss philosophy ad nauseam will not be able to omit it from the list, for example in places like Wikipedia. To avoid swearing, how about the “many interpretations” interpretation?

  28. Tim Maudlin Says:

    Scott (#24)

    The main point of my post is to provide a fundamental division into three types of “interpretation” derived from the answers to two questions: 1) Does the wavefunction of a system provide a complete physical description of the system, i.e. are two systems ascribed the same wavefunction physically identical or not? and 2) Does the wavefunction always evolve via a linear equation of motion or not? This yields an exhaustive set of four categories, one of which (wavefunction not complete and evolution not always linear) has no canonical example because either one of the “no” answers provides the means to solve the measurement problem and so having both answers “no” is just gilding the lily.

    This is a better place to begin than with Adán Cabello’s table. First, the psi-epistemic category was killed off by PBR. And participatory realism? Measuring devices and observers are just more physics, governed by the same laws, unless you want to directly appeal to consciousness in an observer. If you do, you are screwed by the mind-body problem. If you try to make “observers” physically special (à la Wigner) you are never going to have mathematically formulable laws.

    So Bohmian mechanics is the most well-developed scheme for keeping the linear evolution and adding more variables. But it is not the only scheme. If you want to do QFT, one way is given by Bell in “Beables for Quantum Field Theory”. These so-called Bell-type quantum field theories are set on a lattice and have probabilistic dynamics. But contrary to your suggestion, there is always a unique minimal stochastic theory. So giving up determinism does not mean giving up a motivated choice among empirically indistinguishable theories.

    The general scheme is this. First, settle on the additional variables to be added. In Bohmian Mechanics that is the positions of particles, in Bell’s QFT it is fermion number density. (Bell recounts how he came to that decision. It wasn’t his first choice.) The additional variables are the local beables of the theory, in contrast to the wavefunction, which represents a non-local item. The image of the physical world we think we know, with tables and chairs and cats and stars and planets, is to be found in the history of the distribution of these local beables in space-time. So what we need from the theory is an account of how these beables move.

    How do we get that? Once you have the local beables you automatically get an abstract configuration space: the set of possibilities for the distribution of the local beables. And the wavefunction, which never collapses, defines a velocity flow on the configuration space. There is a unique way to generate such a flow from the wavefunction: it is just the probability current. And although many different detailed particle motions can generate the same probability current, there is one obvious “laziest” set of trajectories that will give us all of the right results. Those laziest trajectories are what Bell chooses, and it gives us his version of QFT. Not deterministic, but well defined (on a lattice) and empirically fine.
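
    Schematically, in the continuum case (Bell’s lattice theory uses the discrete analogue), with |ψ|² the density on configuration space and j^ψ the usual probability current, the flow is

        v^{\psi}(q,t) \;=\; \frac{j^{\psi}(q,t)}{|\psi(q,t)|^{2}}\,, \qquad \partial_t |\psi|^{2} + \nabla\cdot j^{\psi} = 0,

    and the continuity equation is what guarantees that beables distributed according to |ψ|² stay |ψ|²-distributed at all later times.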

    Bell’s point about having local beables is that both the macroscopic experimental situation and, critically, the outcome of an experiment are typically described in terms of the locations and motions of macroscopic objects. Bohr insisted on this as well, but seems to have ended up with these macroscopic objects as not being subject to a quantum-mechanical treatment. But if there are microscopic local beables, as Bell wants, then the behavior of Bohr’s “classical” experimental equipment is just the collective behavior of a huge number of microscopic local beables. Then we can extend the quantum formalism to cover the experimental equipment—and the cat—without any problems. The wavefunction never collapses, but the cat either ends up alive or ends up dead.

    Does this help solve any real problem? Sure: make the geometry of space (or space-time) a local beable and now you have a coherent framework for doing quantum gravity.

  29. Atreat Says:

    Scott, thanks for the reply, but I still find this too cagey by half.

    We know that Scott is not an anti-realist. We know he believes P!=NP. We know he doesn’t buy Penrose’s argument about Strong AI. We know he has thought long and hard about the hard problem of consciousness and wrote a very thoughtful paper about it (mentioned above).

    Now, we’ve learned that you find QBism, as well as Copenhagen, to be too radically subjective, and that you appreciate MWI and the “shut up and calculate” philosophy… and yet, you acknowledge that both of these are kinda cowardly when it comes to including the observer.

    What we *still* don’t know is Scott’s predilection for how to reconcile or deal with this.

    Let me see if I can be more explicit in what I am asking…

    You acknowledge your discomfort with subjective or anti-realist interpretations while *at the same time* you also acknowledge that MWI and “shut up and calculate” are running away from a real question.

    Naturally, I would think that objective collapse theories or Bohmian mechanics would be your “home base”, but you seem hesitant to throw your lot in with them.

    An answer I’d find gratifying would be something like, “Yes, Adam, my own personal philosophical predilection would be for Bohm or objective collapse to be correct, but I have little faith that we’ll find evidence for this, though of course I’d be thrilled if we did!”

    Now, if you did finally come out and say it I might reply that your personal predilection for ontic philosophy seems mighty at odds with your courage in promoting the interpretations that jibe with it 🙂

  30. Mitchell Porter Says:

    The empirical content of the Born rule is that various events occur with certain frequencies, and that is a huge problem for many-worlds.

    You have to pick a preferred basis for decomposition of the wavefunction, and then postulate that the measure of each branch is given by the Born rule, applied to its coefficient.

    Or you take the Deutsch-Wallace path, and adopt an incomprehensible doctrine as to how the Born rule actually describes game-theoretic rationality in the multiverse.

    Everett himself made an argument about the asymptotic frequency of measurements *within* a single (typical?) branch, converging on the Born frequencies; an argument which seems to be part of current “cosmological” or “multiverse” interpretations like those of Tegmark & Aguirre, and Susskind.

    And there would be other examples. But every example of dealing with this problem for many worlds, involves either untenable contortion of argument (hello Deutsch and Wallace), or modification of the pristine theory (like selection of a preferred basis), or just leaving it unaddressed by ignoring it.

    Just to make it absolutely clear what I am talking about. If quantum mechanics says that event A should be twice as common as event B, and if you are proposing to explain QM with some kind of multiverse theory, then event A should occur twice as often *in your multiverse* as event B.

    But that’s not a thing you can just “postulate”, right? If your multiverse contains one world with a dead cat and one world with a live cat, then the cat dies 50% of the time, and the cat lives 50% of the time. You can’t just say, oh but the first world has a measure of 36% and the second world has a measure of 64%, and that allows us to save the phenomena. What does it mean to say that one world has twice the measure of another world, if it doesn’t mean that there are twice as many copies of the first world?

    That kind of approach *might* be defensible when we are dealing with a continuous infinity of worlds, because then you really do need a measure. But discussion of the Born rule for many worlds rarely gets even this far. It’s as if the cavalier attitude to reality exhibited by the Copenhagen interpretation has been inherited by many-worlders, so that they feel no need to say exactly what components or substructures of the universal wavefunction are the “worlds”.

  31. RandomOracle Says:

    Mateus Araújo #8. I didn’t know about the Deutsch-Wallace theorem and found it interesting, so thanks! However, I’m a bit confused as to how this addresses objective probabilities in QM. From what I understand, Born’s rule is derived using decision theory as an optimal betting strategy of rational agents. But how can something that speaks about “rational agents” and “betting strategy” be objective? What would the rational agents represent and why can’t they be irrational?

  32. John D. Says:

    You guys are ALL wrong, because you’re all too radical. Best interpretation? The OLD quantum theory! Orbits are real!

    (I’m kidding. Sort of. Wavefunctions, spin, virtual particles and QFT really do exist. But Sommerfeld orbits, properly corrected, really are a lot easier to picture, as in:

    Rise and Fall of the Old Quantum Theory
    https://arxiv.org/abs/0802.1366 )

  33. Murmur Says:

    Why doesn’t the transactional interpretation make any sense?

  34. Scott Says:

    Murmur #33: See for example this Physics StackExchange thread, where Peter Shor makes the point that it’s not clear whether there’s any version of TI in the literature that’s well-defined enough to account for any nontrivial quantum algorithm (such as, well, Shor’s algorithm 🙂 ). Much like de Broglie – Bohm, actually, TI seems to have been developed with certain very specific examples of quantum systems in mind—and if you want to generalize it to arbitrary quantum systems, without being hopelessly vague, the only known way to do so is to graft a more conventional account of QM onto it, and thereby lose what was supposed to be appealing about the interpretation in the first place. Or if I’m wrong about that, then hopefully someone who knows TI can show me why.

    Incidentally, regardless of the object-level questions, much of the Wikipedia article about TI reads like an apologia written by a proponent and should probably be flagged for revision.

  35. Scott Says:

    Atreat #29: Everything you say about my views seems correct to me. As for why I don’t then throw in my lot with objective collapse theories or de Broglie – Bohm, if they “jibe with my personal predilection for ontic philosophy” … well, that’s a little like asking: “you say that you love justice and kindness, and detest evil and violence? Then why not throw in your lot with the belief in a wise God who will stand as Judge over us all, rewarding the righteous with eternal bliss and smiting the wicked with hellfire? After all, that belief certainly jibes with your personal predilection for good!”

    Whether it would satisfy any “personal predilections” if something were true, and whether there are any positive reasons for thinking it true, are two questions that ought to be separated so completely in one’s mind that there’s not even the “appearance of a conflict of interest” (as the bureaucrats say).

    If any evidence ever emerged for an objective-collapse proposal (like Penrose-Diosi or GRW), or for de Broglie – Bohm (which would of course necessitate an actual change to QM itself, for example in the form of “non-equilibrium matter”)—well, that would be the biggest revolution in physics since QM itself, and I hope I’d be one of the first to be on board with it. We’d lose a lot of the mathematical elegance of QM, and would need to deal with issues like superluminal signalling (in the case of non-equilibrium matter), violations of the conservation of energy (in the case of GRW), and the existence of a preferred reference frame. I think those are all excellent reasons for betting against this eventuality. But, OK, maybe they’re a price that many people would gladly pay in return for regaining a single well-defined world.

    Failing that, if it sounds like I’m saying that the foundations of QM should put any thoughtful person into a state of permanent low-level discomfort, analogous to what one perhaps feels about the mind/body problem—I think that’s right!

    But there’s no need to be all fatalist about it. As I mentioned in the post, maybe something new will be discovered that changes the whole terms of the argument (it’s happened many times in the history of science). But even if not, well, many people manage to live reasonably worthwhile lives, even though certain features of reality cause them discomfort that never does and never should really go away. Features like evil and injustice, for example.

  36. vzn Says:

    hi all/ SA. there is a lot of new cutting-edge research, theoretical/ applied, pushing toward a new interpretation of QM called “pilot wave hydrodynamics”; haven’t seen much mention of it in your blog, and it’s now being pursued seriously by a substantial minority faction worldwide. it also falls under the header “emergent QM”. as Gibson once said, “the future is already here, it’s just not evenly distributed”. have been collecting links/ leads for ~½ decade and it’s now quite formidable/ copious. would like to write it all up at some point in a survey but lack time/ incentive at the moment. (not a professional academic like some ppl!). life goes on.

    https://vzn1.wordpress.com/2017/09/08/latest-on-killing-copenhagen-interpretation-via-fluid-dynamics/

  37. Sniffnoy Says:

    Sorry, minor thing, but Scott, you might want to fix up the formatting in your Copenhagen paragraph. You’ve got a pair of italics tags and an HTML-escaped ï showing through.

  38. Scott Says:

    Tim Maudlin #28: Thanks very much for your explanation of Bell’s local beables proposal, which was interesting and useful to me! But from your description, it sounds to me like that proposal again makes many choices that could be criticized as arbitrary. Most obviously, there’s the choice of a lattice, and (much more serious to me) presumably also the choice of a foliation of spacetime?

    But beyond that—while I’m glad to know that fermion number density wasn’t Bell’s “first choice,” but only the choice he settled on, that of course doesn’t rule out that there might be dozens of other choices that would work equally well. It’s like, if I wanted to track the location of a person in a room, I could probably do it using motion sensors, or heat sensors, or chemical sensors, or cameras and machine vision, or x-rays, or echolocation—any would probably work fine in normal conditions. But we don’t pick one and elevate it to the ontological definition of what it means to have a person in the room.

    What if there are multiple fermion species? Do we still have a unique “simplest” evolution rule, or do we lose it in that case? Also, how does Bell’s proposal handle fermion creation and annihilation? If supersymmetry were ever discovered, would you consider that a falsification of Bell’s proposal? 🙂 (To be clear, I’d regard the making of such a clear prediction as a feature, not a bug.)

  39. Scott Says:

    Sniffnoy #37: Thanks, fixed!

  40. Scott Says:

    vzn #36: We actually have discussed the hydrodynamics stuff on this blog (see especially my “Collaborative Refutation” post about Anderson and Brady). If we haven’t discussed it more, it’s because thinking about quantum particles as classical objects moving in a fluid doesn’t work to account even for 2-particle entanglement, and that fact is blindingly obvious to anyone who understands the Bell inequality. When Anderson and Brady tried to account for Bell violations, they did so only by retreating into total incomprehensibility (and incidentally, into predictions about QC that have already been falsified by experiment, not that that was a surprise).

    Yes, the hydrodynamic analogy looks nice for a single particle, and might even be instructive there. But the idea that it could possibly extend to two or more particles—without needing to situate the “fluid” in a high-dimensional configuration space, the way de Broglie – Bohm does—is all just a huge misunderstanding, or (more likely at this point) a refusal to face reality.

  41. Scott Says:

    Howard Wiseman #25:

      If I got to heaven and found God and the angels still arguing about the interpretation of QM, I think I’d kill myself.

    Wouldn’t you already be dead? (Or would you be hoping to reach some different, meta-heaven?)

  42. Howard Wiseman Says:

    Exactly.

  43. Tim Maudlin Says:

    Scott #38

    I’m glad to have brought out Bell’s point about local beables. This is a strain that runs through his work and is not discussed so much. So let me just talk it up a little more.

    Bell’s main moral is methodological: somehow or other there must be a point of contact between any proposed physical theory and empirical data, otherwise you don’t have an empirical theory at all. Now if you push on the “experience” connotation in “empirical” too hard, you end up thinking that your physical theory literally has to predict the character of conscious experiences. But then you hit the mind/body problem and you are cooked. The other way to go—Bohr’s way, interestingly—is to insist that all experimental outcomes must be reported not in terms of sense-data but in terms of the “classical” description of macroscopic objects, i.e. objects large enough to be directly visible and tangible. Such a classical description assigns the objects shapes and motions, well-enough defined in physical space. So any comprehensible theory needs some “local beables”—some localized stuff that exists in small regions of space-time—from which these macroscopic objects may be composed. Whatever these local beables are, they are not “made out of wavefunction” because the wavefunction (or better: the quantum state that the wavefunction represents) is not a local sort of thing.

    As you say, there is no canonical choice for local beable. And indeed, questions exactly of the form you ask—are there different species of fermions that need to be separately kept track of, or does just the generic “fermion number density” suffice?—have been discussed in the literature. No one is saying that there is a unique way to realize the goal of having local beables. And to demand some sort of guaranteed uniqueness of how to implement a program seems like just wishful thinking.

    It is not a bug that there are many “pilot-wave” type theories. The more the merrier, and when trying to account for some new phenomenon, see which version works best.

    One last point. A huge sticking point is, as you mention, the question of whether a theory is fundamentally locally Lorentz invariant or needs to posit, e.g., a preferred foliation. This is a question of how the theory implements the non-locality that—as you appreciate!—the violation of Bell’s inequality for events at spacelike separation requires. This seems like an obvious place where some innovation is needed, and to my mind the postulation of a preferred foliation is both simple and relevant. So I would not reject that move at all.

  44. mjgeddes Says:

    SAIs won’t still be arguing about interpretations, Scott. If they didn’t quickly converge on a realist interpretation, they’d simply conclude that the whole notion of ‘interpretations’ was meaningless or incoherent and that Copenhagen is basically right – the math is the reality – you can’t say any more than that.

    Most of the so-called realist attempts like pilot wave, objective collapse, etc., seem like they’re based on basic misconceptions about quantum theory – they’re attempting to ‘force-fit’ the wave-function into physical space, but we know that the wave-function can’t be in physical space (since it’s in Hilbert space). So ordinary space-time can’t cut it.

    *super-click*

    Ordinary space needs to be decomposed into something more fundamental that underlies it. I think a 2-D information landscape representing 2 ‘partial dimensions’ of time. The two types of information combine to generate the single dimension of time and 3 dimensions of space that we’re familiar with on the macroscopic scale.

    To answer your ‘why complex numbers?’ challenge, I think they arise from the combination of the 2 informational dimensions underlying ordinary space, which function somewhat like ‘proto-‘ or ‘partial-‘ time dimensions.

    In short, you can’t make sense out of superposition in ordinary space-time, but you’d need to utilize a radically different type of geometry if you want a realist picture.

  45. Atreat Says:

    Scott #35,

    “Whether it would satisfy any “personal predilections” if something were true, and whether there are any positive reasons for thinking it true, are two questions that ought to be separated so completely in one’s mind that there’s not even the “appearance of a conflict of interest” (as the bureaucrats say).”

    Hmm, I dunno. I think what you are really trying to say is that we shouldn’t let our personal (maybe non-rational?) biases get in the way of our ability to reason with evidence. Or better, that any lover of truth ought to put that love above any personal predilections when staring in the face of the unknown. But, we poor humans cannot help but let our personal predilections guide our thought. What I think we *really* ought to do is let our love of the truth update our biases.

    Look, I think your rational brain is weighing the evidence and putting you in the uncomfortable situation of tilting toward anti-realist interpretations. You fight this though because your ego is scared of a non-objective world 🙂 Solution: keep quiet until called out LOL

    We have this in common: if tomorrow experiments show that objective collapse is right we’d both be thrilled and mostly for nothing to do with our biases for or against realism. Just for a shared love of discovering some new truth.

    Tim #28, I detect that you share Scott’s predilection for an objective world. The assumptions you make in your effort to tease this out of Scott include, a priori, an objective world.

    Schrödinger’s cat itself includes this assumption: that macroscopic states themselves like “live cat” and “dead cat” are well defined. Hell, that “cat” is well defined.

  46. Shmi Says:

    If one tries to count the MWI worlds honestly, they multiply like there is no tomorrow (literally).

    Take any transition between energy levels: an atom in an excited state goes into a less-excited state and emits a photon. One of the simpler examples, one that avoids dealing with nuclear forces and barely touches on the weak force, is electron-positron annihilation. But the issues are all the same.

    In non-relativistic QM the transition happens suddenly at some time t, so this time already parameterizes the possible worlds, as there is a distinct one corresponding to each t, giving you a continuum of equally probable worlds.

    It is somewhat different if QFT is used. There the initial atom+field state is unitarily evolving into a different atom+field state through a continuum of states corresponding to different emission times and directions, but it only becomes a new “actual” state when it gets entangled with something else, like the collection of fields representing the measurement system, the one that “detects” the emitted photon.

    In any case, the number of possible worlds, assuming you want it countable, is counted in Planck units of time and momentum, and generally a bunch of other Planck-discretized continuous parameters. This is not counting (ehm) that the “detector” is also a collection of quantum fields with its own unimaginably complicated internal structure.

    Or one can abandon the idea of honest world counting and declare that MWI is simply postulating unbounded reversibility, at least in theory, with no collapse, ever and no attempt at representing the worlds. Which is fine (well, not too fine, GR would strenuously object, but assuming no gravity), and probably makes the most sense, but then one better be honest about this dishonesty.

  47. Alex V Says:

    Scott, maybe I missed something, but I had the impression that, e.g., in the comic from the previous post (and maybe in “Democritus”) there is at least one alternative interpretation relevant to this discussion, which I would call “ontological quantum probability” (or something like that)?

  48. Mateus Araújo Says:

    RandomOracle #31:

    Thanks for answering me here instead of in the old thread. As for your question, would you mind me asking you another question before answering it? What do you make of the explanation of probabilities in Kent’s world that I wrote in comment #15? Do these probabilities m/(m+n) count as objective? Or is there any meaning you attribute to objective probabilities that is not applicable to these fractions?

    I’m using Kent’s world instead of the full Many-Worlds theory because I think most philosophical problems related to probability in Many-Worlds can already be discussed there, without getting distracted by the details of quantum theory.

  49. Scott Says:

    mjgeddes #44: Continued categorical pronouncements about what super-intelligent entities would say or believe—their beliefs, of course, always happening to accord with your own—and/or repetitions of the phrase “super-click” (which is meant to suggest, I don’t know, that you’re clicking some button in your head to consult with your inner superintelligent fairy who’s never wrong?), may result in a temporary blog ban.

  50. Scott Says:

    Alex V #47: What exactly does the “ontological quantum probability” interpretation say?

    When I teach QM, I always talk about amplitudes and interference—how QM is structurally a lot like probability theory, except that now the little number you attach to each possible path the world could take is a complex number rather than a real in [0,1], and to know the probability of the world ending up somewhere when you look at it, you have to add up the complex numbers for each possible path the world could’ve taken to get to that point, and then take the squared absolute value of the result, and the upshot is that if the world could’ve reached that place via one path with the complex number pointing one way, but also via a different path with the complex number pointing the opposite way, then those two paths can “interfere destructively” and cancel each other out, with the result being that the world is never observed to have reached that place at all.
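
    To make that concrete with a toy example (two paths to the “dark” output of a balanced interferometer, with amplitudes +1/2 and −1/2—my own illustration, not anything from a particular course or paper):

        import numpy as np

        # Complex amplitudes attached to the two paths the world could take to the same final place.
        path_amplitudes = np.array([0.5 + 0j, -0.5 + 0j])

        # Quantum rule: add the complex numbers for the paths, then take the squared absolute value.
        print(abs(path_amplitudes.sum()) ** 2)       # 0.0 -- the paths cancel; that outcome is never seen

        # The classical rule (add per-path probabilities |amplitude|^2) could never cancel:
        print((np.abs(path_amplitudes) ** 2).sum())  # 0.5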

    I’m ready to go to the mat and say that this is the best way to explain QM to people who’ve never seen it, regardless of which interpretation (if any) anyone might prefer. (If authority is needed, it’s pretty much the way Feynman taught it.)

    But it doesn’t even try to answer the question of “how do you know when a measurement has happened?” Instead, it just treats “looking at the system” as an unanalyzed primitive—which of course is also what we do in real-world experiments, and when proving theorems about QM, but not when we’re philosophizing about the way the world is, and need to include ourselves in the description (or else explicitly disclaim doing that).

    Since I’ve seen the same confusion before, maybe it’s worth stating explicitly: not every set of words that’s useful to say about QM counts as an “interpretation,” in the sense of trying to answer the same question that MWI and de Broglie – Bohm are trying to answer, or that Copenhagen is pointedly declining to answer.

  51. Scott Says:

    Shmi #45: Just as a point of information, I think most MWI proponents would only regard “splitting” as having happened once you have two branches that have become “macroscopically distinct,” i.e. for which recoherence is now a thermodynamic absurdity. Of course that’s a somewhat vague criterion, but they would also say that it’s fine to be somewhat vague about such matters, because “classical worlds” are an inherently vague and approximate concept anyway—what’s important to be precise about are just the fundamental equations of reality.

  52. Mateus Araújo Says:

    Mitchell Porter #29:

    You wrote that “the empirical content of the Born rule is that various events occur with certain frequencies”.

    It is not that simple. What one can prove is that with high probability, certain events will occur with certain frequencies. But this “with high probability” stops us from using frequencies to define probabilities, on pain of circularity, and also deflates the empirical content: other frequencies are also possible, although with low probability. One cannot falsify the statement that this was the probability by observing some “wrong” frequency.
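
    Just to make the “with high probability” concrete, here is a standard concentration bound (Hoeffding’s inequality, stated only as an illustration, not as part of the original argument): for n independent trials each with success probability p, the observed frequency f_n satisfies

        \Pr\left[\,|f_n - p| \ge \epsilon\,\right] \le 2 e^{-2 n \epsilon^2},

    so at every finite n any frequency in [0,1] remains possible, merely improbable; and the bound is itself a probability statement, which is exactly the circularity at issue.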

    Understanding what objective probabilities are is one of the biggest problems in philosophy, so one cannot criticise Many-Worlds on the grounds that we understand what single-world probabilities are, but are confused by many-world probabilities.

    One thing I do agree with you about is the incomprehensibility of the Deutsch-Wallace argument. I think, however, that it is the fault of their presentation, and not of the argument itself, and I wrote here a hopefully clearer explanation.

    About the choice of measure for counting worlds, the problem appears even for stylized measurements with a finite number of outcomes. Consider, for example, a polarisation measurement done on a photon in a superposition of |H> and |V>, say in the state \alpha|H> + \beta|V>. It is the prototypical two-outcome measurement, and one should say that there are two worlds, one with H and the other with V, after the measurement, right?

    The problem is that these H and V are subjective labels that we apply to the measurement results; physically speaking, there are several macroscopically distinct worlds that we coarse-grain together under the H and V labels. My experimentalist friends say, for example, that even a simple APD (avalanche photodiode) is sensitive to where exactly the photon hits it, and it is possible to find this out. Let’s say there is a detectable difference between the left and right sides of the APD used to detect H. Should we then have three worlds, H^L, H^R, and V?

    Surely we must use a measure for counting worlds that is invariant under this arbitrary labelling. One can then show that the only such invariant measure is the 2-norm.
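
    Schematically (a toy sketch of this invariance point, with hypothetical finer amplitudes \alpha_L and \alpha_R): resolving the H branch into H^L and H^R changes the naive world count from 2 to 3, but not the 2-norm weights, since

        \alpha|H\rangle + \beta|V\rangle = \alpha_L|H^L\rangle + \alpha_R|H^R\rangle + \beta|V\rangle, \qquad |\alpha|^2 = |\alpha_L|^2 + |\alpha_R|^2,

    so the total weight assigned to “H” is the same however finely we choose to label the outcomes.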

  53. Scott Says:

    Atreat #44:

      We have this in common: if tomorrow experiments show that objective collapse is right we’d both be thrilled and mostly for nothing to do with our biases for or against realism. Just for a shared love of discovering some new truth.

    Actually, I never said I’d be thrilled. Thinking it over, I think I’d feel exactly the same way I’d feel if one of those 5-page P≠NP proofs people email me all the time, the ones that just do some freshman manipulations of 3SAT formulas before concluding that exponential time is needed, somehow turned out to be right.

    I.e., I’d be thrilled that the mystery was finally solved, but also disappointed that the solution turned out to be so … banal. So cheesy. So much like the first thing countless people already thought of, but then set aside on the ground that God or Nature would surely be more surprising than that.

    Like really, there’s just some little hack where atoms sometimes spontaneously ‘collapse’ like little firecrackers, thereby setting off a cascade of wavefunction collapse in all the atoms around them? If the unitary evolution of the world can be spoiled so casually, then why was it even put there in the first place? (Also, if this is so, then couldn’t we someday use quantum error correction to negate the effect of the spontaneous collapses, and thereby “thwart” this solution to the measurement problem?)

    Again, though, one’s personal feelings about it should be strictly separated from one’s assessment of the evidence—and the more so the stronger the feelings are.

  54. Mateus Araújo Says:

    Shmi #45:

    As Scott says, only macroscopically distinct branches count as worlds, because otherwise worlds would interfere with each other all the time, and they wouldn’t correspond to a quasi-classical reality.

    But more generally, even if you are trying to count the number of macroscopically distinct worlds you get into serious trouble, unless you “count” them using the 2-norm.

    This is however not a bug, but a feature, as the 2-norm is what gives us the Born rule.

  55. Bunsen Burner Says:

    The way physicists view the relationship between QM and hidden variables strikes me as often naive. They seem to think that it’s identical to the relationship between classical degrees of freedom and statistical mechanics. I don’t think that’s correct; the relationship can be a lot more conceptually complicated than that.

    Also, in spite of the important 20th century debate on the meaning of probability, physicists seem to know nothing of this and muddle through without giving it much thought. E. T. Jaynes did a lot to show how to think about probability carefully, and how to develop classical statistical mechanics as a branch of statistical decision theory. It’s a shame physicists have such an allergy to subjectivist concepts. It leads to embracing a type of naive realism that isn’t much help in understanding QM.

    Finally, I think there should be more discussion of contextuality in the various interpretations. I actually find this more disturbing than nonlocality. I don’t see much about how to model value indefiniteness in the various interpretations, so perhaps focus more on Kochen-Specker rather than Bell?

  56. Bunsen Burner Says:

    Also, Tim makes some strong statements about the PBR results and statistical interpretations. I think Matt Leifer did a good and insightful analysis of this result with a much more nuanced discussion.

    http://mattleifer.info/2011/11/20/can-the-quantum-state-be-interpreted-statistically/

  57. Scott Says:

    Tim Maudlin #42: It’s fine with me if you say “the more the merrier” about beables theories. But if you do, then you seem to be retreating (or converging) toward my view, which is that these theories are interesting mathematical stories that you can tell about various quantum systems, but absent some revolutionary advance, no one of them can be seriously put forward as a proposal for something actually true about nature.

    I foresee the following objection: it’s true that no beables interpretation has the status of a fundamental truth about nature, but the same is true of “ordinary” physical theories. Newtonian gravity was superseded by GR, the Standard Model will be superseded by whatever comes after it, etc. All physics (at least of today) is provisional.

    That’s true, but “ordinary” physics has the crucial property of being cumulative, with each new theory inheriting the successes of its predecessors—Newtonian gravity arising as a limit of GR, etc. By contrast, I don’t see any sense whatsoever in which de Broglie – Bohm, for example, arises as a limit of Bell’s fermionic local beables theory (does it?). The situation seems rather to be that the beables theorist says to the physicist: you tell me what are the degrees of freedom of the most fundamental quantum theory you know, then I’ll cook up a beables theory to order for it, cheerfully discarding along the way any previous beables theories invented for less fundamental degrees of freedom.

    But if so, then the beables theories are forever like remoras; it’s the quantum theories they attach themselves to that are the sharks.

  58. Peter Morgan Says:

    Scott, #37, https://scottaaronson-production.mystagingwebsite.com/?p=3628#comment-1752447, when faced with “many choices that could be criticized as arbitrary”, why can’t one characterize those choices and then work with a unique equivalence class? Isn’t that what thermodynamics does in the face of insufficient information?

  59. Aula Says:

    test QM on larger on larger systems

    I think one “on larger” is enough.

    there was no correlation whatsoever between interpretation and final exam score—except that students who said they didn’t believe any interpretation at all, or that the question was meaningless or didn’t matter, scored noticeably higher than everyone else.

    Well, that’s it, then; you found empirical evidence that you should not think about the interpretation of QM. 🙂

    Seriously though, if this result can be replicated on a larger sample of students, it could be a significant discovery in, well, pedagogy of science or something like that.

  60. cds Says:

    I guess I am partially confused about what all this interpretation business is supposed to buy you. If it buys you nothing, then one should choose no interpretation that introduces additional baggage to keep track of, which rules out any sort of hidden-variables theory. It also seems clear that there is no reason to privilege classical concepts over anything else reality might be, since that is a kind of bias of length scale. I feel like some versions of MWI do have this problem insofar as they claim that there are multiple branches each with a copy of “me” in them.

    Is there something wrong with just saying that the universe is probabilistic and quantum mechanics tells you how to assign probabilities to different histories? Would that be considered a version of MWI?

  61. jonas Says:

    > Indeed, about the only thing I can think of that might definitively settle the debate, would be the discovery of an even deeper level of description than QM

    Could a deeper understanding of how our brain works influence your interpretation? In particular, which interpretation would you be more likely to believe if scientists proved that your brain works as a classical computer, or that, on the contrary, your brain does depend on quantum entanglement (but not quantum gravity) in an essential way?

    Re Scott #50
    > only regard “splitting” as having happened once you have two branches that have become “macroscopically distinct,” i.e. for which recoherence is now a thermodynamic absurdity.

    I’m confused, because that sounds like removing all the strength of the many-worlds interpretation. I mean, that variant of the MWI says that once a collapse has happened, then there’ll be multiple worlds and you observe one of them. But the original question was what it would feel like for an observer if they were still in an entangled state, without a collapse.

  62. Tim Maudlin Says:

    Scott #56

    I really can’t figure out what you have in mind here. I take it that you have conceded this counterfactual: if all of the known phenomena in the world were accurately predicted by non-relativistic quantum mechanics, then there would be very strong rational grounds for accepting the standard guidance equation over any of the other mathematically possible dynamics that are empirically adequate. That is what the “ten different derivations” observation is all about. And among the phenomena predicted by non-relativistic quantum mechanics are all of the classic “quantum phenomena”: 2-slit interference, violations of Bell’s inequality for arbitrarily distant experiments, “delayed choice”, quantum erasers, teleportation, and the entirety of the field of quantum computation. So if that were the case, we would have a theory that suffers no conceptual problems at all, no measurement problem, no problem accounting for Born’s rule and interpreting the probabilities, no vagueness or fuzziness in the laws, no vagueness or fuzziness in the ontology, no random collapses, no “multiplying worlds”. It would be insane not to treat that theory, at the least, as a very, very, very plausible proposal for the true theory of the world.

    Now the actual world is not like that for a few reasons. One is the phenomenon of particle creation and annihilation, which standard quantum mechanics cannot easily handle. (I say “easily” because one can try to address it by taking the Dirac sea seriously.) Another is that space-time is not Newtonian or Galilean. Another is that standard QM does not handle gravity. (The last may, of course, really be an aspect of the next-to-last.) So we have to go beyond non-relativistic QM.

    What do we have to work with? Well, not much. Unlike non-relativistic QM, there is no mathematically clean version of QFT. Axiomatic versions hit Haag’s theorem. Everyday calculations impose cut-offs that have no theoretical justification. People say it is just an effective theory, emerging from an unknown theory. So in such a situation you have to expect that the right way to extend the pilot wave architecture is not so obvious. What are the local beables of the theory? Some people, impressed by the “Field” in “QFT”, suggest replacing particles with fields. Other people, impressed by the phenomenon of particle creation and annihilation, take that seriously: keep the particles but allow their number to vary, so that the configuration space is the direct sum of all N-particle configuration spaces. These are both reasonable ideas to explore. How to deal with Relativity and still keep the mechanism that allows for violations of Bell’s inequality? My own preference is by adding a foliation to a Relativistic space-time structure, but other people prefer to try to hold on to local Lorentz invariance. Also reasonable things to pursue.

    What about your “cumulative science” remark? That’s child’s play. What is cumulative, or preserved, under all of these various ideas to explore is the basic shape of how it solves the measurement problem without fuzzying anything up. The wave function always evolves linearly. The local beables always evolve in accordance with an exact equation, which could be deterministic or could be stochastic. There is always only one physical world, which is the one we always thought we live in, where cats and people and planets are not continually “splitting”, so the notion of a frequency of events is completely unproblematic. But of course you adjust the theory to meet new phenomena. That’s called “doing science”. And there is never an algorithm for that.

    It’s true that if the wavefunction always evolves linearly and yet there is only one physical world—which is directly presented by the local beables—then you are stuck with the “mostly empty wavefunction” complaint you voiced above. Most of the fine structure of the quantum state plays no role at all in producing the observable phenomena. And your gut reaction is “what a waste”. But to promote that gut reaction into a rational objection without appealing to a God who created the whole thing and who might have some budget to deal with is, it seems to me, an impossible task. To let that worry get in the way of taking the theory seriously is not rationally defensible.

  63. Tim Maudlin Says:

    cds #59

    You have been misled by a terrible choice of nomenclature here. Let me suggest my preferred replacement.

    A “physical theory” answers two questions: what there is and what it does. The first bit is the ontology of the theory and the second bit is the dynamics, or physical laws. What I call the “nomology”. A physical theory is sharply formulated exactly insofar as both of these are presented in a sharp form, preferably with clear mathematics. So, for example, presenting the nomology using the term “measurement”, as von Neumann does, is (as Bell says) unprofessionally vague.

    According to this usage, there just is no such thing as “quantum theory”. Even what is taught as non-relativistic quantum mechanics is not a physical theory, because there is no clear statement at all about either the ontology or the nomology. What is taught to students is not a physical theory: it is a somewhat vague but practically quite powerful predictive apparatus. We know this because even physicists who teach the theory all the time do not agree about what the ontology and nomology of the theory are. They are rather teaching how to use a mathematical formalism and some rather vague rules of thumb to make predictions. Which is all fine and good if you are an engineer and only care that the bridge won’t fall, never mind about exactly why. But if you went into physics wanting an account of the physical world, this predictive apparatus alone does not answer any of the questions you were interested in. And at that point you are told to “shut up and calculate”. By not even asking these questions, you can spend your energies on learning how to make the predictions, and not waste time on what is really going on. And, as Scott has verified, that means you do better on the tests! Because the tests are not about the ontology or nomology: they are just about the predictions.

    From this point of view, the activity called “interpreting quantum theory” is really the activity of producing proper physical theories that recover (to within some epsilon) the predictions of the predictive apparatus. That is, what “all this interpretation business is supposed to buy you” is that you are actually doing *physics*, and trying to understand the fundamental nature of the physical world, rather than simply learning a predictive technique. If there are so few “physicists” who are even interested in doing physics any more, then maybe we need some new names. The vast majority of so-called physicists can teach what they teach under the rubric “Physics for engineers”, and the people interested in “interpretations” can have the proper title of “physicist”, and can discuss alternative physical theories.

  64. Mateus Araújo Says:

    Prof. Maudlin,

    You are talking about the way Bohmian mechanics deals with the Born rule as though it were a strength of the theory, but I see it as one of its worst problems. After all, if the theory is deterministic, the quantum-mechanical probabilities can only be subjective: they are a representation of our ignorance. But I think everyone here would agree that quantum-mechanical probabilities seem to be pretty damn objective. Why then this apparent objectivity? And why must our ignorance be distributed precisely by the Born rule? It seems a completely unmotivated postulate, one that is only there to make the theory match our experience. It feels even more conspiratorial when one notices that this is precisely how our ignorance must be distributed in order to keep the action-at-a-distance going on at the fundamental level hidden from us.

    The distribution of the Bohmian positions does not even behave similarly to other probabilities uncontroversially considered to be subjective, such as, for example, the Maxwell distribution of the position of gas molecules. In the Maxwell case we know exactly how the distribution arises from the underlying dynamics, and knowing the actual positions behind it is completely unproblematic; it does not lead to new physics.

    Furthermore, it is easy to prepare a gas with known positions of the molecules and make more precise predictions than what is allowed by the Maxwell distribution. In the Bohmian case, though, it is literally impossible to prepare a particle with a known position and make predictions more precise than what is allowed by the Born rule.

    I’m curious about what you make of this.

  65. vzn Says:

    SA #39: at least you focused on the anderson+brady paper almost ½ decade ago; not all its conclusions are airtight, but I don’t think that justifies throwing out the baby with the bathwater. I honestly disliked the sort of “cyber uprising / torches+pitchforks witchhunt” tone of that post.

    With regard to all the comments here and elsewhere, there’s a lot of energy going into looking into interpretations; I really wish everyone with all that energy would work on some collaborative group project, e.g. in the style of Polymath. Anyone with those feelings, please reply on my blog and let’s attack it. I feel that a resolution of the interpretation issues is close at hand and have specific experiments in mind (acoustics), not exceptionally difficult, within range of undergraduates.

    Lately I am fascinated with investigations of “Madelung fluid” connections to QM, and I think the following paper is quite substantial. It essentially *derives* Planck’s and Boltzmann’s constants from fluid-dynamics considerations. I feel this is a breakthrough / paradigm-shifting / revolutionary point of view. Also, the concepts of *emergence* and *information loss / energy dissipation* are crucial and are investigated in many other contexts.

    Schroedinger vs. Navier-Stokes/ Cordoba, Isidro, Molina
    https://arxiv.org/abs/1409.7036

    ===

    Quantum mechanics has been argued to be a coarse-graining of some underlying deterministic theory. Here we support this view by establishing a map between certain solutions of the Schroedinger equation, and the corresponding solutions of the irrotational Navier-Stokes equation for viscous fluid flow. As a physical model for the fluid itself we propose the quantum probability fluid. It turns out that the (state-dependent) viscosity of this fluid is proportional to Planck’s constant, while the volume density of entropy is proportional to Boltzmann’s constant. Stationary states have zero viscosity and a vanishing time rate of entropy density. On the other hand, the nonzero viscosity of nonstationary states provides an information-loss mechanism whereby a deterministic theory (a classical fluid governed by the Navier-Stokes equation) gives rise to an emergent theory (a quantum particle governed by the Schroedinger equation).

  66. DavidC Says:

    >what would it be like to be maintained in a coherent superposition of thinking two different thoughts A and B, and then to get measured in the |A⟩+|B⟩, |A⟩-|B⟩ basis?

    Scott, I’ve always been a bit confused about what kind of question this is (when you’ve raised it elsewhere too).

    It’s surely not an experiment that can tell us (the people who aren’t currently in that situation) anything about quantum mechanics, right? Since all interpretations will make the same predictions?

    So to the extent that we don’t know what someone in that situation would say to us… then it must be a question about what computation brains are doing?

  67. Alex V Says:

    Scott 49:

    What exactly does the “ontological quantum probability” interpretation say?

    ( It was my question 🙂 ) I simply tried to describe my interpretation of the “set of words” used in the mentioned comic and some other texts. Sorry if it was the wrong interpretation.

    The formal definition? Maybe: “A description of a quantum-mechanical object in a way convenient for making an analogy with probability theory, and looking for some ontological meaning in that.”

    For example, a|0>+b|1> is considered as some ontological way of manipulating the states |0> and |1>. But for some quantum systems all states on the Bloch sphere may be equivalent due to some symmetry, and so the probabilistic analogue is only an interpretation.

  68. Pascal Says:

    There was a time when MWI was considered completely outlandish, but now it seems to be taken much more seriously.
    What do you think caused this change in perspective?

  69. cds Says:

    Tim Maudlin #62,

    A few points:

    1. It is not true that quantum mechanics is presented as “vague” rules of thumb. There is nothing vague about how to apply the rules to make predictions (barring human errors). If there was any question of how to apply those rules, there would be something there that a deeper theory of quantum phenomena could hang its hat on to help us distinguish fundamental reality.

    2. It seems like you are making a subjective decision about what constitutes “knowing what is really going on” and deciding that some things are in and some are out. All theories end up with statements that amount to “well, that’s the way it is.” Why Maxwell’s equations? Because minimally-coupled U(1) gauge theory. Why minimally-coupled U(1) gauge theory? Because that’s the way it is. Does that mean that Maxwell’s equations are not getting to “what is really going on?” The same will be true for any pilot wave/hidden variables theory that purports to tell you what is really going on.

    Let me put this another way. What would it take for a theory to satisfy you as getting to “what is really going on?” How can I know if I achieved it in my theory? If you are allowed to say, “well, that’s the way it is” somewhere in your theory, then why isn’t quantum mechanics – as it is taught at universities right now – telling you what the nature of reality is?

    3. Your prescription for generating a “proper” theory sounds a lot to me like saying “I don’t like that the planets move on elliptical trajectories around the sun so I’m going to throw in epicycles until it all seems to work, because I really understand circles.” You are prejudging what a “proper” theory should look like.

  70. GA Says:

    I don’t get why people insist on adding consciousness to the mix. Replace Schrodinger’s cat with a Turing machine, say a regular laptop, let the laptop ask the same questions you would ask about the world yourself.

    Connect that normal classical laptop to the measurements of some qubits/spins/whatever, and isolate it from the rest of the world. Let it also output some bit.

    Now, when you observe the classical laptop after it has done some calculations on those qubits, did it act as a quantum computer? Did the amplitudes for the internal states of the classical laptop interfere, so that the amplitudes for the final output bit would also interfere, and the classical computer would act like a quantum computer when you measure it, just because of the isolation from the outside world? Or did something along the way stop this from happening? What could possibly happen to the amplitudes of the states of the atoms of the classical laptop / Turing machine, such that they stopped being a linear combination of the original measured qubits? What would ever stop the interference in the final output bit from happening?

    What exactly stops regular Turing machines in our world from becoming quantum Turing machines when we isolate them from the environment? And what do you even define as “isolated from the environment”?

    Or maybe we believe a regular laptop can be a quantum Turing machine, if we only care to isolate it enough from the environment (obviously we don’t, at least I don’t).
    How do you even define “isolated from the environment”? Why should that matter? In relativity everything is already isolated from the environment by its light cone. Is it enough to just send the laptop 2 light-seconds away from us to isolate it enough for 2 seconds of computation?

    Or maybe the real issue is what is the first measurement we do on the laptop, and we just have to be absolutely sure the very first bit of information that can possibly arrive from the laptop is the output bit? This also sounds pretty wrong.

    The whole thing screams the question of why satellites don’t already act like quantum computers from our point of view.

    Hell, I don’t understand why it should even be difficult to build a quantum computer in a quantum world, given a classical computer in it, especially given the complete isolation of events that relativity already guarantees.

  71. Tim Maudlin Says:

    Mateus # 63

    The Born rule falls out of Bohmian mechanics in *precisely* the same way that the Maxwell velocity distribution falls out of classical stat mech! That is, one proves that Born statistics are typical for systems in quantum equilibrium, and since quantum equilibrium states comprise almost all of the configuration space according to any appropriate measure, it would only be problematic if something other than Born statistics were found.
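
    Schematically, and only as a sketch in the standard notation: the configuration Q = (Q_1, …, Q_N) moves according to the guidance equation

        \frac{dQ_k}{dt} = \frac{\hbar}{m_k}\, \mathrm{Im}\!\left[\frac{\nabla_k \psi}{\psi}\right](Q_1, \dots, Q_N, t),

    and this dynamics is equivariant: if the configuration is |\psi|^2-distributed at one time, it remains |\psi|^2-distributed at all times, since |\psi|^2 obeys the same continuity equation as the transported density.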

    This is worked out in complete detail by Goldstein, Dürr and Zanghí here:

    https://arxiv.org/pdf/quant-ph/0308039.pdf

  72. Scott Says:

    GA #69:

      The whole thing screams the question of why satellites don’t already act like quantum computers from our point of view.

    Decoherence. Read about it. Isolating something large from its environment, to the point where QM becomes macroscopically relevant to it, is much, much, much harder than you think.

    In particular, it’s not enough to isolate the system from yourself (as your discussion of “light cones” would suggest). You also need to isolate the system from every stray particle passing through that might carry away information about its state—something that has exactly the same effect as if that information had been measured (to someone who later came upon the system, and didn’t have complete control over everything in the environment that could have been affected by the stray particle).

    This is not a matter of debate or interpretation, but something that every interpretation agrees about.

  73. Tim Maudlin Says:

    cds #68

    Please make clear what you mean by “quantum mechanics”. In particular, how does it solve the measurement problem? What predictions does it make for the case of Wigner’s friend? Does the word “measurement” appear in the basic postulates of the theory? If it does, then it is unprofessionally vague, just as Bell said.

    Your points 2) and 3) do not correspond to anything I wrote. The “that’s the way it is” is invoked exactly in the statement of the fundamental ontology and the fundamental nomology. These are not further analyzed. And they better be sharp and precise and not include terms like “measurement”. Measurements are just physical interactions, and should be treated as such.

    Please cite the precise sentences in what I wrote that you think justify your points 2) and 3).

  74. Mateus Araújo Says:

    Maudlin #70:

    What the Bohmians can actually prove is far from your grandiose claim. See for example here. For simple systems that one can analyse exactly, like a plane wave or a gaussian wavepacket, one actually proves that they do not in fact equilibrate. For more complicated systems there is some numerical evidence of equilibration, but much weaker than what can be done for classical statistical mechanics.

    And even if the quantum equilibrium hypothesis were true, there is still this glaring disanalogy with the subjective probabilities of classical statistical mechanics, in that one is not able to prepare systems out of equilibrium. Well, if it is literally impossible to obtain knowledge about the position, it seems that we are dealing with an objective property of Nature – an objective probability.

  75. James Gallagher Says:

    Dirac basically understood how Quantum Mechanics worked from the beginning; he was just bullied by his European, philosophy-liking colleagues and couldn’t get his point emphasised:

    “According to quantum mechanics the state of the world at any time is describable by a wave function ψ, which normally varies according to a causal law, so that its initial value determines its value at any later time. It may however happen that at a certain time t_1, ψ can be expanded in the form
    ψ = \sum_n C_n ψ_n,
    where the ψ_n’s are wave functions of such a nature that they cannot interfere with one another at any time subsequent to t_1. If such is the case, then the world at times later than t_1 will be described not by ψ but by one of the ψ_n’s. The particular ψ_n that it shall be must be regarded as chosen by nature”

    Ref

  76. Scott Says:

    Pascal #67:

      There was a time when MWI was considered completely outlandish, but now it seems to be taken much more seriously.
      What do you think caused this change in perspective?

    Interesting question! Here are the first seven answers that spring to mind for me:

    1. The founding generation of QM, the generation that had been directly influenced by Bohr and Heisenberg, died off. A new generation of physics students, less under their influence, decided that MWI made more sense to them. (You may want to read Max Tegmark’s personal account of his “conversion” to MWI, as a grad student in Berkeley, in his book. I suspect hundreds of similar stories played out.)

    2. The quantum cosmologists mostly signed on to MWI, because Copenhagen didn’t seem to them to provide a sensible framework for the questions they were now asking. (Did the quantum fluctuations in the early universe acquire definite properties only when we, billions of years later, decided to measure the imprint of those properties on the CMB?)

    3. David Deutsch, the most famous contemporary MWI proponent, was inspired by MWI to invent quantum computing; he later famously asked, “to anyone who still denies MWI, how does Shor’s algorithm work? if not parallel universes, then where was the number factored?” Anyone who understands Shor’s algorithm can give sophisticated answers to that question (mine are here). But in any case, what’s true is that quantum computing forced everyone’s attention onto the exponentiality of the wavefunction—something that was of course known since the 1920s, but (to my mind) shockingly underemphasized compared to other aspects of QM.

    4. The development of decoherence theory, which fleshed out a lot of what had been implicit in Everett’s original treatment, and forced people to think more about under what conditions a measurement could be reversed.

    5. The computer revolution. No, I’m serious. If you imagine it’s a computer making the measurement rather than a human observer, it somehow seems more natural to think about the computer’s memory becoming entangled with the system, but that then leads you in MWI-like directions (“but what if WE’RE THE COMPUTERS?”). Indeed, Everett explicitly took that tack in his 1957 paper. Also, if you approach physics from the standpoint of “how would I most easily simulate the whole universe on a computer?,” MWI is going to seem much more sensible to you than Copenhagen. It’s probably no coincidence that, after leaving physics, Everett spent the rest of his life doing CS and operations research stuff for the US defense department (mostly simulating nuclear wars, actually).

    6. The rise of New Atheism, Richard Dawkins, Daniel Dennett, eliminativism about consciousness, and a subculture of self-confident Internet rationalists. Again, I’m serious. Once you’ve trained yourself to wield Occam’s Razor as combatively as people did for those other disputes, I think you’re like 90% of the way to MWI. (Again it’s probably no coincidence that, from what I know, Everett himself would’ve been perfectly at home in the worldview of the modern New Atheists and Internet rationalists.)

    7. The leadership, in particular, of Eliezer Yudkowsky in modern online rationalism. Yudkowsky, more so even than Deutsch (if that’s possible), thinks it’s outlandish and insane to believe anything other than MWI—that all the sophisticated arguments against MWI have no more merit than the sophisticated arguments of 400 years ago against heliocentrism. (E.g., “If we could be moving at enormous speed, despite feeling exactly like we’re standing still, then radical skepticism would be justified about absolutely anything!”) Eliezer, and others like him, created a new phenomenon, of people needing to defensively justify why they weren’t Many-Worlders.

    I’m curious if people who know the history better than I do have thoughts about these answers, or if they can suggest other answers.

  77. GA Says:

    Why would it not be enough to isolate it from yourself? For all purposes, if the first bit of information to arrive to you from the experiment is indeed the output bit, all those “stray particles” are part of the system from your point of view, and they should still be in superposition before your first interaction with it. Stray or not, as long as they didn’t change the behavior of the classical Turing machine, which they didn’t because we know these are pretty error-resistant on their own, why would you even care about stray particles? So long as you made sure the first bit of information you measure is the output bit, everything else should not really matter. It can’t matter. That output bit is still in superposition before you interacted with it, and it must be measured immediately; and because of relativity you can’t know whether in the future the “stray” particles will even be measurable by you, so they can’t in any way affect the outcome. If it really is the first bit of information to arrive, you can’t determine yet whether you actually isolated the experiment or not. It seems like it all boils down to whether you can make sure the first bit to arrive is the output bit.

  78. Scott Says:

    GA #73: Because when you only have access to part of a system, and not all of it (e.g., not the stray particles), you need to perform the operation of partial trace, which has the effect of converting quantum superpositions (i.e., “pure states”) into classical probability distributions (or more generally, “mixed states”) when the system is entangled. This is why it’s hard to build a quantum computer, and is also why we don’t notice superpositions on the scale of everyday life (e.g., satellites smeared out quantumly, to use your example). It’s one of the most important phenomena in the universe.
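
    Here’s a minimal sketch of that operation in Python with NumPy (a toy two-qubit example, nothing beyond textbook linear algebra):

        import numpy as np

        # Entangled two-qubit state (|00> + |11>)/sqrt(2), in the computational basis.
        psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
        rho = np.outer(psi, psi.conj())        # pure-state density matrix, 4x4

        # Partial trace over the second qubit (the "stray particle" you can't access):
        rho = rho.reshape(2, 2, 2, 2)          # indices (a, b, a', b')
        rho_A = np.einsum('abcb->ac', rho)     # sum over b = b'

        print(rho_A)                           # diag(0.5, 0.5): no off-diagonal coherence left
        print(np.trace(rho_A @ rho_A).real)    # purity 0.5 < 1, i.e. a mixed state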

    Pick up any quantum mechanics book, or some free online lecture notes, and read about it, and then come back and continue participating in this conversation.

  79. Tim Maudlin Says:

    Scott!

    In the last paragraph of my #61 I got completely confused and attributed to you an objection that actually came from someone else in another exchange I have going on right now! My apologies.

  80. Scott Says:

    DavidC #65: No, “what is it like?” is not an experimental question—except insofar as the answer can be publicly reported, which as you correctly point out, is not the situation that we’re dealing with here. But it’s a question that does affect who you might need to treat as an “observer” for the purpose of formulating your theory—which in turn, I think, is extremely relevant to whether you prefer a more Copenhagen-like view or a more MWI-like one. For my further thoughts about this, see my “Could a QC Have Subjective Experience?” talk.

  81. Sniffnoy Says:

    Scott, you’ve probably noticed this, but it seems your website’s certificate has expired…

  82. Scott Says:

    Sniffnoy #77: No, I didn’t notice! Where do you see that, and what do I do about it? (I’m only a theoretical CS PhD…)

    Update: OK, I found where it says the certificate “has expired or is not yet valid.” But given that I don’t do anything whatsoever on my site that explicitly requires encryption or authentication, is this a pressing problem?

    Is the issue just that, for all anyone knows, this whole blog could be written not by me but by an impersonator who’s taken control of my domain? If so, then I urge people to remain calm and judge the impersonator’s arguments on their merits the same way they’d judge mine. 😀

    Another Update: OK, now I see the problem — the browser is generating warning messages for anyone who loads my site, as if it were some phishing site.

  83. Scott Says:

    Tim #75: ‘Tsok. I was wondering where that came from, but then I figured maybe my memory had failed me and I had made that point after all. (I’ve certainly made it other times when discussing dBB.)

  84. James Gallagher Says:

    Why is my post “awaiting moderation”? It’s a report of a quote from a historical QM giant!

  85. Scott Says:

    James #81: Because I don’t always see everything in the moderation queue. It’s there now.

    Everyone: I’ve requested a renewal of my SSL certificate, but it requires validation by an email that has not arrived. Bluehost gave me no warning whatsoever that this was about to happen—my site just suddenly became insecure.

    To anyone considering starting a website, please heed my words carefully: do not under any circumstances use Bluehost. There is no limit to the grief and horribleness that they’ve caused me for more than a decade.

    Why then have I not moved the site? Because I know from experience that it would cause even more grief and horribleness: it would take days of effort, and much of my website would go down or get broken, and no one would be in charge of fixing it, and my patience for dealing with that form of horribleness has decreased exponentially with each passing year since I was a teenager.

    Another update: OK, it looks like the certificate is renewed now!

  86. Scott Says:

    Everyone: One thing that I think I’ve learned from this thread, has been the surprising amount of convergence between MWI and Copenhagen. Those two are often considered the opposite extremes of the ideological spectrum of QM interpretations—with other options, like deBroglie-Bohm, somewhere in between or off the usual spectrum entirely (the Libertarian Party of interpretations?). But someone once remarked that MWI and Copenhagen are “equivalent up to a change of philosophical gauge,” and I think I now understand better what they meant by that.

    At the most obvious level, these are the two approaches that insist on adding no new mathematical ingredients whatsoever to QM, beyond what’s relevant to (already existing) experiments: no guiding equation, no spontaneous collapse term, no nothin’.

    But more deeply, these are the two approaches that really, truly don’t care how much metaphysical enormity you might need to swallow, in order to accept that those minimal mathematical ingredients are all you need! They differ only in which metaphysical enormity they ask you to accept with a shrug:

    (1) that we have to give up on the entire notion of “the state of the universe” independent of an individual observer (e.g., you), or

    (2) that there is an observer-independent “state of the universe,” but it’s one that contains astronomically many slightly different copies of you, with each copy having a “degree of existence” weighted by the absolute square of its amplitude.

    But (1) and (2) do seem clearly related to each other by change of philosophical gauge. 🙂

  87. Mateus Araújo Says:

    Scott#74 and Pascal #67:

    Simon Saunders wrote a nice essay about the history, available here. He attributes the rise in popularity to decoherence theory and its solution of the preferred basis problem.

  88. Tim Maudlin Says:

    Scott #84

    Please take a look at the last three pages of Bell’s Against ‘measurement’. There Bell makes a very, very enlightening comparison of Copenhagen (the real one, due to Bohr) and Bohm. In each case, the total ontology of the theory is dualistic, with a wavefunction (or in the orthodox case maybe more than one?) and a classical part. For Bohr, the classical part must include the description of the experimental situation at macroscopic scale. The wavefunction describes the microscopic. These two parts interact in a way that yields the Born probabilities for the behavior of the macro bit, and hence yields the right empirical probabilities. The macroscopic classical bits are local beables and the wavefunction is not.

    Bohm has the same form, except that the classical bits, the local beables, are microscopic and the macroscopic items are just big collections of them. There is only one dynamics: the local beables are always guided by the wavefunction in the same way, and the wavefunction itself always evolves in the same way (linearly). So the two theories solve the measurement problem in the same way.

    In Bell’s telling, the two theories are related not by a change of philosophical gauge (whatever that means) but by a change of local beable scale. And they critically differ by going from three laws (a “pure quantum” law, a “pure classical” law and a “quantum/classical interaction” law) to two: the Schrödinger equation and the guidance equation. Both laws are mathematically precise and universal.

  89. Mateus Araújo Says:

    Scott, there is a comment of mine stuck in the moderation queue.

  90. Scott Says:

    OK, everything in the queue should be out now. Comment numbering probably got screwed up in the process.

  91. Atreat Says:

    Scott #83,

    “Degrees of existence”

    Bah! What is the cardinality of the MWI multiverse? Is it larger or smaller than the cardinality of the reals?

    If you want to say MWI allows objective probabilities for these so-called “degrees of existence” then what is the measure of this set and how does QM provide it?

  92. Tim Maudlin Says:

    Mateus # 74

    First, it is not “what I claim”, it is what is proven in “Quantum equilibrium and the origin of absolute uncertainty”, which I linked to in #71. If you think you have found a flaw in those proofs, then you should contact Shelly or Detlef or Nino. Or just let me know and I will pass it on.

    Second, the question of equilibration is at best a very minor one. Overwhelmingly most of the possible initial conditions (i.e. initial particle configurations) are equilibrium configurations. So there would be no mystery at all if the universe were, at all times, in quantum equilibrium and no equilibration had to occur. It would rather be a great mystery if the universe were ever *not* in equilibrium. That is the puzzle about entropy: why did the universe start out in such a low entropy state rather than in an equilibrium state? That requires an explanation; the holding of equilibrium does not.

    Third, your categorizations into “objective” and “subjective” are not really adequate to the situation, nor are they in classical stat mech. The obvious case of an objective probability is when there is a fundamental law that is probabilistic rather than deterministic, a random process of some sort. That is the case neither in classical stat mech nor in Bohmian mechanics. And the obvious case of subjective probabilities are probabilities that are measures of some agent’s uncertainty. But neither classical stat mech nor Bohmian mechanics concern what anyone knows or believes about anything. The Boltzmann entropy, for example, of a system does not depend on what I know about it. It is what it is even if I know the position and velocity of every particle.

    So what does one’s ability or inability to prepare a system have to do with anything? Nothing at all. It is obvious that no one ever has, and no one ever will, prepare the atoms in a mole of gas at room temperature in such a way as to be able to predict what it will do for even a microsecond. So what? In Bohmian mechanics, these limitations are not “merely practical” but rather “in principle”: they follow from the dynamics itself. But again: so what? What possible difference can it make whether something is theoretically possible but practically impossible, or just theoretically impossible? It will never be done in either case, so the practical ability to do it cannot have any influence on how the theory is used or understood.

  93. Traruh Synred Says:

    Here is my fable ‘Schrodinger’s Cat and the Law’ which I think sheds light on some of the sillier things said about the cat:

    https://drive.google.com/file/d/0B0Ma4zkWrI1LX3c5SUpGMDJ0U2c/view

    No cats were actually harmed…

  94. Paul Hayes Says:

    Bunsen Burner #57,

    It’s very charitable of you to say that Tim Maudlin is “making strong statements” about the PBR theorem. His claim that it’s “killed off the psi-epistemic category” (made intransigently here and elsewhere) is simply false. Fortunately for the reputation of philosophers, Matt Leifer’s not the only one who has taken the trouble to debunk that nonsense.

  95. Tim Maudlin Says:

    Paul Hayes #93

    I said it and I meant it. I have, of course, read Matt Leifer’s post. If you would care to exposit how a psi-epistemic theory survives, please feel free. Only state what you yourself understand and are willing to defend.

  96. Scott Says:

    Traruh #92: Your story seems to operate under the same mistaken assumptions about what quantum mechanics says that GA #70, #76 suffered from. It looks like popular physics writing failed both of you.

    Let me be as clear as possible: unless the cat was isolated from its environment to a degree that not even a virus or mitochondrion has ever been isolated—which would involve so much preprocessing of the cat’s state (probably, “uploading” the cat to a simulation running on a quantum computer) that it’s not clear whether an ordinary person would still call it a “cat” at all—again, unless this was done, decoherence would make quantum mechanics obviously and manifestly irrelevant to whatever happened next. There would be no reason (besides obfuscation, I guess) to call Einstein or Bohr as expert witnesses. The “legal” premises of your story were confusing and unclear to me, but in any case, they’re all that would matter, because physics is unequivocal that at each microsecond or whatever, the cat (being an open system coupled to its environment, regardless of how sealed the box was) was either alive or else it was dead. Even Bohr (the real Bohr, that is) would agree.

  97. Tim Maudlin Says:

    Scott #96

    What has decoherence to do with anything? To quote Bell (Against ‘measurement’ again!):

    “The idea that elimination of coherence, in one way or another, implies the replacement of ‘and’ by ‘or’ , is a very common one among solvers of the ‘measurement problem’. It has always puzzled me.”

    Me too.

    Let the cat interact with, and hence become entangled with, its environment as much as you like. Still, the universal quantum state (which is the only one there really is) is still a superposition of a state with a live cat and a state with a dead cat (or rather: many states with dead cats that died at different times). If you want to ascribe a state just to the cat subsystem, then you can trace out over the environment to get a mixture, but if there are no real collapses then it is an improper rather than a proper mixture and cannot be interpreted epistemically. That is, it is not *really* the case that the cat is alive or *really* the case that the cat is dead, but you just don’t know which. So when you say that the cat “was either alive or dead” you are either smuggling in a real physical collapse, or some “hidden” variables (which settle the case). Absent that, you are committed to Many Worlds, and the cat being, as it were, both alive and dead, in different worlds. That is not at all the Copenhagen view.

    As Bell says, decoherence does not turn “and” into “or”. Collapses do. “Hidden” variables do. But decoherence doesn’t.

  98. Scott Says:

    Tim #95: I was commenting on a short story about a lawsuit involving Schrödinger’s cat, by an author who obviously didn’t know anything about decoherence. In such a case, the fact that the cat had decohered in (less than) nanoseconds—i.e., that its state was “definite” at all times in exactly the same sense it would have been definite had it never been put into a box at all, the box having made no difference—seems like it would clearly be an open-and-shut legal refutation of anyone who tried to bring quantum mechanics into the discussion. There wouldn’t be the slightest reason, for the court’s purposes, to go any further into the foundations of physics than that. The alternative for the courts would be to entertain arguments like:

    “Your Honor, the prosecution says I committed the murder, but that’s in at most some branches (including this one, admittedly) of the universal wavefunction, and as John Bell reminds us, decoherence doesn’t turn an ‘and’ into an ‘or’!”

    (I suppose the obvious response for the judge or jury would be, “well then, in this branch we find you guilty.”)

  99. Mateus Araújo Says:

    Maudlin #91:

    I’m not claiming anything in contradiction with what Goldstein, Dürr and Zanghí wrote. In fact the language of “quantum equilibrium hypothesis” and “quantum equilibrium problem” comes from their own paper, as they are aware of the limitations of their results. The paper I cited by Colin and Struyve, that you would do well to read, summarizes a lot of results by Goldstein, Valentini, and Dürr, and adds their own numerical simulations to support the quantum equilibrium hypothesis. So I’m merely being precise about what they know, instead of accepting a blackbox claim of “quantum equilibrium is true”.

    You are being disingenuous about the case of the gas: it is trivial to prepare it in a non-Maxwell distribution. Just think of a gas expanding to vacuum or passing through a nozzle. They are non-equilibrium situations, and assuming the equilibrium distribution gives you wrong predictions.

    You are correct that the Boltzmann entropy and Maxwell distribution are objective in the sense that any observer will agree about them, but they still depend on the subjective choice of what to ignore. A simpler example is the entropy of a die. If I put all the faces in the same macrostate, it has entropy log(6). If my macrostates are even or odd numbers, its entropy is log(3), and if they are numbers mod 3 the entropy is log(2). But with Bohmian mechanics it is impossible, e.g., to know anything more than the parity of the number, so you’ll never get the Boltzmann entropy lower than log(3). So this is a favoured choice of macrostates, something objective about the system. This is new. This is different.
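
    For concreteness, a toy sketch of that coarse-graining arithmetic in Python:

        import math

        faces = [1, 2, 3, 4, 5, 6]

        def cell_entropies(label):
            # Boltzmann-style entropy of each macrostate: log of the number of faces it contains.
            cells = {}
            for f in faces:
                cells.setdefault(label(f), []).append(f)
            return {k: math.log(len(v)) for k, v in cells.items()}

        print(cell_entropies(lambda f: 0))      # one macrostate of all 6 faces: entropy log 6
        print(cell_entropies(lambda f: f % 2))  # odd/even: each macrostate has entropy log 3
        print(cell_entropies(lambda f: f % 3))  # residues mod 3: each macrostate has entropy log 2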

  100. Tim Maudlin Says:

    Scott # 98

    Yes I read the story. But let’s make this as clear as possible.

    Instead of a regular box, put the cat into a sort of thermos-like container: an inner chamber sealed in glass (large enough to hold enough air for the cat not to suffocate) which is surrounded by a (nearly) perfect vacuum, which is then contained by another glass chamber. The whole gadget is put in a light-proof box.

    Now the cat interacts like crazy with its immediate environment and so decoheres. But there is no obvious way that the cat + environment + glass container system interacts at all with the outer glass + box + rest of the universe system. Let the rest of the story go as written.

    Now: is there in fact a time at which the cat either definitely dies or definitely survives in any branch of the external world in which we live? Or do we only split into various subbranches when the seal is broken and the inner system interacts with the outer system where we are? This case is just not clear to me, because I have never been clear about how this branch-creation mechanism is supposed to work.

    If the outer-world cat-health-branching only occurs when the inner subsystem entangles with the outer world, then I think the whole story makes sense: did the cat *really* predecease Schrödinger (whose death, being in the outer world, has a definite time)?

    If it is obvious to you how this story goes in terms of branching please explain it to me. I never have gotten the hang of Many Worlds.

  101. Tim Maudlin Says:

    Mateus # 91

    I am not at all sure on what basis you have come to the conclusion that I have not read Colin and Struyve’s paper. I have, and there is absolutely nothing in that paper at all that suggests any possibility that the present state of the universe is not in quantum equilibrium, and hence would show non-Born statistics. Even Valentini would not deny that for any regular lab. And that’s on the completely unsupported supposition that 13.7 billion years ago the universe was not already in quantum equilibrium. I mean are you really extrapolating from some numerical results on a single particle in a 2-dimensional box with a wave function with few nodes to the relaxation time of the entire universe? Pray tell how you have done that calculation.

    I said nothing at all about not being able to prepare gases in non-equilibrium states. If you want to address that question, then the analogy with classical stat mech is perfect: once the universe has come to thermal equilibrium, no one can prepare anything in a non-equilibrium state. In fact, no one can live at all! Our ability to live and to do things like prepare systems in states of our preference, and particularly states of non-equilibrium, depends critically on the universe *not* being at thermal equilibrium. Obviously. So if the universe is in quantum equilibrium, then there is not a thing we can do about it. So what?

    What I was reacting to was this sentence: “Well, if it is literally impossible to obtain knowledge about the position, it seems that we are dealing with an objective property of Nature – an objective probability.” It is, of course, not literally impossible to obtain knowledge of the position of a Bohmian particle. Indeed, you can obtain it to whatever degree of exactitude you like short of perfect knowledge. It is impossible, however, to obtain that knowledge without also perturbing the conditional wave function of the particle, in such a way that the remaining uncertainty still yields the usual uncertainty relations. Again: so what? That is, of course, the “absolute uncertainty” that Dürr, Goldstein and Zanghí are expositing.

    I can’t follow your claims about the die at all: you seem to be mixing up the Boltzmann (i.e. thermodynamic) entropy with something to do with quantum equilibrium. In Bohmian mechanics, the thermodynamic entropy is a function of the effective wave function of a system, and has nothing to do with the exact configuration of particles beyond that.

    The Boltzmann entropy depends on a partition of the phase space. In many ways, such a partition is not at all “subjective”: it has to yield, for example, macrostates of a given temperature field and pressure field and volume at macroscale if we are going to replicate classical thermo. And the sense in which it is “subjective”—the exact sizes of the cells used to define the macro-condition, for example—has nothing at all to do with anyone’s ignorance of anything. So calling the Boltzmann entropy “subjective” is, at best, highly misleading. I would put it firmly on the objective side of things.

  102. Scott Says:

    Tim Maudlin #99: Well, there’s still gravitational decoherence, which can never really be shielded off except possibly by throwing the cat into a black hole. But, OK, suppose you’d done an experiment vastly beyond anything that’s ever been accomplished or might ever be accomplished in human history, one whose difficulty GA and Traruh were grossly misled about by whatever popular articles they read, one for which the current experimental frontier involves not cats but molecules a few nanometers across. Suppose you’d actually placed a cat in superposition in such a way that you could then go and actually measure the interference. In that case, I don’t think MWI renders a clear verdict about whether “branching” has happened or not—branching was only ever an approximate concept anyway, and here you’ve engineered a breakdown of the concept. And MWI or no MWI, the court would then indeed be faced with all the difficult questions about the foundations of QM that people are discussing in this thread.

  103. Tim Maudlin Says:

    Scott #99

    Excellent! Real progress (for me anyway)! That’s very helpful!

    Now please riddle me this: if gravitational interactions (which I did indeed ignore, although doing the whole experiment in freefall in interstellar space would simplify it: the inner chamber can just float inside the outer one) really lead to decoherence, how is it that, e.g., people can see interference effects in, say, buckyballs? What is the scale of gravitational interaction that would kill off all such interference effects? Is it really obvious they would couple the inner chamber to the rest of the world in the right way to decohere the rest of the world even in a regular lab on Earth? Or do you need a (presently non-existent) theory of quantum gravity to even address this question?

  104. Marcin Mucha Says:

    Am I the only one truly disappointed that after all Scott has not designed QM interpretations collectible cards? Such a wasted opportunity…

  105. Scott Says:

    Tim #101: The reason gravitational decoherence doesn’t affect the buckyball experiments, I think, is simply that the buckyballs are not nearly massive enough and not nearly far enough apart, and gravity is so weak. In such experiments, the gravitational decoherence is completely negligible, not even worth thinking about, compared to other, more mundane decoherence sources.

    But pushing this further takes us to Roger Penrose’s central speculation on the subject: namely, that when a superposition of a mass and a displaced version of that mass involves both a large enough mass, and a large enough displacement, that “the surrounding spacetime metric gets perturbed by more than one Planck unit” (for some way of making that precise…), then decoherence will necessarily be triggered. Others disagree: I think the mainstream prediction is that the which-way information would actually need to get recorded in (for example) a particle registered at a detector, not just in the gravitational field.

    In any case, Penrose and others did calculations that suggested that the necessary experiment would be a few orders of magnitude beyond the current state of the art, but not, like, laughably far beyond it (see this paper). I know that the group of Dik Bouwmeester spent more than a decade trying to work toward doing the experiment. I don’t know whether it’s still being actively pursued, or whether it was given up as too hard for the present state of technology.

    As I wrote this, an interesting question occurred to me to which I don’t know the answer. Namely, at what level of mass and displacement does everyone agree that gravity would cause decoherence—i.e., is the prediction (in contrast to Penrose’s) completely uncontroversial—for (let’s say) an experiment performed on the earth’s surface? Does anyone know?

  106. Tim Maudlin Says:

    Scott # 102

    Sorry: I spoke too soon. Why are you saying that my Schrödinger-cat-in-a-big-thermos is such a big deal to create? We can do that now. My only question was whether the inner system would couple to the outer system in such a way as to split the outer system into a set of “worlds” that correspond to different cat-health states, or, in contrast, no such coupling occurs and relative to a *single* outer world the whole cat system is in a macroscopic superposition.

    I never imagined, or postulated, that any *experiment* could pick up signs of interference between the various superposed macrostates of the cat + environment + bottle system! Of course that will always be practically impossible! Nor was Schrödinger ever worried about being able to experimentally detect interference between the cat-alive and cat-dead states! He just thought that it was absurd to have a physics in which the cat is ever not either determinately alive or determinately dead! Since in the “standard” theory, collapses only occur on measurement, it was enough for Schrödinger to keep the cat from being observed at all. Hence the box.

    Similarly, what the story was asking, and I was asking, has nothing to do with observing interference effects. It has to do with whether—from the “perspective” of the outside world or according to the branch the outside world is on—the cat is always either definitely alive or definitely dead and never in an indefinite state. And that depends on whether the outer-system branches are somehow extensions of the inner-system branches, and if so how that comes about. If putting the cat in the thermos prevents that, then we can already do the experiment. It won’t have any remarkable outcome: when we finally open the box and get entangled with the cat we will split into branches with different cat-health states. But the *theory* will tell us either that before we looked we were already so split or before we looked we were not. I was not proposing an experimental answer to that question, but just interested in the theoretical answer that the Many Worlds theory implies.

  107. Tim Maudlin Says:

    Scott # 105 It’s hard to keep up, but this is really getting us somewhere!

    Penrose’s theory is not a “decoherence” theory at all, it is an Objective Reduction, i.e. collapse, theory with only a single outcome and with the probabilities for each possible outcome given by Born’s rule. What Penrose is looking for is a gravitational *trigger* for the collapse. And the collapse really does change “and” into “or”, unlike decoherence.

    So I don’t see any grounds to think that Penrose’s numbers for collapses are even related to the analysis of decoherence. I guess Karolyhazy did look at something in this ballpark for his collapses, but I don’t think Penrose does. But I could be wrong about that.

  108. Scott Says:

    Tim #104: Yes, with any experiment that anyone can do now, or probably for the next 300 years (pending an AI singularity), the inner system would couple to the outer system in such a way as to cause branching. Assuming the inner system contains an actual cat (the meowing kind, not the 3-qubit GHZ kind). I guarantee this.

    If |A⟩ and |B⟩ are orthogonal “classical pointer states,” then the technological abilities that are needed to prepare a state of the form |A⟩+|B⟩, and actually keep it isolated from its environment, are very nearly the same as the technological abilities that are needed to (for example) distinguish |A⟩+|B⟩ from |A⟩-|B⟩, or from the mixed state |A⟩⟨A|+|B⟩⟨B|, and thereby prove that |A⟩+|B⟩ was created. I.e., the degree of control needed for the former is so great that, if you can do it at all, then you can probably also do the latter.

    I have a forthcoming paper, hopefully out later this year, which formalizes and proves not quite that, but several other statements in the immediate vicinity—e.g., that the ability to map |A⟩ to |B⟩ and vice versa, is “technologically equivalent to” (has the same quantum circuit complexity, up to a constant factor, as) the ability to distinguish |A⟩+|B⟩ from |A⟩-|B⟩.
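
    To make the |A⟩±|B⟩ point concrete, here is a toy numerical illustration (a minimal sketch of my own, using the 3-qubit GHZ-type case from above, not anything from the forthcoming paper): the unitary that swaps |A⟩=|000⟩ and |B⟩=|111⟩ is X⊗X⊗X, and measuring that same operator as an observable is exactly what distinguishes |A⟩+|B⟩ from |A⟩-|B⟩, since they are its +1 and -1 eigenvectors.

```python
import numpy as np

# Toy check: the swap unitary for |A> = |000>, |B> = |111> is X tensored 3 times,
# and its +1 / -1 eigenvectors (within span{A,B}) are |A>+|B> and |A>-|B>.
X = np.array([[0, 1], [1, 0]])
U = np.kron(np.kron(X, X), X)

A = np.zeros(8); A[0] = 1          # |000>
B = np.zeros(8); B[7] = 1          # |111>
plus = (A + B) / np.sqrt(2)
minus = (A - B) / np.sqrt(2)

assert np.allclose(U @ A, B) and np.allclose(U @ B, A)   # U swaps A and B
assert np.allclose(U @ plus, plus)                       # eigenvalue +1
assert np.allclose(U @ minus, -minus)                    # eigenvalue -1

# The decohered mixture (|A><A| + |B><B|)/2 gives expectation 0 for U, so the
# same observable also separates |A>+|B> from the mixed state on average.
rho_mixed = 0.5 * (np.outer(A, A) + np.outer(B, B))
print(np.trace(U @ rho_mixed))                 # 0.0
print(plus @ U @ plus, minus @ U @ minus)      # ~1.0, ~-1.0
```

    So at least in this toy case, the operator you would apply to swap A and B is the very observable you would measure to tell the two superpositions apart—which is at least suggestive of the equivalence described above.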

  109. Scott Says:

    Tim #105: I believe the relationship is something like this.

    Penrose’s estimates for the mass and distance scale needed to trigger his “gravitational objective reduction” are what the scale needed to trigger ordinary gravitational decoherence would be, if the gravitational field were being continuously measured to Planckian precision by God.

    But hopefully someone who knows more about the subject can clarify?

  110. John Sidles Says:

    Scott wonders (circa #103)  “At what level of mass and displacement does everyone agree that gravity would cause decoherence?”

    In addition to gravitational waves, test masses also couple to electromagnetic waves … yes, even neutral-dielectric test-masses are so coupled.

    Hence a physically natural question is: which decoherence is generically faster: irreducible decoherence from gravitational radiation, or irreducible decoherence from electromagnetic radiation?

    With reference to the literature, the largest test-mass objects and longest time-scales for which quantum-coherent superposition has been experimentally observed are — to my knowledge at least — the neutral-dielectric force-microscope cantilevers of single-electron MRFM experiments. Here quantum-coherent superpositions lasting ~760 ms were sustained in cantilevers of motional mass ~9.1 pg (== 7.5×10^9 buckyball-masses).

    These quantum-coherent neutral-dielectric cantilevers are macroscopic objects by the reasonable criterion that unaided human eyes can see them … just barely! 🙂

    These considerations are exerting a significant engineering impact, too, on the design of next-generation gravity-wave observatories, for which the convenient theoretical assumption that 100-kg neutral-dielectric single-crystal mirrors are well-described by isolated point-mass dynamics is grossly invalid.

    Conclusion  Not by gravity alone do large-dimension Hilbert-space systems dynamically pull back onto low-dimension tensor-spaces … one practical consequence is that Quantum Supremacy (AFAICT) would not obviously be easier to demonstrate in all-electromagnetic no-gravity universes.

  111. Tim Maudlin Says:

    Scott #109

    Measured to Planckian precision by God?

    An actual measurement creates entanglement, and hence decoheres the measured system. And such a precise measurement would decohere the hell out of the system. But no one’s actually doing such a measurement, so how can it be relevant to our question? I’m completely lost now.

  112. Mateus Araújo Says:

    Maudlin #99:

    Since I was accurately summarizing the content of the Colin and Struyve paper, which is itself based on Goldstein et al.’s paper, and you misinterpreted me as claiming that I had found a flaw in Goldstein et al.’s proof, I inferred that this misunderstanding originated from you being unfamiliar with Colin and Struyve.

    And now you are again misunderstanding me, by saying that I’m claiming that the universe is today not in quantum equilibrium, and that this should follow from the assumption that the universe was not in quantum equilibrium at the Big Bang. I’m claiming nothing of the sort. I’m describing what the evidence in favour of the quantum equilibrium hypothesis is, and claiming that this evidence is rather weak.

    Or maybe you don’t find the quantum equilibrium hypothesis relevant at all, and that is why we are failing to communicate? The quantum equilibrium hypothesis claims that generically systems which are not in quantum equilibrium typically converge to quantum equilibrium. It allows us to do away with the unmotivated and downright conspiratorial assumption that the universe was in quantum equilibrium at the time of the Big Bang. As such it would go a long way towards solving the problem of probabilities in Bohmian mechanics.

    Except, of course, for the disanalogy with statistical mechanics. One is not able to learn the positions of the particles – except in a way, as you describe, that makes it impossible to do anything interesting with them. Again, it is very easy to prepare a gas molecule with a given position and predict its trajectory afterwards. If it were possible to do this with Bohmian positions nobody would have an issue with calling the quantum probabilities subjective.

    And come on, I find it hard to take seriously your claim that Boltzmann entropies are “objective”. They are literally a description of our ignorance of the microstates. Of course, we choose the macrostates to be such that they are compatible with a given temperature, pressure, etc. This is even more subjective! We are defining them precisely to be limited by our knowledge of these macroscopic properties. If one decides to take into account another “macroscopic” property, bam, the division into macrostates and the Boltzmann entropy changes again.

  113. Mateus Araújo Says:

    Scott #103:

    Well, if the source of the gravitational field is not in a superposition – as is a good approximation for the case of the Earth – there will be no entanglement and thus no decoherence. You need two sources that can gravitationally affect each other for that.

    One case where I think gravitational decoherence will uncontroversially happen is in this proposed experiment, where two milligram-scale gold spheres interact gravitationally. They will get entangled and therefore one could be said to decohere the other.

  114. Scott Says:

    Tim #109: It’s relevant only because, for reasons involving consciousness that we need not go into here, Penrose speculatively proposes a new law of physics, wherein such measurements would effectively occur, even with no actual apparatus performing them. (In other words, there would be objective reductions.) Well, and Penrose also thinks that these collapses could violate the Born rule in uncomputable ways, but again, we need not go into that, since the basic gravitational OR proposal is speculative enough.

  115. Matthieu Says:

    At Scott #75 and #85:

    Thanks for your post on a subject I adore.

    I’m glad to better understand your position. Previously I had a hard time seeing why you don’t enthusiastically embrace MWI.

    So everything rests on one’s philosophical view about realism, the belief that reality exists independently of observers.

    The first quantum physicists, especially Bohr, concluded that quantum physics disproves philosophical realism, with most modern philosophers replying “I told you so”.

    However with Everett and MWI, we have a convincing description of quantum physics that is compatible with realism.

    I tend to think that having a reality is worth some weirdness. (We have experimental results that our universe is weird anyway.) I agree that’s ultimately only a philosophical preference.

  116. Shmi Says:

    Scott #51:

    > I think most MWI proponents would only regard “splitting” as having happened once you have two branches that have become “macroscopically distinct,”

    Yes, and that’s the setup I have mentioned (no idea if you had any issue with what I stated): a radioactive atom inside a detector that records the time and the direction of emission. The “macroscopically distinct” branches are different because they all have different recorded emission times (in BIG RED letters, if you want it absolutely unequivocally macroscopic).

  117. Tim Maudlin Says:

    Mateus #112

    You seem to be so intent on suggesting something that you are running roughshod over logic in your own posts now. Here is where we actually are.

    1) Since quantum equilibrium states are typical—that is, the vast, vast, vast majority of possible universal configurations are quantum equilibrium configurations (using a natural measure of typicality)—absent some reason *not* to expect that we are in quantum equilibrium it should come as no surprise if we are in quantum equilibrium. (Do you disagree?) This is the position of Dürr et al.’s paper.

    2) On the completely unmotivated hypothesis that the universe started out not in quantum equilibrium, we can ask how long it would take to equilibrate. In particular, we can ask this for a typical wavefunction, which will have lots of nodes, etc (unlike, for example, a momentum eigenstate that has none). One step in this is showing that such a system will typically equilibrate and another is calculating the equilibration time. This raises perfectly good mathematical questions, but if one accepts 1) they are really quite unimportant with respect to the question of whether the present state of the universe should, according to the theory, be in quantum equilibrium.

    3) These are hard questions to answer by direct calculation. Numerical simulations suggest strongly that A) typical systems out of equilibrium will indeed typically equilibrate and B) for certain modifications of the basic dynamics from Bohm’s, the equilibration process will go faster than it does with Bohm’s. It is this last claim that is the subject of Colin and Struyve’s paper.

    That is the actual situation. Either you are aware of this or you are not, but if you are, then you already recognize that Colin and Struyve’s paper, while raising a precise question, has zero bearing on whether we should expect the present state of the universe to be in quantum equilibrium according to Bohmian mechanics. The paper has literally no bearing at all. None.

    So why bring it up?

    You write:

    “It allows us to do away with the unmotivated and downright conspiratorial assumption that the universe was in quantum equilibrium at the time of the Big Bang. As such it would go a long way towards solving the problem of probabilities in Bohmian mechanics.”

    There is no “problem of probabilities in Bohmian mechanics”. Here is one key paragraph from “Absolute Uncertainty”:

    “We may summarize the conclusion at which we have so far arrived with the assertion that for Bohmian mechanics typical initial configurations lead to empirical statistics at time t which are governed by the quantum formalism. Typicality is to be here understood in the sense of quantum equilibrium: something is true for typical initial configurations if the set of initial configurations for which it is false is small in the sense provided by the quantum equilibrium distribution P and the appropriate conditional quantum equilibrium distributions PYt arising from P.”

    The claim is that the hypothesis that the universe has always been in quantum equilibrium is “unmotivated and downright conspiratorial”. What this paragraph indicates is exactly the opposite: just from the pure mathematics of the theory, and apart from any empirical consideration, the hypothesis that the universe is *not* in quantum equilibrium is “unmotivated and downright conspiratorial”. That is the situation also in stat mech, which is why the extremely low thermodynamic entropy of the early universe is a *problem*! Think of Penrose’s famous picture of God trying to pick out an initial condition for the universe with a thermodynamic entropy as small as ours was. If, on the other hand, our universe started out in thermodynamic equilibrium, then God could be blindfolded! There would be no problem at all! (Of course, if that were the case we would not be here to tell the tale. Thank goodness quantum equilibrium is not like that!)

    As I said at the start, the whole question of equilibration is really just a side note here: since Born statistics are expected for systems in quantum equilibrium, and since the overwhelming majority of possible initial configurations are quantum equilibrium configurations, and since the empirical evidence (Born statistics) suggests we are in quantum equilibrium, there just is no problem to solve. It is only if one decided, in a completely unmotivated way, to postulate that the universe started out *out of* quantum equilibrium that the question of equilibration even arises. And all of the numerical evidence is that even in such an artificially constructed case, the universe would have equilibrated by now. Colin and Struyve’s paper isn’t even about *that*: it is about the relative *rates* of equilibration for Bohmian mechanics compared to some other, rather artificial, theories.

    Maybe you are just confused by the situation in thermo. Exactly the same mathematical situation arises in thermo, but there we know empirically that the universe did not start out in thermal equilibrium, it started out very

  118. Tim Maudlin Says:

    Mateus (cont’d)

    very far from equilibrium. So that does constitute a real physics problem. That is a problem for the very reason that the holding of quantum equilibrium is *not* a problem, and the empirical appearance of Born statistics is *not* a problem and never was. Because equilibrium is almost all of what is physically possible.

    If you have read “Quantum equilibrium and the origin of absolute uncertainty” at all, then you have not read it carefully, or in any case you have not understood it. If you thought that Colin and Struyve’s paper was attempting to fill some hole in Bohmian mechanics, then you have not understood that paper either. Your citing that paper at all proves that. I have just boiled it all down for you: at least you can read this post carefully and understand it. That would be progress for you.

  119. Tim Maudlin Says:

    Mateus #112

    As for the last paragraph of your post: Of course the Boltzmann entropy is objective in precisely the way I describe. The definition of Boltzmann entropy makes no reference—none, zero, nada, zip, zilch—to anyone’s knowledge or ignorance of anything. Once you have partitioned the phase space into macrostates, the Boltzmann entropy of a system is fixed by its microstate. Period. If somehow I knew the exact microstate in perfect detail, down to the last decimal place, that would have no impact at all on the system’s Boltzmann entropy. How could it? To paraphrase David Albert, do you really think that the manifest fact that glasses of ice water regularly and predictably and spontaneously evolve into glasses of cool water with no ice somehow depends on *what we know or don’t know about their microstates!* How could it? That is literally crazy.
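
    To make the point concrete with a toy example (a minimal sketch, my own illustration): take N two-state “molecules”, define the macrostate by the number that are “up”, and the Boltzmann entropy of a microstate is just the log of the number of microstates sharing its macrostate. Nothing about anyone’s knowledge enters anywhere.

```python
import math

# Toy Boltzmann entropy: macrostate = number of "up" molecules;
# S_B(microstate) = log of how many microstates share that macrostate.
# The function takes the exact microstate as input, yet depends on it
# only through the macrostate partition -- knowledge never enters.
def boltzmann_entropy(microstate):
    N, n_up = len(microstate), sum(microstate)
    return math.log(math.comb(N, n_up))

print(boltzmann_entropy([1, 0] * 50))   # 50 of 100 up: log C(100,50) ~ 66.8
print(boltzmann_entropy([1] * 100))     # all 100 up: log C(100,100) = 0.0
```

    Knowing both microstates exactly, down to the last bit, changes neither number.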

    So among the people you have failed to understand, let’s add Boltzmann. In fact, the general understanding of entropy—what it is and why it is physically important—is in as bad a state in physics as the general understanding of the foundations of quantum theory. For anyone interested—spam alert!—we are going to be holding a Summer School devoted to entropy this summer in Split, Croatia from July 16 to July 22. Look for the announcement in the next week, or e-mail me if you would like more information.

  120. Sandro Says:

    But from your description, it sounds to me like that proposal again makes many choices that could be criticized as arbitrary. Most obviously, there’s the choice of a lattice, and (much more serious to me) presumably also the choice of a foliation of spacetime?

    The wave function itself entails a foliation, so such a preferred foliation a) isn’t arbitrary, and b) doesn’t strike me as a compelling objection given its presence in every quantum theory.

  121. Atreat Says:

    Matthieu #113,

    Except Scott – someone whose natural predilection is toward realism – finds MWI’s account of the observer lacking.

    Your quip, “I tend to think that having a reality is worth some weirdness,” betrays motivated reasoning. You are letting your biases towards an objective reality influence your evaluation of the evidence.

    Personally, I think MWI has bigger problems. I haven’t seen any MWI proponent respond to the criticism of Mitchell Porter #30, Mateus Araújo #52.

    Scott, do you also find fault with the Deutsch-Wallace theorem? What is your understanding of how MWI gives an account of the number of worlds? I think Tim was asking for the same thing: a sharp definition of how MWI branches.

  122. Atreat Says:

    Summing up…

    It is clear from the thread that we have roughly 4 sets of cards Scott might have interest in collecting with the pros and cons:

    1) Copenhagen – {Pros: no additional math, i.e., Occam’s Razor; relatively minimalist for a QM interpretation}, {Cons: requires a non-realist metaphysical leap, doesn’t define the observer}

    2) MWI – {Pros: no additional math, i.e., Occam’s Razor; relatively minimalist for a QM interpretation; restores some semblance of realism}, {Cons: still quite a metaphysical leap, doesn’t define the observer, is very fuzzy defining branching and how to count the worlds}

    3) dBB – {Pros: restores realism, relatively small metaphysical asks}, {Cons: extra math, i.e., fails Occam’s Razor; uniqueness?}

    4) objective collapse – {Pros: restores realism, no metaphysical ask, if true then hello QM gravity!}, {Cons: no evidence for it yet}

    BTW, if someone doesn’t create a trade worthy set of collectible cards from this thread, then I’m going to be mighty disappointed 🙂

    PS: I really want a limited edition “Ghost in the QTM” card!

  123. Paul Hayes Says:

    Matthieu, #113,

    Bohr, especially, did not come to silly conclusions about what QM says about reality.

  124. Tim Maudlin Says:

    Atreat #122 or anybody else:

    I have never been able to understand how the term “realism” is being used in these discussions. There is a doctrine called “scientific realism” that is discussed in the philosophical literature, but that can’t be what is meant here. What is a “non-realist metaphysical leap”? When people sometimes say that one can retain locality by jettisoning realism, I have paraphrased that as “There is no physical reality, but thank God it’s local!”, which is less than half a joke. I honestly have no idea what is being proposed. Any clear exposition, or even clear example, would help tremendously.

  125. fred Says:

    Scott:
    “what would it be like to be maintained in a coherent superposition of thinking two different thoughts A and B, and then to get measured in the |A⟩+|B⟩, |A⟩-|B⟩ basis?”

    But thoughts are no different than the other things that appear in consciousness, like sounds, images, sensations of touch, pain, etc.
    The question is then whether perception is in itself a type of perturbative measurement. I.e. can a certain type of QM system measure itself?

    When it comes to the relation between QM and consciousness, QM is the only theory where multiple objects suddenly behave as one (entanglement) in a fundamental way (unlike, say, physics of a “classical” gas or liquid, where any so-called “emergent” behavior can always be reduced to the sum of the physics of the individual “classical” constituent particles).
    And consciousness also seems to be a bottom level property of the group of atoms forming its associated brain (unless we posit that atoms are also conscious, and the content of their consciousness is limited to their spin/charge).

    And reducing consciousness to information processing alone leads to the conclusion that the universe is mathematical in nature (and reality as a simulation type of stuff).

  126. Scott Says:

    Atreat #119: I haven’t studied the Deutsch-Wallace theorem well enough to be confident in anything I might say about it here. As I said, though, I do know numerous other ways to justify the Born rule as “the only probability rule that could possibly make sense, once we’ve fixed the unitary part of QM.” And this isn’t, and never has been, the main thing that worries me about MWI.

    The main thing that worries me is simply that, contrary to what MWI suggests, I don’t know whether it’s actually possible to prepare a superposition over two different mental states of a conscious being, and do an experiment for which it actually matters that the superposition is a superposition, while still maintaining the preconditions that caused the being to be conscious in the first place.

    Suppose it isn’t possible, but meanwhile, that standard linear QM holds for systems of arbitrary size, as large as we can do experiments with. Then as I said, in some sense both the many-worlder and the Copenhagenist are permanently safe from refutation: their views are “equivalent up to change of philosophical gauge.” The many-worlder can say, I can posit the reality of all these other branches with other versions of us and you’ll never refute me, and that’s true. Meanwhile the Copenhagenist can say, I can posit that God slices away all the other branches and you’ll never refute me, and that’s also true. Even the hidden-variable theorist can go on forever, inventing new remoras to attach to whatever is the shark of the best current quantum theory of nature. That project might not be the one with the greatest interest for me personally, but it’s permanently safe from refutation as well, and to each their own.

    This is where I end up when you really, really, really push me to lay my cards on the table: with exactly what I said about it in the ghost paper. 🙂

  127. Scott Says:

    Sandro #118: No, because in QFT the wavefunctions are just tools that you use to extract probability distributions over observables, so it’s fine to have different wavefunctions for different foliations. (Or we could say: in MWI+QFT, what’s being posited to exist is just the whole block universe, with infinitely many different wavefunctions and unitary evolutions for them corresponding to different foliations.)

    By contrast, the entire point of beables theories is to stomp your foot on the ground and say no, that’s not good enough! There have to be actual values for things like the configurations of fields and particles, regardless of whether or not anyone’s looking, not just abstract wavefunctions! But then, having chosen that as your hill to die on, it’s actually a serious problem if your beables assume different, incompatible values for different foliations (as they do)—unless you bite the bullet, as many beables theorists do, and say that there must be a single preferred foliation, even if we can never know what it is.

  128. Tim Maudlin Says:

    Scott # 126

    This shark/remora business is just rhetoric with nothing behind it. The thing that does the heavy physical lifting—that connects the theory to the evidence—in Bohmian mechanics is the particles. It is in the distribution of the local beables that one finds the image of the physical world, and it is by the local beables that one solves the measurement problem, and the probabilities in the theory are probabilities for the local beables to evolve one way or another. Bohmian mechanics shorn of the particles is not a shark without remoras: it is, I don’t know, a shark without any teeth that cannot bite anything.

    If you have not taken that in about the architecture of the theory then you have not yet appreciated how the theory is put together. I can try to explain it, but I need a sense of why you are saying the things about it that you are saying, and I really have not got a clue. Is it unclear how the local beables play the role I just exposited in the theory? If not, why?

  129. Mateus Araújo Says:

    Maudlin #115-116-117:

    I have no wish to continue the conversation in this hostile tone. You should assume a minimum of intelligence and good faith on the part of your opponent, otherwise what’s the point?

    I’ll just clarify a couple of points: I was citing Colin and Struyve because they represented the state-of-the-art in numerical simulations of quantum equilibrium, and provided a nice summary of the past results on the subject. The precise statements were much easier to find there than in the mammoth paper by Dürr et al. Maybe I should have just stated the results I wanted to mention without sourcing them. Let me quote myself lest another misunderstanding arise:

    “For simple systems that one can analyse exactly, like a plane wave or a gaussian wavepacket, one actually proves that they do not in fact equilibrate. For more complicated system there is some numerical evidence of equilibration, but much weaker than what can be done for classical statistical mechanics.”

    This is all I wanted to say. Must it be so difficult?

  130. Mateus Araújo Says:

    Maudlin 115:

    As for the technical content: One can only justify the naturalness of a measure of typicality via the physics; so no, the fact that almost all configurations according to some measure are quantum equilibrium configurations doesn’t mean much. Hence my interest in the quantum equilibrium hypothesis, as it would justify using a measure that favoured quantum equilibrium configurations.

    Let me give you a hopefully uncontroversial example of what I’m talking about: an arguably natural measure to use for quantum states is taking a Haar-distributed unitary applied to a fixed state. It has lots of nice properties. But it also fails miserably at predicting which states we find in Nature. It turns out that a measure much better at predicting the state that real condensed matter systems will be in is given by tensor product states, or unitaries that are made of a polynomial number of local unitaries. Both are rather complicated measures that give a very peaked distribution on some corners of Hilbert space.

    As for the initial state of the universe, I’m not assuming that it is or is not in a quantum equilibrium state. I’m just saying that if the quantum equilibrium hypothesis is true then we do not need to assume anything about it, as in either case the universe would have equilibrated by now. It is better to not need an assumption than to need it. I can’t fathom how this is a controversial statement.

  131. Sniffnoy Says:

    OK, I was going to stay out of this one, but I feel like as a mathematician I have to respond to Atreat’s #90.

    Bah! What is the cardinality of the MWI multiverse? Is it larger or smaller than the cardinality of the reals?

    If you want to say MWI allows objective probabilities for these so-called “degrees of existence” then what is the measure of this set and how does QM provide it?

    This comment is — as best I can tell, it is possible that I’m misreading it — making what I’m afraid is a very common mistake, namely, thinking that size of sets is about cardinality.

    Cardinality is one way of measuring the size of a set. It is frequently not a very useful one. It should in no way be identified with the general notion of size.

    For instance, if you’re looking at a subset of the plane, and you want to know how big it is, you would likely ask about its area (assuming it’s measurable); cardinality won’t distinguish between a square of area 1 and a square of area 4, so it’s not very useful there. (Although if the set you’re looking at has area 0 — for instance if it’s finite — then cardinality would probably be a more useful measure, as area won’t distinguish between finite sets.) Or if you’re looking at a subset of the whole numbers and it’s infinite — once again, for finite sets cardinality is obviously what you want — likely you’d ask about its natural density (or its upper and lower densities if there is no natural density).

    Basically, for infinite sets, cardinality is a very crude measure, and is rarely the right choice. Many contexts will have better choices available. The advantage of cardinality is that it’s universal — every set has a cardinality; it’s not limited to or dependent on any particular context. (Also it has the advantage of being a measure that works fine for both finite and infinite sets, while most other ways of measuring the size of infinite sets wash away finite sets as beneath notice, but I’m not sure that’s really much of an advantage in practice, since realistically you’d just switch between cardinality and your other preferred context-dependent measure based on whether the sets in question were finite or infinite.)

    (Note by the way that these other measures of size rarely take values in the cardinals! Area, for instance, takes values in the extended nonnegative reals; a set of infinite area just has area ∞, it doesn’t make sense to ask how infinite the area is. We don’t want to make the mistake of thinking that infinity always involves cardinals, another common mistake.)

    The point is, if you have an infinite set A, and a subset B, there is not some general context-independent way of saying “what fraction” of A is taken up by B, and you are certainly not going to learn about such a thing, except at a truly crude level, by looking at cardinalities. You need additional structure there, such as a probability measure (or a measure more generally, or other possible things). Fortunately, such additional structure is frequently, in the cases we care about, available.
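
    For instance, here is a quick numerical illustration of the natural-density idea (a minimal sketch): the even numbers and the perfect squares have the same cardinality as the whole numbers, but densities of about 1/2 and 0 respectively.

```python
import math

# Finite-N proxy for natural density: |{k <= N : predicate(k)}| / N.
def density_up_to(predicate, N=10**6):
    return sum(1 for k in range(1, N + 1) if predicate(k)) / N

print(density_up_to(lambda k: k % 2 == 0))               # ~0.5   (even numbers)
print(density_up_to(lambda k: math.isqrt(k)**2 == k))    # ~0.001 (perfect squares; -> 0 as N grows)
```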

    Applying this to MWI, well, others can do that. 😛 (The cardinality of the set of multiverses it seems to me ought to be that of the continuum, FWIW, so…)

  132. Sniffnoy Says:

    Ugh, I meant “cardinality of the set of universes” obviously in #128, not multiverses…

  133. Scott Says:

    Tim #125: The beables are remoras because you could completely change the rule governing beable evolution, and yet every physicist and chemist would still use exactly the same algorithm for every experiment they’d ever perform, and even for every theoretical calculation (other than those specifically about the beable interpretation itself). Furthermore, that algorithm would be one that would ignore the beable evolution rule entirely, and would just work directly with the wavefunction or density matrix.

    It’s clear that, for you, every way this conversation could possibly go is another track leading straight back to dBB and Bell, whereas I get off the train at an earlier station. I learned a few new things from this discussion, and it was nice talking to you as always. But I think I’m now going to retire from this thread, my itch for discussing these matters having been sated for now (and other responsibilities pressing down on me more and more heavily). You, and others, are welcome to continue arguing here for a while.

  134. Tim Maudlin Says:

    Mateus #130

    I’m afraid that the tone of this was set by you not by me. Go back and look at the exchange yourself. I was completely polite in trying to explain the situation to you and got back this:

    “What the Bohmians can actually prove is far from your grandiose claim.”

    Actually, Dürr, Goldstein and Zanghí proved precisely what I claimed in “Quantum Equilibrium…”, so I assumed that you were not even aware of the paper. And you then cited Colin and Struyve, whose paper is completely irrelevant. So I guess if you are going to go about calling true claims both false and “grandiose” you better be sure that you are right. In this case you have been wrong down the line. If you want to learn something I can try to explain it and if you are not in the mood I won’t bother. But at least for everyone else:

    Given what Mateus has just said, you could not but get the impression that the “quantum equilibrium hypothesis” has something to do with equilibration. It does not. The hypothesis states exactly this:

    “When a system has wave function psi, the distribution rho of its coordinates satisfies rho = |psi|^2.”

    That’s it. No mention of equilibration at all. The question is how the assumption of this hypothesis is to be justified. It obviously cannot be proven: there are distributions of coordinates that do not satisfy the condition. There is, as Shelly Goldstein would say, the “good set” and the “bad set” of configurations of particle positions, and no amount of mathematics will make the bad set go away. What some amount of careful mathematics can do is prove that the “good set” (the set that satisfies the hypothesis) is in an appropriate sense *huge* and the “bad set” is *tiny*. In such a case the hypothesis seems a reasonable one unless some particular reason is put forward that would tend to prefer the bad set.

    “Huge” and “tiny” must, of course, be determined relative to some measure over the space of configurations. It is trivial that for any partition of the configuration space some measure will make one side “huge” and the other side “tiny”. So to avoid trivialization, we need a constraint on the relevant measure.

    One relevant constraint is equivariance. That is, the measure should be defined in such a way that the time evolution of a large set is a large set and the time evolution of a small set is a small set. In classical stat mech one uses the natural measure on phase space and proves Liouville’s theorem to show that it is equivariant. In Bohmian mechanics the situation is more complicated, essentially because you are dealing with configuration space rather than phase space, and the role played by momentum in classical mechanics is taken over by the wave function in Bohmian mechanics. In any case, given the guidance equation one can show that the |PSI|^2 measure is indeed equivariant, where PSI is the *universal* wave function, not the subsystem wave function psi. This is not at all a trivial proof. It is especially tricky because the quantum equilibrium hypothesis is stated not with respect to PSI but with respect to psi. So one first of all needs to figure out what psi, the wave function of a subsystem, even *means*. Only then can the Quantum Equilibrium Hypothesis even be clearly stated.

    Long story short: Born statistics for systems in a given psi are shown to be typical: they arise for the overwhelming majority of possible initial conditions. That justifies the quantum equilibrium hypothesis. So there is no problem of probabilities at all in Bohmian mechanics. As icing on the cake, one can also show by numerical simulation that non-equilibrium states tend towards quantum equilibrium. So even if somehow the universe started out of quantum equilibrium it would still typically end up there. This is a much more satisfying explanation of Born’s rule than anything to do with rational betting strategies.
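
    For anyone curious about the flavor of those numerical simulations, here is a minimal sketch (my own toy, far cruder than the 2-D box studies by Valentini and by Colin and Struyve discussed above, so expect at best partial, coarse-grained relaxation): Bohmian particles in a 1-D box, guided by a superposition of a few energy eigenstates, started from a uniform (non-equilibrium) distribution, and compared at the end against |psi|^2.

```python
import numpy as np

# Toy Bohmian relaxation in a 1-D box (hbar = m = L = 1); a rough sketch only.
rng = np.random.default_rng(0)
modes = np.arange(1, 5)                               # first 4 energy eigenstates
E = 0.5 * (modes * np.pi) ** 2                        # eigenenergies
c = np.exp(1j * rng.uniform(0, 2 * np.pi, 4)) / 2.0   # equal weights, random phases

def psi(x, t):
    # psi(x,t) = sum_n c_n sqrt(2) sin(n pi x) exp(-i E_n t)
    return (np.sqrt(2) * np.sin(np.outer(x, modes) * np.pi)) @ (c * np.exp(-1j * E * t))

def velocity(x, t):
    # guidance equation: v = Im(d_x psi / psi)
    dpsi = (np.sqrt(2) * modes * np.pi * np.cos(np.outer(x, modes) * np.pi)) @ (c * np.exp(-1j * E * t))
    return np.imag(dpsi / psi(x, t))

# Non-equilibrium start: uniform positions instead of |psi(x,0)|^2.
x = rng.uniform(0.05, 0.95, 5000)
dt, steps = 2e-4, 10000
for s in range(steps):
    v = np.clip(velocity(x, s * dt), -50, 50)   # crude guard near wave-function nodes
    x = np.clip(x + v * dt, 1e-6, 1 - 1e-6)

# Coarse-grained comparison of the ensemble against the Born density at the final time.
bins = np.linspace(0, 1, 21)
hist, _ = np.histogram(x, bins=bins, density=True)
centers = 0.5 * (bins[1:] + bins[:-1])
print(np.round(hist, 2))
print(np.round(np.abs(psi(centers, steps * dt)) ** 2, 2))
```

    The serious versions of this use a 2-D box with many modes and much more careful integration near the nodes; this is only meant to show what “an ensemble of guided particles drifting toward |psi|^2” looks like in code.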

  135. Scott Says:

    OK, I guess there was one factual thing that I wanted to clear up before I go. Mateus #111:

      Well, if the source of the gravitational field is not in a superposition – as is a good approximation for the case of the Earth – there will be no entanglement and thus no decoherence. You need two sources that can gravitationally affect each other for that.

    Thanks! I now get that whether the experiment is done near the earth’s surface was not the right question to ask, because that’s a source that to leading order will affect both terms in the superposition equally.

    But still: I take a 5kg iron ball. I create an equal superposition where, in one branch, it’s here, and in the other branch, it’s 2 meters away. I somehow shield against all sources of decoherence other than gravitational ones. Still, surely the other stuff on my lab table, or the air in the room, will be gravitationally sensitive to the position of the ball at a level that, while minute, is still in principle detectable, and that would be enough to cause decoherence?

    Thinking it over, I can see that the ball’s gravitationally perturbing nearby objects at the level of ~1 Planck length wouldn’t suffice, because in order to cause decoherence, the perturbation would need to be large compared to whatever is the intrinsic uncertainty in the position of the perturbed objects. But still … like, how massive does an iron ball have to be, before its location within an ordinary-sized room will start getting gravitationally recorded by ordinary surrounding objects at a level that’s detectable in principle?

  136. Douglas Knight Says:

    I think you are applying a double standard when you complain about consciousness in MWI but let it pass in other interpretations. MWI says that the world seems classical because it’s approximately classical. Then you complain that the theory can’t use the word “seem” without solving the problem of consciousness. But you let it pass that a classical world would seem classical, which is exactly the same problem.

    Stephen Jordan is definitely being overgenerous to the Copenhagenists; and Scott is positively a shill (also a shill for D-Wave). People equivocate on the term. It is important to keep track of history because “the Copenhagen interpretation” is popular purely because everyone agrees that it is the status quo, which is only because people equivocate on what it means. If you want to have a serious discussion about philosophy, choose an unambiguous name, like instrumentalism.

  137. Douglas Knight Says:

    Pascal 68 and Scott 75: Pascal’s question doesn’t pin down a timeframe. First we should ask what the actual timeline was. MWI was static at the fringe for 30 years, and then progressed for 30 years. Scott’s answers are largely about the period of progress. Some of them suggest ways that the situation was different than in the first 30 years. But the situation for the first 30 years wasn’t just that MWI didn’t make headway against the hegemony of Copenhagen; rather any discussion of interpretation was out of fashion, even pinning down what was meant by “Copenhagen,” hence “shut up and calculate.”

    David Kaiser, author of How the Hippies Saved Physics* has a very specific claim about the stasis. He claims that lack of interpretation research was the result of lack of interpretation in the graduate curriculum, which was because it was hard to teach and grade, simplification being the result of a mass influx of graduate students, as a result of funding, as a result of military demand for physicists.

    * I have not read the book. I heard this story elsewhere and it might not be in the book, which is about a related, but different story.

  138. Tim Maudlin Says:

    Douglas Knight # 137

    The story of Everett in particular is much darker than that. Everett was Wheeler’s student, and Wheeler was very impressed with the idea. So he arranged for Everett to meet Bohr. And Bohr rejected the whole idea, said it could not be allowed. Everett was forced to crop a lot out of his thesis, and ended up not going into academia. He went to work for the Defense Department and then his own firm, and did not even talk about foundations of physics, in part because he had been betrayed by Wheeler. If not for DeWitt, who revived the whole thing and renamed it (from the “Relative State Interpretation”), we might not speak of it today.

  139. John Sidles Says:

    Some of the issues raised in this thread, regarding the role(s) of gravitational dynamics in quantum decoherence, are quantitatively addressed in three arxiv (p)reprints by Erich Joos (link here).

    These three surveys convey the received dynamical wisdom that, first, quantum electrodynamic gauge fields (photons) “tell” quantum matter fields to become classical, following which quantum matter fields “tell” quantum gravity fields to become classical.

    Joos’s surveys include many practical examples, demonstrating that in our universe, it’s QED-dominated condensed-matter dynamics that dynamically decoheres the quantum gravity metric … not the other way around.

    In a nutshell — quoting from Joos’s arxiv:9803052v1 — “matter does not only tell space to curve but also to behave classically.”

  140. John Sidles Says:

    PS: as a Fermi answer to the “cast-iron cat” question (Scott’s question #132), let us imagine that an iron sphere is immersed in an ordinary lab atmosphere, such that the trajectories of individual air molecules (necessarily external to the ball) are gravitationally perturbed by the ball.

    We associate to each air molecule a LIGO-style measuring device with force-noise at the standard quantum limit (as given by Carlton Caves here, for example).

    Then it is straightforward to show that, for the quantum decoherence time to be one second (or longer), as induced solely by gravitational perturbation of air molecule trajectories, the radius of the “cast iron cat” must be 40 microns (or smaller). I would post the specific Fermi-scaling relation, except that SO’s LaTeX previewer never seems to work (for me anyway).

    Of course, decoherence induced by (QED-mediated) collisions of the air molecules directly with the cat, and interactions with ambient black-body radiation in the lab, both will induce “cast-iron cat” decoherence far faster than the relatively feeble gravitational coupling to air molecules.

  141. gentzen Says:

    Mateus Araújo #64:

    You are talking about the way Bohmian mechanics deals with the Born rule as though it were a strength of the theory, … And why must our ignorance be distributed precisely by the Born rule?

    This is an interesting observation, which makes my uncritical acceptance of pilot wave theories as showing compatibility of QM with axiomatic probability theory more questionable than I was aware of. But it also seems to deal a devastating blow to any attempt to derive the Born rule in MWI without explicitly specifying how probabilities should be interpreted. (I guess Everett ran into that issue. Deutsch, or at least those who tried to expound his ideas more clearly, successfully avoided that mistake.)

    I had an email conversation with BT in March 2017, where he praised Deutsch’s version of MWI, and I defended Heisenberg’s version of Copenhagen. In my initial reply, I made one remark about pilot waves (and Everett), which now caught my attention:

    In that context, one might also defend pilot waves as showing that this generalized probability theory is still consistent with the modern axiomatic approach towards probability theory. I am less sure about Everett in that context.

    Here is where your observation comes into play: What if we have a pilot wave theory, and our initial ignorance is not distributed correctly? The wave part of the theory still satisfies every axiom of MWI. But the Born rule is broken now. So the task is now to explain why the probabilities predicted by that broken pilot wave theory are not “true probabilities”. (Or else any claim that the Born rule can be derived in MWI is pointless.)

    In the long term, the blow of this is probably worse for pilot waves. For MWI, it just clarifies what it means to derive the Born rule. And the fix already exists; it just means that Everett’s initial work had some holes. But for pilot waves, it probably means that its probabilities are not “true probabilities”…

  142. Mateus Araújo Says:

    Scott #132:

    This is a very good question, and which I haven’t seen treated rigorously (maybe physicists get squeamish about the whole gravitational field of a particle in a superposition thing).

    But there is a crude model by Joos here that allows us to estimate the quantities involved. He shows that for the off-diagonal elements of the iron ball to get damped by a factor of 1/e after 1 second, we need the difference of the gravitational field of the two members of the superposition to be larger than 10^(-5). The problem is that it is rather tough to generate such a large field difference. The most extreme difference we can get from an iron ball is to pass the air molecules just by its surface, and arrange things that for one member of the superposition the iron ball is at the left of the air molecules, and for the other member it is to the right, so that the difference of the gravitational fields where the air passes is just two times the gravitational field generated at the surface of an iron ball.

    In its turn, this field is given by GM/r^2, and using the density of the iron to eliminate the radius, we get that the field difference is \(2G(4\pi\rho)^{\frac23} M^{\frac13}\), or 10^(-7) M^(1/3). This implies that we need a ludicrous 1-ton ball of iron to get gravitational decoherence, with about 3 meters of radius, with a spatial superposition of 6 meters.

    Well, if you were worried about unshieldable gravitational decoherence ruining Wigner’s friend experiments, you can relax now.

  143. Mateus Araújo Says:

    Maudlin 131:

    Ah, so this is the problem! Checking Wiktionary, I see that in English grandiose has the following two meanings:

    1 – large and impressive, in size, scope or extent.
    2 – pompous or pretentious.

    I meant the first, and was unaware of the second. Not ironically, I really think that it would be an impressive result to show that probabilities in Bohmian mechanics work like those from statistical mechanics. I’m just pointing out that the results we actually have do not support such a strong claim.

    Friends now?

  144. Mateus Araújo Says:

    Scott #132:

    (Ops, LaTeX problem, would you mind deleting the borked comment and keeping this? Thanks)

    This is a very good question, and one which I haven’t seen treated rigorously (maybe physicists get squeamish about the whole gravitational field of a particle in a superposition thing).

    But there is a crude model by Joos here that allows us to estimate the quantities involved. He shows that for the off-diagonal elements of the iron ball to get damped by a factor of 1/e after 1 second, we need the difference of the gravitational field of the two members of the superposition to be larger than 10^(-5). The problem is that it is rather tough to generate such a large field difference. The most extreme difference we can get from an iron ball is to pass the air molecules just by its surface, and arrange things so that for one member of the superposition the iron ball is to the left of the air molecules, and for the other member it is to the right, so that the difference of the gravitational fields where the air passes is just two times the gravitational field generated at the surface of an iron ball.

    In its turn, this field is given by GM/r^2, and using the density of the iron to eliminate the radius, we get that the field difference is \(2G(4\pi\rho)^{\frac23} M^{\frac13}\), or 10^(-7) M^1/3. This implies that we need a ludicrous 1-ton ball of iron to get gravitational decoherence, with about 3 meters of radius, with a spatial superposition of 6 meters.

    Well, if you were worried about unshieldable gravitational decoherence ruining Wigner’s friend experiments, you can relax now.

  145. Mateus Araújo Says:

    Argh, another typo, it’s a 1,000 ton ball of iron, not 1 ton.
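
    For what it’s worth, plugging the rounded numbers above back in reproduces these figures (a minimal sketch, taking the quoted 10^(-5) threshold and the rounded 10^(-7) prefactor at face value, plus the density of iron):

```python
import math

# Back-of-the-envelope check of the figures above:
# field difference ~ 1e-7 * M^(1/3) must exceed ~1e-5 for 1/e damping in ~1 s.
prefactor, threshold = 1e-7, 1e-5
rho_iron = 7874.0                                   # kg/m^3

M = (threshold / prefactor) ** 3                    # kg
r = (3 * M / (4 * math.pi * rho_iron)) ** (1 / 3)   # radius of a sphere of that mass

print(M / 1000, "metric tons")   # 1000.0 metric tons
print(round(r, 2), "m radius")   # ~3.12 m
```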

  146. Scott Says:

    Mateus #139: Thanks so much, unbelievably helpful!

    But I don’t find 1 ton to be a “ludicrous” mass at all: it’s merely on the high side of what one might have guessed. All the numbers you’ve mentioned are actually impressively human-scale.

    But OK, this was all for the off-diagonals to drop by 1/e every second, correct? Since the dependence on mass goes like M^(1/3), does that mean that with 1/8 as much mass (= a heavyset human on two opposite ends of a room), we’d get the same amount of decoherence in 2 seconds? Or with 1/27 as much (a child), 3 seconds? Did anyone ask Wigner how old his friend was, and how much he or she weighed? 🙂

  147. Scott Says:

    Oops, comments crossed! But even if it’s 1000 tons, again just running the numbers, wouldn’t we merely have to wait ~25 seconds for a normal weight human on opposite ends of the room to decohere gravitationally by a 1/e factor?

  148. Sandro Says:

    Scott #124:

    By contrast, the entire point of beables theories is to stomp your foot on the ground and say no, that’s not good enough! […] unless you bite the bullet, as many beables theorists do, and say that there must be a single preferred foliation, even if we can never know what it is.

    There would be one, given the paper I linked, because all measurements are contextual in beable theories. The preferred foliation would be imposed by the environment making measurements.

    As you say, we can’t know what it actually is, but that isn’t really relevant for our calculations. This still seems to satisfy your non-arbitrariness criterion.

  149. Mateus Araújo Says:

    Scott #142: It’s my pleasure.

    The scaling is a bit different, but it works even better for your argument.

    The off-diagonals are damped by \(\exp(-\Lambda g^2 t^2) \), where \(\Lambda\) is a complicated factor, g is the field strength, and t is time. So the exponent goes with time squared and mass to the 2/3, and with a mass of 100 kg one only needs to wait around 20 s to get the same 1/e factor.
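
    To spell out the scaling (just re-deriving the 20 s from the formula above): since \(g \propto M^{1/3}\), the exponent \(\Lambda g^2 t^2\) goes as \(M^{2/3} t^2\), so holding the damping factor fixed gives \(t \propto M^{-1/3}\). Going from the 1,000-ton ball (1 s for the 1/e factor) down to a 100-kg friend multiplies the required time by \((10^6/10^2)^{1/3} \approx 22\), hence the roughly 20 s.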

    But I had in mind somebody who wants to do a Wigner’s friend experiment, not someone who wants to ruin it! This argument implies that if an adult friend takes their time doing the measurement, and walks around the lab during it, Wigner will never be able to make them interfere.

    But this just means that Wigner needs to befriend a child, tell them to sit tight and do the measurement as fast as possible!

  150. Tim Maudlin Says:

    Mateus #142

    Yes, friends.

    But if you would be impressed by that result, then I don’t understand why you are not impressed. In stat mech, there are different questions about the Maxwell velocity distribution one might ask.

    One is: Why is the actual distribution in gases at equilibrium found to be Maxwellian?

    The answer to that is that the Maxwellian distribution is typical in the equilibrium sector of the phase space.

    Parallel: Why do we see Born statistics when we do measurements on quantum systems?

    Answer: Born statistics are typical for systems in quantum equilibrium. This is proven in “Quantum Equilibrium…”.

    Another is: Why are the gases in equilibrium in the first place?

    A tempting answer to this question would be: Since almost all of the phase space for the gas is equilibrium, it should be no surprise that a given box of gas is at equilibrium: There would have to be something unexpected for it *not* to be at equilibrium.

    In stat mech, this tempting answer is not sufficient, because in fact we see boxes of gas out of equilibrium all the time, boxes of gas with much lower entropies than their maximum entropy. So we ought to be puzzled by that, and we are. We end up having to accept the Past Hypothesis—that the initial state of the universe was very, very low entropy—and try to figure out an explanation for that. But even absent such an explanation, we can explain why actual boxes of gas that we experiment on display the Maxwellian velocity distribution by showing that the Second Law is typical behavior for all boxes of gas, and then calculating the relaxation time to equilibrium for a given box and waiting that long before experimenting on it.

    But if the universe were in global thermodynamic equilibrium, we wouldn’t need to do this step. If it were in global thermal equilibrium, then we would never run into any box of gas out of equilibrium and have to wait for it to equilibrate; all the boxes would already be in equilibrium, and we would not find that in need of any explanation beyond “almost all of phase space is equilibrium, so what’s to be explained?” We could calculate the time period to wait for a box to spontaneously fluctuate out of equilibrium enough to make an empirical difference, and the time scale would be so long as to be practically irrelevant.

    That is exactly the situation in Bohmian mechanics. We always see Born statistics. Always. Born statistics are proven to be typical in quantum equilibrium. Conclusion: we are, and always have been in quantum equilibrium. The absence of systems that display anything but Born statistics is explained. And since we have no “low quantum entropy” reservoirs to draw on, we also can’t prepare systems out of quantum equilibrium, just as we could not do for boxes of gas if the world were at thermal equilibrium.

    So unlike stat mech, there is no reason to postulate that any system has ever been out of quantum equilibrium, and if that is so then it needs no more explanation than that equilibrium is typical. You can pursue the question: if a system were out of quantum equilibrium, would it typically spontaneously evolve to equilibrium, and if so how fast? The answer seems to be yes to the first, largely based on numerical simulations. Getting a handle on the relaxation times for a system is a much harder problem, as it is in stat mech. But I don’t think that stat mech is in any better shape here than Bohmian mechanics. And Bohmian mechanics has the huge advantage of there being no evidence that anything was ever out of quantum equilibrium.

  151. Tim May Says:

    Douglas 133:

    About “How the Hippies Saved Physics,” I read this a couple of years ago and greatly enjoyed it. It captured some of the Bay Area’s physics weirdness, which sometimes involved Esalen, Jack Sarfatti, Zen meditation, and (so they tell me) wild parties in San Francisco, Berkeley, and the Santa Cruz mountains. I was not involved, being in the “shut up and calculate” mode at nearby Intel Corporation. I did meet Nick Herbert at a party in the Santa Cruz Mountains and I got to know Robert Anton Wilson pretty well in his later years….I helped him with some computer issues and we had some long chats.

    But by far my biggest takeaway from this book was appreciating how an “obviously wrong” idea about superluminal transmission of information could actually provoke interesting work after QM was basically “finished” by the early 1930s. After all, this had all been settled by Einstein, Lorentz, and others. Or had it?

    Bell’s work was gaining attention, Aspect’s experiments were going on, and Wheeler was still fascinated by the whole area. Two of his students, Zurek and Wootters, went on to seriously examine the connections between entanglement and possible superluminal information transfer. The quote below, from the “Hippies” book, gives a glimpse of what was happening in the early 1980s:

    “Thus by April 1982—if not before—Zurek had collided head-on with Herbert’s FLASH design. Immediately upon his return from Italy, Zurek’s thoughts returned to the discussion he and Wootters had shared in their San Antonio hotel room the previous month. He convinced himself that linearity was the key: the linear nature of quantum theory would place the ultimate limit on superluminal signaling, by making it impossible to duplicate arbitrary quantum states. ”

    Excerpt From: Kaiser, David. “How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival.” iBooks.

    I believe it’s fair to say that the “Second quantum revolution,” with all the work on “no cloning,” teleportation of bits, ER and EPR, and so on was influenced by the careful thinking that was partly done to formally disprove some of the wacky ideas of the time.

    (This is not the only case where a flawed idea has led to useful science.)

    And, by the way, I think this touches on the opinion poll where the best students tended not to have a preferred interpretation of QM. I remember in the early 1970s many heated bull sessions about wormholes, EPR, the pilot wave interpretation, and, of course, the Everett-Wheeler-Graham interpretation, which gained a lot of attention around 1973 when the book came out. And when Larry Niven was writing science fiction stories about it (“All the Myriad Ways”).

    So a lot of us had been exposed to alternatives to the Copenhagen interpretation. Naturally, these didn’t take up any time in actual physics classes, but my advisor suggested I at least take a look at Bohm’s work. (And my GR professor was Jim Hartle, who has had his own interpretations with Gell-Mann, Hawking, and others. Certainly he never spent classroom time talking about wild speculative theories, but it doesn’t mean he wasn’t thinking about them, or even talking to some of his best students in informal settings.)

    So my hunch is that a lot of bright students have thought about these ideas and so, ironically, don’t have a strong opinion on which particular interpretation is right. And a lot of duller students may’ve only been exposed to one or two of them.

    –Tim

  152. Tim May Says:

    BTW, I should add that for the group I hung out with–some physics and math students who went on to stellar careers in academia and industry–we would’ve DEVOURED Scott’s book on quantum computing like a galactic black hole feasting.

    IMO, there’s nothing wrong with young people becoming enraptured by wild ideas, so long as they don’t distract them too much from doing some solid work.

    I remember in around 1968 being exposed to Wheeler’s work, making photocopies in the public library and at a nearby college, and then making a big photocopy of Feynman’s Nobel article/paper. I must’ve read that 10 times, gradually understanding more and more of it. (But not much.)

    Dirac had written a little popular book, “Matter and Antimatter,” and so the wild idea that anti-electrons (positrons) were electrons travelling backward in time blew my mind. In a good way.

    I think some wild ideas are important for bright minds. String theory never seemed to “do it” for me in this way, but the “ER = EPR” speculations are as exciting as things get.

    In re-reading the “Hippies” chapters tonight, before my last post, I saw that Jack Sarfatti had drawn links to this before the current wave of interest. I wonder if Lenny Susskind knew these people?

  153. fred Says:

    Scott #123

    “I don’t know whether it’s actually possible to prepare a superposition over two different mental states of a conscious being, and do an experiment for which it actually matters that the superposition is a superposition, while still maintaining the preconditions that caused the being to be conscious in the first place.”

    Your thought experiment maybe isn’t all that different from studies done on what it feels like to be a “split brain” person:

    https://en.wikipedia.org/wiki/Dual_consciousness

  154. Scott Says:

    Sandro #143: I know I said I’d retired from this thread but … how on earth does the environment making measurements give rise to a preferred foliation?

  155. Aharon Maline Says:

    Here is an issue that I’ve had with dBB for a while: In what sense can it be said that the Bohmian particle configuration “defines the real world”?

    PSI is accepted as a physical field, defined on configuration space. Ontologically, it is just as real as the particles. It evolves unitarily, exactly as described by MWI. It decoheres into branches that are sharply peaked at different points in configuration space. One of these will contain the Bohmian particle configuration, and this describes “the real world”.

    But what is less “real” about the other branches (worlds in MWI language)? They contain quantum-wavefunction descriptions of rocks with effectively classical trajectories, planets with evolving biology, and humans typing on blogs. All of those processes are (believed to be) described perfectly well as interactions within a quantum wavefunction, with decoherence where relevant. Why would such a world be bothered by its lack of Bohmian particles?

    Do we end up reverting to the hard problem of consciousness by asserting that only observers containing Bohmian particles, rather than wavefunction alone, are allowed to experience their reality?

    Or if the other worlds are experienced by their inhabitants, what effect do Bohmian particles have at all?

  156. Travis Myers Says:

    Aharon #150,

    I had exactly those same issues until very recently. The answer to your question:

    “Do we end up reverting to the hard problem of consciousness by asserting that only observers containing Bohmian particles, rather than wavefunction alone, are allowed to experience their reality?”

    is a very subtle one. The short answer is yes, but only because you actually need a more mystical form of a solution to the hard problem of consciousness in order to get observers that are made of wave function.

    The ways that Bohmian mechanics and MWI deal with the fact that we have no satisfactory solution to the mind-body problem are very different, and I will argue for why the Bohmian way of dealing with it, at least at present, is preferable (which is not to say that many worlds necessarily can’t be reformulated to deal with it in this way, just that it seems none of the proponents have done so). In general, when we specify a physical theory, we need some way of specifying how the entities postulated to exist in the theory relate to the world of our experience. We have to somehow communicate a concept of what that thing is. But we only understand the meaning of any concept via a combination of built-in machinery that we’re born with and association of that built-in machinery to things in our experience. We can then understand new concepts by combining these in various ways and relating them to experience and so on. So when we think that we have a theory which describes nature, it means that we have described entities that we can relate to our experience through concepts we understand via our experience. The experience of the world as consisting of objects in a 3D space is a very manifest experience to us. Because of this, we intuitively understand what it would mean to describe our experience of a chair for instance as being an experience of a large number of particles shaped like a chair. So the Bohmian particles, which are postulated to exist in an actual 3D space, give us a foothold by which the world of our experience can be constructed, and thus it is clear and crisp what it means for Bohmian theory to be true or false. The particles directly relate to things in our experience, in a way enabling us to meet half way on the mind-body problem without fully solving it. Bohmian mechanics makes the hard problem seem tractable.

    In MWI, there are no particles existing in a 3D space. There is only a wave function evolving in a high dimensional space. In order for this to make contact with the world of our experience, we need to postulate a completely information-based solution to the mind-body problem. Since the worlds are defined by decoherence, and since decoherence does not strictly take place in the position basis, we ultimately can only connect this to the world of our experience by saying that our consciousness is dependent on abstract information processing in the wave function written in some basis which we don’t know. The wave function only contains “rocks with effectively classical trajectories, planets with evolving biology, and humans typing on blogs” if we are able to describe these entities in abstract information terms. And for sure, we have been able to describe any object using a wave function; but our description in such terms relies on our ability to intuitively understand what it means for us to measure that object in some basis, and it’s in order to try to solve the measurement problem that we would postulate many worlds in the first place. The fundamental entities that are postulated to exist in many worlds, namely the amplitudes associated with particular bases in a wave function, are much harder to relate to the world of our experience than the particles of Bohm theory.

  157. Mateus Araújo Says:

    Maudlin #146:

    Great, we’re on the same page then!

    The key issue seems to be this “naturalness of the measure” argument. I don’t think it holds water.

    The Maxwell distribution is derived by assuming that all microstates with a given energy are equiprobable. This assumption is not justified because this measure is “natural”, but because the physics of the problem does imply that there is no preference between microstates. A mathematician who does not know the physics in question might think that a natural measure is a translation-invariant measure over the phase space, according to which any measure that is singular on the microstates with a fixed energy is highly unnatural.
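
    For reference, the distribution in question is the Maxwell velocity distribution for particles of mass \(m\) at temperature \(T\), \( f(\vec v) = (m/2\pi k_B T)^{3/2} \exp(-m|\vec v|^2/2k_B T) \), which is what the uniform (microcanonical) measure on the constant-energy surface yields for a single particle’s velocity in the large-N limit.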

    This is closely related to the issue of equilibration. Since the physics has no preference amongst the microstates, time evolution will naturally lead to the macrostates with largest number of microstates.

    The situation in Bohmian mechanics is not so clear. Why should we prefer a measure on the configuration space that makes configurations in quantum equilibrium typical over one that does not? What I’m claiming is that a proof that systems out of quantum equilibrium tend to quantum equilibrium (which is not the “quantum equilibrium hypothesis”, sorry for mixing up the nomenclature) would be an excellent reason to do that, as it would imply that the physics makes a quantum system prefer configurations in quantum equilibrium.

    You are asking why we should consider the possibility that the universe is out of quantum equilibrium, since we have only ever observed statistics compatible with quantum equilibrium. Well, we have a theory that is in principle capable of producing fantastical phenomena, but we only observe mundane ones. I think it is very natural to ask why we can’t get the good stuff.

    At this point, you have an analogy that I find very appropriate: in the Bohmian case one can take the equilibrium statistics as an experimental fact, and use it to imply that at the Big Bang quantum equilibrium was already true. Analogously, we can take the present observation that the universe is not yet in thermal equilibrium, and use it to imply that at the Big Bang the universe was in a low entropy state.

    In both cases this is a mystery: this initial state is a very particular one, why would the universe start in it? Well, it is a mystery that I’m stuck with in the case of thermodynamics, but that I can do without in the Bohmian case, since I can take the MWI instead.

  158. Mateus Araújo Says:

    gentzten #137:

    I’m sorry, I had overlooked your comment before. Indeed, I do not take seriously derivations of probability in the MWI that do not attempt to explain what probability is. This includes not only Everett’s, but also Vaidman’s, Żurek’s, and Carroll’s. Deutsch’s derivation, cryptic as it may be, is very clear that the probabilities are the weights a rational agent uses to decide between games.

    I also agree with your point about dBB: even if one is observing probabilities that go against the Born rule, in the MWI it is still possible to argue why they are wrong (although in this case one would have hardly discovered quantum theory in the first place). With dBB, though, you’re stuck. Maudlin is here arguing that the Born rule is an experimental fact, so I guess he would also say that a broken Born rule must also be accepted as an experimental fact.

  159. Tim Maudlin Says:

    Aharon Maline # 154

    This is a very common question/objection. It is especially often raised by Many Worlds adherents, for obvious reasons. And the answer is so basic to the whole problem that you need to go back to square one and think about how any physical theory comes about.

    The aim of physics is to account for the behavior of physical objects, such as we take that behavior to be. Rocks fall. Light reflects off of mirrors and is refracted in water. Tracks of various shapes form in cloud chambers. Etc. etc. The existence and nature of human consciousness is not even on the list of phenomena that physics directly aims to account for, so if you end up thinking you have to solve the mind-body problem to do physics you have probably taken a wrong turn somewhere.

    The physical behavior that we are tasked with explaining is reported largely in terms of the locations, shapes and motions of macroscopic items in a macroscopically three-dimensional space or four-dimensional space-time. This was something Bohr insisted on: experiments and their outcomes must be described in “classical terms”. In order to have locations, shapes and motions in such a space, these items must be composed out of smaller things that have locations and motions. Call whatever the smallest microscopic spatially-located things are the “local beables” of the theory.

    Examples of familiar local beables:

    Democritean atoms
    Maxwellian electromagnetic fields
    the continua in continuum mechanics (what Aristotle called homoiomerous substances)
    point particles

    Examples of more unfamiliar local beables:

    point events (generally called “flashes” or a “flash ontology”: Bell suggested this as the local beables for the GRW theory)
    microscopic one-dimensional objects (“strings”)
    microscopic two-dimensional objects (“2-branes”)

    What sort of fundamental local beables should a theory postulate? Take your pick! Since they are microscopic, maybe even at Planck scale, we have no direct access to them. Collectively, they compose macroscopic objects, and the locations, shapes and motions of the macroscopic objects are just the collective locations, shapes and motions of some tremendous number of fundamental local beables. Any of those listed above, and others beside, could do the job.

    Similar remarks apply to the space-time in which the local beables are located. It should be macroscopically four-dimensional, but microscopically it could be continuous or discrete, it could have more dimensions compactified into a Calabi-Yau space, etc. Go wild. If your theory needs it, go for it.

    In fact, until the development of quantum theory, all of the ontology of physics was local ontology. This was not lost on Einstein, who also noted that the physical laws themselves were becoming more local. Field equations as differential equations, for example.

    The greatest, most unexpected and hard-to-assimilate innovation of quantum theory is the postulation of a *non-local* beable. This is the thing that is represented by the (mathematical) wavefunction. Let’s call this non-local item the “quantum state”. To paraphrase Bell, you can’t indicate a location in space-time and ask: what is the magnitude or the phase or anything else of the quantum state *here*. It just does not exist locally in space-time in that way. It is because of this that the quantum state can entangle, leading to all the interesting behavior based on that.

    So why posit a quantum state at all? For the same reason we posit any other novel piece of physical ontology: to help explain the motions and locations of the local beables, and thereby of the familiar macroscopic objects.

    The cleanest example of this general theoretical architecture we have is Bohmian mechanics. The local beables are point particles, and the way the quantum state determines the motions of the point particles is given by the guidance equation. The quantum state (which is, fundamentally, the universal quantum state, described by the universal wavefunction) evolves deterministically and linearly by Schrodinger evolution.

    In Bohmian Mechanics the quantum state is as real as real can be. That’s why it is a non-local *beable*. But cats and people and stars and rocks cannot be “made out of” quantum state, exactly because the quantum state is not a local thing and they are. If you want to use some mathematical criterion to “divide” or “decompose” the quantum state into “branches”, then all of these “branches” are perfectly real since the quantum state is. But at any given moment, the only “part” of the quantum state that actually determines the motions of the particles is the “part” that is associated with the actual universal configuration of the particles. So why believe there are any other parts at all? Because of the interference effects, starting with 2-slit interference, and the entanglement effects, etc.

    This theory suffers no “measurement problem”, has no “fuzzy ontology” or “fuzzy laws”, can account for Born statistics, implements Bell non-locality via the non-locality of the quantum state, and is perfectly clear and precise. It cannot handle the phenomena of “particle creation and annihilation” in an obvious way, nor be easily adapted to a Relativistic space-time without use of a preferred foliation. So there is a lot to work on.

    Now: from this perspective, if you ask “what effect do the Bohmian particles have at all?”, you see what a strange question it is. Take away the particles and you take away the rocks and cats and tables and chairs and stars and everything whose behavior we were trying to explain in the first place! And if you throw away the local beables, then what use is the space-time in which they are localized? Throw it away too! Now all you have is the quantum state, assuming you can even have an N-particle quantum state without having N particles (which is doubtful). And you somehow have to reconstruct the world we thought we lived in. Good luck.

    To more directly address your question, it makes perfect sense until you write that the “branches” of the quantum state “contain quantum-wavefunction descriptions of rocks with effectively classical trajectories, ..etc.” I don’t know what it is for a physical item to contain a “description”, but even if we can make sense of that, we don’t want descriptions of rocks and planets, etc.; we want rocks and planets. There are descriptions of unicorns in the world but no unicorns. So as soon as you start speaking of the wavefunction as *describing* things rather than the quantum state as *being* things, we are in conceptual trouble. And how a non-local item—indeed how a world with no space-time at all!—can compose or “contain” tables and cats is a question so hard that it is difficult to know where to begin.

  160. Mario Hubert Says:

    Aharon #150
    Travis #151

    We do not need to touch the hard problem of consciousness in order to justify Bohmian particles. We presuppose that we have reliable experience of the outside world, namely, in the form of tables and chairs etc. in three dimensions. Then the puzzle is, “What do particles do any better than just the wave-function in explaining the phenomena we perceive?”

    Let me first outline the many-worlds strategy. In a first step, we can be realist with respect to the wave-function in configuration space. Then every decohered branch would correspond to a possible world. The problem is how to connect this mathematical description with the actual phenomena in three-dimensional space. David Wallace used to argue that the actual world is a pattern of the wave-function. This kind of functional explanation of the phenomena we observe, however, begs the question of how, in detail, matter can be composed of the wave-function if ordinary objects are patterns of the wave-function. Or in other words, it is totally unclear how patterns in the wave-function in configuration space can account for the pattern of objects in three-dimensional space. It seems that talking about patterns of the wave-function has just an operational meaning: we know how to predict the behavior of objects by means of the wave-function. But there is an ontological gap in the relation between the wave-function and ordinary tables and chairs.

    Therefore, Wallace updated his idea, which he named spacetime state realism. Here, he explicitly takes three-dimensional space as the fundamental space and the wave-function on configuration space as a mathematical description of the goings-on in three dimensions. The idea is to define local operators on three-dimensional space (like projection operators on local regions of spacetime). This amounts to having the many worlds in the very same three-dimensional space. The problem is, however, that we need to rely on operators to give us the fundamental ontology of matter. So again the relation between the wave-function and the behavior of ordinary objects is established in operational terms.

    What do Bohmian particles do any better? They don’t define the world; they don’t make up space itself, for example. Rather, they establish a clean and precise connection between a mathematical description in terms of a wave-function in configuration space and the behavior of baseballs, tables, and chairs. This is by the way the great merit of local beables in general, and Tim Maudlin pointed this out in an earlier post. The objects we observe are made of particles (or other local beables), and the behavior of particles determines the behavior of macroscopic objects. No talk of patterns, no talk of operators. Just simple composition from the microscopic scale to the macroscopic scale in the very same physical space.

    Note that the Bohmian particles answer a question that is much more basic than the hard problem of consciousness. The hard problem of consciousness is still a mystery in Bohmian mechanics, as it is in all kinds of physical theories we now have at hand.

  162. Tim Maudlin Says:

    Mateus # 156

    OK, so we have cleared up another communication problem: it is not the quantum equilibrium hypothesis that you were thinking about but what we can call the “quantum equilibration hypothesis”. Some people (e.g. Valentini) take mathematical results about equilibration to be very important and others (e.g. Dürr, Goldstein and Zanghí) take it to be a well-posed mathematical problem of little to no real physical significance. If you think that the universe has always been in quantum equilibrium, then detailed calculations of whether it would equilibrate and if so how fast are of only theoretical interest.

    They are also, I think, mathematically very, very difficult problems. Rather like the question of whether the dynamics of a system is ergodic: that is easy to ask and really, really hard to answer. That’s why one uses numerical simulations to attack it.

    Now: your description of the status of classical stat mech is conceptually problematic. You write as if in the classical case you do something that is very natural, if not unavoidable: you treat all the states as “equiprobable”. I guess that is literally true, but quite useless: the probability of any particular state is uniformly zero. If there were only N (finitely many) states, then we could usefully treat them as equiprobable, each with probability 1/N, but as soon as we have infinitely many states we have to talk about probability densities and probability measures. “Equiprobable” only has any content relative to such a measure. In classical stat mech we use the natural measure, which is sometimes misleadingly called “Lebesgue measure”. It is a measure that derives in an obvious way from the measure on space itself. And, given Liouville’s theorem, it is an equivariant measure under Hamiltonian evolution of a system.

    So to even raise the right questions in the Bohmian case (Are Born statistics typical?) we need a measure, in this case a measure over configuration space rather than phase space. The analog of “Lebesgue” measure, the one derived just from the spatial metric, is not equivariant in Bohmian mechanics, so we can’t use that. One measure that is equivariant is the |PSI|^2 measure. If we use that to define the notion of typicality, then we can ask what sort of statistics are typical for experiments done in Bohmian mechanics, and the answer is that Born statistics are typical. And the general methodological rule is that if some behavior is shown to be typical, then nothing more needs to be done to explain it when it happens. Observing atypical behavior would suggest or even demand changing the theory, but with typical behavior you are fine.
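
    To put the equivariance claim in symbols (a single-particle sketch; the N-particle case is analogous): if the configuration \(Q(t)\) follows the guidance equation \( dQ/dt = (\hbar/m)\,\mathrm{Im}(\nabla\psi/\psi)|_{Q(t)} \), then the Schrodinger equation gives the continuity equation \( \partial_t|\psi|^2 + \nabla\cdot\big(|\psi|^2\,(\hbar/m)\,\mathrm{Im}(\nabla\psi/\psi)\big) = 0 \), so an ensemble of configurations distributed as \(|\psi|^2\) at one time stays distributed as \(|\psi|^2\) at all later times. That is the equivariance in question, and it is exactly what the “Lebesgue” measure on configuration space lacks.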

    As to why we can’t “get the good stuff”, here is an analogy. Classical mechanics, like quantum mechanics, is time reversible. So if a glass falling off the table and smashing to bits is physically possible (which it is), these theories imply that thousands of tiny shards of glass lying on the floor suddenly and spontaneously assembling themselves into a glass and jumping up on the table is physically possible. Well that would be a sight! And it is easy enough to arrange for glasses to fall off tables. So why can’t I get the good time-reversed stuff?

    The answer, of course, is usually stated as: theoretically it is possible but it is practically impossible, given how precisely the relevant initial state would have to be prepared. You will never, ever see that happen. Just accept that. Now I’m not really sure what “theoretically it is possible” means: it is not like someone has the designs for some complicated machine that would actually accomplish that. But let’s accept that in some sense it is theoretically possible but practically impossible in classical mechanics.

    Now in Bohmian mechanics one can prove that, if you are in quantum equilibrium, certain things are not even theoretically possible. There are things that the physics itself, the dynamics, prevents you from doing, like preparing a beam in a way that violates the uncertainty principle. Preparing a system out of quantum equilibrium is one of the things you can’t do in quantum equilibrium. OK, so suck it up and accept it.

    It is true that there are nifty things you could do if you could prepare states out of quantum equilibrium. But equally, there are nifty things you could do if you could time-reverse systems: you could, for example, resurrect the dead. And classical physics says it is theoretically possible but practically impossible. So just take the same attitude here: that’s how the physics falls out, so just accept it.

  163. Mario Hubert Says:

    Mateus #152

    The microcanonical measure is derived from the Liouville measure, which is dynamically distinguished as a stationary measure with respect to the classical dynamics. In this sense, the Liouville measure is natural and justified by the dynamics. The same goes, as Tim Maudlin said, for the psi-squared measure in Bohmian mechanics, which is equivariant with respect to the Bohmian dynamics.

    As your example with the mathematician shows, finding or defining the correct measure is not a matter of pure mathematics. You need to do physics!

    The case of equilibration in classical mechanics is not just about all microstates being equiprobable. The dynamics and the microcanonical measure tell us that for almost all non-equilibrium initial conditions the system will reach equilibrium. It could have been the case that the microcanonical measure as we are used to defining it is the wrong measure (see Bohmian mechanics).

    Now to your question, “Why should we prefer a measure on the configuration space that makes configurations in quantum equilibrium typical over one that does not?” Because we only observe systems in quantum equilibrium, and because the standard psi-squared measure is distinguished by the dynamics as being equivariant. It would be utterly odd to invoke a measure that gives quantum equilibrium a small measure, although we see quantum equilibrium all over the place.

    Why do you refer to the non-equilibrium phenomena as “fantastical” and the equilibrium phenomena as “mundane”? We just don’t have systems in non-equilibrium. Therefore, Valentini proposes to search on cosmological levels for non-equilibrium systems that may have survived the Big Bang. If they exist they would be extremely difficult to detect. But since we are already in quantum equilibrium, we don’t need to speculate that the universe started from a non-equilibrium state. The equilibrium state would be a typical state, and therefore it is not special, and therefore it doesn’t need justification. Remember the picture by Roger Penrose that Tim Maudlin referred to. God could have randomly chosen an initial state and would have picked out an equilibrium state. This is the most natural state a universe could evolve from.

    We have a mystery only in classical mechanics because we are right now in thermal non-equilibrium. Therefore, we need to assume the past hypothesis. The mystery is, “Why did the universe start from this special thermal macrostate?” In Penrose’s picture, God had to carefully choose the right initial macrostate.

  164. Aharon Maline Says:

    Travis #151

    Thank you for your response, but I don’t think it addresses the question. The statement that “we observe objects in 3D space because the Bohmian particles have well-defined 3D positions” is simply incorrect!

    We observe a chair in 3D space because of interactions between the wavefunctions of the electrons in the chair and the photons in the room, further interactions between those photons and the electron wavefunctions in our retinas, and electrical interactions between ion wavefunctions in our neurons. All of these interactions are described by quantum theory as unitary wavefunction evolution, plus decoherence. At no point do the Bohmian particles play any active role in the process.

    Furthermore, remember that “wavefunction-only” worlds are being created by every decoherence event, as we speak. Do you really want to suggest that in all such worlds, everyone instantaneously loses their consciousness, but continues as some kind of zombie?

    In short: I don’t see how dBB is “MWI without the many worlds”. It seems to be “full MWI, with whatever problems that implies, and with additional particles that have no effect on anything else”.

  165. Tim Maudlin Says:

    Aharon # 162

    You have to take on board that the quantum state is not a local item, so it makes no sense that “We observe a chair in 3D space because of interactions between the wavefunctions of the electrons in the chair and the photons in the room, further interactions between those photons and the electron wavefunctions in our retinas, and electrical interactions between ion wavefunctions in our neurons”

    There are no “electron wavefunctions in our retinas”: that is exactly to ascribe a location to a non-local item. Ultimately, the only wave function (or quantum state) there is is the universal quantum state. It determines how the configuration of all the particles in the universe evolves. And so how one electron moves depends critically on which global configuration it is part of, and that in turn depends on where all of the other particles are. Hence this sort of dynamics creates strong counterfactual-supporting correlations between particle behaviors: had this particle been in a slightly different location, that particle (arbitrarily far away) would have moved differently. That is how the theory violates Bell’s inequality. The motions of each particle depend directly, in this sense, on the actual positions of all the other particles, a dependency mediated by the wavefunction. But an individual particle is not “pushed around” by the wave function: how the particle moves is not determined just by the wave function but by the wave function plus the positions of all the other particles. In this sense, the particles definitely interact.

  166. Mateus Araújo Says:

    Maudlin #157, Mario #158:

    So your argument boils down to the distribution \( \rho = |\psi|^2 \) being preferred because it is equivariant under the quantum dynamics. How strong an argument is it? Are there any other equivariant distributions? Or to put it another way, could you find out that this is the correct distribution if you didn’t already know the answer?

  167. Travis Myers Says:

    Aharon #158

    You say: “We observe a chair in 3D space because of interactions between the wavefunctions of the electrons in the chair and the photons in the room, further interactions between those photons and the electron wavefunctions in our retinas, and electrical interactions between ion wavefunctions in our neurons.”

    What does it mean for electron wavefunctions to be in the chair and for electron wavefunctions to be in our retinas? Under MWI as currently formulated, the only thing that exists is the universal wavefunction in a high dimensional abstract space. Individual electrons don’t fundamentally exist in this picture; they only effectively exist as patterns in the wavefunction. The fundamental entities are amplitudes written in some basis. Certain patterns in these amplitudes effectively behave like electrons in a chair and electrons in retinas and ions in neurons. Such a picture leads us to the fully information-processing account of consciousness that I was talking about: ultimately our consciousness arises from abstract patterns in the amplitudes. If this seems ok to you, then all of your arguments are valid. But let me argue for why this shouldn’t seem ok.

    The mind-body problem will always remain a mystery so long as we insist on making a sharp distinction between the physical and the mental: in other words, dualism is incoherent. Ultimately, there is just reality. So we must recognize that the entities which we refer to in fundamental theories ultimately get their meaning via reference to things in our experience. We have never experienced an electron; we have experienced chairs; our understanding of what an electron could possibly be derives from our experience of chairs and other phenomena. So the more the fundamental entities make contact with the world of our experience, the more we are confident that we are just directly referring to reality rather than erecting an artificial barrier between the physical and the mental. Amplitudes as fundamental entities seem very far removed from directly referring to reality.

    As with all things related to consciousness, language fails us. All words that we use are just analogies to things in our experience, and analogies tend to break down when talking about consciousness. So the above paragraph isn’t completely satisfactory to me, but I don’t know how to put it into better words. I hope I’ve conveyed the idea well enough.

    I recognize that this is a very non-standard argument, so take it as you will.

  168. Scott Says:

    I know I said I was done commenting here, but maybe one last thing. 🙂

    Maybe my most fundamental belief about the interpretation of QM debate, is that it’s tied up with many of the other thorniest philosophical debates, often in subtle and non-obvious ways: the mind/body problem, transtemporal identity, the meaning of probability, and more.

    This means, among other things, that it’s deeply unsurprising that the interpretation debate would still rage on, with no obvious consensus in sight. Yes, interpretations can become more or less popular over time, just like any other philosophical positions can—in #75, I tried to give some possible reasons why MWI has become much more popular over the last few decades. And yes, new interpretations can be invented just like new positions can be invented in any philosophical debate.

    But when smart people have already spent generations arguing about some philosophical question and have sorted themselves into camps, it’s extremely rare that further philosophical argument alone ever ends the stalemate, without some infusion of “fresh ammunition” (e.g., a new discovery from science).

    The high “philosophical loading” of the interpretation debate also means something else for me: that I’m deeply suspicious of those for whom the right interpretation is 100% obvious. I.e., those who are never once observed to say: yeah, it’s true that interpretation X does better at accounting for Y, but keep in mind, Z does better at accounting for W. Those for whom every sub-sub-dispute always and without exception gets resolved in favor of a single preferred interpretation, and even that interpretation’s weakest points are actually strengths if you look at them in the right way, while the other interpretations’ strengths are actually weaknesses—meaning, by implication, that most of the physics community has refused to embrace the One Best Interpretation out of sheer ignorance or stupidity or stubbornness.

    To be clear, I don’t mean I’m “suspicious” of such people in the sense of suspecting dishonesty: I have no doubt that their beliefs are 100% genuine. I only mean that I can’t read what they write about QM in my favorite way to read exploratory essays: the way where I temporarily relinquish my own views, “mind-meld” with whomever I’m reading, and follow their train of thought as though it were my own train of thought. I can’t do that if I know that, come what may, the person I’m reading is always going to be yanking me in a single preferred direction.

    While these reflections might seem airy-fairy, I think they’re nicely illustrated by recent comments of Aharon Maline, Tim Maudlin, Travis Myers, and Mario Hubert. Aharon #150 did a fantastic job of articulating the standard MWI view of dBB: namely, “what about all those empty branches? given that they’re there in the equations, who gave you the authority to declare them to be ghost towns, where no one really experiences anything?”

    In turn, Tim, Travis, and Mario nicely articulated the Bohmian response: “who said anything about ‘experience’? The job of physics isn’t to say anything directly about ‘experience,’ but just to solve an easier problem, of reconstructing a more-or-less familiar world of objects and events laid out in 3-dimensional space from whatever might be the more fundamental mathematical description. Once that 3-dimensional world is safely in place, philosophy can then take over in debating the relation between that world and subjective experience.”

    The trouble with that response was made explicit in Tim’s #154. Namely, there are extremely popular views about the mind/body problem according to which we don’t have the freedom to just arbitrarily discard pieces of our mathematical description of the physical world as “ghost towns,” bereft of experience, if we don’t find them useful. There’s the view that says: of course any appropriate “simulation” of a conscious process that you can find, anywhere in the physical world, is going to bring about consciousness. Indeed, the word “simulation” is a misnomer—it’s like talking about a “simulation of multiplication,” which just is multiplication. On this view, we don’t have the right to declare that the “empty branches” are bereft of conscious experience, because they’re missing Bohmian particles or for any other reason. If our best theory of physics includes those branches, and the branches include full information-theoretic ‘simulations’ of slight variants of our world, then there’s consciousness in the branches, full stop.

    My purpose here is not to defend this view of consciousness as correct: I don’t know if it’s correct! Certainly it has immense difficulties (though as far as I can see, every alternative view does as well…). I’m simply trying to point out that, if someone believes MWI for the above reason, then there’s going to be no possible way to bring them to any other interpretation without first changing how they think about the mind/body problem. Therefore the two questions can’t be decoupled.

    Tim, for all I know, might cheerfully agree with all this, adding only that the views about consciousness that are preconditions for favoring dBB over MWI, are just incontrovertibly correct, and anyone who doesn’t see that is being stubborn or irrational. 😉 Fine. But I’m satisfied if I’ve made my point about why the two questions can’t be cleanly separated.

    For my part, I confess that the “intermediate step” insisted upon by the Bohmians—i.e., the step of reconstructing a world of “beables,” such as particles that have definite positions in 3-dimensional space—is not one that I find essential to anything else I care about. I’d be fine with a view that “merely” said:

    (i) here’s an objective, self-consistent fundamental mathematical description of the world, and

    (ii) here’s how to use that description to account for the experiences of conscious observers,

    without the intermediate step of a world of “beables.” (Of course, such a world could always be approximately reconstructed from the fundamental mathematical description, wherever and to whatever extent classical physics approximately described our experience. But I don’t need more than that.)

    On the other hand, I can live with other people demanding the intermediate step, and I agree with the conditional claim that for them—i.e., given what they want out of a physical theory—dBB and Bell’s local beables are extremely natural possibilities to explore.

    As for me, knowing that I’d be satisfied with merely (i) and (ii) still leaves me with plenty of confusions about QM. Is consciousness of such a nature that, if I believe in linear, unmodified QM, then I must ascribe consciousness to my informational doppelgängers in the other branches?

    But as explained above, I shouldn’t expect to be unconfused about such matters, until and unless I’m also unconfused about the mind/body problem, and others among the most vertiginous questions that human beings have ever asked!

  169. Mateus Araújo Says:

    Scott #163:

    I don’t think your neutral stance on the interpretations makes much sense. If you agree that there is a real world out there, there must be an interpretation that best represents its ontology.

    I hope you’ll agree that it is nonsense to change your words

    “yeah, it’s true that interpretation X does better at accounting for Y, but keep in mind, Z does better at accounting for W.”

    to

    “yeah, it’s true that assuming P != NP does better at accounting for the difficulty of the travelling salesman problem, but keep in mind, P = NP does better at accounting for the AKS primality test.”

  170. DavidC Says:

    Scott #79: Thanks, I think that makes sense to me.

  171. Sandro Says:

    Scott #149:

    how on earth does the environment making measurements give rise to a preferred foliation?

    Hmm, I thought I had been clear, but let me summarize point by point so there are no misunderstandings:

    1. As per the link I initially posted, the wave function governing an N-particle system covariantly entails a preferred foliation.
    2. In deterministic QM theories, measurement contextuality entails that the N-particle system’s wave function is entangled with the measurement apparatus.
    3. Therefore, the measurement apparatus alters the preferred foliation of the N-particle system, compared to the preferred foliation entailed by the stand-alone system.
    4. Which preferred foliation we use for calculations doesn’t ultimately matter, since they’re all unobservable, but it will obviously differ from the unknowable real foliation. Nevertheless, a real, non-arbitrary preferred foliation would exist.

    If this doesn’t seem right to you, at which step did I fumble?

  172. Scott Says:

    Sandro #165: I’m already unsure whether I’m on board with step 1—I’d have to study the paper more carefully. It’s a neat observation that in QFT, looking at (e.g.) the 4-momentum vector can sometimes give you a preferred foliation—but what if you’re in the vacuum with just a few particles here and there? Then the 4-momentum will vanish almost everywhere, so what do you do then?

  173. Mario Hubert Says:

    Mateus #161

    One must distinguish two things: first, the quantum equilibrium hypothesis $\rho=|\psi|^2$, which tells us about the empirical distribution of the Bohmian particles guided by the effective wave-function $\psi$. Second, the measure of typicality $|\Psi|^2$, where $\Psi$ is the universal wave-function. This measure is equivariant with respect to the Bohmian dynamics. And with respect to this measure the typical empirical distribution of particles is $\rho=|\psi|^2$, but only if we can assign the subsystem the effective wave-function $\psi$.

    Of course, if $|\Psi|^2$ is equivariant, then $|\psi|^2$ is equivariant too. But $|\psi|^2$ is empirically justified, while $|\Psi|^2$ is theoretically justified.

    Shelly Goldstein and Ward Struyve showed that the empirical distribution $|\psi|^2$ is the unique equivariant distribution for the Bohmian dynamics.

    What about the uniqueness of the typicality measure $|\Psi|^2$? Every other measure that is absolutely continuous with respect to $|\Psi|^2$ will do the job of giving the right sets high measure and the wrong sets small measure. Likewise, the microcanonical measure is unique up to an equivalence class of absolutely continuous measures, which are not uniform. But they agree on which sets are typical and which are atypical.

  174. Tim Maudlin Says:

    Scott #166

    Just when you think you’re out, they draw you back in….

    This is a very delicate moment, so I will be brief rather than my usual rambling on. In bullet points.

    1) I know this is hard but you need to at least consider the possibility that your “most fundamental belief” about debates about QM is not correct.

    2) To evaluate this possibility, you should put aside appeals to history, particularly that the debate has raged on for so long. I think I can explain why it has, but that is a sociological/psychological matter, not a matter of physics.

    3) Please try to mind meld! Don’t fight your usual ability to.

    4) Consider the possibility that having to bring the mind/body problem—which is an intractable problem!—into the debate is a sign of sickness rather than depth in this debate.

    5) Consider the fact that the Bohmian has a perfectly coherent answer to your worry. Consciousness supervenes somehow on the physical state of a system. The physical state of a Bohmian brain has *both* a quantum state *and* the detailed precise positions and motions of the particles in the brain. The “empty branches”, if you insist on reifying them into “things”, just don’t have anything with that structure. So there is no plausible supervenience thesis about consciousness that is violated here.

    6) Consider very, very carefully your own condition (ii) for the adequacy of a proposed physical theory:

    (ii) here’s how to use that description to account for the experiences of conscious observers,

    First point: no physical theory in history has ever demanded or provided this, if “account for” means anything in the neighborhood of “explain” or even “predict”. Try to use quantum mechanics to “account for” the experiences of a bat using echolocation, or of a lobster being put in a vat of boiling water, or of a shrimp, or of an eight-week-old fetus.

    Second point: Even for a normal adult human, try to use quantum mechanics to “account for” her experience *without going through the “intermediate step” of first reconstructing the behavior of macroscopic objects in a 3-dimensional world!* There is all the difference in the world between using quantum mechanics to derive the claim that the pointer will swing to the right and then concluding that the observer will have experiences as of a pointer swinging to the right (if they are not drunk or distracted and there is enough light, etc. etc.) and using quantum mechanics to derive the character of the experience *directly*.

    I claim that no one ever has or ever will use quantum mechanics in the second way.

    But then the only way to fulfill (ii) is the first way: via the reconstruction of a 3-D world which, for all you care, may not even contain a conscious observer, e.g. a world where a pointer swings to the right. You have not avoided that step: you are tacitly doing it and relying on it. Any theory with local beables in a macroscopically 4-D space-time (including GRW with Bell’s flash ontology) can do this step honestly and cleanly and in a principled way, and then use the first (hand-wavy) way to make predictions about experience. Any theory with no local beables can’t. That makes all the difference in the world.

  175. Scott Says:

    Mateus #163:

      If you agree that there is a real world out there, there must be an interpretation that best represents its ontology.

    Disagree in the strongest possible terms.

    We know that P=?NP has a definite answer, even if we’ll never know the answer, because we can state truth-conditions using concepts that are 100% clear: P=NP iff there’s a polynomial-time algorithm for 3SAT, and such an algorithm is ultimately just a positive integer n such that Φ(n,m) for all positive integers m, where Φ is a certain computable predicate. If there’s no definite truth in cases like that, then there’s no definite truth about anything.

    Even in pure math, though, we can contrast that with (say) the Axiom of Choice or the Continuum Hypothesis, which don’t have truth-conditions that are 100% clear by my lights, and which might indeed be more useful to assume true in some situations and more useful to assume false in others, with nothing more to it than that.

    And in the QM interpretation debate, pretty much none of the concepts involved meet the standard of clarity where I’m sure that there must be some truth of the matter. Do other “branches” of the wavefunction represent “worlds” that “really exist” and contain “conscious beings”? Do Bohmian trajectories “really exist”? Is our branch “more real” than other branches? You can’t be serious!

    No one would mistake me for a postmodern relativist. I’m certain that there’s a real world. I’m even certain, for practical purposes, that every scientific theory that can be confirmed by experiment tells us something true and important about the real world (at a bare minimum, that the real world is such that we can rely on that theory’s predictions in its domain of validity). But the basic concepts that “carve reality at its joints” are prime candidates for what we could be radically mistaken about. It’s conceivable to me that no existing interpretation of QM is right, or that multiple ones are right depending on questions of definition that no one has even figured out enough to ask, or that the whole question is a wrong one (like “what’s to the left of the universe?”), or that there is a truth of the matter but it’s radically unknowable, or that the truth is knowable but not by humans. Speaking as a realist, I don’t see anything whatsoever that gives you a guarantee here.

  176. Tim Maudlin Says:

    Scott # 170

    To answer your question about the few particles in a near vacuum: if all you want from the theory is what you claimed you wanted, I can give it to you in this case: in such a situation no one will experience anything. If that sounds like a true but nonetheless inadequate observation, then acknowledge that you want more from a theory than that it satisfy your condition (ii).

  177. Scott Says:

    Tim #168: Come on. I was trying to clarify a specific point about the Dürr et al. paper, not proposing a physical theory.

    Make it a few particles in a near-vacuum, plus an astronaut in a spacesuit—or better yet, two astronauts moving inertially relative to each other. Again I ask: how do we use QFT to define a preferred foliation over the whole spacetime?

  178. Mateus Araújo Says:

    Scott #168:

    Well, I disagree on mild and polite terms.

    I think it is a matter of fact whether quantum states are objective or subjective. I think it is a matter of fact whether the Bohmian positions are real or not. I think it is a matter of fact whether many worlds exist or a single one.

    I wouldn’t be really interested in doing science if I thought the answer to these questions were only a matter of taste.

    Take the question of whether the luminiferous æther exists. It is clearly not as sharp as the P ?= NP question, but I hope you’ll agree that unlike the Continuum Hypothesis, it has a definite answer, given some general philosophical principles.

    I believe the same is true for the interpretations. Given general philosophical principles, like realism, physicalism, reductionism, Occam’s razor, we ought to be able to come up with a unique answer.

    Of course, it might be true that reality is too complicated for our feeble human minds to grasp. In that case I’m happy to forgo the full picture, and only have answers to some of the yes/no questions given above, or to have something like the best approximation to reality that we can understand and fit into a 400-page book.

  179. Tim Maudlin Says:

    Mateus # 164

    I think that your question contains an absolutely essential error of ambiguity. Please keep careful track of the little psi and the capital PSI. (I don’t know how to get the greek characters directly here, but maybe this will make it easier anyway). In the quantum equilibrium hypothesis, rho = |psi|^2, the little psi is the effective (or conditional) wavefunction of a subsystem, and the rho is a (coarse grained) density for the *actual distribution* of configurations in a large multitude of such subsystems that are all in the same universe. The wavefunction for the whole universe is PSI, and for the whole universe there is only one configuration, the one true universal configuration. So PSI is a complex function over the universal configuration space and the corresponding universal rho is *a single point* in that configuration space! So the equation RHO = |PSI|^2 cannot possibly be true or approximately true, unless the universal wavefunction is essentially a position eigenstate! But we don’t care about that at all. We care about rho = |psi|^2.

    Now: what is equivariant is not |psi|^2 but |PSI|^2. It is therefore |PSI|^2 that we use as our “measure of typicality”, that is, as how to make clear what it means to say that “overwhelmingly most initial conditions yield Born statistics for subsystems”. Note this very, very well: at a particular moment t, we would get exactly the same result even if we used some other “measure of typicality”, as long as it is not wildly different. For example, if at time t we use |PSI|^4 for the measure of typicality, we still get that typically rho = |psi|^2. So it is not, as Detlef says, “garbage in/garbage out”: that is, we are not getting rho = |psi|^2 as typical just because we are using |PSI|^2 as the measure of typicality. It is tempting to think that, but it just isn’t so. This is a really, really important conceptual point! Maybe it is still not clear, but try to keep careful track of psi vs. PSI. (Anyone who used to read MAD magazine will appreciate that!)

    So: are there equivariant measures other than |PSI|^2? No idea. I would be a little surprised if generically there are. And then: even if there are, is rho = |psi|^2 still typical using this other measure as a measure of typicality? Obviously I don’t know that either, since I have no examples. But it certainly might still be typical. Indeed, for all I know rho = |psi|^2 might be typical at a given moment if I use the standard Lebesgue measure over the universal configuration space at t as the measure of typicality!

  180. Daniel Says:

    Scott #169

    A simple comment, whose relevance to the discussion is not 100% clear to me at this point but which I suspect might be non-negligible (I am just a physicist and new to these debates):

    It seems that a big part of the discussion is framed in a context where space-time is taken as suitably described in classical terms: foliations by Cauchy hypersurfaces, etc.

    Moreover there seems to be a notion that despite talking about QFT (by which I assume people refer to Quantum Field Theory in general curved space-times), one knows what it means for there to be a definite number of particles, and that things like a “vacuum” can be readily identified.

  181. Daniel Says:

    Scott # 169
    Sorry I now see that my message was a bit unclear.

    The point is that I suspect that clarifications about those two assumptions (both of them have implications for the kind of theoretical physical treatment one has in mind) might be required in order to sharpen the debate.

  182. Tim Maudlin Says:

    Scott # 174

    In Minkowski space-time, the preferred foliation for the case you describe will be the one given by the Lorentz frame associated with the center of mass, I believe.

  183. Scott Says:

    Tim #176: In that case, a technical question. What if the spacetime is curved rather than flat? Won’t that prevent you from finding a canonical foliation associated with the center of mass? Or is the idea that, what you lose in having a unique Lorentz frame, you gain in being able to exploit the curvature tensor all across space to define a canonical foliation in some other way?

  184. Scott Says:

    Tim #168: I agree that in practice, if you were going to try to use QM to calculate someone’s sense perceptions, you’d probably do it by first reconstructing a quasi-classical world. But:

    (1) The main issue here is just that the observer or measuring device be treated classically, or at least that we know how to recognize versions of the observer and/or the measuring device in a giant wavefunction. If you have that, then everything else is “merely” a matter of computing resources: using classical approximations will save you time, but in principle you could always just spend exponential time to do the simulation, or else do the simulation on a quantum computer. And it would be ironic indeed if you were the one trying to convince me that polynomial vs. exponential time is the decisive issue here…

    (2) More to the point, even if we grant that the simulation should work by first reconstructing a quasiclassical world, I see no reason whatsoever why that would have to involve beables in Bell’s or dBB’s sense—or for that matter, why it would even help for the simulation to include beables in their sense (with their own dynamics and so forth). Why not just use decoherence theory, and keep track of which wavepackets (conditional on being in the branch we care about) are so well localized that they can be regarded as classical?

  185. Tim Maudlin Says:

    Mateus # 176

    I am in 100% agreement, save for one point. There are clearly sharp enough disagreements among the various theories (I will avoid the term “interpretations” for reasons given above) that at most one can be true or approximately true. So we want to clearly map out what the various possibilities are, how they differ, strong points and weak points etc. But whether any uncontroversial property will lead to one being universally accepted as the most rational one to believe (as Relativity was generally accepted as more rational to believe than an ether theory, at least until Bell!) is an open question. It is possible that we will end up with several competing plausible and conceptually clear theories with no means to decide between them. Epistemic bad luck: too bad for us. If I believed in a God, I would accuse God of being, if not a deceiver, at least a rotten egg.

    But nobody made the universe for our benefit and with us in mind. It is almost a miracle that we can reliably discover *anything* about the physical world, much less *everything* about it. Evolutionary constraints explain why we have to have highly reliable mechanisms that inform us about the location and character of nearby macroscopic objects: they might want to eat us, or we might want to eat them or have sex with them, so we better know about them! But we might have expected that those experiential and cognitive mechanisms would just be completely inadequate for discovering the nature of the world at microscopic or cosmological scales. But so far so good! We have been unbelievably successful at coming up with a fantastic predictive mechanism for microscopics, and cosmology has made tremendous strides as well. This is an amazing fact, worthy of reflection. So I think there are strong grounds to believe that we have the cognitive ability to both frame and understand the true physical theory of the world. In fact, I think that as we get closer to the truth, once we have found the right mathematical language, the truer theories will get progressively simpler rather than more complex. But even so, there is no guarantee that we will be able to decide on either empirical or super-empirical (simplicity, elegance, etc.) grounds among various different competing theories in the end.

    If not, as I say, tough luck for us. At least we can have fun figuring out *one* coherent empirically adequate complete physics. If we manage to come up with more than one, and can’t find a way to decide between them, then we will just have to suck it up. But we are a very long way from that situation now!

  186. Tim Maudlin Says:

    Scott # 168

    I’m afraid that is above my pay grade, but I will see Shelly and Nino tomorrow and report back. I’m actually more inclined to just throw on a preferred foliation—even in a vacuum where symmetry considerations will defeat any scheme like this—and be done with it. Clearly things get much trickier in a curved space-time generally, and if the space-time is in a superposition of different curvatures, God knows how to even begin! I’ll see what I can find out.

  187. Scott Says:

    Mateus #172:

      Take the question of whether the luminiferous æther exists. It is clearly not as sharp as the P ?= NP question, but I hope you’ll agree that unlike the Continuum Hypothesis, it has a definite answer, given some general philosophical principles.

    Funny you should use that example! As it happens, I’ve never really been convinced there’s a definite answer as to whether the aether exists or not. Like, it seems to me that if you took some 19th-century aether theorists and brought them fully up to speed with modern physics, they might say:

    “Fine, so what we called the luminiferous aether, you call the QED vacuum. Just like we said, it pervades all of space, making it ‘nonempty’—what else are those virtual particles about?—and it provides the medium for electromagnetic waves to propagate in. Of course your aether has a crucial property that we didn’t know about, namely Lorentz invariance (not to mention quantum mechanics). But, like, Lorentz invariance and quantum mechanics change the detailed behavior of a lot of things that we knew to exist in the 19th century, and we don’t therefore say that all those other entities don’t exist!”

    Crucially, understanding the insights of Maxwellian electrodynamics and special relativity never requires making a decision about who’s right—my hypothetical aether theorist, or the modern majority that says that “aether has been disproved,” by which they mean that what modern physics puts in its stead should be called a different name.

    So on my conception of “realism,” you get to understand all the mathematical structures relevant to physics (in many cases, with the certainty of theorems), and also be sure for practical purposes that those structures really are relevant for physics—because experiment (or physical reasoning derived from experiment) shows they are. The one thing you can’t do, is disprove someone who has a different (even radically different) way of talking about the same mathematical structures. Is that really not enough for you?

    If it’s not, then try this on for size: must there be an ultimate truth about whether the quantum state of the universe is pure or mixed? If pure, then the value of its global phase? If not, why not?

  188. Travis Myers Says:

    Tim #168

    I want to try to understand two things that you said. The first is:

    “Consider the fact that the Bohmian has a perfectly coherent answer to your worry. Consciousness supervenes somehow on the physical state of a system. The physical state of a Bohmian brain has *both* a quantum state *and* the detailed precise positions and motions of the particles in the brain. The “empty branches”, if you insist on reifying them into “things”, just don’t have anything with that structure. So there is no plausible supervenience thesis about consciousness that is violated here.”

    How is this fundamentally different from saying “The state of your brain has *both* a neutrino field *and* an electron field. Therefore, there is no plausible supervenience thesis in which your brain doesn’t supervene on a neutrino flux at least as strong as the one in our solar system.” It may be that it is implausible that consciousness can supervene on the wavefunction alone, but this doesn’t seem like a good argument for it.

    The second one is:

    “Any theory with local beables in a macroscopically 4-D space and local beables (including GRW with Bell’s flash ontology) can do this step honestly and cleanly and in a principled way, and then use the first (hand wavy) way to make predictions about experience. Any theory with no local beables can’t. That makes all the difference in the world.”

    It’s entirely plausible, and I think probably it is the case, that right now we have no better way of relating to our world of experiences than by using local beables. But isn’t it going too far to say that it’s simply impossible to describe the world without local beables? Is there nothing we could ever encounter in the future which could convince us that space (and hence locality) is not fundamental?

  189. Tim Maudlin Says:

    Scott # 182

    I’m afraid I don’t follow. There has been for some time a tag phrase—”Quantum theory without the observer”—for approaches to quantum theory that insist on leaving the observer out of the statement of the theory altogether. Or, as Bell once said, he wants a physical theory that neither requires nor is embarrassed by the presence of any observers. Bohmian mechanics and GRW certainly fall in this category. Many Worlds is usually presented as falling under this category, although if we take the Wallace/Deutsch line on probabilities they at least seem to require reference to a *rational agent* making *practical decisions* to make sense of the apparently probabilistic content of the theory. So maybe we should expand to “Quantum theory without the observer or rational agent”.

    I mean, there were billions of years of physical activity in the universe before there were any observers or measurements or agents making bets! Surely we want physics to treat that time just the same way as it treats stuff in labs now!

    So to make the point sharp again: suppose you have the wave function of the universe, and I somehow can “recognize” versions of our observer, who happens to be a shrimp, in the wave function. Leave aside issues of computation time here. How am I to proceed to determine the *experiences* of the shrimp? Even in principle?

    This sentence (fragment) ought to set off alarm bells:
    “The main issue here is just that the observer or measuring device be treated classically..”

    No, in the sense of physics I want a theory in which *nothing* is “treated classically”, a theory that is quantum mechanical through and through and in which—as far as physics goes—the body of a conscious observer is just a piece of meat, a collection of electrons and protons and neutrons like any other. Everything comes under the purview of a single physical theory, with no distinctions at all between “observers” and other items or between “measurements” and other physical interactions. In which there is no “measurement problem” because measurements are not treated as somehow special or different from any other interaction.

    If I can get the locations, shapes and motions of at least macroscopic items out of the theory, then I know how to test the theory. And I better be able to get that out without going through anybody’s consciousness, otherwise I will not be able to get it out at all.

  190. Travis Myers Says:

    Tim #181

    You say:

    “If I can get the locations, shapes and motions of at least macroscopic items out of the theory, then I know how to test the theory. And I better be able to get that out without going through anybody’s consciousness, otherwise I will not be able to get it out at all.”

    Are we not going through our consciousness to even make the leap to understanding macroscopic objects as collections of microscopic objects shaped in a certain way? Is it impossible that other consciousnesses don’t perceive macroscopic objects in a space like we do? What if we had only been gifted with the sense of smell? Would you be insisting that the fundamental nature of reality must make reference to odors or else we will never be able to test the theory? It would seem more wise to say that right now, we only know how to describe reality in terms of odors, but there may be a better description that we can learn. I don’t think we would be fundamentally incapable of imagining molecules positioned in a 3D space if we only had the sense of smell, it just might take us a long time to build up that concept; and in the same way, I don’t think we’re fundamentally incapable of imagining reality being something more than objects in a 3D space.

  191. Mario Hubert Says:

    Mateus #173

    It is a matter of fact that quantum states are objective because of the PBR-theorem. It’s a matter of debate whether Bohmian mechanics or MWI is the correct quantum theory. But we have given arguments why Bohmian mechanics beats MWI. These were physical arguments, not some metaphysical a priori principles. The principles you refer to (namely, realism, physicalism, reductionism, and Occam’s razor) all apply to Bohmian mechanics, GRW, and MWI. So these principles cannot distinguish among these theories.

    Let me mention some details about those principles. Realism is the basic assumption we need to make to do physics. Physics is about the behavior of objects in an external world. This should be unquestionable, and yet somehow, due to misunderstandings of Bell’s writings, physicists began to doubt realism. Physicalism, in the sense that all physical phenomena are determined by our fundamental physics, is also something we need to presuppose when doing physics. This may be questioned in solving the mind-body problem, though. The same goes for reductionism.

    Occam’s razor is not as strong as the other three. For example, it’s a bad argument to get rid of the Bohmian particles just by invoking Occam’s razor, because the particles play a role the wave-function does not and cannot play for explaining physical phenomena. It’s true that we need to argue why we put something more into our ontology, but also we need to justify when we cut something out. The meager theory is not necessarily the preferred theory.

  192. Daniel Says:

    Tim #176

    I take it we are thinking everything is described in QM terms (except space-time itself)… but in that case, how do we define the “centre of mass frame”?

    That is normally characterized by a classical 4-vector in Minkowski space-time (actually a constant vector field)… How do we obtain it from the quantum characterization of matter?

    Some expectation value? … using which wave function (in that case we would need a foliation to start with, right?).

    In that case the actual dBB trajectories would not play a role in the frame.

    Perhaps conservation laws could be of help … but again that would need to rely only on the wave-function…

  193. Aharon Maline Says:

    Many of the comments by dBB proponents here have emphasized reasons why having well-defined particle positions, or other local beables, is a desideratum in a physical theory. Like Scott, I find these arguments unconvincing: I’m quite satisfied to use only nonlocal beables, as long as the model in fact reproduces our experience (of an effective 3D world etc.)

    However, this question was not the thrust of my argument above. My claim is that dBB, rather than predicting a single experienced world as its adherents believe, in fact produces a multiverse of experienced worlds, exactly as described in MWI. Since I have difficulty with MWI (I am not convinced by the arguments that Born statistics can be “derived”), dBB will be at least as undesirable for me.

    The central issue is that the particles have no effect on the quantum state. Whatever quantum states exist in MWI exist in dBB as well, and vice versa.

    Now it is clearly the case that the wavefunctions we use in our quantum experiments, and in our quantum descriptions of interactions such as absorption of a photon by a molecule, are very much in line with the 3D classical world we see. That is, we have a range of possible positions in 3D space for each quantum particle, as controlled by our macroscopic equipment, and the wavefunction we use has non-negligible amplitude only in regions of configuration space that correspond to combinations of those 3D positions. The above remains true if we take into account the wavefunctions of all the atoms making up the equipment, room, experimenter, etc.: The scene has a wavefunction that is very sharply peaked in the small region of configuration space that corresponds to the 3D setup. All this is true in any interpretation.

    Now in interpretations that have an objective, universal quantum state, in particular MWI and dBB, we need to explain how the above near-classical wavefunctions arise as part of the universal state. Bohmian particles play no role in resolving this. If it’s the case, as Tim seems to believe, that “how a non-local item—indeed how a world with no space-time at all!—can compose or “contain” tables and cats is a question so hard that it is difficult to know where to begin”, then the existence of a universal state is falsified by every quantum experiment to ever involve macroscopic equipment!

    Fortunately, we do know where to begin: decoherence. We hypothesize that unitary evolution of the universal state tends to form non-interfering sharp peaks corresponding to nearly “classical” 3D configurations. This hypothesis is far from being rigorously proven, but in short, the reason meaningful 3D configurations are preferred is that the Hamiltonian is local in 3D space. The relations between 3D positions of particles (and by this I of course mean the relations between the values of the corresponding triplets of dimensions in configuration space) are what control which interactions take place. If the state can be written as a sum of two states that differ, even by small amounts, in the corresponding macroscopic configurations, the continuing evolution of the two will be so different that they quickly lose any overlap and become sharp peaks. This argument may be somewhat hand-wavy (actually, I’d appreciate it if someone else can spell it out more clearly), but I repeat, dBB stands or falls on this point, just as MWI does.
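
    For what it’s worth, here is about the smallest toy model I know that exhibits the basic mechanism (a sketch with made-up numbers, an illustration rather than a derivation): a single “pointer” qubit in a superposition of two configurations becomes correlated with N environment qubits, and the off-diagonal element of its reduced density matrix (the interference between the two branches) falls off exponentially in N.

      import numpy as np

      # Toy decoherence sketch (an illustration, not a derivation): a pointer qubit
      # starts in (|0> + |1>)/sqrt(2); each of n_env environment qubits, initially |0>,
      # is rotated by a small angle theta if and only if the pointer is in |1>.
      # Tracing out the environment leaves the pointer state
      #   rho = [[1/2, c/2], [c/2, 1/2]]  with  c = <E_0|E_1> = cos(theta/2)**n_env,
      # so the coherence between the two branches decays exponentially in n_env.

      def pointer_coherence(n_env, theta):
          e0 = np.array([1.0, 0.0])                              # per-qubit environment state for pointer |0>
          e1 = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # per-qubit environment state for pointer |1>
          return 0.5 * np.dot(e0, e1) ** n_env                   # off-diagonal element of the reduced rho

      for n in [0, 1, 10, 50, 200]:
          print(f"n_env = {n:4d}   coherence = {pointer_coherence(n, theta=0.3):.3e}")

    This of course leaves out the hard part (why the branches track sensible macroscopic configurations in the first place), which, as I said, is tied to the Hamiltonian being local in 3D space.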

    Assuming the decoherence argument is correct, the resulting universal state is a set of non-interfering peaks that we may as well call “worlds”. Each one contains wavefunction descriptions of atoms with nearly well-defined positions, forming macroscopic objects, just like the wavefunctions in our own world.

    Now in dBB, there is one important asymmetry between the worlds: exactly one of them contains the Bohmian particles. But this is the only difference. If the universal state were examined by an extra-universal being who could see the quantum state but not the Bohmian particles, he would have no way of guessing which “world” contains them (beyond betting on the one with the highest |PSI|^2). This must be the case, because once again, the particles never influence the state in any way.

    Thus in particular, the wavefunction branches near our own correspond to macroscopic configurations that include humans debating on blogs. (Perhaps in one of them, I actually stopped blogging for once and started doing my homework). The atomic wavefunctions in their screens form shapes that we would recognize as spelling out thoughtful arguments about the nature of consciousness, and the interpretation of QM. All this follows inevitably from assuming a universal quantum state!

    And yet dBB insists that we are special; we are “real” because the Bohmian configuration includes us. But how would we know that? All of our experiences can be described in terms of wavefunction interactions alone. The only solution I see is to posit that the Bohmian particles play an active role in solving the hard problem of consciousness, that they somehow act as a “soul” for the inhabitants of the world they are in, and that the inhabitants of the other worlds are zombies.

    Does anyone here actually want to bite that bullet?

  194. Daniel Says:

    Scott #178

    “Why not just use decoherence theory, and keep track of which wavepackets (conditional on being in the branch we care about) are so well localized that they can be regarded as classical?”

    In order to use decoherence we need to first identify the degrees of freedom we want to trace over (and the reasons usually invoked for doing so are to some degree anthropocentric: “we cannot measure them”, “we take them as the environment as far as the RELEVANT objects are concerned”, etc.). That seems inappropriate if we want then, afterwards, to try to identify the features in the wave function that we take as humans, measuring devices and such.

    Then there is the issue of interpreting the improper mixtures as proper ones, and the fact that in many cases the reduced density matrix, even if fully decohered, does not specify a unique basis.
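
    On the last point, here is a tiny illustration (a toy example of my own, nothing specific to the present debate): once a reduced density matrix is fully decohered down to a multiple of the identity, its decomposition into a mixture is wildly non-unique, so the matrix by itself singles out no preferred basis.

      import numpy as np

      # A fully decohered qubit: rho = I/2. The same matrix arises from an equal
      # mixture of |0>, |1> and from an equal mixture of |+>, |->, so the reduced
      # state alone does not pick out a preferred basis.
      ket0 = np.array([[1.0], [0.0]])
      ket1 = np.array([[0.0], [1.0]])
      plus = (ket0 + ket1) / np.sqrt(2)
      minus = (ket0 - ket1) / np.sqrt(2)

      rho_z = 0.5 * (ket0 @ ket0.T + ket1 @ ket1.T)
      rho_x = 0.5 * (plus @ plus.T + minus @ minus.T)

      print(np.allclose(rho_z, rho_x))   # True: identical density matrices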

  195. Scott Says:

    Aharon #184: Thanks!! I was going to write a reply to Tim that made basically the same point, but you’ve said it much better than I would have.

    A way to put it in philosopher-speak is that the particle positions or other beables are “epiphenomenal”: affected by the wavefunction, but having no effect whatsoever back on it. So they can play a role in “ensouling” us if you want—there’s that pesky mind/body problem again!—or in “breathing fire” into a single chosen branch of the wavefunction, but they can’t play any role at all in explaining why the world looks approximately classical in that branch, with objects localized in space and so forth. For that you need decoherence, just the same as MWI does.

  196. Atreat Says:

    Man, I’m glad I asked the question… 😉 learning so much just sitting as a fly on the wall. Scott, I think you have more than sufficiently selected your cards.

    Tim, your description of local beables and what you find motivating dBB was also invaluable. I think the debate is turning sharply toward a meeting of the minds about what the actual disagreements are.

    Scott #179, careful, if I did not know you any better I would say this comment is straying very close to postmodern relativism! What is striking to me is that I consider my own philosophical position avowedly NOT postmodern relativist, but rather global anti-realist. I think your comment here hints at the subtlety between the two.

    Travis #181, your comment here is also very much in line with what I consider global anti-realism. Very insightful comment!

    Mario #182, you say, “The principles you refer to (namely, realism, physicalism, reductionism, and Occam’s razor) all apply to Bohmian mechanics, GRW, and MWI.”

    It is my impression that the differences now becoming apparent between Scott, Aharon, Travis and other MWI-vs-dBB agnostics on one side, and Tim, Mario and other dBB believers on the other, might be grounded in subtle distinctions in, and beliefs about, these philosophical principles. I think I detect some tolerance for anti-realism in the former group.

  197. Atreat Says:

    Sniffnoy, btw, I was gesturing toward MWI giving an uncountable set of universes with my comment about the cardinality of the reals. I think this is a big problem for MWI adherents claiming MWI derivations of the Born rule.

  198. Atreat Says:

    Want to thank everyone in the thread: between this and the Falcon Heavy / Starman launch my inner geek has been heavily satisfied this week 🙂

  199. Mario Hubert Says:

    Aharon #185

    How does a non-local beable reproduce our experience of *local* objects in three-dimensional space? If you refer to the “shape” of the wave-function in configuration space, this is just a hand-wavy metaphor. How can local objects be composed of wave-function? Even when David Wallace followed this route, he only talked of patterns without being more precise on how the pattern in the wave-function is able to explain the behavior of physical objects.

    Your reading of a many-worlds ontology in dBB is misguided: the role of the wave-function is to guide the particles. The wave-function does not by itself compose a world in this theory.

    Why are you not convinced that the Born rule can be derived in dBB? I mean it’s a mathematical proof. Where do you doubt the validity of this proof? Do you also doubt the validity of the Maxwell-Boltzmann distribution in statistical mechanics? If you’ve found a mistake in either derivation please point it out!

    I agree that decoherence plays a crucial part in the classical limit, but it doesn’t help MWI in giving us an ontology. This issue has to be settled way before decoherence can be applied. By the way, decoherence doesn’t need to be assumed; it is a consequence of the Schrödinger evolution.

  200. Travis Myers Says:

    Atreat #188

    “Travis #181, your comment here is also very much inline with what I consider global anti-realism. Very insightful comment!”

    That worries me, because I am definitely not an anti-realist!

  201. Mario Hubert Says:

    Scott #187

    Even when the wave-function is in a coherent superposition, the Bohmian particles guided by this wave-function inside the interference regime always have precise locations. No decoherence is needed for that!

    I don’t understand your reasoning: you grant that the Bohmian particles may account for our inner mental experiences, while being incapable of composing local objects? How can that work?

  202. Mateus Araújo Says:

    Scott #180:

    No, come on, now you’re just trolling. The Michelson–Morley experiment made the æther change from being a detectable thing with interesting consequences, to an undetectable entity so lame that it is not worth talking about. Or, in other words, it was sliced off by Occam’s razor.

    Do you realize that the point of the Lorentz-covariant æther is to allow one to talk about a preferred reference frame, absolute time and apparent time? That is, to reject the very insights that one should get from special relativity?

    I literally can’t believe that you find the æther hypothesis plausible or appealing. Of course one cannot disprove it from bare physics and mathematics, but one can from general philosophical principles (that the æther theorist might reject, of course, but that’s his problem).

    So no, this is not enough. I claim that without philosophical axioms that are at least as strong as the ones necessary to reject the æther one will never be able to solve the problem of interpretations.

    As for your questions: yes, there is an ultimate truth about the quantum state of the universe, and it is pure. As for its global phase, it is a redundant degree of freedom in our mathematical representation of the state. Is there anything else you want to know?

  203. Tim Maudlin Says:

    Travis # 181

    You raise a very interesting and serious question here (and no, I am not being sarcastic at all), viz., could a conscious being with only a sense of smell ever develop physics? I think the answer is “no”. Let me say why.

    Of course, to survive at all our olfactory creature had better not be able to locomote, otherwise it will fall off a cliff or something. So let’s root it, like a plant. Think of a conscious Venus fly-trap that triggers by smell rather than touch. Its “experiential space” will be very high dimensional—as many dimensions as there are distinct properties of smells—plus one dimension for intensity. The only continuously varying dimension in its experience is the intensity: it gets continuously higher as the “prey” approaches from any direction. The place of the odor in the other dimensions remains basically static.

    So if our creature develops any spatial imagination at all, it will be one-dimensional. No good for physics.

    The philosophical tradition going back at least to Aristotle made a distinction between different perceptible characteristics of things. Those characteristics that are confined to a single sense modality are called “special sensibles”, such as color, flavor, odor, pitch, etc. Then there are the “common sensibles” that can be perceived by several senses, in particular both touch and sight. These include shape, number and motion. The basic explanation of this distinction is that the special sensibles are purely subjective in character. People can have very different sensory experiences of the same smell, and no one counts as either “right” or “wrong”. But the common sensibles appear more “objective”, because they can be perceived by different senses, which can check one another.

    This later became the doctrine of primary vs. secondary qualities. No body objectively has any primary quality like flavor or color. They are “all in the mind”. But bodies do have the primary qualities of shape, location and motion, like the Democritean atoms. Since our olfactory creature perceives no primary qualities at all, it is handicapped beyond repair, and can never develop the concepts required to do physics.

  204. Tim Maudlin Says:

    Daniel # 176

    Again, above my pay grade! I haven’t thought deeply enough about the proposal. I’ll try to figure it out.

  205. Mateus Araújo Says:

    Mario Hubert #183:

    I don’t think I’m able to respond to you politely after reading this statement:

    “It is a matter of fact that quantum states are objective because of the PBR-theorem.”

    So I’ll just leave it at that.

  206. Tim Maudlin Says:

    Aharon #190

    There are several things to address here, but let me do one straight off. It is the observation that although the quantum state affects the particles (by guiding their motion), the particles in return do not affect the quantum state. That is perfectly true. Sometimes people say that that alone violates a “generalized Action-Reaction principle” or a “generalized Newton’s Third Law”, and is automatically a strike against the theory.

    The first thing to note is that the objection put this way (which I know is not how you put it) is just silly. Newton would certainly have rejected this generalization of his Third Law, as should everyone else. For example, Newton’s law of universal gravitation obviously has an effect on the motion of particles, but the motion of the particles has no reciprocal back-action on the law of gravitation! (Maybe Lee Smolin will think this is a problem too!) As Shelly Goldstein says, the quantum state has something of a nomological flavor: if you want to analogize it to something familiar, it is more like a law than a classical field. Personally, I try to lay off the analogies. Quantum states are quantum states. They aren’t like anything in classical physics. Or, as David Albert once put it, “we’re here, we’re queer, get used to it”.

    But granting the no-back-reaction fact, what has that to do with consciousness or the “reality” of “worlds” associated with the “empty branches”? Well, nothing at all! To see this, just put a little bit of back-reaction into the theory: the actual particle configuration deforms the quantum state at least a little exactly “where” that configuration is in the quantum state. Does that suddenly change the interpretive situation significantly? How could it?

    Is the “existence of a universal state is falsified by every quantum experiment to ever involve macroscopic equipment” if there is no back-reaction? Why should it be? In Bohmian mechanics, the actual shape, location and behavior of the macroscopic equipment is determined by the location and motion of its particles, and the motion of the particles (given their locations) is determined by the quantum state. Without the quantum state the theory falls to pieces. So how can such an experiment falsify the theory?

    Scott # 192: Note that the particle locations and motions are *not* epiphenomenal in the usual sense of that term. It is not merely that epiphenomena do not have any reciprocal back-effect on their cause or base; they also *supervene* on that base. The Bohmian particle locations and motions do not at all supervene on the quantum state. If I give you the quantum state alone, you can never analyze it to figure out what the particles were actually doing. The particle locations are physical degrees of freedom independent of those of the quantum state.

    That is why the “Bohm is Many Worlds in a constant state of denial” meme that Deutsch created is so completely off base. A collection of particles plus a quantum state has much more physical structure to it—more physical degrees of freedom—than a quantum state alone. Further, those extra degrees of freedom—the particle locations in space-time—are of a completely different physical character than any in the quantum state. So it is no surprise that you can make lots of stuff out of particles + quantum state that you can’t make out of quantum state alone. In particular, you can make all sorts of complex localized objects in the first case and none in the second. These objects include trees and rocks and cats and brains with the right sort of structure to support consciousness. The “empty branches” are devoid of all these things, because they don’t have the particles to work with. And of course, since the particles never “split” or “multiply”, the concrete Bohmian objects never “split” or “multiply” either, so we have none of the conceptual headaches that come with that in Many Worlds.

  207. Travis Myers Says:

    Tim #195

    That is extremely interesting. You gave me something to think about.

  208. Tim Maudlin Says:

    Aharon # 190

    I was going to go on to address the rest of your post, including the bit about zombies, but let me try something else instead. I did this once before and it was really useful.

    Do your best to try to switch roles for a second. You understand enough about Bohmian mechanics now, for sure. So pretend that you are a Bohmian, and you are tasked with responding to the rest of the objections in post # 190. See if you can come up with the responses on your own. If you really can’t, then just raise them again and I’ll answer them. But I bet there will be no need.

  209. Atreat Says:

    Travis #191, or consider that how I use the word ‘anti-realism’ might be different from what makes you reflexively recoil. I don’t consider myself a postmodern relativist. They deny too much. I believe in an externally existing world. It is *how* it exists that I wouldn’t describe as ‘real.’

    Tim #194, I think you lack imagination. Polar bears use their nose to track seals kilometers away. A blind/deaf beagle is going to be able to track down a fox no problem. I’d pick a blind deaf bear in a fight vs you 😉

    To rule ‘reality’ out of bounds for conscious beings with sense worlds fantastically different from our own makes ‘reality’ pretty damn impoverished of a concept don’t you think? In a sense, you are saying that a conscious being without your “common sensibles” can not perceive ‘reality’. What then do you think a conscious being is perceiving? An imaginary world?

  210. Tim Maudlin Says:

    Mateus #202

    We have had our bad patch and got through it, I think, so let me try to intervene to avoid another. There is nothing at all impolite about Mario’s post. I myself have gotten this weird reaction both here and in other venues, that when I say that PBR killed all psi-epistemic theories and leaves us with only psi-ontic theories (including Bohm, GRW and Many Worlds) the temperature suddenly goes through the roof. But that is precisely my own understanding of the implications of PBR. And yes, I have read Matt Leifer!

    So somebody is misunderstanding something here. I am perfectly willing to explore the issue, and to defend my understanding of the implications of the theorem, but there is no reason at all for any particularly strong emotion to accompany that. I am completely befuddled by this reaction to a claim that I think is correct, and needs to be discussed with people who disagree. If I am wrong, I am wrong. But let’s just calmly try to work through the issues.

  211. Tim Maudlin Says:

    Travis # 181

    Obviously (I hope) an error in the last paragraph of my comment #200. It is the *secondary* qualities like color and flavor that are all in the mind, not the *primary* ones.

    Scott appreciates this:

    ‘by convention sweet and by convention bitter, by convention hot, by convention cold, by convention color; but in reality atoms and void’

  212. Tim Maudlin Says:

    Atreat # 206

    It was not for nothing that I rooted my creature in the ground. Your polar bear and beagle locomote and have proprioception. (If they don’t have proprioception then I’ll take the bear on in a fight, no problem. It would be cruel to the bear, though.) Because they are moving around in three dimensions and have proprioception they can develop a three-dimensional spatial imagination and come to understand the basic spatial concepts. But without that, I don’t think it is possible. And without basic spatial concepts they will never arrive at an adequate physics for our world, because its spatial (or spatio-temporal) structure is so important to understanding its physics.

    I am not saying that creatures with a certain sort of impoverished sensory capacity cannot experience “real things”! Even our olfactory creature is smelling real things, not imaginary things. But I don’t think that such a creature can ever develop an adequate physics. It just won’t have the right concepts, no matter how intelligent (in some sense) it is.

  213. Sniffnoy Says:

    Atreat #188: I agree that the idea that the set of worlds should have a (non-counting) measure on it seems a bit bizarre, but, you probably want to phrase your objections in those terms, rather than talking about cardinalities.

  214. Andy Says:

    Sorry for joining the thread late. I just have two comments. First, about Scott’s cartoon: maybe I’m a prude, but to me it seems complex numbers belong in one’s ontology iff one is a Platonist. Furthermore, whether or not they exist, under most accounts complex numbers are abstract objects (https://plato.stanford.edu/entries/abstract-objects/). The same holds, e.g., for wavefunctions, which are elements of some projectivized Hilbert space. My question for those who like to put such objects in their ontology: do you really mean that mathematical objects are instantiated physically? (Tegmark would say “yes” but I’m not sure how widely his views are shared.) Or do you mean there’s some physical object, different than but mathematically described by an abstract wavefunction in a Platonic Hilbert space?

    My second comment is that to me, it seems that asking for an interpretation of quantum mechanics is a bit less demanding than asking for a metaphysical theory in which quantum mechanics naturally resides. To interpret quantum mechanics, isn’t it enough to assign meanings to the various things arising in quantum mechanics, such that the theorems of quantum mechanics are true statements about these meanings? It seems that the standard Copenhagen / psi-epistemic interpretation does at least this, whether or not you want to say this is enough to qualify as an interpretation. (Of course, interpretations in this sense are probably very far from unique.) I do view metaphysical questions as very interesting, and I find the phrase “shut up and calculate” to be overly dismissive of them. But metaphysics seems to be very difficult, and for now I’m happy enough with being able to make sense of quantum mechanics without worrying about paradoxes.

    For example, I can view the wavefunction of a system (even in cosmology) as representing the beliefs of a hypothetical perfectly-rational conscious experiencer about (certain aspects of) its future conscious experiences, given whatever its past experiences were. Then the Schrodinger equation expresses a true statement: if time dt has passed and the observer’s conscious experience during that time contained no new information pertaining to the relevant aspects of future experiences, then the (perfectly rational) observer will update its beliefs by multiplication by e^{iHdt}. If, however, after time dt the observer has learned information restricting the relevant aspects of future experience in some way (e.g. if the relevant aspects are the readouts of some experimental machine, and after time dt the readout appears), then the perfectly rational observer will update its beliefs with a projection operator rather than a unitary operator. If you like, you can equivalently view these true declarative statements as imperative statements instead: “after dt, thou shalt do this,” removing the need to invoke hypothetical perfectly rational observers (but removing the declarative content of the theory); I think this is more or less what’s intended by Fuchs’ “Ten Commandments” figure (page 8 of https://arxiv.org/pdf/1003.5209.pdf), although I agree that no good reasons exist for praying at Fuchs’ altar.
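
    As a concrete toy version of the two update rules described above (with a made-up Hamiltonian and measurement, hbar = 1 assumed, and the usual sign convention e^{-iH dt} for forward evolution):

      import numpy as np

      # Sketch of the two belief-update rules for a single qubit (toy example:
      # H is the Pauli X matrix, hbar = 1, and e^{-iX dt} = cos(dt) I - i sin(dt) X
      # since X^2 = I).

      X = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
      dt = 0.1
      psi = np.array([1.0, 0.0], dtype=complex)        # belief state |0>

      # No new information during dt: update unitarily (Schroedinger evolution).
      U = np.cos(dt) * np.eye(2) - 1j * np.sin(dt) * X
      psi = U @ psi

      # A readout "0" arrives in the computational basis: update instead by
      # projecting onto the corresponding subspace and renormalizing.
      P0 = np.diag([1.0, 0.0])
      psi_post = P0 @ psi
      psi_post = psi_post / np.linalg.norm(psi_post)

      print("after unitary update:   ", np.round(psi, 4))
      print("after projective update:", np.round(psi_post, 4))

    Nothing in the sketch depends on any interpretation; it just makes the two operations, unitary evolution and projection-with-renormalization, explicit.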

  215. Conan Says:

    Hi Scott,

    I’m not an expert, but I saw a lecture called Copenhagen vs Everett by Leonard Susskind, and since it’s related to the topic of this post I was wondering what your views are on the connections he (Susskind) draws between quantum mechanical interpretations and gravity, and things like GHZ branes etc.? Is it just wildly speculative, or speculative but with merit, or is that line of thinking outright wrong, in your opinion?

  216. Scott Says:

    Andy #205: I really don’t care whether it’s “complex numbers in our ontologies” (a laugh line that Zach wrote and I enjoyed), or whether it’s physical systems, a central aspect of whose behavior happens to be perfectly described using complex numbers.

    No, really, I don’t.

    Max Tegmark famously says that physical reality is literally a mathematical object. Other people say that’s crazy, physical reality isn’t at all a mathematical object, it just happens to be isomorphic to one in all respects. For my part, I don’t even understand what it would mean for one party to this dispute to be right and the other to be wrong.

    If you do care, then please go ahead and make whichever choice you’re happier with! (Presumably the latter one?)

  217. Scott Says:

    Conan #206: I’ve actually been working with Lenny for years on this subject, as part of the “It from Qubit” collaboration that he helped launch and that I’ve been part of since its beginning. Just last night, Lenny posted a paper on the arXiv about our joint work (with my blessing), after I’d dragged my feet for years doing my part to write our paper. So yes, I’m quite excited about these connections. Lenny himself is a living legend, whose enthusiasm and energy inspire everyone around him. Compared to him, I’m a skeptic and conservative about quantum information being the future of fundamental physics.

    Having said that, I don’t know what GHZ branes are and haven’t seen the lecture in question, so I can’t comment on those specifically.

  218. Andy Says:

    Out of those two choices, I would be happier with the latter one, although I prefer the epistemic approach (the Born rule just smells like Bayesian inference to me; actually, the name Quantum Bayesianism was the one thing I liked most from Fuchs et al, and I’m not sure why they’ve abandoned it).

    It’s perfectly reasonable not to care about this type of philosophical question. It’s probably not precisely stated enough for one side to be right or wrong: what is a mathematical object, and what is a physical object? Already the ontological division of (abstract or (concrete = (physical or mental))) is highly debated, and there are lots of competing definitions / rejections of the whole scheme. Still, one can try to argue that the definitions *should* be set up in one way or another, to avoid consequences that violate the intuitions we’re trying to capture in the definitions.

    For example, here is one such argument: If wavefunctions exist physically, and wavefunction is literally taken to mean equivalence class of vectors in an appropriate Hilbert space, then presumably they don’t also exist as abstract objects (otherwise, confusing the physical wavefunction with the abstract one is like confusing the idea of the Parthenon with the physical Parthenon, using Quine’s example). Then it also seems plausible that no other mathematical objects exist as abstract objects (otherwise, why don’t the vectors in physical Hilbert spaces exist as abstract objects?), and since they don’t exist physically (or mentally) either, they don’t exist at all. But part of the usual intuition about mathematical objects is that they either all exist or they all don’t exist (e.g. there’s no largest number). So if one defines abstract objects, physical objects, mathematical objects, etc. precisely, it seems desirable to do so in such a way that mathematical objects (if they exist) are abstract and not physical. One could reasonably take issue with any point in the argument (it’s not rigorous), or reject the “all-or-nothing” intuition about mathematical objects. Or, one could prefer to work on actual scientific questions instead- this is my preference most of the time, but philosophy can be fun once in a while.

  219. Mateus Araújo Says:

    Maudlin #202: I’m not saying that this statement is impolite; I’m implying that it is wrong on so many levels that I’m not capable of arguing about it without insulting the person making the statement.

    So I’ll just leave it at that. I don’t want to fight with you again, or trade insults with Mario Hubert. It’s carnival in Cologne, and I have to get drunk at 11:11 am.

  220. Mateus Araújo Says:

    But I can reply politely to an argument you made in #181, if you don’t mind me barging in. You wrote that

    “Many Worlds is usually presented as falling under this category, although if we take the Wallace/Deutsch line on probabilities they at least seem to require reference to a *rational agent* making *practical decisions* to make sense of the apparently probabilistic content of the theory. So maybe we should expand to “Quantum theory without the observer or rational agent”. ”

    I think you are confusing two things: one is the fundamental reality, where there is only a vector in a Hilbert space evolving deterministically under the Schrödinger equation. There are no probabilities to be talked about.

    Another thing is the experience of an observer. Now the branching process gives rise to the appearance of randomness, and a rational agent will deal with this randomness by assigning probabilities.

    So it doesn’t bother me in the least that I need to talk about observers when what I want to describe is the experience of an observer.

  221. Scott Says:

    Mateus #193:

      The Michelson–Morley experiment made the æther change from being a detectable thing with interesting consequences, to an undetectable entity so lame that it is not worth talking about it. Or, in other words, it was sliced off by Occam’s razor.

    But my whole point was that, if we identify the aether with the QED vacuum, then it’s not just a useless appendage doing nothing, waiting only to be put out of its misery by Occam’s Razor. Do you agree that the QED vacuum has a real physical existence, and indeed is teeming with quantum-mechanical activity at every point in space? Do you agree, moreover, that the things it does (e.g., provide a medium for electromagnetic waves to propagate) are in extremely close correspondence with what the aether was proposed to do?

    If so, then it seems to me that this all just comes down to a factual question about scientific history. Namely: in the 19th century conception of luminiferous aether, how important was the idea of the aether defining an absolute rest frame, compared to the other functions the aether was supposed to serve (notably, that of providing “something for the E&M waves to wave in”)? Does anyone know?

  222. Mateus Araújo Says:

    Scott #213:

    Yes, the QED vacuum is real. No, it doesn’t do what the æther was supposed to do, not even close. The æther discussion was a discussion about a preferred reference frame and an absolute time, historically speaking.

  223. Scott Says:

    Mateus #213: I’m afraid I’m gonna need sources.

    Wikipedia seems very clear that the main point of the aether was to be a medium for the propagation of light. Having an absolute rest frame was just an additional property that it was wrongly postulated to have.

    Some other interesting tidbits that I learned:

    Einstein wrote a paper in 1924 called “Concerning the Aether,” arguing that one could speak of the “aether of electrodynamics,” the “aether of general relativity,” etc.—they just had different properties than the 19th-century aether was imagined to have, while upholding the concept in other respects. But his suggestion was not adopted, and the word “aether” fell into disuse.

    The abandonment of the word was a sociological phenomenon in physics history and pedagogy—you might strongly support it, which is fine, but don’t confuse it with a scientific discovery!

    I also enjoyed this 1911 quote from Lorentz:

      whether there is an aether or not, electromagnetic fields certainly exist, and so also does the energy of the electrical oscillations … if we do not like the name of ‘aether’, we must use another word as a peg to hang all these things upon … one cannot deny the bearer of these concepts a certain substantiality

  224. Hal Swyers Says:

    #212
    We have to keep the concept of the QED vacuum as something separate from what we see as our “real” vacuum. What we see as real is something we have observed. We observe an extremely stable “real” vacuum, and all of what we see that manifests itself as “real” is effectively entangled with it. In decoherence speak, the “real” vacuum is our environment, which is always entangled but which we trace out when we make our observations.

    As far as interpretations go, we have to ask two questions:

    1) After we make an observation, can we exclude all other possibilities for that observation (e.g., wave function collapse)?

    2) What happens to all the other possibilities?

    The answer to 1) in “real” terms is yes, but in QM the answer is more sophisticated. If we accept the lessons of QED, then the correct answer is that our act of observation has caused all the other possibilities to cancel out. If we think of all actions as rotations, then our act of observation has rotated the state such that all other possibilities are phased so that they are no longer observable.

    The answer to 2) is that we tend to forget that what we observe is time dependent. Can we rotate back? I will say generally no, except under extremely controlled conditions. This is the question of how we can correct for the actions of the heat bath. Can I rotate out of the acts of randomness? In effect, create a time-independent state? I think the answer is yes within certain time bounds; at least that appears to be the case from press reports on quantum computing. Effectively, time-independent bubbles in a largely time-dependent background.

    Does this favor many worlds? What is “real” is based on time-dependent observation. There is no schema that will allow an individual to rotate to see two “real” outcomes of an observation. Flipping a card over in blackjack doesn’t ever generate two cards. However, we left out the other important concept of translation. Two spatially separated individuals may not necessarily have the same starting state unless previously entangled. This gets back to the earlier point about our “real” vacuum and its importance. It is the means by which we ensure consistency in our observations.

    It isn’t aether though.

  225. Mateus Araújo Says:

    Scott #215: I’m in no condition to talk intelligently anymore (carnival here is serious business), but I promise you a proper reference tomorrow.

  226. Tim Maudlin Says:

    Scott # 218

    I promise I am not trolling. Just think about this statement you made:

    “Do you agree that the QED vacuum has a real physical existence, and indeed is teeming with quantum-mechanical activity at every point in space?”

    Anyone who believes this—and that includes most physicists!—believes in “hidden variables”. The vacuum is a stationary quantum state. The quantum state is not changing even a tiny bit. So if the vacuum is “teeming with quantum-mechanical activity”, then that activity must be in something other than the quantum state. It must be in some additional variables.

    So either you don’t believe what you wrote, or you do believe in “hidden” variables.

    This is not at all anything particular about you! 99% of physicists would completely accept what you wrote, and then adamantly deny that there are hidden variables! But those two claims are just inconsistent with each other.

    But if you do believe in additional variables of a Bohmian flavor for QFT, you also believe that the vacuum is *not* teeming with activity! Since it is a ground state, the additional variables will also be stationary. Just as the electron in the ground state of a hydrogen atom is at rest in Bohmian mechanics. Isn’t life full of little ironies?

  227. Sandro Says:

    Tim, Scott, Aharon: in previous discussions with people who believe dBB is MWI in denial, I’ve found that they’re often bothered by a perceived lost opportunity to introduce another symmetry entailed by treating all branches equally. Physicists are all about the symmetries!

    Of course, there seems to be no reason to accept this symmetry other than conceptual aesthetics, and indeed, by making a different choice, dBB introduces other opportunities and other properties that are not in the MWI ontology. This is the reason dBB requires a concept like quantum equilibrium, from which one can then infer the Born rule, for which MWI has no analogue.
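
    (For readers who haven’t run into the term: “quantum equilibrium” is, roughly, the assumption that the configuration of the Bohmian particles is distributed according to the Born rule,

    \[ \rho(q, t) = |\psi(q, t)|^2 , \]

    a distribution which the guidance equation then preserves in time, so that Born-rule statistics come out as a consequence rather than as an outcome-by-outcome postulate. This is only a rough gloss on the idea, not anyone’s precise formulation.)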

    So indeed, dBB is not MWI in denial, and while one might bemoan the loss of an aesthetically pleasing conceptual symmetry, let’s not confuse that with an actual observable symmetry.

    I think the response paper phrased it correctly: saying “dBB is MWI in denial” fails to treat dBB on its own terms.

  228. Scott Says:

    Tim #217: OK, fair point. What you raise was at the core of this paper by Boddy, Carroll, and Pollack, which argues that the fact that “nothing is happening” in the QFT vacuum (i.e., you’re just sitting in the ground state) provides a solution to the Boltzmann brain problem that wouldn’t have been available with classical probabilistic theories.

    On reflection, I should have used a more careful formulation, like:

    “teeming with activity in the Feynman diagram description (the particle/antiparticle pairs), though not in the quantum state description”

    “teeming with potential activity (which you can see by, e.g., setting up a detector for the Casimir force)”

    “teeming with structure”

    “teeming with activity if you’re a local beables theorist” 🙂

    Thanks for pointing that out.

  229. Tim Maudlin Says:

    Mateus # 216

    Your comment is just bizarre. Something has gone completely off the rails here. I believe—and I will defend the claim—that PBR rules out all psi-epistemic theories in just the way that Bell’s theorem rules out all local theories. This is an extremely important fact. As was said at the time, PBR is the most important result in foundations since Bell. I believe that to be true. 100%.

    You believe that what I just said—and Mario said—is “wrong on so many levels that I’m not capable of arguing about it without insulting the person making the statement.” That is just a weird response. If it is wrong on so many levels, then your task should be really, really easy. Pick, among those many levels, the level that is the easiest to explain and make the argument.

    You add the strange comment that you are incapable of making this argument without insulting the other person. OK, that’s an odd sort of personal confession to be making in public, but that’s fine: I can take the insults. I promise even not to return them. This is a point of such importance that I would much prefer to endure some insults but end up enlightened about something I am so confused about than to continue to hold a false belief. So I hereby authorize you to insult the hell out of me, provided that the insults are accompanied by an actual argument.

    I would, of course, do the opposite—I mean provide you with the argument that PBR rules out all psi-epistemic theories, not insult you—but all I can say is read the PBR paper. They aren’t my arguments. If the paper contains a flaw or an error, I have completely missed it, and it should be easy to point out. But you have not given even a slight clue about what that flaw might be. You seem to positively refuse to give a clue.

    So let’s make a deal. Either accept my terms—I grant you the right to insult me as you like, so long as you also provide the argument—or else just stop saying in public that my conclusion is wrong while refusing to back that up with anything. There is no excuse for such behavior.

  230. Tim Maudlin Says:

    Scott # 224

    Thanks for the nice response! I don’t want to sound puffed-up in saying this, but I made this very observation to Sean before they wrote that paper and realized that the Boltzmann brains issue was a red herring. It’s the kind of thing that, once you see it, is very obvious, but so many people repeat it that it is hard to notice.

    Let me just gently push back on your list of careful formulations.

    The use of Feynman diagrams is a mathematical method for solving an equation. The notion that one should physically reify the individual diagrams—so lots of diagrams implies lots of “activity” or even lots of “structure”—is something one could argue for, but it does require an argument. An analogy: You want to calculate the volume of a physical iron sphere. So you hit on this idea: First, calculate the volume of the largest cube that fits in the sphere. Next, take the remaining pieces of the sphere and calculate the volume of the largest cube that fits in each of them, together with the number of such cubes (which is six). Add up those volumes and add the total to the first result. Rinse and repeat with more and more, smaller and smaller cubes. The calculation gets harder and harder, with more and more terms, and you sum to infinity. Voilà! The volume of your sphere.

    Now that is all mathematically fine. There are much easier ways to get to the answer, but that infinite sum is OK. Hard to calculate, but OK.

    What would not be OK is to somehow think that all that complication in the calculation indicated some “complicated structure” in the sphere itself. The sphere is pretty simple and uncomplicated; only your method of calculation was complicated.

    My understanding of the “amplituhedron” (which is not a deep understanding) is that they are making exactly such a claim: the Feynman diagrams give a complicated way to calculate something that is actually quite simple. If so, the use of the diagrams is misleading.

    So that takes care of the first and third of your careful statements, or at least shows that they need more defense. The last I already showed was false: the Bohmian vacuum is stationary even for the additional variables. And as for potential activity… well, the notion of potentiality here needs some discussion.

    So this is a tricky subject!

  231. Scott Says:

    Tim #221: Ah, thanks! I’d missed the part of your comment where you pointed out that the beables are also sitting there doing nothing when the wavefunction is in the ground state—at least in dBB, and I suppose also in Bell’s fermionic theory, although not in all possible beables theories that one could write down. Yeah, pretty obvious once you think about it.

    So the entire question of whether the vacuum is “teeming with activity” (as the pop-science writers insist) has nothing whatsoever to do with beables, and you were indeed—contrary to your protestation—trolling me for no reason. 😀

  232. Mateus Araújo Says:

    Oh, my social skills are terrible! Even when I’m explicitly trying to avoid a fight I still manage to make Maudlin angry. I should have simply not answered Mario Hubert, without saying why; it would have been much less contentious.

  233. Daniel Says:

    Apparently my comments regarding the relevance of our treatment of space-time in discussions comparing dBB and MWI have not made too much of an impression … I think, however, that there might be a useful clue there.

    So let me try again:

    When one says the dBB variable (particle) does not affect the “wave function” (more precisely, the quantum state), one has in mind a wave function (or quantum state) that follows a Schrödinger-like evolution, which is tied to a fixed background space-time. The particle is then guided by the wave function, but does not back-react on it (the wave function is unaffected by what the particle does).

    That is correct only under the assumption that the space-time is both treated as classical and fixed.

    But we know that is not how nature is, and consideration of the way we deal with that aspect, in my view, forces one to consider the manner in which both dBB and MWI could treat the dynamics of space-time itself.

    There seem to be two main bifurcations in our paths to deal with the issue.

    I) QUANTUM VS CLASSICAL SPACETIME

    Options

    Ia) Treat space-time in a classical language (as characterized, say, by the space-time metric of GR)

    Ib) Treat space-time in a quantum language.

    The problem with Ib) is that we do not know how to do it, and even if we did, the question of how to recover the usual notions from such a description is very problematic. Even relatively concrete proposals such as LQG or causal sets have a hard time doing so.

    The whole discussion above seems in fact to be taking for granted option Ia), which might or might not be taken as fundamental.

    II) HOW DOES MATTER GRAVITATE

    IIA) In dBB we have the wave function and the dBB variable(s)

    Options
    IIAi) Take the wave function to gravitate.

    IIAii) Take the dBB variable to gravitate.

    Option IIAi) seems problematic because the space-time will not adapt to the actual physical locations of objects (taken to be codified by the dBB variable). I think this would not be empirically viable.

    Option IIAii) seems more likely to work, although it poses serious technical difficulties (we would need to construct a conserved energy-momentum tensor out of the dBB variables, and that might not be so easy). The relevant point for the discussion, however, is that in this case (and in contrast with the situation where the dynamics of space-time is ignored) the dBB particle would influence the wave function through its influence on the space-time itself.

    IIB) In MWI we only have the wave function, so there are no options: it has to gravitate!

    However, a reasonable picture of that would seem to have a non-vanishing possibility of emerging only if we have “each branch” of the complete wave function controlling the corresponding space-time metric (I might be mistaken here, although this is what my intuition suggests). That would seem to require a quantum treatment of gravitation, which takes us back to the problems of Ib).

    In fact I think the problems would be exacerbated when one is forced to consider something like the splitting of the branches and the space-time that would be associated with that… but here I am without any further intuition.

  234. Zephir Says:

    The Schrödinger equation is the wave equation of an elastic string, the mass density of which at each time and space interval remains proportional to its energy density in these intervals. This is a small-scale analogy of the relativistic field equations, according to which the stress-energy tensor is proportional to the metric curvature tensor.

    If we lived like water striders or whirligig beetles at the water surface and observed the objects on it by their ripples, then we would soon realize that objects at short distances are blurred by omnipresent Brownian noise into a hydrodynamic analog of quantum uncertainty. We would also realize that the introduction of energy at some place would deform the undulating surface and slow the spreading of waves, so that the probability of objects occurring at that place would increase.

    Therefore the behavior of the quantum vacuum doesn’t actually differ from the behavior of any material environment observed by means of its own transverse waves. In the dense aether model the vacuum is formed by a foam which gets dynamically denser when shaken, in a similar way to soap foam shaken inside an evacuated vessel. It slows down the propagation of light around an object in motion, in a wake pilot wave, in such a way that the speed of light remains constant there. In this way quantum mechanics represents the extrinsic perspective, or intrinsic relativistic perspective, of deformed space-time.

  235. Per Östborn Says:

    There is more to Democritus:

    “Ostensibly there is colour, ostensibly sweetness, ostensibly bitterness, actually only atoms and the void”

    The senses reply to the intellect: “Poor intellect, do you hope to defeat us while from us you borrow your evidence? Your victory is your defeat.”

    (fragment 125)

    … and even if we close our eyes and imagine atoms or other beables, we need the afterglow of the senses to manipulate them in our mind – we cannot help giving them shapes, colours, or other sensory attributes.

  236. Tim Maudlin Says:

    Mateus # 229

    Curiouser and curiouser, as Alice said.

    Way back in the thread, at # 54, after I said the very same thing about PBR, Bunsen Burner wrote

    “Also, Tim makes some strong statements about the PBR results and statistical interpretations.”

    to which Paul Hayes #93 added:

    “It’s very charitable of you to say that Tim Maudlin is “making strong statements” about the PBR theorem. His claim that it’s “killed off the psi-epistemic category” (made intransigently here and elsewhere) is simply false. Fortunately for the reputation of philosophers, Matt Leifer’s not the only one who has taken the trouble to debunk that nonsense.”

    I did write the following reply to Hayes:

    “I said it and I meant it. I have, of course, read Matt Leifer’s post. If you would care to exposit how a psi-epistemic theory survives, please feel free. Only state what you yourself understand and are willing to defend.”

    which for some reason never got posted. So I never got any actual argument as to why this understanding of the PBR theorem is false. Leifer talks about the theorem presupposing the “ontological models framework”, but I cannot see that that is a contentful assumption of the theorem such that the conclusion of the theorem can be avoided.

    And now you also are assuring us in the strongest terms that this understanding of the theorem is wrong without providing any grounds for thinking it is wrong.

    It is not that I am angry: I am just perplexed. I can’t think of any reason for you and Bunsen Burner and Paul Hayes, all three, to both strongly assert that PBR doesn’t prove what it certainly appears to prove, and to strongly assert that both Mario and I are mistaken, without providing any argument to back that up. That is at least an odd way to proceed.

  237. RandomOracle Says:

    Tim Maudlin # 227 (hmm, the comment numbering is weird :-/)

    I too am curious about this, so let me see if I can help in clarifying the situation. Let’s take what Matt Leifer has said about PBR (in this really nice review on psi-ontology theorems, for example), since Paul Hayes claims that Matt Leifer has debunked the “nonsense” that PBR rules out psi-epistemic theories.

    What Matt Leifer says is that there are essentially 2 ways to still have psi-epistemic theories:

    1) You reject the preparation independence assumption of PBR. This assumption, which is essential in the proof of the theorem, basically says that things compose well under tensor product. More specifically, when two quantum states are prepared as a product state, their underlying distributions over ontic states are also in product form (the no-correlation assumption, NCA) and their corresponding ontic spaces compose under Cartesian product (the Cartesian product assumption, CPA).
    The paper I’ve linked to lists the criticisms against CPA and NCA, and I don’t think I could do them justice by summarizing them here.

    2) You take the anti-realist (neo-Copenhagen) psi-epistemic view. I.e. there are no objective properties of systems, there’s no ontological model. The wavefunction is a state of knowledge, but a state of knowledge about future measurements rather than about some observer-independent reality.

    Personally, I find the preparation independence assumption to be very reasonable and don’t really like the anti-realist view. So, psi-onticism it is for me. But, as you can tell, it is a matter of preference. If someone considers preparation independence or the existence of ontological models to be an unnatural requirement of the world, I can try to explain to them why I do find them natural, but I can also understand why they might not be convinced. That is to say, I don’t find the alternatives to be as far-fetched as, say, superdeterminism. So maybe “PBR rules out psi-epistemicism” is too strong a statement.

  238. Paul Hayes Says:

    Tim Maudlin #227±5

    The authors of the PBR theorem themselves point out in their paper that their theorem’s target is the ‘realist’ psi-epistemic subcategory of psi-epistemic interpretations and doesn’t – can’t – kill off the entire psi-epistemic category. Matt Leifer points it out in his blog post and in his review. As does Yemima Ben-Menachem in her paper (which I linked to above).

    The PBR theorem is irrelevant to most psi-epistemic interpretations. Unlike those weird ‘realist’ psi-epistemicists (arguably even weirder than psi-ontologists), [neo-]Copenhagenist psi-epistemicists aren’t trying to defy (quantum) probability theory. They don’t make one of the assumptions that the PBR theorem relies on: “that a system has a “real physical state” – not necessarily completely described by quantum theory, but objective and independent of the observer”.

  239. Mateus Araújo Says:

    Maudlin #228:

    Ok, since you insist, I’ll explain what is so wrong with the statement

    “It is a matter of fact that quantum states are objective because of the PBR-theorem.”

    First of all, this is not what I was talking about. In interpretations the objective/subjective division is applied to realist versus non-realist interpretations, with dBB, MWI, and collapse models on one side, and QBism/Copenhagen on the other. You might complain that it doesn’t make sense to have a subjectivist interpretation that is not \(\psi\)-epistemic, and I do agree with that, but the subjectivists are adamant that they are not \(\psi\)-epistemicists, and if you’re not going to stick to standard terminology you’ll be unable to talk to anyone.

    The second problem is that the PBR theorem needs its assumptions, in particular the assumption that the probability distribution associated with a pair of states that were independently prepared is the product of the individual probability distributions. I really don’t think it is tenable to say that “it is a matter of fact” that this assumption is true. And we do know that without it the theorem fails, as shown by the Lewis et al. \(\psi\)-epistemic model.
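
    In symbols (just a sketch of the usual formulation, where \(\mu_\psi(\lambda)\) denotes the distribution over ontic states \(\lambda\) associated with preparing the state \(\psi\)): for two systems prepared independently in \(\psi_1\) and \(\psi_2\), preparation independence requires

    \[ \mu_{\psi_1 \otimes \psi_2}(\lambda_1, \lambda_2) = \mu_{\psi_1}(\lambda_1)\,\mu_{\psi_2}(\lambda_2), \]

    with the joint ontic space taken to be the Cartesian product of the individual ones. It is exactly this product structure that the Lewis et al. model gives up.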

    Finally, it’s not as if \(\psi\)-epistemic models were viable to start with! Please correct me if I’m wrong, but I think the first complete \(\psi\)-epistemic model was the one created by Lewis et al.! Before that we only had the Bell and Kochen–Specker models, which were valid only for d=2. The reason people were so unexcited about them is that any \(\psi\)-epistemic model needed to be nonlocal, due to Bell’s theorem. Well, if you are going to have nonlocality anyway you might as well go for a \(\psi\)-ontic model, such as dBB, and save yourself the trouble. (I’m saying this because Einstein’s original motivation to go for \(\psi\)-epistemic models was to get rid of nonlocality.)

    Now about your statement that “the psi-epistemic category was killed off by PBR”, I cannot read Bunsen Burner’s and Paul Hayes’s minds to know what they found so objectionable about it, but I can tell you my own opinion. Sociologically, it’s true that Leifer and Spekkens were very interested in \(\psi\)-epistemic models, and abandoned their attempts after the PBR theorem. But it’s also true that the Lewis et al. model arose directly because of it, so it wasn’t such an efficient killing off.

    I do find it ironic, though, that you would make such a categorical statement about \(\psi\)-epistemic models, after spending a lot of time correcting people that categorically stated that “Bell’s theorem killed off hidden-variable theories”.

  240. Mateus Araújo Says:

    Scott #215:

    Yes, the main point was to be a material medium for the propagation of light. One can then talk about the reference frame in which this medium is at rest, and about motion relative to this medium. If you look at the section in this very article about ether drag, you’ll see that the whole pre-Michelson–Morley discussion was about how to understand motion relative to the ether.

    Now, post-Michelson–Morley, you get the Lorentz ether theory, which is precisely a theory with a preferred frame of reference and absolute time. This was the last ether theory to have defenders. It was already rather silly, as the ether didn’t have any substantiality to it, and was used just as an interpretational tool. Appropriately, it was dropped like a dead rat when special relativity appeared.

    Now, you are proposing the ether as something without substance and without even an interpretational consequence, beyond this woolly “providing a medium for the propagation of light”. Apparently this was only suggested by Einstein and picked up by nobody. Such a lame medium plays the same role as the vacuum in being a medium for the propagation of particles, and is emphatically not what physicists mean when they talk about media for the propagation of sound waves – which is the kind of medium the ether theorists had in mind in the 19th century.

    But I’m not sure much hinges on this historical question. You might as well replace “luminiferous ether” with “preferred reference frame” in my comment 173, and the same argument follows.

  241. David Pearce Says:

    Tim Maudlin #154 notes, “if you end up thinking you have to solve the mind-body problem to do physics you have probably taken a wrong turn somewhere”.
    Quite so. Yet can researchers make progress on the correct interpretation of QM _without_ taking a stance? “Obvious”, innocent-seeming implicit assumptions can be the most treacherous. Would you disagree with, e.g.
    http://www.ijqf.org/groups-2/2016-international-workshop-on-quantum-observers/forum/topic/the-measurement-problem-revisited/
    (“…the measurement problem in quantum mechanics is essentially the determinate-experience problem. The problem is to explain how the linear quantum dynamics can be compatible with the existence of our definite experience. This means that in order to finally solve the measurement problem it is necessary to analyze the observer who is physically in a superposition of brain states with definite measurement records.”)

    So why do we have determinate experiences?
    My ideas are idiosyncratic (cf. https://www.quora.com/What-is-the-Quantum-Mind).
    But then what is orthodoxy?
    Intuitively, all the options are crazy IMO.

  242. RandomOracle Says:

    Mateus Araújo #48

    Sorry for the huge delay in the reply.

    Personally, I have a hard time understanding what objective probability means. I would think that it’s “probability independent of observers”. I.e. if the probability of some measurement outcome is p, then if one were to repeat that measurement, under the exact same conditions, many many times, the frequency of that particular outcome approaches p. But, crucially to me, *objective* probability should somehow make sense independent of any observers or agents. It shouldn’t be something that an agent ascribes to a system. (As a side note, objective probabilities to me can only make sense under a frequentist interpretation of probability; however, I know there’s such a thing as objective Bayesianism but I don’t really understand it. Do you? And if so, could you explain it to me? :D)

    On the other hand, one could say that objective probability refers to probabilities that all observers or agents can agree on. I.e. anywhere in the universe, at any time, if someone prepares the state a|0> + b|1> and measures it (in the |0>, |1> basis) they will observe the outcomes |0> and |1> with the respective probabilities |a|^2, |b|^2.
    Now, I do have some reservations about calling this truly objective, because in non-local hidden-variable interpretations of QM (like dBB) the probabilities arise from a lack of information (it’s just that everyone is lacking it). At the ontological level, things are deterministic and there are no probabilities. So, to me, these are subjective probabilities stemming from incomplete knowledge.

    So, if as you say, in Kent’s world people live in a *deterministic* computer simulation, I would not call those objective probabilities.

  243. Mateus Araújo Says:

    RandomOracle #234:

    “Personally, I have a hard time understanding what objective probability means.” You’re not the only one. There exists no proper definition, philosophically speaking. Frequentism is as ruled out as anything can be ruled out in philosophy, and “objective Bayesianism” does not exist, as far as I know. The closest thing I can think of is Lewis’ excellent “A Subjectivist’s Guide to Objective Chance”, which does not try to define objective probability, but only to show how it fits within Bayesian probability.

    I think people can agree that whatever objective probability is, it must be a probability that all observers or agents can agree on, as you say. It cannot depend on anyone’s knowledge, and no one can be capable of predicting the outcome of the experiment.

    The fraction n/(m+n) in Kent’s world does fulfil this requirement: you already know everything about the simulation, you know the number of worlds that will be created with either outcome in it. There simply isn’t a matter of fact about what will be the “true outcome” of the experiment, so it doesn’t make sense to try to predict what it will be. This fraction also respects the law of large numbers, because if one repeats the same experiment a large number of times, then the relative frequency approaches the fraction n/(n+m) in a very large fraction of the worlds.

    Of course, one could also make a Bohmian interpretation of Kent’s world to pretend one is in a single world and these probabilities are subjective, but at least in this case such an interpretation is clearly not warranted by the physics.

  244. Tim Maudlin Says:

    Random Oracle, Paul Hayes, and Mateus,

    That’s it? Then this really is exactly like the Bell deniers! I thought it might be something serious.

    We have been arguing for months with ‘t Hooft about this, so I’ll just summarize. No need to replicate that argument here. If you want to go either of these routes… well, it’s a free country.

    If you are willing to deny the statistical independence postulate then of course there can’t be any experimental proof of anything. You can deny that all of the experiments ever done prove that smoking causes cancer, or even make that claim plausible. Maybe statistical independence fails! When “randomly” assigning the population in a controlled experiment to the experimental and control groups, the ones already predisposed to get cancer always go into the experimental group. No matter how the sorting is done. Yeah, that is a logically possible way out, one that throws all of scientific method under the bus just to avoid a psi-ontic theory. That’s Oracle’s option #1.

    Oracle’s option #2 is, amazingly, even worse! You don’t just throw the scientific method under the bus, you throw the whole idea of physical reality under the bus! In the locality “debate” I call this option “Well, there is no physical reality but thank God it’s local!”

    Oracle is not inclined to accept either Option 1 or Option 2, but goes on to say that this is “a matter of preference”, and then says “I don’t find the alternatives to be as far-fetched as say superdeterminism”. Option 1 just is superdeterminism! We had to get ‘t Hooft straightened out about the terminology here as well, but what Bell meant by “superdeterminism” is exactly the denial of statistical independence! Not the best name. I prefer “hyperfine tuning”. Some people call this a “conspiratorial theory”, which is OK. But however you phrase it, it is an option that undercuts all experimental scientific method.

    And option 2 is even more radical than that. The denial that there even exists any physical reality independent of observers. Nothing was happening at all 13 billion years ago, when there were no observers. The cosmologists will object, I think! This path leads straight to Wheeler’s “self-excited circuit” picture: observations by late-time conscious individuals bringing into existence the very “physical history” that allowed them to evolve! So you bundle together Wigner-style collapse theory, retrocausation, and idealism to get one unholy mess.

    If that is all you mean by objecting to PBR—that you are willing to chuck out the scientific method, and even chuck out the notion of physical reality itself just to avoid psi-onticism—OK. Fine. Just like the folks who are willing to do this to avoid non-locality. But at least we can agree that anyone wedded to the scientific method and to the notion of a mind-independent physical reality must both accept non-locality and be a psi-ontologist. That’s good enough for me.

  245. fred Says:

    I think the paradox comes from the assumption that concepts such as “agents” and “beliefs” are real, that those things do exist as sources of causality independent of their environment, when in fact there’s no such thing.

    E.g. when considering living organisms, we regard them as systems that try to simulate their own environment in order to maximize their chance of survival. For example an eagle that’s diving down on a rabbit is using its brain to evaluate the most probable path its prey will take.
    The paradox is that (ignoring QM randomness) the dynamics of the system “eagle + its brain + rabbit” is entirely dictated in a deterministic way by the basic rules of particle physics. There’s really no room for choice. Natural selection did lead to the development of brains, but the conceptual boundaries we draw assume it’s okay to pretend a sub-system can be isolated from its environment or that a system can be simulated from the inside.

  246. Mario Hubert Says:

    Mateus #231

    That’s a post that we can work with! Why did you feel the urge to garnish it with personal insults?

    Sure, every theorem has its assumptions, and PBR are explicit about what they are. One of these is statistical independence (or preparation independence), the same kind of independence that is presupposed in Bell’s theorem. Now, giving up this assumption would overthrow all our scientific methods and our scientific practice. Violation of statistical independence sounds at first sight like a tenable option: What’s the matter if two quantum systems are correlated even if I prepare them independently? But this move has such profound consequences that they don’t trade off well against securing psi-epistemic wave-functions. It’s not surprising at all that Lewis et al. can retain psi-epistemic wave-functions when they violate statistical independence.

    The other route you mention is to have subjective wave-functions that are not psi-epistemic. You disagree with this circumvention of the PBR-theorem, and that’s good because this stance is either incoherent or denies an external reality.

    So where is our disagreement then? Do you adhere to a violation of statistical independence? Or are there more “mistakes” in my statement about the objectivity of the quantum state as you indicated?

  247. Mario Hubert Says:

    RandomOracle #234

    Mateus #235

    Neither probabilities in classical mechanics nor probabilities in Bohmian mechanics are subjective! I repeat: THEY ARE NOT SUBJECTIVE!!! THEY ARE OBJECTIVE!!!

    First and foremost, both theories are deterministic. So we need to analyse how the probabilities enter these theories, and they do so in exactly the same way! Let’s go to the basics and toss a coin. We know that for long series of coin tosses the distribution of heads and tails approaches one-half. This one-half is the probability of landing heads or tails. Let’s say you were able to know the exact initial positions and velocities of all the particles composing the coin and the device you throw the coins with right before every coin toss (presupposing classical mechanics). Would this knowledge in any way alter the statistical distribution of heads and tails? The statistical distribution of the coins doesn’t care about what agents know about the coin. This distribution is objective! By the law of large numbers this distribution is stable for very long series. So the probability of one-half is an objective probability arising from an analysis of the deterministic dynamics.
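
    (A toy illustration, not an argument: stable frequencies can fall out of completely deterministic arithmetic with no agent’s knowledge anywhere in sight. The little Python sketch below counts the 1s among the first hundred thousand binary digits of \(\sqrt{2}\); that the frequency hovers near one half is here just an empirical observation, since the normality of \(\sqrt{2}\) is only conjectured, but it shows a frequency that is a fact about the sequence rather than about anyone’s beliefs.)

      from math import isqrt

      # Deterministic "coin tosses": the binary digits of sqrt(2).
      # isqrt(2 * 4**N) = floor(sqrt(2) * 2**N), so its binary expansion is a
      # leading 1 followed by the first N fractional bits of sqrt(2).
      N = 100_000
      bits = bin(isqrt(2 << (2 * N)))[2:]
      freq = bits.count("1") / len(bits)
      print(f"relative frequency of 1s over {len(bits)} digits: {freq:.4f}")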

    A rational agent would adjust her credence to the probabilities given by physics. So a rational agent uses the objective probabilities of physics to adjust her state of belief accordingly.

    We now see, Mateus, that your statement that “Frequentism is as ruled out as anything can be ruled out in philosophy” is totally unfounded. Rather, frequentism is required by deterministic physical theories and backed up by mathematical theorems.

    I have the impression that you misunderstood Lewis. For Lewis objective chances are those that appear in stochastic theories. They are also objective because they are part of the Humean best-system of laws. Within Lewis’s metaphysics, probabilities have two roles. First, they are a measure of fit, that is, the laws that assign higher probabilities to actual events fit our world better. Second, they determine the credence of rational agents by means of the Principal Principle.

  248. RandomOracle Says:

    Tim Maudlin #235

    According to the paper I’ve linked to, superdeterminism is an issue independent of the preparation independence assumption. All ontological models make the assumption that Pr(λ|ρ,M) = Pr(λ|ρ). I.e. that the probability of having a certain ontic state, λ, should only depend on the quantum state ρ and be independent of the measurement being performed on that state. To quote from the paper: “Theories in which dependence of λ on M nevertheless still holds in the underlying ontology are often called superdeterministic”. That is the sense of superdeterminism to which I was referring.

    An explicit psi-epistemic model, by Emerson, Serbin, Sutherland and Veitch, which does not satisfy preparation independence and is also not superdeterministic in the above sense, is given. Basically, they gave weaker versions of CPA and NCA in which, when composing two systems, A and B, the joint ontic space is not just the Cartesian product of the ontic space of A and the ontic space of B, but is also in Cartesian product with another space representing global degrees of freedom not reducible to properties of system A and system B alone. The PBR theorem does not hold for this model.

    Btw, I’m not trying to defend either of the two options, I just wanted to state what they are. Also, I should emphasize that they aren’t my options, they are the options presented by Matt Leifer in that paper 🙂

  249. Paul Hayes Says:

    Tim Maudlin, #235

    The denial that there even exists any physical reality independent of observers. [..]

    The denial that there exists a classical underlying sample/event space for the probability theory which physics needs for a good description of reality (and which happens not to have such a thing) is hardly that. Of course it’s silly to deny the existence of an ‘objective’ (not to be confused with ‘observer-independent in every respect’) physical reality. If you “draw that moustache” on the (Bohr) CI, or any neo-CI refinement of it, of course it looks silly. So please don’t. Philosophers such as Ben-Menachem and the ones who wrote/contributed to this article seem able to resist the urge.

  250. RandomOracle Says:

    Mario Hubert #238

    What would count as subjective probability in your view?

    In a deterministic universe, the outcome of a coin flip is fully determined by the initial conditions and the dynamics in that universe. If an agent knows, completely, the initial conditions and the dynamics then they can predict the coin toss outcomes with certainty. On the other hand, an agent not knowing the initial conditions will not be able to predict the outcomes with certainty. So I agree that “the frequencies of heads and tails outcomes approach 1/2 as the number of tosses goes to infinity” is objectively true. However, the probability that each agent would assign to a particular coin toss is subjective (depends on the information each agent has about the initial state). The first agent will assign probability 1 to the outcome that will happen on that coin toss, whereas the second agent will assign something else (presumably 1/2 if he knows nothing about the initial state).

  251. Mateus Araújo Says:

    Maudlin #236, Hubert #238:

    I find violation of PBR’s preparation independence assumption as contrived as a violation of Bell’s (or CHSH) locality condition, so I find it very amusing to see you defending the latter while vociferously denouncing the former.

    Neither is nearly as bad as superdeterminism, though. They are, in any case, logically independent.

  252. Mark Gomer Says:

    Scott, you may be interested in arXiv:0810.0613, which explains how an ether reappears in “condensed matter approaches” to quantum gravity. The basic principle is that Michelson–Morley experiments assume light propagates through the ether while matter is some totally different substance coexisting with the ether; but if matter is treated as a different type of excitation of the same ether, this leads to the conclusion that “light propagates through something like an ether” is compatible with the results of MM experiments.

  253. Mateus Araújo Says:

    RandomOracle #242, Mario Hubert #239:

    You are both making the same mistake in the statement of the law of large numbers. It is not true that the relative frequencies will converge to the probabilities in the infinite limit. What is true is that the relative frequencies will converge to the probabilities in the infinite limit with probability one. And this is the statement of a mathematical theorem, independent of any interpretation.

    This qualification, with probability one, is what makes frequentism untenable. Every single infinite sequence is still possible. Even the one which consists of heads, heads, heads, heads…
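
    For definiteness, the strong law of large numbers says that for i.i.d. coin tosses \(X_1, X_2, \ldots\) with \(\Pr(X_i = 1) = p\),

    \[ \Pr\!\left( \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} X_i = p \right) = 1, \]

    and that outermost \(\Pr\) is precisely the qualification I’m pointing at: the convergence holds only outside a set of exceptional sequences of measure zero.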

  254. Mario Hubert Says:

    RandomOracle #241

    Subjective probabilities are not really probabilities for an event to occur but rather a quantification of an agent’s state of belief about the occurrence of this event. They quantify how confident an agent is that some event may occur. So different agents may disagree on their state of belief if they have different knowledge. It may also happen that two agents disagree on their state of belief, although they have the same knowledge, but evaluate their knowledge differently. Another feature of subjective probabilities is that they disappear once there are no agents.

    Your example illustrates this. The two agents have different knowledge: one knows the exact initial microstate, while the other does not. Therefore, the subjective probability for one agent is 1, and for the other it is one half. These evaluations of the situation by the agents don’t in any way affect the outcome of the coin toss, and they do not affect the objective frequentist probability of one half. It may sound paradoxical that the subjective probability is 1, while the objective probability is one half. But beware that the objective probabilities in a deterministic theory, as I defined them, depend on the behavior of large collections of coins; they don’t say anything about the outcome of a single coin toss. In contrast, a subjective probability (or a state of belief) can be assigned to a single coin toss (and in general to all kinds of singular events).

  255. Mario Hubert Says:

    Mateus #242

    That’s very interesting what you write!

    You don’t want to violate statistical independence, and you don’t want to give up realism. But then you must conclude that all wave-functions are psi-ontic. There is no way out!

    How in the world do you retain locality, while saving statistical independence?

    To clean up terminology: superdeterminism = statistical dependence = preparation dependence (unless you want to endorse retrocausation)! So you can’t say that superdeterminism is worse than statistical dependence.

  256. Mario Hubert Says:

    Mateus #245

    Having an outcome of a certain number of heads in a row does not undermine a frequentist understanding of probabilities based on the law of large numbers. In philosophical terminology, a hypothetical frequency account of probabilities is not threatened by actual frequencies that don’t match the hypothetical ones (i.e., the probabilities). Trivially, if I were to toss a fair coin an odd number of times, I’d never ever get a distribution of exactly half heads and half tails. So what?

    All that I mean by a probability of 0.5 is that in almost all series of coin tosses the distribution of heads and tails approaches 0.5 the bigger the series gets. Beware of the “almost all”! This is what you referred to with “with probability 1”—honestly, I don’t understand what probability means here, but that’s a different issue. Sure, there are exceptions, namely the ones you mentioned, but these exceptions have measure zero. So who cares?

    What would be your alternative idea of probabilities in deterministic theories?

  257. Mario Hubert Says:

    Mateus #8

    I’ve just spotted your earlier post that there is no philosophical understanding of objective probabilities. Contrary to your allegations, there is a huge literature on this topic that shows that philosophers have a pretty good grasp of what objective probabilities are. The crucial distinction one needs to make is that objective probabilities appear in two guises: either in a stochastic theory or in a deterministic theory. Then one can proceed with the analysis of the meaning of these probabilities. It seemed that for you objective probabilities appear only in stochastic theories.

  258. Tim Maudlin Says:

    David Pearce # 238

    I completely disagree with the claim that the measurement problem is the “determinate experience” problem. I would characterize it rather as the “determinate cat” problem, just as Schrödinger did.

    Suppose we actually did the experiment Schrödinger describes and suppose we were very fond of the cat. Then we would be quite nervous, waiting to find out if the cat lived or died. Our natural, everyday understanding of the world is that at the end of the day, there will only be one cat—the very cat we put in the apparatus at the beginning—and that very cat will either be alive or it will be dead. That seems to be just a manifest fact about the physical world, as clear as any physical fact can be. And the “measurement problem” is the problem of accounting for that physical fact, since it is the aim of physics to account for every physical fact.

    Note: I may well have come to have the expectation that the cat will end up alive or dead because of my experience, but it is not in any way a claim *about* my experience. If we seal the cat in a spaceship and send the spaceship into interstellar space—so I am certain that neither I nor anyone else will experience whether the cat lives or dies—that makes not a speck of difference. In the usual view of the world, the cat will either end up dead or alive. That is a physical fact that physics must account for.

    Now there is a radical way not to solve but rather dissolve this problem. That is, of course, Many Worlds. That theory simply denies that as many cats come out of the experiment as went in. It denies that our everyday beliefs about macroscopic physical items are correct. So one has to discuss Many Worlds and its problems separately.

    But nothing I have said is about any determinate experience of anything.

    You might ask: but as an empirical theory doesn’t physics have to make predictions about experience? And the answer is: since it never has, obviously it doesn’t have to.

    And then: but surely experience plays some essential role in physics! And the answer is of course, but not where you think it does.

    The best that any physical theory has ever been able to do is to make predictions about the behavior of macroscopic physical objects. It is a simple fact that experimental reports—the data reported by experimentalists—either are about or were recorded in the structure and behavior of some macroscopic physical objects. (Otherwise, how did the experimentalists come to know what the results were?) The theory meets data there: the meeting point is the behavior of some macroscopic physical objects that the theory predicts and the experimentalists report. No mention of anyone’s experience. Certainly not the experimentalists’! They report that the pointer went to the right, not what their experiences at the time were. (Maybe they were hung over, for example, and had a pounding headache. That does not go into the data.) And the assumption of all physics is that in certain circumstances, people are very good at coming to know and reliably report the behaviors of the macroscopic objects around them. That is why we trust the experimentalists’ reports. But there is no explicit mention of experience in any of this, much less an attempt to predict or account for anyone’s experience!

    There is a reason that Bohr insisted that experiments *must* be described and reported in “classical language”. That just is the language of macroscopic objects with shapes that move.

    Can you imagine opening a physics book and finding this problem:
    A solid ball of radius 5″ and a cylinder of radius 4″ roll without slipping down an inclined plane at 45°. The plane is 20″ high. Describe the experience of someone in the room.

  259. Tim Maudlin Says:

    Random Oracle #245 and anyone else

    I propose that we all agree to avoid the term “superdeterminism”. It is, simply put, a mess. In our very, very long discussion with ‘t Hooft (Part 1 of 7(!) here: https://www.facebook.com/tim.maudlin/posts/10155641145263398)
    it took a while to figure out that what he thought “superdeterminism” meant was that human bodies, as well as everything else, are governed by a deterministic physics. So he concluded that he could avoid Bell’s result by denying that people have free will(!). Bell did use the term “superdeterminism” in this context to mean “Failure of statistical independence of the ‘free variables’ from the state of the particles when they meet the detectors”. In his case the free variables are the settings of the spin measurement devices. Bell’s discussion of the status of this hypothesis is in his paper “Free variables and local causality”, Chapter 12 of Speakable and Unspeakable.
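
    (In the notation usually used for Bell’s theorem, and only as a sketch, with \(\rho(\lambda)\) the distribution of the particle state and \(a, b\) the detector settings, Statistical Independence is the requirement

    \[ \rho(\lambda \mid a, b) = \rho(\lambda), \]

    i.e. the distribution of the particle state does not depend on how the “free variables” are set.)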

    Since there are innumerable different “physical randomizers” that can be used to determine these settings, denial of Statistical Independence means that all of them are somehow always sensitive to the state of the incoming particles, even before they have arrived.

    One way to try to arrange such dependency is through retrocausation. But ‘t Hooft explicitly denies there is any retrocausation in his theory. And he denies that there is any non-locality. But what we know is that no such theory can return the predictions of QM.

    PBR has a similar statistical independence assumption. It comes down to this: given a set of identical-looking boxes containing particles in different states, it is possible to choose randomly among the boxes. So if a fraction p of the boxes contain a particle in a given state, then in the long run one will pick two of those boxes in a row about a fraction p^2 of the time.

    Bell shows that Statistical Independence + Locality implies a certain inequality for the observed results. That inequality is reported as violated in the lab. If we take those results at face value (i.e. we do not consider a Many Worlds splitting), then any theory must violate one of those two principles. Since denying Statistical Independence would undercut scientific method, Locality must be denied. (Contingent on a discussion of Many Worlds, which must be examined.)
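
    (Concretely, in the CHSH form the inequality bounds a combination of measured correlations,

    \[ |E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2, \]

    whereas quantum mechanics, and the reported experiments, reach values up to \(2\sqrt{2}\).)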

    PBR show that from statistical independence alone one can demonstrate that quantum predictions require that any systems prepared to be in different pure quantum states are, indeed, physically different. Since different wavefunctions imply a real physical difference, the wave function of a system corresponds to an objective physical difference between the systems.

  260. Tim Maudlin Says:

    Mateus, Random Oracle, Paul Hayes

    OK I have read or skimmed all of the articles that you have linked which purport to show some flaw or loophole in PBR. We agree that one of the assumptions is a statistical independence assumption akin to the assumption made by Bell. The denial of that assumption is sometimes called a “conspiratorial” theory. Retrocausalists aim to have an account of why statistical independence can fail without anything deserving the name “conspiracy”: the “initial” state preparation is causally influenced by the *later* measurements that are made on the particles! Think what one may of that, the solution does not yield a Bell-local theory, since causes from the future are as bad as causes from a space-like separated event as far as Bell’s definition goes. If you rule out retrocausation, then the denial of Statistical Independence is just bizarre and ad hoc and unscientific.

    So the structure of Bell is Locality + Statistical Independence => Bell’s inequality. Since quantum mechanics predicts, and much more importantly nature displays, violations of Bell’s inequality, we are forced to deny either Locality or Statistical Independence. Since it is unscientific to deny Statistical Independence, we have to deny Locality if we are going to do science at all.

    The structure of PBR, in contrast, is Statistical Independence + Psi-Epistemicism => predictions contrary to the predictions of quantum mechanics. I have no idea if the relevant experiment has been done, but everyone expects the predictions of quantum theory to be correct. So our choice is to deny psi-epistemicism or to deny Statistical Independence, which, as usual, is unscientific. So now we are forced to deny psi-epistemicism and become psi-ontologists.

    Between the two of them, they force us to take both the wave function as describing some real characteristic of the individual systems, and force us to accept non-locality.

    Now: I would be happy to discuss the papers, if someone thinks there is a decisive passage. I can report that the paper by Emerson, Serbin, Sutherland and Veitch introduces a completely unexplicated notion of “composing” a pair of systems that has no clear physical content. And there is a lot of gesturing towards the “Copenhagen interpretation” without any clear statement of what that is. I hope we can keep any discussion focused.

  261. Mario Hubert Says:

    Tim #250

    Yes, I agree to drop the term superdeterminism. It’s a source of confusion, and it’s also a very bad choice of terminology for the thing it should actually stand for.

  262. Paul Hayes Says:

    Tim Maudlin, #251

    No-one’s purporting to show some flaw or loophole in PBR. The flaw – the non sequitur – is in your claim that it forces or should force us all into the Church of Psiontology. The objectivity in certain probabilistic descriptions of physical systems doesn’t compel a neo-CIer any more than does the objectivity in certain (Bayesian) objective priors. Just as they’re not spooked by non-separability in probability (even when people insist on calling it “nonlocality”), neo-CIers aren’t spooked by objectivity in it.

  263. Paul Hayes Says:

    BTW, I urge anyone unaware of what I meant by that last remark about “nonlocality” to read §7.1 of the article by Summers which I linked to earlier. See how bizarre Tim’s exhortation to “deny Locality”:

    So the structure of Bell is Locality + Statistical Independence => Bell’s inequality. Since quantum mechanics predicts, and much more importantly nature displays, violations of Bell’s inequality, we are forced to deny either Locality or Statistical Independence. Since it is unscientific to deny Statistical Independence, we have to deny Locality if we are going to do science at all.

    (instead of just accepting the existence of non-separable states) is in the context of a sensible telling of Bell.

  264. Mateus Araújo Says:

    Maudlin, Hubert, several numbers:

    If you really think that PBR’s preparation independence is equivalent to no superdeterminism, you should write a paper about it. I’m not saying this ironically. This would be a great result, that would change people’s understanding of the PBR theorem. But as far as I know this is claimed nowhere in the literature.

    In the meanwhile, I find it irresponsible to use both terms as synonyms. This only serves to confuse people about what you mean. Standard nomenclature should be used not because it’s good, but because it’s standard! Even though hot dogs are probably made of horse, and not dog, calling them hot horses is not a good idea.

    I am sympathetic to getting rid of the term “superdeterminism”, though, as the condition has nothing to do with determinism: it is possible to violate it in a deterministic theory, and it is possible to respect it in an indeterministic theory. Furthermore, people are often confused about it, as your example with ‘t Hooft illustrates. When I’m talking about it I just call it “conspiracy” instead, while noting that it is usually known as “superdeterminism”.

  265. David Pearce Says:

    Tim Maudlin #251

    When awake, does each of us:
    (1) directly perceive a macroscopic physical world where laboratory apparatus has determinate pointer-readings, cats are manifestly alive or dead (but never alive-and-dead), and well-localised friends report on the health status of our pets?
    Or
    (2) run a quasi-classical world-simulation, subjectively experienced just as (1), that tracks fitness-relevant patterns in the hypothetical mind-independent world?

    I agree with you: contrived circumstances aside, neither the perceptual direct realist (1) nor the world-simulationist / inferential realist about perception (2) talks explicitly about their experiences of laboratory apparatus, solid balls rolling down inclined planes (etc).
    But whether (1) or (2) is true is highly relevant to the correct interpretation of QM. Only if (1) is true can the mind-body problem be quarantined from the interpretation of QM, as you propose.

    [For what it’s worth, IMO perceptual direct realism is false. Only the universal validity of the superposition principle allows the CNS to run a robustly classical-seeming world-simulation. But the theoretical sub-femtosecond lifetime of neuronal superpositions assuming unitary-only QM makes this view far-fetched, to say the least.]

  266. Mateus Araújo Says:

    Hubert #247:

    With the Many-Worlds interpretation one can respect locality and violate Bell inequalities, while not requiring any conspiratorial assumption such as “superdeterminism” or “preparation dependence”. It is one of the main reasons I favour the MWI.

    But you seem to misunderstand me: I think the quantum state is clearly objective, I think the preparation independence assumption is very reasonable, and I like the PBR theorem. I just object to such a categorical statement that “the PBR theorem proves that quantum states are objective”, without citing the assumptions.

    And for me personally, \(\psi\)-epistemic models were already ruled out by Bell’s theorem, together with any deterministic hidden-variable theory, as they must be nonlocal. So the PBR theorem didn’t change anything for me. Still, I wouldn’t state that “Bell’s theorem rules out Bohmian mechanics”, because I know that there are people who think that nonlocality is reasonable, even though I disagree.

  267. Mateus Araújo Says:

    Hubert #248,249:

    It rules out frequentism because it makes it undefined. I cannot say that the probability is the limit of relative frequencies as the number of trials goes to infinity if this limit does not exist.

    The “almost all” and “with probability 1” are equivalent, because the measure according to which “almost all” infinite sequences have the correct relative frequency is given by the probability itself.

    To make the limit exist, I then need to assume that the probability of getting heads in an individual trial is p. I can then prove that

    \(\text{Pr}\Big( |f_n - p| \ge \varepsilon \Big) \le \frac{p(1-p)}{n\varepsilon^2}, \)

    where \(n\) is the number of trials, and \(f_n\) the fraction of heads.
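
    As a side note, a quick Monte Carlo run (my own check, assuming independent Bernoulli(p) trials, which is what the bound presupposes) confirms that the empirical frequency of large deviations stays well below the stated Chebyshev bound:

    ```python
    # Sketch: empirical Pr(|f_n - p| >= eps) versus the bound p(1-p)/(n*eps^2).
    import numpy as np

    rng = np.random.default_rng(0)
    p, n, eps, reps = 0.5, 1000, 0.05, 20000

    flips = rng.random((reps, n)) < p      # reps independent runs of n coin flips
    f_n = flips.mean(axis=1)               # fraction of heads in each run
    lhs = np.mean(np.abs(f_n - p) >= eps)  # empirical deviation probability
    rhs = p * (1 - p) / (n * eps**2)       # Chebyshev upper bound

    print(lhs, rhs)  # lhs comes out far below rhs (= 0.1) here
    ```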

    But to use this limit to define the probability would be circular. Because of this circularity, frequentists do not define the probabilities as this limit, but go for weirder and weirder stuff. Take a look here to see what they actually defend.

    To answer your question, deterministic theories cannot have objective probabilities, only subjective ones. You are the first person I have seen have a problem with this statement; I thought it was blindingly obvious. Objective probabilities can only appear in quantum mechanics.

    You’re claiming that philosophers understand objective probabilities well, but you didn’t say which interpretation of probability you think can make sense of them.

  268. John DeBrota Says:

    Scott:

    In comment #76 of your January 29th post you write: “On the other hand, I have to confess that I recoil at the radical subjectivism inherent in the “QBist” philosophy, the refusal ever to say anything about what’s the actual state of the world. I.e., if quantum states are just personal knowledge assignments, then what are they knowledge about? And how could you treat a quantum state as just your personal knowledge assignment, with no “ontic” reality behind it, if (using some far future technology) you yourself were being manipulated in a superposition state by someone else? Or does such a scenario not even make sense? Whatever the answer, stick your damn neck out and say something about it!”

    It is true that QBists refuse to make an upfront definite claim about what the stuff of the world is. On the other hand, they are clear about what the “knowledge is about” (also, QBists now eschew “knowledge” for “belief”): A quantum state encodes a user’s beliefs about the experience they will have as a result of taking an action on an external part of the world. Among several reasons that such a position is defensible is the fact that any quantum state, pure or mixed, is equivalent to a probability distribution over the outcomes of an informationally complete measurement. Accordingly, QBists say that a quantum state is conceptually no more than a probability distribution. Okay, fine, but what is the stuff of the world? QBism is so far silent on this issue, not because there is no stuff of the world (in fact, it explicitly posits that there is stuff of the world, contrary to the charge of solipsism by Atreat in comment #77 of the other thread and by Tim Maudlin in comment #7 of this thread), but because it is simply not yet known. Answering this question is the goal, rather than the premise. Is this an unacceptable weakness of the interpretation? Well, that’s a matter of opinion, but my position is that it is not. Must we demand that a complete ontology be laid out before one’s ramblings graduate to the status of an “interpretation?” If taken to the extreme, this is clearly unfair: One might claim that no one has a qualifying interpretation because we don’t have a successful formalism for quantum gravity and so every proposed ontology necessarily fails. More practically, feeling pressured to commit to an ontology prematurely may leave us unable to imagine one which departs sufficiently from classical intuitions. Why not see if the right ontology can be teased out from the formalism itself and a principled stance on the meaning of its more familiar components (such as probability distributions)?

    The QBist answer to your scenario is quite simple: My quantum states are mine, your quantum states are yours. If someone else considers a quantum system containing me and ascribes to that system a quantum superposition state, so be it. That is their quantum state assignment. It doesn’t make sense for me to assign myself a quantum state if a quantum state is an encoding of my own beliefs for the outcomes of my freely chosen actions. If something feels off about this answer, consider whether you are assuming that there is a “correct,” i.e. purely physically mandated, quantum state in the scenario you’re describing. For a QBist, there is never such a quantum state just as in personalist probability theory there is never an ontologically “correct” probability distribution. The answer to the question “What does it feel like to be in a quantum superposition?” is the same as the answer to the question “What does it feel like to be in someone else’s probability distribution about me?”.

  269. Tim Maudlin Says:

    David Pearce # 252

    I am not sure what the “directly” in (1) is supposed to connote. If it means: do I ever perceive, e.g., a table “directly”, in the sense of without some physical intermediary (such as light traveling from the table to my retina, exciting the retina to send signals down the optic nerve, or an interaction between the table and haptic receptors which sends nerve impulses to my brain), then of course the answer is “no”. Or if it means: do I ever experience a table, or indeed anything at all (e.g. a headache), without a bunch of complicated cerebral activity, the answer is again “no”. So I would reject (1) under these readings. If you mean something else, please clarify.

    Does rejecting (1) commit me to (2)? Well not by logic alone, of course. And the short answer is: I haven’t really got a clue how our interactions with the world and the subsequent cerebral activity produce experiential states: that is the mind/body problem! Whether this talk of “running a simulator” is apt or not is something I know nothing at all about. But how does that bear on my point? Physics has just never addressed questions like this: they lie in the purview of cognitive psychology, at least.

    All I said is that we take for granted that an experimentalist’s report that the pointer swung to the right is reliable: the pointer did swing to the right. Just as I take my own experience of the external world to be reliable about nearby macroscopic objects: if it seems to me that a certain pattern of light and dark spots is on my computer screen now, it is indeed on my screen now and indicates something you wrote a while ago. Why either the experimentalist or my own eyes are so reliable is an interesting question, but not one of physics per se.

  270. Mateus Araújo Says:

    Paul Hayes #254,255:

    What Maudlin has in mind is that if one has a single-world ontology one is forced to deny locality – as done by Bohmian mechanics or Collapse models. To retain locality the Copenhageners/QBists give up on having an ontology instead, which for Maudlin is such a nonsensical step to make that it is not even worth talking about.

    With respect to this point, actually, I think he is completely right.

  271. Tim Maudlin Says:

    Paul Hayes # 259, 260

    Well, we must be cutting close to the bone if the insulting Trumpian nicknames (“Church of Psiontology”) are being wheeled out. And I imagine that getting clear about Bell as well as the measurement problem and PBR would extend this discussion exponentially. So I will just remark that I stand behind my characterization of both the logical structure and physical implications of Bell’s theorem 100%, despite section 7.1 of the paper you cite. Yes, Bell + experimental data prove that the world is non-local. Straightforwardly if the theory is not Many Worlds, and requiring a longer discussion if it is.

    Just to be crystal clear: of course I accept the existence of non-separable states! As a psi-ontologist I accept the physical reality of a quantum state that pertains to individual systems and has no epistemic or subjective aspect at all. And some quantum states of joint systems are entangled. You have managed to get into the truly bizarre position of simultaneously mocking me for being a psi-ontologist and accusing me of not taking entanglement seriously! It is just the opposite: it is the psi-epistemicists who do not want to take entanglement seriously as an objective physical fact about systems: that’s why they want to understand the wavefunction epistemically rather than ontologically. Just as they don’t want to take collapse seriously as a physical change in the world, but rather merely as a rule for updating beliefs on the receipt of new information, i.e. as conditionalization. I mean, at least get the basic motivations straight here. Psi-ontologists take entanglement and hence non-separability deadly seriously as facts about the physical world that have nothing to do with minds, facts that are part of the explanation of how physics is non-local, as it must be to account for the data. It is psi-epistemicists that do not take entanglement and non-separability seriously as objective physical features of the world. And hence who can’t explain the violation of Bell’s inequality that we see in the lab at all. Pushed to one extreme, they are forced into quantum solipsism: Bell’s inequality never is violated by experiments done at space-like separation, it is only violated in the experiences of some single individual!

  272. Tim Maudlin Says:

    Mateus # 261

    Good: so let’s settle on some nomenclature, or at the least try to explain each of our uses of it.

    Bell’s theorem has a statistical independence assumption, which says that the frequency with which particular states of the particles are produced by the source is statistically independent of the choice of measurement settings. We can call this condition “Statistical Independence”. I call the denial of Statistical Independence “hyperfine tuning”. The retrocausalists assert hyperfine tuning and invoke retrocausation to explain it. ‘t Hooft denies retrocausation and so has no explanatory resources here: he embraces what we might call unmitigated or primitive hyperfine tuning: restrictions on the set of physically possible initial conditions that have no further explanation at all. I regard that as scientifically unacceptable: I can explain anything if I can resort to that. This is what you call “conspiratorial”, I take it, and I am happy with that term.

    PBR also has a statistical independence assumption, namely that it is possible to implement a physical randomizer for choosing among a set of boxes containing prepared systems such that the frequencies with which various types of systems appear in the chosen sets are (approximately) the frequencies with which they appear in the set being chosen from. What is really needed is even weaker than this: it is just that if in the set being chosen from there are many boxes containing a system in a given state S, then when randomly choosing a pair of boxes one will occasionally choose two boxes with S. Again, just as with Bell, a violation of this assumption would require some sort of conspiracy, especially since the violation must happen no matter which physical randomizer is used.
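
    To make that assumption concrete, here is a toy simulation (my illustration, with made-up numbers; p plays the role of the fraction of S-boxes) in which an ordinary randomizer draws two boxes: pairs of S-boxes turn up at a rate close to p^2, exactly as Statistical Independence says they should, unless the randomizer were somehow correlated with the boxes’ contents.

    ```python
    # Sketch: random choice among identical-looking boxes, a fraction p of which contain S.
    import random

    random.seed(1)
    p, n_boxes, trials = 0.3, 10000, 200000
    boxes = ['S'] * int(p * n_boxes) + ['other'] * (n_boxes - int(p * n_boxes))

    both_S = sum(
        1 for _ in range(trials)
        if random.choice(boxes) == 'S' and random.choice(boxes) == 'S'
    )
    print(both_S / trials, p**2)  # the two numbers agree to within statistical noise
    ```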

    To my mind, the two statistical independence assumptions are essentially identical, so I call them both “Statistical Independence”, and the denial of both “hyperfine tuning”. It sounds like you think these are fundamentally different assumptions. So can you explicate what you take the difference to be?

  273. Bunsen Burner Says:

    This discussion has gone on long enough, but since my name was mentioned it behooves me to make at least a couple of points. First of all, I certainly agree that QM is not the coarse-grained statistical theory of some finer quasi-classical sub-quantum world. I do believe that all the no-go theorems put this to rest. That still, unfortunately, tells us nothing about QM’s relationship to the real world. It may well be nothing but a type of statistical decision theory that allows us to assign degrees of belief to the outcomes of certain measurements.

    This doesn’t mean that reality doesn’t exist, or that it is unknowable. It just means that QM is not the observer-independent theory that everyone seems to desperately crave. Consider what might be a decent analogy. E. T. Jaynes showed that you can develop statistical mechanics as a type of Bayesian statistical decision theory. This is interesting because it shows all the years of talk about the details of the molecular dynamics, ergodicity and all that, were for nothing. Any dynamics consistent with his maximum entropy principle would give the same distributions.

    In this sense I think a psi-epistemic viewpoint is still tenable.

  274. Tim Maudlin Says:

    Mateus # 264

    Again, there is just a nomenclature problem here, so let’s not get bogged down in that. Mario is taking “objective” to be the complement class of “subjective”, and “subjective” to mean “defined by reference to some agent’s beliefs or credences”. So by definition, a world containing no agents or believers has no subjective probabilities. Any way to define what one means by a “probability” that makes no reference to an agent’s credences is “objective”.

    One paradigm of an objective probability is a fundamental dynamics that is a random process. In such a case the laws of physics are indeterministic: given an initial state, the laws are compatible with different final states, and further assign a probability to each final state. God plays dice. I’m not at all sure why you say that “Objective probabilities can only appear in quantum mechanics.” One can imagine all sorts of theories using stochastic processes that are not quantum mechanics, and don’t give the predictions of quantum mechanics. There are, for example, completely local theories with stochastic processes. And the most famous physical theory with such objective stochastic processes—GRW—is often said not to be quantum mechanics because it makes slightly different predictions than “standard” quantum mechanics.

    So what Mario is referring to here is called “deterministic chance”: theories in which the basic dynamics is deterministic but still one can define “chances” that obey the probability calculus for certain events without making any appeal to anyone’s credences. Without going into too much detail, these sorts of things can be defined in Bohmian mechanics, for example. I would call them “typical frequencies” for the outcomes of certain experiments. As frequencies, they obey the mathematical requirements of a probability measure over outcomes, and the notion of “typicality” is defined with respect to an equivariant measure, as we already discussed. The point is that no agents or beliefs appear in the definition, so it is “objective”. And it satisfies the Principal Principle: a rational agent should adjust their subjective credences to match the typical frequencies.

  275. Tim Maudlin Says:

    Mateus # 266

    Just a comment. It seems to me that anyone who gives up on ontology altogether certainly has no right to call themselves a Copenhagenist or neo-Copenhagenist. One thing Bohr absolutely insisted on was that the experimental situation and results must be described in classical terms: the lab, at least, and the outcomes are perfectly real and objective. That is why Bell reconstructs Copenhagen as an additional variables theory, like Bohmian mechanics, but with macroscopic additional (and hence not at all “hidden”) variables. So to mix up the QBists with Copenhagen, for example, does a disservice to Copenhagen.

    Of course, not only would I not call anyone who rejects ontology a Copenhagenist, I wouldn’t call them a physicist. Reject ontology and you reject physics altogether. Just to avoid non-locality. Talk about the baby and the bath water!

  276. Paul Hayes Says:

    Mateus Araújo #260,

    He is completely wrong. As I said before, and as is clear from e.g. that plato.stanford.edu article on the CI, all the nonsensicality is in that false assertion – “To retain locality, the Copenhageners/QBists give up on having an ontology”. They don’t. They’re just not wedded to a naive, classical-like ontology. They adopt a “rarefied ontology”, as Rovelli puts it. Psi-ontologists adopt a “bloated ontology”.

    Tim Maudlin #261,

    You richly deserved that “Church of Psiontology” snipe because you are wheeling out vacuous rhetoric, falsehoods and illogic, apparently in an attempt to convert others to your (psi-ontic) beliefs.

    You have managed to get into the truly bizarre position of simultaneously mocking me for being a psi-ontologist and accusing me of not taking entanglement seriously!

    There you go again. Misrepresenting me (before going on to, among other errors, misrepresent the position of psi-epistemicists in general). What I actually accused you of is (tacitly) conflating the “locality” and “separability” concepts. You invoked Bell as a reason to “deny Locality”. I urged people to read Summers’ clear and linguistic trickery-free treatment of Bell so that they, hopefully, will see that there is no reason to deny locality (properly defined), or to be spooked by nonseparability.

    This “moustache”:

    Pushed to one extreme, they are forced into quantum solipsism: Bell’s inequality never is violated by experiments done at space-like separation, it is only violated in the experiences of some single individual!

    is a classic in the obtuse psi-ontologist’s repertoire of traducements of the [neo-]CI position.

  277. Mario Hubert Says:

    Mateus #258

    Fair enough. I take the criticism that I should have made the assumptions in PBR explicit from the beginning. It’s just that in doing science you always presuppose a mind-independent external world and statistical independence.

    It would be strange to mention these assumptions outside of QM as something to be deniable: Warning: This study only shows that smoking causes lung cancer under the assumption of realism and statistical independence! Since your reaction was so harsh, I thought that we disagreed on the conclusion of PBR, but it seems that you were just bothered that I didn’t mention the assumptions. Alright!

    Now, I have two questions for you:

    1) How did Bell already rule out \(\psi\)-epistemic models?

    2) And why are you so opposed to non-locality?

  278. Per Östborn Says:

    Mario and Tim,

    We don’t need to assume a mind-independent external world to do science. But we do need to assume the existence of physical law that organizes our experiences in a way that is (at least partly) beyond our control. To learn and formulate this law better and better is the task of science. It is therefore part of a necessary ontology, but mind-independent material objects are not.

    When Dr Johnson kicked a large stone to refute Bishop Berkeley’s idealism, he did not prove the existence of mind-independent stones, but the existence of ruthless physical law that makes us feel pain when we forcefully hit perceived massive objects.

    Here’s one of my favorite quotes by Heisenberg:

    “We ‘objectivate’ a statement if we claim that its content does not depend on the conditions under which it can be verified. Practical realism assumes that there are statements that can be objectivated and that in fact the largest part of our experience in daily life consists of such statements. Dogmatic realism claims that there are no statements concerning the material world that cannot be objectivated. Practical realism has always been and will always be an essential part of natural science. Dogmatic realism, however, is, as we see it now, not a necessary condition for natural science. […] Metaphysical realism goes one step further than dogmatic realism by saying that ‘the things really exist.'”

  279. Tim Maudlin Says:

    Paul Hayes # 272

    “You richly deserved that “Church of Psiontology” snipe because you are wheeling out vacuous rhetoric, falsehoods and illogic, apparently in an attempt to convert others to your (psi-ontic) beliefs.”

    I don’t suppose you would be so kind as to actually cite the vacuous rhetoric, falsehoods and illogic and demonstrate them to be such. It is, of course, much less strenuous to just call names, but, you know, everything fine is difficult.

    PBR supply a sharp criterion for what they mean by a psi-epistemic theory and they also provide a proof from two assumptions: 1) a statistical independence assumption and 2) the assumption that the quantum-mechanical predictions are accurate. The definition is that in a psi-epistemic theory it sometimes happens that state preparations for two different pure states produce systems that are physically identical. If this never happens, if two systems prepared in different pure states are always physically different in some way, then the pure state characterizes some objective physical feature of the individual system. So it is hard to see that you can reject the criterion. And presumably you are not going to reject that the quantum predictions are correct. So that leaves the statistical independence assumption. Are you committed to denying that?

    We have an actual theorem on the table. If you don’t want to accept the conclusion, then you have to either reject a premise or show that the reasoning is not valid. If you put your time and effort into that rather than into overheated hissy-fits you would be doing something useful. As it is, you are not doing your position any favors, at least in my estimation.

  280. AJ Says:

    what about this interpretation https://plato.stanford.edu/entries/quantum-bayesian/?

  281. Paul Hayes Says:

    Tim Maudlin,

    Okay, in view of #265 I see you were “drawing the solipsism moustache” on QBism rather than the CI. I still don’t think that’s fair to QBism but it’s a side-issue and I can’t be bothered to argue about it. The real issue is your “strong” claims about PBR and Bell. I would summarize the disagreement as follows.

    In interpreting QM, “radical” psi-epistemicists would choose a rarefied ontology; “realist” psi-epistemicists and psi-ontologists would choose an enriched one. You say that because of Bell and PBR there is no (reasonable) choice. I and others say that is wrong:

    Bell doesn’t rule out locality: it rules out locality-preserving ontology enrichment.

    PBR doesn’t rule out psi-epistemicism: it rules out enriched ontology psi-epistemicism.

    The “radical” psi-epistemic choice is still open. So is the psi-ontic choice.

  282. Paul Hayes Says:

    Tim Maudlin #270,

    If you put your time and effort into that rather than into overheated hissy-fits you would be doing something useful.

    I think I’ve done okay under the circumstances. Remaining entirely unprovoked by your style is difficult enough. But faced with this sort of thing…

    PBR supply a sharp criterion for what they mean by a psi-epistemic theory and they also provide a proof from two assumptions: 1) a statistical independence assumption and 2) the assumption that the quantum-mechanical predictions are accurate.

    (here’s the PBR paper itself):

    The argument depends on few assumptions. One is that a system has a “real physical state” – not necessarily completely described by quantum theory, but objective and independent of the observer. This assumption only needs to hold for systems that are isolated, and not entangled with other systems. Nonetheless, this assumption, or some part of it, would be denied by instrumentalist approaches to quantum theory, wherein the quantum state is merely a calculational tool for making predictions concerning macroscopic measurement outcomes. The other main assumption is that systems that are prepared independently have independent physical states.

    …it becomes difficult to remain unconcerned for one’s own sanity. Is that “realism” assumption just an illusion? Are all those lambdas just for decoration?

  283. Scott Says:

    AJ #271: Did you not read far enough to see that QBism (and my comment opening on it) were my prompt for this entire post?

  284. Tim Maudlin Says:

    Paul Hayes #282, 283

    So thanks for finally reading my posts with enough care—although I would have done so before sending in any comments, much less comments of the tone you sent—to see that what I said was about a certain form of QBism and not about CI at all. As I have said repeatedly, the actual CI is certainly not solipsism. Nor is it idealism. It is also not a sharply formulated physical theory. As Bell shows, if you want to try to characterize the CI at all, it is a sort of vague variant of a “hidden” variables theory where the additional variables are macroscopic. As for neo-CI, feel free to explain exactly what that is. I have no idea. If all you mean is instrumentalism, “instrumentalism” is not a physical theory: it is an attitude one takes to a physical theory.

    In the interest of actually communicating, can you please also give a definition of the difference between “rarified” and “enriched” ontology? I have never heard these terms before and haven’t a clue what they are supposed to mean.

    Do you mean to deny that systems ever have real physical states that are independent of all observers? So before the appearance of observers there were no physical states and hence no physics? That way lies Wheeler’s big retrocausal U with Wigner’s consciousness-triggered collapse. That way lies Mermin’s “The moon does not exist when no one is looking at it.” Good luck with that. If you mean something else, say what it is.

    The idea that Bell’s use of the Greek symbol lambda was any sort of physical assumption at all is just silly. Mathematically, the lambdas could be anything you want: no restriction. They have no intrinsic relation to quantum mechanics. If you don’t like the lambdas because they are supposed to represent the physical states of things, and you just don’t like the idea of physical states, then you just don’t like physics.

    Maybe what you have in mind is not quantum solipsism but some form of idealism. If so, good luck with that too.

    Please also distinguish in your mind what PBR proved from what they thought they proved. It is possible that they proved more than they set out to, and even more than they realized. I am not interested in what they thought they proved but what they proved.

    And as a final comment, if you really want to criticize someone’s rhetorical or argumentative style, what you do is cite instances of that style, as I did with you. You do not cite yet a third person criticizing the style. Because the third person might be wrong. You have a whole lot of a material in this post to work with if you want to attack my “rhetoric”. How about some concrete examples from here?

  285. Mateus Araújo Says:

    Maudlin #274:

    Your (and presumably Mario’s) definition of objective probability is too weak. You take good care of the “objective” part, but your take on the “probability” part is unsatisfactory. It is not enough for the definition to be independent of any agent’s credences, one must also demand that there is nothing that determines the result.

    Otherwise one would consider a pseudorandom number generator that simply outputted the sequence 0101010101… to have an objective probability of 1/2.

    I don’t see how this would satisfy the Principal Principle. An agent that knew that the last bit to be outputted was 0 shouldn’t set their credence about the next bit to be 1/2.
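
    A two-line toy version of such a generator (my example, purely illustrative) makes the point: its long-run frequency of 1s is exactly 1/2, yet every bit is completely determined by the previous one, so an agent who has seen the last output should not set their credence to 1/2.

    ```python
    # Sketch: a deterministic "generator" whose outputs have relative frequency 1/2.
    def alternating_bits(n, last=0):
        out = []
        for _ in range(n):
            last ^= 1          # flip the previous bit: fully predictable
            out.append(last)
        return out

    bits = alternating_bits(1000)
    print(sum(bits) / len(bits))  # 0.5, with no randomness anywhere
    ```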

    About “Objective probabilities can only appear in quantum mechanics”: indeed, I was being imprecise, I was referring to any theory with fundamentally indeterministic dynamics.

  286. Mateus Araújo Says:

    Maudlin #275, Paul Hayes #276:

    That’s a surprise: so you both agree that Copenhagen actually has an ontology, even if it is a “rarefied” one? Come on, it is a dualist non-reductionist ontology using obscurantism to paper over the inconsistencies. In 2018 you must do better than that.

  287. Tim Maudlin Says:

    Per Östborn

    The verificationist criterion of meaning that Heisenberg is tacitly assuming here has been as thoroughly refuted as any proposal in the history of philosophy. Philosophy does make progress (albeit often in the refutation of other philosophical positions), and the demonstration of the untenability of most forms of Logical Positivism is part of that progress. So if you want to make any claim that rests on a theory of meaning, please use a theory that has a chance of being true.

    No theory in the history of physics has proposed a law that predicts anyone’s experience, and I will wager that none ever will. Here are some of the obstacles that would have to be surmounted to do that. 1) One has to solve the mind/body problem (at least the sort-of-hard one, i.e. giving the supervenience rule). 2) One would need a thorough understanding of the human nervous system, including the brain. 3) One would have to treat an experimental situation that is so expansively defined as to include a complete human body, described at a level of precision and detail such that one could in principle tell whether the person was a) drunk b) sleepy c) angry d) paying attention e) playing Candy Crush on their iPhone, etc., etc. etc. All of these would contribute to the character of their experience.

    If you aspire to such a theory, I admire your ambition. But I think you are being foolhardy.

  288. RandomOracle Says:

    Tim (and anyone else who might be interested)

    I highly recommend this talk by Robert Spekkens, titled “Why I am not a psi-ontologist”. It gives some good arguments in defense of (realist) psi-epistemic theories, in light of the PBR result. For a quick summary, his arguments are essentially these:

    1) As I mentioned above, one of the assumptions that go into the preparation independence (or statistical independence, however you want to call it) of PBR is the cartesian product assumption, that when preparing two quantum states (as a product state), the underlying ontic spaces compose under cartesian product. Essentially, that there are no holistic properties for the joint state. However, this should only be true for product states, not for entangled states. If you strengthen this assumption for entangled states as well, then you get a contradiction irrespective of whether your model is psi-ontic or psi-epistemic.
    Spekkens argues that for a psi-onticist it is natural to assume that entangled states have holistic properties, but not for a psi-epistemicist, since entangled states themselves are not part of the ontology. So one should argue why CPA is a natural assumption, rather than full separability, without appealing to psi-ontic intuition.

    2) The second assumption that goes into preparation independence is the no-correlation assumption which says that the probability distribution for the joint state (of the two quantum states that form a product state) is simply the product of the distributions of the two states.
    Spekkens shows that this can actually be derived from preparation non-contextuality (the assumption that if you have 2 preparation devices that are operationally indistinguishable [no measurement can distinguish them], then the underlying distributions over ontic states, for the states they prepare, should be identical). However, we know that quantum theory does not satisfy preparation non-contextuality, so again this leads to a contradiction regardless of whether you go psi-ontic or psi-epistemic.
    So the challenge is to argue why the NCA assumption is more natural than preparation non-contextuality.

    3) Similar to 2), Spekkens shows that NCA can also be derived from local-causality, which yet again is not consistent with quantum theory. So, again, the challenge is to argue why NCA is more natural than local-causality.

    Ok, I should mention that he uses different names for CPA and NCA, but I was sticking with Matt Leifer’s terminology, to stay consistent with my other comment.
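
    For concreteness, here is a bare-bones sketch (my own illustration with toy ontic spaces; nothing here is taken from Spekkens’ talk) of the two assumptions described in points 1) and 2) above: CPA composes the ontic state spaces by cartesian product, and NCA takes the joint distribution to be the product of the marginals.

    ```python
    # Sketch: CPA (cartesian-product composition) and NCA (factorizing joint distribution)
    # for two toy systems with hypothetical ontic states.
    from itertools import product

    Lambda_A = ['a1', 'a2']            # toy ontic states for system A
    Lambda_B = ['b1', 'b2']            # toy ontic states for system B
    mu_A = {'a1': 0.7, 'a2': 0.3}      # distribution from preparing A
    mu_B = {'b1': 0.4, 'b2': 0.6}      # distribution from preparing B

    # CPA: joint ontic space is Lambda_A x Lambda_B (no extra holistic variables).
    # NCA: joint weights are products of the marginals.
    mu_AB = {(a, b): mu_A[a] * mu_B[b] for a, b in product(Lambda_A, Lambda_B)}
    print(mu_AB)

    # Denying NCA means allowing a correlated joint distribution instead, e.g.
    # {('a1','b1'): 0.5, ('a2','b2'): 0.5}, which cannot be written as a product.
    ```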

    Now, you say that it would be conspiratorial for preparation independence to be false. I agree; however, I also find it conspiratorial to have non-locality without the ability to signal instantaneously. If you are not allowed to signal, why should you be allowed non-local correlations? And yet, that’s the world we live in, as far as we can tell. These sorts of things make it difficult to say what is and what isn’t natural with respect to quantum mechanics, in my opinion at least.

    Anyway, this will be my final comment here so I’ll end by thanking you, Scott and everyone else for this discussion. I’ve certainly learned a few things and it has improved my understanding of other things, as well as making me want to dig deeper into these topics 🙂

  289. Tim Maudlin Says:

    Per Östborn #279

    One more observation. There is no “ruthless physical law that makes us feel pain when we forcefully hit perceived massive objects” if by “perceived” you refer to the person’s experiential state. If I am hallucinating a stone and kick at it, I will feel no pain. Furthermore, insofar as there is any “law” in the neighborhood, it does not mention being perceived by the subject of the pain at all. When I stub my toe, it is almost always on an object that I had failed to perceive. And boy, does it hurt!

  290. Mateus Araújo Says:

    Hubert #277:

    Bothered because you didn’t mention the assumptions, and because I think you gave PBR credit that is actually deserved by Bell.

    From my point of view, Bell already ruled out \(\psi\)-epistemic models by showing that they must be non-local. And the reason why I care so much about locality is that I think we should learn the lessons taught to us by relativity, not fight against it by postulating an undetectable preferred reference frame as used in Lorentz’s ether theory. The most sophisticated and successful theories we have – GR, QED, QCD – are all fundamentally Lorentz covariant. This is not a coincidence. It would take a lot of evidence to convince me to do away with it.

    If Many-Worlds did not exist, I would just say that we just don’t know how to make sense of quantum mechanics. It would be a successful set of hacks to make predictions, but that appeared to give rise to a non-local ontology, which is not acceptable.

  291. Mateus Araújo Says:

    Maudlin #272:

    Is it conspiratorial to have all pairs of systems you prepared correlated, and with just the right correlation to give a null result in the PBR test? You bet! I’m just saying that it is worse to have them correlated with a future choice of measurement setting in just the right way to give a null result in the Bell test.

    Maybe you are correct, and it is possible to violate a Bell inequality with a local single-world theory by using PBR-like correlated systems to generate the measurement settings. But to determine that we would need to go beyond general arguments and actually do some mathematics.

    The problem is that I’m not really interested in that: such a result would be a shot in the back of the neck of \(\psi\)-epistemic models, but as far as I’m concerned Bell already put them in the guillotine.

  292. Tim Maudlin Says:

    Mateus # 292

    We are in complete agreement. In fact, I would say that 2-slit interference killed psi-epistemicism well before Bell! Not as a matter of logic, but as a matter of plausibility. So I’m happy to let it go too. But clearly other people here are not.

  293. Mario Hubert Says:

    Mateus #285

    Tim emphasized the standard meaning of objective and subjective probabilities in the philosophical literature. These are reasonable definitions that we agree on. If you have your private definitions, so be it. But then we cannot communicate.

    There are objective probabilities also in deterministic theories. And we tried to explain to you what these probabilities mean. But you still seem to be dissatisfied. Tim also mentioned why the definition of typical frequencies is not circular, because the Pr is a typicality measure and does not represent probabilities. If you like, I could expand on this.

    What I haven’t understood yet is your view on probabilities in deterministic theories. Are they subjective? Is the Maxwell-Boltzmann distribution subjective?

    You know, the Principal Principle has a nice little caveat that is easy to overlook: you adjust your credence not only to the probabilities but also to all the information admissible with respect to the event. Information is inadmissible (that is, not admissible) with respect to the event, when this information exceeds the information about the probabilities. So if you toss a fair coin and you know the initial microstate of the coin, then you don’t set your credence according to the probabilities \(\frac{1}{2}\), but to 1 or 0 according to your information about the microstates.

  294. Mario Hubert Says:

    Mateus #290

    I strongly disagree with you that Bell’s theorem also refutes \(\psi\)-epistemic wave-functions. Bell’s theorem doesn’t use quantum mechanics; rather, it shows a constraint for all local theories (quantum or non-quantum). The question of PBR, on the other hand, is a question within quantum mechanics: is a pure-state wave-function a unique representation of the physical system? It turns out that, yes, it is.

    By the way, here’s a nice quote from the PBR paper: “In the terminology of Harrigan and Spekkens, we have shown that \(\psi\)-epistemic models cannot reproduce the predictions of quantum theory.” (p. 477)

    I know that you won’t like what I’m saying now: QFT is as non-local as QM because of Bell and because you still have the wave-function there. So the tension between special relativity and non-locality is not relieved in QFT. The wave-function, and with it the non-locality in QFT, is swept under the rug, as is the measurement problem, but this doesn’t mean that we have solved these problems. What needs to be done is to merge special relativity and QM into a theory by solving the measurement problem without relying on operationalism, as is the current situation. But in doing so you won’t get rid of non-locality (provided you have a single world).

  295. Mateus Araújo Says:

    Maudlin #292:

    I never thought the day would come that you would agree with anything I said, and I even agree with your agreement: the 2-slit experiment already gives us a very good argument against \(\psi\)-epistemicity, and Bell settles the issue.

  296. Mateus Araújo Says:

    Hubert #293,294:

    I’m tired of teaching you basic stuff, and you’re clearly refusing to learn because of the adversarial structure of the conversation. Just read by yourself a bit about interpretations of probability, study Chebyshev’s inequality and the proof of the law of large numbers, study Bell’s theorem, and you’ll be able to argue productively.

  297. gentzen Says:

    Paul Hayes #123:

    Bohr, especially, did not come to silly conclusions about what QM says about reality.

    The link for “did not” is missing.

    Paul Hayes #249:

    Of course it’s silly to deny the existence of an ‘objective’ (not to be confused with ‘observer-independent in every respect’) physical reality. If you “draw that moustache” on the (Bohr) CI, or any neo-CI refinement of it, of course it looks silly. So please don’t.

    I have now read Carlo Rovelli’s neo-CI refinement. His writing is fine and understandable, and I think it is OK to sell relational QM as a descendant in spirit of CI. (Henry P. Stapp in “The Copenhagen Interpretation” (1972) summed up CI in two assertions: (1) The quantum theoretical formalism is to be interpreted pragmatically. (2) Quantum theory provides for a complete scientific account of atomic phenomena.) What is not OK is that Schrödinger is depicted as a pupil of Bohr and as a convert to CI. Rovelli would have done better to cite Schrödinger’s “Are There Quantum Jumps?” papers from 1952, Heisenberg’s review of them, and Everett’s reply to Heisenberg’s criticism of that interpretation on page 115 of his dissertation: “This view also corresponds most closely with that held by Schrödinger.”, which hints that Schrödinger himself paved the way for MWI.

    While googling for Paul Hayes (because I liked his answers), I found Mind-Blowing Quantum Mechanics, two very nice public talks by Sean Carroll. Watching them was a joy for me, and made up for some of the frustrations of spending time with discussions about the interpretation of QM.

  298. Paul Hayes Says:

    Tim Maudlin #284,

    An enriched ontology is one which includes “hidden variables”, or “many worlds”, or psi itself. A rarefied ontology is one which doesn’t include the (always and everywhere) “possessed values” of position, momentum etc:

    The philosophy is not to inflate ontology: it is to rarefy it.

    No – I won’t waste any more of my time criticising your rhetoric etc.

    Following Harrigan and Spekkens, the lambdas in the PBR theorem are used to represent ontic states in the “psi-epistemic ontological” models which that theorem discredits. They can be anything you want so long as it’s something that can be assumed to have a definite value (uncertainty about which can be represented by a classical probability). Lambda is the supposed “real” property of a physical system that can have the definite value(s) for the “realist” psi-epistemicist to be “ignorant” of.

    So again, your claim that PBR rules out “radical” psi-epistemic interpretations, which simply don’t try to coerce QM back into a classical box like that, is simply false. As Matt Leifer said, it’s likely that “no theorem on Earth” could.

  299. Tim Maudlin Says:

    Mateus # 286

    I was a little too brief about what is typical behavior in Bohemian mechanics. If I prepare a million systems in a state of z-spin up and then put them through a Stern-Gerlach magnet oriented in the x-direction, one can not only show that the outcome frequency of being deflected up is about 50%, one can also show that the typical outcome will pass all the usual statistical tests for randomness. How much more do you want?
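
    To illustrate what “the usual statistical tests” amount to, here is a short sketch (mine; it uses an ordinary pseudorandom 0/1 sequence as a stand-in and does not simulate Bohmian trajectories) computing the standard frequency-test and runs-test z-statistics for a long outcome record.

    ```python
    # Sketch: frequency (monobit) test and Wald-Wolfowitz runs test on a 0/1 sequence.
    import numpy as np

    rng = np.random.default_rng(42)
    x = rng.integers(0, 2, size=100_000)   # stand-in for recorded up/down outcomes
    n = len(x)
    n1 = int(x.sum())                      # number of 1s (Python int to avoid overflow)
    n0 = n - n1

    # Frequency test: is the fraction of 1s consistent with 1/2?
    z_freq = (n1 - n / 2) / np.sqrt(n / 4)

    # Runs test: is the number of runs consistent with an independent 50/50 sequence?
    runs = 1 + int(np.count_nonzero(np.diff(x)))
    mean_runs = 1 + 2 * n1 * n0 / n
    var_runs = 2 * n1 * n0 * (2 * n1 * n0 - n) / (n**2 * (n - 1))
    z_runs = (runs - mean_runs) / np.sqrt(var_runs)

    print(z_freq, z_runs)  # both |z| values should be small for a random-looking sequence
    ```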

  300. Tim Maudlin Says:

    Mateus # 291

    This passage of yours is very enlightening about how you think:

    “If Many-Worlds did not exist, I would just say that we just don’t know how to make sense of quantum mechanics. It would be a successful set of hacks to make predictions, but that appeared to give rise to a non-local ontology, which is not acceptable.”

    You apparently are somehow so upset by non-locality that you refuse to even imagine it might be a physical fact about the world. But I can’t even formulate a coherent set of commitments that would justify this. Bohmian mechanics, for example, describes a perfectly clear physical world consisting of particles that move around in accordance with a simple deterministic law framed in terms of the wavefunction. The wavefunction represents a non-local item (the quantum state), which in turn allows the theory to predict violations of Bell’s inequality. Now if you somehow cannot abide “non-local ontology”, as you say, then you ought not to abide the quantum state, or the use of the wavefunction in stating the theory. But MWI is stated *entirely* in terms of the wavefunction, and in its canonical formulation the *only* ontology it has is the quantum state, which is not a local beable! So how can you like Everett but not even consider Bohm?

    It is true that it is at least difficult to generalize the guidance equation to a Relativistic setting without adding to the standard Relativistic account of space-time. There are ways to do this formally, but whether they adhere to the spirit of Relativity is a subtle question. My own preference is to cut the Gordian knot here: just postulate a preferred foliation as an additional piece of space-time structure and be done with it. Use that foliation to define the guidance equations. Done.

    What is so unpalatable about this approach? Some people find it repulsive that you postulate a foliation and then prove that it is not empirically accessible. But I don’t get the objection. If I had to fine-tune the theory to hide the foliation, that would be one thing. But if I use the foliation in the simplest and most natural way and it falls out that it cannot be observed, so what? I mean, especially for an Everettian! Here is Everett saying that there are all of these worlds being produced all the time and we never even notice them. The response is to point out that it follows from the dynamics of the theory that the multiplication of worlds will not be observable. Why is that OK in Everett and not OK in Bohm?

    I think we have three models of approaches to understanding quantum theory: Bohmian mechanics, GRW and Everett. Of the three, Bohemian mechanics is the cleanest in the non-relativistic setting but cannot be directly generalized to field theory or to relativity; GRW can be adapted to a fully Relativistic form but does not have very natural local beables, and may be empirically refuted soon; and Everett is the most problematic in terms of understanding really what it is committed to (again what local beables?) and how it solves the foundational problems (status of probability). People of good will can assess the relative importance of these strengths and weaknesses differently, but I can’t see how you can argue that Everett is acceptable and the others not at all.

  301. Scott Says:

    Tim: Now “Bohemian mechanics” sounds like a hidden-variable proposal that I could get behind… 🙂

  302. Tim Maudlin Says:

    Mateus # 296

    I’m glad of the agreement, but a little disappointed at your surprise. I am just trying to get this stuff straight as best I can, and if I feel certain that a claim is true or false I say so directly. I would never avoid agreeing with someone in public if I agree with them in fact. What would be the point of that?

  303. Tim Maudlin Says:

    Paul Hayes # 299

    I really am at sea here. Yes: the lambdas are supposed to represent whatever the theory postulates to exist. What Bell called the beables of the theory and I call the fundamental ontology of the theory. If a theory has no beables and no fundamental ontology then it denies the existence of any physical world at all and is not a physical theory. There is nothing in Bell’s proof that mentions position or momentum in particular: indeed, the only thing you do with the lambdas is conditionalize on them. But it sounds as if you want the ontology of the theory rarified straight out of existence! I mean, what is left in the “rarified ontology”?

    If there are no ontic states at all then there are no facts for the epistemic wavefunctions to be about, so psi-epistemicism falls immediately. Maybe you can try to explain the option you have in mind more directly and clearly.

  304. Paul Hayes Says:

    gentzen #297,

    Thanks for that. That link was meant to point to the Stanford philosophy encyclopedia entry for the CI.

  305. Mateus Araújo Says:

    Maudlin #299:

    Now this sounds a lot like the definition of a pseudorandom number generator: the sequence must pass any polynomial-time randomness test. But it is clearly just subjective randomness, as anybody who knows the seed can predict the whole sequence. Same for Bohmian mechanics. There is just no getting around this if the theory is fundamentally deterministic.

    But if the Bohmian random number generator is so damn good, shouldn’t you consider the possibility that this is true randomness?

    You should keep in mind that this question is not empirically decidable, as any true random number generator can be modelled by a pseudorandom number generator with a long enough seed.

  306. Mateus Araújo Says:

    Maudlin #300:

    I’m objecting to nonlocal dynamics, not to globally-defined ontological entities. Even in GR you cannot get around this, if you consider the event horizon of a black hole to be objective. And in quantum mechanics I think everybody agrees that there is no getting rid of entangled states. Luckily relativity doesn’t have a beef with them.

    The problem I have with Bohmian mechanics is that one needs to postulate a preferred foliation, after decades of learning that there is no such thing as a right foliation. I would feel the same way about Everettian quantum mechanics if we had branching theories in the 19th century that had been destroyed by the advance of science, and were being resurrected to deal with the difficulties of quantum mechanics. But this is not the case: branching is a new prediction of the Schrödinger equation that nobody saw coming, and that gives a solution to the nonlocality problem different from anything our classical prejudices would guess.

    But I’m not saying that Bohmian mechanics is unacceptable in objective terms; I was just explaining why I personally do not accept it. On the contrary, I think Everett’s and Bohm’s are the only two serious interpretations of quantum mechanics (I don’t count GRW/CSL as serious because they postulate dynamics that fight against where the experimental evidence is going, instead of trying to learn from it).

  307. Mateus Araújo Says:

    Maudlin #302: That was just a light-hearted comment, no need to take it so seriously. But I think it is much easier to get Scott to agree with me than you 😉

  308. Tim Maudlin Says:

    Mateus # 299, #300

    Of course one should consider the possibility it is true randomness, if you mean the dynamics is fundamentally indeterministic! Notice that GRW is on my list of theories to take seriously (although as you note it is under empirical pressure), and Bohmian approaches to field theory are often indeterministic. I don’t know if in your terms MWI is an example of randomness: the fundamental dynamics is deterministic.

    The thing to realize about Relativity is that Einstein was not tasked with accounting for any phenomena like violations of Bell’s inequality, which—as you acknowledge—require non-local dynamics in a single-world setting. Einstein was just trying to make sense of Classical EM. And postulating a Lorentzian metric with a light-cone structure did that brilliantly. Einstein didn’t need a foliation, so of course he didn’t postulate one. It would have been pointless to.

    But we have learned things since then, and in particular learned from Bell. One way to account for the violations of Bell’s inequality is to add a foliation to space-time. Indeed, it is a very natural way to implement a non-local dynamics, and it works like a charm. I can’t see any sense in which it is more controversial than accepting the existence of splitting worlds that you cannot ever directly verify. But your most recent post suggests that this really is just a personal preference, in which case we have no dispute at all.

    We could get into a more detailed discussion of the sense in which MWI is local and the sense in which it isn’t—given the non-locality of the quantum state. And more urgently, the question of whether MWI has any local space-time ontology at all. And, of course, the status of probability in the theory. But these are the sorts of serious and careful discussions needed once the distractor proposals have been cleared away. If we agree on that, then we really agree on everything essential here.

  309. Travis Myers Says:

    Mateus:

    You say “But if the Bohmian random number generator is so damn good, shouldn’t you consider the possibility that this is true randomness?”

    Bertrand Russell correctly observed that scientists should search for deterministic theories like mushroom seekers should search for mushrooms (possibly apocryphal, I can’t find the exact quote). Wherever there is “true randomness” in a theory, that’s just a point where you throw your hands up and say you don’t know what happens. It’s fine if you don’t know right now, but then that should be a place where you try to improve the theory. Uncertainty exists in the map, not the territory.

  310. Per Östborn Says:

    Tim Maudlin #289

    You are right about hallucinations, dreaming, and so forth. To create a viable idealistic world-view we have to assume an inherent, primary distinction between true and false interpretations of perceptions. Then it becomes possible in principle to distinguish between stones we run into when we are awake and stones we come across in our dreams or hallucinations. In other words, we have to dismiss postmodernism by assumption! But this has to be done anyway if we want to do science. Then we may let an experience-oriented physical law act on correctly interpreted experiences. Such a law may sometimes punish us with unexpected pain when we run into a stone that we didn’t notice.

    Tim Maudlin #287

    1) I think you are presupposing your own realistic perspective when you argue against a more idealistic one. If subjective experience is seen as primary, it cannot and need not be explained in terms of something else. The “hard problem of consciousness” dissolves. Existence and experience go hand in hand. Existence without any subjective experience whatsoever has no meaning, as I see it.

    2-3) Here you are again presupposing your own perspective, I think. Yes, if you assume that material objects (or other beables) are the primary ingredients of reality, then it becomes incredibly messy to create a theory based on perceptions on top of that. But that is not what I’m promoting as a viable alternative. If experience or perceptions themselves are seen as primary, the known physiology of the brain corresponds to (hopefully correct) interpretations of the perceptions by experimental physiologists. Yes, there seems to be a close correspondence between (correctly interpreted) perceived states of the brain and perceptions in general. This correspondence makes our perceptions firmly anchored in the brain, but it does not imply that the state of the brain is primary and the corresponding perceptions are secondary.

    Therefore, in our physical modelling, we don’t have to assume a lot of complex processes *behind* the observations, even though there are certainly a lot of interesting things to be observed *in front* of us – including our own brains.

    I tried to borrow authority from Heisenberg. Let me try again with Pauli:

    “[T]here remains still in the new kind of theory an objective reality, inasmuch as these theories deny any possibility for the observer to influence the results of a measurement, once the experimental arrangement is chosen. Therefore particular qualities of an individual observer do not enter the conceptual framework of the theory.”

    Maybe the relationship between the brain and the naked observations that I tried to sketch above sounds like sophistry, but to me it is where we inevitably end up if we try to minimize the metaphysical content of a scientific world-view.

    The minimalistic approach of throwing away all metaphysical beables guided Heisenberg to quantum mechanics. The abstract of his magical paper from 1925 simply reads:

    “The present paper seeks to establish a basis for theoretical quantum mechanics founded exclusively upon relationships between quantities which in principle are observable”

    This paper solved a lot of old problems, and opened the door to a universe of new insights and predictions. You have to show that the opposite philosophical approach can be even more fruitful. You have to pour all the beables back into the quantum mechanical house in such a way that it does not break, and furthermore add some value to it.

  311. Mario Hubert Says:

    Mateus #296

    I’m not disagreeing just for the sake of disagreeing. I aim for a fruitful scientific discussion.

    1) I gave you an argument that probabilities in deterministic theories can be objective.

    2) I gave you an argument why defining probabilities with the help of the law of large numbers is not circular.

    3) I gave you an argument why deterministic chances are compatible with the Principal Principle.

    I hoped you’d react to these arguments.

    Concerning Bell, I would agree that he poses a serious threat to a subjective interpretation of the wave-function, and the reaction of QBism to Bell’s result is off-target. But I don’t see why Bell also knocked down a statistical interpretation à la Ballentine (which is also \(\psi\)-epistemic and which could be dismissed for other reasons). Am I missing something here?

  312. Mateus Araújo Says:

    Hubert #311: Circularity is a mathematical fact. If you can’t understand it, you need to study, not argue with people online. More generally, find somebody else to teach you.

  313. Mateus Araújo Says:

    Myers #309: Easy thing to say, harder to do. A lot of the research in the foundations of quantum mechanics is precisely about the question of determinism versus randomness. If you want to seriously engage in the discussion you need to read up on what has been done and offer more than generic quotations.

  314. Tim Maudlin Says:

    Per Östborn

    Of course I am supposing Realism as opposed to Idealism! If the Idealist is correct, I would say that there is no physical world for physics to be about. And if the Idealist is correct, then scientific inquiry will never get us anywhere. For the reasons I gave, there just are no articulable laws at all. As I said, just try to write a single law of human experience. Or try to figure out what your experience will be tomorrow at 2.30. Or do the inclined plane question I asked above. It simply cannot be done.

    One more terminological point. There is the Realist/Idealist split, according to which I am a Realist. And then there is the Scientific Realist/Instrumentalist (or Anti-Realist) split. These are not distinctions among theories, but among attitudes toward theories. I am a mild Scientific Realist there. But that should not affect the examination of alternatives here one way or the other.

  315. Mateus Araújo Says:

    Maudlin #308:

    I was careful to give two requirements for objective probabilities that Many-Worlds can fulfil: that they are the same for all observers, and that there is nothing that determines the outcome.

    That the fundamental dynamics is deterministic is not a problem, because multiplicity makes “the outcome” unpredictable in the most brutal fashion: there is nothing to predict, as there is no such thing as “the outcome”. This is actually the only explanation I know for true randomness. As far as I know it is otherwise treated as a bare postulate.

    As for splitting versus nonlocality, I think there is clearly a sense in which splitting is less controversial: it is a prediction of the Schrödinger equation (or more generally any linear evolution), whereas a foliation of space-time must be added as an extra postulate. That said, I think my other argument about it, of not trying to resurrect dead theories, is really just a personal preference, and cannot be elevated to a principle of the philosophy of science.
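    As a toy illustration of that claim (a minimal sketch of my own, with a CNOT-style coupling standing in for a measurement interaction; nothing here is specific to any real apparatus): start with a qubit in superposition and a “pointer” qubit in a ready state, and let them interact unitarily. Linearity alone delivers a sum of two correlated branches, with no extra postulate.

        import numpy as np

        # One-qubit computational basis states
        ket0 = np.array([1, 0], dtype=complex)
        ket1 = np.array([0, 1], dtype=complex)

        # The system starts in a superposition; the "pointer" starts ready in |0>
        alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
        system = alpha * ket0 + beta * ket1
        pointer = ket0

        # A CNOT-style unitary correlates the pointer with the system's basis state
        CNOT = np.array([[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 0, 1],
                         [0, 0, 1, 0]], dtype=complex)

        joint_out = CNOT @ np.kron(system, pointer)

        # Linear evolution alone yields alpha|0>|0> + beta|1>|1>: two correlated
        # "branches", with nothing added beyond the unitary (Schrödinger-type) dynamics.
        print(np.round(joint_out, 3))  # amplitudes [0.707, 0, 0, 0.707]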

    All in all, we do agree on the essentials. Maybe I should add, in order not to be too agreeable, that I don’t think we need to clear out the distractor proposals before we can tackle the real problems. Right now you’re arguing with someone who is quoting Heisenberg and Pauli as if it were gospel; if you are going to wait for him to start doing real physics you are going to wait forever.

  316. Mario Hubert Says:

    Mateus #312, 313

    Your post is totally inappropriate and shows your infinite arrogance! I tried to stay calm and reopen the discussion after your #296, but you tend to prefer personal accusations. For my part, this is not the level at which I want to continue the discussion!

  317. Scott Says:

    OK, maybe it’s time to close this thread down? 🙂 Get in any final comments by tonight. Thanks, everyone!

  318. Tim Maudlin Says:

    What Have We Left Untouched?

    Well, we have occupied poor Scott’s blog long enough—thanks for the hospitality Scott!—and covered a fair amount of ground. There is one bit of terminology that I had hoped to discuss, so I will end with this thought.

    The term “realist” is used quite a lot in discussions of these issues. Nowadays, it is often said that Bell’s fundamental assumptions were not just (Bell) Locality and Statistical Independence, but also a third assumption of “Realism”. This characterization of the result is supposed to imply that if you don’t want to give up Locality or Statistical Independence, then you can just abandon Realism instead.

    I have never understood what this assumption of Realism was supposed to be. It is somehow associated with Bell’s use of lambda, but since he puts no constraints on lambda I can’t see how there is any assumption there to be denied. So I would like to get a clear account from anyone who thinks they understand what Realism is what that whole suggestion is about.

    I thought it might be useful to explain what the terms “Scientific Realism” and “Instrumentalism” connote in the philosophy of science literature. People who invoke Copenhagen often portray their position as instrumentalist, thereby shunning the label “realist”. And they are under the impression that this gets them out of certain problems trying to understand quantum mechanics.

    Now the first point is that *theories* are neither Scientific Realist nor Instrumentalist per se. It is rather an individual’s *attitude* toward a theory that can be Scientific Realist or Instrumentalist. Take, for example, the Copernican theory. That was a definite physical theory with a particular account of the physical world. According to the Copernican theory, for example, the earth spins on its axis once every 23 hours and 56 minutes, and orbits the Sun every 365 ¼ days. That is what the theory asserts.

    The question of Scientific Realism is whether, on the basis of observations or any other evidence or considerations, it can be rational to accept the Copernican theory as true or approximately true or on-the-path-to-truth. The Scientific Realist answers “yes”: in at least some circumstances, the evidence makes it rationally compelling to accept some claims about unobservable entities or behaviors as true or approximately true. And the Instrumentalist answers “no”. The Instrumentalist concedes that the theory makes reliable and accurate predictions, and so can serve as a predictive tool or instrument, but does not think that that, or any other feature, justifies us in believing that the theory is true or approximately true. That was the line taken by Osiander in the preface to de Revolutionibus: Copernicus is putting the theory forward not as true but as an aid to making predictions. That was supposed to square Copernicus with the Church.

    It is not my brief to argue for Scientific Realism here. In fact, absolutely nothing I have claimed depends on being a Scientific Realist: an Instrumentalist can argue all these points just as well. The key is that Bell is not trying to convince anyone of the accuracy or truth of any particular theory, and the PBR theorem does not purport to establish the truth of some theory. Rather, both Bell and PBR *rule out* whole classes of theories. Even if you can never have good grounds to accept any proposed theory, you can have decisive grounds to reject entire approaches.

    Bell shows that no theory that is Local and satisfies Statistical Independence can account for certain correlations between the outcomes of experiments done at spacelike separation (assuming those experiments have unique outcomes to be correlated). So if you accept what experimentalists report—that such experiments were done and had outcomes and the outcomes displayed certain correlations—then you are forced to give up either Statistical Independence or Bell Locality. Similarly, PBR give a sharp characterization of what counts as a psi-epistemic theory, and then prove—again assuming Statistical Independence—that no psi-epistemic theory can reproduce the predictions of quantum theory. So if you accept the experimentalists’ word that the quantum predictions are correct, psi-epistemic theories cannot be right.
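    To put a number on “certain correlations” (a standard textbook computation, sketched here for concreteness; the angles are the usual CHSH choices, not something taken from this thread): the singlet-state prediction E(a, b) = -cos(a - b) pushes the CHSH combination to 2√2, beyond the bound of 2 that any Bell-local theory satisfying Statistical Independence must respect.

        import numpy as np

        def E(a, b):
            """Singlet-state correlation predicted by QM at analyzer angles a, b (radians)."""
            return -np.cos(a - b)

        # The standard CHSH measurement angles
        a1, a2 = 0.0, np.pi / 2
        b1, b2 = np.pi / 4, 3 * np.pi / 4

        # Any Bell-local theory satisfying Statistical Independence obeys |S| <= 2
        S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
        print(abs(S))  # about 2.828, i.e. 2*sqrt(2), beyond the local bound of 2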

    Note that these results are proofs. They hold whether one is a Scientific Realist or an Instrumentalist. In short, Instrumentalism is not a safeguard against either Bell’s or PBR’s result. And if you are going to invoke some extra assumption made by either Bell or PBR as “realism”, and claim to avoid the consequences of those theorems by adopting Instrumentalism, then you have made a mistake. If you have something else in mind, be prepared to state clearly what it is.

    Thanks for the conversation.

  319. Paul Hayes Says:

    As a final comment I would just like to say that of course I don’t accept that a “radical” psi-epistemic interpretation involves losing (all) beables and facts, and forces a denial that there is any physical reality. The differences between a (classical probabilistic) mechanical description of the world and a (quantum probabilistic) mechanical one are not that radical!

  320. Discussions of quantum mechanics – Silgate Solutions Says:

    […] very lengthy discussion about this on Scott Aronson’s […]

  321. Why Do We Think Quantum Mechanics Is Weird? - ECNT Says:

    […] that define our intuition and the quantum rules? The Copenhagen approach, which Scott Aaronson amusingly dismissed as “shut-up and calculate except without ever shutting up about it” basically asserts […]

  322. Shtetl-Optimized » Blog Archive » The Zen Anti-Interpretation of Quantum Mechanics Says:

    […] although I’ve written tens of thousands of words, on this blog and elsewhere, about interpretations of quantum mechanics, again and again I’ve dodged the […]

  323. The Zen Anti-Interpretation of Quantum Mechanics (2021) by Tomte - HackTech Says:

    […] although I’ve written tens of thousands of words, on this blog and elsewhere, about interpretations of quantum mechanics, again and again I’ve dodged the […]