## Einstein-Bohr debate settled once and for all

In Steven Pinker’s guest post from last week, there’s one bit to which I never replied. Steve wrote:

After all, in many areas Einstein was no Einstein. You [Scott] above all could speak of his not-so-superintelligence in quantum physics…

While I can’t speak “above all,” OK, I can speak. Now that we’re closing in on a century of quantum physics, can we finally adjudicate what Einstein and Bohr were right or wrong about in the 1920s and 1930s? (Also, how is it still even a thing people argue about?)

The core is this: when confronted with the phenomena of entanglement—including the ability to measure one qubit of an EPR pair and thereby collapse the other in a basis of one’s choice (as we’d put it today), as well as the possibility of a whole pile of gunpowder in a coherent superposition of exploding and not exploding (Einstein’s example in a letter to Schrödinger, which the latter then infamously transformed into a cat)—well, there are entire conferences and edited volumes about what Bohr and Einstein said, didn’t say, meant to say or tried to say about these matters, but in cartoon form:

• Einstein said that quantum mechanics can’t be the final answer, it has ludicrous implications for reality if you actually take it seriously, the resolution must be that it’s just a statistical approximation to something deeper, and at any rate there’s clearly more to be said.
• Bohr (translated from Ponderousness to English) said that quantum mechanics sure looks like a final answer and not an approximation to anything deeper, there’s not much more to be said, we don’t even know what the implications are for “reality” (if any) so we shouldn’t hyperventilate about it, and mostly we need to change the way we use words and think about our own role as observers.

A century later, do we know anything about these questions that Einstein and Bohr didn’t? Well, we now know the famous Bell inequality, the experiments that have demonstrated Bell inequality violation with increasing finality (most recently, in 2015, closing both the detector and the locality loopholes), other constraints on hidden-variable theories (e.g. Kochen-Specker and PBR), decoherence theory, and the experiments that have manufactured increasingly enormous superpositions (still, for better or worse, not exploding piles of gunpowder or cats!), while also verifying detailed predictions about how such superpositions decohere due to entanglement with the environment rather than some mysterious new law of physics.
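(A purely modern aside: the CHSH form of Bell’s argument is now simple enough to check numerically in a few lines. Here’s a minimal numpy sketch, using the standard textbook measurement angles rather than any particular experiment’s settings:)

```python
import numpy as np

# Singlet state |psi> = (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)

Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])

def spin(theta):
    """Spin observable along angle theta in the X-Z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

def E(a, b):
    """Quantum correlation <psi| spin(a) (x) spin(b) |psi>."""
    return psi @ np.kron(spin(a), spin(b)) @ psi

a1, a2 = 0, np.pi / 2              # Alice's two settings
b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, above the local-realist bound of 2
```

Any local hidden-variable theory caps |S| at 2; the singlet state reaches 2√2 ≈ 2.83, which is the value the loophole-free experiments confirmed (up to noise).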

So, if we were able to send a single short message back in time to the 1927 Solvay Conference, adjudicating between Einstein and Bohr without getting into any specifics, what should the message say? Here’s my attempt:

• In 2022, quantum mechanics does still seem to be a final answer—not an approximation to anything deeper as Einstein hoped. And yet, contra Bohr, there was considerably more to say about the matter! The implications for reality could indeed be described as “ludicrous” from a classical perspective, arguably even more than Einstein realized. And yet the resolution turns out simply to be that we live in a universe where those implications are true.

OK, here’s the point I want to make. Even supposing you agree with me (not everyone will) that the above would be a reasonable modern summary to send back in time, it’s still totally unclear how to use it to mark the Einstein vs. Bohr scorecard!

Indeed, it’s not surprising that partisans have defended every possible scoring, from 100% for Bohr (quantum mechanics vindicated! Bohr called it from the start!), to 100% for Einstein (he put his finger directly on the implications that needed to be understood, against the evil Bohr who tried to shut everyone up about them! Einstein FTW!).

Personally, I’d give neither of them perfect marks, in part because they not only both missed Bell’s Theorem, but failed even to ask the requisite question (namely: what empirically verifiable tasks can Alice and Bob use entanglement to do, that they couldn’t have done without entanglement?). But I’d give both of them very high marks for, y’know, still being Albert Einstein and Niels Bohr.

And with that, I’m proud to have said the final word about precisely what Einstein and Bohr got right and wrong about quantum physics. I’m relieved that no one will ever need to debate that tiresome historical question again … certainly not in the comments section of this post.

### 161 Responses to “Einstein-Bohr debate settled once and for all”

1. Alexis Hunt Says:

Are Bayesian critiques of Bell’s inequality considered settled now, or are they still outstanding?

2. Matt297 Says:

I’m not saying anything new here. But if you believe that measurements of, say, the positions or momenta of particles, or the spin directions of electrons, have definite outcomes, and also that closed systems evolve unitarily, then those definite measurement outcomes are hidden variables for quantum theory.

The reason is that in any Wigner’s friend experiment, Wigner’s friend does the measurement inside a box and, by assumption, sees a definite outcome, whereas Wigner, outside the box, describes the whole process unitarily, and so arrives at a coherent superposition that doesn’t identify the specific outcome. So the specific outcome is something above and beyond Wigner’s superposition, meaning that the superposition is an incomplete description of reality. This is the sort of incompleteness of quantum state vectors that Einstein was referring to.

I don’t really see a way out of this.

You can deny definite outcomes, meaning that you embrace either anti-realism (no outcomes) or something like Everettian QM/many worlds (many outcomes) or some other relative or perspectival notion of reality. In the former case, you owe us an explanation of why we have experiences at all, and in the latter case, you have to contend with the myriad experiences we don’t ever seem to have, and why we should experience anything probabilistically.

Or you can deny that closed systems evolve unitarily, in which case you owe us an explanation of how and why unitarity breaks down, like with GRW and other spontaneous-collapse models. Decoherence alone isn’t enough, because it merely converts coherent superpositions (pure states) into incoherent superpositions (mixed states) without singling out definite outcomes.

Or you can just accept that there are hidden variables of some sort, and accept that they’re weird.

Einstein confronted these problems head on. Bohr tried to deflect. So we find ourselves still going around in circles a hundred years later. At the very least, everyone should try to be as honest as Einstein in facing up to all this, rather than just claiming that there’s no measurement problem and that there’s nothing to see here.
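[Matt297’s middle point, that decoherence converts coherent superpositions into mixtures without singling out a winner, can be seen in a toy numpy sketch (a single-qubit cartoon, with the decoherence step put in by hand rather than derived from any environment):]

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)   # |+> = (|0> + |1>)/sqrt(2)
rho_pure = np.outer(plus, plus)        # coherent superposition (pure state)

# Full decoherence in the measurement basis: off-diagonal terms -> 0
rho_mixed = np.diag(np.diag(rho_pure))

print(rho_pure)   # [[0.5 0.5], [0.5 0.5]]
print(rho_mixed)  # [[0.5 0. ], [0.  0.5]]: still a 50/50 mixture, no single outcome

# Purity tr(rho^2) drops from 1 (pure) to 0.5 (maximally mixed),
# but both outcomes remain on the diagonal with probability 1/2 each.
print(np.trace(rho_pure @ rho_pure), np.trace(rho_mixed @ rho_mixed))
```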

3. Matt Says:

I’m glad that’s finally settled! But the main issue for me with Pinker’s remark was that it totally overlooked Einstein’s significant contributions to the development of quantum theory. Even if we did score Einstein at 0% in the Einstein-Bohr debates, the sum total of his contributions to quantum theory would still very much be a positive number.

4. Matt297 Says:

Also, with all due respect to Pinker, Einstein’s contributions to quantum mechanics are a gajillion times more important to science than anything most of us will ever do.

For heaven’s sake, Einstein won a Nobel Prize for his explanation of the photoelectric effect. The EPR paper is one of the most cited papers in the history of physics. And we’re still talking about Einstein’s arguments about quantum theory 100 years later.

If this isn’t super-intelligence, then I don’t know what is.

5. Nick Drozd Says:

I have a question about quantum mechanics, or at least about a pop-sci version of quantum mechanics. Pretty much every word in what follows could have scare quotes around it, so bear with me here.

According to the many-worlds interpretation, each event is a branch point, and multiple different universes are realized according to the different possible outcomes of the event. These universes all somehow coexist, and there are a lot of them. They are initially identical except for the different outcomes of the event in question.

Now, unrelated to any of this is the rapid expansion of our own single universe. Observations of redshifted light have led to the theory that the expansion of the universe is accelerating, so that sufficiently distant galaxies recede from us faster than the speed of light. This doesn’t mean that anything is moving locally FTL, which would be impossible. But it does have the effect that distant galaxies are moving away from our galaxy FTL by means of the expansion of space itself. This seems to be an ongoing process, and if it keeps up then almost everything in the universe will become causally unreachable from the Milky Way. (The Milky Way itself will cohere due to gravity.) If the Earth stuck around for long enough, eventually there would be far fewer stars in the sky, because those stars would be too far away for their light to reach us.

Okay, here’s my question. When the universe is copied at a quantum branch point, is the whole thing copied, or just the causally reachable portion? Copying the entire universe every single time seems like it would be extravagantly wasteful, since much of it cannot possibly be affected by the outcome of the branch event. But then again I don’t know how or where the copying work is getting done.

6. Jon Awbrey Says:

Re: To Ask The Requisite Question

Sing It, Don Quixote!

This brings me to the question I was going to ask on the AI post, but was afraid to ask.

Does GPT-3 ever ask an original question on its own?

Simply asking for clarification of an interlocutor’s prompt is not insignificant but I’m really interested in something more spontaneous and “self-starting” than that. Does it ever wake up one morning, as it were, and find itself in a “state of question”, a state of doubt or uncertainty so compelling as to bring it to ask on its own initiative what we might recognize as a novel question?

7. Bruno Loff Says:

Ah, finally!

“I’m relieved that no one will ever need to debate that tiresome historical question again … certainly not in the comments section of this post.”

My wife wants to know why I’m laughing hysterically.

9. Matt297 Says:

Re: Nick Drozd (Comment #5)

The Everettians/many-worlders usually say that you have to decompose things into a little Hilbert space corresponding to each small region of 3D physical space and compute its reduced density operator. The “splitting” happens at the level of each such reduced density operator. There’s a theorem (the no-signaling theorem) that implies that if the Hamiltonian is local, then the reduced density operators can only affect each other with a propagation speed limited by the speed of light.

So it’s more like the universe sort of “unzips” into branches, with the branches spreading out at the speed of light. Or something. The Everettians usually regard the branches as approximate/emergent structures, so it’s always a little unclear what exactly is supposed to be going on.
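[The no-signaling half of this can be made concrete in a small numpy sketch (a two-qubit toy, not a model of branching itself): whatever local unitary Alice applies to her half of a Bell pair, Bob’s reduced density operator stays exactly I/2.]

```python
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell)

def bob_reduced(rho):
    """Partial trace over Alice's qubit (the first tensor factor)."""
    return rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

# Let Alice apply an arbitrary local unitary, e.g. a Hadamard, on her side
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U = np.kron(H, np.eye(2))
rho_after = U @ rho @ U.T

print(bob_reduced(rho))        # I/2: maximally mixed
print(bob_reduced(rho_after))  # still I/2: Alice's action is locally invisible to Bob
```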

10. Clint Says:

What are you saying? There’s some sort of disagreement about quantum theory … ?

11. Peter Morgan Says:

That’s a pretty good red rag for a comments-about-QM summer fiesta. For my part, I think we can get a re-enlivened Bohr and Einstein closer to agreeing if we leverage Bohr’s insistence on classical description. As he puts it in the Schilpp volume, “It is decisive to recognize that, however far the phenomena transcend the scope of classical physical explanation, the account of all evidence must be expressed in classical terms,” which I suppose Einstein would more-or-less agree with (and we can all agree with that for the purpose of this comment, right?)
The next step, from gigabytes of evidence expressed in classical terms to a quantum model for it, is taken by a collection of ordinary algorithms that use Hilbert spaces and algebras of operators acting on them. Einstein can live with that analysis pretty well if we show him how to use the Poisson bracket to include noncommutative algebras of operators (as models for incompatible probability measures, which are classical in the sense that they were a mid-19th-century discovery) within “classical terms,” so that classical measurement theory is identical to quantum measurement theory and evades all no-go theorems that depend on noncommutativity and Hilbert spaces. Ironic that such a compromise has been hiding in plain sight since Koopman introduced a Hilbert space formalism for classical mechanics in 1931.
Crucially, an algebraic approach to Koopman classical mechanics is different from quantization, so that thinking about the differences can help us understand the relationship between classical and quantum descriptions (the comparison wouldn’t help at all if they were the same!) Note, however, that this is just about different descriptions: it makes no difference to how a given apparatus behaves, so quantum computing and all other applications can continue unchanged even though we now might call them “extended-classical” instead of “non-classical”. Note also that this is the opposite of the almost century-old attempts to add hidden variables into QM: this adds hidden observables into CM, which, if you’ll give it a chance, is very different. I have focused on whether Einstein might be happy with this, but I suppose Bohr would be OK with anything that takes a more empiricist than realist stance towards different theories, which early and late Einstein might disagree with himself about.
Sadly, people don’t find my three published papers about this very clear, but if you want to discuss the relationship between classical and quantum in terms of Bohr and Einstein, I offer this as one way to do it. If anybody good takes it up, they’ll do it better than I have and we can move on.

BTW, I think it’s a problem to insist that measurements are of properties of “one qubit of an EPR pair”. We can also take a field approach to exactly the same measurement apparatus, in which we say that events are a consequence of surrounding very carefully engineered devices by equally carefully engineered electromagnetic and other fields as parts of a whole apparatus, with quantum, thermal, and other components of the noise in the device and in those fields contributing to the randomness we see. Different pictures can be useful.

12. Scott Says:

Clint #10: You must be new here 😀

13. Scott Says:

Jon Awbrey #6: GPT-3 doesn’t do anything whatsoever “on its own initiative.” It’s a text completion engine. That said, if you prompt it to generate questions about some topic, it will do so, and the questions might be interesting.

Rather than discussing this abstractly, why not give it a try??

14. RandomOracle Says:

Speaking of entanglement, are you going to blog about the recent NLTS result (https://arxiv.org/abs/2206.13228)?

15. William Gasarch Says:

“so if we were able to send a single short message back in time to the 1927 Solvay Conference”

Wow- if we could really do that, that would raise even more questions about physical theories and the real world 🙂

16. Scott Says:

Nick Drozd #5:

When the universe is copied at a quantum branch point, is the whole thing copied, or just the causally reachable portion? Copying the entire universe every single time seems like it would be extravagantly wasteful, since much of it cannot possibly be affected by the outcome of the branch event.

Good practice for thinking about physics: can you compile that question down to anything that would make a difference empirically?

If you can’t, then the usual answer is simply: “it depends on what formalism you use to describe it” (as is arguably the case for your question—constant “global splitting of worlds” will look like it’s taking place in the Schrödinger picture, but not in the Heisenberg picture).

17. Douglas Knight Says:

I guess scoring Bohr vs Einstein is relevant to that discussion, but for most purposes, I think it is a mistake. They were both much better than the postwar “shut up and calculate” consensus. The prewar Copenhagen interpretation wasn’t as bad, but it seems to me to have been about declaring Bohr the winner and not paying attention to what he actually said, shutting down progress.

18. Quite Likely Says:

“But if you believe that measurements of, say, the positions or momenta of particles, or the spin directions of electrons, have definite outcomes, and also that closed systems evolve unitarily, then those definite measurement outcomes are hidden variables for quantum theory.”

I don’t really have any physics expertise but this explanation has always jumped out at me – aren’t all the results we see from “entanglement” just explained by the conservation of spin / momentum? Why is another ‘hidden’ variable needed besides the obviously existing one that’s actually being measured?

19. Oren Says:

Still feels like they both got themselves wrapped around the axle of assuming that space is a “real” thing rather than a measure of the rate of amplitude change for specific forms of interference between tightly coupled self interfering constructs.

The “weirdness” that they all seem to be trying to avoid is the idea that entangled particles share variables that do not transit relative to the space-time axes. Not sure why this bothered them so much.

20. Shmi Says:

They seem to have both missed the role of gravity, as far as I know. (And so have you, Scott!)

I mentioned it here a few times before, and Sean Carroll talked about it recently, and the idea of “gravcats” has been discussed in the literature on and off: For a perfectly isolated system massive enough to create measurable gravity (in Sean Carroll’s humane adaptation, a cat in a box awake vs. falling asleep), where would the gravimeter arrow point and when?

If external devices track the “real-time position” of the cat in a box, then there is indeed a limit on quantum computer scaling, because of gravitational entanglement, a la Penrose, and Gil is right.

If the arrow suddenly jumps when you open the box (the cat gets entangled with the environment), then general relativity is wrong on mesoscopic scales, not just microscopic or cosmological, and something is seriously wrong with our understanding of the basics.

If the gravimeter arrow tracks the expectation value of the position of the cat, then classical gravity is actually semiclassical (the stress-energy tensor is determined by the expectations of quantum fields) and we have no idea how to reconcile that with non-perturbative classical spacetime.

This is one of the few TABLETOP EXPERIMENTS where Quantum Mechanics DOES NOT MAKE A DEFINITE PREDICTION. The issue is that our instruments are not yet sensitive or accurate enough to perform them.

21. Scott Says:

Alexis Hunt #1:

Are Bayesian critiques of Bell’s inequality considered settled now, or are they still outstanding?

What is a “Bayesian critique of Bell’s inequality”?

22. Richard Gill Says:

I think the critique which Alexis Hunt #1 is referring to is Ed Jaynes’ critique. Which was exploded by Steve Gull. See my https://arxiv.org/abs/2012.00719. Ed Jaynes thought that Bell screwed up in applying the definition of conditional probability.

23. Scott Says:

Richard Gill #21: I see, thanks!

It still amazes me that otherwise highly intelligent people can take this actual experiment, which you can actually do and which gets an actual result, one that clearly wouldn’t have been possible in a local, classical universe, and imagine you can make it all go away just by playing around with words and definitions. It’s a bit like thinking you can escape an approaching bullet by arguing about Zeno’s Paradox.

24. Clint Says:

Scott #11:

🙂

Actually, I would nominate you, Scott, for the “once and for all” Quantum Debates Trophy because this (seriously) was the deepest single insight on this topic:

So, what is quantum mechanics? Even though it was discovered by physicists, it’s not a physical theory in the same sense as electromagnetism or general relativity. In the usual “hierarchy of sciences” — with biology at the top, then chemistry, then physics, then math — quantum mechanics sits at a level between math and physics that I don’t know a good name for. Basically, quantum mechanics is the operating system that other physical theories run on as application software (with the exception of general relativity, which hasn’t yet been successfully ported to this particular OS). There’s even a word for taking a physical theory and porting it to this OS: “to quantize.”

But if quantum mechanics isn’t physics in the usual sense — if it’s not about matter, or energy, or waves, or particles — then what is it about? From my perspective, it’s about information and probabilities and observables, and how they relate to each other.

Of course I’m aware that you didn’t come up with that in a vacuum (requiring work from Feynman, Deutsch, Manin, etc.) But you took seriously what “quantum” computing would mean and in my humble opinion gave the best statement of how to interpret it.

In one fell swoop that quote above swept away the tempests (Copenhagen, many-worlds, pilot-wave, metaphysical eastern mysticism, etc etc) that had tormented me for years … Of course, unintentionally tormenting … as they were all suffering under the presumptions of the history of physics and the language we inhabited.

I am now able to imagine an alternate timeline of discovery …

It’s the late 1800’s, Turing is born 40 years early and is hired by Bell and Tesla to work at “Bells Labs” to brainstorm about computing. Tesla imagines a device he can “input positive or negative amplitudes”, Bell tries 100,000 different possible things before he hits on something that will allow combining or canceling hundreds or thousands of amplitudes in all kinds of combinations, and then Turing (using obscure mathematical texts about general probability theories) figures out how to combine Bell’s “amplitude interfering device” with a threshold switch to make a computer, getting some help along the way from a young Hungarian mathematician to explain the theoretical foundations in orthogonal states, maintaining a norm (commenting in a footnote of a lab report that this setup means it has to be the 2-norm), linear operators, and the likely need for a lot of parallelism to reach fault tolerance (Bell says no problem he can wire thousands of them in parallel), while Tesla casually notes that “it is rather trivial that how we define these computational architectures is entirely up to us as we can, in a word, contextualize the amplitude information in any basis rotation.”

Meanwhile … A few decades later some physicists (who have come to rank somewhere below cognitive psychiatry at most universities) try to argue that “the whole universe is made of amplitudes! All possibilities are real! The universe is constantly splitting itself with every decision you make!” when they set up what they call “double-slit experiments” … getting quite a few chuckles and scornful comments about “more crackpottery” from most academics. Gordon Moore, one of the well-known founders of the multi-billion dollar company Fairchild Interference Devices dismisses the physicists by noting that “All they have done is produce an interference device using light … and a very crude and practically useless one at that … using Turing’s definition of computation. I am not shocked in the least that they can use a probabilistic method of computation to predict where light will shine. And I have no comment about the laughable proposition that I chose BOTH to wear my blue and my red tie this morning in alternate universes.”

But if quantum mechanics isn’t physics in the usual sense — if it’s not about matter, or energy, or waves, or particles — then what is it about? From my perspective, it’s about information and probabilities and observables, and how they relate to each other.

I find that even pretty much everyone working in quantum computing still thinks that “quantum mechanics” is about physics. The arbitrary order of the history of discovery we went through stuck it with this “physics” name … “quantum mechanics”. Maybe if we had gone through the alternate history above and “earlier” Turing had called it … Interference Machine Computing …

25. beleester Says:

@Jon Awbrey: AFAIK, asking “does GPT-3 ever wake up and find itself in a state of doubt?” is sort of like asking “Does a calculator ever just turn itself on and start adding numbers together spontaneously?” It’s a text completion engine – it doesn’t have a continuous consciousness, it calculates an output based on an input.

I do wonder what prompt GPT-3 would be the least confident about its answers to, or if you could elicit that information by asking something like “the biggest unanswered questions in physics are…”

Re: Matt297 (Comment #2)

> “So the specific outcome is something above and beyond Wigner’s superposition, meaning that the superposition is an incomplete description of reality. This is the sort of incompleteness of quantum state vectors that Einstein was referring to.
> “I don’t really see a way out of this.”

There’s a very neat resolution, which Bell himself noted, if you accept that (a) the universe is [super-]deterministic, and (b) we are within the universe and not running experiments from an outside or arbitrarily privileged position. (This latter condition seems too obvious and trivial to be worth noting, but contemporary mathematicians and theorists have some difficulty with it. I blame set theory for this.)

I think that Stephen Wolfram’s latest work in physics, which is not quantum mechanical in nature, also inclines towards superdeterminism. So determinism comes up a lot, as a disfavored way to resolve otherwise very tenacious physical problems.

27. fred Says:

What does current physics say about the following thought experiment:

1) create two entangled particles, flying away from each other at very high speed in totally empty space.

2) wait long enough until the expansion of space/universe between the two particles is such that they’re no longer in the same causal light cones.

3) measure one particle.

A variation of 2) is that one particle falls into a black hole (in reality it would cause a decoherence I guess, but this is a thought experiment).

Does entanglement break due to the expansion of space/the breakage of causality chain?

28. JimV Says:

Great post.

I (not that it matters) identify with Einstein’s objections, and Bohr’s position seems prescient but too early, unless he had seen a lot more experimental results than Einstein, and thereby trained his neurons to that intuition, like a neural network. Whereas we know that General Relativity seems to work quite well classically, tending Einstein’s neurons that way.

I recall reading somewhere (probably in “Genius” by Gleick) that Feynman gave a seminar to theoretical physicists on his QED theory during the Manhattan Project, to much criticism of it, and Einstein was his main or only supporter. Nothing I know about Einstein would make me think he was less than super-intelligent.

29. fred Says:

Nick #5

“When the universe is copied at a quantum branch point, is the whole thing copied, or just the causally reachable portion?”

If you assume that the universe is the wave function, then the universe isn’t really “copied”.
As far as I understand, the branching doesn’t mean that the wave function of the universe gets bigger. If no measurement was made, the wave function of the universe would already be big enough because it evolves carrying a superposition of all the possible outcomes. Once a measurement is done, the superposition kinda splits, but mathematically it still carries the same amount of information.

Like in the double slit experiment, if you start stacking multiple barriers with two slits, say N of them, the number of paths (keeping track of all the superpositions of potential paths) carried by the wave function grows exponentially with N… but all those “wasteful” paths already “exist” in the wave function whether we then measure the electron on a final screen or not. Once we measure, those paths start evolving in a way where they no longer interfere with one another, but more paths didn’t get created; they were already there before the electron reached the screen.

30. fred Says:

fred #25

The so-called “branching” looks like a very defined event when the measurement outcomes are countable, like a spin up/spin down situation.
But things get more murky when a measurement covers a continuous spectrum. E.g. when an electron hits a screen, the set of possible positions can be arbitrarily big (depending on the precision of the measurement), and then we can’t really answer “how many branches are created?”.
Mathematically it’s because, for position, the eigenfunctions that form a basis are delta functions (one for each position), and there’s an infinite number of them (one for each potential measured position of the electron), so it would look like an infinite number of branches get created. But all that happens, mathematically, is that the wave function gets decomposed in that special basis (its sum is still the same), and the Schrödinger equation can be applied to that decomposition just like in any other situation.

31. Scott Says:

JimV #27: Einstein wasn’t involved in the Manhattan Project (other than his and Szilard’s original letter to Roosevelt), and never visited Los Alamos. Feynman did give a talk in Princeton prior to WWII that Einstein attended.

32. Scott Says:

fred #26: There’s nothing in the quantum formalism that can cause entanglement between two distant particles to “break,” other than measurement (and for a many-worlder, not even that). If, however, the two particles become separated by a causal horizon—and if we treat that horizon as totally non-negotiable, as in classical GR—we would lose the ability to do any experiment to verify the entanglement directly.

33. dennis Says:

Scott #22:

It still amazes me that otherwise highly intelligent people can take this actual experiment, which you can actually do and which gets an actual result, one that clearly wouldn’t have been possible in a local, classical universe, and make it all go away just by playing around with words and definitions.

Well, it’s not just words and definitions. Any experiment needs to postselect on its full data set to get the violation. Look at the Hensen et al. 2015 loophole-free Bell experiment. They published their data online: 4746 events. For the paper, they selected 245 events, to get a violation with S = 2.42.

With the published data, however, you can do the selection yourself. And depending on how you select, you can also get no violation with S = 0, or even a too large violation with S greater than 2 sqrt(2).

Please try it. I was a bit shocked.
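[For readers curious what’s involved, here is a schematic Python sketch of how the CHSH statistic S is assembled from a list of recorded events. The events below are simulated so as to reproduce the singlet correlations (this is not the Hensen et al. data format, and the sampler is explicitly nonlocal); the point is just the estimator, which one could apply to any published event list:]

```python
import math, random

# Each recorded event = (Alice setting i, Bob setting j, outcomes x, y in {+1,-1}).
# The settings index the standard CHSH angles; the sampler reproduces the
# singlet correlations E = -cos(a - b) by construction (it is NOT a local model).
angles_a = [0, math.pi / 2]
angles_b = [math.pi / 4, 3 * math.pi / 4]

def fake_event(rng):
    i, j = rng.randrange(2), rng.randrange(2)
    corr = -math.cos(angles_a[i] - angles_b[j])
    x = rng.choice([+1, -1])
    y = x if rng.random() < (1 + corr) / 2 else -x  # makes E[x*y] = corr
    return (i, j, x, y)

rng = random.Random(0)
events = [fake_event(rng) for _ in range(100_000)]

def E(i, j):
    prods = [x * y for (a, b, x, y) in events if (a, b) == (i, j)]
    return sum(prods) / len(prods)

S = E(0, 0) - E(0, 1) + E(1, 0) + E(1, 1)
print(abs(S))  # close to 2*sqrt(2) ~ 2.83; a local model is capped at 2
```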

34. fred Says:

beleester #24

“It’s a text completion engine – it doesn’t have a continuous consciousness, it calculates an output based on an input.”

True, but if we start adding to it some sort of recursive mechanism, where part of the output is re-injected as input, self-reference would appear. Technically the output would never really settle totally (it could oscillate or converge asymptotically), and the network would always have some sort of processing echoes (resonances, etc.) going on within it. An internal system clock then becomes more important, to determine the various propagation times, resonance frequencies, etc., i.e. an internal sense of time.
Think of the image formed between two parallel mirrors. Or think of the classical feedback loops in control theory.

Although it may seem reductive to say “it calculates an output based on an input”, this also describes the human brain. Things come in from the senses, get combined with the prior state, percolate through a complex network (the brain “completes” them), and out comes a new state corresponding to new thoughts/feelings/muscle outputs (and that new state will be part of the next input).
When we look at our own thoughts and emotions with focus, we see that they just “appear”, in the same way the responses of those completion engines also just appear. A human brain just has more feedback going on (the output becomes training data).

35. Paul Hayes Says:

Clint #23

Alternatively, you could’ve saved yourself all that torment if you’d consulted the post-von Neumann math. phys. literature. As I think I’ve mentioned before here, it’s been known since at least as early as the middle of last century (Segal, Varadarajan,…) that, mathematically speaking, quantum theory is ‘just’ a natural and straightforward algebraic reformulation and generalization of (Kolmogorovian, classical) probability theory. This fact – and its consequences for understanding and interpreting quantum mechanics – has even been rediscovered.

36. Scott Says:

dennis #32: Have you tried to sort this issue out with the authors? That would seem like a natural first step, before bringing it to blog readers who are poorly placed to evaluate it!

Also, what about the other groups who have since replicated the result?

37. Lorraine Ford Says:

Scott,
Do you think that the locality problem could potentially be restated as an algorithm? I.e., does there exist an algorithm such that:

“IF time or distance difference < a number THEN laws of nature (which involve time and distance) apply, ELSE laws of nature (which involve time and distance) don’t apply”?

38. Scott Says:

Lorraine Ford #36: No, I think the laws of nature always apply. (And what’s the “locality problem”?)

fred #33: Are you talking about GPT3 in particular, or just about some hypothetical future model?

If you feed GPT3’s outputs back in as inputs (which is part of the usual way of using it), then, as the text continues, it becomes not more coherent but less; it becomes more likely to start repeating itself. (OK, I haven’t tried GPT3 myself, just GPT2, but still.)

If you tried to bolt on a clock as part of the input, well, it would presumably start predicting the text to contain the clock signal, but I don’t think this would do much for it.

I don’t think bolting anything simple on to the existing GPT3 would make it start to function as an intelligent agent, not because something of its size and made of artificial neural nets couldn’t possibly be one, but because that isn’t something that its training would make it do.

And, anything from feedback loops in it couldn’t be too big, because its context window can’t be all that big (long enough for a fair chunk of text, but substantially less than a chapter of a book), and so there’s only so much room for whatever information might be stored in such feedback loops.

40. Lorraine Ford Says:

Scott #37:
I agree. Time and distance can have no modifying effect on the laws of nature; there is no such algorithm. I.e. there is seemingly no actual problem with the existence of “faster than the speed of light” so-called “communication” collapsing one half of an entangled pair: the laws of nature are independent of time and distance.

41. Olivier Says:

In the multiverse, they are both right.

42. Scott Says:

Lorraine Ford #39: If you could use entanglement to transmit a signal faster than light, then there would be a serious problem with special relativity, generally leading to the existence of closed timelike curves. But you can’t: that’s the “no-communication theorem” of quantum mechanics, which I teach in my undergrad class.

43. Lorraine Ford Says:

Scott #41:
There might be a no-communication “theorem”, but seemingly this theorem would conflict with the idea that there are such things as laws of nature (i.e. relationships involving space and time) that are independent of space and time in the sense that they sort-of don’t exist inside space and time.

44. Scott Says:

Lorraine Ford #42: The statement of the no-communication theorem is that, in quantum mechanics, if Alice and Bob share an arbitrary entangled state, then nothing that Alice chooses to do to her half alone can affect the probability of any outcome of any measurement that Bob performs on his half alone. The laws of nature existing outside space and time just has nothing to do with it—I don’t know why you thought it would!
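
The theorem is easy to check numerically for a single Bell pair. Here is a sketch in plain Python/numpy; the particular state (|00⟩+|11⟩)/√2 and the rotated measurement basis are illustrative choices, but the conclusion holds for any entangled state and any measurement Alice performs:

```python
# Hedged sketch of the no-communication theorem for one Bell pair:
# whatever basis Alice measures in, the *average* state on Bob's side
# (his reduced density matrix) is unchanged, so his outcome statistics
# cannot carry a signal. Plain numpy; no quantum library assumed.

import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2), as a 4-vector (Alice (x) Bob).
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def bob_reduced(rho):
    """Partial trace over Alice's qubit of a 2-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)      # indices: a, b, a', b'
    return np.einsum('abac->bc', r)  # trace over Alice's index

def bob_after_alice_measures(rho, theta):
    """Bob's average state after Alice measures in a basis rotated by theta."""
    c, s = np.cos(theta), np.sin(theta)
    basis = [np.array([c, s]), np.array([-s, c])]
    out = np.zeros((2, 2), dtype=complex)
    for v in basis:
        P = np.kron(np.outer(v, v.conj()), np.eye(2))  # project Alice only
        out += bob_reduced(P @ rho @ P)  # sum of unnormalized branches
    return out

before = bob_reduced(rho)                        # I/2: maximally mixed
after = bob_after_alice_measures(rho, theta=0.7)  # any angle gives the same
print(np.allclose(before, after))  # True
```

Bob’s density matrix is I/2 before Alice acts and I/2 after, averaged over her outcomes, which is exactly the statement above.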

45. Lorraine Ford Says:

Scott #43:
Leaving aside the issue of how one might understand “choice” and “probability”, there is still the issue of the laws of nature. The laws of nature themselves imply that faster-than-light “communication” does in fact exist, but it is not actually “communication”, it is merely relationship. So does relationship exist?

46. Matt297 Says:

Re: Quite Likely #17

> I don’t really have any physics expertise but this explanation has always jumped out at me – aren’t all the results we see from “entanglement” just explained by the conservation of spin / momentum?

Nope. Classical physics also features these kinds of conservation laws. They’re not what makes quantum entanglement do what it does.

> Why is another ‘hidden’ variable needed besides the obviously existing one that’s actually being measured?

Wigner’s friend is measuring something inside the box. If Wigner’s friend gets a definite outcome, but Wigner on the outside instead sees a quantum superposition (by the assumption of unitary evolution), then the definite outcome found by Wigner’s friend is a hidden variable, and the quantum state vector seen by Wigner on the outside is manifestly incomplete, precisely in Einstein’s sense.

> There’s a very neat resolution, which Bell himself noted, if you accept that (a) the universe is [super-]deterministic, and (b) we are within the universe and not running experiments from an outside or arbitrarily privileged position.

Bell’s theorem assumes hidden variables as a premise. Superdeterminism is a loophole to the conclusion of Bell’s theorem, meaning that superdeterminism is a potential way of escaping the conclusion that hidden variables would have to be dynamically nonlocal.

Superdeterminism is not a way of resolving the conflict between unitarity and definite outcomes, which together require some form of hidden variables – i.e., something beyond quantum state vectors.

47. Raoul Ohio Says:

In an unusual example of entanglement, Sean Carroll also posted today: an interview in Quanta Magazine covering related topics. In my brain, “emergence” has moved from spooky to obvious.

Check it out:

https://www.quantamagazine.org/where-do-space-time-and-gravity-come-from-20220504/

48. John R Says:

I always feel there is an elephant in the room that no one addresses. Quantum physics has a mystical, non-mathematical part to it that is NOT described by the mathematical model behind QCD. This is the hand-waving “wave function collapse”, which is something that presumably happens in an instant. (I don’t believe we can measure things in an instant; there is always some time frame it takes for the measurement to be made, no matter how small, but the arguments presented here won’t depend on that either way.)

The mathematical model of QCD that applies up until that (conscious?) measurement is fully consistent with special relativity (from which it is derived) and presents a continuous description. At the moment of magic of the conscious observation, though, AND in the frame of that conscious observer, things are discontinuous, based on the magic of the collapsing wave function. (That collapse is NOT predicted by a mathematical model, something admitted by famous quantum physicists as well.)

This actually sets up a paradox. Consider Sally, who makes an observation of a quantum particle that collapses the wave function from a plane wave to an exact position (a delta function in space). Along her worldline, at that “instant” of observation there is a discontinuity in the wave function, but probability, particle/charge count, etc. are preserved. BUT for Bob, who is traveling at some speed relative to Sally, the worldline is different, and what he would see is this: before the measurement is taken, the plane wave will never integrate to 100%, and after the measurement Bob will see part of the plane wave along with the delta function traveling along Sally’s worldline. He would see a greater than 100% probability of finding the particle. This is the prediction presumably made by QCD under these circumstances. The physics for Bob is now fundamentally altered, and even conservation principles seem to get violated.

49. blackeneth Says:

Scott #43: I mail you a package. You open it and inside it is my left shoe. Now you know that I possess my right shoe.

P.S. Please send back left shoe.

50. Andrei-lucian Drăgoi Says:

I agree with Einstein. Quantum mechanics may indicate something deeper: the fact that both general relativity and quantum field theory may in fact be manifestations of a quantized space, and the same prototype voxel of spacetime (with possibly multiple distinct levels of excitation) may explain both spacetime curvature and all quantum fields.
See three of my papers:

Best regards!
Dr. Andrei Lucian Drăgoi
http://WWW.DRAGOII.COM

51. Matty Wacksen Says:

>After all, in many areas Einstein was no Einstein.

To Pinker’s more general point: we could probably add human relationships to his list. There is, for example, the infamous (and cruel) letter to his wife, whom he (probably) drove into a nervous breakdown.

52. Danylo Says:

Einstein was a true believer in Laplace’s demon; that is, the idea that the Universe is entirely deterministic if you know enough information about its past. Due to Bell and all the confirmations of QM, we now know this can’t be true, at least not the way they thought about it at that time.

On the other hand, we don’t really know what the alternative to Laplace’s demon would be (the universal wave function is a poor replacement, in my opinion). Thus, there must be something deeper about QM that we don’t know yet. (Personally, I believe that messaging back in time is possible in some sense.)

So, they were both wrong about something. Which is completely natural.
Although it looks like Bohr’s score is a bit higher, since his position was more open towards QM.

53. Ordinary Joe Says:

Maybe it’s just me, but I always found the Kochen-Specker result to be a whole lot ‘weirder’ than Bell’s. I am still not sure how the various interpretations deal with contextuality, especially the psi-ontological ones.

54. Anon Says:

What about the de Broglie-Bohm “pilot wave” interpretation of QM? It’s always seemed like the most plausible interpretation to me.

56. AK Says:

Scott #0: Einstein said that quantum mechanics can’t be the final answer, it has ludicrous implications for reality if you actually take it seriously, the resolution must be that it’s just a statistical approximation to something deeper, and at any rate there’s clearly more to be said.

Einstein was right. Quantum mechanics does not have answers for reality.

We may never have a theory about reality because reality is unknowable or most likely does not exist.

Bohr was wrong. He did not understand that Einstein’s question was not about quantum theory as a theory about our knowledge related to reality. He wanted a theory related to reality.

Now we know that there can not be such a theory.

The need for a statistical approach in microscopic physics while building a model or theory (the distribution of quantities like energy and momentum among a system of particles, etc.) arises from the fact that there is something missing which doesn’t allow us to exactly ascertain the values of dynamical variables.
This leads to putting forth certain empirical or semi-empirical assumptions that pave the way to statistical laws of distribution.

***
Here comes the question of degrees of freedom.
In any dynamical system, if the number of known degrees of freedom is less than the actual number of degrees of freedom, you have to make certain assumptions based on experimental data.
The same is the case with the quantum mechanics of systems of particles.
Einstein’s view regarding quantum mechanics is about these missing degrees of freedom.

58. Sandro Says:

Clint #23:

I find that even pretty much everyone working in quantum computing still thinks that “quantum mechanics” is about physics.

I still think QM is about physics. It’s just that physics is about information and vice versa.

59. Lorraine Ford Says:

What is communication? One characteristic is that communication seems to be a many-step process. Does primitive nature even do communication?

The only instantaneous “communication” (not really communication), a one-step process, is via the law of nature relationships, where change in the numbers of some variables leads to change in the numbers of some other variables, seemingly purely by virtue of relationship, without any calculation steps on the part of nature (though it would take calculation steps on the part of people who tried to symbolically represent the situation).

60. Spencer Says:

Reality is emergent in all of the universes.

61. Matt Mihelic, MD Says:

In a physical system, quantum information can transition to classical information across an energy barrier that is appropriate to the Landauer limit (kT•ln2), which is essentially the quantum limit of a bit of information. Bohr’s concepts of statistical quantum mechanics are relevant to energy levels of less than the Landauer limit, while Einstein’s concepts of deterministic quantum mechanics are relevant to energy levels of greater than the Landauer limit. For a discussion of such quantum-to-classical transition of information in the DNA molecule please see the 20 minute presentation at the beginning of https://www.youtube.com/watch?v=FetQ5KThiSM

62. Clint Says:

Hi Paul #34:

Yes, I agree!

And, thank you for pointing to the expansive and deep history of quantum probability theory which I only began alluding to when saying “Feynman, Deutsch, Manin, etc.” One can easily make the argument that it’s always been nothing but talk about a general probability theory … even when whoever was doing the talking didn’t explicitly say it 🙂

I’m aware of Varadarajan’s work on the Geometry of Quantum Theory (which I thank for explaining Wigner’s projective representations of groups) and his book on Lie Algebras, and recommend them both highly. Segal is familiar from C*-algebras. However, I don’t know of (which means I’m probably ignorant of) works by either Varadarajan or Segal that explicitly promote quantum theory as a probability theory that’s “not about physics” – although no doubt the core idea is in there if we go back even to Von Neumann (1932 – it’s all right there!), Jordan, etc., of course! In any event, I would still put both Segal and Varadarajan in the “we are talking about physics” camp, even as they were, especially Varadarajan, clearly aware of the general probability stuff. But mostly that was because no one really seemed to be aware that there was anything except a “this is about physics” camp … but again, maybe I just missed it.

The first clear hints I had were with Gudder’s Quantum Probability and Pitowsky’s Quantum Probability – Quantum Logic back in the ’80s. (Pitowsky’s book I actually came across while researching a project for a philosophy course!) Maybe there were clearer signals to others in the physics literature, but I was just unaware. You are correct: if only I’d had more insight, more awareness, and maybe been less lazy about keeping up, I might have realized this sooner! Still, at that time, at least my own perception was, even reading Gudder and Pitowsky, that we (the academic community and culture at large) were all still “talking about physics” when we talked about “quantum mechanics” – again, best I recall. Then came the ’90s developments in quantum computation, culminating in the canonical “Mike and Ike” … I think I was almost there by that point … but to be honest I still thought … “this is all about physics”.

But don’t we all remember how this “story” was told for the last 60 or 70 years in undergraduate and general introductions to “quantum theory”? Quantum probability theory was never a chapter in an undergraduate math class (at least not for 99% of us in science or engineering disciplines). “Quantum mechanics” was typically “performed” in physics courses like a magic trick … and you know how this went … A physics instructor gets up and says, “Now, imagine we have a wall with two slits, a screen beyond, and we fire particles at the slits …” The magic trick basically relies on our preconceptions about classical “particles” and “waves” … which, again, were preconceptions from earlier physics courses (or the general cultural language we inhabited).

The implication of the explanation was that quantum mechanics comes from physics, that it is something fundamentally physical like gravity, or entropy, or the mass-energy equation – that it is a “physical thing”. Yes, if I had really understood the core theoretical foundations by reading more deeply in Von Neumann etc then I would not have had that misunderstanding. But I was focused more on getting an engineering degree at the time 🙂 Only later did I go back and start digging into the foundational literature. The thing is that the postulates of quantum theory are extremely simple! Once a student has linear algebra then they’re ready and it’s easily done in a one-hour lecture (!)

What did not happen was my physics instructors did not say, “Now let’s begin by saying that we want to build a predictive model of anything at all we might observe in the universe. The first thing we need to do is agree on a good general probability theory. What features should it have? Linear evolution? OK. Measurement operators? OK. Combining state spaces? OK. How about using vectors for representing states? OK. Well, here, let’s use this one that uses complex numbers (amplitudes) because their closure and some other properties make things better for us. Notice how this theory allows for positive or negative amplitudes that interfere! Isn’t that a nice feature! Now, once we have this general probability theory, let’s set up this experiment with two slits …”
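
The “amplitudes that interfere” feature in that hypothetical lecture can be made concrete in a few lines. This is a toy calculation with made-up amplitudes, not a model of any real experiment:

```python
# Toy illustration of amplitude interference: with complex amplitudes, the
# two-slit probability |a1 + a2|^2 differs from the classical sum
# |a1|^2 + |a2|^2 by a cross term that can be negative (destructive
# interference) -- the feature a classical probability theory lacks.

import cmath

a1 = 1 / 2 ** 0.5                         # amplitude via slit 1
a2 = cmath.exp(1j * cmath.pi) / 2 ** 0.5  # slit 2, phase pi relative to slit 1

quantum = abs(a1 + a2) ** 2               # ~0: complete destructive interference
classical = abs(a1) ** 2 + abs(a2) ** 2   # 1.0: what "it went through one slit
                                          # or the other" would predict
print(quantum, classical)
```

Changing the relative phase sweeps the quantum probability anywhere between 0 and 1, while the classical sum never moves.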

Less of a magic show. But, of course, that is not how it happened historically to the physicists themselves. Understanding the foundations as well as appreciating the reasons why this particular probability theory might be the one we should choose … those understandings did not arrive until much later … for example, as to why complex numbers are required. And, yes, I understand that when explaining something to non-specialists essential complexities have to be left out which can leave the non-specialist with incomplete or outright wrong conceptions.

To be clear in explaining the motivation behind my above post nominating Scott for “Quantum Debate Winner” … to the best of my knowledge, which yes is always woefully too little, Scott was the first to stand up and say clearly, “Quantum mechanics didn’t have to come from physics. This is not about physics.” I don’t, of course, think that the physics community was purposefully misleading the rest of us … so much as this was just how the story developed historically for them and so it served as both an introductory history lesson and a way to “bring along” the rest of us … and it was a neat magic trick which we can all attest to if (almost certainly) the first time you saw it you were coming into it with a preconceived basis in classical logic. That “classical baggage” then creates a lot of “interpretational noise” within and outside of the physics community that then becomes like an exercise in Wittgensteinian philosophy to get oneself out of the fly bottle …

My feeling is that this has been a slow unfurling of something like an “Emperor’s New Clothes” moment for physics. Like someone is taking their prized show pony away. Does this mean that physics turns out, in the end, to be just “experimentation and applied computing” and not really some profound revelation about the nature of reality? Maybe that depends on how general relativity is reconciled with QM 😉 Maybe our loss of feeling like physics actually describes “reality” is the heart of the psychological “mystery” or “shock” of quantum theory.

It felt taboo for someone to stand up and say … “Quantum mechanics is not about physics.” And the first time that I came across Scott saying that my first thought was “You can’t say that! Heretic!” But then I went back and looked at Mike and Ike, Feynman, Von Neumann, Pitowsky … and realized “Oh, actually, that’s true.” And realized what I had not understood … or missed for lack of digging deep enough to uncover the message hidden right there, as you correctly point out, in plain sight in the original and subsequent literature.

63. Alex Z Says:

In my book Einstein wins, because Bohr got to live long enough to see a consistent and concise explanation come along (in the form of a visit from Everett) and duly quashed it. I believe that if Einstein had lived a year or two longer he would have seen Everett’s explanation and adopted it himself. No hidden variables, but no dice either!

If only I’d been born on that branch!

64. JimV Says:

Thanks for the correction, Dr. Scott. My memory is getting very unreliable. I make an effort to check it sometimes but not always. I think the main reason people get less intelligent/productive with age is that loss of memory.

To blackeneth #47, although I am stepping on your joke, I insist that the general case of measuring the spin of one entangled particle does not instantaneously tell you what the spin of the other particle would be if measured in the same plane. It allows you to make that prediction, but prediction is not actual knowledge until confirmed. You could have made an error in your measurement setup, the particles could have lost entanglement previously, et cetera. And, as your joke points out, there are classical cases which allow instant predictions also. So as I understand it (maybe) QM defies causal locality but not informational locality.

65. Paul Hayes Says:

Hi Clint #59

Maybe there were clearer signals to others in the physics literature but I was just unaware. You are correct if I only had more insight, more awareness, and maybe been less lazy keeping up, maybe I would have realized this sooner!

It’s hardly your fault. One shouldn’t have to trawl the math. phys. literature to find what should be in every undergrad. QM textbook. Much less wade through the swamp of errors, misconceptions and misinterpretations that has grown around the subject over the years and dominated the discourse.

AFAIK, Segal was the first to explicitly promote this understanding.

66. Clint Says:

Hi Paul #62,

I was unaware of that! Although now that I look for it I do see that it is referenced in Kolmogorov’s Foundations text.

Another great example of exactly why I hang around this blog – people like you and Scott who are kind with their time and I’m always learning something new.

Thank you, sir!

67. Norm Margolus Says:

Scott #41: If there were some physical effect that allowed communication faster than light, then it would mean that our current version of special relativity is wrong in situations where this effect matters. Causality would not necessarily be affected. After all, Galilean relativity is causal.

68. Scott Says:

Norm Margolus #64: That’s why I used the word “generally.” There are ways to get superluminal signalling without closed timelike curves, but they all involve trashing the structure of special relativity.

69. Tu Says:

Alex Z #60:

Einstein considered the Schroedinger’s cat scenario a reductio ad absurdum, so I am highly sceptical that he would have been excited about the many worlds interpretation of QM.

I refuse to allow the many worlds people to claim Einstein for themselves, in the comments of this blog or elsewhere.

I imagine he would have enjoyed talking with Bell (who regarded many worlds negatively), since Bell took the questions he raised in the EPR paper seriously.

70. Scott Says:

Lorraine Ford #44:

The laws of nature themselves imply that faster-than-light “communication” does in fact exist, but it is not actually “communication”, it is merely relationship. So does relationship exist?

Relationships exist between words and their referents! 🙂 E.g., when I say “communication” I mean “communication,” the sending of a chosen signal from one part of our universe to another part. You’re free to talk to others here, but I’m not going to spend more time replying to you until I see that you’re using words in ways that I understand.

71. Scott Says:

Indeed, while I’d love to know what Einstein would’ve thought about MWI, I’m not going to pretend to. If anyone wants to make a case one way or the other, let them ground it in actual quotes from him.

72. mjgeddes Says:

The right way to look at QM, it’s clear to me now, is in terms of information, computation, the arrows of time, and levels of abstraction.

One needs to carefully distinguish the types of information one is talking about, separate out the different levels of abstraction, and then establish how they all relate.

When we refer to the ‘wave function’ (the universal wave function), we’re referring to a purely mathematical, timeless level of existence. This type of information is about the space of possibility rather than actuality. When we refer to actual physical observables, we’re referring to a type of information that is in some sense just the opposite: this type of information is about specific actualities, and it exists on a different level of abstraction, a higher level.

So two types of information at two different levels of abstraction. The physical observables are a high-level manifestation of one particular type of information. But information runs deeper than physics, and beneath the physics level, at the lower level, there’s the other type of information, about the space of possibility.

The missing link is a 3rd type of information that hasn’t yet been understood by science. This type of information refers to the arrows of time. It exists on the level of an information geometry, and is the interface between the info about the high-level physical observables, and the low-level info about the space of possibility (the wave function in Hilbert space).

Although there do exist ‘many worlds’, not all worlds are allowed to exist, only those with a definite ‘arrow of time’. So some of the ‘worlds’ in the wave function, are mere ‘ghost worlds’, and are never actualized on the level of physical observables.

In short, both Bohr and Einstein were half-right and half-wrong. Einstein was correct to insist on an objective picture, and that is the “information” picture I summarized above. So in a sense there are “hidden variables” after all. But he was wrong to think that these can specify physical observables, which are a high-level (emergent) form of information. That kind of information can’t be complete, because there’s a deeper level beneath the level of physics.

My summary:

A reduced version of MWI is correct, but there are some hidden variables imposing constraints on what kinds of ‘world’ can get actualized at the level of physical observables. Only those worlds with definite arrows of time can ‘exist’ at the physical level.

73. James Says:

“The implications for reality could indeed be described as “ludicrous” from a classical perspective, arguably even more than Einstein realized. And yet the resolution turns out simply to be that we live in a universe where those implications are true.”

Nice. But just what are some of these most shocking implications that you have in mind? If you were forced to provide the three most shocking, human-relevant implications, what would you say they are?

74. Scott Says:

James #70: Shor’s algorithm, Bell inequality violation, and … tough competition for the third! Elitzur-Vaidman bomb? QKD? MIP*=RE?

75. Matt Mihelic, MD Says:

Perhaps “A Gedanken Quantum Love Story” (from the Appendix of https://dc.uthsc.edu/cgi/viewcontent.cgi?article=1018&context=gsmk_facpubs) will provide some insight with regard to communication via entangled particles and the no cloning theorem:

“You’ve probably heard a lot of talk about Alice and Bob, but most folks don’t know that
the original Alice and Bob were actually real people from the hills of East Tennessee whose
names were Alice Hatfield and Bob McCoy. Like many kids in East Tennessee their parents
were nuclear physicists working at Oak Ridge National Laboratory, but between their families
there was a long-standing familial feud over something about quarks, and upon learning that
Alice and Bob had taken a fancy to each other at school their families forbade any further
interaction between the two. But Alice and Bob had already fallen deeply in love and planned to
elope to Trenton, Georgia and get married. The plan was for Alice to find the right moment to
slip away and signal Bob to come pick her up at her home, but they had a problem in that Eve
Hatfield, Alice’s mother, was a security freak and was monitoring all of Alice’s communications,
whether by cell phone, radio, internet, etc. Alice was very concerned and said, “Oh Bob, how
can I ever let you know when I’ll be able to sneak away to marry you without my family
knowing?” Bob became very depressed over this and went to see his father’s brother who was
the town doctor, in the hope that his Uncle Leonard might prescribe him an antidepressant. Bob
told his Uncle Leonard about his problem of not being able to arrange for an appropriate
undetectable elopement signal with Alice, and after listening to Bob’s predicament Uncle
Leonard said, “Bob, you have situational depression and the cure for you is not in a medication,
but rather is in the resolution of your situation. Now, I’m just an old country doctor but I do
think that I have something that can help.” Then Uncle Leonard stepped out of the exam room
for a moment and returned with two identical small boxes saying, “Bob, like many folks here in
East Tennessee I’m very interested in quantum information science, and here’s something that
I’ve been working on in my spare time. Each of these boxes contains one of a pair of quantum
entangled particles that are being held coherently, and each box has three lights on it that tell the
state of the particle. Green is for spin-up, red is for spin-down, and yellow is for the unmeasured
coherent state. Each box also has a switch that can be moved from the coherent position to the
measurement position, thereby measuring the state of the particle in the box. Right now the
lights on both boxes are yellow, but when the particle spin direction is measured the light on
either box will simultaneously change from yellow to either green or red, because the two
particles that had been entangled would then declare their spin states. This is a way that Alice
can signal you without being detected.” Bob was delighted and when he saw Alice at school he
gave her one of the boxes with the instructions to move the measurement switch on her box
when the coast was clear for them to meet. Bob watched his box carefully, and when the light
changed from yellow to green he drove by and picked up Alice without anyone else knowing,
and they were married that day. Their marriage and subsequent children eventually led to
resolution of the feud between their families, and Bob and Alice were living happily ever after
until one night when Bob awoke in a cold sweat and shook Alice awake saying, “Alice, I fear
that we have been living a lie because I learned in my graduate physics class that you can’t send
classical information between two entangled particles. You need a classical channel to send
classical information, so how could you have signaled me through those boxes when we eloped?” Alice said that she had asked Uncle Leonard that same question before he left to join the Space Force, and that he had told her that in this case it was all right. Bob kissed his wife and went back to sleep, secure in his love for Alice and in knowing that his Uncle Leonard was out there somewhere helping to keep the galaxy safe.
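
Bob’s graduate-school worry is the no-communication theorem, and it can be checked numerically: whatever basis Alice measures her half of the pair in (or whether she measures at all), Bob’s half looks maximally mixed. A minimal sketch assuming NumPy (all function names are mine):

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2); qubit order: (Alice, Bob).
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)  # 4x4 density matrix of the pair

def bobs_state(rho):
    """Trace out Alice's qubit, returning Bob's 2x2 reduced density matrix."""
    r = rho.reshape(2, 2, 2, 2)          # indices (A, B, A', B')
    return np.einsum('ibid->bd', r)      # sum over Alice's diagonal

def alice_measures(rho, theta):
    """Alice measures in a basis rotated by theta; an *unread* measurement
    is just dephasing by the two projectors (a Kraus sum)."""
    c, s = np.cos(theta), np.sin(theta)
    P0 = np.outer([c, s], [c, s])        # projector onto rotated |0>
    P1 = np.eye(2) - P0
    out = np.zeros_like(rho)
    for P in (P0, P1):
        K = np.kron(P, np.eye(2))        # acts on Alice's qubit only
        out += K @ rho @ K
    return out

before = bobs_state(rho)
after = bobs_state(alice_measures(rho, theta=0.7))  # arbitrary basis choice

print(np.allclose(before, np.eye(2) / 2))  # True: maximally mixed either way
print(np.allclose(before, after))          # True: no signal reaches Bob
```

The yellow light on Bob’s box could never turn green from Alice’s action alone; his local statistics are identical before and after her measurement.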

76. kashyap vasavada Says:

Once you accept that QM is a probabilistic theory, then everything is OK about QM. There is nothing wrong with using large statistics; we use that in lots of different scientific fields. So, since Einstein did not have any alternative, and nobody else has had a deterministic theory for some 100 years now, I would say Einstein was wrong and Bohr was right. One should not feel sorry for Einstein!! He got so many honors for being right in many different areas!!

77. Scott Says:

kashyap vasavada #73: The idea that being probabilistic is the problem with QM couldn’t be further from the truth. The real issue is what you have to do to calculate the probabilities!

Are you unaware of the measurement problem, or do you just think that it’s not a problem or is trivially solved or something?

78. Lorraine Ford Says:

Scott #67:
I agree. I think I expressed what I was trying to say somewhat more clearly in Lorraine Ford #56.

So, as opposed to Alice and Bob, I would question whether base-level nature itself could actually be communicating, if only because communication seems to be a several-step process. Communication seems to imply a higher-level intention that could seemingly only come from higher-level organisms. So I can’t see how communication can be a model for what is happening at the base level of the world: there are seemingly no higher-level minds there. I would have thought that, even if base-level reality had low-level minds, they would only be capable of performing quite simple single steps.

Another issue is: are there steps in nature? Seemingly there ARE steps. But the steps that low-level nature takes are not necessarily the same as the steps that people need to take when they use symbols to try to represent the steps that low-level nature takes. E.g. people need to take quite a few steps to do calculations, but low-level nature is seemingly not doing calculations; nature can seemingly effect number change purely via (law of nature) relationship.

79. Paul Hayes Says:

Scott #74

Presumably you’re aware that for those of us who understand QM as ‘just’ probabilistic mechanics (albeit done right, i.e. using QP instead of CP) there is at least only a small measurement problem?

80. kashyap vasavada Says:

Yes, I am familiar with the so-called measurement problem. But it, and the numerous interpretations of QM, are due to the fact that the apparatus has to be of a size we can see and touch, while the atomic and particle world we are trying to study is billions of times smaller than us and the objects of our daily life. We have no experience of living in the atomic world. So that does not worry me. Agreement between theory and experiment, even statistical agreement, is fine with me! BTW, who can imagine the expansion of space and the big bang? Yet many of us accept them.

81. Bryn Ay Says:

Einstein was correct that Quantum theory is incomplete and this relates to Godel’s theorem. Incomplete theories are always open to new interpretations and insights. Quantum theory is no different. It is always evolving and growing as we learn more about the universe. Godel’s theorem states that any consistent theory of mathematics can never be complete. This is because there will always be new truths to discover and new ways of looking at things. The same is true for Quantum theory. There will always be new discoveries to be made and new ways of understanding the universe.

The Copenhagen interpretation of quantum mechanics, the most widely accepted version of quantum mechanics, was defended by Niels Bohr, one of the most important physicists of the 20th century. Bohr argued that the wave-particle duality of quantum mechanics cannot be understood in terms of classical physics, and that the only way to make sense of it is to accept that particles can exist in more than one state simultaneously. This may seem strange, but it is actually a very natural way of thinking about the world. After all, we are constantly bombarded with particles that exist in more than one state simultaneously, such as photons. The Copenhagen interpretation provides a sound epistemology for quantum mechanics, and should be accepted as such.

82. Brian Flanagan Says:

A theory that yields “maybe” as an answer should be recognized as an inaccurate theory.

~’t Hooft

I think it very likely, or at any rate quite possible, that in the long run Einstein will turn out to be correct.

~Dirac

Today there is no interpretation of quantum mechanics that does not have serious flaws, and [we] ought to take seriously the possibility of finding some more satisfactory other theory, to which quantum mechanics is merely a good approximation.

~Weinberg

Whatever the meaning assigned to the term ‘complete,’ the following requirement for a complete theory seems to be a necessary one: every element of the physical reality must have a counterpart in the physical theory.

~EPR

Well, obviously the extra dimensions have to be different somehow because otherwise we would notice them.

~Green

If you ask a physicist what is his idea of yellow light, he will tell you that it is transversal electromagnetic waves of wavelength in the neighborhood of 590 millimicrons. If you ask him: But where does yellow come in? he will say: In my picture not at all, but these kinds of vibrations, when they hit the retina of a healthy eye, give the person whose eye it is the sensation of yellow.

~Schrödinger

83. gentzen Says:

Anon Says #52:

What about the de Broglie-Bohm “pilot wave” interpretation of QM? It’s always seemed like the most plausible interpretation to me.

It depends on what you are interested in. If you want to know mathematically, whether there are mathematically consistent models reproducing the effect of the collapse postulate in QM in some appropriate way, then “pilot wave” interpretations can be illuminating. But if you want to know what “really exists” in physical spacetime, then “pilot wave” interpretations are pretty useless. (The pure wave function does not exist in physical spacetime, but only in configuration space. And the particle positions which exist in physical spacetime don’t correspond to anything directly observable, or even anything having much “causal power” enabling indirect observation. But at least those “configurations” have “consistency power”.)

An advantage of “pilot wave” interpretations is that they can be attacked, and you can get clear answers from their proponents on the relative merit of your specific attack. Even better, you can get similar answers even from non-proponents:

For De-Broglie Bohm, a pure state defines a measure on the configuration space of particle positions. …

One problem is how to get randomness from one particular configuration of particle positions drawn according to that measure. This is achieved by the notion of typicality: the particular drawn configuration of particle positions has an overwhelmingly huge probability of being typical, at least if the configuration space is big enough. How big is big enough? Not sure, but at least 10^23 is definitely big enough.

The other problem is how to get actual measurement results from particle positions that are, in a certain sense, not directly observable either. But at least they exist, according to the ontology of Bohmian mechanics. And so you end up with a measurement theory which tries to explain this.

One advantage of De-Broglie Bohm compared to MWI is that it “paid rent” more than once to various people, both before and after it led Bell to discover non-locality. (If you don’t want to count Bohm’s two papers from 1952, you should at least count Everett’s dissertation, which makes it pretty clear that he read and understood Bohm’s papers.) But maybe this objection to MWI is just caused by claims of its proponents that MWI is inevitable. Anyway, Son Goku’s attack on the “pilot wave” interpretation linked above was indirectly caused by me pointing out an instance where “pilot wave” interpretations had “paid rent” again, this time to Sheldon Goldstein, Roderich Tumulka, et al.:

It seems that density operators don’t fit into that De-Broglie Bohm framework, but I recently learned that some Bohmians (Goldstein, Tumulka, et al) attacked that problem head on (https://arxiv.org/abs/quant-ph/0309021):

It is thus not unreasonable to ask, which mu, if any, corresponds to a given thermodynamic ensemble? To answer this question we construct, for any given density matrix rho, a natural measure on the unit sphere in H, denoted GAP(rho). We do this using a suitable projection of the Gaussian measure on H with covariance rho.

Sadly, Jozsa, Robb, and Wootters already constructed the same measure in 1994 (https://journals.aps.org/pra/abstract/10.1103/PhysRevA.49.668), as I learned from a follow up paper by the Bohmians (https://arxiv.org/abs/1104.5482).
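
For concreteness, here is a toy numerical sketch of the guidance equation discussed above, in units ħ = m = 1 (the packet, step sizes, and function names are all my own illustrative choices, not anything from Bohm’s papers): the velocity field is Im(ψ′/ψ), and for a spreading free Gaussian the trajectories fan out without ever crossing.

```python
import numpy as np

# Toy de Broglie-Bohm trajectories in 1D (units hbar = m = 1): a free
# Gaussian packet psi(x,t) guides point particles via  dx/dt = Im(psi'/psi).
# A cartoon of the guidance equation, not a serious solver.

def psi(x, t, sigma=1.0):
    """Spreading free-particle Gaussian packet (normalization irrelevant,
    since only the ratio psi'/psi enters the velocity field)."""
    s = sigma**2 + 0.5j * t
    return np.exp(-x**2 / (4 * s))

def velocity(x, t, eps=1e-6):
    """Guidance equation: v = Im( (d psi/dx) / psi ), by finite differences."""
    dpsi = (psi(x + eps, t) - psi(x - eps, t)) / (2 * eps)
    return np.imag(dpsi / psi(x, t))

# Integrate a few trajectories from "typical" starting points (Euler steps).
dt, steps = 0.01, 2000
xs = np.array([-1.5, -0.5, 0.5, 1.5])
t = 0.0
for _ in range(steps):
    xs = xs + dt * velocity(xs, t)
    t += dt

print(xs)  # trajectories fan out as the packet spreads, and never cross
```

The non-crossing of trajectories is exactly what lets a measure over initial positions reproduce the |ψ|² statistics at later times.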

84. Vick Says:

I must be misunderstanding the “wonder” of entanglement: You have an entangled pair, unobserved, and you send one to someone who observes it. Then, you observe yours… and, Voila! It is opposite.
Isn’t that the gist of entanglement wonder? If I do the same thing with a pair of shoes, is it miraculous?
P.S. Has anybody ever tried to force a chosen spin state on separated entanglements?

85. paradoctor Says:

“Quantum mechanics sits at a level between math and physics that I don’t know a good name for.”

Physical mathematics?

I have this awful suspicion that the correct theory of quantum superposition is itself a superposition of theories.

86. Karen Minto Says:

Having read Paul Davies and a couple of other simplified books on quantum theory, I found this discussion interesting on my level. Thank you!

87. maogl Says:

I have just a few notions of quantum physics, but I have a question about the meaning of probability in the many-worlds interpretation. Under Copenhagen, reality pops into existence according to the wave function and the Born rule. Under many worlds, the wave function and the Born rule give us the probability that an observer finds himself in a particular universe. But does this probability come from the frequency of different universes? Universe A has probability 99, universe B has probability 1 – does this mean universe A is 99 times more frequent than B? But how many of these A and B universes are out there? Infinitely many? What would such a many-worlds universe look like from the outside? (Sorry if I’m missing something basic.)

88. Ordinary Joe Says:

Vick #81

What makes the QM situation different from the shoes is that the shoes are prepared initially with different orientations. In QM, the Bell correlations seem to imply that particles are not created with their properties well-defined like that; the properties only come into existence during measurement, hence the talk of ‘spooky action at a distance.’
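
The shoes-versus-particles distinction can be made quantitative with the CHSH inequality; a minimal numerical sketch (the angle choices are the textbook-optimal ones, everything else here is my own illustration):

```python
import numpy as np
from itertools import product

# Quantum prediction for the singlet state: E(a, b) = -cos(a - b),
# where a and b are the two measurement angles.
def E(a, b):
    return -np.cos(a - b)

# CHSH combination with the standard optimal angle choices:
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4
S_quantum = abs(E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1))
print(S_quantum)  # 2*sqrt(2) ~ 2.828, the Tsirelson bound

# "Pair of shoes" model: each particle carries predetermined +/-1 answers
# for both settings. Enumerate every deterministic assignment:
S_classical = max(
    abs(A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1)
    for A0, A1, B0, B1 in product([-1, 1], repeat=4)
)
print(S_classical)  # 2: the CHSH bound that no shoe-like model can beat
```

Shoes can only ever reach S = 2; the measured violations up to 2√2 are what rule the shoe picture out.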

89. Roger Schlafly Says:

If you believe in MWI, then both Bohr and Einstein completely missed what quantum mechanics is about. What makes QM unusual is not measurement, or entanglement, or superposition, or probability, or complementarity. It is the continuous splitting of the universe into parallel worlds, where essentially everything happens somewhere.

While Bohr and Einstein did not live to see Bell’s Theorem, they were probably aware of von Neumann’s 1932 textbook, with a theorem that had similar conclusions: that is, under certain hypotheses, QM cannot be recast as a theory of classical variables.

90. Scott Says:

Learn before you comment, rather than commenting before you learn! 😀

91. Scott Says:

Roger Schlafly #86: Von Neumann’s 1932 no-hidden-variables theorem had what’s now considered a basically fatal error (namely, a totally unjustified assumption that the hidden variables behave linearly), which was pointed out already in 1935 by Grete Hermann. Admittedly, though, I have no idea to what extent either Einstein or Bohr was aware of von Neumann’s claim or of its refutation.
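
The linearity objection fits in a two-line calculation; a sketch (assuming NumPy; the σx/σz example is the standard one from Hermann’s and Bell’s critiques):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])    # Pauli X
sz = np.array([[1, 0], [0, -1]])   # Pauli Z

# Quantum *expectation values* are additive in any state:
# <sx + sz> = <sx> + <sz>.  Von Neumann assumed hidden-variable *value
# assignments* v(.) must be additive too: v(sx + sz) = v(sx) + v(sz).
# But the possible values are the eigenvalues, and those don't add:
print(np.linalg.eigvalsh(sx))       # [-1.  1.]
print(np.linalg.eigvalsh(sz))       # [-1.  1.]
print(np.linalg.eigvalsh(sx + sz))  # [-1.41421356  1.41421356]
# No assignment of +/-1 to sx and to sz can sum to +/-sqrt(2), so the
# linearity assumption fails for noncommuting observables.
```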

92. Scott Says:

Bryn Ay #78:

Einstein was correct that Quantum theory is incomplete and this relates to Godel’s theorem. Incomplete theories are always open to new interpretations and insights. Quantum theory is no different. It is always evolving and growing as we learn more about the universe. Godel’s theorem states that any consistent theory of mathematics can never be complete. This is because there will always be new truths to discover and new ways of looking at things. The same is true for Quantum theory. There will always be new discoveries to be made and new ways of understanding the universe.

That reads like a GPT-3 output! 🙂 But if you’re serious: Gödel’s Theorem says nothing whatsoever against the possibility of a final theory of fundamental physics. It implies that whatever the final theory, there’s no systematic way to deduce all of its logical implications—but that’s a completely different question. (In your defense, even Stephen Hawking got confused about this simple point.)

You might as well say that there can be no final theory of what causes summer and winter, because Gödel. Some questions do have final answers—as Gödel himself, an arch-Platonist, knew better than most.

93. Lorraine Ford Says:

A number of people have commented about information; and there seems to be a view that physics is about information, or that information IS physics, or vice versa.

But if physics is information, or information is physics, then presumably one would use exactly the same type of symbols to represent information as one would use to represent the physics of the world, i.e. one would use nothing but equations, variables and numbers to represent information. But obviously, that is not the correct way to represent information.

Seemingly, the acquisition of information requires a higher-level analysis of a system or situation. So, to represent the acquisition of information requires the use of analytical symbols like IF, AND, OR, IS TRUE and THEN, as well as the usual symbols like equations, variables and numbers.

Seemingly, the information itself is like the product of this higher-level analysis of a system or situation, i.e. information is inherently contextual and tied to the analysis and the situation; information can’t be symbolically represented as a number that has no context.

So, to symbolically represent information, wouldn’t one need to use a structure built out of things like IF, AND, OR, IS TRUE and THEN, as well as equations, variables and numbers?

94. dennis Says:

Scott #35:

The experimentalists said they know they always have to tweak postselection parameters in order to get the violation. One parameter is even shown in the supplementary Fig. S4, by which they show they can make S vary between 0.8 and 2.6.

Also, what about the other groups who have since replicated the result?

Yes, I have analyzed data from other experiments (for some of which my supervisor was sent the data privately). You always only get the violation by discarding part of the data through postselection.

By the way, seeing that this “issue” is apparently not well known, I hope Richard Gill mentions it on his growing Wikipedia page on Bell theorem opposition.

PS: I’m not a “Bell denier”. There’s nothing wrong with the theory. I was just surprised when I looked at how data is handled in the experiments. Maybe worth looking into the consequences?

95. Colin Rosenthal Says:

If nothing else, this discussion has led me to wonder why it is that lists of “Forgotten Women In Mathematics And Physics” seem mostly to forget Grete Hermann. Certainly I hadn’t heard of her until now!

96. Paul Hayes Says:

Colin Rosenthal #92

Well, I’m sure she did some things worth remembering – and that she herself would want to be remembered for – but that pointing out of von Neumann’s “error”* probably isn’t one of them.

* See the remarks in Landsman’s book (section 6.1 and the notes at the end of that chapter).

97. mar o Says:

Matt297, #2:

The reason is that in any Wigner’s friend experiment, Wigner’s friend does the measurement inside a box and, by assumption, sees a definite outcome, whereas Wigner, outside the box, describes the whole process unitarily, and so arrives at a coherent superposition that doesn’t identify the specific outcome. So the specific outcome is something above and beyond Wigner’s superposition, meaning that the superposition is an incomplete description of reality. This is the sort of incompleteness of quantum state vectors that Einstein was referring to.

I don’t really see a way out of this.

In orthodox QM, “measurement” is not a precisely defined term but a primitive, so it is assumed that everybody has an idea what a measurement is. It certainly involves some kind of definite outcome. If we try to pinpoint what this means physically, we end up with thermodynamic irreversibility. For all practical purposes, this means that outcomes can’t be reversed, but the scenario of Wigner’s friend is deliberately concerned with a hypothetical where they can.

So when the friend does the measurement he gets a definite outcome in the sense of thermodynamic irreversibility in his isolated subsystem. Afterwards, Wigner breaks this isolation and

98. maline Says:

Scott #15
I don’t agree with the claim that the splitting of MWI doesn’t show up in the Heisenberg picture. I think this impression is just an artifact of the typical textbook presentation of the Heisenberg picture: since the time evolution rule applies to the operators, they get all of the attention while the states get ignored. But of course you can’t predict anything without choosing a state, and to describe the state you need to choose which operators you care about. If you care about a history of how the world develops with time, then you will examine the amplitudes for the operators projecting out various configurations at various times – and that effectively gives you back the Schroedinger wavefunction, along with the branching of MWI.

This is especially clear in QFT: the wavefunction in, say, a state with two Dirac fermions is , where |PSI> is the state and psi(x) is the Heisenberg-picture field operator for a spacetime point x, and we choose x and y to both be at the time of interest.

99. JimV Says:

L.Ford at #90: as I’ve mentioned to you before, in high school physics class I was taught that IF there is an action, THEN there is an equal and opposite reaction. There are algorithms all through physics; see Feynman’s “QED” book for other examples. As I see it, physics is intended to be the computer code for running this universe. That’s why every computer simulation of real events, from billiard collisions to galaxy formation, is based on physics. Obviously those simulation codes contain lots of logic operators, which are part of the physics. Obviously also, those simulations aim to depict possible real events, i.e., true information. (Our understanding of physics will always be fallible, though.)

100. maline Says:

Sorry, my last comment got garbled because I tried to use angle brackets in text. What I was trying to write is that the Schroedinger-picture wavefunction $$\psi(x,y;t)$$, where x, y are points in 3-space, is the same as the matrix element $$\langle \Omega | \hat \psi(x,t) \hat \psi(y,t) |\Psi \rangle$$, where $$|\Omega\rangle$$ is the vacuum state, $$|\Psi\rangle$$ is the two-particle state in question, and $$\hat \psi(x,t)$$ is the Heisenberg-picture field operator for the spacetime point.

Bottom line is that wavefunctions are physical, at least in the PBR sense, so their behaviour cannot meaningfully depend on the formalism.

PS: the feature that allows you to edit a post for a few minutes is nice, but it would be a lot better if the timer stopped counting down when you start editing!

101. Scott Says:

maline #95, #97: I was thinking specifically about the Deutsch-Hayden paper. Deutsch certainly believes that splitting of worlds is real, but he also thinks the Heisenberg picture explains how the splitting can happen without any superluminal influence. I quickly get confused in such debates about what empirical or mathematical question is actually at issue, but take a look and see what you think!

102. Lorraine Ford Says:

JimV #96
What you’ve got wrong is this: equations represent relationships; they don’t represent algorithmic procedures. BUT, if a person tries to solve a set of equations, then they use an algorithmic procedure, which could be represented by symbols like IF, THEN etc.

103. Peter Shor Says:

All this argument about left shoes and right shoes and marginal distributions can be seen to be completely beside the point if you look at the GHZ game or Mermin’s magic square game.
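
Peter Shor’s point can be checked by brute force; a small sketch of the standard GHZ game (the referee’s bits satisfy r⊕s⊕t = 0 and the players win iff a⊕b⊕c = r∨s∨t; function names are mine):

```python
from itertools import product

# GHZ game: referee picks (r, s, t) uniformly from {000, 011, 101, 110}
# (so r XOR s XOR t = 0); the three players answer bits (a, b, c) without
# communicating, and win iff  a XOR b XOR c == r OR s OR t.
INPUTS = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def best_classical():
    """Each player's deterministic strategy is a map {0,1} -> {0,1};
    shared randomness cannot beat the best deterministic triple."""
    best = 0
    for fa, fb, fc in product(product([0, 1], repeat=2), repeat=3):
        wins = sum((fa[r] ^ fb[s] ^ fc[t]) == (r | s | t)
                   for r, s, t in INPUTS)
        best = max(best, wins)
    return best / len(INPUTS)

print(best_classical())  # 0.75, while the GHZ-state strategy wins every round
```

No marginal-distribution story about shoes survives this: predetermined answers win at most 3 rounds out of 4, while players sharing a GHZ state win all of them.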

104. Lars Says:

“After all, in many areas Einstein was no Einstein. You [Scott] above all could speak of his not-so-superintelligence in quantum physics”

I’d have to say that the only thing that demonstrates is that Steven Pinker is no Einstein.

True genius is not necessarily coming up with an explanation (especially in the absence of definitive evidence either way which was the case in the 30’s) but in understanding an issue deeply enough to ask the right questions.

Clearly, Einstein had few (if any) peers when it came to the latter.

105. fred Says:

maline:

“the feature that allows you to edit a post for a few minutes is nice, but it would be alot better if the timer stopped counting down when you start editing!”

But that’s what makes posting here so exciting and addicting!
I always start by publishing the opposite view of what I mean to say with some really offensive stuff that would get me instantly canceled from this blog, and then spend the next couple of minutes desperately trying to correct it all in the tiny edit window!
My heart is pounding just thinking about it.

106. fred Says:

Scott #98

What’s also confusing about MWI (at least to me) is how the background spacetime responds to superposition and branching.
If we have a macroscopic heavy object in a state of superposition, how is the background spacetime curving?
And then, once the position is measured and there’s some branching (i.e. non-interfering alternatives): if we have one branch b where the object is at position x, and another branch b’ where the same object is at position x+dx, and both those situations are “real”, then according to GR the spacetime in those two branches curves differently based on the shift of position… or does the background spacetime curve based on some sort of average of the positions?
Could this put a limit on the size of objects in superposition?
I guess it’s one effect we could measure experimentally, as we create heavier and heavier macroscopic objects in quantum states?

107. Matt Mihelic, MD Says:

Clint #23 and paradoctor #82: “Quantum mechanics sits at a level between math and physics that I don’t know a good name for.”
The term “nano-info-bio nexus” might be appropriate because it indicates the intersection of molecular nanostructure, information theory, and molecular biophysics. Some might question the “bio” component, but it could be considered as fundamental. Because all information is physical (Landauer) and because you can do classical computing on a quantum computer but you can’t do quantum computing on a classical computer (Deutsch), it is arguable that the physicality of the brain by which one conceives of quantum mechanics must contain a quantum logical processing mechanism. Von Neumann spoke of the “schnitt” of psychophysical parallelism, and Schrödinger predicted the quantum molecular biophysics of an “aperiodic crystal” carrying the genetic information in every cell.

108. maline Says:

Scott #98: So now I understand that your point in comment #15 was that in the Heisenberg picture we’d describe the world-splitting as local rather than global (I had thought you meant that we wouldn’t describe a split at all). I suppose I mostly agree, but I don’t think this necessarily depends on the picture: if we’re starting in a branch with no entanglement then it’s natural, even in the Schroedinger picture, to write the state as a product and apply the “splitting” to only one factor (temporarily, of course, until the influence spreads). If there is entanglement, then even if you’re mostly working in the Heisenberg picture you’d plausibly want to describe the entanglement using something like a Schmidt decomposition of the state, and then you’d describe the split using operators in that basis. But all this is a matter of taste/convenience.

When we get to more “applied” questions, then we do need an actual answer: my main reason for not accepting MWI as true is that I think it predicts we should see probabilities according to branch-counting and not Born’s rule. To claim that, I should be able to specify how branches would be counted, so I need to decide whether I see the splitting as local or global. But for that purpose the answer is clearly “local”: a global physical effect would contradict relativity. Also, philosophically the answer should depend on who counts as a separate conscious observer, and for that I think there should at least be some locally measurable difference between them.

OTOH, this wild paper from Tegmark and Aguirre tries to solve the probability issue but relies on there being a literal infinity of discrete branches, because of the spatial infinity of the Universe. That would require instantaneous global splitting.

I also can’t resist putting in my two cents on the Deutsch-Hayden paper: I’m not very impressed by their argument for locality. They are basically just rehashing the point that entanglement isn’t so terribly nonlocal because you can only measure it by bringing the two measurement outcomes together.
The new part: one standard argument for nonlocality is the fact that the two traced-out density matrices don’t contain all of the information (the entanglement entropy is missing). So Deutsch and Hayden refuted that argument by finding a way to represent the situation that does contain everything, but also maintains a clear split between subsystems. Which is nice and kind of cute.
But I don’t see this as a reasonable definition of “local information”, for two reasons: 1. They are just replacing “nonlocal” with “locally inaccessible”, which seems like pointless wordplay. 2. Their representation is only informative when combined with the state, namely the zero eigenvector of the computational-basis operators at time t=0. The state “contains no information” because it is a fixed standard, but it is a global object that plays a critical role, so an ontological account of “where the information really is” can’t afford to ignore it.

109. Lorraine Ford Says:

JimV #96:
The equations that represent the laws of nature represent relationships in nature. These equations are not a type of shorthand for a series of algorithmic steps. These symbols (these equations) mean a specific thing: relationships. However, when people interact with symbols, e.g. equations, then PEOPLE take steps. And also in computers, equations have to be represented as a series of steps.

This is not to say that nature doesn’t take steps, seemingly nature DOES take steps. But, unlike words and sentences which can be interpreted in various ways, people have assigned a much more specific meaning to the symbols used in equations and algorithms.

Scott #15:
If there is no way to find out, with any conceivable experiment, how the supposed splitting of the semi-classical worlds occurs in the MWI picture (locally or globally? is there any specific physical splitting process? what is the geometrical description of this supposed splitting in the “classical limit”? etc.) and, moreover, we don’t really care about these “details”, then there’s no need for any Ψ-(pseudo)realist interpretation like MWI (with all its problems)!
We can stick to any Copenhagen-like epistemic interpretation, declare that the collapse of the wavefunction happens only “in a manner of speaking” and that we don’t care about what’s “really” going on, and we’ll be “juuust fine” (as the late Bob Ross might have put it 🙂).

In the usual case of entangled pairs in an EPR setting, if Alice and Bob (that’s another Bob) are spacelike separated when they’re doing measurements, how do Alice’s and Bob’s detected particles “know” that they’re in the same “correct” classical branch?
For spacelike separated events there is no way even to define whether A’s (or B’s) measurement caused the “splitting of the worlds”; it’s totally meaningless. And it’s not clear what’s going on with causal horizons (as in asymptotically de Sitter universes, like ours, etc.).

111. Lorraine Ford Says:

Matt Mihelic, MD #104, “all information is physical (Landauer)”:
The trouble with statements like this is that nobody is any the wiser about what “information” actually is. Information continues to be a very hazy concept. Clearly, information can’t be symbols, e.g. number symbols, because symbols have no inherent meaning. Similarly, information can’t be real-life numbers without a category (like mass or position) because numbers without a category have no context or relationship to anything else. I have tried to explain what I think information actually is in Lorraine Ford #90.

112. Andrei Says:

Scott,

Einstein was mistaken in trying to argue for the simultaneous existence of position AND momentum; Bohr seized on this and won the debate.

Einstein’s argument works perfectly for a single property. The measurement of position at A allows you to predict the position measurement result at B with probability 1; therefore, particle B has a well-defined position before measurement. Momentum is irrelevant here because it cannot be measured in the same experiment.

Bohr would have been defeated by the above argument.

113. maline Says:

Back to Bohr vs. Einstein: Bohr should definitely lose a bucketload of points for throwing around his pseudo-philosophical Complementarity Principle as if it made sense, for pretending everyone else was just too set in their ways to see its brilliance, and for saying that there are questions you Just Can’t Ask.

Very unjustly, when Bohr’s Emperor’s-New-Clothes tactics ended up pushing us to shut up and calculate, that turned out to be the right way forward! Which is lucky for us; if Einstein had had his way, we might have wasted decades banging our collective head against the Measurement Problem.

114. OhMyGoodness Says:

Here is a recent neuro-imaging paper that compared the brain activity of accomplished mathematicians with that of non-mathematicians. The result was that accomplished mathematicians use completely different brain circuitry than the circuitry used for language. Even when considering logical problems posed in words, the mathematicians did not recruit language circuitry.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4983857/

I looked at recent papers that criticize earlier work that found this or that different in Einstein’s brain compared to average brains, and all conclude there were no significant anatomical differences.

115. OhMyGoodness Says:

Earlier there were claims that the human genome had been fully mapped, but that was a slight exaggeration: about 8% was missing because it couldn’t be sequenced with the technology then available. Only this year was the first full 100% female genome sequence announced by the Telomere-to-Telomere Consortium. In January of this year the first full 100% sequence of the puny male Y chromosome was announced.

I think the following summary approximately represents current understanding. The human haploid genome contains about 3 billion base pairs, so call it a max of 800 megabytes of information (the diploid genome twice this) in each cell, one meter total of linear DNA. About 2% of the total DNA codes for proteins. Until recently the remaining 98% was considered junk, but that has now been shown to be untrue; in fact part of the “junk” (about 5%) has been highly conserved over a hundred million years or so of evolution, suggesting major biological importance. The latest estimate is that about 80% of the non-protein-coding DNA is transcribed to RNA or provides some bioactive result. Part of this manages the structure of the DNA and the epigenetic controls that change the expression pattern of the protein-coding genes. Far less is known about the non-coding DNA than about the coding DNA.
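As a sanity check on that 800-megabyte figure, here is the arithmetic, assuming the usual encoding of 2 bits per base pair (4 possible bases) for a haploid genome:

```python
# Back-of-envelope check of the ~800 MB haploid genome figure.
base_pairs = 3_000_000_000     # ~3 billion base pairs
bits = base_pairs * 2          # A/C/G/T -> 2 bits per base pair
megabytes = bits / 8 / 1_000_000
print(megabytes)  # 750.0 -> "call it 800 megabytes" is the right ballpark
```

(750 MB haploid, so roughly 1.5 GB for the diploid genome in each cell.)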

My point in all of this is that with 100 billion neurons and say a trillion glial cells in the average brain, each cell with approximately 800 megabytes of data actively modifying its composition and actions according to unknown rules, coherent human thought and consciousness are crazy notions that no rational person can possibly believe in. 🙂

116. James Baird Says:

The gist of the quantum foundations debate is that every “solution” to the measurement problem introduces a new problem that then has to be resolved. This seems to be true for every “interpretation” I’ve ever seen: MWI, pilot wave, instantaneous-collapse theories, and superdeterminism. The funny thing is that the new problems are arguably far worse than the original. Some interpretations can at least in principle be tested (meaning they would at some point deviate from the standard predictions of QM) and some can’t (they’re purely philosophical). This is also part of the debate: which theories are and are not testable, and how to go about testing them.

117. Clint Says:

Hi Matt Mihelic #104:

Apology for the long post … but if this helps anyone “get out of the fly bottle” …

The term “nano-info-bio nexus” might be appropriate because it indicates the intersection of molecular nanostructure, information theory, and molecular biophysics. Some might question the “bio” component, but it could be considered fundamental. Because all information is physical (Landauer), and because you can do classical computing on a quantum computer but not quantum computing on a classical computer (Deutsch), it is arguable that the physicality of the brain by which one conceives of quantum mechanics must contain a quantum logical processing mechanism. Von Neumann spoke of the “Schnitt” of psychophysical parallelism, and Schrödinger predicted the quantum molecular biophysics of an “aperiodic crystal” carrying the genetic information in every cell.

I’m a computer engineer. So … as a “no-nonsense, computing is all about the hardware kind of guy” let me share my journey down the “brain is a quantum computer” rabbit hole.

First, I’ve never been able to make much of the “information is physical” dictum. To me that is one of those … “OK, sure, but what can you do with that” or “so broad that it ends up having no content” kind of things … or I’m just too dumb to understand … non-zero probability of that 😉

Second: actually, we could do classical computing on a quantum computer, but we wouldn’t want to. Quantum computation is a more general class of probabilistic computing than classical probabilistic computing. But while we could implement classical logic (bits and gates) on a quantum computer, we would almost certainly be wasting our time doing so. You can think of classical computing as having devices that are amplified to the limit so that we can use them to operate over classical logic. Even if quantum computers reach scale, they will probably only be used for specialized types of problems built upon phase and period finding, which is where they outshine classical computers, because they have interference of amplitudes available in their hardware device description.

So … there’s been a lot of speculation about a possible relationship between quantum mechanics and the brain … (pause for eye rolls) … As a computer engineer I’ve always had interest both in fundamental physics and neuroscience and recreationally dipped into the suggestion out of plain curiosity …

It was when I came across Penrose’s arguments that I decided to dig a little deeper. I mean … Sir Roger Penrose, right? But my response to Penrose’s hypothesis was “this doesn’t make any sense just from a purely computational-device standpoint.” Remember, I’m a computer engineer, so if we are talking about computing then I need you to give me a physical, hardware-based model of computing that starts with the primitive device description (like a transistor in a microchip, or the toggles on a Digi-Comp II board), what the (abstract) logical primitives are going to be as input to that device (like bits), how to assemble logical/mathematical operators (gate logic), how the logical primitives are stored (memory), and how to do a verification that it actually works (the evidence of its output demonstrates the abstract model must be at work inside).

Penrose’s argument doesn’t at all satisfy my criteria for a computational device. And I fully expected that I would be able to look at the postulates for the quantum model of computation and quickly rule out any hypothesis that the brain could ever in any way be a quantum model of computation.

I understood that the neuroscience community’s consensus is that the brain is a classical computer. So, I also expected to be able to go to the neuroscience literature and quickly be shown where in the brain classical bits are input to a primitive device, how those devices are assembled in the brain to operate on classical bits, where the brain stores classical bits, and where we read out (only) classical information in order to verify correct operation. These are the things necessary to convince me that a device is a classical computing device.

However … to my consternation … none of those are found in the brain’s hardware description – at least in all the neuroscience I can find … and well … you shouldn’t have to dig very deeply if it’s “obvious”, right?

Well … fine then … I’ll just go to the postulates of quantum computing and see how they rule out that the brain could be a quantum computer.

The postulates of quantum computing are simple (see Mike and Ike’s Quantum Computation and Quantum Information).

They begin with the requirement that ALL input must (in general) be in the form of complex numbers (call them “amplitudes”), which can combine with either sign. I thought, well, this will be easy … surely the brain is not a computational device where all input is in this weird form of complex numbers. So I read Christof Koch’s Biophysics of Computation to get an authoritative ruling. Turns out … the primitive computational devices of the brain are the dendrites. Dendrites can form logic gates (NOT, XOR, etc.), but they can also be “programmed” via their morphology to implement mathematical operators like multiplication, phase shifts, and Fourier transforms. And ALL of the input to these dendritic operators is … no, not classical bits … complex numbers. That is, what the input functionally means to the dendrite is a complex number, an amplitude. And I don’t mean merely that complex numbers “show up” in the science, so we shouldn’t be surprised … no, the meaning of the input to the computation that takes place inside the dendrite is exactly “this is a complex number.” The dendrites use BOTH the magnitude and the phase information to perform computations.

Furthermore, the amplitudes that are input to the dendrites can interfere with each other both in the sense of adding together and also in the sense of being able to “cancel” because the dendrites accept both positive and negative complex number inputs. In fact … it turns out that the entire computational model of the brain is built on the back of the interference of amplitudes. By the way, once you have orthogonal outcomes (vectors) represented in this way in a device capable of multiplication and phase shift operators (gives us an inner product) then we have a complex Hilbert space. In a classical computer the space for representing states is just a bit different … it is {0,1}.
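That interference claim is easy to state concretely. A minimal sketch (nothing brain-specific, just the arithmetic of amplitudes) of how two equal-magnitude amplitudes reinforce or cancel depending on their relative phase:

```python
# Amplitude interference in miniature: same phase reinforces,
# opposite phase cancels. Illustrative only, not a dendrite model.
import cmath

a = cmath.rect(0.5, 0.0)                 # magnitude 0.5, phase 0
b_inphase = cmath.rect(0.5, 0.0)         # same phase -> constructive
b_outphase = cmath.rect(0.5, cmath.pi)   # opposite phase -> destructive

print(abs(a + b_inphase))   # 1.0  (amplitudes add)
print(abs(a + b_outphase))  # ~0.0 (amplitudes cancel)
```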

But I kept reading the quantum postulates … surely … one of these is going to eliminate the absurd idea that the brain could be a quantum computer, right?

The computational device has to be able to link together amplitudes representing orthogonal outcomes in a special way: they must interfere with each other such that the squared magnitudes of the amplitudes always add up to 1, or 100%, since this model of computation is built for being a predictive device. The squared magnitude of each amplitude then “represents” a probability for that outcome. How do “negative probabilities” make sense? From a computational-definition standpoint, I don’t care. But maybe like evidence “against” something?
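The normalization postulate really is just this, shown here on a toy two-outcome state:

```python
# Normalization postulate: squared magnitudes of the amplitudes are
# the outcome probabilities and must sum to 1. Toy two-outcome state.
import math

state = [complex(0.6, 0.0), complex(0.0, -0.8)]   # two orthogonal outcomes
probs = [round(abs(amp) ** 2, 10) for amp in state]
print(probs)                          # [0.36, 0.64]
print(math.isclose(sum(probs), 1.0))  # True
```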

The brain does in fact represent input amplitudes from distinguishable (that’s what orthogonal means) “receptive fields”. Further, neuroscientists have evidence that normalization is canonical in the cortex. They do not yet know what the model is for that norm. But … this doesn’t allow me to rule out the quantum brain hypothesis. The brain does represent possible outcomes over orthogonal receptive fields, it does have some kind of norm going on, and the whole point of the brain appears to be to work as a “predictive computer” … for reasons of survival and all that.

Next, the postulates require unitary operators. This basically means linear operators that don’t break the rules of the Hilbert space above, so that the device can “rotate” the vector in the state space … essentially it means the brain can “set” the input amplitudes to point the vector more towards one possible outcome or another. There is also a sense in which a linear computer shouldn’t “blow up.” Christof Koch gives plenty of evidence for this in his book, even saying that despite all the nonlinearities and chaos going on in the different parts of the dendrite/neuron, it appears that Nature was aiming to produce a computational device that behaves in a perfectly linear manner.
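A minimal illustration of that postulate, with a plain 2x2 rotation standing in for a generic unitary: the state vector “turns” toward one outcome, and the norm stays 1.

```python
# A 2x2 rotation (the simplest unitary) applied to a state vector:
# the vector turns toward outcome 1, and the 2-norm is preserved.
import math

def apply_2x2(matrix, vec):
    (a, b), (c, d) = matrix
    return [a * vec[0] + b * vec[1], c * vec[0] + d * vec[1]]

theta = math.pi / 8
rotation = [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

state = apply_2x2(rotation, [1.0, 0.0])  # start pointed at outcome 0
norm = math.hypot(state[0], state[1])
print(math.isclose(norm, 1.0))  # True: unitaries preserve the norm
```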

Next, the postulates require projection operators. There has been much talk about these “measurement operators” … but computationally they are just projection operators! There appear to be at least two ways the brain could implement this kind of operator: dendritic pruning and/or inhibition. Again, inhibition (as part of interference) is what the brain is built on; inhibition works by the interference of amplitudes in the dendritic operators. The effect of either pruning or inhibition is to “project” onto a subspace of possible outcomes. Interestingly, pruning can be viewed as something like a “final” projective “Copenhagen cut,” while inhibition has a nature more like the “decoherence” viewpoint … but those are just my impressions. Again … there is no “mystery” or weirdness here. These are simply defined computational operators, and as a hardware computer engineer … I’ve got no problem with the description or with the two candidate neural mechanisms.
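Computationally, a projection really is this simple: zero out the amplitudes outside the surviving subspace, then renormalize what’s left. A toy sketch (the pruning/inhibition mapping is of course the hypothesis here, not established neuroscience):

```python
# A projection operator in miniature: drop the amplitude outside the
# surviving subspace, then renormalize the remainder.
import math

state = [0.6 + 0.0j, 0.0 + 0.8j]     # amplitudes over two outcomes
projected = [state[0], 0j]           # project onto outcome 0's subspace
norm = math.sqrt(sum(abs(a) ** 2 for a in projected))
projected = [a / norm for a in projected]
print(math.isclose(abs(projected[0]), 1.0))  # True: all weight on outcome 0
```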

Then the last postulate for quantum computing is that the device has to be able to combine or form products of state spaces to form larger “coherent” state spaces. Again, neuroscience reports evidence that this is canonical for the brain’s model of computation.
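That last postulate is the tensor product. A toy sketch of why joined state spaces grow multiplicatively rather than additively:

```python
# Combining state spaces: the joint space of two 2-outcome systems
# has 2 x 2 = 4 amplitudes (the Kronecker product), not 2 + 2.
def kron(u, v):
    """Tensor (Kronecker) product of two amplitude vectors."""
    return [a * b for a in u for b in v]

plus = [2 ** -0.5, 2 ** -0.5]   # equal superposition over 2 outcomes
joint = kron(plus, plus)        # 4 amplitudes, each with squared magnitude 1/4
print(len(joint))               # 4
print(round(joint[0], 10))      # 0.5
```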

And finally to verify. The output should give us evidence that this model of computation is taking place. There are cognitive neuroscientists who are using the quantum model to explain long-standing paradoxes in human cognitive studies (output given input) that long resisted explanation with classical Markov models. The key in these studies is that they give evidence that interference is taking place inside the computational device (the brain). As best I can tell … the empirical evidence from these studies is just as convincing as the empirical evidence from physics using this model: take the numbers, put them in the model, and it checks. I’m open to counter-evidence … but at this point I can’t rule out a quantum model in the brain based on this verification requirement. In fact, there appears to be supporting evidence of interference occurring.

Some important points.

We did NOT have to solve every mystery under the sun here. All we had to do from a “hardware analysis” point of view was identify evidence in the brain for good candidate realizations for the postulates of the quantum model while not violating any known physical principles.

Usually the first thing someone will say in response to this is …. yes, but, decoherence … there’s no way the brain functions on the small scales required for the quantum model. The argument presented above in this post makes no claims and finds no evidence that the brain uses atomic scale quantum effects for computation in the brain (except MAYBE sampling randomness … ). Read the postulates in Mike and Ike and note that this class of computation does NOT require that the device operates on ANY definite physical scale. Decoherence is basically entropy and as a computer engineer I appreciate that all computational devices need to be concerned with fault tolerance against environmental noise and degradation. The brain does appear to deal with this by using massive parallelization (replication of functional units) to achieve fault tolerance for its representation of amplitudes. Since we are NOT arguing that the brain uses atomic scale devices for quantum computing then we don’t have to “maintain coherence at the atomic scale”! The brain just has to maintain fault tolerant representations of complex number amplitudes (along with the norm requirements above) representing different possible states over receptive fields.

Someone says … yes, but all of our evidence up to this point is that “quantum” anything occurs only in atomic-scale systems. That would be like saying all the evidence up to this point is that a computer can’t pass the TURING TEST. I feel like Scott when he says “Let’s assume a computer has passed the Turing Test…” and the first “evidence” people want to argue is “Well, we’ve never seen a computer pass the Turing Test.” The postulates of quantum computation impose NO physical constraints on possible innovation (by Nature or ourselves) in realizing this model of computation.

All that is required by the postulates:
(1) Complex numbers as the input
(2) Maintain norm over (interfering) possible states for a predictive model
(3) Some linear operators
(4) Some projection operators
(5) Ability to join together state spaces

That’s it.

Now … what else? What would it mean if the brain were a quantum computer? Probably not as much as some would like 🙂

Does it “explain consciousness”? I have no idea, and I don’t care for the purposes of this hardware analysis. I personally don’t have a very high opinion of “consciousness” as a computational thing; whatever it is in humans, it is extremely unreliable, intermittent, and has very small bandwidth.

Does quantum computation make us supercomputers? No. The quantum model only gives an advantage (to our present knowledge) when it can be leveraged for phase- or period-finding algorithms. And I would still expect classical computers to (eventually) beat our limited brains when optimized for any defined task.

OK, so why can’t I factor large integers? I don’t know, maybe you haven’t really practiced? Or maybe the brain didn’t evolve the specialized functional units required by Shor’s algorithm for modular exponentiation? Maybe the brain has limited computing resources available for something “recreational” like that?

What, are you saying the brain is a quantum computer because of atomic-scale quantum effects at the synapses or in microtubules or something? No. No. No. I’m saying it can be quantum because the synapse encodes inputs in the form of complex numbers (a physical signal with meaningful magnitude and phase), specifically relative to the operations encoded in the dendritic morphology. The dendrite is saying to the synapse “Hey, give me a complex number, please,” just like the transistor is saying “Hey, give me a 0 or a 1, please.”

But what about neuronal spikes? Aren’t those classical bits? No: the spike is a classical nonlinearity embedded within a non-classical device. The neuronal spike provides the model of computation with the necessary nonlinearity, the “decision” required for a computation to actually arrive somewhere. Embedded within the neuron, the axonal spike activates amplitude inputs to many other dendritic operators.
But nowhere does the brain store information in a classical form, nor does it have operators (gates) that act on classical bits. All input to dendrites appears in the form of a complex number whose magnitude and phase both carry useful information. Yes, there is a classical bit of information in the fact that the synapse fired, but there is much more information present (and used!) than just the presence of a spike. The classical computational model of the spike is contained within an overall non-classical neuron with its interference-based dendritic operators, just as BPP is contained in BQP. But what is the brain (or the neuron) computing? I don’t care. I don’t have to know what a microprocessor is programmed to compute (video game graphics? genetic AI algorithms? …) to determine that there is a classical model of computation realized in the microprocessor’s substrate.

This was a hardware analysis of the quantum brain hypothesis. What was required was some believable evidence for and no evidence against the hypothesis that the brain has a quantum model of computation. It appears, based on the postulates and on the evidence, that we do have at least some evidence for and no conclusive evidence against the hypothesis.

Somebody wants to argue the brain is a classical computer? Please proceed from the evidence. Show us where the classical bits are stored. Show us the gate logic where classical bits are input. Do this in the same way that the neuroscience evidence shows that all input is in the form of complex amplitudes, information is stored in the form of amplitudes, typical neural operations rely on interference of amplitudes, good candidate linear operators exist, good candidate projective operators exist, and it’s possible to form products of state spaces.

118. Godric Says:

Lorraine Ford #90  #108:
You keep claiming that information “clearly” can’t be symbols because symbols don’t inherently have meaning, and require context to give them meaning, but that couldn’t be farther from the truth.  The context that gives them meaning is simply a different, larger context, i.e. one that serves as a meta-context.  It is still local, it is just a “larger” locality, that is less constrained, such as a decoherent choice of basis for measurement. A symbol and a meta-symbol are both symbols, they are just at different levels of a decoherence hierarchy.  Relative to the bottom level measurements, meta-symbols don’t look like symbols, but there must always be a larger context within which they are indeed still just symbols.  Ignoring that difference in context confuses meta-levels.

maline #105: I think Deutsch and Hayden’s “locally inaccessible” information is in no sense pointless wordplay.  You seem to think that is another way of saying “global” information, but that is not at all what they are saying, as far as I can see.  The “locally inaccessible” information is simply local information on a larger scale.  Classical information is always relative to some measurement basis, but that basis is not “global”, it is also decoherent locally, just not the same “locale”.  Decoherence is a hierarchical process, as light cones converge.   An encoding of a measurement result propagates locally, accompanied by inaccessible information that is itself also propagating locally, but that secondary locale is simply decoherent on a larger scale.  So there is an important and real difference between the concepts of “locally accessible information”, “locally inaccessible information”, and “global information”.

119. Matt Mihelic, MD Says:

Lorraine Ford #108 “Information continues to be a very hazy concept.”

A quantum of classical information (i.e., a “bit”) can be defined as the deterministic difference between two physical conditions that are separated by an energy barrier of at least kT•ln2. Thus, a “measurement” can be defined as the deterministic result of a comparison to a standard, with that “deterministic” result being defined as a physical change in a system that occurs across an energy barrier of at least kT•ln2. An example of this is the enantiomeric shift in the deoxyribose moiety of the DNA molecule between the C2-endo and C3-endo conformations.
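For concreteness, the kT·ln 2 threshold works out to a few zeptojoules per bit at body temperature; a quick evaluation (taking T = 310 K as an assumed body temperature):

```python
# Landauer's kT*ln2 energy barrier per bit, at body temperature.
import math

k_B = 1.380649e-23   # J/K (exact since the 2019 SI redefinition)
T = 310.0            # K, roughly human body temperature (assumed)
barrier = k_B * T * math.log(2)
print(f"{barrier:.2e} J")  # ~2.97e-21 J per bit
```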
https://dc.uthsc.edu/cgi/viewcontent.cgi?article=1001&context=gsmk_facpubs

120. Matt Mihelic, MD Says:

Clint #117 “…I need for you to give me a physical hardware based model of computing that starts with the primitive device description…”

DNA can be modeled as a quantum logic processor in which electron spin states can be coherently conducted, spin-filtered, and read into a logically and thermodynamically reversible enantiomeric symmetry between the C2-endo and C3-endo deoxyribose conformations. The non-locality of quantum entanglement is a hallmark of quantum logic, and pilot research has demonstrated correlated depolarizations between neuronal cells in separated cell cultures that were non-locally modulated by laser pulsations and by pharmacological manipulation.
https://dc.uthsc.edu/cgi/viewcontent.cgi?article=1002&context=gsmk_facpubs

121. Clint Says:

Hi Matt #120,

Please forgive me because I’m no DNA expert 🙂 I’m just a doofus engineer. But I would not be surprised if there are quantum effects that play a role in DNA functioning. Aren’t mutations believed to result from quantum effects in DNA?

But I can’t see any way for computational information at the scale of DNA to reach the computational level of the brain, nor for it to access or play a part in whatever consciousness is.

My hardware analysis specifically did not find that the computational level of the brain has any sort of access to atomic scale processes in the sense of encoding state vectors into those systems and then performing operations on such encodings of state vectors. Atomic scale processes in the brain appear to play the same kind of role they do in classical transistors – namely doing their atomic stuff only to support the structural nature or properties of the device. I mean there are quantum things going on inside semiconductor devices like inside all materials… But those have nothing to do with the logical model represented by the transistor.

Rather, what is found is that the brain encodes input as amplitudes (complex numbers), maintains some sort of norm over orthogonal sets of those amplitudes, appears capable of linear operators on those same amplitudes, has mechanisms that fit the function of projection operators, and can form products of those state spaces. These are simply the postulates of the quantum model of computation. They are simple. They are not weird. They don’t say anything about atoms or consciousness.

Again, I can point to exactly where the brain is encoding input in the form of complex numbers, and, if you were a programmer, I could tell you exactly HOW to enter a complex number as an input (set the magnitude and phase for a synapse). Can you tell me how to input a complex number in a DNA quantum computer? We have to at least start with that 🙂

122. Lorraine Ford Says:

Godric #118:
Information is not information unless it has context: 1) category (i.e. mathematical relationship); and/or 2) your “meta-context”. To represent information, one needs to represent the context. E.g. a simple yes/no answer is only information in the context of the simple or complex analysis used to derive the answer. So, to fully symbolically represent information, one needs to symbolically represent the analytical structure needed to derive the information: without this, the answer alone is not information. And continuing this theme, numbers alone can never be information.

Also, symbols are not information. The symbols can represent information, but they are not themselves the actual real-world information, in the same sense that the symbolic equations that represent laws of nature are not the actual laws of nature.

123. Lorraine Ford Says:

Matt Mihelic, MD #119:
A “bit of information” that is a number, e.g. a binary digit, can never be information because it has no context or category.

The “bit of information” that you described seems to be a thing that is representable as a number and an associated category. I.e. it could only have low-level information value: the only possible information value it could have is something like: “(change in) category1 = number1 IS TRUE”. But this is very low-level information.

To represent more complex information, one would need to include the logical context of the “bit of information”: one would need to use symbols like IF, AND, OR, IS TRUE and THEN to represent more complex information.

124. Tom Marshall Says:

Shmi #20: That gravity is a factor in the state of the system is indeed an important observation. I would like to emphasize that it is the in-principle observability, and not the (physical, conscious, or otherwise) observation, that determines whether we will or will not observe phenomena governed by superposition. If you can observe the state of the cat with your gravimeter, then it is not in a superposition at any point.

My favorite, but by far not the only, experimental reference of in-principle observability:
Short review in Physics Today
https://physicstoday.scitation.org/doi/abs/10.1063/1.1768665?journalCode=pto
arxiv of the Nature article referenced
https://arxiv.org/pdf/quant-ph/0402146.pdf

125. Tom Marshall Says:

Clint #24: With respect for your views, and your quote of Scott’s views (I must have missed the original), I look at this “hierarchy of knowledge” somewhat differently.

Newton’s theory, and Maxwell’s theory, were (are) about the observable phenomenology that we, at the time, labeled “ponderous matter”, “electrons”, “light”, etc. Quantum mechanics provides a more accurate description of matter as it becomes progressively “less ponderous”. Quantum electrodynamics provided a more accurate description of moving charges and radiation, incorporating special relativity explicitly. (And electroweak theory and QCD extend this approach to more fundamental aspects of that which we used to think of as “ordinary stuff”.)

I’m somewhere between baffled and amused (or rather, glad for the insights into the workings of even the most intelligent minds) that we have chosen to stay with the labels of Newton and Maxwell in defining what “physics” is about, rather than recognize that, so to speak, they got the nouns as well as the verbs wrong, or more gently, in seeking to extend their knowledge, they still made only incremental steps in seeing (imagining) the physical universe.

Physics is an experimental natural science (no matter how clearly it can be expressed in math, nor how much a deep knowledge of math provides insights into ways forward in physics) — it never was about Newtonian matter or Maxwellian charges. It was always about the fundamental fermions and bosons, but we couldn’t see that clearly until recently.

126. Ben Bevan Says:

I find myself increasingly sympathetic to Rovelli’s relational interpretation. It seems to me it’s essentially Everett, but without the baggage of treating the wavefunction as a real thing, as in many-worlds.
It is odd that these ideas seem to have been largely ignored.

127. Matt Mihelic, MD Says:

Hi Clint #121

“Can you tell me how to input a complex number in a DNA quantum computer?”

In our pilot experiment we modulated the local culture of neuronal cells with laser pulsations, and that input was a 2 Hz series of 50 msec pulses of coherent electromagnetic radiation that had a wavelength of 650 nm. This induced sustained oscillatory depolarizations of cells in the local culture at about 130 Hz. This also induced sustained oscillatory depolarizations of cells in the non-local culture of about 130 Hz. When we were able to observe the initiation of a phase lock in cells of the non-local culture, it always began on or immediately after the fourth laser pulse, which ostensibly indicated a calculation taking place in the cells of the non-local culture. So, the input was laser pulsations and the output was cellular depolarizations. The quantum mechanical mechanism(s) of such non-local reactions are theoretical.

128. Matt Mihelic, MD Says:

Hi Lorraine Ford #123

It is true that “to represent more complex information, one would need to include the logical context of the ‘bit of information’”. In a quantum system, the “context” is the superposition of all of the quantum bits (i.e. qubits) that are held coherently in the quantum gates (which are essentially the physicality of the qubits), and those quantum gates are the logical gates of the system by which the logical operations take place.

129. Lorraine Ford Says:

Matt Mihelic, MD #128:
The point about information, is that information requires that all the “bits of information” are mathematically or logically connected. One might say that physics deals with invisible mathematical connections (represented by equations that represent laws of nature). But is there such a thing as invisible logical connection (seemingly representable by symbols like IF, AND, OR, IS TRUE and THEN)?

By “logical connection” I mean:
1) Something that actually exists in the world, and therefore requires symbolic representation, just like mathematical connection already has symbolic representation.

2) Something that is a complete chain of logical connection, with no missing parts of the chain.

One can imagine that the brain is making logical connections. But presumably this can only happen if low-level nature has, or makes, logical connections. Just like law of nature relationships are invisible things whose existence has been deduced, seemingly logical connections would be invisible things whose existence can only be deduced.

130. Shmi Says:

Tom Marshall #124:

First, thank you for the link to the hot bucky balls article! I remember Anton Zeilinger visiting and talking about the preliminary results of the experiment a long time ago, but I could never track down anything published.

Second, you are right, as far as all other forces are concerned, an observed quantum system is entangled with the environment and so is never in a superposition. But gravity is not like that, it is not a force and you cannot shield from it, however hard you try. Either macroscopic objects can be in superposition of spatially distinct eigenstates (or even in a superposition of non-degenerate energy eigenstates) and then we can observe them gravitationally, or they cannot, which limits how much a quantum system, including a quantum computer, can be scaled up.

Sadly, Scott does not seem to either understand or acknowledge this point (or else I am missing something fundamental), even though it is directly relevant to the limit of quantum-computation scaling (albeit at a ridiculously large number of qubits, since the masses involved are about the Planck mass, roughly 10^19 nucleons).

131. Clint Says:

Hi Matt #127:

I read your paper and think you have made an interesting observation that does make one stop and ponder what might be going on. A few first questions. Was there a control … meaning was this test also performed with MEAs from cells not originally grown together (and by the way was that supposed to be where/how they became entangled)? Was the experiment repeated with a different computer or with the computer(s) and laser on isolated power/ground circuits? Why should we presume that this synchronization is due to quantum entanglement? Natural systems (or unnatural like pendulum clocks on a shelf) are known to “spontaneously” synchronize. What were the alternative explanations and how did you rule them out? If there is quantum entanglement between the two separated cultures why must it be in the DNA of the cells and not something else like in the membranes of the cells?

I’ll go along with the claim that there are quantum things going on with the electrons in DNA … but how does that rise to the level of DNA being a “quantum logic processor”? Remember, I’m a doofus computer engineer so I need things to get pretty simple like “Set this input to a one here and a zero here, put these transistors together in this parallel/series configuration for a gate, NOW you’re computing …” I don’t subscribe to “the universe is a computer” point of view by the way … if “everything” is a computer … then nothing is a computer. A “computer” (to me) has to be something that humans can actually say things like “I want to put this arbitrary input in” (from some defined set) and “I want to program it to do this arbitrary operation” (from some defined set). I’m not saying that it isn’t possible for us to do that with DNA … I just need the programming instructions, please 😉 Is the claim that the laser settings allow for encoding arbitrary complex numbers in neuronal DNA?

To be a quantum computer means (to me) that some person has to be able to walk up and say, “You know what … I think I want to put these arbitrary complex numbers in today.” Of course, those arbitrary complex numbers will define the position for some state vector representing the possible states … so they have to obey the Hilbert space and norm requirements – so, the programmer is really arbitrarily able to set the “position” of the state vector. But … still, a crucial idea with a computer is … the ability to make arbitrary input from some defined set of possible inputs.

Now here is what gets my attention … I do see a way to meet this requirement for being able to input arbitrary complex numbers in neuronal dendrites. In fact, it appears that neuroscientists have been aware of this … for some time now … as the old joke goes.

How is the voltage/current attenuation characterized between any particular synaptic input site and the soma? In other words, in classical computers, we characterize the input (which is really a physical voltage) as a number from the set {0,1}. That’s because those are the numbers that are “meaningful” to the devices that are attached to the input – the logical devices in a classical computer are “looking for” some input from that set to operate on.

According to Christof Koch in Biophysics of Computation (p. 67, and going back at least to Rall and Rinzel, 1973), by either the action of a synapse or intracellular injection, “the voltage attenuation between the injection site and the soma is, in general, a complex number”.

It may be worth reading that again. Here is a computational device (the dendrite). The inputs are characterized as complex numbers. The choice for the arbitrary input to a neuronal dendrite does NOT come from this set {0,1}. You get to select arbitrary inputs from the set of complex numbers.

Now, there are a few other postulates that have to be satisfied (see Quantum Computation and Quantum Information section 2.2) … For example normalization appears to be canonical in the cortex. But, the first postulate BEGINS with having the ability to input arbitrary complex numbers. If you can’t do that … for example if I show you an Intel microchip and say “Sorry, only inputs from {0,1} allowed” 🙁 then it’s game over for going any further to claim that the Intel microchip is a quantum computer. Or if someone says “we have a quantum computer here” and you say “great, where can I input my arbitrary complex numbers?” and they don’t know … then well … it may be a quantum system but not a quantum computer.

The primitive logical device in the brain accepts arbitrary complex number inputs. These amplitudes are meaningful to the dendrite for the purposes of the various logic/math operators that Koch goes on to describe. In other words, the magnitude and the phase of that voltage attenuation, the synaptic injection, is an amplitude and it matters to the operator encoded by the dendritic architecture. The dendritic operator “understands” that a complex number just showed up at one of its inputs.

We have to be able to input arbitrary complex numbers. Christof Koch says we can input arbitrary complex numbers into dendrites. That gets us started with the first postulate 😉
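For what it’s worth, the “first postulate” being invoked here is easy to state in code. Here is a minimal sketch (my own illustration, not from Koch or from Mike and Ike) of what “inputting arbitrary complex numbers” means in the quantum-information sense, using plain NumPy:

```python
import numpy as np

def prepare_state(amplitudes):
    """Normalize an arbitrary complex vector into a valid state vector."""
    v = np.asarray(amplitudes, dtype=complex)
    norm = np.linalg.norm(v)   # the 2-norm the postulate requires
    if norm == 0:
        raise ValueError("the zero vector cannot be normalized")
    return v / norm

# Any complex numbers at all are a legal input; normalization just
# rescales them so the squared amplitudes can serve as probabilities.
state = prepare_state([3 + 4j, 1 - 2j])
print(np.isclose(np.vdot(state, state).real, 1.0))  # True
```

The point of the sketch is only that the input set is the full set of complex vectors, not {0,1}, which is exactly the distinction being drawn against the Intel microchip.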

132. Dorinteodor Says:

My name is MOISA Dorin Teodor, from Romania. I am not a physicist. I created a general theory about the basic hardware functions of the brain. This theory was not created to explain quantum physics, of course, but it can explain the basic problems associated with the function of the brain.

The theory holds that the human brain builds and operates two types of models: analogical models, which generate imagination, and symbolic models based on logic only.

As the analogical models generate imagination, there is a tendency to understand external reality based on imagination.

But starting with Max Planck and Albert Einstein, understanding external reality based on imagination is no longer possible, and you, as physicists, are just coming to grips with this situation.

But in fact there is no problem; the brain has evolved to the level where the functions based on imagination have become obsolete.

I think that you, as physicists, have a feeling that this is the situation, but it is very difficult for you to abandon the function of the brain which generates imagination. Still, there is nothing to be done. Go ahead, and, sorry Einstein, you missed the point….

133. Clint Says:

Hi Tom #125:

Physics is an experimental natural science (no matter how clearly it can be expressed in math, nor how much a deep knowledge of math provides insights into ways forward in physics) — it never was about Newtonian matter or Maxwellian charges. It was always about the fundamental fermions and bosons, but we couldn’t see that clearly until recently.

I completely agree!

The reason that I voted for Scott in this thread however was because Scott (to the best of my knowledge) was the first one to really very clearly say, “Quantum mechanics is not about physics”.

As a computer engineer, that really knocked me off my feet! The new understanding here (and yes, I know many people worked on “quantum probability” for decades, so Scott stood on the shoulders of giants, but so does everyone) is that there is an abstract, full-fledged class of computation that a priori has nothing to do with “quantum physics”.

This is why the naming is so confusing. If we had “QUANTUM PHYSICS” (QP) on the one hand where everybody knew that we were talking about atoms, electrons, bosons, etc. and then on the other hand we had “AMPLITUDE INTERFERENCE COMPUTING” (AIC) … maybe we could talk about these two DIFFERENT things without everyone getting confused!

The key is that AIC is not necessarily equal to QP.

That is the real insight about this “thing” that the physicists discovered in the pursuit of an experimental science.

And it means that the door is open to Nature (or humans) innovating to realize a model of this class of computation (AIC) in systems that are not necessarily in the domain of QP.

The requirements are simply:
(1) Arbitrary input to the device has to be characterized as complex numbers (amplitudes)
(2) Orthogonal amplitudes have to be able to interfere and be normalized by the device
(3) Mechanisms/architecture needs to be available for unitary (linear) operators
(4) Mechanisms/architecture needs to be available for projection operators
(5) Mechanisms/architecture needs to be available to form products of state spaces (#2 above)

Nothing in there about quantum physics 😉
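To make requirements (1) through (4) concrete, here is a minimal sketch (again my own illustration, under the commenter’s framing) showing that “amplitude interference computing” needs nothing but complex linear algebra, with no Planck’s constant anywhere:

```python
import numpy as np

# (3) a unitary operator: the Hadamard matrix
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = np.array([1, 0], dtype=complex)   # (1) complex-amplitude input
state = H @ state                         # spread amplitude over both states
state = H @ state                         # amplitudes interfere: back to |0>

# (2)+(4): project onto basis state |1>; destructive interference has
# driven its probability (squared 2-norm of the projection) to zero.
P1 = np.array([[0, 0], [0, 1]], dtype=complex)
prob_one = np.linalg.norm(P1 @ state) ** 2
print(np.isclose(prob_one, 0.0))          # True: |1> interfered away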

134. Aleksandar Mikovic Says:

Einstein’s view on the nature of quantum mechanics is right, simply because you cannot apply the rules of QM to the whole universe (there are no outside observers, and probability as a frequency of events cannot be used). As far as Bell’s theorem is concerned, it can be avoided by using non-local hidden variables, and one such example is Bohmian mechanics.

135. Matt Mihelic, MD Says:

Hi Clint #131

The plan of our pilot research was that the system would both act as its own control (without laser stimulation), and also act as the experimental condition (with laser stimulation). An extensive literature search found no previous reports of any sustained oscillatory depolarizations being induced in other research, so this was a unique finding that was induced by the laser pulsations. At first, we thought that what we were seeing in those oscillatory depolarizations was electrical interference, for instance a 60-cycle harmonic. However, with further examination it was determined that what we were measuring was individual cellular depolarizations, which continued in between the laser pulsations. Also, not every electrode or cell demonstrated the oscillatory depolarizations. The fact that, when we observed the initiation of those oscillatory depolarizations in cells of the non-local culture, they always began on or immediately after the fourth laser pulse of the iteration is also an important consideration, indicating that a very rapid and efficient calculation was taking place. Finally, there was a definite non-local pharmacological effect of the isoflurane gas, which, combined with the laser stimulation, terminated the non-local synchronization of cellular depolarizations.

There are likely multiple natural means of input into the DNA quantum logic processor. The reason we used a laser is that Rita Pizzi, PhD, et al. had similarly shown some correlated depolarizations between cells in separated cell cultures in several experiments between 2004 and 2009. I suspect that in a natural cellular condition there are biophotons affecting cellular DNA. I have no doubt that there are also chemical inputs into the DNA molecule, such as various binding proteins. Also, I suspect that the time-dependent magnetic vector potential that is induced when electrons rush across the cellular membrane to balance the charge of the sodium ions rushing into the cell during depolarization will affect coherently held Cooper-pair electrons in the DNA quantum logic system.
https://dc.uthsc.edu/cgi/viewcontent.cgi?article=1017&context=gsmk_facpubs

136. Matt Mihelic, MD Says:

Hi Clint #131

Also, one other interesting point with regard to quantum DNA I/O. Because the theoretical quantum gate in the DNA deoxyribose enantiomeric symmetry operates across an energy barrier of kT·ln2, it would theoretically “vibrate” at about 4.3 THz, and Averoses Inc. is developing a THz-speed CMOS technology that, among other potential capabilities, can theoretically be used to interface with and/or emulate certain DNA logical processes.
https://pdfpiw.uspto.gov/.piw?PageNum=0&docid=11063118&IDKey=00846F5F9EDA&HomeUrl=http%3A%2F%2Fpatft1.uspto.gov%2Fnetacgi%2Fnph-Parser%3FSect1%3DPTO1%2526Sect2%3DHITOFF%2526d%3DPALL%2526p%3D1%2526u%3D%25252Fnetahtml%25252FPTO%25252Fsrchnum.htm%2526r%3D1%2526f%3DG%2526l%3D50%2526s1%3D11063118.PN.%2526OS%3DPN%2F11063118%2526RS%3DPN%2F11063118
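As a quick sanity check on the ~4.3 THz figure: a barrier of E = kT·ln2 corresponds to a characteristic frequency f = E/h. The comment does not state the temperature; T = 300 K is my assumption here, and it does reproduce the quoted number:

```python
from math import log

k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J*s
T = 300.0           # kelvin (assumed; not stated in the comment)

E = k * T * log(2)       # Landauer energy kT*ln2, ~2.87e-21 J
f_THz = E / h / 1e12     # characteristic frequency in THz
print(round(f_THz, 1))   # 4.3
```

At body temperature (310 K) the same formula gives closer to 4.5 THz, so the figure evidently assumes room temperature.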

137. Clint Says:

Hi again Matt #135:

The fact that, when we observed the initiation of those oscillatory depolarizations in cells of the non-local culture, they always began on or immediately after the fourth laser pulse of the iteration is also an important consideration, indicating that a very rapid and efficient calculation was taking place.

There are likely multiple natural means of input into the DNA quantum logic processor.

Calling this system and its behavior a “calculation” or “logic processor” seems problematic and non-standard to me. But, then maybe that just indicates that we have a different fundamental view about physics (or metaphysics) regarding whether or not “natural” systems should be considered to be “computing” 😉

To my understanding a “computer” must be something that can accept arbitrary input or programming (both within some defined sets) from a human being (or well, anyway from “some being” who is capable of arbitrarily programming a computer). If all physical systems are “computing” then … well everything is ultimately a “quantum computer” … but that doesn’t seem to get us anywhere.

Again, to my point above, there are physical quantum systems – things that physicists study using the quantum model like atoms, electrons, photons, etc. But then there is the “quantum class of computation” which I would prefer was called something like the “amplitude interference model” of computation that doesn’t necessarily have anything to do with quantum physics.

My point of view here obviously is my humble opinion and I don’t believe it at all takes away from possible interest in your results. After all I was interested! And just in general my fundamental belief is there are no wasted experiments or uninteresting results if we are getting more data, more confirmation, more examples, etc. Who knows, maybe you are on to something new about neuronal synchronization? That would be very interesting.

I then encourage you (and others) to repeat (and thus verify the results of) these experiments. Determine modifications or other experiments that can rule out possible explanations. And demonstrate by applying the rules of quantum mechanics that the systems are truly entangled in a non-classical way … meaning that if the system(s) are represented by complex vectors in Hilbert state space under the 2-norm (I don’t know maybe for depolarized or not?), interference must be present meaning you can verify the neurons (or the DNA?) must have been in a superposition of depolarized and not, … I’m just guessing at how you would define these states for your purposes! But those are the kinds of things I would need to see in the discussion if I am to be convinced that to fully describe this system’s behavior requires that we apply the quantum model. And, even then, I think I would only be willing to go so far as to say you’ve modeled a physical quantum system … but we need some more interactive capabilities to realize a “quantum logic processor” 🙂

Thanks again, I enjoyed the exchange. Best of luck!

138. Matt Mihelic, MD Says:

Hi Lorraine Ford #129

The bits/qubits that are theoretically held in the DNA quantum gates (of deoxyribose enantiomeric symmetry) are physically connected within the molecule and are therefore logically connected.

139. Lorraine Ford Says:

Matt Mihelic, MD #138:
If one were writing a computer program to symbolically represent this situation, how would one symbolically represent (in a general way) this “physically”/ “logically connected” situation? And how would one, in the computer program, symbolically represent (in a general way) the response to this situation? Presumably, every detail of every outcome of the situation would be covered by the computer program.

140. Jake Says:

In what way does QM appear to be like a final theory? That statement seems ridiculous to me. How can you possibly know such a thing?

141. Clint Says:

Hi Jake #140:

Great questions!

There needs to be an agreed definition of what a “final theory” would be, right?

Maybe for conducting science it would be something like “It covers all past observation and continues to make correct predictions.”

However, as Scott has pointed out, quantum mechanics is not really about physics … but is more like an operating system that we use to build predictive physical models. So, a general probabilistic class of computation.

The topic typically divides into those who see QM as something “really running the universe” (or that the “universe runs on”) and those who see QM as a mathematical tool we use that, while it always works for us, tells us nothing about whatever is “actually out there” behind our observations.

I actually think that there is a deep computational problem within your “how can you even know such a thing?” question. Let’s suppose that our brains are (very limited) quantum models of computation. When you reached the conclusion that quantum mechanics was the “final theory” of the universe, how would you know whether it REALLY was the final theory, or whether every observation you make, every probability you define, every basis you choose, every amplitude you represent as information, all exist only in your head, while “whatever” is out there running the universe is nothing at all like our concept of a probabilistic model of computation (OK, maybe at least it has to contain BQP 😉)? If that were the case, then I think you would be right that we could never say such a thing. But … how do you prove that?

To me that seems like a computational complexity question, because it is basically asking: suppose you have an AI realized in a class of computation X, and that AI is running in a universe running on a class of computation Y, where X is strictly contained in Y. Would that AI inevitably conclude, “Hey, the universe must be running on X,” given that all of its information would have to be encoded in X, all of its logic is restricted to X, and even the very definitions of things like “observation” and “experimental setup” and “probability” are defined by X?

Or would an X-limited-AI, by some kind of experiment or experience, be able to discover a universe running on Y ?

But, I don’t know, maybe that computational problem has been solved and I’m just unaware? Maybe there is a computational complexity researcher around here who knows?

142. Lorraine Ford Says:

Matt Mihelic, MD #128 and #138,
Re “In a quantum system, the ‘context’ is the superposition of all of the quantum bits … the quantum gates (which are essentially the physicality of the qubits), and those quantum gates are the logical gates of the system by which the logical operations take place”:

You seem to be implying something like the following things and the following steps:

1. What might be called “bits of possible information” are themselves information to the system, where the system is this tiny little part of the world that we are describing. Each of these individual “bits of possible information” can be represented in something like the following form: “category = number IS TRUE”. So, in the first step, the system needs to create all the “bits of possible information” (i.e. these “bits of possible information” don’t just appear from nowhere, for no reason).

2. These individual “bits of possible information” are not themselves the information context. The information context can seemingly only be represented as (something like) all the different “bits of possible information” separated by “OR” symbols. Clearly, the different “bits of possible information” are not themselves representable as “OR” symbols, i.e. the “bits of possible information” are not themselves logical gates. So, the second step might be to link all the different “bits of possible information” by “OR” symbols. Now we have the information context. But in this scenario, contrary to what you say, “the physicality of the qubits” (where physicality is representable as numbers that apply to categories) seemingly can’t be a logical gate.

3. The next step is the response to this information context. I.e. the next step would be representable as something like: “IF logical context THEN …”.

The only solution seems to be that matter (e.g. DNA) can’t be represented as a set of numbers that apply to categories: to fully represent matter, one needs to also represent what matter DOES via symbols like IF, AND, OR, IS TRUE and THEN.

143. Paul Hayes Says:

Clint #141

QM is really about physics. QT isn’t, but QM (and QFT) certainly is. This may seem like a minor point, but I think being aware of, and making, the distinction between QT (probability theory) and QM (one of its applications) is important, especially when talking to physicists. Mathematicians and (some) mathematical physicists may know better, but the average physicist is burdened by a sorry history and a poor education*.

* “There are few mathematical topics that are as badly taught to physicists as probability theory.” –Streater

144. Clint Says:

Hi Paul #143:

I totally agree.

The problem is the word “quantum” keeps hanging around when trying to talk about a class of computing 😉

Seems like I read/heard some physicist say “If Planck’s constant shows up then it’s quantum physics.”

That would be consistent with Mike and Ike in QCQI introducing Planck’s constant and the Schrodinger equation in Postulate 2′ (as an extension to Postulate 2) as the appearance of experimental physics. The question then comes down to “Is Postulate 2′ really optional?” Pretty sure Scott has pondered that in more than one place 😉

If only we could stop calling this computational class “quantum”! As a computer engineer, this strikes me as being like calling an Intel microchip a “Newtonian Computer” instead of a binary computer, or twisting ourselves up to say we’ve made a “Newtonian Mechanical Computer”. It inverts the hierarchy of the computational class and the application of that class. It seems like the naming of a computer should come down to “What is the set of numbers we are allowed to operate over?” If that set of numbers is {0,1}, then we have a “binary computer”.

All of what makes this class of computation different comes down to the fact that it allows for the interference of amplitudes (complex numbers) in some “computing device”. And it doesn’t restrict to any “physics” as regards how we or Nature might innovate to realize amplitude interference (and the other postulates required) in a device. So, obviously I’m advocating for the position that

Postulate 2′ is not the only way to realize Postulate 2

In other words, it is in principle possible to realize a device that allows interference of amplitudes (and the other postulates) without bringing Planck’s constant into the computational level description of the device.

All of the following postulates (normalized Hilbert space, linear operators, projection operators, tensor products) follow as consequences of saying “we want to build a class of computation that is based on the interference of complex numbers.”

Start with saying “I want to compute by interfering amplitudes”. Then we need …
Orthogonality -> distinguishable states requirement
Norm -> follows from Gleason’s theorem if you want to compute with the above
Unitary -> linearity (don’t destroy the above)
Projection -> required non-linearity for reaching computational “decisions”
Tensor products -> want to combine things above
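The last item in the list, combining state spaces, is just the tensor (Kronecker) product. A small sketch (my own illustration) of why joint dimensions multiply rather than add, which is where the exponential state space comes from:

```python
import numpy as np

# Two single "amplitude bits", each a normalized 2-dimensional vector.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
zero = np.array([1, 0], dtype=complex)

# Combining state spaces: the Kronecker product gives the joint state.
combined = np.kron(plus, zero)

print(combined.shape[0])                          # 4: dimensions multiply
print(np.isclose(np.linalg.norm(combined), 1.0))  # True: norm preserved
```

Ten such components would give a 2^10-dimensional joint space, again with nothing physics-specific in the construction.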

Maybe it should be called “Amplitude Interference Computing”. Of course that’s a mouthful 🙂 Maybe just “interference” computing? “amplitude computing”?

145. Lorraine Ford Says:

Re quantum “possible outcomes”:
One of the weirdest parts of “quantum weirdness” is the idea that, at every point in space and time, nature is constructing for itself a set of possible outcomes, only to change its mind and select just one of them. It seems like a lot of wasted effort on the part of nature. I would have thought it more likely that people, physicists, are the ones mentally constructing a set of possible outcomes, rather than low-level nature constructing one. I.e., something like QBism would be a more reasonable way of looking at this aspect of the world.

146. Matt Mihelic, MD Says:

Hi Lorraine Ford

Re. #139:
“…how would one, in the computer program, symbolically represent (in a general way) the response to this situation?”
George E.P. Box once said that all models are wrong but some are useful. The best representation of the system is found in the modeling of the DNA molecule. This is because the DNA molecule IS the system. It is the physicality of the “nano-info-bio nexus”.

Re. #142:
In the classical computer the bits are different from the logical gates, but in the (DNA) quantum computer the hardware and the software are one and the same, and the qubits are synonymous with the quantum gates. This is a significant difference from the classical digital computing paradigm that dominates our current conceptualization(s) of quantum computing. Because one can do classical computing on a quantum computer but one cannot do quantum computing on a classical computer (a la David Deutsch), classical computing can be considered as a sort of “subset” of quantum computing. Today the time-dependent serial logic of classical digital computing dominates our ideas of computing, and so we tend to conceptualize quantum computing in terms of classical digital computing. We’ve kind of forgotten that prior to about the mid-1970s, analog computing ideas “competed” with digital computing ideas in the minds of those developing computer science. Analogously, it can be difficult to conceptualize quantum computing as time-independent computing (a la Paul Benioff) within the current milieu that is dominated by the paradigm of time-dependent digital computing.

147. Clint Says:

Hi Lorraine #145:

I would enjoy hearing more about why you see QBism as more reasonable.

Here are my thoughts …

One of the weirdest parts of “quantum weirdness” is that, it is thought that at every point in space and time, nature is constructing for itself a set of possible outcomes, only to change its mind, and only select one of the outcomes. It seems like a lot of wasted effort on the part of nature.

I agree it seems inconceivable to imagine the universe keeping track of the amplitudes for all possible measurable states AND for all possible definitions of, and contexts for, “systems”! Of course, the idea of leveraging this fantastic level of computational work the universe is doing is the motivation behind quantum computing efforts – Scott cited this “practical confirmation of the fantastic” as part of his original interest in the field.

This is where I am usually told “Inconceivable … You keep using that word. I do not think it means what you think it means” 🙂

I would have thought that it would be more likely that people, physicists, are the ones that are mentally constructing a set of possible outcomes, rather than low-level nature constructing a set of possible outcomes. I.e. something like QBism would be a more reasonable way of looking at this aspect of the world.

And I agree interpretations like QBism may seem more “reasonable”. Here are two arguments that could be made for why they are “more reasonable”:

First, “extraordinary claims require extraordinary evidence.”

Subjective interpretations seem to be making the “least extraordinary claim” among the different possible interpretations. They are only claiming that “quantum theory is the way it is because that is just how you think about things.” That can be either (A) It is simply a mathematical tool (general probability theory) or (B) it reveals something about our own model of cognition (brain). Either of those subjective interpretations – that we are choosing a math tool or that this is how our brains put together information, probabilities, and observables – are “humble” interpretations in that they do not presume more than that we are simply computer-like beings who are making use of this model. However, … think about what the other interpretations are asking us to accept … many worlds? pilot waves? The other interpretations seem to want to cling to the “old dream of physics” that we can “know and understand the universe/reality”. That is, the other interpretations want to say we are both using the tool AND because it works for us that must mean we have gained authentic knowledge of reality … so we can “have our cake and eat it too”. I realize this isn’t even something we normally stop to doubt … but it is an assumption we are making.

Second, the fabricated nature of our cognitive model.

What the subjective interpretations are claiming seems to be most in line with what cognitive neuroscience tells us: yes, there is some kind of external reality out there … BUT everything we know about it really comes down to what we know about the simulation of it fabricated by our brains. Furthermore, we are profoundly biased (or evolved) to unquestioningly accept the brain’s fabrications: we are biased to think our consciousness is “special”, our beliefs “must be right and everyone else is crazy”, what we see or experience is “real”, “mathematics is everywhere so math must be revealing something deep about external reality”, etc. The subjective interpretations then at least don’t violate this “cognitive fabrication principle” that we are bound, restricted, and often unknowingly seduced by being all-encompassed by the model of our own cognition. By being all-encompassed I mean that the very nature of what it means “to set up an experiment” is something that we can’t escape from because it is itself defined by the nature of our cognitive model of computation: choose your arbitrary basis, encode your observations in amplitudes, the norm of the amplitudes represent probabilities, linear evolution operators modify the state, projection operators form threshold decisions, and tensor products combine state spaces. If all of those come from the very architecture of our thoughts/brain then … how could “set up an experiment” ever be performed in any other way? … like Scott says Bohr would say – it sure LOOKS like it always works … well … if this model is the ONLY logical/model “eyes” we could ever see the world through …

We (well most of us) recoil from this path as it appears to lead towards the truly crazy-sounding conclusion that … Quantum theory is not about physics … it’s about cognitive neuroscience. What the physicists “discovered” was the model of their own predictive cognition that underlies and defines ALL our observations.

How else could we frame this “subjective trap” concern? Maybe it would be that we are under something like a computational version of the independence of the continuum hypothesis where we can construct an inconceivable (infinite non-classical field) model of reality while all the time God could be looking down on us and saying … “Funny creatures … their reality is finite and classical but they’ve constructed a model that leads them to believe it is infinite and non-classical. How clever of me for designing a finite classical universe in which all possible infinities would appear to provably exist!” Again, I’m just using the independence of the continuum hypothesis to say … our experience of quantum theory working could be something like that kind of an “internal self-constructed illusion” that would pass all tests/proofs inside itself but not actually hold when viewed from the outside … and, like IOTCH we wouldn’t ever be able to tell the difference or “find” the truth … from inside our model …

All of that being said … 🙂 … I do recognize there are arguments against the subjective interpretations. Personally, I feel that the strongest one is simply … This is why we do experiments! It’s empirical evidence that forces us into accepting QM!

Yeah … but … that’s not a knockdown, of course … after all … if our brain can’t define an experiment in any way except by

(1) Record everything as amplitudes
(2) Use interference and norm for vector state spaces
(3) Use linear operators to change state
(4) Use projection operators for decisions
(5) Combine state spaces

… then the character of “experimental” science itself comes into question. Like Scott said, quantum mechanics is “not about physics” but is the “operating system” that we use to set up experiments, obtain evidence, and build models. Well … why are we “forced” to use this particular operating system? Are we forced to use it?

The answer is usually: “Because experimental science forces us to use it! In other words, it is the operating system that works for building models of the universe/reality.”

However, that could still be undercut by, “It works for everything because that is how your brain fabricates all the things you call information, probabilities, and observables.”

At which point Science throws up its hands and says, “Well, what’s the difference then??”

The difference could be in things such as:

(A) In the spirit of Wittgenstein’s “whereof one cannot speak, thereof one must be silent” … it would apply that silence to statements like “The universe is quantum”

(B) It would answer the question “Why quantum mechanics?”

(C) But maybe most seriously of all … it might make us start working on how to escape the “subjective trap” … as a scientific necessity. Or is this just a corollary of IOCH, so that we can’t get out, nor can we ever know whether the model we are using truly matches the operating system of our universe/reality? I mean … what would it even mean to discover “information” or “laws of Nature” in another operating system if we can only conceive of information, experiments, measurements, etc. in our operating system? (And yes, classical information is contained in QT.)

Looking forward to your thoughts, Lorraine 🙂

148. Andrei Says:

Clint,

„I agree it seems inconceivable to imagine the universe keeping track of the amplitudes for all possible measurable states AND for all possible definitions of AND contexts for “systems”!”

I think we can understand a lot about how nature works by focusing on the EPR argument.

So, we have two distant locations, A and B, and they measure the same property of the entangled particles (position, momentum or, in Bohm’s version, spin on a certain axis, say Z). We know for a fact that:

P1: A measurement at A allows you to predict with certainty the measurement result at B. If A measures spin UP on Z, B will get spin DOWN on Z with probability 1.
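P1 is easy to check numerically. A small sketch (my own illustration, using only textbook QM): sample Z-measurements on the singlet state and confirm the two outcomes are always opposite.

```python
import numpy as np

# Singlet state of two spin-1/2 particles, (|UP,DOWN> - |DOWN,UP>)/sqrt(2),
# written in the basis (UU, UD, DU, DD).
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
probs = np.abs(singlet) ** 2               # Born rule: [0, 0.5, 0.5, 0]

rng = np.random.default_rng(0)
outcomes = rng.choice(["UU", "UD", "DU", "DD"], size=1000, p=probs)

# P1: if A gets UP, B gets DOWN (and vice versa). UU and DD never occur.
assert set(outcomes) <= {"UD", "DU"}
```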

Let’s introduce now the locality condition:

P2: The measurement at A does not disturb B.

So, we know that the Z-spin of B is Down after the measurement at A, we know that the measurement at A did not disturb B, so it logically follows that the Z-spin of B must have been Down even before the A measurement (otherwise we contradict P2). So, we have:

C1: The spin of particle B was predetermined (spin-Down on Z).

But this also implies that the spin Z of particle A was predetermined as well (otherwise you could not predict B with certainty).

So, the above argument proves that the spins on Z were predetermined for both particles since the time of emission. The only assumption is locality (which Bohr accepted). Einstein was right, Bohr was wrong. QM is incomplete, since the true state of the particles before measurement was: particle A – spin UP on Z, particle B – spin DOWN on Z, not the superposition/entangled state QM postulates.

So, the universe does not „keep track of the amplitudes for all possible measurable states AND for all possible definitions of AND contexts for “systems”!”. The universe is deterministic, so there is a single, well-defined state at each time.
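(As an aside on the bookkeeping quoted above: taken at face value, it really is astronomical. A rough back-of-the-envelope, assuming 16 bytes per complex amplitude, i.e. two 64-bit floats:)

```python
# n two-level systems require 2**n complex amplitudes; at 16 bytes
# each, the memory needed just to store the state grows as:
for n in (10, 30, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30   # gibibytes; n=30 already needs 16 GiB
    print(f"n={n:2d}: {amplitudes} amplitudes, ~{gib:.3g} GiB")
```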

The above argument also implies that QBism must be non-local (since it does not employ deterministic hidden variables). This is, in my opinion, deadly for QBism, since non-locality requires the introduction of a universal, absolute frame of reference, which does not sit well with the agent-centered view of this interpretation.

149. Paul Hayes Says:

Andrei #148

The QBists have themselves pointed out why QBism – and QM more generally – isn’t and doesn’t need to be “nonlocal”. The idea that non-disturbance implies predetermination is simply wrong. It’s disappointing that so many people still don’t understand these issues and continue to make that mistake, offer wrong explanations of EPR and even fail “the ping pong ball test”.

150. Clint Says:

Hi Andrei #148:

Does the EPR argument then suggest that we should discard the “no-superdeterminism” assumption in the “local friendliness” theorem?

Superdeterminism (determinism with pre-existing correlations between the systems being measured and the measurement settings and/or observers), I concede, undercuts subjective interpretations! Certainly we could simply be doing something like what the independence of the continuum hypothesis suggests: inventing a mathematical model that, while provable to us, claims “more” than actually exists. Maybe the IOCH supports superdeterminism?

If you are not advocating for superdeterminism, then … where do the measurement settings come from?

It is fascinating that we have actually been able to conceive of an experimental protocol that forces us into these two “radical” positions, either

(A) Superdeterminism, or
(B) Subjectivism

(Or are there serious arguments for superluminal signaling that I’ve missed …?)

I guess what bugs me about superdeterminism is … it feels like it actually violates the “extraordinary claims require extraordinary evidence” rule. It is making a rather extraordinary claim! And isn’t the claim … by definition … not something we can actually verify/test? But maybe the IOCH tells us why we can’t verify?

On the other hand, subjectivism (either of the “using a model tool” variety or the “natural brain model” variety) seems to be making a much less extraordinary claim: simply that we are computational beings who have been fitted with a particular model for predictive computing.

But then maybe God was feeling extraordinary 😉

Thank you for sharing your thoughts!

151. Lorraine Ford Says:

Clint #147:
I don’t think we have a “kind of an “internal self-constructed illusion””, because people successfully drive cars and navigate the world, so our consciousness can’t be harbouring too much of an illusion when it comes to dealing with the everyday world.

I think any illusion would come from using models of the world. We have an illusion about models. The illusion is that mathematical models can work without people perceiving and moving the math symbols, i.e. the models have hidden and unacknowledged parts. In some circumstances, this could lead to people drawing wrong conclusions from a model, if they failed to notice the aspects that people contribute to a model.

Re the aspects that people contribute to a model: in order to make computer systems work, computer programs need to model people’s contribution to a model. Computer programs model perception (i.e. statements like “condition1 AND condition2 AND condition3 … IS TRUE”), and computer programs model agency (i.e. statements like “IF condition1 AND condition2 AND condition3 … IS TRUE, THEN…..”). These might be rough models of perception and agency, but they represent the type of aspects that people contribute to a model, the type of aspects that can’t be represented with equations alone.

So I’m contending that we need to work out which part of a model is what people are doing, and which part of a model is what the “external” world is doing.

152. Lorraine Ford Says:

Matt Mihelic, MD #146:
What does “the physicality of the “nano-info-bio nexus”” mean? This is not a model.

Given the environment or surrounding situation of a molecule, is there any logic at all in the behaviour of a molecule? How does one model the behaviour of the DNA molecule: with equations, or with logical symbols like IF, AND, OR, IS TRUE and THEN, or does one model behaviour with both types of symbols?

Symbols like IF, AND, OR, IS TRUE and THEN are the types of symbols that can be used to model perception of a situation, and response to a situation (as opposed to the type of equations that represent the laws of nature, which can only model relationship). Are you trying to model pure relationship, or are you also trying to model perception of a situation, and response to a situation?

153. Jonathan Says:

Scott #91,

As it happens, I was reading the histories [0] only a week or so ago and came across an answer, on page 68, to whether Einstein refuted von Neumann’s proof. Abner Shimony relates a story told to him by Peter Bergmann about a time Bergmann asked Einstein for his opinion on the proof. Apparently Einstein was quite familiar with it: he fetched von Neumann’s book, pointed to one of the assumptions, and said there was absolutely no reason to believe that the assumption should hold in general for all alternative theories. However, Einstein never published this criticism, with the authors suggesting that because von Neumann was very careful in the wording of the proof not to claim too much, Einstein didn’t see the need to publish a criticism of something von Neumann never specifically claimed. (Even to this day there are still arguments in the literature about what exactly von Neumann said and meant [1]; I suppose it’s just another of the many things we unfortunately never got clarification on from von Neumann, because he died so early.) As for Bohr, I am sure he would have known of the proof, but if I remember correctly reading somewhere else, he did not use it specifically in his arguments (although others in the Copenhagen school did).

154. Andrei Says:

Paul Hayes,

„The QBists have themselves pointed out why QBism – and QM more generally – isn’t and doesn’t need to be “nonlocal”.”

Sure they did, it’s just that they are wrong. Let’s see how QBists argue that their interpretation is local. I quote from the paper you linked:

„An Introduction to QBism with an Application to the Locality of Quantum Mechanics”, page 4:

„Quantum correlations, by their very nature, refer only to time-like separated events: the acquisition of experiences by any single agent. Quantum mechanics, in the QBist interpretation, cannot assign correlations, spooky or otherwise, to space-like separated events, since they cannot be experienced by any single agent. Quantum mechanics is thus explicitly local in the QBist interpretation. And that’s all there is to it.”

But of course, the space-like separated events can be experienced by the agent at the moment this agent looks at the experimental records received from the A and B labs. Sure, he does not experience them in real time, but so what? Would a QBist deny the correctness of experimental data collected by two distant computers located at A and B? If so, QBists would be unable to do any science.

The time when the agent looks at the experimental records is completely irrelevant. What he needs to do is explain those experimental records, acquired at the time they were acquired. And, as proven by my version of the EPR argument presented in my comment #148, such an explanation involves either non-locality or the existence of deterministic hidden variables.

Or take a classical measurement of the speed of some object. The QBist would record the initial position, X0, at time T0 and the final position, X1, at time T1. The velocity of that object would be given by the formula (X1-X0)/(T1-T0). Say the result is >10^10 c. What would our QBist say? That the speed is still <c because he cannot experience space-like separated events?

„The idea that non-disturbance implies predetermination is simply wrong.”

On the contrary, it’s a rock-solid logical deduction. Let B0 be the state of B before the A measurement and let B1 be the state of B after the A measurement. If the A measurement does not disturb B we have:

P1: B0=B1.

But we know from the A measurement that:

P2: B1=spin DOWN on Z.

From P1 and P2 it necessarily follows that B0=spin DOWN on Z. The measurement result was predetermined.

I see that you linked a very long article dealing with the EPR argument. If you think you can find a rebuttal of my argument in there, please be more specific. I don’t see anything about ping-pong in there either.

155. Andrei Says:

Clint,

Thanks for a very interesting discussion!

„Does the EPR argument then suggest that we should discard the “no-superdeterminism” assumption in the “local friendliness” theorem?”

I think that a true theory must pass all arguments. The EPR argument proves that there are only two options:

1. Non-locality
2. Deterministic hidden variables.

If we take into account the implications of Bell’s theorem we remain with:

1. Non-locality
2. Superdeterminism

I cannot say that one must accept superdeterminism, since the non-local option is still available, but, given the importance of relativity for both QM and GR I think superdeterminism is the most reasonable option.
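The “implications of Bell’s theorem” referred to here can be stated in one line of arithmetic. For the singlet state, standard QM predicts the correlation E(x, y) = -cos(x - y) between spin measurements along angles x and y, and at the usual CHSH angles this gives |S| = 2√2 ≈ 2.83, above the bound of 2 obeyed by any local hidden-variable model in which the settings are independent of the source. A quick check (a sketch assuming only that textbook formula):

```python
import numpy as np

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# For the singlet state, QM predicts E(x, y) = -cos(x - y).
def E(x, y):
    return -np.cos(x - y)

a, ap = 0.0, np.pi / 2                 # Alice's two settings
b, bp = np.pi / 4, 3 * np.pi / 4       # Bob's two settings

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

# |S| = 2*sqrt(2), exceeding the bound of 2 that holds for local hidden
# variables when the measurement settings are independent of the source.
print(abs(S))
```

Superdeterminism, as described in this comment, keeps locality and determinism precisely by denying that settings/source independence.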

„Superdeterminism (determinism with pre-existing correlations between the systems being measured and the measurement settings and/or observers) I concede undercuts subjective interpretations!”

I am not sure what the subjectivity of those „subjective interpretations” is supposed to achieve. The assumption of objectivity does not enter anywhere into the EPR argument as presented in post #148, so denying it cannot possibly avoid the conclusion of the argument, which is that only deterministic hidden-variable theories can be local. Of course you could have subjective non-local theories or subjective hidden-variable theories, but what’s the point?

„If you are not advocating for superdeterminism, then … where do the measurement settings come from?”

I do consider that superdeterminism is indeed the most reasonable option.

„It is fascinating that we have actually been able to conceive of an experimental protocol that forces us into these two “radical” positions, either

(A) Superdeterminism, or
(B) Subjectivism„

Again, I don’t see where „subjectivism” appeared as an option. It isn’t one; it is just a different property a theory could have, a property unrelated to the issue of locality. The two options are 1. Non-locality and 2. Superdeterminism.

What does IOCH mean?

„I guess what bugs me about Superdeterminism is … It feels like it actually violates the “extraordinary claims require extraordinary evidence” rule.”

Well, it does not. Unfortunately, many scientists associate superdeterminism with some finely tuned initial state at the Big Bang. Scott is one of them. I have no idea where this originates, since no superdeterminist proposal (’t Hooft’s Cellular Automaton Interpretation, or Stochastic Electrodynamics) postulates such a thing.

In the context of Bell’s theorem superdeterminism implies that the states of the particle source and detectors are not independent. They are correlated in some way. This is all.

The fact that some distant physical systems are correlated is not something extraordinary.

Choose two stars in a binary system. Their position and momentum are correlated, since they orbit around the common center of mass, in the same plane, and their orbits are ellipses. The explanation for this correlation has nothing to do with some special conditions at the Big Bang. It’s a consequence of how gravity acts (the inverse square law). No initial condition would generate square orbits.

In the general case of a system of N interacting objects, the state of the system would be a solution to the corresponding N-body problem. None of the N objects would have a state independent of the rest, since the solution to the N-body problem depends on all N objects.

What about a Bell test? The source and detectors are made out of atoms, hence electrons and nuclei. These are charged particles, so they all interact electromagnetically. The outcome of a Bell test must be a solution of the corresponding N-body EM problem. So, the state of the source (which in turn determines the hidden variable) is not independent of the states of the detectors, and Bell’s independence assumption fails. All particles in the experiment interact, so you need to consider all of them.

Since any experiment involves electrons and nuclei, and they all interact electromagnetically at any distance, we have a proper justification for the claim that superdeterminism (pre-existing correlations) is to be expected in the general case. There is nothing extraordinary about that.

There is a regime where independence is true to a very good approximation: the Newtonian/macroscopic regime. When you are interested in macroscopic properties, the microscopic correlations between electrons and nuclei are hidden in the statistical noise. Large objects consisting of equal numbers of positive and negative charges approximate non-interacting objects very well, at least when they are far apart. This is why we can assume independence in medical tests, and this is why superdeterminism does not conflict with the scientific method. You should expect correlations at the fundamental level (where they indeed manifest as the so-called quantum contextuality), but you should not expect them at the so-called classical level.

156. Paul Hayes Says:

Andrei #154

That quote from the QBists about timelike vs spacelike correlations is indeed nonsense, but it doesn’t detract from the fact that, as is well known, neither QBism nor any other [neo-]Copenhagen interpretation is “nonlocal”. See those papers by Werner I linked to in the Quanta Magazine comment, or Landsman’s book, or Rovelli’s take on this issue, or Griffiths’, or Gell-Mann’s, or Mermin’s in that video I linked, or the other comments of mine in that Quanta article, or…

Your argument isn’t a “rock solid logical deduction”, it’s a non sequitur. It fails to take into account that [neo-]Copenhagen interpretations simply don’t need to and don’t assume that definite values found / measured must have been there all along; that they are/were “possessed values”. They just don’t subscribe to that naive, classically motivated metaphysical prejudice (“Realism_2 (=C)” as Werner puts it).

157. Andrei Says:

Paul Hayes,

“Your argument isn’t a “rock solid logical deduction”, it’s a non sequitur.”

Really? Are you saying that from:

B1=B0 and B1=spin DOWN, it does not follow that B0 must also be spin DOWN? Are you serious? Where is the non sequitur?

“Copenhagen interpretations simply don’t need to and don’t assume that definite values found / measured must have been there all along;”

I don’t make such an assumption either. The only assumption is locality, that the measurement at A does not disturb B. The necessity of the preexisting definite values is the conclusion of the argument. As long as you don’t deny any of the premises (and you didn’t) you have to accept the conclusion.

The argument works perfectly regardless of your assumptions about the state of B. You can assume there is no B at all, or that it is undefined, or that it is a pink 6-dimensional rabbit, whatever. The problem is that locality requires that the A measurement not disturb this state. But then QM tells us what the state of B is after the A measurement: it’s a spin eigenstate (spin-DOWN in our case). So, locality + QM forces us to accept that the pre-measurement state has to be spin-DOWN as well. The superposition simply describes our incomplete knowledge about the system. Such a view is not naive; it’s the only logically consistent view that is also local.

158. Paul Hayes Says:

Andrei #157

“Are you serious?”

Of course I’m serious.

“I don’t make such an assumption either.”

Of course you do – “P1: B0=B1” – and for pity’s sake please just watch that Mermin lecture video about GHZ or something. This is all quite elementary.

159. Andrei Says:

Paul Hayes,

“Of course you do – “P1: B0=B1”

This is the third time I have had to explain this elementary logical deduction. You want locality, so you want the state of B NOT to change when A is measured. If the state of B does not change, it has to be the same, so B0=B1; therefore B0=B1 FOLLOWS from the locality assumption. I don’t assume B0=B1 from the start. If B0 does not equal B1, it means that B changed as a result of the distant measurement at A, so a non-local physical effect occurred. What could be simpler than that?

Can you please explain to me how it is possible for B not to change while B0 is different from B1?

GHZ is a different experiment. You need to understand EPR first in order to properly interpret GHZ.

I have already debunked Mermin’s take on locality in #154. You agreed with me:

“That quote from the QBists about timelike vs spacelike correlations is indeed nonsense”

Yet you failed to replace that nonsense with something better. Why is that? By all means, find a proper local explanation for EPR by Mermin or Werner, or any other physicist you admire, and we’ll see how coherent it is. I predict you will come up empty-handed.

160. Clint Says:

Hi Lorraine #151:

Yes, I agree with you … “illusion” was a poor word choice! Maybe “fabrication”? In any case, you are certainly correct that our model maps successfully enough to the world, in the sense that it allows us to navigate the world at our level of interaction/abstraction … which pretty much was the goal of evolution, right? That said … it seems obvious that whatever our model of cognition (C) might be, it is at least strictly contained (like a proper subset) in the model running the universe (U) (if we presume that the universe is running like a computational model, that is). And maybe we could say that evolution/life was/is aiming to achieve a model (class) of computation that is at least equal to the class of computation running in/on the universe.

Could evolution evolve a model in our cognition such that the class of the universe would be strictly contained within the class of the model of our cognition? Since the model of our cognition must run on or in the model of the universe … I think complexity theory tells us no. At best, evolution could equal the class of computation running the universe. However, the “model” of our cognition, while it could be of a class equal to the model running the universe, may be much weaker in practical performance. For example, the Fugaku supercomputer and the Digi-Comp II have the same “class” of computation … but classical computation is realized in them in very different models, with very different practical results 🙂 We have no reason not to expect, for the “quantum” class of computation, that there will also be different models of quantum computation that are extremely different in practical performance.

“So I’m contending that we need to work out which part of a model is what people are doing, and which part of a model is what the ‘external’ world is doing.”

Probably a totally different direction … but this reminded me of a thought I once had. When thinking early on about computers, I had the eerie feeling that the computers required me to observe their output in order for what they were doing to be considered “computing”. After all … it’s nothing but voltages, currents, EMFs, and semiconductor devices, right?! So, if the transistor doesn’t really know that it is representing a value from the set {0,1} … then who does know this? Or where is it known?

This is of course the “Does a tree falling in the forest make a sound?” philosophical paradox/problem. The resolution requires us to agree on a definition. And as Gödel was quick to observe, it was Turing who gave us the definition of what it means to say that something is computing. That way we can say that if a “computer” multiplied 18*343=6174, then it wouldn’t matter whether that was done with a Commodore F4146R, the Fugaku supercomputer, the Digi-Comp II, GPT-3, the IBM Eagle r1 quantum processor, or a human being with paper and pencil … in all cases it’s “a computer computing”. Of course, the classes of computation, and the models of those classes realized physically, can vary enormously in their practical performance 😉

Again, you are probably going in an entirely different direction! Which I would enjoy hearing about. Thank you for your interesting comments.

161. Lorraine Ford Says:

Clint #160:
Yes, I’m taking “a totally different direction”.

Going back to your concerns about subjectivity (#147), what you call the “subjective trap”:
What one can infer about the world depends on the model one is using. The equations that represent the laws of nature represent relationships; they can’t be used to represent subjectivity: subjectivity seems like an aberration.

But the symbols used in computer algorithms, like IF, AND, OR, IS TRUE and THEN, are a natural fit if one wants to represent subjectivity, and one could still retain relationships (i.e. laws of nature). These symbols can be used to represent perception of situations a subject faces, and the response to situations.

This subjectivity would have to be an inherent part of the system as a whole, it would be a system of subjects. Inherent subjectivity (and agency) would seemingly make the world more like the world of the quantum mechanics models. But, this view would conflict with some people’s deeply held beliefs about the nature of the world. However, subjectivity can’t exist if the world is fully describable by the type of equations that represent the laws of nature.

One can’t ever “escape the “subjective trap””, because living things are subjective. The problem is the model. Equations can’t ever model a world of subjects and agents; I think one needs to model the world using the type of symbols used in computer algorithms, while still retaining laws of nature.

Re “Does a tree falling in the forest make a sound?”: Of course, it doesn’t make a sound. Physical sound waves don’t have a sound. Sound is the conscious experience of sound waves: the experience of sound requires consciousness, as well as functioning sound detectors (ears). But there is no mathematical relationship connecting physical sound waves with experienced sound, just like there is no mathematical relationship connecting light wavelength with experienced colour. Consciousness/ subjectivity/ experience is a different type of thing, and one needs to use different types of symbols in an attempt to represent it.
