## Google’s Sycamore chip: no wormholes, no superfast classical simulation either

Update (Dec. 6): I’m having a blast at the Workshop on Spacetime and Quantum Information at the Institute for Advanced Study in Princeton. I’m learning a huge amount from the talks and discussions here—and also simply enjoying being back in Princeton, to see old friends and visit old haunts like the Bent Spoon. Tomorrow I’ll speak about my recent work with Jason Pollack on polynomial-time AdS bulk reconstruction. [New: click here for video of my talk!]

But there’s one thing, relevant to this post, that I can’t let pass without comment. Tonight, David Nirenberg, Director of the IAS and a medieval historian, gave an after-dinner speech to our workshop, centered around how auspicious it was that the workshop was being held a mere week after the momentous announcement of a holographic wormhole on a microchip (!!)—a feat that experts were calling the first-ever laboratory investigation of quantum gravity, and a new frontier for experimental physics itself. Nirenberg asked whether, a century from now, people might look back on the wormhole achievement as today we look back on Eddington’s 1919 eclipse observations providing the evidence for general relativity.

I confess: this was the first time I felt visceral anger, rather than mere bemusement, over this wormhole affair. Before, I had implicitly assumed: no one was actually hoodwinked by this. No one really, literally believed that this little 9-qubit simulation opened up a wormhole, or helped prove the holographic nature of the real universe, or anything like that. I was wrong.

To be clear, I don’t blame Professor Nirenberg at all. If I were a medieval historian, everything he said about the experiment’s historic significance might strike me as perfectly valid inferences from what I’d read in the press. I don’t blame the It from Qubit community—most of which, I can report, was grinding its teeth and turning red in the face right alongside me. I don’t even blame most of the authors of the wormhole paper, such as Daniel Jafferis, who gave a perfectly sober, reasonable, technical talk at the workshop about how he and others managed to compress a simulation of a variant of the SYK model into a mere 9 qubits—a talk that eschewed all claims of historic significance and of literal wormhole creation.

But it’s now clear to me that, between

(1) the It from Qubit community that likes to explore speculative ideas like holographic wormholes, and

(2) the lay news readers who are now under the impression that Google just did one of the greatest physics experiments of all time,

something went terribly wrong—something that risks damaging trust in the scientific process itself. And I think it’s worth reflecting on what we can do to prevent it from happening again.

This is going to be one of the many Shtetl-Optimized posts that I didn’t feel like writing, but was given no choice but to write.

News, social media, and my inbox have been abuzz with two claims about Google’s Sycamore quantum processor, the one that now has 72 superconducting qubits.

The first claim is that Sycamore created a wormhole (!)—a historic feat possible only with a quantum computer. See for example the New York Times and Quanta and Ars Technica and Nature (and of course, the actual paper), as well as Peter Woit’s blog and Chad Orzel’s blog.

The second claim is that Sycamore’s pretensions to quantum supremacy have been refuted. The latter claim is based on this recent preprint by Dorit Aharonov, Xun Gao, Zeph Landau, Yunchao Liu, and Umesh Vazirani. No one—least of all me!—doubts that these authors have proved a strong new technical result, solving a significant open problem in the theory of noisy random circuit sampling. On the other hand, it might be less obvious how to interpret their result and put it in context. See also a YouTube video of Yunchao speaking about the new result at this week’s Simons Institute Quantum Colloquium, and of a panel discussion afterwards, where Yunchao, Umesh Vazirani, Adam Bouland, Sergio Boixo, and your humble blogger discuss what it means.

On their face, the two claims about Sycamore might seem to be in tension. After all, if Sycamore can’t do anything beyond what a classical computer can do, then how exactly did it bend the topology of spacetime?

I submit that neither claim is true. On the one hand, Sycamore did not “create a wormhole.” On the other hand, it remains pretty hard to simulate with a classical computer, as far as anyone knows. To summarize, then, our knowledge of what Sycamore can and can’t do remains much the same as last week or last month!

Let’s start with the wormhole thing. I can’t really improve over how I put it in Dennis Overbye’s NYT piece:

“The most important thing I’d want New York Times readers to understand is this,” Scott Aaronson, a quantum computing expert at the University of Texas in Austin, wrote in an email. “If this experiment has brought a wormhole into actual physical existence, then a strong case could be made that you, too, bring a wormhole into actual physical existence every time you sketch one with pen and paper.”

More broadly, Overbye’s NYT piece explains with admirable clarity what this experiment did and didn’t do—leaving only the question “wait … if that’s all that’s going on here, then why is it being written up in the NYT??” This is a rare case where, in my opinion, the NYT did a much better job than Quanta, which unequivocally accepted and amplified the “QC creates a wormhole” framing.

Alright, but what’s the actual basis for the “QC creates a wormhole” claim, for those who don’t want to leave this blog to read about it? Well, the authors used 9 of Sycamore’s 72 qubits to do a crude simulation of something called the SYK (Sachdev-Ye-Kitaev) model. SYK has become popular as a toy model for quantum gravity. In particular, it has a holographic dual description, which can indeed involve a spacetime with one or more wormholes. So, they ran a quantum circuit that crudely modelled the SYK dual of a scenario with information sent through a wormhole. They then confirmed that the circuit did what it was supposed to do—i.e., what they’d already classically calculated that it would do.

So, the objection is obvious: if someone simulates a black hole on their classical computer, they don’t say they thereby “created a black hole.” Or if they do, journalists don’t uncritically repeat the claim. Why should the standards be different just because we’re talking about a quantum computer rather than a classical one?

Did we at least learn anything new about SYK wormholes from the simulation? Alas, not really, because 9 qubits take a mere 2^9 = 512 complex numbers to specify their wavefunction, and are therefore trivial to simulate on a laptop. There’s some argument in the paper that, if the simulation were scaled up to (say) 100 qubits, then maybe we would learn something new about SYK. Even then, however, we’d mostly learn about certain corrections that arise because the simulation was being done with “only” n=100 qubits, rather than in the n→∞ limit where SYK is rigorously understood. But while those corrections, arising when n is “neither too large nor too small,” would surely be interesting to specialists, they’d have no obvious bearing on the prospects for creating real physical wormholes in our universe.
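To make the “trivial to simulate on a laptop” point concrete, here’s a minimal sketch of a state-vector simulation of 9 qubits. (This is an illustration of the Hilbert-space size only, not of the actual circuit from the paper; the Hadamard layer is just a stand-in gate.)

```python
import numpy as np

n = 9                    # qubits used in the wormhole experiment
dim = 2 ** n             # only 512 complex amplitudes specify the state
state = np.zeros(dim, dtype=complex)
state[0] = 1.0           # start in |00...0>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def apply_1q(state, gate, q, n):
    """Apply a 2x2 gate to qubit q of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, q, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, q)
    return psi.reshape(-1)

# Apply a Hadamard to every qubit: a full layer takes microseconds.
for q in range(n):
    state = apply_1q(state, H, q, n)

print(state.size)   # 512
```

After the layer, each of the 512 amplitudes has probability exactly 1/512; any 9-qubit circuit, of any depth, is just a sequence of cheap operations on this 512-entry array.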

And yet, this is not a sensationalistic misunderstanding invented by journalists. Some prominent quantum gravity theorists themselves—including some of my close friends and collaborators—persist in talking about the simulated SYK wormhole as “actually being” a wormhole. What are they thinking?

Daniel Harlow explained the thinking to me as follows (he stresses that he’s explaining it, not necessarily endorsing it). If you had two entangled quantum computers, one on Earth and the other in the Andromeda galaxy, and if they were both simulating SYK, and if Alice on Earth and Bob in Andromeda both uploaded their own brains into their respective quantum simulations, then it seems possible that the simulated Alice and Bob could have the experience of jumping into a wormhole and meeting each other in the middle. Granted, they couldn’t get a message back out from the wormhole, at least not without “going the long way,” which could happen only at the speed of light—so only simulated-Alice and simulated-Bob themselves could ever test this prediction. Nevertheless, if true, I suppose some would treat it as grounds for regarding a quantum simulation of SYK as “more real” or “more wormholey” than a classical simulation.

Of course, this scenario depends on strong assumptions not merely about quantum gravity, but also about the metaphysics of consciousness! And I’d still prefer to call it a simulated wormhole for simulated people.

For completeness, here’s Harlow’s passage from the NYT article:

Daniel Harlow, a physicist at M.I.T. who was not involved in the experiment, noted that the experiment was based on a model of quantum gravity that was so simple, and unrealistic, that it could just as well have been studied using a pencil and paper.

“So I’d say that this doesn’t teach us anything about quantum gravity that we didn’t already know,” Dr. Harlow wrote in an email. “On the other hand, I think it is exciting as a technical achievement, because if we can’t even do this (and until now we couldn’t), then simulating more interesting quantum gravity theories would CERTAINLY be off the table.” Developing computers big enough to do so might take 10 or 15 years, he added.

Alright, let’s move on to the claim that quantum supremacy has been refuted. What Aharonov et al. actually show in their new work, building on earlier work by Gao and Duan, is that Random Circuit Sampling, with a constant rate of noise per gate and no error-correction, can’t provide a scalable approach to quantum supremacy. Or more precisely: as the number of qubits n goes to infinity, and assuming you’re in the “anti-concentration regime” (which in practice probably means: the depth of your quantum circuit is at least ~log(n)), there’s a classical algorithm to approximately sample the quantum circuit’s output distribution in poly(n) time (albeit, not yet a practical algorithm).

Here’s what’s crucial to understand: this is 100% consistent with what those of us working on quantum supremacy had assumed since at least 2016! We knew that if you tried to scale Random Circuit Sampling to 200 or 500 or 1000 qubits, while you also increased the circuit depth proportionately, the signal-to-noise ratio would become exponentially small, meaning that your quantum speedup would disappear. That’s why, from the very beginning, we targeted the “practical” regime of 50-100 qubits: a regime where

1. you can still see explicitly that you’re exploiting a 2^50- or 2^100-dimensional Hilbert space for computational advantage, thereby confirming one of the main predictions of quantum computing theory, but
2. you also have a signal that (as it turned out) is large enough to see with heroic effort.
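The scaling above can be made concrete with a back-of-the-envelope estimate. Assume, as the experiments empirically found, that total circuit fidelity decays like the per-gate fidelity raised to the number of gates; the gate count and error rate below are illustrative round numbers, not the exact Sycamore parameters.

```python
# Why constant per-gate noise kills the signal at scale:
# signal ~ (per-gate fidelity)^(number of gates), so it shrinks
# exponentially as qubits (and hence gates) increase.

def xeb_signal(n_qubits, depth, gate_error=0.005):
    gates = n_qubits * depth          # rough total gate count
    return (1 - gate_error) ** gates  # expected linear-XEB signal

for n in (53, 100, 500, 1000):
    print(n, xeb_signal(n, depth=20))
```

At 53 qubits the estimated signal is a few parts in a thousand, painful but measurable with enough samples; at 1000 qubits it is below 10^-40, hopelessly buried in noise. That is the sense in which the 50-100 qubit regime was always the target.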

To their credit, Aharonov et al. explain all this perfectly clearly in their abstract and introduction. I’m just worried that others aren’t reading their paper as carefully as they should be!

So then, what’s the new advance in the Aharonov et al. paper? Well, there had been some hope that circuit depth ~log(n) might be a sweet spot, where an exponential quantum speedup might both exist and survive constant noise, even in the asymptotic limit of n→∞ qubits. Nothing in Google’s or USTC’s actual Random Circuit Sampling experiments depended on that hope, but it would’ve been nice if it were true. What Aharonov et al. have now done is to kill that hope, using powerful techniques involving summing over Feynman paths in the Pauli basis.

Stepping back, what is the current status of quantum supremacy based on Random Circuit Sampling? I would say it’s still standing, but more precariously than I’d like—underscoring the need for new and better quantum supremacy experiments. In more detail, Pan, Chen, and Zhang have shown how to simulate Google’s 53-qubit Sycamore chip classically, using what I estimated to be 100-1000X the electricity cost of running the quantum computer itself (including the dilution refrigerator!). Approaching the problem from a different angle, Gao et al. have given a polynomial-time classical algorithm for spoofing Google’s Linear Cross-Entropy Benchmark (LXEB)—but their algorithm can currently achieve only about 10% of the excess in LXEB that Google’s experiment found.

So, though it’s been under sustained attack from multiple directions these past few years, I’d say that the flag of quantum supremacy yet waves. The Extended Church-Turing Thesis is still on thin ice. The wormhole is still open. Wait … no … that’s not what I meant to write…

Note: With this post, as with future science posts, all off-topic comments will be ruthlessly left in moderation. Yes, even if the comments “create their own reality” full of anger and disappointment that I talked about what I talked about, instead of what the commenter wanted me to talk about. Even if merely refuting the comments would require me to give in and talk about their preferred topics after all. Please stop. This is a wormholes-‘n-supremacy post.

### 306 Responses to “Google’s Sycamore chip: no wormholes, no superfast classical simulation either”

1. Rand Says:

> Daniel Harlow’s argument to me was that, if you had two entangled quantum computers, one on Earth and the other in the Andromeda galaxy, and if they were both simulating SYK, and if Alice on Earth and Bob in Andromeda both uploaded their own brains into their respective quantum simulations, then he believes they could have the subjective experience of jumping into a wormhole and meeting each other in the middle—something that wouldn’t have been possible with a merely classical simulation.

Hey wait stop! Alice and Bob’s subjective experience would reflect one another’s? It sounds like we’re breaking no-signaling here – though perhaps we can’t get the information out of the computer? Still, that would be pretty exciting if true.

(I still have to read the paper, Quanta article, and Natalie Wolchover’s tweet thread defending it.)

2. SR Says:

I’m confused as to how Harlow’s argument would work. Quantum simulations must run on real-world physics, so in particular cannot entail faster-than-light information transfer. Simulated wormholes would however allow effective FTL communication by shortening geodesic distances between far-away points.

Unless I’m missing something, it seems like the only resolutions are (1) that you cannot simulate practically useful wormholes using QC or (2) that building a QC out of ordinary qubits lets you rewire the very geometry of space. It seems like (1) would be more plausible.

Sorry if I’m missing something…I don’t know anything about quantum gravity so would appreciate corrections from anyone who knows more.

3. Scott Says:

Rand #1 (and SR #2): Yeah, that’s exactly the point. You can’t get the information out of the wormhole, or not without “going around the long way,” which can only happen at the speed of light. Thus, the assertion that Alice and Bob would meet in the middle has a “metaphysical” character: it’s suggested by GR, but it could only ever be experimentally confirmed by Alice and Bob themselves, not by those of us on the outside.

4. SR Says:

Scott #3: Thanks for the explanation!

5. Alex Says:

Scott,

you write

“And yet, this is not a sensationalistic misunderstanding invented by journalists. Some prominent quantum gravity theorists themselves—including some of my close friends and collaborators—persist in talking about the simulated SYK wormhole as “actually being” a wormhole. What are they thinking?”

It doesn’t give me any pleasure to be the “I told you so” guy here, but… I told you so, Scott. More specifically:

https://scottaaronson.blog/?p=6457#comment-1939489

https://scottaaronson.blog/?p=6599#comment-1942272

Now, you have the entire field of QC being entangled in the worldwide press with actual creation of wormholes and other nonsense. Good luck in trying to clear up that mess in the lay public’s mind. Even more avoidable damage to the reputation and credibility of QC due to hype.

My question is, what were *you* thinking, Scott? You always seemed like the sensible guy to me. Evidently, the “QC adults” you mentioned in your first response to my concerns were not in the room when all of this was being cooked up… or maybe nobody really understands “the idea that you can construct spacetime out of entanglement in a holographic dual description” and that’s why all the confusion, unlike what you suggested to me in your other snarky response (“Once you understand it, you can’t un-understand it!”).

Anyway, I know no fundamental idea about your friends is going to change by whatever I could say. But I can tell you they will keep doing it, over and over again.

6. mls Says:

Just wanted to say thank you for the last three postings. All have been extremely interesting.

7. Corbin Says:

I think that, metaphysically, we have a Wigner’s-friend situation. After all, how do we ask Alice and Bob about their experience? We have to somehow measure their reported status from the quantum computers to which they were uploaded. Such measurements entangle Alice/Bob with the rest of the laboratory.

8. Scott Says:

Alex #5: This sort of thing—

(1) eliding the distinction between simulations and reality,
(2) making a huge deal over a small quantum computer calculating something even though we all knew perfectly well what the answer would be, and
(3) it all getting blown up even further by the press

—has been happening over and over for at least 15 years, so I didn’t exactly need you to warn me that it would keep happening! 🙂 Yes, I expect such things to continue, unless and until the incentives change. And as they do, I’ll continue to tell the truth as best I can!

Equally obviously, though, none of this constitutes an argument against the scientific merit of AdS/CFT itself, just like the tsunami of QC hype isn’t an argument against Shor’s algorithm. Munging everything together, and failing to separate out claims, would just be perpetuating the very practices one objects to when hypemeisters do it.

9. Will Says:

Hi Scott,

I think I must be missing something in your argument.

If “A foofs B” has a dual description “C blebs D”, and we establish that A does indeed foof B, would you agree that it is equally true to say that C blebs D?

If so, wouldn’t it be correct to say that this experiment has created a wormhole? It’s not a wormhole in our regular universe’s spacetime, but perhaps it’s a wormhole in some… where (? not exactly clear on this).

And from this, perhaps it follows why an equally-precise simulation on the classical computer wouldn’t create a wormhole in the same way? (This part seems dubious to me–I want to say that A foofing B is different from a simulation of A foofing B–after all, no matter how well you simulate a hurricane, nobody gets wet. But I’m wondering if this instinct is in conflict with my earlier claim that “A foofs B” is equally true as “C blebs D”. Hmm.. now that I think about it, maybe this is actually what you meant by “bring a wormhole into actual physical existence every time you sketch one with pen and paper.”)

10. Alex Says:

Scott #8,

Well, you are the one that sounded surprised about your friends in the first place, that’s why I quoted your paragraph. Otherwise, I wasn’t going to make any comment.

As for AdS/CFT, I wasn’t even commenting about its scientific merits, but about the cloud of noise that surrounds it and ends up, due to the very nature of that noise, producing these confusions. And, noise that is deliberately perpetuated by the practitioners themselves, as noted, again, in what you wrote. I saw that in QG before and more recently, from these very same people, in QC. That was all of my “warning”. Of course, in light of that, these recent hype developments in QC are hardly a surprise. I was just hoping that, knowing the tactics, maybe there was a chance to stop it.

Anyway, whatever.

11. LK2 Says:

I won’t comment on the wormhole BS. As a physicist I find the whole story just hilarious and a bad signal for how research is conducted today (with the aid of “improper” strategies..).
I have colleagues who simulated other physics systems (even nuclei) on the IBM QC using 5-7 qubits: all these calculations were possible by inverting a small matrix on a piece of paper.
On the QC they spent days and days to get something usable. We are still so far away from easily using these systems effectively, but it is exciting and promising. I’d like to think that for now it is like playing with ENIAC or something like that.

As for quantum advantage: my English is probably not good enough to get the message so I ask: Were you saying that:

1) quantum supremacy with nisq hardware was ALREADY expected to break down after a certain N?
2) How do you overcome the limit? Error correction or higher-fidelity? Or both !? Or what 😉 ?
3) Have the Kalai’s arguments any relevance in this limit?

Thank you very much for the whole great post!

12. Mateus Araújo Says:

I call bullshit on Harlow’s assertion that “it seems possible that the simulated Alice and Bob could have the experience of jumping into a wormhole and meeting each other in the middle”.

The dual description doesn’t matter, because it still boils down to an experiment with two distant quantum computers sharing an entangled state. Alice’s actions change precisely nothing at Bob’s side, and vice-versa. It doesn’t matter if these actions are uploading oneself into the quantum computer, you still can’t get any information faster than light. It doesn’t matter if this information is somehow accessible only to the uploaded self, there can’t be any information there to start with.

13. Joseph Shipman Says:

https://pubs.acs.org/doi/pdf/10.1021/acsenergylett.2c01969

This is being hyped as a great QC advance. The paper looks careful and thorough. Is it?

14. arbitrario Says:

Hi, long time reader, first time commenter.

I still don’t get how Harlow’s argument is supposed to work. Even granting the consciousness uploading (which I guess is a discussion for another time), Alice and Bob would “experience” the simulation of SYK in (simulated) “real” spacetime, while if I understood correctly the wormhole is in the emergent holographic spacetime. All their experiences should be perfectly describable just in terms of QM.

Supposing that (rather than a simulation) we have an effective physical system in the real world which follows the SYK Hamiltonian, and Alice and Bob have two pieces of this system. There still wouldn’t be an “actual” wormhole between them; it would just be an equivalent mathematical description.

Maybe I am missing something!

15. gentzen Says:

Daniel Harlow explained the thinking to me as follows (…). If you had two entangled quantum computers, one on Earth and the other in the Andromeda galaxy, …

If this experiment had consisted of a quantum simulation on “two entangled quantum computers,” then the hype would have been justified, even if only 9 qubits had been used from each quantum computer (i.e. 18 qubits in total).

Of course, I am not interested in an overhyped “entangled tardigrade”-like entanglement between quantum computers here, but in honest quantum entanglement, i.e. one which could be used for quantum teleportation. What I have in mind is an experiment with a source of entangled particles (probably spin-entangled photons), two quantum computers which are able to perform quantum computations using such “quantum inputs”, and some mechanism to “post-select” those computations on the two quantum computers which were actually entangled (by some sort of coincidence measurements).

Alex #5, It seems a bit much to me to excoriate Scott on his own blog for overhyped QC/QG results when he is debunking the hype in self-same post. Moreover, Scott was *the* *first* scientist quoted in mainstream press having done so. Sure, you can get after him for being friends with Lenny Susskind, but we don’t have to answer for our friends especially when he is explicitly calling them out on his own blog.

From my recollection Scott has always been of the – “I don’t know how I can help, but I’d be happy to try…” – when it comes to the whole It from Qubit business. And I haven’t seen him even once hyping the business or over even so much as getting over excited by it.

Alex, in short, maybe you should direct your (perhaps well-motivated) anger at the people actually making the hyped-up, beyond-the-pale claims rather than at those who are debunking them, just because the latter happen to be nice and friendly to them.

Scott,

“Nevertheless, if true, I suppose some would treat it as grounds for regarding a quantum simulation of SYK as “more real” than a classical simulation.”

Even granting the outlandish assumptions I would still grant those grounds as entirely specious. The whole point of the wormhole hype is generating the insipid claim in the public’s mind of wormholes in our *actual* 3+1 physical world. You don’t push your work in the NYT and Quanta magazine if you’re trying to sell your work to your quantum gravity peers. You push it to conjure in the general public’s mind ideas coming from sci-fi movies like Interstellar, etc. What’s missing in this work – even when granting outrageous assumptions – is any connection whatsoever to that 3+1 physical world. There is *nothing* about this work that suggests the so-called SYK “wormholes” have anything to do with the wormholes from general relativity in our 3+1 universe. Re-enacting the math of the SYK “wormholes” on a QC does nothing to change that.

John Baez said on Woit’s blog that this is like if:

“a kid scrawls a picture of an inside-out building and the headline blares

BREAKING NEWS: CHILD BUILDS TAJ MAHAL!”

But I think the offense is considerably worse than that. To me it is as if a baby scrawls a crayon picture of random lines and dots and the parent raves to the media – “my kid built the Taj Mahal!” – all the while neglecting to mention:

* it is a drawing
* it is inside out
* it is in a universe where the laws of perspective/drawing may be entirely unrelated to our own

How embarrassing for the authors if future research on actual gravity in our universe discovers that it does not and cannot contain wormholes of the ER variety. Or maybe they forgot that wormholes haven’t been discovered observationally and are still just a conjectured (highly contested?) solution of GR?

19. Anon Says:

I guess what confuses my very naive self about situations like this is that everyone I meet in academia (though not QC companies) working on QC agrees that hype like this wormhole business is bad for the field and swears they themselves would never do it. Yet somehow we get one of these papers every so often from very well established academic groups. Either I’m very bad at reading people, or I’m living under a rock and don’t interact with people who want hype – which do you think it is Scott?

20. 4gravitons Says:

Regarding the statement that there is nothing the experiment reveals that you couldn’t learn from a classical simulation, what about this section in the Quanta article:

“Surprisingly, despite the skeletal simplicity of their wormhole, the researchers detected a second signature of wormhole dynamics, a delicate pattern in the way information spread and un-spread among the qubits known as “size-winding.” They hadn’t trained their neural network to preserve this signal as it sparsified the SYK model, so the fact that size-winding shows up anyway is an experimental discovery about holography.

“We didn’t demand anything about this size-winding property, but we found that it just popped out,” Jafferis said. This “confirmed the robustness” of the holographic duality, he said. “Make one [property] appear, then you get all the rest, which is a kind of evidence that this gravitational picture is the correct one.””

This isn’t saying that there is a property which *could not* be detected in the classical simulation, but it at least seems to be a property that *was not* detected in the classical simulation, right? (Or did they see it first there? I haven’t read the actual scientific paper yet.)

And this is kind of nontrivial as a “thing learned about quantum gravity”, and not just for technical reasons. It’s nontrivial because AdS/CFT is already a conjecture. It’s a conjecture in the well-tested versions, but an even bigger conjecture is that it is in some sense “generic”: that there are lots of “CFT-ish” systems with “AdS-ish” duals. In this case, the system they were testing wasn’t N=4 SYM, or even SYK, but a weird truncation of SYK. The idea that “weird truncation of SYK” has a gravity dual is something the “maximal AdS/CFT” folks would expect, but that could not have been reliably predicted. And it sounds like, from those paragraphs, that the people who constructed this system were trying to get one particular gravity-ish trait out of it, but didn’t expect this other one. That very much sounds like evidence that “maximal AdS/CFT” covers this case, in a way that people didn’t expect it to, and thus like a nontrivial thing learned about quantum gravity.

21. Scott Says:

Will #9:

Hmm.. now that I think about it, maybe this is actually what you meant by “bring a wormhole into actual physical existence every time you sketch one with pen and paper.”

Yup! 🙂

22. Scott Says:

LK2 #11:

As for quantum advantage: my English is probably not good enough to get the message so I ask: Were you saying that:

1) quantum supremacy with nisq hardware was ALREADY expected to break down after a certain N?

Yes.

2) How do you overcome the limit? Error correction or higher-fidelity? Or both !? Or what 😉 ?

Yes. Higher fidelity, which in turn lets you do error-correction, which in turn lets you simulate arbitrarily high fidelities. We’ve understood this since 1996.

3) Have the Kalai’s arguments any relevance in this limit?

Gil Kalai believes that (what I would call) conspiratorially-correlated noise will come in and violate the assumptions of the fault-tolerance theorem, and thereby prevent quantum error-correction from working even in principle. So far, I see zero evidence that he’s right. Certainly, Google’s and USTC’s Random Circuit Sampling experiments saw no sign at all of the sort of correlated noise that Gil predicts: in those experiments, the total circuit fidelity simply decayed like the gate fidelity, raised to the power of the number of gates. If that continues to hold, then quantum error-correction will “””merely””” be a matter of more and better engineering.

23. Scott Says:

Joseph Shipman #13: I just looked at that paper. On its face, it seems to commit the Original Sin of Sloppy QC Research: namely, comparing only to brute-force classical enumeration (!), and never once even asking the question, let alone answering it, of whether the QC is giving them any speedup compared to a good classical algorithm. (Unless I missed it!)

There’s a tremendous amount of technical detail about the new material they discovered, and that part might indeed be interesting. But none of it bears even slightly on the question you and I care about: namely, did quantum annealing provide any advantage whatsoever in discovering this material, compared to what they could’ve done with (e.g.) classical simulated annealing alone?

24. Scott Says:

arbitrario #14: I should really let Harlow, or better yet one of the “true believers” in this, answer your question!

To me, though, it’s really a question of how seriously you take the ER=EPR conjecture. Do you accept that two entangled black holes will be connected by a wormhole, and that Alice and Bob could physically “meet in the middle” of that wormhole (even though they couldn’t tell anyone else)? If so, then I suppose I see how you get from there to the conjecture that two entangled simulated black holes should be connected by a simulated wormhole, and that a simulated Alice and Bob could meet in the middle of it, again without being able to tell anyone (even though now it’s metaphysically harder to pin down what’s even being claimed!).

25. Still Figuring It Out Says:

Please correct me if I got anything wrong, but it sounds like we could do the Alice and Bob wormhole experiment in the near future. We just need to replace uploaded-brain Alice and Bob with much simpler probes (perhaps short text strings or simple computer programs?), and then extract their signal from the wormhole by “going the long way” with the speed of light, which shouldn’t take that long if both quantum computers are here on Earth.

Then again, entangling the quantum computers seems like a large technical bottleneck, and once we solve that, how is this experiment any different (apart from scale) from entangled photon experiments in quantum teleportation?

26. Scott Says:

Anon #19: I think what’s going on is that there’s a continuum of QC-hype-friendliness, from people who dismiss even the most serious QC research there is, to people who literally expect Google’s lab to fall into a wormhole or something. 🙂

At each point on the continuum, people get annoyed by the atrocious hype on one side, and also annoyed by the narrow-minded sticklers on their other side. As for whether the point where I sit is a reasonable one … well, that’s for you to decide!

27. Mark S. Says:

Scott – regarding the wormhole-in-a-lab, what would have been your sentiment about the old crummy BB84 device from the late-80’s developed by Bennett, Brassard, Smolin and friends at Yorktown Heights that could send “secret” messages a whopping distance of 32.5 cm? The turning of the Pockels cells famously made different sounds depending on the basis of the photons that Eve could have listened to instead of having to measure the qubits directly.

In 1989 Deutsch stated that the experimentalists “have created the first information processing device with capabilities that exceed those of the Universal Turing Machine.” We know so much more now and would definitely not describe the experiment as extending beyond Turing, but at least Deutsch posited that something *different* was happening in the lab at Yorktown Heights in the late 80’s than what happened at the beach in Puerto Rico where Bennett and Brassard first met to talk about what would become BB84 in the early 80’s.

28. Remarkable: “Limitations of Linear Cross-Entropy as a Measure for Quantum Advantage,” by Xun Gao, Marcin Kalinowski, Chi-Ning Chou, Mikhail D. Lukin, Boaz Barak, and Soonwon Choi | Combinatorics and more Says:

[…] There is a nice blog post by Scott Aaronson on several Sycamore matters.  On quantum supremacy Scott expresses an optimistic […]

29. Johnny D Says:

Everything that is run on 9 or 25 qubits of Google’s qpu can be run on a classical computer. That doesn’t diminish the need to demonstrate it on the qpu. QC has only just begun to explore complex quantum states. It is awesome that Google verifies that algorithms that illuminate very ‘quantum’ behaviors work on real quantum systems. What we learned was that qm works for these types of complex states.

I watched the Quanta YouTube video. They spoke of the algorithm being of a traversable wormhole. If I understand traversable wormholes, they have no horizons, but they are not a shortcut; they are the long way. It is great that a team first used ai to construct a qcircuit to illustrate this effect in a minimal way and that the circuit worked.

I am very excited for the day when Google will have hardware below the threshold theorem’s conditions and we do learn awesome stuff about qm that we don’t know!!!

30. OhMyGoodness Says:

“Yes, I expect such things to continue, unless and until the incentives change. ”

How would you change the incentives?

John Q Public has a great thirst for doom forecasts and the latest zany results from the crazy world of quantum mechanics, and substituting simulation for the subject of simulation is a grand way to provide that.

There was reliance on the personal integrity of scientists even to the gallows, but as the volume of boiler-housed data, irreproducible results, over-the-moon hype, etc. ever increases, it might be reasonable to re-examine that premise. As I have stated before, you display personal integrity in the best tradition of science, but many of your colleagues raise more than a reasonable doubt.

31. Gil Kalai Says:

“Gao et al. have given a polynomial-time classical algorithm for spoofing Google’s Linear Cross-Entropy Benchmark (LXEB)—but their algorithm can currently achieve only about 10% of the excess in LXEB that Google’s experiment found.”

Gao et al.’s main observation is that when you apply depolarization noise on several gates, there is still some correlation between the noisy samples and the ideal distribution (hence large LXEB), and they apply this observation to split the circuit into two distinct parts (which roughly allows computations separately on these parts). I would expect that you can get 100% (or more) of Google’s LXEB as follows: apply the noise only to some of the gates on the “boundary”, get substantially larger correlation, and make sure that the resulting noisy circuit still has a quick (perhaps a bit slower) classical algorithm. (This is related to the “patch” and “elided” circuits in Google’s 2019 paper. In the patch circuits all boundary edges are deleted, and in the elided circuits only some of them are.)
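The underlying observation here—that a partially depolarized sampler retains a fraction of the LXEB score proportional to its remaining "ideal" weight—can be checked numerically. The sketch below is a toy model only, with a random Porter–Thomas-shaped distribution standing in for a real circuit's output distribution; it is not the Gao et al. algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2 ** 12                        # dimension; stand-in for 2^n bitstrings
p_ideal = rng.exponential(size=N)  # Porter–Thomas-shaped "ideal" distribution
p_ideal /= p_ideal.sum()

def lxeb(samples, p_ideal, N):
    """Linear XEB: N times the mean ideal-probability of the samples, minus 1.
    Uniform samples score ~0; perfect ideal samples score ~1 (Porter–Thomas)."""
    return N * p_ideal[samples].mean() - 1

def sample_mixture(phi, num_samples):
    """Draw from phi * p_ideal + (1 - phi) * uniform: a depolarized sampler
    that keeps only a phi fraction of the ideal signal."""
    ideal_draws = rng.choice(N, size=num_samples, p=p_ideal)
    uniform_draws = rng.integers(N, size=num_samples)
    use_ideal = rng.random(num_samples) < phi
    return np.where(use_ideal, ideal_draws, uniform_draws)

for phi in (1.0, 0.5, 0.1):
    s = sample_mixture(phi, 200_000)
    print(phi, round(lxeb(s, p_ideal, N), 3))  # LXEB tracks phi
```

In this toy model the LXEB score comes out close to phi: even a sampler that is 90% uniform noise still registers a clearly nonzero score, which is the sense in which depolarized samples "still have some correlation with the ideal distribution."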

32. LK2 Says:

Scott #22:

thank you very much!
It seems time for me to look seriously into the fault-tolerance theorem.

33. Mitchell Porter Says:

In the original ER=EPR paper (arXiv:1306.0533), Maldacena and Susskind write:

“It is very tempting to think that *any* EPR correlated system is connected by some sort of ER bridge, although in general the bridge may be a highly quantum object that is yet to be independently defined. Indeed, we speculate that even the simple singlet state of two spins is connected by a (very quantum) bridge of this type.”

To me, this is saying there is a continuum between the concept of wormhole and the concept of entanglement. All wormholes, quantum mechanically, are built from entanglement, and all entanglement corresponds to a kind of wormhole. (And presumably, e.g. all quantum teleportation lies on a continuum with traversable wormholes, etc.)

I find this an attractive and plausible hypothesis, and well-worked out in the case of entangled black holes, but still extremely vague at the other end of the continuum. What is the geometric dual of the two-spin singlet state? Does it depend on microphysical details – i.e. the dual is different if a Bell state arises in standard model physics, versus if it arises in some other possible world of string theory? Or is there some geometric property that is universally present in duals of Bell states?

As far as I know, there are still no precise answers to questions like these.

34. Scott Says:

4gravitons #20: To the extent they learned something new, it seems clear from the description that they learned it from the process of designing the compressed 9-qubit circuit, and not at all from the actual running of the circuit. By the time they’d put the latter on the actual QC, they’d already learned whatever they could.

35. Shion Arita Says:

@Scott #24

But if Alice and Bob are uploaded into physically distant parts of a quantum computer, if they are able to meet, even ‘encrypted’ inside the wormhole, even if they can only get out the long way, there must be some kind of long-range causal connection, since Alice and Bob started out far apart, which would necessitate there existing something like a wormhole in actual physical reality to enable that.

In fact, something like this (with simple particles playing the role of Alice and Bob, across more feasible distances), might actually be a good way to experimentally test ER=EPR.

36. bystander Says:

Kudos for calling the hype what it is. Regarding your friends that do the damage, recall the advice: “Lord, protect me from my friends; I can take care of my enemies.”

37. Alex Says:

That’s all very fair. I don’t consider Scott to be responsible by any means for this stunt. And I salute his strong rebuttals.

What happened was that I just read that paragraph I quoted and that made me recall those previous brief conversations in this blog, and then I just felt a bit frustrated at his expressed frustration with his friends.

He can befriend whoever he wants, of course, and be involved in whatever research lines he likes.

Personally, my choices would obviously be very different, since I don’t respect those people. And, indeed, I can assure you that my anger goes to them, not to Scott.

But, going back to the problem of hype, I think more needs to be done. This type of hype is new: QG inside a QC. Both fields already had a difficult time with hype of their own, but this new combination is particularly worrying and dangerous; the levels of surrealism are jaw-dropping even by the standards of hype in these fields.

38. Christopher Says:

> This is going to be one of the many Shtetl-Optimized posts that I didn’t feel like writing, but was given no choice but to write.

Then why didn’t you just have ChatGPT write it? XD

39. Scott Says:

bystander #36, Alex #37, etc.: Like, imagine you had a box that regularly spit out solid gold bars and futuristic microprocessors, but also sometimes belched out dense smoke that you needed to spend time cleaning up. It would still be a pretty great box, right? You’d still be lucky to have it, no?

Lenny Susskind has contributed far more to how we think about quantum gravity — to how even his critics think about quantum gravity — than all of us in this comment thread combined. If his friends sometimes have to … tone things down slightly when he gets a little too carried away with an idea, that’s a price worth paying many times over.

40. Scott Says:

Shion Arita #35:

In fact, something like this (with simple particles playing the role of Alice and Bob, across more feasible distances), might actually be a good way to experimentally test ER=EPR.

But if (as we said) you can’t get a message out from the simulated wormhole, if all the experiments that an external observer can do just yield the standard result predicted by conventional QM, then what exactly would the experimental test be?

41. Scott Says:

Gil Kalai #31: Yeah, I also think it’s plausible (though not certain) that the LXEB score achieved by Gao et al.’s spoofing algorithm can be greatly improved. Keep in mind, though, that the experimental target that algorithm is trying to hit will hopefully also be moving!

42. Scott Says:

OhMyGoodness #30:

How would you change the incentives?

I’m not sure, but I’d guess the strong, nearly-unanimous pushback that the “literal wormhole” claim has gotten in the science blogosphere has already changed the incentives, at least marginally!

43. Alex Fischer Says:

Do you think that the noisy random circuit sampling result (or ideas from it) can be used to get a polynomial-time algorithm for noisy BosonSampling? Based on my first reading, it seems not immediately applicable to BosonSampling, but the ideas could perhaps be transferred.

44. Scott Says:

Still Figuring It Out #25:

Please correct me if I got anything wrong, but it sounds like we could do the Alice and Bob wormhole experiment in the near future. We just need to replace uploaded-brain Alice and Bob with much simpler probes (perhaps short text strings or simple computer programs?), and then extract their signal from the wormhole by “going the long way” with the speed of light, which shouldn’t take that long if both quantum computers are here on Earth.

Then again, entangling the quantum computers seems like a large technical bottleneck, and once we solve that, how is this experiment any different (apart from scale) from entangled photon experiments in quantum teleportation?

Indeed, see my comment #40. Even after your proposed experiment was done, a skeptic could say that all we did was to confirm QM yet again, by running one more quantum circuit on some qubits and then measuring the output. Nothing would force the skeptic to adopt the wormhole language: that language exists, but the whole point of duality is that it doesn’t lead to any predictions different from the usual quantum-mechanical ones. See the difficulty? 🙂

45. Scott Says:

Alex Fischer #43: Excellent question! While the many differences between RCS and BosonSampling make it hard to do a direct comparison, some might argue that the “analogous” classical simulation results for noisy BosonSampling were already known. See for example this 2018 paper by Renema, Shchesnovich, and Garcia-Patron, which deals with a constant fraction of lost photons, with some scaling depending on the fraction.

In any case, whatever loopholes remain in the rigorous results, I’ve personally never held out much hope that noisy BosonSampling would lead to quantum supremacy in the asymptotic limit—just like I haven’t for RCS and for similar reasons. At least in the era before error-correction, the goal, with both BosonSampling and RCS, has always been what Aharonov et al. are now calling “practical quantum supremacy.”

46. Scott Says:

Mark S. #27:

Scott – regarding the wormhole-in-a-lab, what would have been your sentiment about the old crummy BB84 device from the late-80’s developed by Bennett, Brassard, Smolin and friends at Yorktown Heights that could send “secret” messages a whopping distance of 32.5 cm? … In 1989 Deutsch stated that the experimentalists “have created the first information processing device with capabilities that exceed those of the Universal Turing Machine.”

While I was only 8 years old in 1989, I believe I would’ve pushed back against Deutsch’s claim … the evidence being that I’d still do so today! 🙂

In particular, while you and I know perfectly well that this isn’t what Deutsch meant, his statement (which I confirmed here by googling) is practically begging for some journalist to misconstrue it as “IBM’s QKD device violates the Church-Turing Thesis; it does something noncomputable; it can’t even be simulated by a Turing machine.” And in this business, I believe we ought to hold ourselves to a higher standard than

(1) knowing the truth, and
(2) saying things that in our and other experts’ minds convey the truth.

We should also

(3) anticipate the exciting false things that non-experts will likely take us to mean, and rule them out!

Having said that, Deutsch’s statement was on much firmer ground than the wormhole claim in at least one way. Namely, the IBM device’s ability to perform QKD—an information-processing protocol impossible in a classical universe—was ultimately a matter of observation. There was no analogue there to the metaphysical question of whether a computational simulation of a wormhole “actually brings a wormhole into physical existence.”

47. manorba Says:

so the next step is E. Musk asking for public funding to build a network of teleportation stations, is that it?

Scott,

Could you explain a little more about the hope for the sweet spot with “circuit depth ~log(n)” … Now that you know this hope is dashed what does it mean for future experiments? Does this alter the trajectory of what Google will attempt next or the ability to scale up to higher qubits? Basically, I’m wondering what effect this has on the ongoing fight to prove quantum supremacy beyond all doubters…

49. James Cross Says:

“if someone simulates a black hole on their classical computer, they don’t say they thereby “created a black hole.” Or if they do, journalists don’t uncritically repeat the claim”.

“I will die on the following hill: that once you understand the universality of computation, and how a biological neural network is just one particular way to organize a computation, the obvious, default, conservative thesis is then that any physical system anywhere in the universe that did the same computations as the brain would share whatever interesting properties the brain has: intelligent behavior (almost by definition), but presumably also sentience”.

So the rules for sentience and black holes/worm holes are different?

50. Gil Kalai Says:

(LK2 #11, Scott #22) Hi everybody,

1) The crucial ingredient of my argument for why quantum fault-tolerance is impossible deals with the rate of noise. My argument asserts that it will not be possible to reduce the rate of noise to the level that allows the creation of good-quality quantum error-correcting codes. This is based on analyzing the (primitive, classical) computational complexity class that describes NISQ computations (with fixed error rate). The failure, in principle, of quantum computation is also related to the strong noise-sensitivity of NISQ computations at subconstant error rates.

2) Error correlation is not part of my current argument and I don’t believe in “conspiratorially-correlated noise that will come in and violate the assumptions of the fault-tolerance theorem, and thereby prevent quantum error-correction from working even in principle.” (Again, my argument is for why efforts to reduce the rate of noise will hit a wall, even in principle.)  I did study various aspects of correlated errors (mainly, before 2012) and error-correlation is still part of the picture: namely, without quantum fault-tolerance the type of errors you assume for gated entangled qubits will be manifested also for entangled qubits where the entanglement was obtained indirectly through the computation. (As far as I can see, this specific prediction is not directly related to any finding of the Google 2019 experiment.)

3) Actually, some of my earlier results showed that exotic forms of error correlation will allow quantum fault-tolerance (a 2012 paper with Kuperberg) and earlier, in 2006, I showed that if the error rate is small enough, no matter what diabolic correlations exist, log-depth quantum computation (and, in particular, Shor’s algorithm) still prevails.

4) Let me note that under the assumption of fixed-rate readout errors, my simple (Fourier-based) 2014 algorithm with Guy Kindler (for boson sampling) applies to random circuit sampling and shows that approximate sampling can be achieved by low-degree polynomials; hence random circuit sampling represents a very low-level computational subclass of P. I don’t know if this conclusion applies to the model from Aharonov et al.’s recent paper, and this is an interesting question. (Aharonov et al.’s noise model is based on an arbitrarily small constant amount of depolarizing noise applied to each qubit at each step.) Aharonov et al.’s algorithm also seems related to Fourier expansion.

5) Scott highlights the importance of the remarkable agreement in the Google 2019 experiment between the total circuit fidelity and the gate fidelity raised to the power of the number of gates, and refers to it as “the single most important result we learned scientifically from these experiments”. This is related to another aspect of my own current interest regarding a careful examination of the Google 2019 experiment, and it goes back, for example, to an interesting videotaped discussion regarding the evaluation of the Google claims in which both Scott and I participated in Dec. 2019. As some of you may remember, I view the excessive agreement between the LXEB fidelity and the product of the individual fidelities as a reason for concern regarding the reliability of the 2019 experiment.

51. Shion Arita Says:

@scott#40

Thought about it for a bit, and came up with this:

-Design a system that’s simple enough that it’s practically possible (though it probably will be difficult) to brute force ‘decrypt’ what happened inside the wormhole part of the simulation. This should work because it’s not actually ‘impossible’ to decrypt the wormhole–just exponentially difficult. So maybe there’s some kind of toy system that is complicated enough to do what we want but simple enough to be decryptable.

-Also have the computation generate some kind of hash that’s sensitive to everything that happens in it. The reason for this will become apparent later. (Or rather, each side of the wormhole will create its own hash, but this is a bit of a non-central detail.)

-Simulate this on a classical computer.

-Brute force decrypt the computation and verify that Alice and Bob had interaction inside the ‘wormhole’. Note that this does not necessitate a long-range causal connection in this case, since it’s just using large amounts of classical resources to locally simulate what the effect of a long-range causal connection would be. I guess you could also run it on a quantum computer without separated parts.

-Now run the computation on the entangled separated quantum computers, and see if each side’s hash matches the classical case. If it does, the same computation occurred. But since the parts were separated by a great distance, this means that there was some kind of long-range influence. This test should work because the hash allows us to see whether the encrypted data is the same or different, without decrypting it. You’d still have to bring both hashes together the long way to verify, but if they match it does in fact mean that the computation on the Alice side influenced the computation on Bob’s side.
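The comparison step of this proposal—checking that two runs produced identical transcripts without inspecting (or "decrypting") their contents—is standard hashing. Below is a toy sketch in which a deterministic pseudorandom transcript stands in for both runs; `run_simulation` is a hypothetical placeholder, not any real wormhole simulation:

```python
import hashlib

def run_simulation(seed: int) -> bytes:
    # Stand-in for the transcript of one run of the simulation; here a
    # 64-bit linear congruential generator plays that role. In the real
    # proposal this would be each quantum computer's output data.
    state = seed
    out = []
    for _ in range(1000):
        state = (state * 6364136223846793005 + 1442695040888963407) % 2**64
        out.append(state)
    return b"".join(x.to_bytes(8, "little") for x in out)

def transcript_hash(data: bytes) -> str:
    """Fixed-size fingerprint: equal hashes imply equal transcripts,
    without anyone having to read the transcripts themselves."""
    return hashlib.sha256(data).hexdigest()

classical = transcript_hash(run_simulation(42))  # classical reference run
quantum = transcript_hash(run_simulation(42))    # stand-in for the QC run
print(classical == quantum)  # → True: the two computations agreed
```

Of course, as Scott's reply below the original proposal notes, the hard part is not the hashing but saying what a hash mismatch or match would actually tell you about spacetime.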

52. OhMyGoodness Says:

manorba#47

His boring machines will be upgraded to tunnel through space time.

53. It’s a simulation, stupid – The nth Root Says:

[…] The second claim is that Sycamore’s pretensions to quantum supremacy have been refuted. The latter claim is based on this recent preprint by Dorit Aharonov, Xun Gao, Zeph Landau, Yunchao Liu, and Umesh Vazirani. No one—least of all me!—doubts that these authors have proved a strong new technical result, solving a significant open problem in the theory of noisy random circuit sampling. On the other hand, it might be less obvious how to interpret their result and put it in context. See also a YouTube video of Yunchao speaking about the new result at this week’s Simons Institute Quantum Colloquium, and of a panel discussion afterwards, where Yunchao, Umesh Vazirani, Adam Bouland, Sergio Boixo, and your humble blogger discuss what it means. … (Shtetl-Optimized / Scott Aaronson) […]

54. Johnny D Says:

A giant fault tolerant qc will give control of unitary development and collapse. In this setting we know qm. What will we learn about qm? Nothing, we know it already.

It may simulate molecules and condensed matter well enough to give advances in those fields.

It may factor large numbers or do other calculations better than classical computers.

It won’t answer qg questions for all the same reasons humans can’t. No data!!! It will be able to simulate various holographic models, but it can never give evidence of the truth of the holography.

Of course it is also possible that nature is such that fault tolerance cannot be achieved. Systems with many dof collapse (or if you prefer get stuck in a branch). Is there any known system of large dof that doesn’t collapse?

In qg, unitary evolution is often assumed for large dof systems. I can never believe this. It leads to awesome math but physical reality, I don’t think so. It may be important for qm to have consistent unitary evolution even if it cannot be achieved.

An observer collecting Hawking radiation would certainly collapse the wave function. I don’t believe that the other branches which are unreachable to the observer can affect the gr metric that the observer lives in. I think ER=EPR can be true only in an unrealistic situation where unitary evolution is preserved, i.e., no observers. A fault tolerant qc could evolve a holographic version of this. That may be interpreted as an argument against believing in fault tolerance???

55. Mark S. Says:

Scott #46 – thanks! And thanks for considering my hypothetical. I can definitely get behind your third bullet. I think that’s a fair burden to place on the media and the scientists who communicate with them.

But, perhaps in relation to the first two, the author of the Quanta article has pointed to a talk led by Harlow on hardware vs. software at the 2017 IFQ Conference. Harlow’s contention, as I interpreted it, was that it’s fair to be “hardware ambivalent” about certain topics in physics, and that it’s wrong to state that certain things having certain properties are only *real* if a lump of material realizes the properties, in contrast to software running on a (quantum) computer to characterize that material and those properties. At least immediately during the conversation at the conference, you and others appeared receptive to such a position (positions might have changed or been more refined afterwards, of course).

As an example I’ve heard that the toric code is essentially “the same” as a topological quantum computer. I think this question was initially posited by someone else, but – if a transmon or ion-trap quantum computer were to successfully implement a toric code, could you defend a NYT or Quanta headline such as “Physicists use quantum computers to create new phase of matter”?

56. Scott Says:

Could you explain a little more about the hope for the sweet spot with “circuit depth ~log(n)” … Now that you know this hope is dashed what does it mean for future experiments?

OK good question. What’s special about depth log(n) is that

(1) the quantum state is sufficiently “scrambled” to get “anti-concentration” (a statistical property used in the analysis of these experiments), but

(2) even if every gate is subject to constant noise, the output distribution still has Ω(1/exp(log(n))) = Ω(1/poly(n)) variation distance from the uniform distribution, raising the prospect that a signal could be detected with only polynomially many samples.

That’s why it briefly looked like a “sweet spot,” potentially attractive for future experiments — though as far as I know, no experimental groups had actually made any concrete plans based on this idea (though of course it’s hard to say, since once you’re doing an experiment your depth is some actual number, not “O(log(n))” or whatever). Anyway, the implication is that we can now tell the experimentalists “never mind,” and not to waste their time on the large-n, “logarithmic-depth” RCS regime! 🙂
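The arithmetic behind the dashed "sweet spot" can be sketched under the simplifying assumption (an assumption of this sketch, not a claim from the papers) that constant per-layer noise shrinks the output distribution's deviation from uniform by a constant factor per layer, so the detectable signal decays like exp(−γ·depth). At depth log(n) the signal is only polynomially small, and since the number of samples needed to detect a signal s scales like 1/s², that is a polynomial sample cost; at depth n the signal, and hence any hope of detection, is exponentially gone:

```python
import math

def signal_vs_uniform(depth: float, gamma: float = 1.0) -> float:
    """Toy model: deviation from the uniform distribution shrinks by a
    constant factor per noisy layer, i.e. signal = exp(-gamma * depth)."""
    return math.exp(-gamma * depth)

for n in (10, 100, 1000):
    d_log = math.log(n)
    print(n,
          signal_vs_uniform(d_log),  # ~1/n: poly(n) samples suffice
          signal_vs_uniform(n))      # exp(-n): hopeless at linear depth
```

With γ = 1 the log-depth signal is exactly 1/n, so ~n² samples would detect it; that was the hoped-for regime that the Aharonov et al. result now rules out as a route to supremacy.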

57. ilyas Says:

Scott. Yet again a careful, measured and factual clarification. This time much needed, and this time with a tinge of sadness that this could even be reported. I don’t know if the researchers or the people involved on the QC were given a chance to comment, but your point about how this further creates needless confusion is spot on. I won’t comment on the word hype (or as Peter elegantly describes it, “publicity stunt”) since no comment is required. Thank you yet again.

58. Scott Says:

James Cross #49: What an amusing juxtaposition! Here’s what I’d say:

We agree, presumably, that a simulated hurricane doesn’t make anyone wet … at least, anyone in our world.

By contrast, simulated multiplication just is multiplication.

Is simulated consciousness consciousness? That’s one of the most profound questions ever asked. Like I said in the passage you quoted, it seems to me that the burden is firmly on anyone who says that it isn’t, to articulate the physical property of brains that makes them conscious and that isn’t shared by a mere computer simulation.

At the least, though, we’d like a simulated consciousness to be impressive: to pass the Turing Test and so on. No one is moved by claims that a 2-line program to reverse an input string manifests consciousness.

Now we’re faced by a new question of this kind: is a simulated wormhole a wormhole? Or, maybe it’s only a wormhole for simulated people in a simulated spacetime?

However you answer that imponderable, my point was that a crude 9-qubit simulation of an SYK wormhole isn’t obviously much more impressive, conceptually or technologically, than a “simulated wormhole” that consists of a wormhole that you’d sketch with pen and paper. Or not today, anyway, when running ~9-qubit, ~100-gate circuits and confirming that they behave as expected has become routine.

Hope that clarifies!

59. Scott Says:

Shion Arita #51: While I didn’t follow every detail of your proposal, the question, once again, is how you’d answer a skeptic who says that you’re just doing a bunch of computations that give an answer that you yourself already knew in advance, computations that couldn’t possibly have given a different answer, rather than doing an experiment that could tell you anything about the possibility or impossibility of wormholes in our “base-level” physical spacetime.

After all, if the claim just boils down to “dude … if we created an entire simulated universe with wormholes, then that universe would have wormholes in it!”, then you didn’t need PhD physicists to help establish the claim, did you? 🙂

60. James Cross Says:

#58 Scott

Thanks for indulging me!

I think that sentience, like a black hole (and presumably a wormhole), is a natural phenomenon that can be described and simulated with computations, but the abstract-model simulations in neither case are the same as the phenomenon.

However, there are simulations that are different from abstract models — concrete simulations that are realized in physical materials, for example wind tunnels and wave pools, which are scale models that may have characteristics making them nearly identical to the actual phenomenon they model.

So the question about a quantum computer wormhole would be whether it is merely an abstract model or a concrete one. From what I can gather, it is a purely abstract model in this research.

61. Michael Janou Glaeser Says:

Scott #39
“Lenny Susskind has contributed far more to how we think about quantum gravity — to how even his critics think about quantum gravity — than all of us in this comment thread combined.”

It is interesting that you wrote “to how we think about quantum gravity” instead of “to quantum gravity”. Susskind’s contributions to really understanding quantum gravity have been null, and his contributions to how (some) people think about quantum gravity have been very negative to the field: he has made people lose a lot of time with each idea he has proposed, none of which ended up quite working (and some of which, particularly the latest ones like QM=GR are sheer nonsense). His modus operandi of overhyping everything he does is not a bug but a feature: without it, his influence would have probably been much smaller.

So a better analogy would be with a box that produces shining plastic bars, but misleads some people into thinking that they’re solid golden bars, and also produces futuristic-looking microprocessors which however don’t work, and on top of that produces dense smoke as in your example. That’s Lenny Susskind as far as QG is concerned.

62. Scott Says:

Mark S. #55: I confess I don’t remember what I said at Daniel’s talk in 2017—I haven’t watched the video, and my memory is blurred by the birth of my son literally 2 days later.

The way I’d put it now, though, is that given any claim to have created some new physical entity in the lab—a nonabelian anyon, a Bose-Einstein condensate, a wormhole, whatever—there are several crucial questions we should ask:

(1) What, if anything, did we learn from studying this entity that we weren’t already certain about, e.g. from theory or calculations on ordinary laptops?

(2) What, if anything, could we learn from studying it in the future?

(3) What, if anything, can the new entity be used for (including in future physics experiments)?

(4) How difficult a technological feat was it to produce the new entity? Where does it fall on the continuum between “PR stunt, anyone could’ve done it” and “genuine, huge advance in experimental technique”?

(5) Whatever the new entity is called (“wormhole,” “anyon,” etc.), to what extent is it the thing people originally meant by that term, and to what extent is it a mockup or simulation of the thing in a different physical substrate?

(6) In talking about their experiment, how clear and forthright have the researchers been in answering questions (1)-(5)—and in proactively warning the public away from exciting but false answers?

My personal feeling is that, judged by the above criteria in their entirety, not leaning exclusively on any one of them, the wormhole work gets maybe a D. Not an F, but a D. So, that’s why I’ve been negative about it, if not quite as negative as some others on the blogosphere!

63. Scott Says:

Michael Janou Glaeser #61: Who, if anyone, in your judgment has made genuine contributions to understanding quantum gravity?

64. maline Says:

I also don’t see how this wormhole trick evades no-signaling. Alice writes down a message, encodes the paper into her simulation, and drops it into the wormhole. Bob encodes his brain into his own simulation, and experiences jumping into the wormhole, finding and reading Alice’s message, and then getting crushed into the singularity. How do we escape the conclusion that “the simulation of Bob receiving the message” is a property – and yes, an observable – of Bob’s quantum computer?

65. Shion Arita Says:

@scott#59

Disclaimer: I’m kind of a skeptic too, to be honest, and maybe I am misunderstanding Harlow’s proposed experiment. My understanding is that somehow the entangled nature of the quantum computers will allow Simulated Alice to end up meeting Simulated Bob in the Simulated wormhole. But since the computers are far apart, in order for Alice’s and Bob’s data to meet, there would have to be something like a real wormhole connecting them, since what we’re calling ‘Alice’ and ‘Bob’ are actually physical states of the computers.

The point is that, if that kind of long-range wormhole-like influence isn’t possible in the real world, the computation will have different results when it’s run on the distant entangled quantum computers. Something will go wrong, and Alice and Bob won’t actually be able to meet. Thus a different hash.

The point is, if we simulate a universe with wormholes in it, yeah, it’ll have wormholes in it. But if we try to implement the simulation in such a way that it would rely on something like wormholes actually happening in the real world, if it’s not possible in the real world, it can’t work. So the computation in that case can’t have the same outcome.

66. Scott Says:

maline #64 and Shion Arita #65: At some point we’re going to go around in circles, but—from an external observer’s perspective, both a “real” wormhole (formed, say, from two entangled black holes) and a “simulated” wormhole (formed, say, from two entangled quantum computers) are just some quantum systems that obey both unitarity and no superluminal signaling. No story about “Alice and Bob meeting in the middle” can ever do anything at all to change the predictions that that external observer would make by applying standard QM. All statements about “meeting in the middle” have empirical significance, at most, for the infalling observers themselves, just like in the simpler case of jumping into a single black hole. Alas, I don’t see how any amount of cleverness can get around this.

67. SR Says:

This discussion has made me realize something amusing. We are speculating about creating virtual worlds in which simulated people may have the experience of living in a geometry with wormholes, even though our physics (presumably) does not directly allow this. What if our world, analogously, was created by people who lived in a purely classical world who thought it would be amusing to create a simulation which appears from the inside to be quantum, but in the “outside reality” computes everything slowly in a classical manner, and “steps time” only when those arduous computations are complete? Our subjective time may correspond to aeons in the outside world and we would not know it.

68. Matthew Says:

Many people in the High Energy Theory (i.e. string theory, formal theory, etc.) community would object to the statement that SYK even has a meaningful relation to quantum gravity. Let me explain.

The classic example of AdS/CFT relates a quantum system called super-Yang-Mills (like the strong force) in four dimensions to quantum gravity (i.e. string theory) in 5d with negative curvature (AdS). The quantum system is labeled by two parameters: the number N of colors of the strong force, and the coupling of the strong force. When N is large, extensive evidence shows that SYM is identical to gravity in 5d (with supersymmetry, etc.), i.e. all corrections to gravity are suppressed by 1/N. This means that if you could simulate SYM at large N, then in principle you could simulate a wormhole in 5d. Note that this duality holds no matter in which kinematic regime (e.g. temperature) you study SYM, as long as the label N is large.

Of course, even theoretically computing SYM at large N (including what would be needed to describe a wormhole) is super hard, and simulating it is still a distant dream.

SYK is a 1d quantum system labeled by a somewhat analogous parameter N. But even at large N, it is NOT dual to gravity in 2d AdS (or near-AdS) in the standard holographic sense. Instead, when people say it’s dual to 2d gravity, they mean there is a part of the theory that is described by 2d gravity, and this part is dominant in a certain kinematic regime (e.g. low temperature), but this part of the system CANNOT be distinguished from the rest of the system by ANY label of the system. This is why SYK does not have a quantum gravity dual in the standard sense. What I am saying is quite standard: e.g., if you look at the standard review of SYK by Rosenhaus (arXiv:1807.03334), you see in Figure 5 that in the “AdS dual” column there is a big question mark, unlike for SYM and other better-understood cases.

You might say that I’m splitting hairs, and that SYK is just dual to quantum gravity in some new, perhaps more general sense. But this general sense is so general that pretty much any conformal field theory would be dual to quantum gravity, so the statement becomes almost vacuous. Take the simplest field theory: the critical 3d Ising model. This model can be simulated classically, e.g. by an evaporating cup of water at a certain pressure and temperature; no need for a fancy quantum computer. Since the critical Ising model is conformal and has a stress tensor, formally it has an AdS description in 4d with a graviton dual to the stress tensor. There may also be kinematic regimes of the Ising model where the stress tensor dominates, which would be dual to the graviton dominating in the bulk. But there is no label like N for the Ising model which can be dialed to guarantee this, so it’s a bit silly to call the Ising model quantum gravity. You could indeed rewrite the 3d Ising model in 4d AdS variables (which has been rigorously done), but this won’t necessarily teach you anything about quantum gravity.

I think it’s a pity some people exaggerated this result, because it would be super cool if a wormhole could one day be simulated by a system with an actual quantum gravity dual. This wouldn’t be a wormhole in our spacetime, but it would still be a big accomplishment, and could well teach us new things about quantum gravity. Exaggerating SYK and its connection to quantum gravity has done our community a disservice.

69. Andrei Says:

Scott,

” If you had two entangled quantum computers, one on Earth and the other in the Andromeda galaxy, and if they were both simulating SYK, and if Alice on Earth and Bob in Andromeda both uploaded their own brains into their respective quantum simulations, then it seems possible that the simulated Alice and Bob could have the experience of jumping into a wormhole and meeting each other in the middle.”

There is something that bothers me about this ER = EPR thing in both its original form (wormholes) and in this simulation.

As far as I understand, the black holes are entangled as long as they are built out of entangled particles. One particle goes into BH A and the other into BH B. However, Alice and Bob are not entangled, so isn’t the entanglement terminated when they jump into those black holes?

Likewise, would the entanglement between those computers survive when Alice and Bob (who have some uncorrelated physical states) are uploaded?

70. LK2 Says:

Gil Kalai (#50):

thank you very much for the post.

71. Scott Says:

Andrei #69: No, in the scenario being discussed, I believe the number of qubits in Alice and Bob is small compared to the number of pre-entangled qubits in the two black holes. In the bulk dual description, Alice and Bob enter the two mouths of the wormhole without appreciably changing its geometry.

72. gentzen Says:

“Andrei #69: No, in the scenario being discussed, I believe the number of qubits in Alice and Bob is small compared to the number of pre-entangled qubits in the two black holes. In the bulk dual description, Alice and Bob enter the two mouths of the wormhole without appreciably changing its geometry.”

I guess this means that even if I could pull off the feature (described in gentzen #15) of having two quantum computers which can both read quantum input (let’s assume spin-entangled photons for definiteness), and the additional feature of detecting (or “post-selecting”) when they each got one photon from an entangled pair, I would still not be able to simulate the wormhole with “two entangled quantum computers”. I would also need some quantum memory in both computers to collect enough entangled qubits to be able to simulate the relatively large “number of pre-entangled qubits”.

I start to feel just how disappointing the actually performed quantum simulation is, even compared to the thinking described by Daniel Harlow.

73. Gil Kalai Says:

(LK2#11,#70, Scott)

Scott’s new notion of “practical quantum supremacy” gives me another opportunity to explain a basic ingredient of my argument. To repeat Scott’s new notion: “practical quantum supremacy” refers to a noisy quantum device that, for a fixed rate of noise, asymptotically represents classical computation, but can practically manifest “computational supremacy” at the intermediate scale. Here “computational supremacy” is the ability to use the device to perform certain computations that are impossible or very hard for digital computers.

A basic ingredient of my argument is:

“‘Practical quantum supremacy’ cannot be achieved.”

As a matter of fact, this proposed principle applies in general and not only for quantum devices.

“Any form of ‘practical computational supremacy’ cannot be achieved.”

In other words, if you have a system or a device that represents computation in P, then you will not be able to tune some parameters of that system or device to achieve superior computations in the small or intermediate scale.

NISQ computers with constant error rates (whether constant readout errors or Aharonov et al.’s errors) are, from the point of view of computational complexity, simply classical computing devices. (This is very simple for constant readout errors, and it is the new result of Aharonov et al. for their noise model.) Therefore, our principle implies that they cannot be used (in principle) to demonstrate practical computational supremacy.

This conclusion has far-reaching consequences: the Google experiment had a 2-qubit-gate fidelity of roughly 99.5%. Pushing it to 99.9% or 99.99% may lead to convincing “practical supremacy” demonstrations. The principle stated above implies that engineers will fail to achieve 2-qubit gates of 99.99% fidelity as a matter of principle. This sounds counterintuitive, as improving the 2-qubit-gate quality is widely regarded as a “purely engineering” task. The principle I propose, namely the principle of “no practical computational supremacy,” tells you, based on computational-complexity considerations, that what was considered an engineering task is actually out of reach as a matter of principle. This principle lies at the interface of physics and the theory of computing.
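To see why the jump from 99.5% to 99.99% matters so much, here is a crude back-of-envelope sketch (a toy multiplicative error model of my own, not Gil’s or Google’s actual noise model; the figure of roughly 430 two-qubit gates for Sycamore’s supremacy circuits is taken as an illustrative assumption, and single-qubit errors are ignored):

```python
# Toy multiplicative error model (an illustration only): whole-circuit
# fidelity ~ p**g, where p is the per-2-qubit-gate fidelity and g the
# number of such gates. g = 430 is roughly the two-qubit-gate count of
# Google's Sycamore supremacy circuits (assumption for illustration).

def circuit_fidelity(p: float, g: int = 430) -> float:
    """Crude estimate of whole-circuit fidelity from per-gate fidelity."""
    return p ** g

for p in (0.995, 0.999, 0.9999):
    print(f"2-qubit-gate fidelity {p}: circuit fidelity ~ {circuit_fidelity(p):.3f}")
```

In this toy model the whole-circuit fidelity climbs from roughly 0.12 at 99.5% to roughly 0.96 at 99.99%, which is why those per-gate targets are treated as decisive.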

So let me come back to LK2 #11’s questions. The answer to part 3 is “yes”. The assertion that “practical quantum supremacy” is out of reach leads to some lower limits on the quality of the components of NISQ computers. These lower limits will also not allow good-quality quantum error-correction.

LK2 and Scott, I hope that this explanation helps to clarify my argument. Let me know.

LK2 (#70): I hope this comment is of some help. In addition, there are various places to read about my theory. Recently (Nov. 5) I wrote a post on my blog with the text of a lecture given in Budapest, and the same post has links to several of my papers on related matters and to two videotaped (Zoom) lectures, in Pakistan and in Indonesia.

74. Adam Treat Says:

Gil #73,

‘Scott’s new notion of “practical quantum supremacy” gives me another opportunity to explain a basic ingredient of my argument. To repeat Scott’s new notion: “practical quantum supremacy”’

Since the football World Cup is going on, I must take this opportunity to raise a *red card* for this bit of passive-aggressive rhetorical sleight of hand 🙂

Scott’s entire point is that this “new notion” is nothing of the sort and so-called “practical quantum supremacy” has always been what the current slate of QC supremacy experiments were going for. So it isn’t new. Also, it wasn’t Scott who coined the term, but rather Aharonov et al.

Foul called, but play may now proceed… 😉

75. Scott Says:

Gil Kalai #73: To clarify one point, “practical quantum supremacy” is not a new notion! It’s the same notion that I and others have been talking about since Arkhipov and I introduced BosonSampling more than a decade ago. I’m just being a good sport and using Aharonov et al’s new term for it.

76. Scott Says:

Adam Treat #74: You beat me to it!

77. Gali Weinstein Says:

Susskind tries to establish the equivalence between the ER = EPR traversable-wormhole protocol and Sycamore’s quantum hardware. Only if they are equivalent can we learn about the first from experiments performed on the second. I am also not convinced.
In their paper, “Traversable wormhole dynamics on a quantum processor”, Jafferis et al. write: “As described by the size-winding mechanism, information placed in the wormhole is scrambled throughout the left subsystem, then the weak coupling between the two sides of the wormhole causes the information to unscramble and refocus in the right subsystem. Owing to the [quantum] chaotic nature of such scrambling [in presence of chaos]–unscrambling dynamics, the many-body time evolution must be implemented with high fidelity to transmit information through the wormhole”.
And in their paper “Quantum Gravity in the Lab,” Brown et al. write that the scrambling, i.e. the exponential growth of OTOCs, is a manifestation of quantum chaos.
Chaotic systems such as random circuits exhibit high-temperature teleportation and no size winding. Teleportation occurs at times larger than the scrambling time due to random dynamics. Only a single qubit can be teleported with high fidelity in the high-temperature limit, but with the right encoding of information, many qubits can be sent at low temperatures and intermediate times in a holographic system hosting a traversable wormhole.
Jafferis et al. add: “This analysis shows that teleportation under the learned Hamiltonian is caused by the size winding mechanism, not by generic chaotic dynamics, direct swapping or other non-gravitational dynamics (Supplementary Information)”.
What I want to ask is about the chaos. It seems to me that on the one hand, Jafferis et al. speak of information that is placed in the wormhole which is scrambled in the presence of chaos. On the other hand, there are random quantum circuits and chaotic systems. According to Jafferis et al., they are equivalent.

78. fred Says:

Scott
“In more detail, Pan, Chen, and Zhang have shown how to simulate Google’s 53-qubit Sycamore chip classically, using what I estimated to be 100-1000X the electricity cost of running the quantum computer itself (including the dilution refrigerator!).”

I’m not sure that falling back on arguments about energy efficiency is very insightful.
It’s true that classical server farms (cloud systems) use way more electricity than what they typically try to simulate, but such systems are truly programmable/universal, i.e. they can be used to compute a zillion practical things. AlphaFold simulations, for example, obviously use an immense amount of energy compared to the actual energy involved in the folding of a single protein, but that’s not the point.

79. Gil Kalai Says:

Guys, you missed my point. I tried to define and use the terms in a precise way:

“practical computational supremacy” refers to a situation in which you have a system or device that asymptotically represents a computation in P, but by proper engineering of this device you achieve superior computations at the small or intermediate scale.

I refer by “practical quantum supremacy” to “practical computational supremacy” via a quantum device.

Both these terms refer to devices which asymptotically admit an efficient classical simulation.

Adam wrote: “‘practical quantum supremacy’ has always been what the current slate of QC supremacy experiments were going for.”

As far as I can tell, the insight that NISQ sampling represents classical computation (to which Aharonov et al. have now added a nice new result) was not due to Aaronson and Arkhipov but came later from Guy and me (for boson sampling), and subsequently from various other groups of researchers in various situations. (Certainly, the fact that for RCS classical algorithms asymptotically perform better when n is large is a surprising new discovery by Gao et al.)

There was a different reason why people advocated quantum supremacy experiments at the intermediate scale: it was the only regime where verifying the outcomes was at all possible.

In any case, the purpose of my comment was to explain my argument using these new/old terms in the way I defined them in the comment.

If you find the terms “practical quantum supremacy” and “practical computational supremacy” as I used them in my comment confusing, you can replace them with the terms “practical quantum supremacy, not supported by asymptotic analysis” and “practical computational supremacy, not supported by asymptotic analysis.” In this case my proposed principle simply reads:

Practical computational supremacy not supported by asymptotic analysis cannot be reached.

80. Stephen Jordan Says:

It seems to me the premise that “things meet in the middle of the wormhole after jumping in” may have some complexity-theoretic meat to it. (Hopefully I am not posting the same comment twice. I think though that there was a snag in email verification the first time around.)

Suppose Alice and Bob are separated by d lightseconds. Alice has a bit string x, and Bob has a bit string y, each of length n. Let $$f:\{0,1\}^n \to \{0,1\}$$ be some function that requires $$T = 2^{\Omega(n)}$$ steps to compute. The objective is for Bob to obtain $$f(x \oplus y)$$ as soon as possible. For simplicity, suppose all computation, classical or quantum, occurs at one elementary operation per second.

Classically, the best strategy is for Alice to transmit x to Bob, which takes d seconds, and then Bob to compute $$f( x \oplus y)$$, which takes T seconds. So, the soonest Bob can obtain the answer is after time d + T.

Now, suppose Alice and Bob share entanglement. Does this help? We know from quantum communication complexity that sharing entanglement can help with some tasks. But given that f is fully general other than being hard to compute, it seems clear that the answer must be no. Now, suppose that Alice and Bob’s entangled state represents a wormhole. Then, Alice and Bob can each send (i.e. upload) delegates into the wormhole who know x and y, respectively. (The delegates needn’t be “minds”. They could just be simple computer programs.) These delegates then “meet in the middle,” share their bit strings, and proceed to compute $$f(x \oplus y)$$. During this time Alice sends the necessary classical information to Bob so that he can extract these delegates from the wormhole and learn their answer.

In this case, the computation of f by the delegates inside the wormhole and the transmission of classical information from Alice to Bob are happening in parallel, at least naively. So, the time needed for Bob to obtain $$f(x \oplus y)$$ seems as though it has been reduced from d + T to $$\max\{d,T\}$$. What gives? Physically, Alice and Bob share an entangled state $$| \psi \rangle$$. If I understand correctly, the dynamics inside the wormhole where the delegates have met is ultimately implemented by unitaries that are local to the two sides: $$| \psi(t+1) \rangle = (U_A \otimes U_B) | \psi(t) \rangle$$. And while this is going on, some classical information is in transit from Alice to Bob.

So it must be that either entanglement provides a generic speedup whereby the computation and the classical communication can effectively be done in parallel (it seems that this must not be right; maybe if we think about it hard enough we could point to a theorem guaranteeing that it is not), or the unscrambling step by which the delegates are extracted from the wormhole, after the necessary classical information has been transmitted from Alice to Bob, requires a number of computational steps that grows at least linearly with how much subjective time passed for the delegates while they were inside the wormhole. (Maybe this lower bound on the complexity of unscrambling is a standard assumption.)
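Stephen Jordan’s timing puzzle can be restated as a tiny scheduling model (a sketch of the counting argument only, not of any physics; the function names are mine):

```python
# Toy scheduling model of the argument above (counting only, no physics).
# Classically, Bob waits d steps for x to arrive, then spends T steps
# computing f(x XOR y). The naive wormhole story overlaps the T-step
# computation (done by the delegates "inside") with the d-step transmission.

def classical_time(d: int, T: int) -> int:
    """Sequential: transmit x (d steps), then compute (T steps)."""
    return d + T

def naive_wormhole_time(d: int, T: int) -> int:
    """Overlapped: computation and classical transmission run in parallel."""
    return max(d, T)

d, T = 100, 2**20  # a light delay dwarfed by an exponential-time computation
print(classical_time(d, T) - naive_wormhole_time(d, T))  # the suspicious saving
```

Since max(d, T) can be strictly smaller than d + T, something in the wormhole story must pay the difference back, which is exactly the dilemma the comment poses.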

81. fred Says:

The good news is that the authors will retract their confusing wormhole claim by using the same QC to reverse time in order to truly undo this mess…

https://phys.org/news/2019-03-physicists-reverse-quantum.html

82. LK2 Says:

Gil Kalai #73:

Thank you very much for the additional explanations. Now I have to go through your papers to understand how these two statements:

“‘Practical quantum supremacy’ cannot be achieved”
“Any form of ‘practical computational supremacy’ cannot be achieved.”

are justified. Thank you again very very much!

83. Adam Treat Says:

LK2 #82: Gil himself acknowledges that it is a conjecture and that he has no proof. So don’t look too long for the proof. At most you will get to understand the motivations behind the conjecture, which I am also trying to understand. At this point, I don’t hold out too much hope that I’ll be able to understand it, because I think even Scott has admitted to reading Gil’s 1999 paper and subsequent work and still does not understand Gil’s argument for how Fourier-Walsh analysis of noise in NISQ somehow leads to the conjecture.

84. Scott Says:

Stephen Jordan #80: Extremely interesting, thanks! But if there’s any computational speedup to be had by “doing the computation inside the wormhole,” then shouldn’t that also show up as something much more elementary: a speedup for an entangled Alice and Bob compared to unentangled ones? In general, don’t we always maintain the invariant that, from the outside observer’s perspective, anything whatsoever that’s stated in terms of a non-traversable wormhole can be restated without mentioning the wormhole at all? Indeed, isn’t that in some sense the whole point of ER=EPR?

85. Scott Says:

Gil Kalai #79: No, I don’t think I missed the point. I just think you’re wrong, both about the past of quantum supremacy and about its future! Regarding the future, I’m happy to wait for future experiments to settle much of what’s at issue between us, as I trust you are as well. Regarding the past, here’s what Alex and I wrote in the 2011 BosonSampling paper:

The fundamental worry is that, as we increase the number of photons n, the probability of a successful run of the experiment might decrease like $$c^{-n}$$. In practice, experimentalists usually deal with such behavior by postselecting on the successful runs. In our context, that could mean (for example) that we only count the runs in which n detectors register a photon simultaneously, even if such runs are exponentially unlikely. We expect that any realistic implementation of our experiment would involve at least some postselection. However, if the eventual goal is to scale to large values of n, then any need to postselect on an event with probability $$c^{-n}$$ presents an obvious barrier. Indeed, from an asymptotic perspective, this sort of postselection defeats the entire purpose of using a quantum computer rather than a classical computer.
For this reason, while even a heavily-postselected Hong-Ou-Mandel dip with (say) n = 3, 4, or 5 photons would be interesting, our real hope is that it will ultimately be possible to scale our experiment to interestingly large values of n, while maintaining a total error that is closer to 0 than to 1. However, supposing this turns out to be possible, one can still ask: how close to 0 does the error need to be?
Unfortunately, just like with the question of how many photons are needed, it is difficult to give a direct answer, because of the reliance of our results on asymptotics. What Theorem 3 shows is that, if one can scale the BosonSampling experiment to n photons and error δ in total variation distance, using an amount of “experimental effort” that scales polynomially with both n and 1/δ, then modulo our complexity conjectures, the Extended Church-Turing Thesis is false. The trouble is that no finite experiment can ever prove (or disprove) the claim that scaling to n photons and error δ takes poly(n, 1/δ) experimental effort. One can, however, build a circumstantial case for this claim—by increasing n, decreasing δ, and making it clear that, with reasonable effort, one could have increased n and decreased δ still further.

Granted, we had no theorem to the effect that, if the error is held constant, then BosonSampling does not yield scalable quantum supremacy. Such theorems would come later, from the work of you and Guy and later Renema, Garcia-Patron, and others. Our focus on total variation distance, rather than (say) fraction of lost photons, also seems rather comically overoptimistic with hindsight.

Nevertheless, the very fact that we stressed the need to continually decrease the error if you care about the scaling limit, shows that we weren’t expecting scalable quantum supremacy from BosonSampling with a constant error rate. That, in turn, makes clear that when we discussed the prospect of experiments with particular numbers of photons like 50, we were already focused on what Aharonov et al are now calling “practical quantum supremacy.”
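The postselection barrier in the quoted passage can be illustrated with a toy calculation (my own sketch, not from the paper; it assumes each run independently succeeds, i.e. all n detectors fire, with probability c^(-n)):

```python
# Toy restatement of the postselection barrier: if each run independently
# succeeds with probability c**(-n), the expected number of runs until one
# success is c**n -- exponential in n, which cancels any exponential
# quantum advantage.

def expected_runs(c: float, n: int) -> float:
    """Mean of a geometric distribution with success probability c**(-n)."""
    return 1.0 / (c ** (-n))

for n in (5, 20, 50):
    print(f"n = {n:2d}: expected runs ~ {expected_runs(1.3, n):.3g}")
```

Even a modest c makes the expected number of runs astronomically large well before n gets interesting, which is why constant-probability-per-photon postselection “defeats the entire purpose.”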

86. Stephen Jordan Says:

Scott 84: One can pose a question about what Alice and Bob can achieve if they share a certain entangled state and use local unitaries and classical computation, which makes no reference to wormholes. Namely, if a computation requires T timesteps and there is a time delay of d timesteps for classical data to travel from Alice to Bob, then what is the minimum time interval from when Alice and Bob receive their respective bit strings x and y to when Bob can output $$f(x \oplus y)$$? I conjecture that the answer is still d + T even if Alice and Bob are allowed to possess any (x,y independent) shared entangled state from the beginning.

We also have the claim that “people who jump into wormholes can meet in the middle, share information, process it, and then be extracted back out of the wormhole and asked about their experiences, provided certain classical information is transmitted from one side to the other to enable the extraction”.

It seems to me the only way that the former conjecture and the latter claim are consistent is that unscrambling these people from the wormhole to ask them questions later requires not only classical communication from one end of the wormhole to the other, but also an amount of computation for the unscrambling which grows linearly with how much time they spent in the wormhole. This is a constraint from computational arguments that is stronger than the constraint coming purely from no-signaling. (The latter constraint being automatically satisfied by the necessity of the classical communication.)

87. shion arita Says:

@scott#66

I think I essentially agree with you actually, Scott. I ultimately think that there’s one part that’s very confusing to me:

The problem is that, as far as I understand it, the claim is that Alice, who’s being simulated at real-life location A, goes into the simulated wormhole, and Bob, simulated at real-life location B, goes in, and they ‘meet’ in the simulation.

Therefore, in this case, bits of information that depend on events at real life location A become correlated with bits that depend on events at real life location B, without any classical information exchange.

How is this not FTL signaling? Even though we, outside the system, can’t understand or access the signal, I don’t see why that means that the signaling did not occur.

88. The Wormhole Publicity Stunt | Not Even Wrong Says:

[…] Something I should have linked to before is Scott Aaronson’s blog posting about this, and the comments there. One that I think is of interest explains that SYK at large N is not […]

89. Gil Kalai Says:

Thanks for the useful historical comments, Scott.

I used a precise definition of “practical quantum supremacy” to present my proposed principle for why such “practical quantum supremacy” is out of reach. Again the heuristic principle I propose is

“Computational supremacy not supported by asymptotic analysis cannot be reached”

This seems an interesting statement in the interface between the theory of computing and physics and it can be interesting to discuss it. Based on what we now know (including Aharonov et al.), this principle, if true, implies that NISQ devices (and in particular RCS and boson sampling) cannot reach quantum supremacy. (We can study this proposed principle for smaller computational complexity classes, e.g. to look at examples where computationally inferior devices or algorithms perform better in the small/intermediate scale.)

Scott: I just think you’re wrong, both about the past of quantum supremacy and about its future!

Scott, I am mainly interested in explaining and discussing my argument and other aspects of noise sensitivity in the context of NISQ systems, and in examining current experimental claims. On the matters where we disagree (and also those where we agree), I am very curious to know whether I am right or you are.

I am also quite interested in the history of the story, and I try to give accurate, fair, generous, and detailed credit. (I will look carefully at what you wrote.) But this is of little relevance to my comment.

Adam Treat (#83) “At this point, I don’t hold out too much hope though that I’ll be able to understand it because I think even Scott has admitted to reading Gil’s 1999 paper and subsequent work and still does not understand Gil’s argument for how Fourier-Walsh analysis of noise in NISQ somehow leads to the conjecture.”

I will be happy to explain it privately (and other aspects of the noise stability/noise sensitivity theory) both to you and to Scott. Drop me an email, guys.

90. James Cross Says:

#81 Fred

QCs actually are Universal Proving machines that can allow us to prove any possible sup[er]position about reality and the universe. 🙂

91. Scott Says:

Gil #89, I used the term “practical quantum supremacy” in my original post, before you’d even shown up in the comment section, because it was the term Aharonov et al used for what I and others had been talking about for more than a decade. To whatever extent your definition differs from Aharonov et al’s, why is that relevant here?

92. LK2 Says:

Adam Treat #83: I will keep in mind your comment as I go through Gil’s papers. Thanks!

93. Ted Says:

Scott, a technical question inspired by your long blog post on AdS/CFT and brain uploading back in July: Can the duality between the “regular QM” picture and the wormhole picture be implemented efficiently? I.e. if one is given only the quantum logic circuit implementing this simulation on a large number of qubits, can one efficiently compute the details of the corresponding process of traveling through the wormhole in the complementary description? And vice versa, if one is only given the details of the wormhole description?

If the duality can’t be computed efficiently, then it seems to me that that would even further weaken the (already very weak!) claim to have “created/simulated” a wormhole. Because if we don’t require that the simulation can be “read out” efficiently, then we can equivalently say (via waterfall-type arguments) that (a larger version of) the quantum circuit would have simulated every physical process.

94. fred Says:

James Cross #90

“QCs actually are Universal Proving machines”

I never said they weren’t, but obviously such QCs don’t exist right now and probably not for a few decades (if ever).
So I’m talking about the commercial classical computers we do have vs. the type of quantum *gadgets* we have at our disposal right now … a few dozen noisy qubits just aren’t a QC, even if their behavior can’t be simulated easily.

95. They simulate on a quantum computer the so-called “teleportation through a traversable wormhole” protocol - La Ciencia de la Mula Francis Says:

[…] Aaronson, «Google’s Sycamore chip: no wormholes, no superfast classical simulation either,» Shtetl-Optimized, 02 Dec 2022; Mateus Araújo, «The death of Quanta Magazine,» More Quantum, 01 Dec 2022; Douglas Natelson, […]

96. Quantum news for November 2022 Says:

[…] published in Nature, a follow-up article published in ArsTechnica, a post by Scott Aaronson, an interesting thread on Twitter, which not all physicists have yet left for […]

97. John Says:

In the preprint it says the runtime of the algorithm is 2^(O(log(1/e))) = poly(1/e), and it is not practical due to a large exponent. So is it still possible that there is a superpolynomial advantage for NISQ devices conducting RCS experiments?
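For reference, the runtime claim John quotes is just the standard identity that an exponential of a logarithm is a polynomial (the constant c hidden in the O(·) is what makes the exponent large in practice):

```latex
2^{O(\log(1/\epsilon))} = 2^{c \log_2(1/\epsilon)} = (1/\epsilon)^{c} = \mathrm{poly}(1/\epsilon)
```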

98. Scott Says:

John #97: In the “anti-concentration regime” that they study, it could at best be a polynomial advantage, albeit conceivably a large one, maybe even with the exponent depending on the noise rate.

The other remaining hope is to go outside the anti-concentration regime—for example, by using constant-depth random noisy quantum circuits with no spatial locality. No one knows yet whether that yields scalable quantum supremacy or not.

99. The truth about wormholes and quantum computers - The magazine that never fails to amaze Says:

[…] on the quantum computer, but missed the boat entirely on this front, as many others were quick to correctly point […]

100. Mateus Araújo Says:

I’ve emailed Daniel Harlow asking him to substantiate his claim (or unendorsed explanation?) that the simulated Alice and Bob could meet in the middle of the simulated wormhole. His answer was that I need to learn more AdS/CFT to understand it.

I think this firmly puts this claim in the camp of complete bullshit.

101. The truth about wormholes and quantum computers - California Press News Says:

[…] on the quantum computer, but missed the boat entirely on this front, as many others were quick to correctly point […]

102. David Brown Says:

According to conventional wisdom, if wormholes exist, then they are so unstable they must be irrelevant to computer technology.
Keep in mind that journalists have incentives to over-dramatize the news. Exaggeration sells but is seldom punished in journalism.

103. Scott Says:

Mateus #100: My guess is that if you studied AdS/CFT more, you would indeed understand better why people came to believe that a simulated Alice and Bob could meet in the middle of a simulated wormhole … and also there’d be a metaphysical leap that you’d still consider to be bullshit. 🙂

But at least you’d understand the statement modulo that leap (I wish physicists would be more explicit about such leaps, incidentally!)

104. Adam Treat Says:

Scott #102,

I’ll guess the opposite. Simply studying the AdS/CFT papers will not be enough for Mateus or others to understand why some believe simulated Alices/Bobs may meet in the middle. I don’t think there is *any* technical reason or feature of AdS/CFT that sheds light on such beliefs. Rather, I think it is only by embedding oneself in the culture of those believers that one may come to understand the belief.

Further, to truly understand it I think one has to have a suspension of skepticism brought on by repeated inculcation with that culture and have weaknesses in human psychological biases and peer pressure that are sufficient to do away with said skepticism.

Even though you’ve had such exposure to the culture and know the AdS/CFT conjecture well enough, I’d submit this is the reason you don’t share the belief nor really understand what motivates it from a technical perspective. The culture/psychological pressure is not enough to overcome your healthy penchant for not buying in.

105. Carey Underwood Says:

Professor Matt Strassler has been weighing in on this as well (as can be expected, given that he’s my other favourite physics blogger); his posts might be illuminating on some of the wormhole questions.

https://profmattstrassler.com/2022/12/06/how-do-you-make-a-baby-cartoon-wormhole-in-a-lab/

106. Scott Says:

Adam Treat #103: How much have you studied this, in order to feel confident of what you say? Do you deny, for example, that classical GR admits solutions with non-traversable wormholes in which Alice and Bob, entering from opposite ends, could nevertheless meet in the middle? Or that there are reasons to believe that, in quantum gravity, such solutions would be relevant when you have two entangled black holes?

107. Mateus Araújo Says:

Scott #102: My point is that this is unscholarly behaviour.

It reminds me of Joy Christian. When confronted with the mistakes in his “disproof” of Bell’s theorem, he would dismiss them by saying people needed to study geometric algebra.

108. Adam Treat Says:

Scott #105: I guess I’ve studied it enough to feel confident in my assertions, but that’s a bit of a tautology, isn’t it? I can acknowledge it is possible my confidence is misplaced, but luckily I’m open-minded enough to be happy when corrected.

“Do you deny, for example, that classical GR admits solutions with non-traversable wormholes in which Alice and Bob, entering from opposite ends, could nevertheless meet in the middle?”

I deny that those solutions have anything meaningful to say about what it might be *like* to meet in the middle. More forcefully, I deny that those solutions characterize in any way *that it is like anything at all* for intelligent agents to “meet in the middle.”

“Or that there are reasons to believe that, in quantum gravity, such solutions would be relevant when you have two entangled black holes?”

Reasons to believe is too open ended of a statement to disagree with it in any kind of specific way. However, given any such *reason* I’d likely contend it is *also* perfectly reasonable to contend that the actual theory of quantum gravity might make those reasons into total and complete non-sequitur. I also contend that it is likely the case that what we don’t know about real QG far outpaces what we do know.

In a happy coincidence I see that Matt Strassler has something to say that might be relevant to the situation:

“Extremely Important Caveat [similar to one as in the last post]: Notice that the gravity of the simulated cartoon wormhole has absolutely nothing to do with the gravity that holds you and me to the floor. First of all, it’s gravity in one spatial dimension, not three! Second, just as in yesterday’s post, the string theory (from which we ostensibly obtained the JT gravity) is equivalent to a theory of quarks/gluons/etc (from which we might imagine obtaining the SYK model) with no gravity at all. There is no connection between the string theory’s gravity (i.e. between that which makes the wormhole, real or cartoonish) and our own real-world gravity. Worst of all, this is an artificial simulation, not the natural simulation of the previous post; our ordinary gravity does interact with quarks and gluons, but it does not interact with the artificially simulated SYK particles. So the wormholes in question, no matter whether you simulate them with classical or quantum computers, are not ones that actually pull on you or me; these are not wormholes into which a pencil or a cat or a researcher can actually fall. In other words, no safety review is needed for this research program; nothing is being made that could suck up the planet, or Los Angeles, or even a graduate student.”

https://profmattstrassler.com/2022/12/06/how-do-you-make-a-baby-cartoon-wormhole-in-a-lab/#more-13380

Emphasis mine. I haven’t read his whole post – his caution just stood out because he highlighted it in red – so I don’t know if it is relevant to Harlow’s account, but it is striking enough I thought it might be useful to call out.

Err, one part I emphasized is the wrong thing. Anyway, the whole blog post is interesting, and I would recommend the read for anyone still interested in the whole “wormhole” thing.

111. Scott Says:

Adam Treat #107: Well yes, that a simulated holographic wormhole can’t bend the spacetime of “real” observers like ourselves is the same (obvious) point that I made, both in this post and in the pages of the New York Times! But it doesn’t answer the question I thought we were talking about, namely whether it would ever make sense to speak of simulated observers, living in the bulk dual theory, whose spacetime would be bent in the manner predicted by that wormhole solution to GR (which in particular would let those simulated observers “meet in the middle” of the simulated wormhole).

I would never object to anyone speculating about such fun things! The one part that I do object to, is people passing over the metaphysical enormity of what needs to be presupposed in such a discussion, as if it didn’t even require comment.

112. Adam Treat Says:

Scott #110,

Yes, that’s why I said I emphasized the wrong point in the blog comment… I challenge the idea that studying holographic gravity duals will help to clarify because the whole concept here of “simulated observers” and “meeting in the middle” is too nebulous. At this point I’m not really sure whether we disagree.

113. Nick Says:

I really appreciate you calling out so strongly and unambiguously that there is a big problem with what happened here. Even if your anti-hype work doesn’t often have a noticeable short-term effect on the information ecosystem around QIS (though sometimes it does!), it makes a big difference. If it weren’t for you and a few other experts who regularly make honest (and, if necessary, brutally deflationary) assessments of developments in QIS, very many MORE people would be completely taken in by hype events. Especially people like myself, who are interested in quantum computing and know something about it, but cannot evaluate claims in theoretical physics, and think it would be super awesome if any of these claims about very profound physics experiments being done on a quantum computer were true. If you and others didn’t do this, future people working in QIS (many of whom read your blog, I’m sure) would be way worse off.

114. Publicity Stunt Fallout | Not Even Wrong Says:

[…] Latest news this evening from Scott Aaronson at the IAS in Princeton: […]

115. Andreas Karch Says:

I am afraid we will indeed remember that wormhole story a long time from now — the moment a promising research field lost its credibility. Some big shots have got to step up to the plate and call this out for the nonsense it is. Otherwise we’ll soon all be seen as crazy. I can see why they are reluctant. The paper has some nice results. But that is not what this is about. The staggering exaggeration of what has been accomplished has the potential to hurt the entire enterprise.

116. manorba Says:

Andreas Karch #114: “a promising research field lost its credibility.”

i wouldn’t be so drastic, but yes, communicating science to the general public is at an all-time low.
I again advocate for a change in budgeting science resources: allocate some to enlighten people, otherwise i fear it’s gonna bite us in the ass in the future. actually it’s already happening.
Incidentally, one of the pivotal moments to me was the announcement of FTL neutrinos here in italy some years ago.

117. Nancy Lebovitz Says:

David Nirenberg wrote the very impressive _Anti-Judaism: The Western Tradition_, an analysis of anti-semites from their own words. It just goes to show how easy it is to be fooled outside your specialty.

I kept seeing the headlines for a week or two about creating a wormhole, and it’s only more recently that I was seeing clearer headlines about creating a “wormhole”.

118. George Ellis Says:

Way back,

“all entanglement corresponds to a kind of wormhole”.

False! Entanglement was established by using the Schroedinger equation in Minkowski spacetime. No wormhole anywhere in sight. Anton Zeilinger’s Nobel prize winning entanglement experiment has nothing to do with any wormhole anywhere.
It’s propaganda, folks. Don’t fall for it. ER = EPR is false. EPR happily takes place without ER.

119. Scott Says:

George Ellis #117: I think the correct (and extremely interesting) nuggets in “ER=EPR” are:

(1) There are forms of entanglement, for example between two black holes, that admit useful dual descriptions involving non-traversable wormholes.

(2) Conversely, a non-traversable wormhole between two black holes admits an equivalent description in terms of entanglement between the black holes.

Moreover, the fact that the wormhole can’t transmit information is directly related to the fact that entanglement can’t, in a manner that’s now understood.

Where I agree with you is this: I think that, for the vast majority of entangled states one cares about in physics, a dual description in terms of wormholes simply isn’t useful, even in those cases where it meaningfully exists (which is far from all of them).

120. Evan Says:

Scott,

“We knew that if you tried to scale Random Circuit Sampling to 200 or 500 or 1000 qubits, while you also increased the circuit depth proportionately, the signal-to-noise ratio would become exponentially small, meaning that your quantum speedup would disappear. That’s why, from the very beginning, we targeted the “practical” regime of 50-100 qubits”

Is there any literature or people out there narrowing down this window of experiment sizes in a more precise way? I have heard people involved in the Google rcs, when approached with the latest classical simulations of their experiment, deflect criticism on the grounds that the advantage claim would clearly have held if they had used “just one or two more qubits”. While maybe true for n=54, it seems like moving this goalpost much further is problematic.

Is there an “n” and “t” such that a classical simulation of Google’s RCS experiment with number of qubits “n” in time less than “t” (wall clock or compute hours, or pick your own a parameter) would definitely convince you that the experiment could never be used to (and so never actually did) demonstrate computational advantage?

121. Quantum Circuits in the New York Times | Gödel's Lost Letter and P=NP Says:

[…] Scott Aaronson: “If this experiment has brought a wormhole into actual physical existence, then a strong case could be made that you, too, bring a wormhole into actual physical existence every time you sketch one with pen and paper.” (See also Scott’s post here.) […]

122. Bruce Smith Says:

Scott #118:

Could part of the confusion stem from the term “non-traversable wormhole” itself?

If a worm was naming things, I imagine “traversable” would be a requirement for anything (of any kind, in any field) to be described metaphorically as a “wormhole”.

Maybe within QG there is some generalization of “wormhole” which needn’t be traversable… if so, it deserves a better general name, and then “wormhole” could be the subtype of it which *is* traversable.

Then these headlines would be using that better name, which by not implying traversability, would not wrongly suggest FTL communication.

123. I Says:

Are you bearish on topological data analysis as a potential path to quantum supremacy? That’s an approach you don’t seem to mention as often as other paths to QS.

124. Davidson Says:

In principle, both the media and scientists should communicate the truth without exaggeration. But in reality, both journalists and scientists are evaluated by how many people read their work, which is a fair measurement if everyone is truthful. From a game-theory perspective, if everyone is truthful, then one can gain a tremendous advantage by being even slightly untruthful. To me, what we are seeing is “clickbait”: something not too far from reality that will attract a lot more eyeballs. In the age of the internet, clickbait seems impossible to avoid, so if this indeed falls into the category of clickbait, it might just be part of a much more serious problem.

It’s nice having people like Scott to demystify it, but Scott’s power is limited.

125. Scott Says:

I #121: Yes, “bearish” is a good word. There have been some recent dequantizations of the LGZ algorithm for topological data analysis (e.g. this, on which Lloyd himself is coauthor). Moreover, even if a quantum speedup does exist, it will depend on finding examples of practical importance where (eg) the Betti numbers are enormous, and where it suffices to estimate them up to an exponentially large additive error.

126. beleester Says:

I’m not a physicist, but if I’m following the article correctly, the quantum process happening on the computer is supposed to be exactly equivalent to a wormhole in AdS space. Like, it’s not just calculating what the math says a wormhole would look like, it’s creating a quantum system which has certain physical properties you can measure, which map to corresponding properties on a wormhole. So it’s a picture of a wormhole, but it’s an accurate picture, one that might let us empirically test the math that describes wormholes.

(Or it would be accurate, if we lived in AdS space, and if the quantum system they used was the actual one they wanted to test instead of an approximation. But presumably they’re hoping that with enough cleverness you could build a quantum system that maps to something that actually exists.)

Am I correct here, or am I misunderstanding what “dual” means?

127. Scott Says:

Bruce Smith #122:

Could part of the confusion stem from the term “non-traversable wormhole” itself?

Yes, that’s possible. Alas, it’s been fixed since 1957 that “wormhole” means what used to be called an “Einstein-Rosen bridge”: that is, any tunnel that connects two spacetime points in a topologically nontrivial way, regardless of whether it’s traversable.

What would the worms say if you told them there was a hole that they could enter, all right, but it would lengthen faster than the maximum speed with which they could wriggle so that they could never reach the other end of it?

128. Scott Says:

beleester #126: A lot of extremely contentious metaphysics is packed into your phrase “map to”! 🙂 Just because a quantum circuit acting on some qubits “maps to” (in this case, crudely and approximately) a system involving black holes, or wormholes, or nonabelian anyons, does that mean the latter objects are thereby brought into existence?

If so, then why not also a simulation running on a classical computer, which likewise “maps to” the black hole or wormhole for some definition of “map to”? I think the burden is firmly on believers in the one but not the other to articulate the difference.

So it’s a picture of a wormhole, but it’s an accurate picture, one that might let us empirically test the math that describes wormholes.

As I’ve said many times, this is true only for a very nonstandard definition of “empirically test.” From this 9-qubit simulation, nothing was learned that we didn’t already know from classical simulation, by the authors’ own account. If you could scale up to 100 qubits, there would be some hope of learning stuff that’s infeasible to calculate classically. Even then, though, the thing you could hope to learn would be certain abstruse properties of a toy mathematical model that can describe wormholes. Thus, it seems much better to me to call it a “simulation” rather than an “experiment”—especially if there’s a risk (as we learned there is in this case!) of confusing people into thinking that the microchip is literally bending spacetime and creating wormholes in our universe.

129. Scott Says:

Evan #120: Yeah, if (for example) the Gao et al. algorithm could be improved to reproduce Sycamore’s observed LXEB score, with the same running time, then I’d say that Google’s quantum supremacy claim would be dead, and could only be revived by a new and better experiment. Their supremacy claim would, however, still have been true at the time they made it (Fall 2019), and in fact would’ve stood for more than 3 years despite serious efforts to topple it—all things considered, not the worst run!

132. A Raybould Says:

James Cross #60: A digital simulation of a brain would not be a brain, but whether it would instantiate a mind is a different question altogether.

133. Mitchell Porter Says:

George Ellis #118 (the cosmologist, I presume), says, in response to my #33, that “ER=EPR is false” because Schrodinger and Zeilinger didn’t need wormholes in order to talk about or utilize entanglement.

Sorry, but that doesn’t prove that the proposed conceptual unification is wrong. That one may use Maxwell’s equations doesn’t debunk electroweak unification. This is a question about the fundamental nature of entanglement, in our world that contains gravity, and presumably quantized gravity.

134. Andrei Says:

Scott,

“With the fact that the wormhole can’t transmit information being directly related to the fact that entanglement can’t, in a manner that’s now understood.”

I think the ability to send information via entanglement is interpretation dependent.

If one accepts the mainstream, Copenhagen view, where the result of a spin measurement is genuinely random (say A got +1), then it follows that B, by performing his measurement (+1 in this case), accesses this result instantly. So, a bit of information (1) has been sent instantly from A to B. I see no reason to claim that this particular bit is not information.

True, A cannot control his result, so he cannot send a message meaningful to him, but so what?

You can also devise a situation where the information sent is meaningful. Say you use entangled particles to extract a lottery number, on Earth. This number happens to be 10011101100. At that instant, someone on Mars would get the same number, and win the lottery on Mars. Again, I see no justification to claim that the string “10011101100” is not information.

It’s only if you take a hidden-variable view (Bertlmann’s socks scenario) that you can deny any transfer of information via entanglement. In this case the information was already at B, so nothing was transferred from A.

135. Andy Trattner Says:

Scott, it’s an absolute pleasure. Thanks for sharing your thoughts, always. Big blog fan here! Just a minor nit: to my knowledge, you must “center on”; you cannot “center around”. Sorry in advance for being a grammar nazi… but i appreciate your thoroughness in debunking things too much not to share this tiny thought-byte 🙂

https://grammarist.com/usage/center-around-or-center-on/

136. Ajit Says:

Dear Scott,

This is regarding the wormhole issue:

What I gather is that people are making claims which are more or less fully equivalent to the following two:

1. Suppose someone 3D-prints a certain plastic toy using some opaque material. The toy consists of a small Stanford bunny inside another, bigger, Stanford bunny. Suppose he also makes a holographic recording of this toy using optical lasers. He then recreates a 3D holographic image / projection out of that recording, and shows it to me.

I should now make myself believe that I am seeing (a recreation / image of) *both* the bunnies in that holographically recreated image.

2. If someone brings a soap-film to me, I should not touch it without wearing proper gloves and all; else, I will get [an arbitrarily large] electrical shock.

Am I getting the situation right?

Are the claims being made more or less equivalent to a combination of the above two? Or am I going wrong somewhere (esp. in the first scenario)?

If I *am* going wrong, I would appreciate it if you / someone else could point out my wrong-ness. (Better, if the clarification arrives using the same analogies.) Thanks in advance.

Best,
–Ajit
[I know your initial reply to NYT, and so, you / people might be wondering why I am raising something like this, here. Well, I would’ve liked to ask Dr. Peter Woit the above, but, in the past, he hasn’t allowed my comments in. So, I didn’t even try his blog this time round.]

137. Scott Says:

Andrei #134: I meant that an entangled state shared by Alice and Bob can’t be used as a communication channel in the sense of Shannon, allowing Alice to transmit a bit that she decides after the channel is set up. That’s always what I mean by phrases like “transmit information.” I understand perfectly well what shared randomness means, but in computer science we just call it “shared randomness”: a strictly weaker resource than information transmitted from Alice to Bob or vice versa.
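Scott’s distinction can be made concrete with a small numerical sketch (my own illustration, not from the thread): whatever measurement basis Alice chooses on her half of a shared Bell pair, Bob’s reduced density matrix stays the maximally mixed state I/2, so his local statistics carry no Shannon information about Alice’s choice.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2), as a density matrix
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)

def bob_marginal(theta):
    """Bob's reduced state after Alice measures spin along angle theta."""
    # Orthonormal measurement basis for Alice's qubit
    up = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    down = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    post = np.zeros((4, 4))
    for v in (up, down):
        # Projector acting on Alice's qubit only (first tensor factor)
        P = np.kron(np.outer(v, v), np.eye(2))
        post += P @ rho @ P  # sum over unnormalized measurement branches
    # Partial trace over Alice: reshape to [a, b, a', b'], contract a with a'
    full = post.reshape(2, 2, 2, 2)
    return np.trace(full, axis1=0, axis2=2)

# Bob's state is I/2 no matter which basis Alice chooses: no signaling.
for theta in (0.0, 0.7, np.pi / 2):
    assert np.allclose(bob_marginal(theta), np.eye(2) / 2)
```

The correlations only appear once Alice and Bob compare notes over an ordinary (light-speed-limited) channel, which is exactly why shared randomness is a strictly weaker resource than communication.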

138. Andrei Says:

George Ellis,

“False! Entanglement was established by using the Schroedinger equation in Minkowski spacetime. No wormhole anywhere in sight. Anton Zeilinger’s Nobel prize winning entanglement experiment has nothing to do with any wormhole anywhere.”

I think that this boils down to what interpretation of QM one accepts. While one uses the Schroedinger equation to calculate results in Minkowski spacetime it is not at all obvious that what happens behind the curtain also happens in Minkowski spacetime. For example, there is no relativistic description of the collapse. We do not know for sure what happens in an EPR experiment. How are those correlations realized? How can you get perfect correlations between space-like measurements? As EPR has shown there are two options:

1. Hidden-variables (Bertlmann’s socks explanation). This can be described in Minkowski spacetime.

2. Non-locality (the result at A is sent instantly to B). This cannot be accommodated in SR, so, if one takes this option, he can envision wormholes or whatever.

I personally prefer option 1, but 2 is still a logically coherent option.

139. xnarxt Says:

Wormhole stuff: there are two quantum computers at different locations but with entangled qubits. Put Alice in the one, and Bob in the other, and let the computers run. Although this whole situation could be simulated on a classical computer just as well, isn’t the novelty of the quantum version just that the experiment starts out with two physically distant computers, and the “meet in the middle” can be said to have happened (according to some interpretations) faster than light could have traveled to the midpoint? If so, isn’t the Alice-Bob-wormhole thing just a nice story to tell on top of the basic assertion that a QC consisting of entangled qubits at distant locations can nonetheless do computations using all these qubits, although they are far away from each other? Instead of Alice and Bob we could just have two numbers (say, 3 and 5) at each end, run an addition algorithm, and tell a story about how the numbers have already “met in the middle” and added up to 8 faster than light, although there’s no way to actually get this 8 out of the computer(s) that fast.

So then there is nothing special about SYK specifically, besides providing a plausible substrate to support human-like consciousness and stimulating people’s metaphysical intuitions in a more interesting way than mere integers tend to do.

140. Dimitris Papadimitriou Says:

Andrei,
It’s meaningless to say that “the result of Alice’s measurement is sent (instantly or not) to Bob” with A and B being spacelike separated.
There’s no way to define an absolute temporal order between spacelike separated events in special relativity.
Information can be transferred only between events that are timelike or null-separated (for example, event A can transmit a signal to event B if B is *inside* or *on* the future light cone of A, and that’s true both for Special and General Relativity).
And there is the no-signaling theorem in standard QM.
So *standard* QM is “non local” only in that subtle sense that’s in accordance with Relativity!

On the contrary, in nonlocal hidden variable theories (like Bohmian mechanics), there’s a need for some “absolute reference frame”, and that’s not in accordance with the relativistic framework. That’s why these hidden variable theories are explicitly called “nonlocal”.
Local hidden variable theories have already been rejected by (Nobel prize winning) experiments that tested Bell’s inequalities. Only totally implausible miracles (as in “superdeterminism”) could save the day for local hidden variables…

It’s noteworthy also that the “traversable wormholes” related to AdS/CFT etc. are supposed to be of a special kind: they’re not “shortcuts”; in other words, you cannot transmit information through them faster than through the exterior “normal ambient spacetime”. They’re problematic in some respects when interpreted as spacetime geometrical structures, but they can’t violate causality anyway…

141. Nikhil Tikekar Says:

Even a simulation of a toy model on a quantum computer brings up many issues:

(1) Not only does a simulation of a storm on a classical computer not make you wet, but the same bits, under a different mapping/semantics, could simulate many different systems
(2) The underlying changes in physical currents & voltages could also simulate those characteristics of some other semiconductor systems
(3) Similarly, the same quantum computation sequence could simulate many systems – chemistry, materials, quantum field theory. But crudely, in 9 qubits & a given no. of ops
(4) Under certain QFT interpretation, *in a toy model*, it (& other quantum systems it simulates) could correspond to a ‘space-time-quantum gravity’ with 1 additional dimension, wormhole etc.
(5) Our universe doesn’t have characteristics of that toy model – no. of dimensions, topology of space,…
(6) Also, a superconducting ‘q-bit’ is a collective state of many body system on some scale – the same physical system could have different physical models on lower scales (electrons/ nuclei, quarks/gluons,..) – each of those too may have a ‘dual’ in some toy model?

142. fred Says:

Andrei #134

“If one accepts the mainstream, Copenhagen view, where the result of a spin measurement is genuinely random (say A got +1), then it follows that B, by performing his measurement (+1 in this case), accesses this result instantly.”

I guess you’re implying that, in the many-worlds interpretation, outcomes for each possibility come into existence (once the spin decoheres with A and/or B), so, *globally*, no information is really being transmitted?
But although it looks like no information is being created, this still assumes A and B’s consciousness are being split (there’s a A seeing a 0 bit, and another A seeing a 1 bit). So we swapped wave function collapse for consciousness splitting, but consciousness splitting across MW branch is not that special, as consciousness is already split in space and time (my brain and your brain don’t share consciousness, and I’m not sharing consciousness with my brain from 10 minutes ago).
Also there’s still the question about how decoherence happens across space-time, e.g. once A and the bit get entangled (A’s consciousness has split), does B instantly get entangled with its own bit as well?

143. Mateus Araújo Says:

fred #142: Your last question is easy. Decoherence is a physical effect, and as such can spread no faster than the speed of light. For Alice’s decoherence to reach Bob it needs to be transmitted there via a physical medium – usually a cascade of particles getting entangled with each other, but it can be modelled more simply as a photon that gets entangled with Alice and is sent to Bob.

144. Mateus Araújo Says:

Dimitris Papadimitriou #140: Quantum mechanics is nonlocal in the precise sense that it violates Bell’s assumption of Local Causality, namely that the probability of an event should depend only on variables in its past light cone.

145. Physics student Says:

I’m convinced by Bell’s test that there must be some object which can’t be assigned to a specific physical location (i.e. a global, shared state), but on which all actions performed at spacelike-separated spacetime locations are independent of the order in which they are done.

One such possible object is the wavefunction. A tensor product of matrices, with matrix multiplication applied to a different subspace at each location (similar to a density matrix), is another such object (or the same one, just more concrete).

What I’m not convinced is that the size of this object has to be exponential.

Even if you could send N different Bell pairs simultaneously, you would only need an object of size O(N) to describe what is happening correctly – you could just have N independent qubits, which require only O(N) shared state.

Are there any protocols, like Bell’s test, that are possible only when the size of the shared state is O(N^2)? O(2^N)? Have any of these protocols ever been run experimentally? Even for N simultaneous Bell experiments, is there any proof that the shared state must be O(N) (or could you abuse a smaller shared state to correctly reproduce N Bell tests)?
Of course, I assume whatever it is you’re doing on the shared state is independent of order in two spacelike separated events. That’s fair game since that’s how quantum mechanics is consistent with causality.
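The O(N)-versus-exponential gap the commenter is asking about can be made concrete with a short parameter count (my own illustration; the helper names are made up):

```python
# Parameter counting: N independent Bell pairs are described by O(N) numbers,
# while a generic entangled state of 2N qubits needs 2^(2N) complex amplitudes.

def product_state_params(n_pairs: int) -> int:
    # Each pair is a fixed 4-dimensional state: 4 complex amplitudes = 8 reals.
    return 8 * n_pairs

def generic_state_params(n_pairs: int) -> int:
    # A general state of 2N qubits: 2^(2N) complex amplitudes = 2 * 4^N reals.
    return 2 * 4 ** n_pairs

for n in (1, 5, 10, 25):
    print(n, product_state_params(n), generic_state_params(n))
```

For 25 pairs the product description needs 200 numbers while the generic description needs over 10^15, which is exactly why an experiment that only ever produces independent pairs cannot distinguish the two.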

The jump from a single bit of shared state confirmed experimentally to exponentially large Hilbert-space vectors of shared state seems like a huge leap. Many times in physics it’s some harmonic interaction, and those definitely do not need exponential descriptions. As I understand the paper about LXEB spoofing, they found that decoupling the description of the system into two independent descriptions preserved enough of the LXEB to spoof the results.

I just feel like quantum computing takes the wavefunction too seriously. Physics already knows that the description of the wavefunction is extremely redundant in many ways (there are gauge symmetries – already an exponential redundancy physics has discovered in the wavefunction description). Statistical mechanics and classical mechanics ignore the exponential Hilbert space and give accurate results – another hint at giant redundancy.

146. Scott Says:

Physics student #145: Err, welcome to quantum computing research, whose entire point is to do experiments that will make the exponentiality of Hilbert space more and more manifest! Or, if not, to figure out why it can’t be done. Hence, for example, the recent quantum supremacy experiments based on Random Circuit Sampling.

As I explained in this post, if you read it, current classical algorithms to spoof the RCS experiments in reasonable time either require hundreds of thousands of cores, costing much more in electricity than the QC even with its dilution refrigerator, or else they achieve LXEB excess that’s only ~10% of what Google observed.

So how does the QC do this? Occam’s Razor suggests one possibility: it does so by exploiting 2^53 or 2^60 Hilbert space dimensions, exactly the way textbook QM has suggested such a system would since 1926. If you’ve got a better explanation, happy to hear it!

Ultimately, of course, the hope is to scale up to devices with thousands or millions of error-corrected qubits, and thereby (e.g.) use Shor’s algorithm to factor products of thousand-digit primes, and otherwise force the exponential-Hilbert-space skeptics in blog comment sections into a tighter and tighter corner. Or maybe they still won’t care! 😀
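For a sense of the scale Scott is invoking, here is a rough back-of-envelope of my own (not from the post): the memory a classical machine would need just to store one full statevector over 2^53 or 2^60 dimensions.

```python
# Memory to hold a full n-qubit statevector: 2^n complex amplitudes.

def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    # 16 bytes per amplitude = two float64s (real and imaginary parts).
    return (2 ** n_qubits) * bytes_per_amplitude

PIB = 2 ** 50  # bytes per pebibyte
print(f"53 qubits: {statevector_bytes(53) // PIB} PiB")  # 128 PiB
print(f"60 qubits: {statevector_bytes(60) // PIB} PiB")  # 16384 PiB
```

Even 53 qubits already exceeds the RAM of any single machine, which is why the classical spoofing algorithms mentioned above avoid storing the state directly.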

147. Tu Says:

Glad to hear you are back in Princeton! Looks like Hoagie Haven is not catering the conference– be sure to stop by and order a heart-stop for me…

148. Scott Says:

Tu #147: Left Princeton last night, now at Q2B in Santa Clara. I got my chocolate orange scoop at the Bent Spoon; that’s what matters!

149. Dimitris Papadimitriou Says:

Mateus Araújo #144

My previous comment was explicitly about the Relativistic notion of Locality/ Causality ( that is fully respected by QM).
In an EPR experiment, there’s no objective, physically meaningful way to determine which of the two spacelike separated measurements ( Alice’s or Bob’s ) “collapses the wavefunction” first. I think that you don’t disagree with that.
There is no transmission of any signal from A to B or vice versa. All interpretations of QM agree with that ( they better do so!).
I agree with you of course that the probabilities are not determined exclusively by the past light cone. That’s our usual ( but rather inexplicable, at least classically) “non separability” that characterizes entanglement.
That “Bell-non locality ” does not violate relativistic locality, that’s the whole point.

150. Craig Says:

This all reminds me of time travel through crossing time zones.

151. DR Says:

As a mathematician, I’m curious to know what’s the actual topology of the Lorentzian surface with boundary that is the spacetime this simulated wormhole lives in. Or, at least, in the vicinity of the wormhole itself. Do people actually know?

152. WA Says:

Just looked at the IAS workshop program, and my oh my are these talks absolutely mouth-watering. Hoping some of them will become publicly available later either by video or transcript. Thanks Scott for the great blog post!

153. Physics student Says:

Scott #146:

I’m not quite convinced that the random circuit sampling is a good test. First of all, they proved that as the number of qubits scales, the spoofing is actually going to outperform the noisy quantum computers.

But conceptually, there are “random quantum circuits” all over physics. Yet physicists continue to work and make approximations that turn out to be exceptionally good. Most statistical ensembles of particles are a kind of random quantum circuit. And there are just so many ways physicists have approximated their partition functions, and their approximations of the probability distributions of the ensembles have worked well.

It’s just not very believable that, on the one hand, approximations work so well in physics, and on the other hand, reality is an exponentially complicated Hilbert space. And of course you’ve got classical mechanics, which is the best approximation of them all and works suspiciously well.

If I had to guess an approximation that could possibly get similar results to RCS, I’d guess something like this:

φ = exp( i ( A_i q_i + B_ij q_i q_j + C_ijk q_i q_j q_k + … ) )

where q_i is the value of the i-th qubit, and the parameters are a vector A_i, a matrix B_ij, a 3-dimensional tensor C_ijk, etc., up to some order n < N (and of course we divide by the appropriate normalization). And at each time step we approximate by choosing/randomizing a wavefunction close to the one we seek.

There's definitely an n for which this approximation is exact, but I'm betting it will be on par with Google's QC far earlier – maybe even for n = 2 or 3 or 4. Since the circuit is random, it won't be sophisticated enough to distinguish a real quantum simulation from one that just keeps track of n-th order correlations and skips the rest. I also won't be surprised if reality is similar and only keeps track of n-th order correlations, rather than the exponentially many correlations you'd get at n = N. Harmonic interactions, for example, are already exact for n = 2. You can hardly find physical predictions that would change if that were the case, as physicists already get pretty good results from low-order corrections (the famous anomalous-magnetic-moment calculation/prediction is "only" up to 5th-order corrections).
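The truncated ansatz sketched above can at least be sized up by counting its coefficients (my own illustration; the function name is made up): a phase polynomial of order n over N binary variables has sum over k ≤ n of C(N, k) coefficients, versus 2^N amplitudes for a full state.

```python
# Count coefficients A_i, B_ij, C_ijk, ... of the order-n phase ansatz
# over N qubits, and compare to the full 2^N amplitude description.
from math import comb

def ansatz_params(N: int, order: int) -> int:
    # One coefficient per subset of qubits of size 1..order.
    return sum(comb(N, k) for k in range(1, order + 1))

N = 53
for order in (2, 3, 4):
    print(f"order {order}: {ansatz_params(N, order):,} parameters vs 2^{N} = {2 ** N:,}")
```

At order 4 over 53 qubits this is about 3×10^5 parameters, versus roughly 9×10^15 amplitudes, so the ansatz is indeed polynomially small; whether it reproduces RCS statistics is the open question the commenter is betting on.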

154. Scott Says:

Physics student #153:

1) I notice that you never responded about a fault-tolerant QC running Shor’s algorithm. Do you believe that that’s fundamentally possible, or not? If not, what physical principle is going to come in and prevent it? Will you agree that the discovery of that principle would be a revolution in physics?

2) As I said in the post — which it seems I might as well not have written, for all the impact it had! — we knew from the beginning that Random Circuit Sampling, with a constant noise rate, was extremely unlikely to provide a scalable exponential speedup without error-correction. It was just that we couldn’t rigorously prove it for the full range of possible circuit depths.

3) Your comments about 2nd, 3rd, etc. order correlations seem to show a fundamental misunderstanding of the way RCS experiments are validated. Yes, the low-order correlations are easy to approximate classically — which is exactly why we don’t use them! Instead, we look at the full output that’s actually sampled, and challenge a skeptic to produce a sample from nearly the same probability distribution using a fast classical algorithm. A decade ago, that was just an idea; now it’s actually been done experimentally. So, as even Gil Kalai recognizes, the bar for skeptics is now higher. It’s no longer enough to give a priori appeals to personal incredulity; now you have to explain how the actual results were actually obtained without a 2^53-dimensional Hilbert space.
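The validation criterion Scott describes can be illustrated with a toy model (my own sketch, not Google's actual pipeline; using exponential random weights as a stand-in for Porter-Thomas output statistics is an assumption): the linear XEB score D·E[p(x)] − 1, averaged over samples, is near 1 when sampling from the ideal distribution p and near 0 for uniform random guessing.

```python
# Toy linear cross-entropy benchmark (XEB) demonstration.
import random

random.seed(0)
D = 2 ** 14  # toy "Hilbert space" size; real experiments use 2^53

# Fake a Porter-Thomas-like output distribution: exponential random weights.
weights = [random.expovariate(1.0) for _ in range(D)]
total = sum(weights)
p = [w / total for w in weights]

def xeb(samples):
    # Linear XEB: D times the average ideal probability of the samples, minus 1.
    return D * sum(p[x] for x in samples) / len(samples) - 1

faithful = random.choices(range(D), weights=p, k=20000)  # sampling from p itself
spoofed = [random.randrange(D) for _ in range(20000)]    # uniform guessing

print(f"XEB, sampling from p: {xeb(faithful):.2f}")  # close to 1
print(f"XEB, uniform spoof:   {xeb(spoofed):.2f}")   # close to 0
```

The point of the benchmark is that pushing the score above 0 requires knowing which bitstrings the ideal distribution favors, which is what a spoofer must somehow compute classically.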

155. Andrei Says:

“It’s meaningless to say that ” the result of Alice’s measurement is send ( instantly or not) to Bob” with A and B being spacelike separated .
There’s no way to define an absolute temporal order between spacelike separated events in special relativity.
Information can be transferred only between events that are timelike or null- separated ( for example, event A can transmit a signal to event B, if B is *inside* or *on* the future light cone of A and that’s true both for Special And General Relativity).”

I fully agree that if SR is true what you are saying above is also true. In fact this is the basis of my argument that QM + the assumption that the measurement results are not both predetermined leads to a contradiction with relativity.

Let’s take a look again at the EPR-Bohm setup. In regards to how the results are generated we have 3 logical options:

1. Both results (at A and B) are random.
2. One result is random (say at A).
3. None of them is random.

Hopefully, you agree that no other option exists.

Let’s see now which option corresponds to the standard/Copenhagen view.

Option 1 is falsified by experiment, since the probability of agreement between 2 random spin measurements is 0.5. The experiment shows a perfect agreement, probability 1.

Option 3 corresponds to the hidden variable interpretations (both results are predetermined).

So, like it or not, you HAVE TO accept 2. You cannot say you disagree with 2. and yet agree with standard QM. It’s logically contradictory.

OK, but option 2. introduces an asymmetry between A and B labs. One has to measure first (the one that represents the genuinely random result) and the other is determined/caused by the first. Again, this is not an assumption, it is the inevitable outcome of accepting standard QM. So, standard QM is incompatible with relativity. The only way to make QM compatible with relativity is hidden variables.

“And there is the no-signaling theorem in Standard QM.”

Yes, and that theorem is completely, utterly irrelevant as far as compatibility with SR is concerned. You cannot use entanglement to communicate a meaningful message because you cannot control the signal, not because there is no signal. But SR couldn’t care less about your ability to control the signal. SR does not distinguish between signals that can be controlled by humans and those that cannot. The only aspect SR is concerned with is the velocity of that signal, which, in the EPR-Bohm setup, has to be infinite.

“So *standard* QM is “non local” only in that subtle sense that’s in accordance with Relativity!”

False, see above.

“On the contrary, in non local hidden variable theories ( like Bohmian mechanics), there’s a need for some “absolute reference frame” and that’s not in accordance with the relativistic framework. That’s why these hidden variable theories are called explicitly ” nonlocal”.”

Indeed, they are honestly admitting this problem and they are trying to work around it. Standard/Copenhagen QM is simply in denial. It faces the same problem as Bohm, but denies it based on a variety of red-herrings, like the non-signaling theorem.

“Local hidden variable theories have been already rejected by ( Nobel prize winning) experiments that tested Bell’s inequalities. Only totally implausible miracles ( as in “superdeterminism”) could save the day for local hidden variables…”

Well, I disagree that superdeterminism is implausible. Superdeterminism means that the source and detectors have correlated physical states. There is nothing implausible about that. Nature shows us plenty of distant systems that have correlated states. What’s so special about the systems involved in Bell tests?

156. Ernesto Galvão Says:

Regarding the results of Aharonov et al., on simulation of noisy random circuits. The efficient simulation exists because Pauli paths with high Hamming weight contribute exponentially little, so that one can have a good approximation by summing a reasonable number of low-weight paths. Do you have any knowledge, or intuition, about the distribution of path weights for more structured circuits of interest? I was wondering if the Feynman sum over paths using Pauli bases could be a useful approach for practical classical simulation of general circuits.

157. Dimitris Papadimitriou Says:

Andrei

I’m afraid that these discussions about EPR/ non locality are going around in circles ( and Scott will soon start losing his patience 🙂), so I’ll try to summarize it here:
– The statistics in EPR experiments show correlations that are in accordance with QM ( note here that I won’t adopt a specific interpretation. By *standard QM* I mean no modifications or substantial deviations ).
Now, roughly speaking, these correlations are stronger than those allowed by “local deterministic hidden variable” theories and real experiments confirm that.
Why’s that so? Because “local” theories are restricted by the causal structure ( in special relativity this is the fixed light cone structure), so they cannot do the job!

– Standard QM is fine with SR. The impossibility of using EPR for faster than light signals is not just an “engineering” problem as you suggest, it’s fundamental!

– You seem to believe that a local, strictly deterministic theory is enough to give the correct statistics, but it is not!
Superdeterminism is not just “strict determinism”: its premise is that the statistics that we see are supposedly “wrong”, because some weird and unexplained conspiracy in the initial conditions ( or something like backwards causality, which also presupposes restricted boundary conditions) is messing up our experiments in such a way as to give us “the wrong impression” that QM is correct, while it’s “actually” not!
So, superdeterminism is “strict determinism” plus the assumption that only a specific subset ( of measure zero) of initial conditions is allowed, cooked up as if Nature wants to cheat us, somehow….
Even slight deviations from these infinitely fine-tuned initial conditions will give you ( especially in a real universe) different statistics, so no “apparent” agreement with QM anymore…
Such conspiracies and implausible coincidences are comparable only with Solipsism or the… Simulation hypothesis. Actually, the latter is less implausible than the miracle needed for superdeterminism, and that says it all.

158. Physics student Says:

I’m not misunderstanding what’s being done in RCS. Again, physicists approximate the probability distributions of random quantum processes all the time. You can have a huge physical quantum system. You assume the probability distribution of the canonical ensemble, for example. And you get good results about the behavior of that probability distribution. Usually it involves looking at only low-order corrections, assuming average values, and all kinds of tricks, but the end result is that you know a lot about an otherwise impossible-to-calculate probability distribution. And it works surprisingly well in practice.

So you’re left with two options:
– RCS experiments are somehow different from all the other things physics manages to approximate. In all the ways that configurations of atoms specify a random quantum system, there’s some inherent physical structure that’s missing in RCS. This answers your question 1 in the positive (which you probably object to).

– RCS experiments are no different from random quantum systems in physics. This forces you to accept that physicists have had surprisingly accurate approximations for these for many years. It’s not just the classical approximation – there are approximations all the way. In a world where random quantum systems were a hard problem, physicists would have been jobless by now. It just stands in contrast to the fact that all the things I’m studying work well and give good approximations in practice.

As for Shor’s algorithm, it’s definitely not a random system; there’s no reason to think any approximation would give interesting results.

I think a general quantum computer is solving a much harder problem than nature does.

Nature solves the problem of calculating time evolution of a time independent Hamiltonian, and nature has freedom to pick the basis because we can’t measure the entire world. It also does it for a very specific Hamiltonian, one which has many surprising symmetries.

A quantum computer isn’t even describable by a time-independent Hamiltonian.

I have no reason to think there’s any hope of reduction between them. Please refer me if you know something similar to such a reduction.

Solving time evolution under nature’s constant Hamiltonian is what nature will give you if you somehow manage to engineer a quantum computer. The rest of the reduction is entirely up to you. I’m pessimistic about that. All I’ve seen in my physics courses so far for dealing with time-dependent Hamiltonians is the Dyson series, and it’s much, much harder to calculate than for regular Hamiltonians. I’ve also seen the adiabatic approximation, but that’s definitely not applicable to a quantum computer in general.

I don’t see any reason you would need a ‘shocking’ physical revelation for QC to be impossible. It already implicitly assumes more than nature gives. We’re not missing any physical principle; we were just optimistically over-generalizing what nature actually does. It’s all a broken-telephone problem among humans.

What is interesting to me is finding communication protocols like Bell’s that can be done experimentally, that don’t assume “computational hardness” of very recently defined problems, and that prove interesting lower bounds on the size of the shared state. Bell’s statement stands regardless of advancements in chip fabrication.

159. Andrei Says:

fred,

“I guess you’re implying that, in the many world interpretation, outcomes for each possibility are coming into existence (once the spin decoheres with A and/or B), so, *globally* no information is really being transmitted?”

Standard QM/Copenhagen is not MWI. I didn’t write that with MWI in mind. MWI is a deterministic theory, so that argument does not apply to it.

We do not have a theory of consciousness, so there is no way to tell if the story MWI proposes makes sense. Other interpretations do not need a theory of consciousness since what we experience (particles moving around in 3D space) is taken to be part of the ontology.

MWI presents us with a gap between what the theory postulates (a universal quantum state evolving as described by the Schrodinger’s equation) and what we experience (particles moving in a 3D space). Until this gap is closed by showing how our conscious experience emerges out of the universal quantum state I see no reason to take MWI seriously.

160. Mateus Araújo Says:

Dimitris Papadimitriou #149: Textbook quantum mechanics with wavefunction collapse violates relativistic causality obviously and explicitly. Now whether “quantum mechanics” violates relativistic causality will depend on what your definition of “quantum mechanics” is. The standard approach is a Strawberry Fields one, “Nothing is real and nothing to get hung about”. Because as soon as you start saying what is real you get an obvious violation of relativistic causality.

The only way I know to get around this is with Many-Worlds. Then everything can be real, respect relativistic causality and Bell’s Local Causality.

161. Adam Treat Says:

Andrei, Dimitris,

Since the conversation has somehow arrived back at the topic of superdeterminism I feel compelled to once again point out these two papers:

They purport to address the “insane conspiracy” problem of superdeterminism and show that some superdeterministic theories do not have this feature. I’ve no idea if the papers are correct, but it’d be so superawesome for some superdeterminism critic to have a go at ’em.

162. Physics student Says:

Here’s a quick back-of-the-envelope calculation for what could’ve happened in Google’s Sycamore:

2^54 is ~10^16. Avogadro’s number is ~10^23. Assuming Sycamore weighs around 1 gram, it has O(10^23) particles. That leaves about 7 orders of magnitude between them. (Subtract from that my inaccuracy in guessing Sycamore’s size – they didn’t specify its weight on arXiv, but there was a photograph that looked like it came from a real camera, so gram-scale is OKish.)

It could have gotten the result with completely classical particles too. A real milestone would be when the Hilbert space size exceeds the particle count of the computer (its weight in grams times Avogadro’s number). You know what, I’m even going to place a bet: QC won’t ever reach a good weight/(Avogadro’s number * Hilbert space size) ratio. I probably need to include fidelity in the ratio so they won’t cheat with low-fidelity fake QCs. log2(10^23) ~ 76 qubits. I’ll give myself some leeway because I’m doing extremely gross estimates here and say that a ~100-qubit QC won’t ever outperform a classical algorithm. Here’s a concrete line in the sand for you.

More gross calculation: with around 10^7 particles per qubit, you’d expect fluctuations of order sqrt(10^7) ~ 10^3.5, so an error rate around 10^-3. Here’s my full bet: error probability^2 > weight / (Avogadro’s number * Hilbert space size).

Everything is very grossly estimated, but since the Hilbert space size is exponential it shouldn’t matter if QC is truly possible. They should supposedly exceed this bound exponentially. If they smash this bound exponentially in the future, you win. If they dance around this bound while making wormhole press releases for the next 10 years, I win.
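The commenter's estimates can be redone in a few lines (reusing their own gross assumptions: a ~1 gram chip, hence ~10^23 particles, and ~10^7 particles "per qubit"; none of these figures come from the paper):

```python
# Back-of-the-envelope from the comment above, in code.
from math import log2, log10, sqrt

hilbert_dim = 2 ** 54  # ~1.8e16 dimensions for 54 qubits
particles = 1e23       # ~Avogadro-scale particle count for a ~1 gram chip

# Gap between particle count and Hilbert-space dimension, in orders of magnitude.
print(f"particles / dimensions ~ 10^{log10(particles / hilbert_dim):.1f}")

# Qubit count at which Hilbert-space dimension matches the particle count.
print(f"break-even qubit count: {log2(particles):.0f}")

# Poisson-style fluctuation estimate for ~1e7 particles per qubit.
print(f"relative fluctuation: ~1/{sqrt(1e7):.0f}")
```

This reproduces the comment's ~7 orders of magnitude, the ~76-qubit break-even point, and the ~10^-3-ish fluctuation scale; whether any of these ratios actually constrains a QC is, of course, exactly what the bet is about.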

163. Johnny D Says:

Scott #154, 1: Error correction fixes errors from decoherence, where decoherence is modeled as random unitary operators on a small number of qubits. The correction procedure requires measurements that are projection operators, followed by unitaries that depend on the measurement results.

If the environment can act as random projections as well as random unitaries, does error correction still work? If not, and the environment does act as projections, wouldn’t this doom Shor’s algorithm? Would that be revolutionary? Or would it just mean that any large-dof system acts as a measurement device?

164. Mateus Araújo Says:

Adam Treat #160: I “had a go” at an earlier paper of Hossenfelder here, and she completely ignored my criticism. I see no reason to waste my time with her papers ever again.

165. Gali Weinstein Says:

Another comment on ER = EPR. You mentioned Dennis Overbye’s article. Overbye wrote another article on ER = EPR: “Black Holes May Hide a Mind-Bending Secret About Our Universe”. In this article, Overbye writes: “Einstein probably never dreamed that the two 1935 papers had anything in common, Dr. Susskind said recently. But Dr. Susskind and other physicists now speculate that wormholes and spooky action are two aspects of the same magic and, as such, are the key to resolving an array of cosmic paradoxes”.
Susskind writes in his paper, “A holographic wormhole traversed in a quantum computer”: “At the time, these two ideas – wormholes and entanglement – were considered to be entirely separate”. But Susskind suggested a relationship between ER and EPR.
And someone wrote on Facebook that “All this talk of ER = EPR being discovered in 2013 is incorrect. For Peter Holland spotted the link between ER and EPR some 30 years prior to L. Susskind. All of which can be found in his textbook on De Broglie-Bohm theory, ‘The Quantum Theory of Motion’ by Peter R. Holland (1993)”: https://www.facebook.com/photo/?fbid=10160359957488781&set=a.46713623780
Anyway, what Susskind writes about Einstein in 1935 is not true and I uploaded a comment on this:
http://arxiv.org/abs/2212.03568

166. fred Says:

Scott #154

“I notice that you never responded about a fault-tolerant QC running Shor’s algorithm. Do you believe that that’s fundamentally possible, or not? If not, what physical principle is going to come in and prevent it? Will you agree that the discovery of that principle would be a revolution in physics?”

From an engineering point of view, there are often unforeseen limitations emerging from complex interactions of different domains such as physics of materials, chemistry, thermodynamics, mechanics, economics, etc. Those different knowledge fields are themselves at a much higher conceptual level compared to the underlying basic physics they all share, so their own laws/heuristics only hold in specific domains with specific assumptions, and all those various models (often highly non linear) just don’t overlap.

There’s nothing in the basic laws of physics explicitly saying that you can’t build a stable stack of quarters from here all the way up to the edge of space.
But do you believe it can be done? Given an existing stack of quarters, it’s trivial to just add one more quarter to it, and then by recursion assume the stack can be arbitrarily high. But that’s not how system scalability works in practice: at some point, what works for 100 quarters won’t work for 1000 quarters, because new problems are introduced: e.g. the wind will screw things up, and if you build your stack inside a tube with a vacuum, you’re now facing another totally different engineering challenge (create a 100 mile-high tube that can contain a vacuum). And, even without air, you’d have to deal with the effects of tides, plate tectonics, strength limitations in alloys, etc.

There’s also no specific law of physics telling us whether building room temperature super-conductors is impossible.

Same about building a stealth bomber that can travel faster than mach 5 at sea level.

It also goes the other way: a hundred years ago, it would have seemed impossible (given the technology of the day, but pretty much the same laws of physics) to build a gravitational wave detector that could measure changes in distance of around 1/10,000th of the diameter of a proton between two mirrors separated by 4 km.

So, for the vast majority of hard engineering problems (and building a QC *is* a hard engineering problem), the fact that there’s no clear black-and-white basic principle saying it’s impossible isn’t really helping much at all. It wouldn’t be the first time we set out to build something, and then it never happens because various requirements just can’t be met within the same system (often it’s quietly killed because money runs out and people move on, as some new engineering progress makes an entirely different problem more exciting to work on).

167. fred Says:

If my example of a huge stack of quarters all the way to space seems pointless, well, it’s just a simplified version of a space elevator.
Space elevators were super hyped about 20 years ago, but we’re still waiting.
Again, no basic law of physics against it, yet the practical feasibility is an entirely different thing, because it involves so many domains, often with conflicting requirements.

The only time we can easily prove that a machine isn’t feasible in theory is when it involves perpetual motion/free energy.

168. fred Says:

Andrei

“We do not have a theory of consciousness, so there is no way to tell if the story MWI proposes makes sense. Other interpretations do not need a theory of consciousness since what we experience (particles moving around in 3D space) is taken to be part of the ontology.”

Ok, sure, science has always been about taking subjective experience out of the equation, but it’s also a fact that every “measurement” that’s done ends up appearing in consciousness, so from a practical point of view we can’t dissociate the two: eventually the trace of any measurement has to appear in someone’s consciousness, like looking at a dial at the end of a long causal chain.
We just don’t know for sure what “measurement” means when no consciousness is involved.
Assuming that a world without consciousness would be just the same as a world with consciousness is just that… an assumption. And we’ll never know one way or the other, by definition (all we perceive is from subjective experience).

169. not again Says:

Physics student #157: sorry, what exactly is your proposal for approximating (the XEB signature of) RCS?

170. fred Says:

Mateus

“Superdeterminism is unscientific”

I’m not sure that’s true in the big picture.
If you’re able to talk about Superdeterminism, then it’s scientific, no? We could still learn new things by thinking about it (like in which ways it’s wrong or in which ways it’s irrelevant).
For example, the things involved in Superdeterminism seem very similar to prior thought experiments like Maxwell’s demon (which is about deconstructing the statistical-mechanics probabilistic view of the world by looking at how things happen at the particle level). And we now understand Maxwell’s demon way better than we used to, in terms of information theory (the demon itself would need bits to keep track of the system, manipulating those bits requires energy, etc.).

171. Dimitris Papadimitriou Says:

Mateus Araújo #159

I have already summarized in my previous comment #140 the standard meaning of Relativistic Causality.
Leaving aside for the moment GR ( where the causal structure can be complicated and weird with Closed Timelike Curves, Cauchy horizons etc), in Special Relativity, which is suitable for our discussion, things are quite simple: only timelike or null-separated events are causally related! There’s no ambiguity about this.
It’s not a matter of “interpretation”.
It seems that you’re adopting Andrei’s notion of what “relativistic causality/ locality ” means ( that switches freely between signal propagation and mysterious ill-defined ” influences” that supposedly “explain” entanglement ) and that’s exactly one of the usual problems with all these discussions.
If any of us participating in these discussions uses a different notion of causality or locality, then misunderstandings are guaranteed.
That’s why I tried to be as clear as possible ( for a comment on a blog post) and explicitly differentiated causality/locality in special relativity ( no-signaling/ the causal structure is always respected) from the “weak” or mild non locality ( better: “non separability”) that characterizes QM ( which does not allow for superluminal signaling, so it does not violate relativity!).
You know very well that QFTs are working well for any version of standard QM ( consistent histories, Zurek, MW, Relational QM, Copenhagen and so on…).
Besides that, the same “weak” non locality appears also in the literal version of MW that you advocate:
How does the ( causally isolated ) measurement results on the “left” ( Alice) are choosing the “correct” branches with those on Bob’s side so as to give results consistent with the predictions of QM ( well, except for the ” maverick” worlds that deviate significantly from the Born rule )?
The only answer is that they have to be so, for consistency reasons., exactly like the other versions of QM!

That’s not surprising: all these interpretations are based on the same basic formalism.
And, whatever the reality status of the wavefunction is, there’s no doubt that the notion of locality is, by definition, associated with the physical 4-dimensional spacetime.

172. Scott Says:

Gali Weinstein #164: I just read the note of yours that you linked, and found it utterly unpersuasive. You don’t provide even a single example of a quote from Einstein showing him thinking about ER and EPR in the same context.

As for the book by Peter Holland, it shows me that people had speculated decades ago that entanglement might somehow be explainable by wormholes. But there’s no formal context there analogous to AdS/CFT, where an entangled state on a boundary is literally dual to a wormhole state in the bulk.

173. Mateus Araújo Says:

fred #168: I wrote an entire blog post explaining why superdeterminism is unscientific. I’m not going to type it again here. You seem to be replying only to the title, not to the content. There’s nothing unscientific about Maxwell’s daemon. It’s a thought experiment, not a desperate attempt to dismiss inconvenient experimental results.

174. Dimitris Papadimitriou Says:

I saw the first (Donadi/Hossenfelder) paper several months ago.
Alas, it doesn’t evade the need for restricted boundary conditions. If you look more carefully, they assume that information is transmitted from the measurement devices backwards in time to the source (the prepared state), so it’s not surprising that their model “evades” Bell.
Although it’s not explicitly stated in their paper, this classically means that either closed timelike curves are involved (with all the consequences: Cauchy horizons, violation of energy conditions, instabilities…) or, alternatively, signal propagation into the past light cone that’s not compatible with relativity.
In both cases, restricted boundary conditions are also needed if their model is to have a chance of avoiding inconsistencies.
So, you see, the problem with these proposals is more general; it doesn’t have to do with any specific toy model:
Because “normal” (i.e. non-superdeterministic, without conspiratorial restrictions) local hidden-variable theories have already been falsified by experiments, a “Deus ex machina” kind of solution is needed for these theories to have a chance: extremely fine-tuned boundary conditions and implausible coincidences.

175. Mateus Araújo Says:

Dimitris Papadimitriou #169: Special relativity does not care about “signalling”. Any physical effect coming from outside the past light cone is impossible. People do take care to distinguish between these two notions of non-locality. The first, agent-centric notion is indeed called signalling. The latter, which belongs to the ontological level, is usually called action at a distance. Bell, for example, in La Nouvelle Cuisine, argues strongly against adopting no-signalling as our notion of locality.

In your comment #140 you seem very close to understanding how textbook quantum mechanics is nonlocal. Indeed, within special relativity it is meaningless to have an ordering between Alice’s and Bob’s measurements. But that’s exactly what textbook quantum mechanics needs in order to make the correct predictions! You do need Alice to make her measurement first, collapse Bob’s state, and then let Bob do his measurement (or vice-versa). You can’t let both parties do measurements on the uncollapsed wavefunction, otherwise you get nonsensical predictions.

It is emphatically not the case that QFTs work with any version of standard QM. There’s no wavefunction collapse in QFT (and there can never be, as wavefunction collapse explicitly violates relativity), and therefore QFT is incompatible with any interpretation that requires it, like Copenhagen.

There’s no “weak” nonlocality appearing in Many-Worlds. That you get the correct results is a direct consequence of evolving the quantum systems according to the correct dynamical equations. There’s no fiddling with the answer to make it come right for “consistency reasons”. Brown and Timpson wrote about it in detail here, but the gist of the explanation is that Alice and Bob both split, locally, and the correlations only start to exist in the intersection of their future light cones. A fortiori there are no correlations before, just two Alices and two Bobs.
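For concreteness, the singlet correlations at issue (which every interpretation must reproduce) follow directly from the joint Born rule. A minimal numpy sketch, not taken from any paper cited here, with measurement directions as angles in the x–z plane:

```python
import numpy as np

def singlet_correlation(theta_a, theta_b):
    """E(a,b) for spin measurements along angles theta_a, theta_b
    on the singlet state (|01> - |10>)/sqrt(2)."""
    singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
    def spin(theta):
        # Spin observable along angle theta in the x-z plane:
        # cos(theta)*Z + sin(theta)*X, eigenvalues +/-1.
        return np.array([[np.cos(theta), np.sin(theta)],
                         [np.sin(theta), -np.cos(theta)]])
    obs = np.kron(spin(theta_a), spin(theta_b))  # Alice (x) Bob
    return singlet @ obs @ singlet               # expectation <AB>

# QM predicts E(a,b) = -cos(theta_a - theta_b):
assert np.isclose(singlet_correlation(0.3, 1.1), -np.cos(0.3 - 1.1))
```

This is the target all interpretations (Copenhagen, MW, Bohm, …) agree on; the debate above is only about *how* each one gets there.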

176. Gali Weinstein Says:

Scott #170. In my paper: “The Einstein–Rosen Bridge and the Einstein–Podolsky–Rosen Argument: Singularities and Separability”
I provide a letter from Einstein to Michele Besso from 1936:
“Enclosed I am sending you a short paper, which represents the first step. The neutral and the electric particles appear, so to speak, as a hole in space [Loch im Raume], in such a way that the metric field returns into itself. Space is described as double sheets. In Schwarzschild’s exact spherically symmetric solution, the particle appears in ordinary space as a singularity of the type 1 − 2m/r. Substituting r − 2m = u², the field becomes regular in u–r space. When u extends from −∞ to +∞, r extends from +∞ to r = 2m and then back to r = +∞. This represents both ‘sheets’ in Riemann’s sense, which are joined by a ‘bridge’ at r = 2m or u = 0. It is similar to the electric charge.
A young colleague (Russian Jew) and I are relentlessly struggling with the treatment of the many-body problem on that basis”.
The Russian Jew is Boris Podolsky. Einstein worked with Rosen and Podolsky on both problems, the ER and the EPR, as I show in the above paper.
The note I’ve uploaded to the ArXiv is a short summary of my paper: “The Einstein–Rosen Bridge and the Einstein–Podolsky–Rosen Argument: Singularities and Separability”.

177. Scott Says:

Gali Weinstein #174: The new Einstein quote that you’ve provided still says nothing whatsoever about QM in the context of wormholes. It still contains no hint of “ER=EPR.” If Podolsky also thought about ER bridges, that no more makes your case than does the obvious fact that Einstein and Rosen thought about them. But I fear that continuing to point this out to you is futile.

178. fred Says:

Mateus #173

“You can’t let both parties do measurements on the uncollapsed wavefunction”

Who is the “you” being addressed here?
The God at the center of Superdeterminism’s big “conspiracy”? 😛

179. Scott Says:

fred #165, #166: Of course scalable QC could be possible in principle, but too hard in practice in the current state of civilization … just like scalable classical computing was from Babbage in the 1820s till about 1950.

The thing is, though, the QC skeptics are almost never content to make that case and stop there. Instead, like “physics student,” they can almost never resist going further, and educating us about how if we just understood more physics, we’d see why it can never work even in principle.

They don’t know or don’t care that, as soon as they do that, the ball shifts from our court to theirs: now it becomes their job to clearly articulate the physical principles that rule out or “censor” QC, show how to simulate all realistic quantum systems in classical polynomial time, make predictions from their new principles, and explain away experimental results that appear to contradict their predictions.

180. Scott Says:

Johnny D #162: Quantum error-correction works against any errors whatsoever—not just depolarizing noise but unitaries, random projections, etc etc—so long as the errors act on only a small fraction of qubits. That was Peter Shor’s whole insight from 1995, that once you correct Pauli X, Z, and XZ errors, you automatically correct all other possible 1-qubit errors as well, because of the completeness of the Pauli basis.
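The completeness claim can be checked directly: any 2×2 error operator is a linear combination of I, X, Z, and XZ, so a code that corrects those four corrects everything. A small numpy sketch (mine, not Shor's):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
XZ = X @ Z  # proportional to Pauli Y

# Any 2x2 matrix E decomposes as a*I + b*X + c*Z + d*XZ, with
# coefficients given by Hilbert-Schmidt inner products tr(P^dag E)/2,
# since the four operators are mutually orthogonal with norm^2 = 2.
rng = np.random.default_rng(0)
E = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
coeffs = [np.trace(P.conj().T @ E) / 2 for P in (I, X, Z, XZ)]
reconstructed = sum(c * P for c, P in zip(coeffs, (I, X, Z, XZ)))
assert np.allclose(reconstructed, E)  # exact for ANY single-qubit error
```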

For fault-tolerant QC, the discussion is more complicated, and you can indeed invent “conspiratorially correlated” forms of noise that would doom the known schemes. You have to be extremely careful, though, to prevent your conspiratorial noise models from equally dooming scalable classical computation, in contradiction with observed reality! 🙂

181. Scott Says:

Physics student #161: I eagerly take that bet! I bet that, by the end of 2032, it will be generally accepted that a programmable QC with ≥100 qubits has performed a quantum computation that is infeasible to simulate classically with the resources and algorithms available at that time.

I’d like to put some money on this—how about $1000? Can you please email me, so we can agree on the terms and I know who to pay or collect from?

182. fred Says:

Dimitris

“If you look more carefully, they assume that information is transmitted from the measurement devices backwards in time to the source ( the prepared state), so it’s not surprising that their model “evades” Bell.”

On one hand Bell’s Theorem assumes that the hidden variables are not correlated with the measurement settings.
On the other, GR views space-time as an immutable block, and if information is never destroyed, then it’s not surprising that the past implies the future and the future equally implies the past (so information flows “both ways”).
Those two views of the world can’t be true at once, and each one implies some odd conspiracy if you believe the other. E.g. why would a world that’s totally deterministic still look as if randomness is at its core? That makes no sense either.
And even MW can’t reconcile the two perfectly.
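On the first point: the statistical-independence assumption is exactly what pins deterministic local models to the CHSH bound of 2, while QM reaches 2√2. A toy numpy check (my own illustration, not tied to any paper discussed here):

```python
import numpy as np

rng = np.random.default_rng(1)

def chsh_local(n=100_000):
    """CHSH value of a deterministic local hidden-variable model:
    outcomes +/-1 depend only on the local setting and a shared hidden
    variable lam, drawn independently of the settings (that independence
    is Bell's statistical-independence assumption)."""
    lam = rng.uniform(0, 2 * np.pi, n)            # shared hidden variable
    A = lambda x: np.sign(np.cos(lam - x))        # Alice's local rule
    B = lambda y: np.sign(np.cos(lam - y))        # Bob's local rule
    E = lambda x, y: np.mean(A(x) * B(y))         # correlation estimate
    a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
    return E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

S = chsh_local()
# This model saturates the local bound: S comes out close to 2, never
# above it, while quantum mechanics reaches 2*sqrt(2) at these settings.
print(S, "vs quantum maximum", 2 * np.sqrt(2))
```

Superdeterminism evades the bound precisely by letting `lam` depend on the settings, which is the "conspiracy" under debate above.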

183. Gali Weinstein Says:

Scott Comment #176.
As to “QM in the context of wormholes”.
Einstein did not speak of wormholes. John Wheeler invented this word in the 1960s. For Einstein in 1935, the ER bridge was NOT a wormhole, i.e. two black holes connected by a throat. Einstein did not even speak of “black holes”. Please read Einstein’s original papers and letters! For Einstein, the ER bridge only served as a model for elementary particles because he wanted to exclude singularities from the field.
In 1935 Einstein worked on classical general relativity (!) and unified field theory; i.e., the unified field theory (the ER-bridge theory) was to use the methods of classical general relativity, not quantum mechanics, to account for atomic and electrical phenomena. Einstein believed that quantum mechanics could not serve as a new theoretical basis for physics because it is an incomplete representation of real physical systems (he argued this through the EPR argument). Accordingly, quantum mechanics should be adapted to the foundations of the general theory of relativity.

184. Cornellian Physics Student Says:

I can’t believe the audacity of some people who claim that quantum computers are somehow compatible with the laws of physics. This is such a ridiculous claim that it almost defies belief. Quantum mechanics is the most well-tested and well-established theory in all of physics, and it clearly states that the behavior of quantum systems is fundamentally unpredictable. Yet some people still insist on trying to build computers that rely on these quantum systems, even though it is completely impossible. The laws of physics simply do not allow for the kind of control and predictability that is required for a functioning computer. Anyone who claims otherwise is either deluded or trying to deceive others. This is nothing more than pseudo-science at its worst, and it needs to be rejected out of hand.

185. Scott Says:

Gali #181: In that case, it has nothing to do with “ER=EPR” in the Maldacena-Susskind sense, which is a relation between two concepts one of which Einstein didn’t even accept the reality of.

186. Scott Says:

“Cornellian Physics Student” #182: My presumption of good faith in whomever I’m talking to is so strong that it’s eaten me up over the past 6 months to have to abandon it, but—you are not arguing in good faith. You are a terrible human being, if you even are a human being rather than ChatGPT or the like. And similar comments will be ruthlessly moderated out going forward.

187. Dimitris Papadimitriou Says:

Mateus Araújo #173

– If you disagree with my brief (but accurate) definition that causally related events in relativity are those that are timelike- or null-separated, then I can’t do anything about it. The “causal structure”, in the context of SR/GR, is, essentially and briefly, defined by the light-cone structure.
I’m not sure about the definitions used in the context of “quantum foundational” blog discussions, because I admit that I don’t have much experience of such discussions (only in the last couple of years, due to lockdowns and such, have I tried to participate a little…🙂).

– Violation of relativistic causality/locality is a much more serious issue than the “weak” nonseparability of QM. Breakdown of relativistic causality implies inconsistencies; it actually opens Pandora’s box!
– This is definitely not the case with the “weak” QM nonlocality.
(By the way, this is something that people who are concerned about the black hole information problem, for example, tend to overlook sometimes. Loss of unitarity (retrodictability) is actually a minor issue, compared with the massive violation of locality/causality that some attempts to resolve the paradox imply.)

– I have read the paper from Brown et al. They don’t address the basic issues with MW. Nobody does, anyway, so I don’t blame them (the mathematical description of the spacetime geometry of the splitting semi-classical worlds, e.g.).
– The way you address my previous comment about Alice and Bob’s measurements in the context of MW is actually similar to the way that relational interpretations do it. That’s the way I understand “collapse” myself, too (and that’s why I use the scare quotes).
– I don’t agree that the validity of QFTs is “interpretation dependent”. (Alternative theories, like Bohm and GRW, are another story.)

I don’t think that we have any other substantial disagreements…

188. Dimitris Papadimitriou Says:

Fred #180

Yeah, in GR you can “retrodict” the past (assuming you’re talking about a globally hyperbolic spacetime), but you can’t send any physical signals into the past light cone.

189. Andrei Says:

“I’m afraid that these discussions about EPR/ non locality are going around in circles ( and Scott will soon start losing his patience 🙂), so I’ll try to summarize it here:”

The main reason for going in circles is that you are simply ignoring my arguments. I have explained what the logical options are, why standard QM needs to take option 2, and why this option conflicts with relativity. You did not answer any of that. So, there is no need to summarize anything; just go back and deal with my argument. Is there a fourth option? Which is it? If not, do you agree/disagree that standard QM corresponds to option 2, etc.?

“– Standard QM is fine with SR. The impossibility of using EPR for faster than light signals is not just an “engineering” problem as you suggest, it’s fundamental!”

Bohm’s theory does not allow you to use EPR to send faster than light signals. But, by your own admission, Bohm’s theory is incompatible with relativity (requires an absolute reference frame, etc.). So, your claim has been proven wrong again. The non-signaling theorem is irrelevant as far as the compatibility with relativity is concerned.

“Superdeterminism is not just “strict determinism”: Its premise is that the statistics that we see are supposedly “wrong”, because some weird and unexplained conspiracy in the initial conditions”

I fully agree with you that if you define superdeterminism in this way it’s nonsense. The problem is Bell did not define it that way, and I do not accept your definition either. I disagree that superdeterminism implies that the statistics are wrong and I disagree that initial conditions play any role. You are simply making a straw man argument.

190. Andrei Says:

While I do think superdeterminism is correct, since it is the only way to make QM compatible with relativity, I do not think that those papers do it justice. This quote is from the first one:

“An often-raised question is how the hidden variables at P already “know” the future detector settings at D1 and D2. As for any scientific theory, we make assumptions to explain data. The assumptions of a theory are never explained within the theory itself (if they were, we wouldn’t need them). Their scientific justification is that they allow one to make correct predictions. The question how the hidden variables “know” something about the detector setting makes equally little sense as asking in quantum mechanics how one particle in a singlet state “knows” what happens to the other particle. In both cases that’s just how the theory works.”

Hossenfelder admits above that her model can’t explain what it was supposed to explain so she just postulates it. That’s bad. I advise anyone against using Hossenfelder as a source for their information on superdeterminism.

191. Andrei Says:

fred #167

“We just don’t know for sure what’s “measurement” when no consciousness is involved.
Assuming that a world without consciousness would just be that same as a world with consciousness is just that… an assumption. And we’ll never know one way or another, by definition (all we perceive is from subjective experience).”

OK, take Newtonian mechanics. It postulates a 3d space, a time and some particles moving around. This is what we (consciously, how else?) observe. The output of our conscious experience is taken as a brute fact and used as a primitive in the theory. So, you can use Newtonian mechanics without discussing consciousness.

This is how other classical theories, like electromagnetism, or GR work. So, there is no reason to require them to deal with consciousness. The same is true for standard QM, Bohmian mechanics, GRW, etc.

MWI is different. It postulates an entity that looks nothing like what we observe (a universal quantum state evolving in a huge-dimensional space). How do you get from here to what we observe? This is what MWI should address first in order to be taken seriously.

192. Gali Weinstein Says:

Scott #184
Lenny Susskind and Adam Brown wrote in their paper: “The idea of a wormhole dates back to 1935, when Albert Einstein and his collaborator, Nathan Rosen, studied black holes in the context of Einstein’s general theory of relativity. […] In the same year, Einstein and Rosen wrote another paper, this time in collaboration with Boris Podolsky. […] At the time, these two ideas – wormholes and entanglement – were considered to be entirely separate”. I showed in my paper that for Einstein, ER and EPR were not “entirely separate”.
True, I agree that it has nothing to do with “ER=EPR” in the Maldacena-Susskind sense, which is a relation between two concepts one of which Einstein didn’t even accept the reality of. I can edit my paper and add your comment to the paper. Okay? I will do this 🙂

193. Dimitris Papadimitriou Says:

Andrei

– Bohmian mechanics is in tension with relativity because, as the proponents of the theory explicitly admit, it needs an absolute reference frame, so the usual relativistic effects cannot be explained kinematically anymore in their theory. They need to go back to the pre-Einstein days of Larmor, Fitzgerald, Lorentz et al. (good luck with that).
– I have noticed here and there (not exclusively in this comment section) that some people are trying to “redefine” special relativity the way they like (or, more accurately, the way that fits their philosophical prejudices).
Well, special relativity is perfectly well defined; there’s no ambiguity about it, and it’s easy for anyone to check its basic axioms and structure.

– About “superdeterminism”: the violation of “statistical independence” has to be of a very special kind, one that evades the inability of local hidden-variable theories to predict the correct correlations.
As I (and other commenters) have already said, the correlations are stronger than those allowed by such theories and, moreover, they are in agreement with QM. How can you evade that?
Well, if you insist that your theory does not incorporate any “spooky action”, you need a miracle. A “Deus ex machina”: unbelievably contrived boundary conditions/coincidences.
Actually, the situation is perhaps even worse, because even if you contrive such a model, it will probably be unstable, even under arbitrarily small perturbations, so it will break down like a house of cards.
Nothing more to say about that …

194. OhMyGoodness Says:

Not my area of expertise but I don’t understand the need for all these complicated structures. If someone can point me to experimental evidence that precludes the following then much appreciated.

In the case of two entangled photons emitted from a parent photon that then interact with a measuring device, each conforms to a wave function of the system; or, if considered as individual wave functions, they must evolve in a manner that corresponds to the initial conditions (when created). It is the initial conditions that are not known, and no tunnels or superluminal communication are required.

Further, consciousness and decisions have nothing to do with it. If these photons interact with any matter (measuring device, or retina of a lizard person from Zeta Reticuli, or chlorophyll molecule in a plant, or surface of a Dyson sphere, or interstellar light sail), then the initial conditions are revealed, in the sense that the interaction is consistent with the initial conditions. As an example, the lizard people at Zeta Reticuli and humans on Earth would agree on measurements related to the cosmic background, so it’s not a function of consciousness or decisions, and not a patriarchal construct.

I have tried to imagine my experience traveling on a photon with no time evolution at all but can’t reasonably imagine it at all.

I regret that a fuzzy-headed, spur-of-the-moment answer I gave here to a many-worlds advocate some months ago was inconsistent with the above.

195. manorba Says:

“I have tried to imagine my experience traveling on a photon with no time evolution at all but can’t reasonably imagine it at all.”
Whenever I came up with this question, everyone with some physics knowledge told me that it just doesn’t make sense. First of all, with no time evolution there would be no experiencing and no thinking. It would also need a 3d spatial structure where some sort of signals go around; how could you do that at the speed of light? And that’s just throwing away all the known constraints of mass and energy.
The most relevant thing that came out of these discussions is the fact that we travel through time at the speed of light. More than that, everything travels at the speed of light through spacetime. Massless objects like photons have their velocity entirely in the space components; an object at rest has it entirely along the time axis.
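That slogan is the informal reading of the four-velocity normalization in special relativity; a standard identity, sketched here for reference:

```latex
% Four-velocity of a massive particle (metric signature $-{+}{+}{+}$):
\[
  u^\mu = \frac{dx^\mu}{d\tau},
  \qquad
  g_{\mu\nu}\, u^\mu u^\nu = -c^2 .
\]
% At rest, u^mu = (c, 0, 0, 0): all of the "speed" is along the time
% axis. For a photon, d(tau) = 0, so u^mu is undefined; null worldlines
% instead satisfy g_{mu nu} dx^mu dx^nu = 0, which is why the slogan is
% only heuristic for light.
```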

196. JimV Says:

Fred @ 167: “We just don’t know for sure what’s “measurement” when no consciousness is involved. Assuming that a world without consciousness would just be that same as a world with consciousness is just that… an assumption.”

As I understand it, the C60 (60-carbon-atom molecule) double-slit experiment was done in a vacuum at low temperature to reduce the chances of an extraneous interaction, since the more atoms (the more macroscopic), the more chances there are for extraneous interactions. They got the interference pattern under that condition, then gradually raised the temperature, and the interference pattern gradually disappeared. The explanation was that the C60 molecules began to emit/absorb thermal photons, which interacted with the experimental apparatus in a way that determined which slit the molecule would go through. To my knowledge, no human consciousness experienced those interactions (only their result). There have also been machine-run double-slit experiments.

So in my world view, the local universe got along without us fine for around 13 billion years in our time frame, quantum mechanics and all, and will continue to do so after, in a blink of a cosmic eye, we are all gone. That has always seemed the sensible view to me. I also think an approximately-spherical Earth with central attraction is more sensible on its face than a flat earth resting on an indefinite number of turtles. I acknowledge that sensibility is an assumption (but a sensible one).

197. James Cross Says:

#194 JimV

Regarding “consciousness”

“So in my world view, the local universe got along without us fine for around 13 billion years in our time frame, quantum mechanics and all, and will continue to do so after, in a blink of a cosmic eye, we are all gone.”

“world view” – synonym for our conscious perspective on “world”
“local universe” – conception
“13 billion years” – mental abstraction from various observations of regular natural phenomena
“time frame” – does the universe know of a “time frame”?
“quantum mechanics” – derived from human mental activity as description for phenomena

Right, no consciousness required. We can’t even talk about it, describe it, study it, or even participate in [whatever it is] without mental activity. So it’s funny to think of it as so completely independent of [whatever it is].

198. Gil Kalai Says:

(Scott #22): “Gil Kalai believes that (what I would call) conspiratorially-correlated noise will come in and violate the assumptions of the fault-tolerance theorem, and thereby prevent quantum error-correction from working even in principle.”

Let me try to demystify my “conspiratorial correlations” (as Scott referred to them) that I studied before 2012. My motivation was to try to understand a general mathematical principle that explains and/or manifests the failure of quantum fault-tolerance. The principle I proposed in 2005/6 was:

(Correlation principle:) “Cat states on two qubits are subject to correlated errors”.

1. The correlation principle for gated qubits is part of the standard assumptions on noisy quantum circuits.

2. The correlation principle holds for NISQ systems. (Do you agree, Scott?) In other words, suppose you take a NISQ computer (like Sycamore) and run some computation that causes two far-apart qubits A and B to be in a cat state. Then when you analyze the noisy state, you will discover that the errors for A and B are correlated. (Here we even allow measuring the other qubits.)

3. Quantum fault-tolerance would refute the correlation principle.

4. The correlation principle (as an assumption on the noise) causes current fault-tolerance schemes to fail.

5. For sufficiently small error rate the correlation principle still allows log-depth quantum computing.

6. It follows from 2. that the correlation principle is not refuted by the Google experiment or any other NISQ experiment. (The impressive claimed statistical independence regarding fidelities in the Google experiment notwithstanding.)

7. The correlation principle is related to more general mathematical statements regarding the relation between “signal” and “noise” in (realistic) noisy quantum systems, and to ways of modeling the dynamics of such systems. There is more to be done in this direction, and in particular to express the relation between “signal” and “noise” in representation theoretic terms.

8. If my main current claim asserting that “the fault-tolerance constant is unattainable” is correct then this would also lead to the validity of the correlation principle.

9. Over the years, the correlation principle or similar mathematical ideas were proposed by several mathematicians as their instinctive concern regarding quantum computing.
200. Johnny D Says:

While the wormhole language led to misunderstanding of that work’s contribution to science, I am still confused about why the non-Abelian exchange-statistics paper is not a major result.

The way I read the exchange-statistics paper and its theoretical companion (also recent) is that there is a new path to universal topological QC by ‘deforming’ the surface code. This theoretical discovery was then confirmed on Google’s hardware, and it is compatible with the usual error correction in the surface code. Thus, a new path to fault tolerance. I read it to mean that once Google has hardware satisfying some reasonable threshold, they will have a clear path to a scalable fault-tolerant realization. Why do you see this as just another small-qubit calculation? Is it not more?

201. Nan Jiang Says:

(Another) long time reader and first-time commenter here!

Re “wormholes”: I am left more confused after reading this post & the discussion than I was before. As an outsider, when I first read the news I was deeply suspicious, but I also guessed an interpretation under which there is some legitimacy to the claim. Then I read Scott’s snippets (“pencil and paper”) and thought the wormhole claim was likely nonsense. And then I read this page and feel my original reaction might have a chance of being not too wrong…?

Before I start: let me first totally agree that the PR/hype is harmful when we consider a lay audience. BUT, for me (and likely for many others who commented with a similar sentiment), that was the first thing I ruled out. What’s unclear yet extremely curious to me is whether the following claim is true:

Question 1—There exists a set of reasonable (but possibly ENORMOUS) metaphysical presumptions and quantum gravity conjectures (possibly beyond what is commonly accepted), under which the experiment has brought a wormhole-ish object into physical existence more than what a classical simulation or pencil & paper can do.—

I still find no definitive answer to Question 1. Note that it is different from

Question 2—Do you think a wormhole has been brought into existence?

Most discussions that dismissed the wormhole claim were answering Q2, since they were based on the person’s *own* presumptions. That said, I would guess they would also say no to Q1, and I want to understand why. Of course, the burden is on the “believer” (which I am not) to come up with the set of those presumptions and positions. I know I will do an extremely poor job in this, but it seems to be along the lines of what’s mentioned in #33, that “*any* EPR correlated system is connected by some sort of ER bridge”. I am sure someone else can unpack & reveal the hidden presumptions much better and more clearly than me.

Final question: if you say no to either Q1 or Q2, at which level do you reject the presumptions/positions behind the wormhole claim?

(A) The wormhole claim requires a presumption that I personally find unjustified, but I agree that it is still a respectable position to take; it’s just not mine. (Then your answers are Yes to Q1, No to Q2.)

(B) The wormhole claim requires a presumption that I personally find unjustified. Although some respectable people hold that position and there is a bit of legitimacy in it, it is in such direct conflict with some other principles/positions I hold dear that I think we must disregard it.

(C) The wormhole claim requires a presumption that I personally find unjustified. Although some respectable people hold that position, I think they are completely misled.

(D) The wormhole claim requires a presumption that I personally find unjustified. The fact that some respectable people are making the wormhole claim is simply because they didn’t think about the presumptions behind it, which they themselves would also find ridiculous.

???

202. Parisian Says:

Scott 183:

While it is true that Einstein did not accept the reality of certain concepts, such as the concept of entanglement, it does not necessarily mean that the “ER=EPR” connection proposed by Maldacena and Susskind is invalid. In fact, the “ER=EPR” connection is a theoretical result derived from the principles of quantum mechanics and the theory of relativity, which are both well-established theories that have been extensively tested and confirmed through experiments. Just because Einstein may not have accepted the reality of one of the concepts involved in the “ER=EPR” connection does not mean that the connection itself is not valid.

203. JimV Says:

James Cross : [“time frame” – does the universe know of a “time frame”]

In case there is some confusion there I was referring to our relativistic inertial frame, although it has been said the universe has an arrow of time based on its entropy growth.

If you are referring to whether a tree in a forest exists if no human has ever semantically characterized the event, again it seems only sensible to me to assume that it does, and all our semantics does is describe our experiences of nature, not create them. You are entitled to believe the opposite, but it equates to flat-earthism in my view as stated previously.

Arguably the whole issue is off-topic per our host’s wishes and ought not to have been raised or responded to, for which I apologize for my part and will make another donation to Ukraine as a fine.

204. JimV Says:

I will pay a second fine, if this makes it through moderation, to state that AlphaGo experiences various outcomes of Go moves and creates its own theory of how to win a Go game, similar to how humanity arrives at its various theories. I am happy to call that a form of consciousness, having previously defined consciousness as the operation of computational (including logic) ability, memory, and goals that drive decision-making. Where I get into an argument is with those who insist that only human brains and human theories are True Scotsmen.

205. Scott Says:

Gali Weinstein #190: Yes, you should at least edit your paper, if you don’t retract it or change its entire message!

206. Scott Says:

Johnny D #197: You’ve now asked me some version of that extremely specific question like 10 times!

Speaking generally, the reason I’m less excited than you are by yet another quantum computing experiment “demonstrating some phenomenon for the first time,” is that in nearly all such cases, we knew perfectly well what the result of the “experiment” was going to be before running it. In other words, these things are “tech demos” (a cynic would say: “publicity stunts”) more than they are experiments in the usual scientific sense.

So then, the only remaining question is what hardware advances were made that let the demos be done at all. In some cases (e.g., sampling-based quantum supremacy experiments), major hardware advances were needed. In most cases, though (wormholes, nonabelian anyons, etc.), it’s just previously existing hardware that was repurposed for a new demo, meaning that even the fact that the experiment was doable comes as no surprise to anyone following the subject.

207. Ava Says:

Scott 203,

How dare you say that these quantum computing experiments are merely “tech demos” and “publicity stunts”! The fact that we often have a good idea of what the result of an experiment will be does not make the experiment any less valuable or important. In fact, these experiments are crucial for advancing our understanding of quantum mechanics and for developing new technologies based on this understanding.

Furthermore, even if the hardware used in some of these experiments was previously existing, that doesn’t mean that the experiments themselves were not significant. It takes a great deal of skill and expertise to design and conduct these experiments, and they can still provide valuable insights and information, even if they are not groundbreaking in the way that some other experiments might be.

So to dismiss these experiments as nothing more than “tech demos” is not only unfair, but it shows a lack of understanding and appreciation for the hard work and dedication that goes into conducting them.

208. Scott Says:

Nan Jiang #197: Let’s see if this short answer helps. Yes, there’s a whole package of metaphysical assumptions that you could accept, under which a quantum computer simulating a crude version of the SYK model would “create a wormhole” in a stronger sense than a mere classical computer simulation or whatever would do so. However, that metaphysical package is so enormous, contentious, and non-obvious that the fact that the little experiment was actually done is an almost comically irrelevant addendum to it! Again, the experiment was not especially difficult or impressive by 2022 standards, nor did we learn anything from it that we didn’t already know. Given that, to me it seems more intellectually honest to just debate the metaphysical package directly (if you’re into that sort of thing), rather than using the “prestige” and “authority” of having done the experiment to “illegitimately jump to the front of queue” of the metaphysical debate, as if having done this little demo makes one more entitled than anyone else to an opinion in the debate. If that makes any sense.

209. Scott Says:

Ava #204: I said they were tech demos that a cynic would call “publicity stunts”! My own opinion is that they span an enormous range, from tech demos that are extremely impressive, eminently worth doing, and worth Nature covers and maybe even Nobel Prizes, even if those of us who accept QM already had a pretty good idea what their outcomes would be (e.g., the loophole-free Bell tests and quantum supremacy demos), to, at the other end of the range … well, yes, publicity stunts.

210. Scott Says:

Parisian #199: What could possibly have made you think that Einstein’s skepticism of QM would render ER=EPR invalid in my eyes?!? Whether ER=EPR is valid or not, that well-known bit of scientific history clearly has no direct bearing. Here at Shtetl-Optimized, there are no infallible authorities … not even Big Al himself!

211. Ava Says:

Scott 206:

That’s not what you said. You said that nearly all of these experiments are just tech demos and publicity stunts, without acknowledging the value and significance of many of them. You can’t just brush off the hard work and dedication of scientists who conduct these experiments by calling them “publicity stunts.”

Furthermore, the fact that some of these experiments may not be groundbreaking or revolutionary does not make them any less important. Every experiment adds to our understanding of quantum mechanics and helps us move closer to developing practical applications of this technology.

So, if you want to have an intelligent and respectful conversation about these experiments, you need to show more respect for the scientists who conduct them and the work they do. Dismissing their efforts as nothing more than “tech demos” is not only unhelpful, but it’s also unfair and ignorant.

212. Scott Says:

Ava #208: It’s exactly what I said—scroll up and read it!

You’re hereby banned from this blog for 1 month, on the ground that being gaslit about what I wrote just a few comments up raises my cortisol to unacceptable levels.

213. Samuel T. Says:

Scott #205:

Wow, way to completely miss the point. The fact that the experiment was done is not about trying to “illegitimately jump to the front of the queue” in some metaphysical debate, it’s about providing actual, tangible evidence for a concept that has previously only been theoretical. It’s not about “prestige” or “authority”, it’s about science and progress. And if you can’t see that, then maybe you should stick to debating your precious metaphysical assumptions and leave the real work to the rest of us.

214. Parisian Says:

Scott 207,

You may not believe in the authority of Einstein, but that does not change the fact that his skepticism does not invalidate the “ER=EPR” connection. You cannot simply dismiss a well-established theory because it does not align with your personal beliefs. The “ER=EPR” connection has been extensively tested and confirmed, and it is based on the principles of quantum mechanics and the theory of relativity – theories that have been proven to be correct time and time again. You cannot simply ignore the evidence and facts just because they do not fit your preconceived notions.

215. Scott Says:

Samuel T. #210: But there isn’t new evidence here for any previously theoretical concept! The simulation got exactly the result that we knew it would get from having calculated the answer classically. Dan Jafferis, one of the coauthors, was perfectly clear and explicit about that point in his talk at IAS.

The one new thing we learned, is that a crude mockup of SYK can indeed be compressed into 9 qubits. Again, that wasn’t learned from doing the experiment, but only from designing it. And it has no bearing—none, zero—on how SYK, wormholes, or any of the other theoretical concepts being simulated relate to our universe.

216. Scott Says:

Parisian #211: TROLL DETECTED.

You’ve been trying to stress me out, through hostile and illogical comments written under multiple aliases. You are permanently banned from this blog. Anything that I suspect is from you—whoever you are—will henceforth be ruthlessly deleted.

217. Mike Says:

Scott #213: These people/person are transparent and so annoying.

218. Mitchell Porter Says:

DR #151 asks what the topology of the one-dimensional wormhole could be.

I believe that topologically, for both non-traversable and traversable wormholes in 1+1 dimensions, it’s just R^2.

When you only have one dimension of space, to have a non-traversable “wormhole”, I believe it’s something like this:

… x _ _ _ x …

Each x is an event horizon when approached from the dotted region, in the sense that an influence can propagate into the dashed region, but not out of it.

The dashed region is then the “wormhole”. That is, it is a segment of the 1-dimensional space, that lies on the other side of two event horizons.

To get a traversable wormhole, there has to be a perturbation of the non-traversable wormhole that makes the dashed region temporarily traversable.

At least, this is what I infer from a passage in Gao-Jafferis-Wall (arxiv:1608.05687):

“The wormhole is only open for a small proper time in the interior region. This is quite different from the usual static wormhole solutions which do not have event horizons”

I warn you that my “understanding” could be wrong in certain particulars. This is just what I’ve picked up from browsing the literature.

You might learn something from talks that Maldacena has given on wormholes in AdS2 space, e.g. one called “AdS2, SYK and wormholes”.

219. Shmi Says:

Scott, was there any discussion of gravcat-type setups at the workshop? Seems like one of the few low-energy cases where the result is not completely predicted by quantum mechanics.

220. Ajit Says:

Nan Jiang #198, also Scott #205 and Scott #212:

I personally found Dr. Matt Strassler’s blog post dated December 6th [^] very informative.

Especially given the fact there were only 9 qubits, and keeping in mind the “work-flow” mentioned by Strassler, perhaps it’s better to raise the following question:

What was it precisely that they showed? Was it demonstrably closer even to just SYK, as compared to the good old quantum tunnelling?

Best,
–Ajit
[Note to Scott: For this reply, I’m using a different email ID than the Yahoo! ID which I usually use, because Yahoo! *email* services continue to be down in India.]

221. OhMyGoodness Says:

Wow. Welcome to the neo-modern world, where the old dusty scientific method and pursuit of scientific truth has too often been replaced by pursuit of self-interest and internet mob rule. Where hordes of claqueurs roam the internet attacking self-evident truths that conflict with some bizarre belief, and recitations of facts are deemed dangerous content.

As best I can tell, this results from the emotional amplification impact of the internet coupled with an educational system that has become overly ideological. The amygdala has reasserted control over the prefrontal cortex for most of society, so devolutionary forces are ascendant.

Heaven forbid that anyone points out that numerical simulations of reality are distinct from the real system being simulated. It’s now taboo.

222. I Trolled You Says:

Scott:

I’m the guy who’s been trying to stress you out with multiple infuriatingly stupid comments under multiple aliases.

Two years ago I came down with long COVID that’s given me debilitating fatigue and brain fog. My academic career in STEM has since completely fallen apart. I’m no longer able to do academic work, my one passion in life. Now I’m 25 years old, no hope for the future, feel brutally sick every fucking day, living with my parents again because I’m unemployable and have no source of income. I don’t have a social life anymore. I have nothing to live for, basically.

I enjoy getting a rise out of you because it’s funny and entertaining. It distracts me from my sickness. I don’t even have the energy to watch TV shows or movies.

My life sucks and I don’t apologize. It was funny and amusing and I’ll probably keep on doing it. Good luck dealing with it.

223. Ted Says:

Scott #205,
I apologize for this comment that simply draws attention to a previous comment – I try hard to avoid making these, out of respect for your time. But I’d be very curious to hear your take on comment #93 above, which asks whether the duality between the two physical descriptions of this simulated process can be implemented efficiently – i.e. whether one can efficiently convert back and forth between the two descriptions as the level of detail in the description (e.g. the number of qubits employed in the simulation) gets larger. It seems to me that this is a very concrete question (which probably has a simple yes/no answer) that would actually shed some light on the “metaphysical package” that you mentioned.

If I understood your July post correctly, then the “classic” AdS/CFT dictionary is polynomially efficient to apply when one uses (polynomially many copies of) the boundary CFT state to reconstruct events outside of an event horizon, but it is exponentially complex to apply when reconstructing events inside an event horizon. (I know that this particular Sycamore experiment employed a much simpler variant of the gravity/QM duality, but I assume that fact is probably still true in this case?) But I’m unsure about the computational complexity of applying the duality to a situation with a traversable wormhole, which seems like an edge case in between those two extremes.

224. Scott Says:

I Trolled You #219: Yes, my previous troll was also someone who found the only meaning in his life from inflicting his own misery on others. The better I understand what I’m up against, the easier it will be in the future to resist the temptation to engage.

225. Scott Says:

Ted #220: The whole point of the 2019 paper by Bouland, Fefferman, and Vazirani was that there are situations (not necessarily for SYK in particular, but for holography in general) where the boundary-to-bulk map seems not to be implementable in polynomial time.

Mitchell Porter #215

In the (Gao/Jafferis/Wall) paper that you mentioned, there is one thing that seems missing:
How do they avoid the presence of an inner Cauchy horizon (at least for the short time interval that their wormhole is open internally)? There has to be one, if the causal structure is respected, even in that simplistic toy model. Or perhaps not?

Scott, if you ever get tired enough of these people, perhaps you could consider moving to Woit’s method of moderation. Basically, he tries to only allow those comments he personally finds interesting or worthwhile for advancing the conversation or topic he is blogging on. Instead of trying to ban the worst of the worst, make it a competition to only allow the best of the best? Maybe you could try it for a blog post or two and see if it doesn’t decrease your stress levels while still providing good blog interaction.

I suggest this with full knowledge my own comments might not pass muster. This is *your* blog after all and I think it terrible others can come and pick on you like this. I respect your recent decisions to stick up for yourself. Stay sane friend.

228. Ted Says:

Scott #222: Yes, that’s exactly what I said in my previous comment (that the AdS/CFT dictionary “is exponentially complex to apply when reconstructing events inside an event horizon”). I was just wondering whether the specific scenario of the traversable wormhole simulated by the Sycamore experiment was one of those situations where the dictionary is not implementable in polynomial time. That may not be an easy question to answer, of course.

229. Scott Says:

Ted #225: Sorry, I don’t know. Partly, of course, the answer would depend on what exactly we took the appropriate generalization of this 9-qubit experiment to be.

230. SR Says:

I read through some of the above comments on validity/interpretations of QM and superdeterminism, and I still feel like there is *some* deep philosophical mystery in the vicinity.

As pointed out by Mateus Araújo above, QFT is local, so it seems to me that the only ways to reconcile this with the nonlocality of standard QM evidenced by the Bell test are: (1) QFT being totally incorrect in an easily-measurable way, (2) Many-Worlds, so that the appearance of non-locality is only due to our observable universe residing in a slice of the “true” wavefunction, (3) superdeterminism.

There is something disquieting about each of these possibilities. (3) is the easiest solution but, as Scott has said before on the blog, it lets you explain away *anything* as it questions preconditions for doing science. Thus, I don’t believe superdeterminism is a satisfactory explanation for the results of Bell tests. That said, there is something profoundly weird about how absolutely everything in our universe is allegedly generated by just one (still-undiscovered) QFT. It’s very weird that we feel that we have the free will to perform experiments and the consciousness to feel like we understand them, when, to our best understanding, everything is predetermined by the laws of physics. In this broader sense, superdeterminism does weirdly seem to be true, unless we can find a causal role for free will/consciousness that modifies existing physics.

(2) is nice in that it can explain non-locality in an intuitive way. However, even other than the philosophical excess of positing so many worlds, there are concrete problems that I don’t think anyone has answered. My understanding is that the decoherence program may well explain where the Born rule probabilities come from. However, it seems no one understands why we only have the experience of one world– why does consciousness branch? Another problem is that unitary evolution of a generic pure state |psi> keeps the density matrix |psi><psi| pure, so it can never become exactly diagonal. At best, decoherence explains why the density matrix of the universe is approximately diagonal for most times, which then calls into question what the implications of the nondiagonal elements are for consciousness, and whether the branches will recohere at some point. Even aside from these points, it seems awfully convenient that we can dispense with the one clear signal of nonlocality we see in QM by postulating locality in an extended state space– so convenient that it feels like cheating.
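To make the approximate-diagonality point concrete, here is a toy sketch (a standard textbook-style model; the angle theta and the number of environment qubits are arbitrary choices of mine): each environment qubit records only partial which-path information, so the system qubit’s off-diagonal element shrinks exponentially with the size of the environment, but never becomes exactly zero.

```python
import numpy as np

def reduced_offdiag(n, theta=0.4):
    # System qubit starts in (|0> + |1>)/sqrt(2). Each of n environment
    # qubits ends in |e0> if the system is |0>, or in
    # |e1> = cos(theta)|e0> + sin(theta)|e_perp> if the system is |1>.
    # The off-diagonal element of the reduced density matrix of the
    # system qubit is then (1/2) * <e0|e1>^n = (1/2) * cos(theta)^n.
    e0 = np.array([1.0, 0.0])
    e1 = np.array([np.cos(theta), np.sin(theta)])
    return 0.5 * np.dot(e0, e1) ** n

for n in (1, 5, 20, 100):
    print(n, reduced_offdiag(n))   # shrinks fast, but is never exactly 0
```

The exponential decay is what “for all practical purposes diagonal” means; the strictly nonzero remainder is exactly the worry about recoherence raised above.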

(1) would obviously be a huge deal in physics. I do have some sympathy for the gravitational collapse or Wigner-von Neumann (+ panpsychism) interpretations of QM as possible modifications of physics, as they would kill two birds with one stone (i.e. explain collapse by modifying QFT while also explaining quantum gravity or consciousness). But it really feels like wishful thinking for these modifications to be not only true but *also* allow for limited nonlocality in a way that directly allows for Bell tests without postulating many worlds.

231. Mike Says:

Pretty exciting https://arxiv.org/abs/2212.04749: we finally have verification of the initial quantum supremacy benchmark, and it agrees very well with the extrapolated LXEB. Turns out there were no secret quantum gremlins conspiring against humanity after all.

SR #227

There are two distinct notions of “non-locality” in physics:

– The “strong” non-locality (violation of relativistic causality), which has to do with faster-than-light transmission/transportation. This has never, ever, been observed in Nature! Thousands of experiments and numerous observations confirm that, so far, this “strong” non-locality exists only in science fiction!

– The “weak” non-locality (“spooky action at a distance”) that characterizes QM and is usually associated with entanglement/EPR. This is actually very common and has been confirmed by all experiments. Whatever its “explanation” is, it is really an essential part of physics.

There is much confusion (sometimes deliberate) surrounding these two notions of non-locality.
Many people believe that these two notions of non-locality are only approximations (in a more fundamental theory), but this is, so far, speculative.
There is no problem with speculation, of course; many real breakthroughs in science start like that. But I think it has to be clear that, so far, these hypotheses have no experimental/observational support.
The two notions of (non-)locality are distinct and had better not be confused with each other.

QFTs are “local” in the sense that they do not violate relativity! This is manifestly true. It has nothing to do with any interpretation of QM.
I hope that this is clear enough.

233. Andrei Says:

“– Bohmian mechanics is in tension with relativity because, as the proponents of the theory admit explicitly, there’s a need for an absolute reference frame, so the usual relativistic effects cannot be explained kinematically anymore in their theory. They need to go back to the pre-Einstein days of Larmor, Fitzgerald, Lorentz et al (good luck with that).”

Right, but Bohmian mechanics still complies with non-signaling, so the fact that standard QM does not allow signaling via EPR is no evidence that standard QM is compatible with relativity.

I notice that you still didn’t address my argument in post #155.

“– I have noticed here and there (not exclusively in this comment section) that some people are trying to “redefine” special relativity the way they like (or, more accurately, the way that fits their philosophical prejudices).”

I didn’t redefine SR anywhere, in fact I agreed with anything you said about SR.

“– About “superdeterminism”: The violation of “statistical independence” has to be of a very special kind, one that evades the inefficiency of “local hidden variable theories” to predict the correct correlations.”

Once you have any sort of violation of statistical independence in a theory, that theory cannot be ruled out via Bell theorem, since Bell’s theorem becomes invalid in that context. Sure, this does not make that theory right. It’s a similar situation with non-locality. You cannot rule out a non-local theory using Bell, but of course, it doesn’t follow that any non-local theory would reproduce QM. Newtonian gravity for example is non-local, but doesn’t reproduce QM.

So, you are free to propose some other argument against superdeterminism. I don’t understand your point about local hidden variable theories being inefficient. What’s your argument here?
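For concreteness, the bound at issue can be checked by brute force: if statistical independence holds and each side’s ±1 outcome is a fixed function of its own setting, no strategy pushes the CHSH combination above 2, while QM predicts 2√2. A quick sketch (the standard CHSH setup, nothing specific to any particular model discussed here):

```python
import itertools
import math

# CHSH value S = E(a,b) + E(a,b') + E(a',b) - E(a',b').
# In a local deterministic model with statistical independence, each
# side's outcome (+1 or -1) is a fixed function of its own setting only.
# Enumerate all 16 such strategies and find the maximum |S|.
best = 0
for A0, A1, B0, B1 in itertools.product([+1, -1], repeat=4):
    S = A0*B0 + A0*B1 + A1*B0 - A1*B1
    best = max(best, abs(S))

print(best)              # 2: the classical (CHSH) bound
print(2 * math.sqrt(2))  # ~2.828: the quantum (Tsirelson) value
```

Dropping statistical independence removes exactly this enumeration argument, which is why Bell’s theorem no longer constrains such theories; whether any such theory actually reproduces the quantum value is the separate question.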

“Well, if you insist that your theory does not incorporate any “spooky action” you need a miracle.”

What is your evidence for this assertion?

“Unbelievably contrived boundary conditions/coincidences.”

What is your evidence for this assertion?

“even if you contrive such a model, it will probably be unstable, even for arbitrarily small perturbations, so it will break down like a house of cards.”

What is your evidence for this assertion?

“Nothing more to say about that …”

Any sort of sound argument, maybe?

234. Andrei Says:

SR,

“As pointed out by Mateus Araújo above, QFT is local”

No, QFT is not local. If you add hidden variables to it, it could be local, but otherwise it shares the same situation with standard QM with regard to EPR.

“so it seems to me that the only ways to reconcile this with the nonlocality of standard QM evidenced by the Bell test are: (1) QFT being totally incorrect in an easily-measurable way”

No, see above.

“(2) Many-Worlds, so that the appearance of non-locality is only due to our observable universe residing in a slice of the “true” wavefunction”

Please see my objections against MWI in post #158.

“(3) superdeterminism.”

Yes, this is the only remaining possibility.

“(3) is the easiest solution but, as Scott has said before on the blog, it lets you explain away *anything* as it questions preconditions for doing science.”

This is simply not true. Superdeterminism means that the polarization of the photons emitted by a suitable source is correlated with the orientation of some distant polarizers. It can be shown, based on high-school electromagnetism, that such correlations are to be expected, without any other assumptions. So, unless Scott or anyone else is ready to argue that classical electromagnetism “questions preconditions for doing science”, they have no argument.

“It’s very weird that we feel that we have the free will to perform experiments and the consciousness to feel like we understand them, when, to our best understanding, everything is predetermined by the laws of physics.”

It’s not superdeterminism, but determinism in general that conflicts with free will. This didn’t bother classical physics much. Also, MWI is deterministic as well, so it has the same problems.

“My understanding is that the decoherence program may well explain where the Born rule probabilities come from.”

No.

“However, it seems no one understands why we only have the experience of one world– why does consciousness branch?”

Sure, as pointed out in post #158, MWI needs a theory of consciousness, otherwise it’s pure speculation.

235. fred Says:

JimV #194
“So in my world view, the local universe got along without us fine for around 13 billion years in our time frame, quantum mechanics and all, and will continue to do so after, in a blink of a cosmic eye, we are all gone. That has always seemed the sensible view to me. I also think an approximately-spherical Earth with central attraction is more sensible on its face than a flat earth resting on an indefinite number of turtles. I acknowledge that sensibility is an assumption (but a sensible one).”

To clarify, I didn’t mean that there isn’t a consistent(*) external reality outside of consciousness.
(*i.e. math works to describe it)
I’m just saying that we don’t have and never will have direct access to that reality, we can *only* ever experience that outside reality through appearances in our consciousness. And those appearances have limitations, for example I’m not able to perceive things in a state of QM superposition (and I’m not able to perceive things on the moon or 100 years in the past or future, etc).
Or, if you prefer, any “measuring device” in the QM sense is also a QM system. And no physical system seems able to record or preserve directly (the most direct correspondence of perception for a system that’s maybe not conscious) the wave function it belongs to. I’m not saying that there isn’t indirect evidence of superposition: a screen does record the pattern of interference of multiple electrons as they hit the screen.
But we only ever see an electron at one spot on the screen, and we’ll never know what it’s like to be the electron that undergoes superposition or what it’s like to be the screen that then entangles/decoheres with the electron. We can think of conscious beings as super advanced/complex screens, and, as a conscious being with subjective experience, I can say a few things from my own point of view, that electrons or screens can’t say! And maybe because we are conscious we’ll be able to use that ability to understand better the external reality we belong to (beyond what electrons and screens alone can tell us):

1) how consciousness extends in space: i.e. brains separated by space are isolated islands of perception. In other words, your perceptions and my perceptions don’t overlap. It’s not as trivial as it seems since perceptions within a cubic foot of brain do seem to overlap, but we also have evidence that multiple loci of experience can exist within one brain (split-brain syndrome, etc).

2) how consciousness extends in time: i.e. multiple versions of the same brain separated along the time axis are isolated islands of perception. In other words, my present perceptions and my past perceptions don’t overlap. But just like for 1), there’s also some overlap on small time scales. The only way I know that past versions of myself existed is through memories (inside and outside the brain).

From 1) and 2) it seems reasonable to expect that if MWI is correct, perception would also split across QM branching. I.e. when my brain decoheres with a particle that’s in superposition, subjective experience splits. But like for 1) and 2), there could be an overlap while the branching is still limited. Since a brain is itself a QM system, described by a wave function, we also expect it to be in some form of superposition at any given moment in time, even if very briefly. This superposition doesn’t seem to be a key ingredient of how brains work, quite the opposite: like classical computers, brains seem to exhibit a macro state that is somehow isolated from all the stuff that’s going on at the microscopic level. But maybe consciousness is sensitive to it, and maybe we can find evidence of this… maybe not with wet brains, but maybe with AGI brains. Maybe we’ll be able to put an AGI (that claims to be conscious) in a long state of superposition (e.g. AIs that are implemented on a QC), and it will maybe be able to tell us what it felt like… that’s really the only chance we have to ever progress on these types of questions.

236. fred Says:

SR
“It’s very weird that we feel that we have the free will to perform experiments and the consciousness to feel like we understand them, when, to our best understanding, everything is predetermined by the laws of physics.”

What’s even weirder is that determinism isn’t even the core culprit here!
You only have two ingredients at each end of the spectrum: perfect determinism (e.g. a bunch of balls moving around and hitting one another… i.e. every event is caused by a prior event), and pure randomness (everything is uncorrelated noise, i.e. no causality, things happen without a prior cause… very weird too when you think about it).
And then you can mix those two in various amounts, on a continuous scale. QM is a mix of determinism and randomness, somewhere in the middle of the scale. MWI + consciousness also seems to lie in the middle of the scale (the wave function of the universe is determined, but my place as a conscious being on that structure seems random, from my subjective point of view).

When it comes to free will: sure, determinism seems to obviously exclude it… but randomness seems to exclude it too! For the general idea of free will isn’t exactly captured by throwing a die every time a supposed “decision point” happens.
So, yeah, what’s weird with free will is that no conceptual model of dynamical systems seems able to generate it. In other words, free will isn’t real, it’s just a false but convenient concept that high-level probabilistic engines (brains) use to model themselves within their environment.

237. fred Says:

Last few points on free will:

– “free will” is really just a more confusing variation of the concept of “decision” or “choice”. There’s no choice in a truly deterministic universe.
But randomness seems to at least be able to generate what we usually understand by “choice”: i.e. a closed system is letting a bit of information from the outside (seen as random) influence its evolution. But you see that you need to define a closed system, i.e. arbitrarily separate a part of the universe from the rest of the universe. A conceptual jump that’s happening in probabilistic engines (brains).

– the core of the problem comes from the fact that a closed system (like a brain or a computer) can never simulate itself perfectly, because that would require infinite resources (the simulation has to include itself). So brains evolve to simulate their environment imperfectly, using probabilistic models and lots of simplifications. It’s a result of the fact that math works to model and predict the universe with some success, which is only possible when the universe is for the most part deterministic (you can’t compress pure random noise). This is why statistical mechanics works, why brains naturally evolve, and why the concept of free will shows up in those brains.

– maybe the concept of free will evolves to stabilize the brain from a “psychological” point of view. Maybe if consciousness is totally able to see the concept of free will as an illusion, it would overwrite its ability to be useful for the organism (e.g. enlightenment).

238. fred Says:

I’m personally really looking forward to seeing Shor’s algorithm actually implemented, on the smallest necessary set of proper logical qubits, sufficient to factor very small numbers (with absolutely no quantum advantage compared to classical computers).
To me that would be proof enough that QC works.

Do you guys think we will see this in our lifetime?
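For what it’s worth, everything in Shor’s algorithm except the period-finding step is already classical and fits in a few lines. Here’s a minimal sketch (my own illustrative code, with brute-force order finding standing in for the quantum circuit) factoring 15:

```python
from math import gcd

def order_mod(a, n):
    """Multiplicative order of a mod n, found by brute force.
    (This is the step a quantum computer would do via period finding.)"""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_postprocess(n, a):
    """Classical post-processing of Shor's algorithm for a chosen base a."""
    assert gcd(a, n) == 1, "a must be coprime to n"
    r = order_mod(a, n)
    assert r % 2 == 0, "odd order; retry with another base"
    y = pow(a, r // 2, n)          # a^(r/2) mod n
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_postprocess(15, 7))  # → (3, 5): 7 has order 4 mod 15, 7^2 = 4 mod 15
```

A logical-qubit demo would only need to replace `order_mod` with the quantum subroutine; the rest is modular arithmetic.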

239. Scott P. Says:

“When it comes to free will: sure, determinism seems to obviously exclude it… but randomness seems to exclude it too! For the general idea of free will isn’t exactly understood by throwing a dice at every moment a supposed “decision point” happens.
So, yea, what’s weird with free will is that no conceptual model of dynamic systems seem able to generate it. In other words, free will isn’t real, it’s just a false but convenient concept that high level probabilistic engines (brains) use to model themselves within their environment.”

I think there are two responses to this:

One is to point out that the same system can be at once both deterministic and random. The stock market tends upward an average of 8% a year, but its daily fluctuations are unpredictable. I can tell you with certainty that one half of a 1-gram sample of Hafnium-163 will decay in the next 40 seconds, but I can’t tell you whether a specific atom will do so in that time frame. The latter comes from Heisenberg’s Uncertainty Principle. There is a limit to the total available knowledge about a system.
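The decay example can be checked numerically. A minimal sketch (my own illustrative numbers, not tied to any particular isotope): each atom independently survives one half-life with probability 1/2, so individual outcomes are random while the aggregate fraction is sharply predictable.

```python
import random

def survivors(n_atoms, half_lives, seed=0):
    """Count atoms surviving a given number of half-lives; each atom is an
    independent coin flip with survival probability 0.5**half_lives."""
    rng = random.Random(seed)
    p = 0.5 ** half_lives
    return sum(1 for _ in range(n_atoms) if rng.random() < p)

n = 100_000
left = survivors(n, 1.0)
# The surviving fraction clusters tightly around 1/2 (binomial std. dev.
# ~0.16% here), even though no single atom's fate is predictable.
print(left / n)
```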

Two — what is free will? I think the best definition of free will is that it is a property of certain entities that can observe themselves and their own environment, develop a model of their own behavior, then modify that behavior as a consequence of that analysis. I observe that if I get up at 8:00 am, then I am too rushed to enjoy breakfast, so I make the decision to get up at 7:30, reasoning that a calmer morning is a decent tradeoff for a little less sleep. A rock can’t analyze and make decisions like that, nor a plant, nor even my cat, but humans can, therefore we say they have free will. It goes beyond a mere response to external stimuli to encompass a broader understanding of what we do and why.

Since human behavior is therefore partially the result of recursive self-analysis, it is neither wholly predictable nor wholly unpredictable, which is perfectly compatible with what we know of the universe — see 1).

240. JimV Says:

Fred (wow is this way off-topic, I hope it gets left behind in moderation), I don’t see the mystery of any of that. A computer consciousness would work exactly the same way, only experiencing whatever its sensory devices (if any) report to it, and only remembering those past events which it had stored in memory. Why would you expect anything different? Yes, I suppose it is too bad that we aren’t gods, but I never thought the universe owed that to me, and don’t really think that concept makes much sense.

Secondly, I don’t see why there has to be a contradiction between determinism and the sense of having free will. I do make decisions, I experience that decision-making, I am responsible for them, and I use determinism (as best I can) to decide those decisions. Again, a machine intelligence would do the same (or a god, for that matter, just with fewer limitations). My only problem with free will is with those who think it is something magic which allows them to impose it on the universe and over-rule determinism. I guess determinism is also what allows us to make wrong decisions (based on wrong data and/or wrong processing), which proves that it exists, at least in my case.

241. Roger Schlafly Says:

@SR#229: If those were really the only consequences of the Bell tests, then the Nobel citation would have said so. It did not.

@fred#235: You are right that our dynamical models do not generate free will. That is why it is called free will.

242. fred Says:

It always strikes me how often people (including high profile physicists) try to dismiss the fact that determinism precludes free will (or similar concepts) by invoking predictability.
A system either is or isn’t deterministic, regardless of how well its behavior can be predicted.

Two bodies in orbit around one another, or a simple pendulum, are systems whose behavior can be easily predicted arbitrarily far into the future – roughly, the quality of the prediction scales linearly with the precision of the initial conditions.

Three bodies in orbit around one another, or a double pendulum, are systems whose behavior can’t be predicted arbitrarily far into the future, i.e. the quality of the prediction only improves with the log of the precision of the initial conditions (chaotic systems).

But all those systems are deterministic in just the same way.
A double pendulum doesn’t acquire some special/magical property over a simple pendulum just because it’s much harder to predict.
And that doesn’t stop us from simulating double pendulums on computers (with finite memory and processing power) in a way that’s qualitatively indistinguishable from real pendulums…
And the same holds for complex systems like human brains: they’re still deterministic. What matters is not whether one can predict exactly what a given human brain will do, but whether one can do a good enough job of spoofing a human brain. Once AGIs are realized, by definition they will do things that are indistinguishable from what human brains can do, and those AGIs will be implemented on classical computers – making it painfully (for some) obvious that predictability is irrelevant, since classical computers are deterministic systems that can be perfectly simulated by simple cloning.
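The log-of-precision point is easy to demonstrate. The double pendulum’s equations of motion are long, so here is the same phenomenon in a minimal chaotic system of my own choosing, the logistic map (a sketch, not anything from the thread):

```python
def logistic_orbit(x0, steps, r=4.0):
    """Iterate the fully deterministic logistic map x -> r*x*(1-x)."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic_orbit(0.2, 50)
b = logistic_orbit(0.2 + 1e-12, 50)  # perturb the start by one part in 10^12
# Same deterministic rule, yet after ~50 steps the two orbits have fully
# decorrelated: each extra digit of initial precision buys only a few more
# steps of prediction (log scaling), exactly as with the double pendulum.
print(a, b)
```

Rerunning with the same `x0` gives bit-identical output every time – deterministic in just the same way, merely hard to predict from imperfect data.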

243. manorba Says:

fred #241 Says: And the same holds for complex systems like human brains, they’re still deterministic…

In recent years i discovered the work of Nobel laureate Ilya Prigogine, which gave me a totally different view of determinism:
not only was he absolutely cool with the probabilistic nature of QM, he strongly advocated against the determinism of classical systems!
I’m sorry i can’t give you any direct references because my books are Italian editions and they don’t match the international ones, but his whole body of work is very interesting.

244. OhMyGoodness Says:

manorba#242

I hope this story doesn’t bore you-

I had a cousin who was in a summer program for talented disadvantaged high school students at Yale. At the end of the summer Yale offered him a full scholarship. We had a friend with a PhD in CompSci working then at Sandia Labs. Our friend told me that he gave my cousin a difficult puzzle making the rounds at Sandia and was impressed when my cousin solved it in a few minutes. As I remember, the key was that the base of the numbers in the sequence was changing in a regular way. I guess my cousin was around the 99.9th percentile in mathematical talent.

My cousin turned down the scholarship offer and didn’t even attend university. He was perfectly content working as a cook in a 24-hour restaurant and dreaming. Anyway, my cousin was impressed with Prigogine and told me a few of his ideas concerning thermodynamics and entropy, as I recall. I haven’t read any of his books but will soon download one and give it a look.

245. Mitchell Porter Says:

“How do [Gao-Jafferis-Wall] avoid the presence of an inner Cauchy horizon”

I suggest you mail Daniel Jafferis and ask!

246. Ilya Zakharevich Says:

Physics student #162, fred #166, Mateus Araújo #175

(#162) More gross calculation: with around 10^7 particles per qubit, you’d expect fluctuations of order sqrt(10^7) ≈ 3×10^3 out of 10^7, so an error rate of roughly 10^-3. Here’s my full bet: (error probability)^2 > weight / (Avogadro’s number × Hilbert-space size).

Well, your observations may be a source of fun when dining with your SO. On the other hand, I cannot see how they can be made in good faith. They contradict whatever we know about scaling of QC, e.g., the error-correction of QC.

(#166) There’s nothing in the basic laws of physics explicitly saying that you can’t build a stable stack of quarters from here all the way up to the edge of space.

Whoa there! What do you think hinders you from building a high stable stack of quarters — presidential decrees, or the little green men?! Who taught you your physics?!

Of course (!) there are plenty of physical laws governing stability — in particular, stability of such stacks. Look up “eigenvalues”, if you have not yet…

(#175) … the gist of the explanation is that Alice and Bob both split, locally, and the correlations only start to exist in the intersection of their future light cones. A fortiori there are no correlations before, just two Alices and two Bobs.

You are discussing MWI as if it were something well-defined and well-understood!

As far as I can see: currently, there is no viable description of topology/geometry/whatever of branching. Hence everything you say involving this geometry would be just (demagogic?) hand-waving with very little convincing power.

We know how to describe branching-due-to-arrow-of-time in 0+1 dimensional space-time — thanks to Doob! And until something like “filtrations of σ-algebras in probability spaces” is known¹⁾ at least in 1+1-dimensional space-time, the phrases like “both split, locally, and the correlations only start to…” do not carry any weight.

¹⁾ Is it?! If it is, then what I say here is moot!

247. fred Says:

“Nirenberg asked whether, a century from now, people might look back on the wormhole achievement as today we look back on Eddington’s 1919 eclipse observations providing the evidence for general relativity.”

But none as momentous as that first time someone illustrated GR by putting a bowling ball and a bunch of tennis balls on a trampoline!

248. fred Says:

manorba #242

“In recent years i discovered the work of nobel laureate Ilya Prigogine”

Nice! I read some of his books over 25 years ago, when I was really obsessed with questions (and the lack of answers) around the arrow of time.

249. fred Says:

Ilya #245

“Whoa there! What do you think hinders you from building a high stable stack of quarters — presidential decrees, or the little green men?! Who did teach you your physics?!”

Haha, believe it or not, that didn’t stop me from writing a little 2D physics engine for fun back in the day:

250. manorba Says:

OhMyGoodness #243 Says: I hope this story doesn’t bore you-

Why would it! Actually, it was a nice story, not only for the Prigogine angle. I can understand your cousin’s life choices even if i’m far from being a math-gifted person… through the years i realized that i’m not comfortable with today’s society’s infatuation with competition and exhausting oneself to death. And i love cooking 😉

251. fred Says:

manorba #242

“he strongly advocated against the determinism of classic systems!”

Mathematically, even Newton’s laws of mechanics can lead to non-deterministic situations:
https://en.wikipedia.org/wiki/Norton%27s_dome
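The non-uniqueness in Norton’s dome can be verified by direct substitution. With the dome shaped so that the radial equation of motion is r̈ = √r, the rest solution and a whole family of “spontaneous sliding” solutions share the same initial data (a standard textbook check, spelled out here for convenience):

```latex
\ddot{r} = \sqrt{r}, \qquad r(0) = \dot r(0) = 0
% admits both
r(t) \equiv 0
\qquad \text{and, for any } T \ge 0, \qquad
r(t) =
\begin{cases}
0, & t \le T,\\[2pt]
\tfrac{1}{144}\,(t-T)^{4}, & t \ge T.
\end{cases}
% Check: \ddot r = \tfrac{12}{144}(t-T)^{2} = \tfrac{1}{12}(t-T)^{2} = \sqrt{r}.
```

So Newton’s law, taken as a bare differential equation, does not always fix the future from the present: the particle may sit at the apex forever, or start sliding at any time T, with nothing selecting T.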

roger #240
“You are right that our dynamical models do not generate free will. That is why it is called free will.”

You might as well call it pixie magic dust then?
A dynamic system is a system whose state evolves with time (a parameter).
My point is that dynamic systems either evolve following causality (current state is derived from prior state) and/or randomness (current state is independent of prior state), and then any degree of mix of those two things (where events depend partly on prior events and partly on some randomness).
Note that randomness is non-determinism, meaning an event without any cause within the system. Whether that randomness is pure (appearing magically within the system) or is a dependence on causes external to the system is basically the same.
That’s it!
What other ingredient would there be?
This holds for any system, including a soul (the supposed source of your free will?).
It holds even if you make the dynamics of the system more complex, e.g. the time parameter loops back on itself, so that prior states also depend on the current state, allowing only stable configurations (the initial conditions have to reappear eventually).
Anyone who disagrees is welcome to explain how a system that can evolve can have a state that depends neither on prior states, nor on randomness, nor on external states (and the extended system just obeys the same limitation… e.g. the finger of God also obeys determinism, but it’s considered outside our system), or to clarify what they mean by “free will”.

252. WA Says:

As others have explained numerous times, quantum nonlocality is about certain quantum correlations which cannot occur between classical non communicating systems. These quantum correlations themselves obey no signalling (i.e. do not violate SR). The only violation of SR happens on blackboards and inside academics’ minds when they become so fed up with QM that they decide to replace local quantum degrees of freedom with hidden classical ones. This is the hidden variable theory.

The only way you can perform this replacement is if the classical variables are secretly SR-violating. This is a big blow to the hidden variable theory. Instead of accepting quantum variables and moving on with their lives, superdeterminists want us to pay an even higher price by proposing that these quantum correlations are due to local classical variables + some weird correlations between measurement apparatus and the system being measured.

Interestingly, there are mathematical correlations that obey no signalling but are too strong for QM. Numerous papers have been devoted to coming up with a physical principle that explains why these correlations are disallowed in QM. No compelling explanation has ever been found (afaik), other than the trivial one that this is a consequence of the algebraic structure of QM! I think this is another major blow to the hidden variable theory.
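The three tiers mentioned here (local-classical ≤ 2, quantum ≤ 2√2, no-signalling/PR-box ≤ 4) can be checked directly with the CHSH quantity. A small sketch, using the standard textbook angles and the exhaustive enumeration of local deterministic strategies (my choices, not anything from this thread):

```python
from itertools import product
from math import cos, pi, sqrt

def chsh(E):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
    at angles that are optimal for the quantum singlet state."""
    a, ap, b, bp = 0.0, pi / 2, pi / 4, 3 * pi / 4
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

# Local deterministic strategies fix an outcome +/-1 per setting.
# Enumerating all of them shows |S| never exceeds 2 (Bell's bound).
classical_max = max(
    abs(Aa * Bb - Aa * Bbp + Aap * Bb + Aap * Bbp)
    for Aa, Aap, Bb, Bbp in product((-1, 1), repeat=4)
)

# The quantum singlet correlation E(x, y) = -cos(x - y) reaches 2*sqrt(2),
# the Tsirelson bound -- above 2, but still below the PR-box value of 4.
quantum = abs(chsh(lambda x, y: -cos(x - y)))

print(classical_max, quantum)  # 2 and ~2.828
```

The gap between 2√2 and 4 is exactly the unexplained headroom the paragraph above refers to.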

Another argument against hidden variables is that if we propose them we need to explain how the same behaviour arises in seemingly very different physical systems, e.g. entangled electrons behave the same way as entangled carbon atoms or entangled transmon qubits as far as Bell experiments are concerned. We need to come up with such an explanation with “one hand tied behind our back”, since we cannot use QM (it’s what we’re trying to explain away).

In short, quantum nonlocality poses problems only if you believe in classical hidden variables. Instead of accepting the obvious conclusion that hidden variables are not how nature works, (otherwise) very smart people devote much of their time trying to wriggle their way out of this conclusion by positing classical hidden variables with exotic properties. I prefer to embrace the qubits and their Hilbert space. As Scott so eloquently put it: to do otherwise is to believe that QM is a giant anti-clue, a red herring, rather than the profound insight into nature that it is.

253. Ajit Says:

fred #241:

> “It always strikes me how often people (including high profile physicists) try to dismiss the fact that determinism precludes free will…”

Yes, free will *is* outside the scope of physics. No, that doesn’t mean free will doesn’t exist.

There is a philosophic position which says that free will is an axiom of epistemology, and hence, indirectly, at the base of *all* knowledge, including physics. I subscribe to it. [Which philosophy? LOL! Let [chat]GPT-n answer that question!] But, still, even this premise does *not* imply that free will can be part of physics — or of deterministic theories. … We seem to agree on that one, don’t we?

> “… But all those systems are deterministic in just the same way.”

That’s precisely why determinism cannot be regarded as *the* *only* criterion by which to tell whether a *physics* theory is sound or not. More basic criteria do exist. It’s just that these aren’t well known, that’s all.

For instance, the kinetic theory (stat mech) is a perfectly sound theory. So is molecular dynamics (MD). QM would have been a completely satisfactory theory too, despite probabilities. It’s just that the mainstream QM also carries with it the Measurement Problem, and *that* is what makes it *incomplete*. [*I* said so, not Einstein.]

> “… And the same holds for complex systems like human brains, they’re still deterministic and …”

Yes, people *are* so complex, aren’t they? Brains are complex too, even while being quite *physical* objects. … Yes, a man does have a physical body. But that doesn’t mean he doesn’t have free will.

Only the bodies of the dead are deterministic in the *physics* sense of the term, not the bodies of living beings.

*Living* beings (including amoebae) have both the body and the consciousness integrated together as part of their identity, and so, *all* their actions too reflect *both* these aspects.

Who knows, maybe one to three of the following blog posts, which I wrote in the past, might be of interest:

“Some thoughts regarding Free Will,” 19 March 2022. [^]

“Determinism, Indeterminism, Probability, and the nature of the laws of physics—a second take…”, 01 May 2019 [^]

“Fundamental Chaos; Stable World”, 28 August 2019. [^]

> “… what matters is not whether one can predict exactly a given human brain, but whether one can do a good enough job at spoofing a human brain: …”

Just replace “brain” by “consciousness” or “mind”, and I would be perfectly fine with that.

Best,
–Ajit

254. fred Says:

Looks like fusion energy supremacy has been demonstrated!

255. Scott Says:

fred #253: Only “scientific supremacy,” not supremacy where you account for the actual energy cost of the lasers, which is still 100x too high. Still good though!

256. fred Says:

Scott #254

Right, they demonstrated that you can get fusion using lasers (which seems simpler than the typical tokamak design using magnetic confinement of a super-heated plasma); now it’s “just” a matter of making it 100x more efficient.
In the meantime it seems it will help them create more effective fusion bombs…

257. manorba Says:

fred #253 Says: Looks like fusion energy supremacy has been demonstrated!

Now we know how to power the wormholes for the teleporting stations!
On a more serious note, it seems that this time there’s a little more sobriety in the announcements, right?
So, QC and fusion have been mythological goals for years, and this is the time when we are finally witnessing some real progress…
i can’t help but feel that we are still in the proof of concept stage for both… then i realize that the “concepts” that are being “proofed” are functioning arrays of quantum gates and controlled fusion with net positive energy… i have to admit i’m in awe.

258. fred Says:

Ajit #252
“For instance, the kinetic theory (stat mech) is a perfectly sound theory. So is molecular dynamics (MD). QM would have been a completely satisfactory theory too, despite probabilities.”

What really matters are the fundamental dynamics, not derived high level approximate models.
Psychology and history are complex, but they’re all *just* the result of fundamental particle interactions (just lots of them) no matter how you slice it…
Let’s say you believe in “true” emergence, where high level system can mysteriously escape/override the dynamics of the interactions at the lowest level.
Which amounts to believing that if you wish something hard enough, you can make it happen, through some yet-to-be-discovered magical property allowing the symbols in a brain to override the rules governing the evolution of its own atoms… but even so, the evolution of such an emergent system would still have to be based on a mix of causal evolution and/or randomness…
Those limitations aren’t even physics; they’re really basic logic.

259. Karen Morenz Korol Says:

“When it comes to free will: sure, determinism seems to obviously exclude it… but randomness seems to exclude it too! For the general idea of free will isn’t exactly understood by throwing a dice at every moment a supposed “decision point” happens.
So, yea, what’s weird with free will is that no conceptual model of dynamic systems seem able to generate it. In other words, free will isn’t real, it’s just a false but convenient concept that high level probabilistic engines (brains) use to model themselves within their environment.”

This argument doesn’t hold up to me. If I were to really roughly paraphrase: “Free will doesn’t make sense with theory A or theory B that we already know about and like. Therefore no theory including free will can exist.” I hope there is a theory that does make sense with free will, that we haven’t fully understood yet.

260. JimV Says:

Last try: the concept of free will legally just means nobody held a gun to your head to make a certain decision; you made it by yourself, using whatever facts you had in memory and whatever computational processing you are capable of, in service of whatever goals you have; and you are legally and morally responsible for it, according to the rules of any reasonable society, provided you are of sound mind with a mostly developed brain. People who argue against that concept of free will are arguing for anarchy and against a civilized society. People who argue that position in a court of law will lose.

E.g., Putin is responsible for the invasion of Ukraine. That he would do so was deterministically predicted in 2014. Part of his decision was based on his control of nuclear weapons, which he thought would prevent effective intervention by other parties, and it has hindered intervention efforts. Another part was his belief in the efficacy of the Russian military, which was miscalculated. If his deterministic ability had been better he might not have made the decision when he did, at least until he could accumulate more weapon stocks and trained soldiers.

Evolution also enforces that concept, such that those most capable of determining good decisions have the best chance to survive and reproduce. Trolls who spend their time and energy harassing people on the Internet should consider that.

261. Raoul Ohio Says:

Fusion energy supremacy has been demonstrated “for the first time” many times in the past, although not nearly at the pace of quantum supremacy being demonstrated for the first time.

Already there are articles everywhere about how soon FE will be heating your home. FE was more fun in the 50’s, when pretty soon electricity would be so cheap that utility companies would be giving it away for free.

262. Xirtam Esrevni Says:

For some reason, I felt compelled to have ChatGPT create a poem about two professors with diverging viewpoints on quantum computing. For no other reason than that I admire you and Professor Preskill; please indulge me:

Two esteemed professors, Preskill and Aaronson,
Debate the viability of quantum computing,
One espouses optimism, the other rebuke,
In the fundamental principles of quantum mechanics that undergird it, they duel.

Preskill posits that quantum computers may
Surpass classical ones in computational sway,
Solve problems intractable to classical machines,
With algorithms like Shor’s, a veritable quantum scepter held in its reigns.

But Aaronson refutes, with a cynical glance,
The inherent irrationality and impracticality, it’s just not chance,
To squander time and resources on such a pursuit,
Better to concentrate on classical methods, more steadfast and astute.

The discourse persists, as scientists strive
To overcome the obstacles of quantum mechanics to come alive,
But for now, the fate of quantum computing remains elusive,
Preskill and Aaronson offer their insights, but the verdict is still convulsive.

263. Andrei Says:

WA #251,

“As others have explained numerous times, quantum nonlocality is about certain quantum correlations which cannot occur between classical non communicating systems.”

Indeed, I am glad you specified “classical non-communicating systems” instead of just “classical systems”. Interacting classical systems, like N interacting charges in electromagnetism, imply correlations not unlike those in QM.

“These quantum correlations themselves obey no signalling (i.e. do not violate SR).”

No-signaling is a much weaker condition than SR locality. SR forbids direct causal links between space-like events. No-signaling is only concerned with events that can be controlled by an agent. Bohmian mechanics does not allow signaling, yet it violates SR.

“The only violation of SR happens on blackboards and inside academics’ minds when they become so fed up with QM that they decide to replace local quantum degrees of freedom with hidden classical ones. This is the hidden variable theory.”

This is false. EPR concluded that the only way to make QM local is to introduce hidden variables. The argument is straightforward:

You have an EPR-Bohm setup, with both detectors oriented along Z. You measure A and get Z-UP. The state of B, according to QM, is Z-DOWN. Locality implies that the measurement at A did not disturb (change) B, so we need to conclude that the state of B even before the A measurement had to be Z-DOWN. So the particle was always in a state of well-defined spin, Z-DOWN in this case, and the pre-measurement, superposed state simply reflected our lack of knowledge regarding that state. So, if you want locality, you need to accept that the spins were decided at the time of emission. Until they are actually measured, they are called “hidden variables”.

“Instead of accepting quantum variables and moving on with their lives, superdeterminists want us to pay an even higher price by proposing that these quantum correlations are due to local classical variables + some weird correlations between measurement apparatus and the system being measured.”

As you can see above, only hidden variables can restore locality. The “quantum variables” are not local. A quantum state is not a local object. So, if you take into account both EPR and Bell’s theorem you get superdeterminism as the only local option possible.

There is nothing “weird” about the superdeterministic correlations. The atom emitting the photons is interacting electromagnetically with the detectors (which also consist of atoms, hence charged particles). So a Bell test consists of “classical communicating systems”. Once you impose the typical constraints for a system of interacting charges (the system obeys Maxwell’s equations), you conclude that the subsystems (source and detectors) cannot have independent states, so what is called “superdeterminism” follows naturally.

“Interestingly, there are mathematical correlations that obey no signalling but are too strong for QM. Numerous papers have been devoted to coming up with a physical principle that explains why these correlations are disallowed in QM. No compelling explanation has ever been found (afaik), other than the trivial one that this is a consequence of the algebraic structure of QM! I think this is another major blow to the hidden variable theory.”

I fail to see the argument here. As explained above, no-signaling is irrelevant in regards to compatibility with SR. And hidden variables are required to achieve that compatibility.

“Another argument against hidden variables is that if we propose them we need to explain how the same behaviour arises in seemingly very different physical systems, e.g. entangled electrons behave the same way as entangled carbon atoms or entangled transmon qubits as far as Bell experiments are concerned. We need to come up with such an explanation with “one hand tied behind our back”, since we cannot use QM (it’s what we’re trying to explain away).”

The explanation is trivial. Fundamentally, all experiments consist of some charged particles (typically electrons and nuclei, but also positrons, etc.). They all obey Maxwell’s laws (and also Newton’s and Lorentz’s). So all such systems are bound by the same constraints, and all display the same type of correlations.

“In short, quantum nonlocality poses problems only if you believe in classical hidden variables.”

No. Hidden variables are the only local option. Indeterministic QM cannot be local.

“Instead of accepting the obvious conclusion that hidden variables are not how nature works, (otherwise) very smart people devote much of their time trying to wriggle their way out of this conclusion by positing classical hidden variables with exotic properties.”

Because this is the only logically coherent way to approach the problem.

“I prefer to embrace the qubits and their Hilbert space. As Scott so eloquently put it: to do otherwise is to believe that QM is a giant anti-clue, a red herring, rather than the profound insight into nature that it is.”

Well, I hope the sound arguments presented above will one day make you and Scott reconsider.

264. Andrei Says:

Karen Morenz Korol #258,

“When it comes to free will: sure, determinism seems to obviously exclude it… but randomness seems to exclude it too!”

“This argument doesn’t hold up to me. If I were to really roughly paraphrase: “Free will doesn’t make sense with theory A or theory B that we already know about and like. Therefore no theory including free will can exist.” I hope there is a theory that does make sense with free will, that we haven’t fully understood yet.”

In this case theory A (determinism) and theory B (non-determinism) do not allow the existence of any other theory (C), because of the logical law of the excluded middle.

265. Ajit Says:

fred #257:

> “Psychology and history are complex, but they’re all *just* the result of fundamental particle interactions (just lots of them) no matter how you slice it…”

Hmmm… *just* the result of… .

No, as ought to be amply clear by now, I don’t agree with this position. My position is that you can’t explain consciousness, or for that matter, even the life principle of any living being (its “living-ness”), in terms of material entities. I don’t agree with this reduction.

Guess, we would be simply repeating our respective points, perhaps using different words or higher-level contexts. But it all will still amount to nothing but repetition of the same underlying points. And, infinite regress is to be avoided. … So, let me sign off.

Best,
–Ajit

266. fred Says:

Karen

“I hope there is a theory that does make sense with free will, that we haven’t fully understood yet.”

Well first there needs to be an attempt at defining free will. I.e. what do we mean by free will?

For most people, free will is tied with the idea that, if you could unwind time, and bring the universe back to some point where you made a given decision, you would be able to somehow make a different decision, even if the entire universe (including your brain) was exactly the way it was the first time.
This clearly makes no sense logically since, if the state of the entire universe is exactly the same, how would your brain be able to make a different decision? It’s not like you’re able to bring back with you the knowledge/wisdom associated with your initial choice.
Even if you believe in the idea of choice, this concept of free will also doesn’t make sense given the fact that who we are and what happens to us is always just the result of luck, at every level. We never chose how we were born or who we are, we don’t control external events, and we don’t control internal events either: all the decisions we make (as a system) are based on the neural connections in our brain (when you ignore external events), and a brain just can’t control its own neural connections (even if a brain could control its own neural connections, the control at any moment would be based on the current state of its neural connections, etc.). An analogy for this view is the evolution of a river: the water is the soul (the source of free will), the riverbed is the brain (made of matter). The water and the riverbed aren’t independent: water flows where the riverbed tells it to go, but the riverbed is carved by the flow of water. The separation between the two is just a concept; in reality they’re two sides of the same coin, inseparable, and evolving deterministically.

There’s another conception of “free will” which is not some fundamental internal property, but an attribute of the models we use to view other dynamic systems in the world (called “agents”). Because our own knowledge of the world is always approximate/probabilistic, we view those agents as black boxes whose behavior we can’t fully predict, but their range of behaviors is known, and we have some expectations for them (a probability distribution), which we constantly adjust as we interact with them. So an agent has “free will” if I know its range of reasonable/possible behaviors, but I just can’t predict its actual moment-to-moment “decisions” well enough.
But the same applies to our own brain: we’re not able to ultimately deconstruct and explain the decisions we make. Every single decision I make is either the result of a clear causal chain of reasoning (akin to determinism), or based on some preferences and biases inside my own brain. So, in my own awareness, my thoughts and decisions just appear, but my brain can also model itself using the same probabilistic models it uses to simulate external agents. This is also tied to the very important concept/feeling of familiarity: every single event that appears in consciousness (whether from within or from the outside) comes associated with a certain “temperature” representing the amount of surprise associated with it. So here the idea of free will relates to the functioning of brains according to such a model.

267. fred Says:

JimV #259

I’ve only said that free will is an illusion.
And even false concepts can sometimes be useful, from an evolutionary point of view, especially when we’re talking about confused apes running around on a sphere that’s drifting in the empty cosmos.

If you had been born as Putin, in his shoes, with the exact same brain and same set of external circumstances, well, you’d make exactly the same decisions he’s made…
Truly recognizing that no one is responsible for their own brain (and that we’re all the pawns of luck and fate) doesn’t lead to anarchy, it leads to more compassion and humility.

Societies punish people committing bad behaviors not because the concepts of free will and responsibility make any sense under scrutiny, but simply because these concepts seem to work as an effective and easy way to sometimes correct bad behaviors through punishment and shaming (without anyone having to feel too bad about inflicting punishment and shaming… especially in societies that don’t have much knowledge of physics, biology, and neurology).

268. MaxM Says:

On the sources of semiotic confusion,

Articles mentioning quantum chips are almost always illustrated with pictures of dilution refrigerators. When quantum chips are made to look more alien and weird than they really are, it’s easier to jump to conclusions.

269. Roger Schlafly Says:

@fred: You say that events are either caused by the previous state (determinism), or independent of it (random), and magic pixie dust is the only other possibility.

Consciousness is the other possible cause. My conscious state appears inscrutable to you. You cannot model it, and my decisions appear unpredictable. You might as well call them random, if you cannot predict them. I cannot really explain it either, except to say that I am more sure of it than of any of my perceptions. Cogito ergo sum. (I think, therefore I am.)

Readers here might say I could be a troll or an AI bot, and hence not to be believed. Fine, decide for yourself. Do you really think that modern physics has explained causality so thoroughly as to rule out human consciousness? Or do you make decisions every day that science cannot predict or explain?

270. Raoul Ohio Says:

Fred #166,

I like the SOQ (Stack of Quarters) example. It succinctly captures a vague notion that I have had about a few endeavors, controlled fusion and quantum computing being the prime examples.

I can envision at least three possible outcomes for such ventures:

1. They will someday work and meet expectations.

2. They will never work.

3. We will argue forever about whether they work or not.

In the event of 1, I will say “I’ll be darned! Who’da thunk it?”.

In the event of 2, at the end of time I will say “I’ll be darned! Must have been a case of SOQ.”.

Event 3 might unfold like this: each QC experiment produces something or other. Some skeptics will claim the result is not a computation. Other skeptics will figure out how to do it classically. Repeat. Then we will all be darned!

271. fred Says:

Roger #268

Consciousness is a deep mystery, but we can still put forth a few facts about it:

1) We can talk about consciousness. Even if we’re all very confused about it, humans have been thinking and writing books about it for thousands of years.
This might look like a trivial thing, but it does show that consciousness isn’t simply some byproduct of the “material” activity in the brain (information processing?) that can just be ignored.
So, at the very least, the existence of consciousness leaves a footprint in the neural patterns of the brain (even if just as an abstract symbol). E.g. it’s not clear whether AGIs (if we ever create them) would be able to spontaneously talk about something like consciousness, independently of human input.

2) For thousands of years, large numbers of humans have also been closely looking at how their minds work. For this you don’t need MRI machines; you just need to sit down and pay close attention to what appears in consciousness and how it appears. The fact is that our decisions are inscrutable in spite of the existence of consciousness. That’s not to say (referring to 1)) that the fact that we’re conscious doesn’t play a role in our decisions (me writing this post about consciousness has an impact on the arc of my life), but consciousness doesn’t give us any particular insight into why we make decisions. Our thoughts just appear; it’s just that most of the time we’re so lost in them, so identified with them, that we buy the idea that we’re the author of our thoughts. But if we actually pay close attention (using awareness to pay attention to awareness itself), we do see that thoughts just appear in awareness like everything else (perceptions, emotions, the sense of familiarity, …). But this is again not that surprising, since it’s logically impossible to imagine what it would be like to bootstrap our own particular thoughts out of nothingness… we’d have to think our own thoughts before we think them. So thoughts have no choice but to just appear.

I personally think that consciousness is very special, but not as a way for humans to escape determinism, but rather as a way to accept it, find some freedom within it. Because consciousness (as the space where our perceptions, thoughts, feelings all appear) is possibly the only thing that is truly pure, perfect, with no beginning and no end.

272. Dimitris Papadimitriou Says:

Andrei #262

You insist on definitions of “relativistic locality/ causality” that are only operational ( and/ or “made up” especially for use in quantum foundational debates ). Such definitions ( that include agents etc) are useful for distinctions between different interpretations ( and subtle philosophical debates ) but they don’t have any fundamental significance in physics.
As I said already several times, in Special Relativity, “causally related events are only these that are timelike or null-separated “. That’s it! This is the proper notion of Relativistic Causality/ locality and the one that has physical significance.
The light cone structure is what defines causality in relativity at the *fundamental level*, nothing else.

In Bohmian mechanics, an absolute reference frame has to be postulated, so as to define e.g. an absolute temporal order between Alice and Bob’s measurements.
If A is the first, then, roughly speaking, the settings on A affect the measurement outcome of B , but this superluminal “causation” , although it is inherent in the theory, at the fundamental level , cannot be used for faster than light transmission in experiments etc. ( assuming the distribution postulate for the “particle positions”. Note that in Valentini’s “cosmological” version of the theory, this distribution is not supposed to be in equilibrium initially, so only later the no-signaling is obeyed ).
So Bohmian mechanics does not really fit well with Lorentz invariance at the fundamental level.

As for your other comments (about …electromagnetism, superdeterminism, local theories etc.), I prefer not to comment further, because I’m afraid it’s quite useless…

273. Fusion Quantum Says:

There’s been a breakthrough in nuclear fusion. Now no one can say quantum computing is the nuclear fusion of the ’60s.

274. xnarxt Says:

Granted, they couldn’t get a message back out from the wormhole, at least not without “going the long way,” which could happen only at the speed of light—so only simulated-Alice and simulated-Bob themselves could ever test this prediction. Nevertheless, if true, I suppose some would treat it as grounds for regarding a quantum simulation of SYK as “more real” or “more wormholey” than a classical simulation.

Can we do a homomorphic-encryption-reductio-ad-absurdum for this just like in https://scottaaronson.blog/?p=6599 (On black holes, holography, the Quantum Extended Church-Turing Thesis, fully homomorphic encryption, and brain uploading)?

I’m guessing not, since in this case we have 2 observers instead of just 1, but maybe someone has some ideas… I just find that homomorphic encryption analogy so nice and elegant that I wish it can be applied more broadly.

275. fred Says:

Fusion Quantum #272

To be fair, there have already been plenty of breakthroughs in the field:

https://www.nature.com/articles/d41586-022-00391-1

But all those experiments suffer from the same issue – we’re quite far from sustaining the reactions and extracting usable energy from them.

Personally, I hope this latest news will dramatically increase funding for fusion research.

276. WA Says:

Andrei #262

“I fail to see the argument here”

The argument here is that classical hidden variables have been utterly useless at explaining why Alice and Bob can only win the CHSH game with probability ~ 85% and no more. When pressed hard enough to explain this, advocates of hidden variables shamelessly revert to the QM formalism and claim their theory predicts this because it is consistent with QM.

Where did this number 85% come from? Better yet, where do numerous other universal bounds on spatial correlations come from? You say Maxwell’s equations. Fine. Can you use Maxwell’s equations to derive these bounds without blatantly introducing QM in your derivation?
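For concreteness, both numbers in question can be checked directly: enumerating every deterministic classical strategy shows that 75% is the best classical players can do, while the optimal quantum strategy achieves cos²(π/8) ≈ 85.36% (the Tsirelson bound). A minimal sketch, illustrative only and not from the thread:

```python
import itertools
import math

def classical_max():
    # A deterministic strategy is a pair of functions {0,1} -> {0,1},
    # one for Alice and one for Bob; enumerate all 4 x 4 of them.
    best = 0.0
    for a0, a1 in itertools.product((0, 1), repeat=2):
        for b0, b1 in itertools.product((0, 1), repeat=2):
            # Score over the four equally likely question pairs (x, y);
            # the win condition is: a XOR b == x AND y.
            wins = sum(((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
                       for x in (0, 1) for y in (0, 1))
            best = max(best, wins / 4)
    return best

# Tsirelson bound: the optimal quantum strategy wins with probability cos^2(pi/8).
quantum_win = math.cos(math.pi / 8) ** 2

print(classical_max())        # 0.75
print(round(quantum_win, 4))  # 0.8536
```

(Shared classical randomness doesn’t help: it’s just a mixture of deterministic strategies, so the 75% bound survives.)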

277. Dimitris Papadimitriou Says:

Saying that free will is “illusory” or that it “does not exist” is another “category error”.
Free will is a vaguely defined emergent property that has to do with how we perceive the world ( and ourselves) at the macroscopic “classical ” level.
Of course it exists, for similar reasons that our thoughts, or music compositions, or philosophical concepts, or theatrical scripts , or the rules of sports games etc also exist.
Free will is not a hidden fifth fundamental force/interaction of elementary particle physics! Concepts like consciousness or free will are only barely understood, because they have to do with tremendous, intimidating complexity (and the element of self-reference they involve does not help, either…).

The “free will assumption” that has to do with Bell/ EPR etc, is something different: It’s the assumption of “Statistical Independence”. Roughly speaking, measurements’ statistics are not affected by conspiratorial boundary conditions so as to coincidentally “mimicking” the predictions of standard QM.
This is not necessarily related to living conscious beings.
In real experiments, like those of Zeilinger et al. (2017), measurement settings are affected/adjusted by the frequency of photons emitted from distant astrophysical sources.
In other, subsequent experiments, distant quasars were used for that.
The “violation of statistical independence” loophole is considered a dead end nowadays, more than ever…

278. Daine Danielson Says:

Shmi #218: regarding “gravcats,” there was some excitement for the prospect of gravitationally-mediated entanglement experiments among the qubit crowd. At least, I should say, I spent a significant fraction of coffee breaks chatting about these things with fellow qubitzers! But this could be a function of my own interest in the subject :).

Though I’m not sure I understand your comment about “not completely predicted by quantum mechanics.” QFT has a clear prediction for these experiments at the level of linearized, quantized metric perturbations.

279. Andrei Says:

WA #275,

“The argument here is that classical hidden variables have been utterly useless at explaining why Alice and Bob can only win the CHSH game with probability ~ 85% and no more.”

I’ve just pointed out that your argument is fallacious, since it assumes that the hidden variables and detector states are independent variables (they can’t be since the combined system, source + detectors has to satisfy Maxwell’s equations). Since the number 85% was calculated based on wrong assumptions there is nothing here to explain.

“When pressed hard enough to explain this, advocates of hidden variables shamelessly revert to the QM formalism and claim their theory predicts this because it is consistent with QM.”

You cannot “press me hard” to “explain” a number calculated based on faulty premises. Present me with a sound argument and a correct calculation and I will be happy to provide whatever explanation is necessary.

“Where did this number 85% come from?”

It came from a superficial treatment of a Bell test. It simply assumes that the experimental parts don’t interact (because they are far away; hint: electromagnetism has infinite range). In this way the experiment is modeled using Newtonian mechanics with contact forces only (billiard balls). It is no wonder such a model fails to explain entanglement, since it can’t explain anything involving electromagnetism, like induction for example.

“Better yet, where do numerous other universal bounds on spatial correlations come from? You say Maxwell’s equations. Fine. Can you use Maxwell’s equations to derive these bounds without blatantly introducing QM in your derivation?”

In order to derive something you need to be able to do the calculation. This is problematic, since anything involving more than a few particles (around 100, with some simplifying assumptions) cannot be solved, even numerically. This is also true of QM. We cannot calculate the neutron’s spin, we cannot derive the half-life of uranium, we cannot calculate the spectra of heavy atoms such as lead, etc. But if you can’t calculate something, it doesn’t mean that the theory is wrong. So, show me an experiment where the calculation was correctly performed and Maxwell’s equations are shown not to hold.

You also should not forget that, as proven in my post #251, without hidden variables you are in contradiction with SR. So, you have a lot to explain here – pretty much any experiment confirming SR.

280. Andrei Says:

“You insist on definitions of “relativistic locality/ causality” that are only operational ( and/ or “made up” especially for use in quantum foundational debates ). Such definitions ( that include agents etc)…”

“As I said already several times, in Special Relativity, “causally related events are only these that are timelike or null-separated “. That’s it! This is the proper notion of Relativistic Causality/ locality and the one that has physical significance.
The light cone structure is what defines causality in relativity at the *fundamental level*, nothing else.”

I fully agree, never claimed otherwise. You were the one insisting that no-signaling is enough for Relativistic Causality/ locality. I am happy you reconsidered.

I agree with your analysis of Bohm’s theory, but what you fail to mention is that, as my argument in #155 proves, standard QM (without hidden variables) is in exactly the same situation as Bohm. Randomness necessarily implies that “the settings on A affect the measurement outcome of B”, otherwise the predictions come out wrong.

281. Andrei Says:

“The “free will assumption” that has to do with Bell/ EPR etc, is something different: It’s the assumption of “Statistical Independence”. Roughly speaking, measurements’ statistics are not affected by conspiratorial boundary conditions so as to coincidentally “mimicking” the predictions of standard QM.”

You simply invent absurd definitions in order to make superdeterminism look silly. Let’s take a look at how Wikipedia defines the concept:

“Two events are independent, statistically independent, or stochastically independent if, informally speaking, the occurrence of one does not affect the probability of occurrence of the other or, equivalently, does not affect the odds. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other.”

Can you find “conspiratorial boundary conditions” or ““mimicking” the predictions of standard QM” here?

Or, let’s take Britannica:

“The definition of statistical independence—namely, that the probability of a compound event composed of the intersection of statistically independent events is the product of the probabilities of its components…”

Where do you see “conspiratorial boundary conditions” or ““mimicking” the predictions of standard QM” here?

Now, sticking with the correct definitions above, explain to me why the polarization of an EM wave has to be independent of the distribution of distant charges. What’s so obvious about that?
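As an aside, the textbook definition quoted here is mechanical to check for any finite joint distribution: independence holds iff every joint probability equals the product of its marginals. A small illustrative sketch (the names and example distributions are mine, not from the thread):

```python
from fractions import Fraction

def is_independent(joint):
    # joint maps (a, b) -> probability; independence means
    # P(A=a, B=b) == P(A=a) * P(B=b) for every pair.
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, Fraction(0)) + p
        pb[b] = pb.get(b, Fraction(0)) + p
    return all(p == pa[a] * pb[b] for (a, b), p in joint.items())

# Two fair coins with the product distribution: independent.
fair = {(a, b): Fraction(1, 4) for a in (0, 1) for b in (0, 1)}

# Perfectly correlated coins: both marginals are still uniform,
# but the joint doesn't factor, so the variables are dependent.
correlated = {(0, 0): Fraction(1, 2), (0, 1): Fraction(0),
              (1, 0): Fraction(0), (1, 1): Fraction(1, 2)}

print(is_independent(fair))        # True
print(is_independent(correlated))  # False
```

Exact `Fraction` arithmetic avoids spurious floating-point mismatches in the equality test.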

“In real experiments, like those from Zeilinger et al (2017), measurement settings are affected / adjusted by the frequency of photons emitted from distant astrophysical sources.
In other subsequent experiments there were distant quasars used for that.”

So what? Electromagnetism has an infinite range. Expanding the system doesn’t make any difference. The same equations need to be obeyed.

“The “violation of statistical independence” loophole is considered a dead end nowadays, more than ever…”

By whom?

282. fred Says:

Dimitris #276

“Free will is a vaguely defined emergent property that has to do with how we perceive the world ( and ourselves) at the macroscopic “classical ” level.
Of course it exists, for similar reasons that our thoughts, or music compositions, or philosophical concepts, or theatrical scripts , or the rules of sports games etc also exist.”

By that sort of standard all illusions “exist”:
“Free will” must exist since it just appeared as a token in this very sentence!

A person walking on a path during the night sees a discarded rope, and misconstrues it to be a snake, and is thus frightened. However, with another hard look, carefully scrutinizing the “snake,” it becomes clear that it was merely a rope. The rope comes to be known as a rope. As the person considers this further, he becomes aware that there is actually no substance of a rope, but that it is a composite made up of numerous strands of hemp and is only provisionally labeled as a rope.

This is called the analogy of the snake, rope, and hemp. The awareness of the hemp is equivalent to the perfectly realized true nature; the awareness that the rope is composed of the various causes and conditions of the hemp shows the dependently arisen nature. And the mistaken understanding of the rope to be a snake represents the nature of attachment to that which is pervasively discriminated.

284. fred Says:

There’s an interpretation of Free Will that’s compatible with determinism: what we call Free Will is actually the feeling we get (in consciousness) when we contemplate the universe around us “happening” the way it’s supposed to happen, the way things “naturally” unroll according to the laws of nature, and all the processes in our own mind are also part of this unfolding.

285. O. S. Dawg Says:

Pardon my ignorance, but can you explain the phrase ‘the It from Qubit community’? What first comes to my lay mind is a member of the Addams Family. Presumably I am confused.

286. Scott Says:

O. S. Dawg #284: Roughly speaking, the community interested in connections among the black hole information problem, AdS/CFT, and quantum information and computing. Central figures include Lenny Susskind, Patrick Hayden, John Preskill, Daniel Harlow, Juan Maldacena…

287. Mitchell Porter Says:

Dawg #284: “It from Qubit” is the name of the community. The name descends from a phrase due to physicist John Wheeler, “It from Bit”, meaning “physics from information theory”. “It from Qubit” then means “physics from quantum information theory”.

288. Andrei Says:

fred #283,

“There’s an interpretation of Free Will that’s compatible with determinism: what we call Free Will is actually the feeling we get (in consciousness) when we contemplate the universe around us “happening” the way it’s supposed to happen, the way things “naturally” unroll according to the laws of nature, and all the processes in our own mind are also part of this unfolding.”

I think it’s simpler than that. Free will is the expected consequence of the fact that we have incomplete information about the state of the universe. Even if we know for a fact that the universe is deterministic, the little information we have about it is compatible with many possible outcomes. We cannot rule out the physically impossible futures, since we have neither the required information nor the computing power to do that. So, we consider them possible. That’s all.

289. Willem Says:

Scott 285,

I strongly disagree with this statement. The community interested in these topics is much broader and more diverse than just a few individuals, and it is unfair to suggest that these individuals are the “central figures” in this field. There are many other researchers, both within and outside of these specific areas of study, who have made significant contributions to our understanding of black holes, AdS/CFT, and quantum information and computing. It is important to recognize and appreciate the work of all members of the community, regardless of their individual fame or prominence.

290. WA Says:

@ Andrei:

I think your position is different from the position of most who believe in hidden variables (including superdeterminists). The more common position is to agree that 85% is a universal maximum on the probability of winning CHSH, but then revert to QM for the explanation. I like your position better because it is falsifiable.

Instead of arguing back and forth here is a recipe for winning at least one Nobel prize (don’t forget about me when you’re swimming in money and fame):

Devise a strategy for winning the CHSH game with probability larger than 85% under similar experimental conditions as in any of the experiments demonstrating Bell inequality violation.

Your strategy can be based on any physical system, so you can use classical EM or quantum fields or any combination of them. Even if you don’t agree with the setup or its conclusion, do this for the sake of the prize (and humanity).

291. Mitchell Porter Says:

Dimitris Papadimitriou #225 (and anyone else interested),

I took my own advice (#244) and asked Daniel Jafferis about the Cauchy horizon issue. Basically he thinks that the Cauchy horizon of a GJW wormhole would be unstable under perturbation (something which is true of Cauchy horizons in general), so a generic GJW wormhole would e.g. have a lightlike singularity instead.

292. fred Says:

Andrei #287

I think such questions are often too abstract for most people until they’ve been subjected to what I would call a “forced choice”.

Imagine a man is at home with his family; strangers break in, tie up his wife and kids, and tell him they’re going to kill all the members of his family except one, and he has to pick which one will be spared.
The so-called perfect freedom of choice he is given (the choice is his and his alone, supposedly) clashes with the fact that the man simply can’t fully deconstruct the mental processes leading to any decision, whether based on some rational logic (“let me spare the youngest one”… why did he come up with this one instead of “let me spare my wife, since we can have new kids”?) or some internal bias based on feelings. And even if the choice were blind (the members of his family are all put in similar-looking boxes, and he can’t tell whom he is picking), he will be subjected to immense self-guilt/regret in the future, as if he truly had free will and the decision was actually all his and his alone (which is nonsensical since not only is everything deterministic, but pretty much everything in the world is fully connected… you just can’t isolate anyone’s brain as a black box, some pure source of causality).

This situation isn’t as far-fetched as it sounds – every day, cancer doctors dealing with kids have to advise families about a course of action: very difficult choices between several treatments that can’t be applied together, and whose outcomes are never certain. For the ones making the decisions, the nature of their so-called “free will” is very different.

293. Scott Says:

Willem #288: I completely agree that the community is much broader than just those people! But sometimes, to define or identify a community to outsiders, it’s convenient just to name some members of the community who the outsiders are most likely to have already heard of.

294. Dimitris Papadimitriou Says:

Mitchell Porter #290

Yes, that’s why I “worry” when people talk about traversable wormholes.
These unstable Cauchy horizons that develop null singularities when perturbed are, typically, the “bad guys” that intervene when people try to theoretically construct traversable wormholes (or non-singular black holes etc.).
Perhaps these models are already too simplistic and unrealistic, and these instabilities have to do with the (semi)classical theory anyway, but this issue won’t go away just like that…
One of the authors (A. Wall) was concerned about Cauchy horizons in some other papers (on a different but related topic: the GSL), so I at least expected some comments from him on this, but I didn’t find anything on that in the GJW paper.

295. Dimitris Papadimitriou Says:

Fred
Andrei

About the relation between determinism, free will, consciousness etc:
Consider the original version of Laplace’s demon in the Newtonian framework.
This being is a physical entity (so it obeys the same laws as everything else) that supposedly has unlimited computational and memory-storage abilities: there is no speed limit, light-cone structure, or holographic bound in that case, and everything is strictly deterministic and fully predictable.
So, Laplace’s Demon is able to predict the entire future history of the universe, but this knowledge is entirely useless to “him” (or “her”, or “it”, whatever you like): because he’s a physical entity too, everything he’ll do or even think is already fixed; he cannot alter even the slightest detail.
Even the Demon’s “thoughts” are predetermined; nothing can ever change that. Actually, this Being not only lacks any “god-like” powers (as in the common Laplace’s Demon folklore picture) but is as helpless as it gets!
In what sense is this Being conscious, or even “living”?

Actually, this situation is reminiscent of the grandfather paradox that appears in GR when Closed timelike curves are present. Again, the paradox is much stronger than in the common folklore picture:
Not only can the time traveller (who goes back in time, e.g., to prevent her younger self from building the machine) not change history, she can’t alter even the slightest detail of what has already happened (and which she remembers) without introducing inconsistency.

296. Andrei Says:

WA #289,

„Devise a strategy for winning the CHSH game with probability larger than 85% under similar experimental conditions as in any of the experiments demonstrating Bell inequality violation.”

I cannot devise such a strategy based on classical EM because the rules of the game are incompatible with classical EM.

The rules are (https://circles.math.ucla.edu/circles/lib/data/Handout-2987-2567.pdf):

„1. A referee chooses x, y ∈ {0, 1} uniformly at random.
2. The referee gives Alice x and Bob y.
3. Alice responds with a ∈ {0, 1} and Bob responds with b ∈ {0, 1}.
If x = y = 1, then Alice and Bob win when they output different responses. Otherwise,
Alice and Bob win when they output the same response.”
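The quoted rules translate directly into a referee loop. As an illustration (a sketch of mine, not anything from the linked handout), even the trivial classical strategy “both always answer 0” wins about 75% of rounds, losing only when x = y = 1:

```python
import random

def chsh_round(alice, bob, rng):
    # Rule 1: the referee draws x, y in {0,1} uniformly at random.
    x, y = rng.randint(0, 1), rng.randint(0, 1)
    # Rules 2-3: each player sees only their own question and answers a bit.
    a, b = alice(x), bob(y)
    # Win iff the answers differ when x = y = 1 and agree otherwise,
    # i.e. a XOR b == x AND y.
    return (a ^ b) == (x & y)

rng = random.Random(0)
rounds = 10000
# "Always answer 0" loses only on the (x, y) = (1, 1) question pair.
wins = sum(chsh_round(lambda x: 0, lambda y: 0, rng) for _ in range(rounds))
print(wins / rounds)  # close to 0.75
```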

What happens in this experiment, according to classical EM is:

We have an initial state, represented by the positions and momenta of all charges. The charges are all the electrons and nuclei that make up the source and detectors.

The EM fields at the location of the source are determined by the charge distribution/momenta as shown by the equations here (21.1):

https://www.feynmanlectures.caltech.edu/II_21.html

To get the actual fields you need to add the field vectors associated with each charge.

Once you have the fields, you can determine how the electron at the source accelerates, using the Lorentz force law, and thereby determine the so-called hidden variables: the polarisation of the emitted EM waves.
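As a toy illustration of the two steps just described (superposing per-charge field vectors, then applying the Lorentz force law), here is a sketch that keeps only the static Coulomb term of Feynman’s Eq. 21.1, dropping the retarded-time and radiation terms; all names and numerical values are my own illustrative choices:

```python
import math

K = 8.99e9  # Coulomb constant, N*m^2/C^2

def coulomb_field(charge_pos, q, point):
    # Static Coulomb term only: E = K*q*r_hat / |r|^2.
    # (Feynman II-21.1 also has retarded-time corrections, omitted here.)
    r = [p - c for p, c in zip(point, charge_pos)]
    d = math.sqrt(sum(x * x for x in r))
    return [K * q * x / d**3 for x in r]

def total_field(charges, point):
    # Superpose the field vectors associated with each charge.
    fields = [coulomb_field(pos, q, point) for pos, q in charges]
    return [sum(f[i] for f in fields) for i in range(3)]

def lorentz_force(q, E, v, B):
    # F = q * (E + v x B)
    cross = [v[1]*B[2] - v[2]*B[1],
             v[2]*B[0] - v[0]*B[2],
             v[0]*B[1] - v[1]*B[0]]
    return [q * (E[i] + cross[i]) for i in range(3)]

# A dipole: +1 C at x = -1 m and -1 C at x = +1 m. At the origin both
# field vectors point in +x, so superposition doubles the magnitude.
E = total_field([((-1.0, 0.0, 0.0), 1.0), ((1.0, 0.0, 0.0), -1.0)],
                (0.0, 0.0, 0.0))
print(E[0])  # 2 * K
```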

As you can see, those hidden variables are not chosen randomly. They are strictly determined by the global state that includes the detectors. Alice and Bob have a say in what x and y are. So the first rule of the game must be rejected.

What happens next is also strictly determined by the initial state of the system. In principle, you can evolve that state until the emitted EM waves pass through the polarizers.

The orientation of the polarizers at the time of detection is also determined by the initial state.

It’s easy to see why Bell’s independence condition fails here. In order to change the hidden variables you need to change the initial state, so the settings of the detectors may also change. Likewise, in order to change the settings at the time of detection you need to change the initial state, so the hidden variables may also change.

The CHSH game does not correctly model what happens in reality, this is why you cannot win.

297. fred Says:

Dimitris #294

The apparent paradox is: how come a deterministic universe is giving rise to the concept of choice?
What is there to choose? What is a choice?
In other words, how come a brain can evolve as a sort of “decision” engine when the individual atoms making up the brain just all move on rails?
The key thing to note is that the life arc of a particular brain/organism is indeed entirely predetermined, making the concept of choice somewhat paradoxical (choice really means something like: if the perception is X and the brain state is N, the behavior is f(X,N)… and this is only optimal as an approximation). But the probabilistic model of the world inside a brain is meant to work at the level of the entire species. So the concept of choice is really a type of behavior that “works” across many, many individuals within the same species (and subspecies, in a never-ending refinement), and it’s only across multiple equivalent individual lives that the concept of a counterfactual makes any sense (the “right” behavior is the one that works for the majority of individuals finding themselves in the same given situation).
And the “illusion” of free will comes from taking this concept of probability distribution (and choice) at the species level (where it applies) down to the individual level (where it doesn’t apply very well).
That said, you can then ask the same thing at the species level, taking the entire species as one macro organism. Is the entire species itself conscious? As you move up and up, eventually you can consider the entire earth as one evolving organism (Gaia).
Fundamentally we observe that the universe spontaneously evolves in a sort of “fractal” way: when fine structures start to mimic the whole, it makes those fine structures more persistent, and the better they mimic the whole, the more persistent they become, in an ever growing spiral of complexity.

298. fred Says:

Dimitris

“Actually, this situation is reminiscent of the grandfather paradox that appears in GR when Closed timelike curves are present. Again, the paradox is much stronger than in the common folklore picture”

More specifically on this.
I think it’s difficult because this assumes that time travel is actually possible, and all those conjectures (if possible) depend on our model of how time works.

It’s also similar to other thought experiments where we consider the possibility of predicting the future.
Just like for time travel, you can only predict things which are self-consistent.
And then it’s just a matter of building an object whose future state depends on the opposite of the prediction to break things down. Which shows that, logically, you can’t predict the future. Which is also not very surprising, since a part can never simulate the whole (perfectly): the whole contains the part, so the part has to simulate itself, and the simulation would require an infinite amount of resources (a part is by definition limited) in an infinite regress.
The same “resource” paradox would arise with time travel: either the mass you send back in time appears from nothing (the atoms already existed somewhere in that version of the universe, and you’re cloning them for free), adding to the total mass of the universe, in which case you could create infinite mass just by sending it back in time in a loop. Or the mass sent back in time replaces some mass that already existed, in which case you’re modifying the past no matter what (so things can’t be stable or self-consistent).
But if the models of the past and future allow for different instances, then prediction and time travel create oscillatory states (where predictions aren’t really predictions, because they don’t take themselves into account, and time travel isn’t really time travel, because we’re creating a new alternate version of the world).

299. Willem Says:

Scott: I understand what you’re saying, but I still feel that it is important to recognize and acknowledge the contributions of all members of the community, not just those who are already well-known. It’s easy to overlook or underestimate the work of those who may not have the same level of visibility or recognition. By only highlighting a few “central figures,” we risk perpetuating a hierarchy within the community and neglecting the valuable contributions of other researchers. It’s important to celebrate the diversity and inclusivity of our community and recognize the contributions of all members, regardless of their fame or prominence.

300. Ben Standeven Says:

“Now, sticking with the correct above definitions explain to me why the polarization of a EM wave has to be independent of the distribution of distant charges. What’s so obvious about that?”

That’s easy. Maxwell’s equations are local, so the polarization of an EM wave cannot depend on anything outside its light cone. In particular it is independent of any change in the distribution of distant charges occurring outside said light cone. So for example, if the setting of Bob’s detector is determined by data coming to him from a faraway pulsar, then it must be independent of Alice’s observation. (Of course, it is still possible that the prior states of the two particles are correlated, since they have a common source in their past.)

301. Andrei Says:

Ben Standeven #300,

“So for example, if the setting of Bob’s detector is determined by data coming to him from a faraway pulsar, then it must be independent of Alice’s observation.”

I assume here that the whole system (Alice + Bob + particle source + pulsars) satisfies Maxwell’s equations. I know that there are claims that this is not the case, based on some speculations about inflation, but I don’t think there is any strong evidence here.

302. WA Says:

Andrei:

“The CHSH game does not correctly model what happens in reality, this is why you cannot win.“

No offense, but this is gibberish. It’s a well-defined game, and that’s independent of how the players model nature or what their favorite interpretation of QM is. In fact, this is in part why such games allow us to draw deep, “device-independent” conclusions.

If every attempt you make at reaching the quantum winning probability of ~85% fails, then you could say there is an obstruction. Then it’s on you to explain that obstruction.
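The gap WA refers to can be checked directly: brute-forcing every deterministic classical strategy for the CHSH game tops out at a 75% win rate, while the textbook quantum strategy wins with probability cos²(π/8) ≈ 85.4% (the Tsirelson value). A minimal sketch in Python; the game definition and the quoted quantum winning probability are the standard textbook ones, not anything specific to this thread:

```python
import itertools
import math

# CHSH game: the referee sends uniformly random bits x to Alice and y to Bob;
# without communicating they answer bits a and b, and win iff a XOR b == x AND y.

# Classical bound: brute-force every deterministic strategy a = f(x), b = g(y).
best_classical = 0.0
for f in itertools.product([0, 1], repeat=2):       # f[x] is Alice's answer
    for g in itertools.product([0, 1], repeat=2):   # g[y] is Bob's answer
        wins = sum((f[x] ^ g[y]) == (x & y) for x in (0, 1) for y in (0, 1))
        best_classical = max(best_classical, wins / 4)

# Quantum value: measuring a shared Bell pair at angles 0 and pi/4 (Alice),
# pi/8 and -pi/8 (Bob) wins every question pair with probability cos^2(pi/8).
quantum = math.cos(math.pi / 8) ** 2

print(best_classical)     # 0.75
print(round(quantum, 4))  # 0.8536
```

Shared classical randomness doesn’t help, since any randomized classical strategy is a convex mixture of the deterministic ones enumerated above.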

Ben Standeven #300

Actually, the situation is much worse for superdeterminism, because of the existence of past “particle horizons” in cosmology.
If you look at a conformal diagram (which clearly depicts the light-cone structure) of a relevant cosmological model, you’ll notice that events at smaller and smaller spatial separations become more and more causally isolated as we approach the spacelike big bang (or the end of inflation, which is also depicted as a spacelike hypersurface).
This may seem counterintuitive at first, but it’s a consequence of the finiteness of the speed of light plus the expansion.
I took a look at the Wikipedia article (about superdeterminism) that Andrei mentioned, and as I suspected, the article (and the diagram that “explains” the “common origin”) is highly misleading, not even bothering to explain the really thorny issues.
Unfortunately, Wikipedia articles about physics do not have a uniform standard of quality.
Sometimes they’re trustworthy, sometimes they’re not. This is the latter case…

Fred #297

The situation is essentially the same even if you consider it from the point of view that individual persons have in such a strictly deterministic universe.
Imagine individual persons (Alice, Bob, Conny, etc.) in the same Newtonian world.
Assume that the Demon is a device (with unbounded abilities, etc.) and individual persons have only limited access to its predictions.
For example, Bob is concerned about his near future and asks the machine for a prediction of, say, his next month.
Imagine now that the device shows him, in sufficient detail, the prediction:
The most striking event is that two weeks later, at some specific moment, Bob will step on a banana peel, fall down the stairs, and break his leg on his way to buy biscuits from the store near his house.
Now he knows what will happen to him (in the suitably detailed coarse-grained prediction that the demon has shown him), but he can’t do anything about it. All his future actions and thoughts are already fixed!
It seems entirely crazy that he can’t alter any detail (for example, why not buy the biscuits the previous day?) and he “knows” (?) it…
Again, in what sense is he a conscious being at all? No need to mention free will: in that deterministic universe it is entirely meaningless, even in the common compatibilist sense (I’m not talking here about “libertarian” free will, etc.).
These weird/paradoxical situations are inherent in such fully deterministic and predictable hypothetical worlds, and the weirdness becomes especially apparent in cases where even some limited knowledge of both the past and part of the future is accessible (as in the above situation, or in GR in the case of non-globally-hyperbolic spacetimes that have closed timelike curves).
Only in inherently probabilistic physics is consciousness or compatibilist free will conceivable, in my opinion…

305. Andrei Says:

WA #302

“It’s a well defined game and that’s independent of how the players model nature or what their favorite interpretation of QM is. In fact this is in part why such games allow us to draw deep, “device-independent”, conclusions.”

The game is not independent of the model. The game requires x and y to be randomly chosen, and it does not allow the characters to communicate. There is no such thing as randomness in a deterministic theory. You cannot model an EM system that requires all objects to communicate all the time (via EM fields), and in which all objects must agree (read: satisfy Maxwell’s equations), using isolated characters (referee, Alice, and Bob) who don’t communicate and do random things. Your requirement is absurd.

306. “Post-empirical science” – Flippiefanus Says:

[…] to mislead more people), many people have strongly criticized the story, including John Horgan, Scott Aaronson, Ethan Siegel, and Peter Woit. I can go on to try and clarify, but these posts are doing a much […]

307. Laurentius Zamoyski Says:

Forgive me for the somewhat off-topic comment (I can no longer comment on your “Zen” blog post), but I believe this is important enough to violate whatever unwritten conventions hold in your virtual living room. Specifically, I’d like to point out that there is a serious philosopher at the very university where you work who has engaged with the subject of how one ought to, or at least could, interpret quantum mechanics. I have in mind Robert Koons (https://liberalarts.utexas.edu/philosophy/faculty/koons).

And incidentally, he has just published a book that addresses this very subject (here’s a quick review by Ed Feser: https://www.thepublicdiscourse.com/2023/01/86512/). I really do think these ideas ought to be taken seriously, and it would be unfortunate not to bring them to your attention.

Cheers!
