## My New York Times op-ed on quantum supremacy

I’d like to offer special thanks to the editor in charge, Eleanor Barkhorn, who commissioned this piece and then went way, way beyond the call of duty to get it right—including relaxing the usual length limit to let me squeeze in amplitudes and interference, and working late into the night to fix last-minute problems. Obviously I take sole responsibility for whatever errors remain.

Of course a lot of material still ended up on the cutting room floor, including a little riff about Andrew Yang’s tweet that because of quantum supremacy, now “no code is uncrackable,” as well as Ivanka Trump’s tweet giving credit for Google’s experiment (one that Google was working toward since 2015) partly to her father’s administration.

While I’m posting: those of a more technical bent might want to check out my new short preprint with UT undergraduate Sam Gunn, where we directly study the complexity-theoretic hardness of spoofing Google’s linear cross-entropy benchmark using a classical computer. Enjoy!

### 147 Responses to “My New York Times op-ed on quantum supremacy”

1. Measurement Says:

Nice article!
Among sins in qubit journalism, a hardy survivor appears to be the personification of the measurement problem with “when you look”/”when you don’t look”. I wonder if there is a better, twenty-word alternative.

2. Scott Says:

Measurement #1: I thought about it! And, I confess, failed to come up with a better alternative. This problem is the lesser, pop-science sibling of the measurement problem itself. Maybe “when you look” vs. “when the qubit is isolated”? (Ah, but then why are those two being contrasted with each other? What happens if the qubit is not isolated, but you also don’t look?)

3. lewikee Says:

Measurement #1: I would prefer “When collapsing the qubit to a state by measuring it.”

4. Measurement Says:

Scott #2: My biased view: it doesn’t matter if the isolation is engineered (“you look”) or not, since measurements are like phase transitions in that they both display localized reductions in entropy. This doesn’t answer the what (phases of what?) or why (which instability drove the transition?) but it at least rules out humans from the explanation, without invoking MWI.

5. lewikee Says:

Scott, great article. One bit that was a little confusing/misleading was in the IBM paragraph. You mentioned that IBM built its own 53-qubit processor and then went straight into their simulation project and the Summit computer, without saying that the latter is classical. All technically correct of course, but many readers could get the wrong impression.

6. Scott Says:

lewikee #3, Measurement #4: I was careful to say that when you don’t look, the amplitudes CAN interfere. I.e., they might or might not, depending on some additional condition that I haven’t specified (and with a few hundred more words, I could’ve talked about preventing the leakage of information into the environment, i.e. decoherence).

7. Scott Says:

lewikee #5: My original draft said nothing about IBM’s own 53-qubit device, but the editors and fact-checkers felt it was essential to add a sentence about it. I apologize for not explicitly adding, after that sentence, that we were now going back to talking about classical simulations.

8. nitpick345 Says:

Nicely written piece.

Nevertheless, I feel compelled to point out phrasing that could be considered a technical inaccuracy: You describe a superconducting qubit as a “loop of wire around which current can flow […]”, i.e., a flux qubit. Google’s chip, however, uses Transmon-type qubits that are weakly anharmonic LC-oscillators.

The problem of accurately portraying the excitations of a Transmon circuit in a NYT op-ed remains. The whole construction is often called an artificial atom, but this moniker evades a tangible description of what the physical qubit “is”.

[Of course, a frequency-tunable Transmon still contains a superconducting loop in its core, but there, the circulating current is used to tune the operating frequency, and is mostly orthogonal to the computational basis states.]

9. Scott Says:

nitpick345 #8: Thanks for the correction! I was relying in part on a presentation by Shyam Shankar, UT Austin’s local superconducting qubits expert, who did indeed speak about the qubit as a superposition of energy levels of the current. But maybe the point I missed is that a transmon/xmon is more complicated than your run-of-the-mill superconducting qubit?

10. Jeremy H Says:

Google is now working toward demonstrating my protocol; it bought the non-exclusive intellectual property rights last year.

I’d love to hear this story. Do proof-of-concept implementations typically require IP rights, or is this case special for some reason? And how does one go about deciding on a price?

11. Teresa Mendes Says:

Hi Scott,

Here we are, again, now following another quantum computing dispute: the Google/IBM battle of technological giants over ‘quantum supremacy’.

Let me, on this matter, return to a foundational argument: No entanglement, no scalable quantum computer. (Remember?) Where is the non-local entanglement evidence in Google’s quantum computer? Up to now, Google’s quantum computer is only a complicated analog computer that has no new features that might explain why it might compute beyond classical limits.

Google’s quantum computer is built following the [Ansmann, 2009] (1) protocol. Not only did this experiment not show an experimental rejection of local realism, due to lack of space-like separation; formally it’s not a Bell test, as the experimental results can also be explained by crosstalk, a perfectly reasonable local realist effect. [Alicki, 2009] (2) reanalyzed Ansmann’s experimental data and found a bigger value for crosstalk than the original paper reported, due to a typographical mistake (acknowledged by the authors and soon corrected in a new version of the supplementary information), and also questioned its results. Later, [Especial, 2012] (3) corrected the [Kofman/Korotkov, 2008] (4) inequality used by [Ansmann, 2009], from S(c) ≤ 2+2p(c) (or S(c) ≤ 2+4p(c)) to S(c) ≤ 2+16p(c). For the amended crosstalk probability of Ansmann’s experiment, p(c) = 1.5% (or more, according to [Alicki, 2009]), the CHSH limit for a local realist explanation by crosstalk is S(c) ≤ 2.24. The experimental S-quantity reported is 2.073.
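A quick check of the arithmetic above (the inequality S(c) ≤ 2 + 16·p(c) is taken at face value from the cited [Especial, 2012], not independently verified here):

```python
# Check the CHSH crosstalk bound quoted above. The inequality comes from
# the comment's citation of [Especial, 2012]; the numbers are those quoted.
p_c = 0.015                   # amended crosstalk probability, per [Alicki, 2009]
bound = 2 + 16 * p_c          # local-realist CHSH limit under crosstalk
S_reported = 2.073            # S-quantity reported by [Ansmann, 2009]
print(bound, S_reported <= bound)
```

The bound evaluates to 2.24, and the reported S of 2.073 indeed falls below it, which is the point being made.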

Can you see my point?

I really don’t see the importance of speed in the calculation of Google’s quantum computer vs. IBM’s digital supercomputer. Historically, from the ’50s through the ’70s, analog computers were built that were quite useful and typically much faster than digital ones for real-time processes. In time they were practically abandoned and replaced by digital computers, which were more economical and precise. But analog computing is still under research, and has been shown to be much more efficient at simulating biological systems, taking as input differential equations, which biologists frequently use to describe cell dynamics, as reported by Larry Hardesty, an MIT news officer, in 2016 (5).

An excerpt of that article is illustrative: “The researchers tested their compiler on five sets of differential equations commonly used in biological research. On the simplest test set, with only four equations, the compiler took less than a minute to produce an analog implementation; with the most complicated, with 75 differential equations, it took close to an hour. But designing an implementation by hand would have taken much longer. […]

A digital circuit, by contrast, needs to slice time into thousands or even millions of tiny intervals and solve the full set of equations for each of them. And each transistor in the circuit can represent only one of two values, instead of a continuous range of values. “With a few transistors, cytomorphic analog circuits can solve complicated differential equations — including the effects of noise — that would take millions of digital transistors and millions of digital clock cycles,” Sarpeshkar says.”

Both analog and digital computers can be simulated by a Turing machine. Quantum supremacy should be a challenge to the Turing machine, not an analog-vs-digital, absolutely classical old battle. With no demonstrable non-local entanglement, what is the fuss?

[IBM will not easily use this argument, as IBM’s quantum computer is using the same protocol 😉 ]

(1) Ansmann M., et al. Violation of Bell’s inequality in Josephson phase qubits. Nature. 2009 Sep 24;461(7263):504-6. doi: 10.1038/nature08363. Updated 2009 Dec,24.
(2) Alicki, R., Remarks on the violation of Bell’s inequality in Josephson phase qubits, 2009, https://arxiv.org/abs/0911.4009v1
(3) Especial, J., Bell inequalities under non ideal conditions, Ann. Fond. Louis de Broglie 37 (2012), https://arxiv.org/pdf/1205.4010.pdf
(4) Kofman, A.G., Korotkov, A. N., Analysis of Bell inequality violation in superconducting phase qubits, Phys. Rev. B 77, 104502 (2008), https://arxiv.org/abs/0707.0036
(5) Hardesty, L., MIT news, Analog computing returns, June 20, 2016, https://news.mit.edu/2016/analog-computing-organs-organisms-0620

12. Egg Syntax Says:

Hi Scott! I’ve sent the quantum supremacy FAQ and your recent discussion of the result to various people; I’ll send the less technical folks the NYT piece and tell them to start there instead. You and the editor have done a terrific job at making it impeccably clear for a general audience.

I was interested to see in the piece that “Google is now working toward demonstrating my protocol; it bought the non-exclusive intellectual property rights last year.” Does that mean that you (or UT) have patented the algorithm? As a programmer, I’ve struggled a lot with the ethics of patenting algorithms. I always appreciate your take on ethical questions, and I’d love to hear your thoughts on this one.

13. William Hird Says:

Hi Scott,
The Google result seems fantastic, but doesn’t their experiment have to be replicated by another party to be accepted (in the normal scientific way) by the scientific community? Who would be some potential candidates who would (could?) be willing to spend the time and money to exactly replicate the Google experiment?

14. Measurement Says:

Scott #6: But that’s very close to implying that a bundle of straw is the usual reason that a camel’s back breaks, when a bundle of other stuff would have worked as well. Shouldn’t the stress be on “bundle”?
I see that inserting the human element in the measurement problem conveys a nice dollop of surprise, like it should; but it comes at the risk of propagating a straw man 🙂
Maybe mentioning that a human messing around is *one* example of inducing decoherence, and providing an appropriate link, would mitigate confusion.
I understand the constraint on word-limit though. Thanks.

15. dennis Says:

Hi Scott, what do you think about the following point of view:

The Google device samples from the distribution p(x) = F·p_U(x) + (1−F)/2^n, i.e. a mixture of the ideal quantum computer running circuit U and uniform noise (with F ≈ 0.002 for 53 qubits and depth 20).

But what the data (Fig. 4) actually shows is that F decreases exponentially with the number of qubits n and the circuit depth d. Isn’t that evidence to expect that scaling up would be “exponentially hard”?
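A toy numerical version of this noise model (the qubit count, fidelity, and ideal distribution below are illustrative stand-ins, not Google's actual data): sample from the mixture and recover F with the linear cross-entropy estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                       # toy number of qubits (Google used 53)
N = 2 ** n
F = 0.1                      # assumed circuit fidelity for this sketch

# Stand-in for the ideal circuit's output distribution: Porter-Thomas-like,
# i.e. probabilities drawn as Exp(1) variables and normalized.
p_ideal = rng.exponential(size=N)
p_ideal /= p_ideal.sum()

# Draw samples from the mixture p(x) = F * p_ideal(x) + (1 - F) / N
num_samples = 200_000
from_ideal = rng.random(num_samples) < F
samples = np.where(from_ideal,
                   rng.choice(N, size=num_samples, p=p_ideal),
                   rng.integers(0, N, size=num_samples))

# Linear cross-entropy benchmark: F_est = N * <p_ideal(x)> - 1,
# which averages out to F under this model.
F_est = N * p_ideal[samples].mean() - 1
print(F_est)
```

The printed estimate lands close to the assumed F; the exponential decay dennis mentions is the statement that this recovered F shrinks as n and the depth d grow.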

16. nitpick345 Says:

Scott #9: Not more complicated, just different! But, to my surprise, the issue is not resolvable by simple Wikipedia link.

The “Transmon” page calls it a “plasma oscillation qubit”, which is the correct jargon but also unfortunate since the experiment certainly doesn’t involve any plasma.

Also, similarity with the charge qubit is misleading since the computational basis states do not have definite charge.

The article on Superconducting quantum computing isn’t really helpful either (“a hybridization of other qubit archetypes”).

17. Scott Says:

Jeremy H #10 and Egg Syntax #12: The actual details of the non-exclusive license agreement between UT and Google are apparently still private, but the fact of an agreement is not (which is good, because the NYT required that I disclose it 😀 ). Hopefully the following comments will be helpful:

(1) Google, like everyone else, is completely welcome (and strongly encouraged) to demonstrate my certified randomness protocol for scientific purposes, free of charge. The only reason for a license agreement is that Google wanted to have the option in the future, if all goes well, of potentially incorporating the protocol into some aspects of its business.

(2) It’s not even allowed to patent an algorithm (much less a theoretical analysis of an algorithm), but for whatever it’s worth, you can patent a “method” that combines algorithms with physical hardware.

(3) I don’t think anyone has any real idea how to value such things at present.

(4) My own thinking was that simply offering licenses, to whatever quantum computing companies might be interested in the option to try to monetize this, is the “bargain” route—the route that’s only one step removed from taking my work (and hence, taking the “investments” made in my work by the state of Texas and the American taxpayer), and just giving it all away for free to for-profit corporations. The “greedy” route, by contrast (or maybe some would say the smart route? 🙂 ) would have been to form a startup company around the protocol, and then try to get acquired by one of the big players.

18. Scott Says:

William Hird #13:

Who would be some potential candidates who would (could?) be willing to spend the time and money to exactly replicate the Google experiment?

IBM? Rigetti? IonQ? Misha Lukin’s group?

And it wouldn’t need to be an “exact” replication. Even just another, independent sampling-based quantum supremacy experiment, by a different group (and possibly in a different hardware platform), would increase our confidence in the result, while adding to our knowledge about how it can be achieved.

Equally important, we need to give people some time (a few more years at the least…) to try to undercut this type of quantum supremacy claim, by designing better classical spoofing algorithms and better ways to implement those algorithms in hardware. My guess, exactly as the Google authors write in their paper, is that the classical simulations will improve, but also that they’ll be consistently outperformed from this point onwards (at least on artificial sampling problems, if not on other problems) by improvements to the quantum computers.

19. Scott Says:

dennis #15: Yes, we’ve been over that multiple times in the previous threads.

Even now, I would have no idea how to explain the Google result, without saying something about the exponential character of quantum states—the fact that 53 qubits correspond to 9 quadrillion amplitudes. In that sense, this is already important evidence for the reality of quantum computational speedups, and the achievability of those speedups by humans.

On the other hand, you’re absolutely right that, until you can get to fault-tolerance, the fidelity will fall off exponentially with the circuit depth. The way I’d put the point is simply that it’s a real quantum speedup, but it’s not yet a scalable speedup. Which, indeed, is precisely the take-home message that I chose to stress in this op-ed as well as my blog posts about this. 🙂

20. Peter Morgan Says:

Any comments on this, before I send it to letters@nytimes.com? A few brave souls on Facebook have dared to like it, but one person has rightly opined that it’s too technical. The suggested length limit for letters to the NYT, 150-175 words, is killing.
——————————

Scott Aaronson has for several years rightly objected to the idea of “a qubit as just a bit that can be both 0 and 1”, but his suggestion that the “honest version” is “all about amplitudes” also needs pushback.

The mathematics in the first instance gives models for (1) measurements we might perform and (2) states that give a probability for each possible result for each measurement. Amplitudes are a distraction. The physics of an experiment, on the other hand, exquisitely prepares exotic materials and circuitry as an approximate realization of a state and examines the many electronic signals connected to the experiment for tell-tale events. In simple cases, an event is a sudden jump of voltage, but often many signals are analyzed and a “trigger” decides when an event has happened. We say that we “make a measurement”, but in modern practice an event happens and a computer records when it and billions of others happened, all without human intervention.

Subtleties abound in transforming records of events in mathematical ways that are often incompatible with each other, but they are gradually becoming a second nature extension of classical thinking.

Peter Morgan, Research Associate, Physics Department, Yale University.
——————————

21. Egg Syntax Says:

Scott #17: “It’s not even allowed to patent an algorithm (much less a theoretical analysis of an algorithm), but for whatever it’s worth, you can patent a “method” that combines algorithms with physical hardware.”

I am decidedly not a lawyer, but my understanding is that while one interpretation of the law is that you can’t patent an algorithm, US software patent law is a conflicted mess, and so in practice, hundreds of thousands of patents on (more or less) algorithms have been issued. A few well-known examples are patents on compression algorithms, encoding algorithms, file formats, and so on.

All of that is somewhat beside the point, though; I was hoping for your thoughts on the ethics involved, and I appreciate your taking the time to give a thoughtful and considered answer!

22. Sniffnoy Says:

Scott #17:

To expand on what Egg Syntax is saying, patenting a “device” like this has basically been treated legally as effectively a patent on the algorithm, even though direct patents on algorithms are disallowed. Thus people use it as a way to patent algorithms or software. This is what people are talking about when they talk about being legally unable to use certain algorithms, or being on dicey ground using certain algorithms, because they’re patented. Software patents have been a real setback to the writing of software, I’m afraid.

(Admittedly, to a large extent this is because so much of what’s been patented is obvious — not because it’s software per se — and should never have been patentable based on this. But the patent office takes a pretty narrow view of what’s obvious, it seems…)

23. dennis Says:

Scott #19: Thanks. If we have to rely on fault tolerance, we may be in some trouble. Although we have discovered quite a few threshold theorems, I’m not sure that the complicated time evolution going on in Google’s device can even remotely satisfy any of their tough assumptions.

24. Scott Says:

Dennis #23: The question that you should be asking yourself is not, “am I logically allowed to take this position—the position that quantum supremacy works, that the previous 800,000 predictions of quantum mechanics all worked, but that quantum fault tolerance, no, that’s where nature draws the line?” Because clearly you can take that position, in the sense that no one today will be able to prove that you are wrong. The question you should be asking yourself is, “is this position going to be a winning bet?” And your judgment might be distorted by the fact that, when you try to imagine the world five years or ten years into the future, you see yourself winning the bet. Is it possible that you’re insufficiently moved by the fact that, even if it takes 200 or 2,000 more years, if it happens after that long, then for the question that I thought I was discussing, I win?

25. Parasolv Says:

For quantum error correction to clinch the case for quantum supremacy, it has to be possible to get to a few thousand qubits in the first place. So if that were ruled out, e.g. if such a large number of usable qubits required more mass-energy than our Hubble volume contains, or somehow had to violate the Bekenstein bound, then classical supremacy could still be a possibility.

Is it possible to rule out there being such an absolute lower bound on the scale of apparatus needed?

26. Alonzo and Alan Says:

Sniffnoy #22: Sounds like the patent office disagrees with the Church-Turing thesis.

27. Michael Rust Says:

I admire you a great deal, Scott, and I typically ignore this sort of stuff, but you are wrong in your characterization of Ivanka Trump’s statements. I also believe it was wrong of you, as a scientist, to change the phrasing from _collaboration_ to a socially loaded term like _credit_. The effort’s inception date has no relevance in its original context (but a lot more in yours!). Anyway, I know of no statements made that are equivalent to your portrayal. I like you because you speak to my mind; there’s no shortage of pundits today to speak to people’s passions.

For anyone who bothered to look first hand, Ivanka referred to the experiment as a collaboration–it was. Google used DOE compute resources and NASA expertise under the Trump administration to achieve results. Was it a collaboration with the Trump administration? Well, isn’t that what the National Quantum Coordination Office inside the White House is responsible for? The Trump administration has made this technology a priority as a matter of national strategy, both legislatively and operationally. It may be emotionally inconvenient, but Trump signing legislation like the National Quantum Initiative Act (providing >$1B) and the White House’s coordination office coordinating substantial operational resources merits some finite, positive consideration if you care about quantum computing. From Scott’s blog things seem consistently negative, contemptuous, and condescending even in the face of laudable action to fund science and support folks like Scott. Does the new moral order that is progressivism sound any more appealing than the old one?

Before I yield the soap box I’d like to say, I think this is a broader problem and it causes credibility issues in the relationship between the scientific community and the general public. It’s counterproductive to simultaneously bemoan the loss of informed dialogue (especially from a position of intellectual authority like Scott’s) and contribute to the muddying of clearer thinking. I desire to know the facts, not the caricature of reality I get from the _news_ and misguided experts.

28. Scott Says:

Michael Rust #27: On reflection, I did not bend over backwards to give Ivanka Trump’s tweets the most charitable interpretation that I possibly could have—only the most reasonable interpretation. But I was consistent: I also put the most reasonable rather than the most charitable interpretation on Andrew Yang’s remarks about codebreaking. And Andrew Yang, unlike a member of Donald Trump’s hereditary junta ruling over what was once the United States, actually seems like a decent human being who arguably would’ve deserved some unreasonable charitableness.

29. Scott Says:

Parasolv #25:

Is it possible to rule out there being such an absolute lower bound on the scale of apparatus needed?

Was it possible to rule out in Turing’s time that there might be some fundamental principle, yet to be discovered, that forces all possible machines for doing arithmetic to have at least the volume of the human brain? No, we couldn’t have ruled out such a principle. But I also see no reason whatsoever why one should have bet on one. And if you feel the same way in that case, then you understand my view of those who say there must be some fundamental reason why you could never scale random circuit sampling beyond 100 qubits or whatever, even though you can do it with 53 qubits.

30. Job Says:

I rendered the results of one of the 14 qubit runs, using the samples that were posted.

I see a speckle; it’s definitely not uniformly random. Hey, I wasn’t sure that it wouldn’t be so noisy as to be almost indistinguishable.

These are small images, 128×128, with one pixel per possible output value such that the brightness of each pixel reflects the output value’s sampled probability.

For reference, this is uniformly random:
https://i.imgur.com/dv1j4Ta.png

This is one of the 14 qubit runs (n14_m14_s0_e0_pEFGH):
https://i.imgur.com/tYM6TP1.png

And i also did one for sampling from Simon circuits of the same size:
https://i.imgur.com/HMvCY8k.png

These are all assembled from 500K samples.

Now, the last one is the coolest one, I think we can agree on that, and it will still look cool at 75% fidelity (I checked).

Looks like some kind of mega-slit experiment. Any skeptic who concedes before we see that image out of a QC is not trying hard enough, ok? 🙂
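For anyone who wants to reproduce this kind of rendering, here is a minimal sketch of the recipe described above (using synthetic speckle-like samples as a stand-in for the posted data files):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 14                          # 2**14 possible outcomes = one 128x128 image
N = 2 ** n

# Synthetic stand-in for the posted samples: a Porter-Thomas-style
# speckled distribution, sampled 500K times as in the comment.
p = rng.exponential(size=N)
p /= p.sum()
samples = rng.choice(N, size=500_000, p=p)

# One pixel per possible output value; brightness reflects the
# output value's sampled frequency.
counts = np.bincount(samples, minlength=N)
img = counts.reshape(128, 128).astype(float)
img = np.rint(255 * img / img.max()).astype(np.uint8)
print(img.shape, int(img.max()))
```

From here, `img` can be written out with any image library; the uniformly random reference image corresponds to replacing `p` with the flat distribution.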

31. Michael Rust Says:

Scott, that’s my only point though. I find the following reasonable:

1. Observe claim of collaboration
2. Evaluate claim (I find the claim true for the reasons in #27)
3. Contribute “meaningfully” (i.e., disclose additional information, etc…)

Instead I’ve seen a lot of unreasonable responses from people:
1. Observe claim of collaboration
2. Emotional state disrupted
3. Mock publicly for “taking credit”

The latter is only reasonable in a social context for rhetorical value (especially in a conformist environment amongst like-minded people), whereas the former is what I would say is more reasonable (objective). Most of us have finite resources (intelligence, time) and these caricatures become absorbed into our reality. It’s a self-reinforcing feedback cycle that contributes to further polarization, hostility, etc… I’m not saying I can (or want to) defend everything about Trump, but I will say I think the above are the first steps to a more “rational” society.

32. Rainer Says:

@nitpick345, comment#8

“Of course, a frequency-tunable Transmon still contains a superconducting loop in its core, but there, the circulating current is used to tune the operating frequency, and is mostly orthogonal to the computational basis states.”

Can you please explain what you mean by “orthogonal to the computational basis states.” ?

33. A Says:

Physics Nobel for you 2020?

34. Ehud Schreiber Says:

Hi Scott,

In your preprint, in the definition of the Porter-Thomas distribution, haven’t you left out a factor of p?
By the way, the distribution itself is just a simple gamma distribution; the Porter-Thomas stuff is the fact that this applies to the case at hand – if I understand correctly, the squared size of the eigenvalue of a random unitary matrix.

35. Scott Says:

A #33: They’re still giving out physics Nobels for work done in the early 1960s. Though Alfred Nobel’s will explicitly specified “work done in the last year,” the modern Nobel has become a longevity competition as much as anything else.

I am now taking metformin and eating a much better diet (I lost 20 pounds over the summer, and lowered my blood glucose from pre-diabetic back to normal range). Not because of that though. 😉

36. nitpick345 Says:

Rainer #32: One can analyze the SQUID circuit in terms of a circulating current flow and a “through” current flow.

External tuning flux induces a finite circulating current. For the “through” mode, the leading-order effect is to change the stiffness of the flux potential, i.e., Josephson inductance. It is the “through” current that, together with the shunting capacitance, forms the anharmonic LC circuit where the Transmon excitations live.

The two modes are orthogonal in the sense that the vectors [1 -1] and [1/2 1/2], corresponding to the phase offsets in the two Josephson junctions, are orthogonal.

37. Scott Says:

Ehud #34: The ensemble of probability distributions that actually arises in an experiment like Google’s, and the one that we analyze, is one where, as you vary over quantum circuits, the probability of measuring a given string is extremely close to an exponentially distributed random variable with mean 2^(-n). (Why? Because it’s the squared absolute value of a complex number of mean 0, whose real and imaginary parts are independent Gaussians.) The probabilities of different strings aren’t fully independent (as one can see by just counting degrees of freedom), but for the applications we care about, they can usually be treated as if they were.

Assuming you agree with the above, the only remaining question is what to call this ensemble of probability distributions. The physicists at the Google team all called the ensemble “Porter-Thomas,” so we followed them in calling it the same. But does the name “Porter-Thomas” normally refer to what you’d get by squaring real Gaussians rather than complex ones, and is that the reason for the discrepancy? (Someone raised a similar issue with us before.)
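To illustrate the point numerically, here is a small sketch (with i.i.d. complex Gaussians standing in for an actual random circuit's amplitudes): after scaling by N = 2^n, the outcome probabilities behave like an Exp(1) random variable, with mean 1 and second moment near 2.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 14
N = 2 ** n

# Amplitudes with independent Gaussian real and imaginary parts, normalized.
z = rng.normal(size=N) + 1j * rng.normal(size=N)
probs = np.abs(z) ** 2
probs /= probs.sum()

# Porter-Thomas behavior: N * p looks like Exp(1), i.e. mean 1
# and second moment close to 2.
scaled = N * probs
print(scaled.mean(), (scaled ** 2).mean())
```

Squaring a single real Gaussian instead would give a chi-squared variable with one degree of freedom, which is the version of "Porter-Thomas" some references mean; that is one plausible source of the naming discrepancy raised above.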

38. Scott Says:

Teresa Mendes #11:

Where is the non-local entanglement evidence in Google’s quantum computer?

In the Martinis group’s earlier papers (search the arXiv), they directly characterized the device’s output states. One small byproduct was to prove that, yes, they were entangled.

But furthermore, we have no explanation for how they could have obtained the speedup they did, if the machine was not capable of generating entangled states. Indeed, even after a circuit depth of 20, they’re still able to detect “bumps” in the distributions at a fidelity of around 0.002. With n=53 qubits, this should suffice to take them far outside the ball of separable mixed states around the maximally mixed state, and deep into a regime where almost all states are entangled.

You can’t explain this in terms of the device being “analog,” because qubits are finite-dimensional quantum systems, and the gates that we’re talking about here are also discrete-time rather than continuous-time. The only “continuous” part is the amplitudes themselves. Furthermore, while the machine of course has analog errors—i.e., it doesn’t exactly implement the target quantum circuit—that only makes it less likely rather than more likely to pass the benchmark. But it did pass the benchmark.

Ironically, the classical simulations of the Google device—including Google’s own simulations, as well as the ones proposed by IBM and by Johnnie Gray—would not work if the machine were just a classical analog device like you say. All of these simulations take as their starting point that it really is a quantum circuit acting on a vector of 2^53 amplitudes—and that’s why the simulations work, or (in the latter two cases) why they will work.

I can’t stress enough that we’re no longer talking about a proposal, or an idea, but about the results of an experiment that has actually been done—one whose results, just like those of every other such experiment, are easy to explain with standard QM (entanglement, exponentially large Hilbert space, the whole megillah), but would be hopeless to explain otherwise.

I remember you’ve been here, repeatedly, to argue that not even the Bell experiments proved the reality of entanglement in our world, for convoluted reasons that didn’t make sense to anyone else here. The path that you’ve chosen is an unbelievably lonely and sad one. It’s a path that forces you, over and over, into vociferous denials that experiments must not have had the results that they straightforwardly did have, decades after the rest of the physics community has moved on.

39. nitpick345 Says:

Job #30: There’s also another way to get a speckle pattern from a quantum circuit, which is by continuously varying the rotation angles of the gates. See for example Fig. 3(b) of arXiv:1804.11326. This is not an example of a random circuit sampling/supremacy experiment, but the analogy with laser speckle is perhaps a bit more direct.

40. Christian Says:

#11 Teresa Mendes
Just a bit of follow-up on your argument. You reference the Ansmann2009 paper, indeed from the Martinis group, but using a technically very different superconducting qubit architecture. As you point out, Alicki found that crosstalk could to a large extent limit their results, and some of their calculations of p-values were a bit rudimentary. Nonetheless, this 2009 result was the first example of a direct measurement of Bell inequality violation with superconducting qubits with S>2, without “correcting” for readout errors; thus, the result showcased state-of-the-art gates and readout at the time.

Google now uses transmons (as mentioned elsewhere) rather than phase qubits. With transmon qubits, readout cross-talk can be engineered to be very small, and for most larger groups it would be more or less straightforward to replicate the Ansmann 2009 results and obtain very large S-values. An example can be found in The BIG Bell Test Collaboration, Nature 557, 212-216 (2018)* [arXiv:1805.04431], experiment 7, where a Bell test was performed with transmon qubits and S≤2 was refuted with a p-value as small as your heart desires, crucially analyzed with proper statistical methods (D. Elkouss and S. Wehner, npj Quantum Information 2, 16026 (2016)).

*This paper’s main value (in my opinion as a co-author) is not so much the physics as it is a good demonstration of a "citizen science" project where the public engages actively in research.

41. Peter Morgan Says:

“I can’t stress enough that we’re no longer talking about a proposal, or an idea, but about the results of an experiment that has actually been done—one whose results, just like those of every other such experiment, are easy to explain with standard QM (entanglement, exponentially large Hilbert space, the whole megillah), but would be hopeless to explain otherwise.”
Standard QM doesn’t “explain” why the measurement events are where they are, it only systematically describes and accounts for them, so the bar is quite low for an alternative formalism. They can be described and accounted for by any statistical Hilbert space formalism that includes the use of a noncommutative measurement algebra. Did I mention before that the Koopman formalism for classical mechanics is one such statistical Hilbert space formalism, and that it includes the use of a noncommutative measurement algebra that can be constructed using the Poisson bracket? And that this is distinct from geometric quantization? Nothing about such an account changes the experiments at all, of course.
Other people have understood at least some of this already, and gradually more people are taking baby steps: look back at my proposed letter to the NYT above, #20, at the account it gives in terms of experimental raw data: lists of the times at which events were recorded on various different signal lines out of the whole experimental apparatus. Going past the word limit, we can apply Hilbert space coordinate transformations either in hardware, which at its simplest is to introduce a diffraction grating (an approximate Fourier transformation), or in software, by post-selecting subensembles of the whole set of experimental raw data and working with those subensembles as if they are not joint measurements (this takes much longer than doing the same task using a hardware implementation of the same transformation).

42. Scott Says:

Peter Morgan #41: In years of your commenting here, so far you have not written a single comment that I’ve understood in the slightest. Quantum mechanics has a reputation for being hard to understand, and yet it’s trivially easy to understand compared to your comments. That is what I mean in saying that QM “explains” the Google experiment: it gives a mathematical account of what states the computer passed through in order to arrive at the observed result—and no matter how weird or mind-boggling or whatever someone might find that account, at least it’s clear. It ultimately “compiles down” to things that I can understand the meaning of, like vectors of complex numbers.

43. asdf Says:

Scott, here’s this thing that’s been bugging me and maybe I have the words for it now. You’ve got this thing like a probability distribution except it uses complex amplitudes, and when you measure, it projects down to a traditional real-valued distribution. So 1) is there a quantity analogous to the Shannon entropy for the unmeasured quantum state, and 2) is its maximum size affected by something like the Bekenstein bound? I’m wondering if that says anything about using Shor’s algorithm for what another famous computer (cough) scientist called “factoring large primes”. thx.

44. Scott Says:

asdf #43: The natural quantum generalization of Shannon entropy is von Neumann entropy. Famously, however, von Neumann entropy is zero for all pure states! And even for n-qubit mixed states, the von Neumann entropy has a maximum value of n—exactly the same maximum as Shannon entropy.

In summary: no, looking at the Bekenstein bound, together with the definition of entropy that’s natural for quantum states, provides no support whatsoever for the speculation that Nature is going to censor quantum computation. On the contrary, it reaffirms the insight that mathematically, n qubits usually behave more like a weird generalization of n bits (or rather, of probability distributions over n bits) than they do like 2^n bits.

45. A Says:

Ok, I understand better now. Google’s efforts are commendable. Does the Google machine have the potential to set new records on factoring via Shor’s algorithm (perhaps factoring 49 = 7^2, since 53 is prime)? Or do we have to wait longer?

46. fulis Says:

https://arxiv.org/pdf/1910.14646.pdf

Related to your work with Susskind

47. Scott Says:

A #45: You might be able to do that, although the fact that the gates are nearest-neighbor only would be a severe impediment (modular exponentiation and the Quantum Fourier Transform have no spatial locality). Combined with the depth limitation set by the qubits’ coherence time, that would almost certainly limit you to tiny and unimpressive numbers.
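To put a rough number on the nearest-neighbor impediment: on a 1-D line of qubits, a two-qubit gate between distant qubits must be preceded and followed by a chain of SWAPs. A toy count (my own illustration; Google’s chip is actually a 2-D grid, which helps but does not remove the overhead):

```python
def swap_overhead(i, j):
    """Extra SWAP gates to bring line qubits i and j adjacent and back."""
    return 2 * max(abs(i - j) - 1, 0)

# A QFT-style controlled-phase between qubit 0 and qubit 52 on a 53-qubit line:
print(swap_overhead(0, 52))  # 102 extra SWAPs for one long-range gate
```

Each extra SWAP adds depth and error, which is why depth-limited hardware favors circuits with spatial locality.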

In any case, even if something is possible in that direction, the Google group has expressed zero interest in trying, since they regard it (with some justification) as a “circus stunt” that wouldn’t showcase the actual abilities of their hardware.

48. Scott Says:

fulis #46: Yes, I was hoping to comment on that paper! But I’ll need some time.

49. A Says:

I disagree. For a common aspiring computer scientist like myself, any small hint at worked-out examples will serve as a stepping stone.

50. Teresa Mendes Says:

Scott #38

‘vociferous denials that experiments must not have had the results that they straightforwardly did have, decades after the rest of the physics community has moved on.’, you say.

What the physics community is doing, in my opinion (because no one denies that Bell tests are the only experiments that can validly test local realism), is (not) saying that they are studying quantum entanglement because it might be a real-world phenomenon, when and if one observes an experimental rejection of local realism; and who cares if local realism was or wasn’t rejected in a specific experiment, since we bet that, in the next X years, someone will do it properly. X varies from person to person. Meanwhile, we work on that belief. You know that too, because your own 2015 post is entitled ‘Bell inequality violation finally done right’, and you had been working on quantum computing long before that date. I’m not saying that it’s not an acceptable reason.

So, it’s not decades, but a few years. And that continues to be just a belief, not a fact, because those 2015 results continue to be questionable.

For example, the BIG Bell test, published May 9, 2018, in Nature, that included 13 experiments, 12 quantum labs, 107 researchers, 100.000 citizens of the world, concludes: ‘Most experiments observed statistically strong violations of their respective inequalities, justifying rejection of local realism in a multitude of systems and scenarios.”

Here are some ‘convoluted reasons’, your words, why I think that conclusion is not acceptable: eleven of those thirteen experiments are not Bell tests (no space-like separation), so there is no way those eleven experiments can conclude anything about local realism; and regarding the two ‘true’ Bell tests, one does not present enough detection efficiency to be conclusive, while BBT-13, the only experiment that could validly test local realism, presents quite awkward results: not only does it textually affirm being unable to reject local realism with human choices, which directly refutes the title of the article, but also, in a new trial using random computer-generated numbers, it reports a CH result, a slightly positive value that, if it rejects local realism, also rejects quantum mechanics, by even more standard deviations. (The CH-QM prediction depends on the experimental detection efficiency, which is 75.6% as indirectly reported, not ‘over 90%’, and thus gives a theoretically negative S-value.) The BBT-13 experiment is a replica of one of those 2015 so-called loophole-free Bell tests.

Do you really think that the BBT conclusion reflects the reported evidence? I don’t. Do you really think those 100.000 citizens of the world know the real result of the BBT study they participated in? I don’t. “Einstein disproved by 100.000 Bellsters” is the worldwide diffused message.

The problem is all about a misleading narrative like ‘decades after the rest of the physics community has moved on’.
The physics community, and in particular the foundations-of-physics community, didn’t move on; it just pretends to have moved on.
Let me give you some examples. Pushing arguments that closing some loophole is equivalent to a rejection of local realism, which is not true. Affirming “The final p-value of about ∼ 10^−4000 indicates that with high confidence locality is violated either by the incompleteness of a classical description of reality under the additional assumption of free will of the participants or by a locality loophole in our specific experimental setup”, which misleads the lay public while the experts know exactly what it means. Using expressions suggesting that ‘the experimental violation of a Bell inequality’, or ‘we observed strong correlations’, in a non-Bell test, is equivalent to a rejection of local realism. Or, more technically, forgetting to measure and report the detection efficiency of the experiment using the Klyshko method, presenting instead, sometimes, some calorimetric detection efficiency, which is always higher than the experimental one measured by the Klyshko method, for understandable but not acceptable reasons. Or, independently of the detection efficiency of the experiment, always comparing the experimental result with a nonexistent CHSH classical bound S≤2, with fantastic statistical significance, when that bound varies with the detection efficiency present in the experiment: it is 2 only at 100% efficiency, otherwise it varies between 2 and 4, and there is a critical efficiency level above which an experiment can be conclusive, but not below.
Or trying to convince us that a non-maximally entangled system is better than a maximally entangled system and can lower that critical efficiency level, even if it includes measurements of non-measurable events (the Eberhard inequality for CHSH tests) that can’t be done experimentally, an argument that is afterward used, with any arbitrary number, to explain why measured results are substantially different from QM limits; and then the inequality is transformed into a regular CH inequality, and a CH experiment is performed, in which the local realist limit is 0 but the QM limit depends on the detection efficiency: it’s obfuscation. Post-selecting a subset of experimental results in accordance with the desired conclusion, disregarding the whole population, which is what Bell’s theorem requires. And last but not least, even pretending that it’s a good idea that an experiment with one quantum system, measured twice, be included in an article whose title is ‘challenging local realism’, in which ‘true’ Bell tests are not even identified in the only table that summarizes all thirteen experiments’ results. I don’t think so.

(You got me started. )

As you well know, no Bell test has been conclusive in rejecting local realism, and therefore in establishing non-local entanglement, up to this date.
This narrative only misleads and confuses non-experts, and, unfortunately, has greatly influenced physics textbooks and popular science authors for decades.

I think, next, a post-publication review of that BBT article is due, dedicated to my fellow Bellsters. (I’m one of them). Let’s see what 100.000 citizens of the world can do more for science.

Regarding the Martinis group’s experiment: it was unfortunate. In 2009, and certainly in prior experiments, they used the theoretical inequality they had, and they were convinced that they had found some effect that couldn’t be explained by crosstalk; the authors hoped it could be entanglement, but it could also, complementarily, be explained by some other local realist effect, again because local realism was not rejected in that experiment, due to lack of space-like separation. It’s now all explained by crosstalk.

Christian #40: thank you for the pointer. In the BBT experiment 7 you mentioned (not a Bell test, due to lack of space-like separation), the experimental S = 2.307 can be explained by crosstalk for any p(c) ≥ 1.92%. The authors did not report the probability of crosstalk.

51. X Says:

Hi Scott,
Very nice article. Would you happen to know why IBM is unable to do the same demonstration with their 53-qubit machine? By all accounts, it seems like Google has “won” this race.

52. Scott Says:

X #51: It’s plausible to me that IBM could do such a demonstration in the near future—and if they can, I really hope they will! Several commenters here have already pointed out the need for an independent confirmation of the Google result. Of course, to compensate for being second, IBM could also try to go for an even better demo than Google’s, with more qubits and higher circuit depth—and/or direct verification of their hardest circuits on Summit, exactly as they themselves just proposed.

From what I know, there are two reasons why Google “won”:

(1) The Martinis group has apparently been a little bit ahead of IBM in terms of the circuit fidelities they can achieve. (Having said this, I don’t actually know the details and would love to hear from experts.)

(2) Even more important, the Google group decided years ago that clearly demonstrating quantum supremacy was going to be a central goal, whereas IBM decided that … well, you can read their recent blog post if you want to understand their perspective. It wasn’t what they chose to focus on. As a result, Google crossed a major finish line that its closest competitor in superconducting qubits knew about, but was somehow barely even racing toward!

53. Peter Morgan Says:

Scott, #42, I’m well aware that you don’t much understand my comments—though to say that you don’t understand them even slightly seems to me ridiculous—and that for many other people as well as for you I continue to be an outlier in my various approaches to QM/QFT. I have no great expectation that this will change soon, or, who knows, perhaps ever, but, still, my work is serious to more people than just myself.
I go through patches of commenting on your posts mostly when you stray into broader questions of our understanding of quantum mechanics, where I think your position seems all too understandably conventional, though I think you don’t fit into any single box. How could you have the time to do your ground-breaking work on computational complexity and also know the details of the dozens of different interpretations and the mathematics and other arguments that underwrite them, and all the developing literature?
It will be easier not to darken your blog if you can write about foundations, when you do, in more open-ended ways: I mostly responded to you writing in #38 too expansively, as I think, that it “would be hopeless to explain otherwise” than using quantum mechanics. Quantum mechanics works wonderfully well, it’s a glorious tool, but when we speak of the future or even of the still-to-be-revealed present, someone may find or have already found some subtle mathematics or other arguments that will transform how we think of the relationships between quantum mechanics, classical mechanics, and the experiments and the world they seek to describe. If I say that I have, and that, loosely, I have along with and following others been updating and developing Koopman’s 1931 insight that classical mechanics can be presented in a Hilbert space formalism, then nonetheless, of course, no-one has to waste their time looking whether there’s anything of interest to see. Truly, I find the story the mathematics tells subtle enough that it goes in and out of focus even for me day by day, so that I write about it more as an oracle than as a guide.
I’ll try to settle back into not caring too much what you write. Feel free not to post this.

54. Scott Says:

Teresa Mendes #50: I confess I hadn’t even heard of the Bell test done by 100,000 people.

Here’s the way I think about it:

In 1964, after seeing how quantum mechanics (which by then, was obviously a spectacularly successful framework for all known non-gravitational physics, and had no empirically inequivalent alternatives) unequivocally predicted that Nature would violate the Bell inequality, but before doing the experiment, one could “only” be ~99% sure of the result.

After the first Bell experiments were done in the 1980s, one could then be ~99.99% sure—but still no more than that, because of the locality and detection loopholes (even if those loopholes would require insane conspiracies for Nature actually to exploit them).

Now that fully loophole-free Bell experiments have finally been done, one can at last be ~99.9999% sure—or roughly as sure as one ever is of any result in physics.
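For onlookers, the quantum prediction at stake is compact enough to sketch. Assuming NumPy, here is the textbook CHSH calculation for the singlet state (my own illustration using standard optimal angles, not the analysis of any particular experiment): quantum mechanics predicts |S| = 2√2 ≈ 2.83, while local realism caps |S| at 2.

```python
import numpy as np

# Singlet state |psi-> = (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def meas(theta):
    """Spin measurement along angle theta in the x-z plane: cos(t)Z + sin(t)X."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def corr(a, b):
    """Correlator E(a,b) = <psi| A(a) (x) B(b) |psi>; equals -cos(a-b)."""
    return np.real(psi.conj() @ np.kron(meas(a), meas(b)) @ psi)

# Standard optimal CHSH measurement angles
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4
S = corr(a0, b0) + corr(a0, b1) + corr(a1, b0) - corr(a1, b1)
print(abs(S))   # 2*sqrt(2) ~ 2.828, above the local-realist bound of 2
```

The experiments then check that measured correlations track these predictions, with the loopholes closed so that no local-realist bookkeeping can reproduce them.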

At this point, I guess there’s only one question for you whose answer would interest me. Namely, is there any possible experimental result (among those predicted by QM) that would cause you, too, to admit that Nature disobeys local realist assumptions? If so, what result is it? What loophole remains for the experimenters to close? Could you describe the experiment you want in such a way that people could plausibly go out and actually do it? And so that they’d have an assurance that, after they’d done the experiment and once again gotten the answer predicted by QM, you wouldn’t suddenly change your mind and say “no, that’s not what I really meant,” but would be bound by the result?

Or is it simply that you assign zero prior probability to Nature violating the Bell inequality—so that the more firmly and irrefutably the experiments say that it does, the loonier are the regions of possibility-space you’ll be forced to?

55. John K Clark Says:

I wonder if those who insist there must be some currently unknown fundamental principle that would make quantum error correction impossible, or at least astronomically difficult, so that a practical large-scale quantum computer could never be built, have considered the opposite possibility. I’m thinking about non-abelian anyons and topological quantum computers, which would produce far fewer errors needing correction. I also wonder what effect a sudden breakthrough in this technology would have on society.

John K Clark

56. fred Says:

Commercial aviation became a reality 10 years after the Wright brothers’ first flight, and Trinity happened 3 years after Fermi’s experiment.

It will be interesting to see how long it takes for QC to reach its full potential.

57. Robert Rand Says:

Measurement #1 and Scott #2:

What’s wrong with “when you try to turn it into a bit?” (Or a number, or classical / conventional / readable data.) No looking required!

58. Teresa Mendes Says:

Scott #54:

You are a good sport, I can tell you that, and that is just one of the reasons I keep coming to your blog and so enjoy our discussions. I like your generosity with your time, the informal way you answer your readers’ questions and, of course, your expertise. We, your readers, always learn something from you. Thank you, and I mean it.

I’m, without a doubt, a strong believer in local realism. I really believe Nature, the real world, to be compatible with those two assumptions: one, realism, which defines what a thing is, and the other, locality, which defines how two things interact. That idea is my preferred part of Einstein’s legacy. And I feel quite secure knowing that, for many thousands of years, since mankind began describing natural events, local realism has never been disproved: not by spooky actions then, nor by spooky action-at-a-distance now. All physics but quantum mechanics is compatible with local realism. And so are chemistry, biology and geology. I’m really not alone here.

Would I change my mind if local realism were disproved at the quantum level? Sure. I trust science. But, if Nature behaves differently at the quantum level than it behaves at macroscopic and cosmological levels, I want to understand where and why there is a frontier. Quantum mechanics can’t do that for me.

Science taught us, also, that there isn’t, to date, any experiment other than Bell experiments to test local realism, and I do agree with Henry Stapp when he called Bell’s theorem the most profound discovery of science. Is there a loophole for experimenters to close? I really don’t recognize the concept of a loophole as relevant. A Bell experiment has a defined experimental protocol attached, called EPR-B. Just comply with that protocol and I’m perfectly happy. Inventing loopholes, closing one, leaving another open, seems to me like trying to stay warm with a small blanket: when you cover your shoulders, you uncover your feet. It doesn’t make any sense to me. Loopholes are experimental problems; one has to tackle them all.

I see Bell tests as falsification tests for local realism, where the null hypothesis is the joint assumption of realism and locality (and fair sampling), and where there are only two possible answers: Yes, the evidence shows rejection of local realism, or No (it’s inconclusive, meaning there is no rejection of local realism). ‘Almost rejected’ will always mean not rejected; that is what an inconclusive hypothesis test means. And you can never be 99.9999% sure of a result when the rules of that test are not followed. One can be sure of something, but that something is not the rejection of local realism.

Now, what happens when some article appears claiming the experimental rejection of local realism? I jump into it. Here is my methodology for a quick review:

1. Is it a Bell test? Normally, but not always, meaning: is there enough space-like separation? (I confess that the BBT experiment with the one photon measured twice really exasperated me.)

2. What is the Bell test protocol and did the authors use the adequate inequality?

3. Entangled photon pairs. What is the reported efficiency of detection? No numerical efficiency reported. Grrrrr. Why don’t they report the detection efficiency of the experiment using the Klyshko method? It’s so easy to do.

4. CHSH experiments. Is the efficiency of detection over the critical limit? Ok, that requires a more detailed look. ‘Heralding’, ‘Swap entanglement’ … doesn’t convince me.

5. On CHSH with pairs of entangled qubits. Did the authors report the probability of crosstalk?

The only Bell tests under discussion today are CH experiments, where the upper local realist bound is 0, independently of the detection efficiency. But that test is very difficult to perform successfully: it implies subtracting two very large numbers toward zero, with the experiment showing such a tiny positive result that there is always a real possibility of a systematic error. The fun part is that experimentalists have to be very careful not to show rejection of quantum mechanics, too. Of course, some new non-maximally-entangled assumption is going to appear, mixed with the Eberhard inequality’s critical level, ad-hoc transformations of the inequalities, bla-bla-bla, you name it.

CHSH experiments are the most fun for me. Local realism defines a limit, and a critical limit: above that critical level of efficiency, an experiment could, theoretically, reject local realism. To me, that means that the detection efficiency will never surpass that value. Why? I have no explanation other than that I believe nature won’t allow it. So it’s very interesting to watch the technological improvement of the equipment: they are now really, really close to that limit, like at a 90% level (75.6% tops vs. 76.5% critical). That is where I get my chills.

[Quantum teleportation and cosmic/satellite experiments are just too far from achieving those numbers, so I almost never even open those articles. The same for ion/atom+photon experiments. A recent Bell test with entangled qubits, announcing space-like separation, landed in my mail (you can check it out on arXiv), but it also reported communication between the detectors of the two branches of the experiment. Ridiculous.]

So, to tell you the truth, the only experiment that would convince me would be a CHSH experiment with pairs of entangled photons, with the detection efficiency measured by the Klyshko method, and with that measurement audited (no funny stuff), that shows an experimental rejection of local realism, with a replica of the experiment done by another, independent team.

So that’s it. Don’t criticize me for having fun with it. I just really like science.

59. Jean Tate Says:

Scott #54: I cannot comment from direct experience, not only is this a field I do not work in, but in 1964 (for example) I would have had zero understanding anyway.

However, here’s a contrasting set of estimates, from the perspective of a hypothetical, conservative, experiment-über-alles curmudgeon:
– 1964 before any experiments: ~70-80% (yes, the alternative would be radical, but that’s why lots of experiments get done anyway)
– after the first Bell experiment: ~90% (very cool! but it needs independent confirmation/validation/verification; and there are loopholes)
– today: ~99.9% (about as good as it gets, comparable to General Relativity)

60. Scott Says:

Jean Tate #59: I might actually give probabilities over time not dissimilar to yours for a stronger claim, like “quantum mechanics is universally true in our world.” But I was talking only about the weaker claim that “the Bell inequality is violated in our world”—hence the higher probabilities.

61. Bennett Standeven Says:

Well, if it’s actually true that Bell experiments are the only way to detect violations of local realism, then I think it fair to suggest that Bell’s theorem is one of the least profound discoveries of science. (Profound discoveries can usually be demonstrated in many different ways, after all…) In any case, it does not appear to be relevant to the question of whether quantum computing is reliable and scalable, since we can always make the quantum computer’s operation “local” simply by slowing it down. More generally, quantum computing can be explained using only nonrelativistic quantum mechanics, and hence also using nonrelativistic Bohmian mechanics.

62. Scott Says:

Bennett Standeven #61: Bell experiments are not the only way to detect that local realism is false (a large fraction of QM experiments detect that as a byproduct of whatever else they’re doing). They’re just the way where you optimize for maximizing the distance and minimizing the number of auxiliary assumptions, in order to convince the most recalcitrant skeptics … or second most recalcitrant, after Teresa Mendes. 😀

63. Eric Cordian Says:

Hi Scott,

Great op-ed. I never cease to be stunned by your ability to write large amounts of crystal clear prose at a moment’s notice.

I wanted to briefly respond to a comment you made in the prior blog post…

[But in quantum mechanics as we’ve had it from 1926 to the present, there is no indication—none, zero—that the amplitudes are discretized in any way.]

Any non-continuous nature of the wavefunction is difficult to observe because, first, the wavefunction cannot be directly observed, only measurements on it can be, and second, probing spacetime at the Planck scale would require a particle accelerator the size of the solar system.

Mutually entangling 400 qubits gives a big enough configuration space that if you kept enough bits of the wavefunction amplitudes to resolve distinct quantum logical functions, you’d need more than the observable universe to represent them, even if you packed the bits at Planck density. One might wonder how all this information can be represented with perfect fidelity on a small chip sitting in a cryostat.

I don’t think it’s a particularly heretical notion that QFT at the Planck scale is a discrete theory, and that conventional QFT, with its continuous wavefunctions and differential equations, is an approximation of the Planck-scale theory in situations where spacetime can be regarded as a fixed background that can be treated classically. I remember watching a YouTube video where Leonard Susskind assumes, neglecting normalization, that the wavefunction amplitudes on a system of qubits come from a discrete set, estimates the number of quantum microstates, and proceeds to derive some results about entanglement entropy.

The central question here is whether any of this implies that quantum computing has trouble scaling as you add more qubits. I tend to suspect it does, and look forward to seeing what happens as people try to build bigger devices.

I’m a bit skeptical that if we can’t make bigger quantum computers work, we should still throw money at it because it will lead to the discovery of fantastic new physics which is an even greater advance for humanity. “My quantum computer doesn’t work” is one bit of information. The distance from that bit to a Planck scale Theory of Everything is not significantly shorter than the path starting from scratch.

Since entanglement appears to knit together spacetime, experiments like entangling two physically separate quantum memories, and looking for gravitational effects, are probably a lot more likely to illuminate new physics than trying to build something that can factor RSA keys using Shor’s algorithm.

64. Rainer Says:

editorial correction.

nitpick345 #36
But that doesn’t have to mean anything as I am not an expert in this field.

However, it is still not clear to me what you consider wrong with Scott’s statement in the NYT article: “…which uses 53 loops of wire around which current can flow at two different energies, representing a 0 or a 1.”

A transmon is, according to my understanding, an oscillator consisting of a capacitance and a Josephson junction in place of an inductor. The states 0 and 1 are realized by the ground state and the first excited energy state of this oscillator.

65. Scott Says:

Eric Cordian #63: Discretization of amplitudes is certainly a heretical notion, as it would require a massive change to the basic rules of QM to enforce it. I can’t rule out that it could turn out to be true, but if so it would be true and heretical. 😀 And even if we ignore the lack of experimental evidence, no one knows today how to consistently implement such discretization just as a purely mathematical exercise, for the simple reason that U(N) lacks suitably large discrete subgroups.

Having said that, even assuming amplitudes are “really” continuous, there are all sorts of technical reasons to discretize them in practice (e.g., if you’re simulating QM on a classical computer, or you need to do various countings of the number of detectably different quantum states). So Susskind may have been doing nothing more than that, although I couldn’t say for sure without seeing his lecture.
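A toy way to see the obstruction (my own illustration, assuming NumPy, and much weaker than the group-theoretic point about U(N)): the amplitude 1/√2 produced by a Hadamard gate lies on no finite dyadic grid, so snapping amplitudes to a grid fails to preserve the norm, i.e. rounding composed with a gate is not unitary.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def snap(state, bits):
    """Force real and imaginary parts onto a grid of spacing 2**-bits."""
    s = 2.0 ** bits
    return (np.round(state.real * s) + 1j * np.round(state.imag * s)) / s

psi = snap(H @ np.array([1.0, 0.0], dtype=complex), bits=16)
print(np.linalg.norm(psi))  # not exactly 1: 1/sqrt(2) is not a dyadic rational
```

Any consistent discretization would have to do something cleverer than per-amplitude rounding, and (as noted above) no one knows how to make that work mathematically.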

66. Teresa Mendes Says:

Scott #62, hehehe. 🙂

Bennett Standeven #61: Bell tests are the only falsification tests for local realism, to date. But you are free to interpret any other experiment as you like if you see something ‘funny’, even to interpret it as non-locality. The question is, are you sure? Without a rejection of local realism there is always the possibility of a perfectly reasonable local realist explanation you haven’t thought of yet. Quantum mechanics was not experimentally rejected either, so both answers are scientifically valid today, says philosopher of science Karl Popper. Preferring the quantum paradigm or the local realism paradigm is a personal choice, and you can make it without being called bad names, as long as both are still scientifically valid.

==============
.
Scott #54 – part II (and Jean Tate #59)
.
Let me comment on Scott’s 99.9999% figure.

First, let’s be clear: these are just perceived probabilities, statements of confidence, not calculations based on any theoretical model or factual events.

Where do they come from? Probably a teacher told your teacher, who then told you, and you are now telling your readers: at least three generations of specialists, all saying that local realism hasn’t been experimentally rejected with 100% certainty, but soon will be, and that we don’t need 100% confidence in physics. Those teachers surely shared your convictions and would back up your number.

The story goes like this. Theoretical work: it all began in 1935 with Einstein-Podolsky-Rosen, then Bell’s theorem in 1964… But before Bell: David Bohm. He is really important in this story, because he is the reason the strict Bell-test protocol is called EPR-B: not for Bell, but for Bohm. Then the Bell inequalities: [Clauser-Horne-Shimony-Holt 69], used for 2-channel Bell tests; [Clauser-Horne 74], used for 1-channel Bell tests. Jumping to notable experiments: [Freedman and Clauser 72]; [Aspect et al. 82]; …

Stop the clock there. Why did the whole physics community of that time become so convinced of the experimental rejection of local realism? The reason commonly pointed to is not experimental evidence, but one famous cherry-picked sentence by John Stewart Bell, a confessed local realist:

.
‘It is difficult for me to believe that quantum mechanics, working very well for currently practical set-ups, will nevertheless fail badly with improvements in counter efficiency.’
[from AZ quotes]
.

Who, by the way, also wrote:

‘I agree with them about that: ORDINARY QUANTUM MECHANICS (as far as I know) IS JUST FINE FOR ALL PRACTICAL PURPOSES. Even when I begin by insisting on this myself, and in capital letters, it is likely to be insisted on repeatedly in the course of the discussion. So it is convenient to have an abbreviation for the last phrase: FOR ALL PRACTICAL PURPOSES = FAPP.’
[from Wikiquote: Bell, J.S., “Against ‘measurement’”, Physics World (August 1990)]
(his caps lock, not mine)

.
Fast-forward to 2007. Adán Cabello describes, in the Quantum Communication and Security Report, Vol. 11, NATO, the then-current situation of Bell tests: “The highest overall photo-detection efficiency currently available is ≈ 0.33”:
.

‘Experiments to test CHSH-Bell inequalities have fallen within quantum mechanics and, under certain additional assumptions, seem to exclude local realistic theories. A particular relevant loophole is the so-called detection loophole. It arises from the fact that, in most experiments, only a small subset of all the created pairs are actually detected, so we need to assume that the detected pairs are a fair sample of the created pairs (fair sampling assumptions). Otherwise, it is possible to build a local model reproducing the experimental results. Closing the detection loophole in a two-photon experiment to test the CHSH inequality requires an overall photon-detection efficiency of n≥ 0.83 [Garg-Mermin, 1987], while testing the Clauser-Horne inequality requires n ≥ 0.67. The highest overall photo-detection efficiency currently available is ≈ 0.33, although there is a promising attempt to solve this problem.’

.
What is that notable Garg-Mermin 1987 result that the physics community so often forgets to mention in its citation lists?

.
Abstract: “The question of how to deal with inefficient detectors in actual experiments of the Einstein-Podolsky-Rosen type is studied. We derive the necessary and sufficient condition for compatibility with local realism of data collected in experiments with two settings of each detector, without making auxiliary assumptions about undetected events. For the conventional experiment with particles in the singlet state (or its photon analogue), the data predicted by the quantum theory do not violate this condition unless the quantum efficiency of the detectors exceeds 83%.”

.
1987: a “necessary and sufficient condition”, stated and proved: S_LR ≤ 4/η − 2 for η above the critical level, and 4 otherwise. That result immediately invalidates all prior Bell tests that had announced the experimental rejection of local realism. By 2007, twenty years later, Garg and Mermin’s article had practically disappeared from the citation lists of experiments, and the ‘detection loophole’ was born.

.
How is your perceived 99.9999% confidence going, so far?

.
In 2012, [Especial, 2012] corrected Garg-Mermin’s critical detection efficiency for CHSH Bell tests, also with proof: S_LR ≤ 4 − 2η², lowering the threshold from 82.8% to 76.54%.
(Eberhard’s CHSH inequality, S ≤ 2/3, is not even usable, because it implies measuring non-measurable events.)
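Taking the bounds quoted above at face value, the quoted thresholds follow from a few lines of arithmetic: the critical efficiency is where each local-realist bound meets the quantum CHSH maximum of 2√2. A sketch of that check (my own arithmetic, not taken from the cited papers):

```python
import math

tsirelson = 2 * math.sqrt(2)              # quantum CHSH maximum, ~2.8284

# Garg-Mermin: S_LR <= 4/eta - 2.  Critical efficiency where this
# bound crosses the quantum maximum:
eta_gm = 4 / (tsirelson + 2)              # ~0.828, the quoted ~83%

# Especial (as quoted): S_LR <= 4 - 2*eta^2.  Same crossing point:
eta_esp = math.sqrt((4 - tsirelson) / 2)  # ~0.7654, the quoted 76.54%

# Local-realism bound at the 75.6% efficiency attributed to BBT13:
s_756 = 4 - 2 * 0.756**2                  # ~2.857, the quoted S <= 2.85

print(eta_gm, eta_esp, s_756)
```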

Today, as reported in 2018 by BBT13, which uses the experimental set-up of [Shalm, L.K., et al., 2015, A strong loophole-free test of local realism], the state of the art in detection efficiency, measured by the Klyshko method, is 75.6%, not the ‘over 90%’ that BBT13 announces, irresponsibly toward science and disrespectfully toward scientists.
.

Have you ever wondered why the authors of the 2015 so-called loophole-free Bell tests, Giustina’s and Shalm’s teams, decided to perform a one-channel Bell test with 2 detectors, a CH experiment, instead of the more sophisticated and obviously more precise 2-channel Bell test with 4 detectors? The CH design had been practically abandoned since 1982, since Aspect’s third experiment (the first CHSH one), the one that actually convinced everybody.

Because they did not have the budget to buy more detectors? It’s not really difficult to figure out: not enough detection efficiency for a conclusive CHSH Bell test, and they know it. So back to the oldie, the CH Bell experiment, really difficult to carry out successfully and always open to the charge of probable systematic error, as the only alternative left.

I’m not coming back to the Hensen team’s “post-selection” 2015 experiment, which we discussed earlier, where you claimed, back then, that the ‘convoluted reason’ that post-selection is just biased sampling simply did not convince you.
.
Now, here is where I really draw the line, and I hope you do, too: the expression commonly used in CHSH Bell tests, “additional fair sampling assumption”. It does not mean just what it normally means in science (simply: no biased sampling), and it has had an incredible biasing effect on the perception of what the actual results of Bell tests meant, and mean.

That argument also led the whole foundations-of-physics community to adopt the following practice: whatever the detection efficiency of the experiment, always use the local-realism limit S ≤ 2 (the theoretical limit for 100% detection efficiency), carefully adding the phrase “under the additional fair sampling assumption” when interpreting the results. That practice is still in use today.

Where else in science do you see “under the fair sampling assumption” invoked to explain the result of a hypothesis test, to justify the opposite of what the test in fact shows? What does assuming a fair sample have to do with a measurable result of the experiment?

One can test whether a sample is biased: the human choices in the BBT, for example, were tested against random computer-generated numbers. You don’t need a Bell test to do it. And they are, yes, biased, say [Soltan et al., 2019, Analysis of assumptions in BIG Bell Test experiments, arXiv:1906.05503v3], who reanalyse nine of the thirteen BBT experiments and find asymmetric bias in the setting-choice probabilities (more 0s than 1s, lots of 0101 and 1010 sequences), but who also warn: ‘These various tests were subject to additional assumptions, which lead to loopholes in the interpretations of almost all of the experiments.’
.

My strong local-realist point is that one simply cannot, for a Bell test under non-ideal conditions, affirm that the local-realism bound is S ≤ 2 when one knows that the bound varies between 2 and 4 depending on the detection efficiency of the experiment, and then use that S ≤ 2 to calculate the statistical confidence of the experimental result and conclude, with high confidence, ‘we found strong correlations that cannot be explained by any classical model’.
Almost all CHSH BBT experiments did exactly that. High statistical confidence reported, of course. The only BBT experiment that could legitimately use S ≤ 2 is BBT7, the one that uses pairs of entangled qubits and had nearly perfect (about 100%) detection efficiency in both branches of the experiment. But it has no space-like separation; it isn’t a Bell test.

Just for fun, let’s do a (re)interpretation of the results of those BBT experiments, using the random-number-generator results when available, but imagining that the experiments had been performed with the state-of-the-art BBT13 SNSPD detectors, with their measured 75.6% detection efficiency and hence a local-realism bound of S ≤ 2.85. That is as far-fetched as using the S ≤ 2 limit (theoretically applicable only at 100% efficiency) for all CHSH experiments, but it is closer to real experimental conditions and needs no additional fair sampling assumption beyond the one already assumed, with its regular meaning, in any Bell test:

.
BBT1 – transformed inequality. Studying something else, not testing local realism.
BBT2 – just one quantum system measured twice. There is a Charlie involved with Alice and Bob.
*BBT3 – ‘true’ Bell test. Experimental S = 2.804; no error margin reported, no detection efficiency reported; the result is not even mentioned in Table I, which summarizes all BBT results.
BBT4 – experimental S = 2.6434.
BBT5 – transformed inequality. Studying something else, not testing local realism.
BBT6 – experimental S = 2.413 ± 0.0223. Atom-photon.
BBT7 – experimental S = 2.307. Entangled qubits. 100% efficiency, theoretical S ≤ 2. No space-like separation. Result explainable by a crosstalk probability ≥ 1.9%; crosstalk probability not reported.
BBT8 – experimental S = 2.431.
BBT9 – experimental S = 2.29 ± 0.10. Photon-collective atom.
BBT10 – CH experiment. Transformed inequality S(CH) ≤ 2. Experimental S = 2.25. Rejects local realism and quantum mechanics (?). I didn’t check.
BBT11 – experimental S = 2.55.
BBT12 – experimental S = 2.4331 ± 0.0218.
*BBT13 – ‘true’ Bell test. CH experiment, CH inequality K(CH) ≤ 0. Experimental K = (1.65±0.20)×10^−4, reported only in Table I. Detection efficiency 75.6%. Rejects local realism and quantum mechanics. With human choices, no rejection was found.
.
* Bell tests.
.

How is your perceived 99.9999% confidence going, now?

.
The only chilling result (chilling with a C, not an S, as I wrongly spelled it before; sorry for my non-first-language English): the ‘true’ Bell test, BBT3, which still does not reject local realism, did not report its detection efficiency, and I’ll bet that if they had measured it, with their SNSPD detectors too, they would have found an efficiency similar to that of the BBT13 experiment.

Any BBT13 CH result will, of course, always be challengeable by a possible systematic error in the experiment, not easy to detect and correct, and quite credible when experimentalists are trying to prove that (1.65±0.20)×10^−4 is not zero, with a number of trials on the order of 10^8 of which only a few, on the order of 10^4, are counted as events.

.

Zoom back to 2009: the ‘detection loophole’ was finally closed by [Ansmann, 2009], though still suffering from a ‘locality loophole’.

What does ‘locality loophole’ mean, other than no space-like separation in the experiment, one of the requirements of a strict EPR-B Bell test? Any conspiracy theory of mine there, Scott? Isn’t a Bell test, a falsification test for local realism, supposed to strictly follow its own requirements?

Or do you think Martinis’ team will soon be able to successfully separate entangled qubits and present a flawless EPR-B Bell test? They affirmed, back then, that “For this, the qubits would have to be located at least 10 m apart, which should be achievable by placing them in two separate refrigerators connected with a cold superconducting bus. […] Entanglement via “flying qubits” […] qubit coherence times would have to be improved […] to maintain the entanglement “in flight” […] But we believe that none of these issues pose an unsurmountable obstacle”.

Not a big deal, then. Scott, please, can you call him for me? If anybody has the authority to ask this of them, it’s you. Tell him it has been 10 years already, and it’s time for me to move on. 🙂
.

.

By the way, did you know that the BBT, the BIG Bell Test project with 100,000 people, was based on your idea? I quote, page 1: ‘Central to both applications is the use of free variables to choose measurements: in the words of Aaronson(19) “Assuming no preferred reference frames or closed timelike curves, if Alice and Bob have genuine ‘freedom’ in deciding how to measure entangled particles, then the particles must also have ‘freedom’ in deciding how to respond to the measurements.”’
.

So here we are today, with almost four decades of Bell tests.

In my opinion, your perceived confidence is not even close to being supported by the evidence.

You still have a 1% chance, let’s say 10%, on the order of what current detectors are missing to reach (or not) the critical value for the capture and detection of photons, the necessary and sufficient condition for a successful Bell test. Or even 11%, depending on how much you trust that it’s just a question of time and money, backed by Google, IBM, and all the giants of the technology industry. How much longer do you think you will be able to maintain your 99.9999% confidence? How long will ‘they’ maintain that level of confidence?
.

I too don’t know what explains Google’s Sycamore results; I even thought it could be because of its ‘analog’-ness, but what do I know about it? … I’m still betting you that it’s not non-local entanglement. (You can reassure your wife: I have no intention of making you pay a dime when, and if, you change your mind. Relax, your due penalty will be much more creative and fun, even for you, I assure you.)
.

And finally, Scott (long speech, sorry), thank you. There is no way I could have a platform with an audience if you didn’t let me present my arguments, and discuss them with you, on your own blog, even though we don’t agree much. As you rightly pointed out back then, I don’t have to convince you, but rather the majority of the physics community, and that’s tough.
I don’t know if I can do it, but I can try.

[You know, for a science communicator and tango and cabaret dance teacher, among the other equally seemingly incompatible things that I do, writing 5 pages on some crazy issue of quantum physics on a rainy Sunday … piece of cake.]

67. ppnl Says:

Eric Cordian @63:

Since entanglement appears to knit together spacetime, experiments like entangling two physically separate quantum memories, and looking for gravitational effects, are probably a lot more likely to illuminate new physics than trying to build something that can factor RSA keys using Shor’s algorithm.

I don’t think so. How many qubits do you think you would need to entangle in order to observe some gravitational effect?

If there is something wonky with quantum mechanics then trying to build quantum computers is probably the fastest and cheapest way to find it. In particular if the amplitudes are discretized then this is what you do to discover that fact.

68. Eric Cordian Says:

ppnl #67

I believe Leonard Susskind estimated that you’d need to entangle 10^72 Bell pairs between two locations to get a wormhole big enough to stuff a person into. 🙂

69. Andrei Says:

Teresa Mendes,

I share your local realist view, yet I think you are mistaken in trying to locate the problem in the experimental protocol of Bell tests.

First, Bell’s theorem does not give you a choice between locality and realism, but between local realism and non-locality. Local non-realism has already been ruled out by EPR. EPR did not succeed in proving the simultaneous existence of non-commuting properties but, as a consequence of the reality criterion alone, proved that the only way to preserve locality is to accept determinism (the particles had the measured properties before measurement).

Second, Bell’s theorem is based on the so-called independence assumption, also known as “non-conspiracy” assumption. It says that the hidden variables should not depend on the measurement settings. Such an assumption is simply false in field theories (like classical electromagnetism or general relativity) where each charge/mass responds to the field produced by all particles (even those far away). So, the wise choice here is to reject Bell’s theorem as irrelevant and not try to find experimental limitations (such as detection or locality loophole).

70. gentzen Says:

Scott #42: Wow, my worst fear and secret dream, both at the same time:

In years of your commenting here, so far you have not written a single comment that I’ve understood in the slightest.

OK, not really with respect to commenting, because there I can just limit myself to stuff that is not too difficult to explain reasonably well.

Let me try to illustrate this with an example:
15 mK * k_B / h = 312.5 MHz where k_B is the Boltzmann constant, h is the Planck constant, and 15 mK is the temperature of 15 milliKelvin at which Sycamore was operated. Suppose I tried to explain why those 312.5 MHz are a lower bound for how fast the quantum bits must be operated (or at least controlled) for extended quantum computations. The main paper contains sentences like: “We execute single-qubit gates by driving 25-ns microwave pulses resonant with the qubit frequency while the qubit–qubit coupling is turned off.” or “We perform two-qubit iSWAP-like entangling gates by bringing neighbouring qubits on-resonance and turning on a 20-MHz coupling for 12 ns, which allows the qubits to swap excitations.” So on the surface, it looks like my claim must be false. There are also sentences like “In total, we orchestrate 277 digital-to-analog converters (14 bits at 1 GHz) for complete control of the quantum processor.” in the main paper or “The microwave AWG provides signals with arbitrary spectral content within ±350 MHz of the local oscillator (LO)” in the supplementary material. So maybe there might still be a grain of truth in my claim, but my claim would have to be explained well enough to make sense and allow agreement.

As it happens, I don’t have access to Lienhard Pagel’s explanation of that claim at the moment, and no longer enough time to properly write down my own explanation. (My own explanation is that the difference between the energy levels of your states must be sufficiently bigger than 15 mK * k_B to be safe from random thermal state flips, and those differences in energy levels lead to phase shifts that would become too big if you couldn’t control your qubits faster (or at least better) than 350 MHz.)

Getting a comment like yours is a secret dream, because it would mean that such a lower bound (or at least time scale) apparently was totally unknown to you. It is my worst fear, because in the end explanations can only do so much (and mine are normally not very good anyway); in the end everybody has to understand for himself. And such a comment would mean that people wouldn’t even try to understand such a claim by themselves.
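The 15 mK ↔ 312.5 MHz figure at the start of #70 is just the thermal energy expressed as a frequency, E = k_B·T = h·f; a one-line check with the exact SI (2019) values of the constants:

```python
# Convert Sycamore's 15 mK operating temperature to a frequency scale
# via E = k_B * T = h * f, using the exact SI (2019) constant values.
k_B = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J*s

T = 0.015             # 15 mK
f = k_B * T / h       # thermal energy as a frequency, in Hz
print(f / 1e6)        # ~312.5 MHz, matching the figure in the comment
```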

71. Scott Says:

Incidentally, regarding how a transmon qubit works, Shyam Shankar graciously gave me permission to share the following:

The transmon is made with a Josephson junction in parallel with a capacitor. The JJ behaves like a nonlinear inductor, so the circuit as a whole is an anharmonic oscillator. The oscillation frequency is in the microwave frequency range (typically 4-8 GHz). This means that the charge on the capacitor and correspondingly the current through the JJ are oscillating back and forth at this frequency. This is called a plasma oscillation or plasmon in solid state physics, which inspired the name.

To be extremely technical, if you could measure let’s say the current in the |0> and |1> state, you would get a distribution of values corresponding, to first order, to the wave functions of a harmonic oscillator. The average value of the current would be zero at all times in both states but the variance of the current would be non-zero and different for the two states. On the other hand if you prepared the |0>+|1> superposition there would be a current on average which would oscillate back and forth at the oscillator frequency (4-8 GHz).
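Shankar’s description can be reproduced in a toy model: treat the transmon, to first order, as a plain harmonic oscillator and the current as the quadrature a + a†. A small numpy sketch (my own idealized model, ignoring the anharmonicity; units are arbitrary):

```python
import numpy as np

# Truncated harmonic-oscillator model of a transmon: the "current"
# quadrature is proportional to (a + a_dagger) in suitable units.
N = 10
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator
I_op = a + a.T                                 # current-like quadrature

ket0 = np.zeros(N); ket0[0] = 1                # ground state |0>
ket1 = np.zeros(N); ket1[1] = 1                # first excited state |1>
plus = (ket0 + ket1) / np.sqrt(2)              # (|0> + |1>)/sqrt(2)

mean = lambda psi: psi @ I_op @ psi
var = lambda psi: psi @ I_op @ I_op @ psi - mean(psi)**2

print(mean(ket0), mean(ket1))  # both 0: no average current in |0> or |1>
print(var(ket0), var(ket1))    # 1.0 vs 3.0: nonzero, different variances
print(mean(plus))              # 1.0: the superposition has a mean current,
                               # which oscillates at the oscillator frequency
                               # once time evolution is included
```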

72. Craig Says:

Scott,

I have accepted that Google achieved quantum supremacy, even though I did not think they would. However, the real test is in a regime of around 100 qubits.

I recall saying on this blog that attaining 50 qubits is like a man landing on the moon and returning safely to earth, while attaining 100 qubits is like a man landing on Jupiter and returning safely to earth.

If 100 qubits is possible, I would think that any number of qubits should be possible. I think we shall probably see a 100 qubit machine built within the next year or so, at least one that is designed to work correctly. But there does not seem to be a way to test it directly in the way that was used for the 53 qubit machine.

Are there ways of directly testing a 100 qubit machine for certain special complicated circuits in which we know the answer ahead of time through some mathematical proof and not direct computation (since direct computation would take too long on a classical computer with 100 qubits)? If so, then this would be a pretty good direct test, although not as good as the test that Google used.

73. Scott Says:

Craig #72: I admire your honest admission that Google violated your expectations. I expect 100-qubit machines to be coming in a matter of years. But as for how to test them in a way like what you suggest—e.g., by starting with a random quantum circuit but then “planting” a large-amplitude result—I regard that as one of the main theoretical open problems in this whole area.

74. Job Says:

Craig #72

Are there ways of directly testing a 100 qubit machine for certain special complicated circuits in which we know the answer ahead of time through some mathematical proof and not direct computation (since direct computation would take too long on a classical computer with 100 qubits)?

See the third image I posted above (zoom in to see the detail):
https://i.imgur.com/HMvCY8k.png

That’s the output of Simon’s algorithm for a random Simon circuit with a given n-bit secret.

With Simon’s algorithm, the QC will only output values that are orthogonal to the secret value – that’s why the image looks so patterned, with exactly half of the output black (0 probability, after destructive interference between exponentially small amplitudes, that’s important) and half green (uniform non-zero probability).

The circuit is assembled for a randomly picked secret value. Since we know what the secret value is in advance, we can validate each individual sample from the QC, without relying on statistical analysis.
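The per-sample validation described above is classically trivial: each n-bit output y just has to satisfy y·s = 0 (mod 2), i.e., the bitwise AND of y with the secret must have even parity. A minimal sketch of that check (the secret here is a made-up example value):

```python
def orthogonal_mod2(y: int, s: int) -> bool:
    """True iff y . s == 0 (mod 2), i.e. popcount(y & s) is even."""
    return bin(y & s).count("1") % 2 == 0

secret = 0b1011  # hypothetical planted secret for illustration

# A sample from an ideal run of Simon's algorithm always passes:
print(orthogonal_mod2(0b1101, secret))  # True  (1101 & 1011 = 1001, even parity)
# A non-orthogonal output signals a device error:
print(orthogonal_mod2(0b0001, secret))  # False (0001 & 1011 = 0001, odd parity)
```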

And we can append any classical reversible circuit (in quantum form) to the generated circuit without changing this property, which ensures universality (also important).

Plus, Simon’s problem is related to the period-finding used in Shor’s algorithm, so it’s almost as strong a test as you could hope for—integer factorization being the ultimate test, really, short of a BQP != BPP result.

On the other hand, it’s not like the supremacy experiment, where no existing machine could be made to match the QC. Since we know how the circuits are being assembled, it’s easy enough to figure out what the secret value is using a classical solver. Otherwise it is a difficult problem for a classical machine to solve.

But it’s not like the QC “knows” how we’re assembling the circuits (it’s a totally generic algorithm), so what does it matter? We just need higher fidelity (above 50%).

And you know it’s a good test because it has the same interference thingies as the double-slit experiment. 🙂

John K. Clark #55: the question of non-abelian anyons and topological quantum computing was put to Gil Kalai, by me. His response, which you can see in his papers and blog, was simple: hypothesize that non-abelian anyons just do not exist. Experimental evidence is gradually pointing towards this hypothesis being false.

76. Nicholas Teague Says:

Hey Scott, I wrote an op-ed too. (self-published) Thought I’d share, cheers.
https://medium.com/from-the-diaries-of-john-henry/quantum-supremacy-96f8c26a9e3

77. dennis Says:

Craig #72, Scott #73:
Is there any reasonable argument against doing something simple, like an identity circuit?

I mean, doing some (random) complicated circuit if you want, and then undoing each unitary operation again. For such circuits, you know the result. Furthermore, you can scale them to arbitrarily many qubits, gates, and depth, without the need to simulate the circuit. In that sense, they undeniably validate the operation of the quantum computer.
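The idea above is easy to state-vector-simulate at small size: apply random unitaries, then their inverses in reverse order, and check that you are back where you started. A sketch (generic Haar-ish random unitaries stand in for actual device gates, which on real hardware would come from the native gate set):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d: int) -> np.ndarray:
    """Haar-ish random unitary via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

n = 3                                  # tiny register, just for the sketch
psi0 = np.zeros(2**n, dtype=complex)
psi0[0] = 1                            # start in |00...0>

gates = [random_unitary(2**n) for _ in range(5)]
psi = psi0
for U in gates:                        # the "random circuit" half
    psi = U @ psi
for U in reversed(gates):              # undo it, gate by gate
    psi = U.conj().T @ psi

print(np.allclose(psi, psi0))          # True: identity up to rounding error
```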

Of course, there are also other options. For instance, you can build adder circuits (quantum circuits that add two integers, see Draper’s paper), whose result you can easily check because you know how to add numbers.

The only argument against identity circuits that I can think of at the moment is the restricted gate set that Google’s device has at the moment. But that should not be a problem in principle. And even if it was, it would only apply to Google’s device, since IBM already implements the standard gate set.

78. Scott Says:

dennis #77: Running a circuit and then its inverse is done constantly by the experimentalists—it’s an excellent way to (e.g.) calibrate fidelity. But it is open to the obvious objection that … err … you’ve done nothing but compute the identity function in a convoluted way. 😀

79. Ben Bevan Says:

Scott, just to say I highly appreciate your taking the time to write so clearly about all this.
I like to think that in another life I might have been involved in it. But to see the unfolding of it brings me great joy.

80. Bjørn Kjos-Hanssen Says:

Slightly off-topic but is there a definition for QNC^0 as a class of decision problems?
The Complexity Zoo seems to suggest that there is, but when I follow the link provided, they don’t seem to say more than “To extend this definition from quantum operators to decision problems in the classical sense, we have to choose a measurement protocol, and to what extent we want errors bounded. We will not explore those issues here.” (https://arxiv.org/abs/quant-ph/9804034)
I feel like any attempt at making a decision-problem version will lead to QNC^0 = NC^0, since the circuits will only depend on a constant number of bits of the input…

81. dennis Says:

Scott #78: Yes sure, but my question is, why does it have to be some ingenious, new and complicated circuit, and not just something that we already have? Take the adder, or Shor’s algorithm. Everyone knows that such a circuit is a tough job for a quantum computer. But it is easy to verify the answer classically. So as to Craig’s question for a “pretty good direct test of a 100-qubit machine” it would be convincing to see it work.

82. Scott Says:

dennis #81: An adder would’ve worked fine even if you’d only had classical bits, so it’s not a good test of quantum computational power specifically. Shor’s algorithm is a good test—the problem there is just that we still seem very far from being able to run it on interestingly large numbers, since doing so probably requires fault-tolerance.

83. dennis Says:

Scott #82: alright, just to clarify, I meant the adder that is based on the quantum Fourier transform (quant-ph/0008033), not just the classical full adder expressed in quantum gates. The quantum adder also works “massively in parallel” on superpositions, so you can also interpret it as a good test of “quantum computational powers” if you want.
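For readers who haven’t seen Draper’s trick: the addition happens entirely in the phases—QFT the register holding a, apply phase rotations encoding b, then inverse QFT, leaving the register in |a+b mod 2^n⟩. A state-vector sketch (my own toy simulation, using the FFT as the QFT with matching sign conventions; not code from Draper’s paper):

```python
import numpy as np

def draper_add(a: int, b: int, n: int) -> int:
    """Add b into a register holding |a>, Draper-style, on a state vector."""
    N = 2**n
    psi = np.zeros(N, dtype=complex)
    psi[a] = 1
    psi = np.fft.ifft(psi) * np.sqrt(N)       # QFT: |a> -> sum_k w^{ak}|k>/sqrt(N)
    k = np.arange(N)
    psi *= np.exp(2j * np.pi * b * k / N)     # phase rotations encoding b
    psi = np.fft.fft(psi) / np.sqrt(N)        # inverse QFT
    return int(np.argmax(np.abs(psi)))        # basis state |a+b mod 2^n>

print(draper_add(5, 9, 4))    # 14
print(draper_add(13, 7, 4))   # 4 (wraps around mod 16)
```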

84. J Says:

Hi Scott,

Not sure if this is the right place to ask this, but when you quote 9 quadrillion amplitudes it made me wonder—does the Sycamore actually have as many states as an “ideal” 53-qubit quantum computer? Since the qubits can only interact with their neighbors, it seems like that would significantly restrict the actual states that can be reached. Is my intuition way off here?

85. Scott Says:

J #84: Nope, it’s still a 2^53-dimensional Hilbert space. Of course we can only prepare a tiny fraction of the states in that space using small circuits, but that’s an inherent property of QC that would still be true even with no nearest-neighbor restriction. (Remember also that nearest-neighbor gates are still computationally universal, for the simple reason that a nonlocal gate can always be simulated using a sequence of nearest-neighbor swaps.)
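The parenthetical claim above can be verified directly on a 3-qubit line: conjugating a nearest-neighbor CNOT by a nearest-neighbor SWAP yields the “nonlocal” CNOT between the two end qubits. A small numpy check:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
P0, P1 = np.diag([1., 0.]), np.diag([0., 1.])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

CNOT = np.kron(P0, I) + np.kron(P1, X)   # control on the first qubit

# Three qubits on a line 0-1-2: build CNOT(0 -> 2) out of
# nearest-neighbor gates only.
swap01 = np.kron(SWAP, I)                # swap qubits 0 and 1
cnot12 = np.kron(I, CNOT)                # CNOT between neighbors 1 and 2
via_swaps = swap01 @ cnot12 @ swap01     # swap in, act, swap back

# The nonlocal gate, built directly:
cnot02 = np.kron(P0, np.kron(I, I)) + np.kron(P1, np.kron(I, X))

print(np.allclose(via_swaps, cnot02))    # True
```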

86. fred Says:

Scott #65

Isn’t discretization of amplitudes like saying regular probabilities would be discrete too?
Aren’t they both mathematical constructs?

We can imagine a 3-sided triangular coin (like a prism); the probability of each side is exactly 0.333333… But that assumes one can actually build a perfect triangle in practice, which maybe isn’t possible once you compare lengths at the Planck scale.

87. fred Says:

Scott,

when it comes to your patent(s) and Google, what’s your take on the issue of big US companies cooperating with China (and indirectly with the CCP) to get access to the market?
Is that worrying you at all?

From a more general point of view, what’s the state of QC in China? Are we at a risk to fall behind? (we know they’re already spending way more than we do in AI).

88. Scott Says:

fred #86: Indeed, from a mathematical standpoint, the idea of discretizing amplitudes seems to make just as little sense as the idea of discretizing probabilities. That’s why, while I do think the idea is worth exploring, I’m always flabbergasted when people throw it around as if it were no big deal.

89. Scott Says:

fred #87: Yes, of course I’m concerned about the appalling human rights situation in China (and, secondarily, with various Western governments’ and companies’ complicity in it). Along with climate change, and the recent takeover of much of the world by authoritarian strongmen, the efforts by the CCP to control and monitor the thoughts of a fifth of humanity is clearly one of the biggest issues facing the planet.

Regarding QC: I would say that the US is the undisputed leader right now in the effort to build scalable devices (with Google, Microsoft, IBM, Rigetti, IonQ, PsiQuantum, NIST/UMD, Honeywell, the Yale group…), and also, like in many fields, remains the single most dominant country on the academic side (with Canada, the EU, the UK, Singapore, Australia, Israel, and others also important). On the other hand, the work of Jianwei Pan’s group in quantum optics—such as their recent demonstration of BosonSampling with 14 photons—is extremely impressive. And in the specific area of quantum communications, I would say that China pulled ahead of the US a couple years ago with their launch of the world’s first (and so far, only) quantum communications satellite. Of course I’m far from alone in thinking that—it was partly fear of what China was doing in quantum communications that led to Congress unanimously passing the National Quantum Initiative Act last year.

90. fred Says:

Teresa #11

“But analog computing is still under research […] A digital circuit, by contrast, needs to slice time into thousands or even millions of tiny intervals and solve the full set of equations for each of them. And each transistor in the circuit can represent only one of two values, instead of a continuous range of values.”

Funny you mention this, because I’m currently toying with using analog electronic circuits to solve NP-hard problems. The transistors are used in the fully non-linear regime, acting as valves. Not that those analog circuits could ever magically solve NP-hard instances in linear time, but they would effectively search the entire solution set using backtracking, and do it very quickly (total time is bounded by transistor characteristics).

91. Scott Says:

fred #90 and Teresa: Nope, none of that analog stuff is going to rouse the Extended Church-Turing Thesis from its slumber—not even a little. To whatever extent an analog circuit behaves reliably, you can simulate it on a digital computer with only polynomial slowdown, and to whatever extent it behaves unreliably, you can’t use it for computation (except as a source of randomness). Quantum computing is fundamentally different, because of the exponentiality of the amplitude vector, and the fact that that vector evolves in a linear and norm-preserving way. Once you understand all this, you’ll be at the level of understanding that Bernstein and Vazirani reached in 1993 when they wrote their paper on Quantum Complexity Theory. 🙂
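To make the “exponentiality of the amplitude vector” concrete, here is a minimal numpy sketch (purely illustrative, not tied to any particular hardware or to the thread): an n-qubit state needs 2^n complex amplitudes, and a gate is a unitary matrix acting linearly on that vector while preserving its norm.

```python
import numpy as np

# An n-qubit state is a vector of 2^n complex amplitudes with unit norm.
n = 10
rng = np.random.default_rng(0)
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)  # squared moduli of the amplitudes sum to 1

# A quantum gate is a unitary matrix; build a random one via QR decomposition.
m = rng.normal(size=(2**n, 2**n)) + 1j * rng.normal(size=(2**n, 2**n))
q, _ = np.linalg.qr(m)

phi = q @ psi  # evolution is linear...
print(len(psi))                               # 1024 amplitudes for just 10 qubits
print(abs(np.linalg.norm(phi) - 1.0) < 1e-9)  # ...and norm-preserving
```

Adding one qubit doubles the vector length, which is why brute-force classical simulation hits a wall so quickly, and why reliable analog circuits (polynomially simulable) don’t raise the same issue.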

92. Craig Says:

Dennis 83 and Scott 82,

Yes, an adder using quantum Fourier transforms would be a very good test. If that worked on a 100 qubit machine, I think I might be shocked.

93. Craig Says:

As for my previous comment 92, it seems to me that the adder based on quantum Fourier transforms has almost the same level of intuitive complexity as Shor’s algorithm, except for Shor’s modular exponentiation step (which would probably be just as intuitively complex, even though it takes longer to run, as far as I know). So such a test would not just be a good test, but a *great* test for a quantum computer with 100 qubits. That is why I said I would be shocked if it worked on 100 qubits.

Reminds me of this quote, which I do not agree with (doesn’t IBM already have one that can do this?):

“Physicist Michel Dyakonov at the Université Montpellier in France, remains unconvinced quantum machines will become mainstream. “I don’t believe they will ever become practical,” he said. “The quest for ‘supremacy’ is somewhat artificial and belongs more to the hype than to science. Just show us an elementary quantum calculator that can do three times five or three plus five.””

94. fred Says:

Scott #54

“even if those loopholes would require insane conspiracies for Nature actually to exploit them”

Right, but many things are conspiracies until we find a new way to look at them.
QM itself is based on some pretty crazy conspiracies (call them limitations or coincidences):
The idea of framing QM experiments with the assumption of free will, even though free will is a fantasy. Science does rely on the assumption that things can be separated into a system + everything that’s not in the system, even though in practice everything is connected; this leaky abstraction is at the core of QM (and then everyone seems confused once you consider multiple observers and systems within systems).
The idea that things can happen without causes – i.e. an unstable atom decays spontaneously, yet nature does it in a way that follows a specific probability profile. Basically nature has free will? (isn’t free will the ability to influence things without being influenced? which makes no sense).
The idea that distant objects can be linked magically across distances, yet there’s no way to exploit this too radically.
The idea that there is quantum supremacy, yet it’s not that powerful in the end (unless you turn everything you care about into a quantum system, a bit like the digital revolution, where we turned a lot of what we cared about into bits… but humans are macro classical creatures).
There is probably an underlying explanation for all this, more fundamental than QM.

95. fred Says:

Scott #91

Right, sorry I wasn’t clear.
My analog gadget is designed to solve NP-Hard problems specifically (not factoring numbers or simulating QM systems!)
I don’t claim that it would be any sort of breakthrough (it’s still doing a full search on all the inputs, the backtracking is built-in), just an interesting example of replacing a digital computation with an analog circuit. I don’t think there’s anything fundamental precluding this – in the end, the distinction between analog and digital is only in the eye of the beholder. A thing isn’t a computer in itself, only in the context of it mapping our own human concepts, as an extension of our brains; if I handed you a very complex mechanism unknown to you, you wouldn’t be able to answer the question “how much software is in this?”.
Such an analog gadget could be faster than a digital computer trying to solve the same problem, up to some input size, because analog machines can only be scaled up so much before noise becomes an issue (just like QCs!).

And, no matter how good your QC is at maintaining a gigantic amplitude vector, it won’t be better in solving NP-Hard problems either (asymptotically).

96. Scott Says:

fred #95:

And, no matter how good your QC is at maintaining a gigantic amplitude vector, it won’t be better in solving NP-Hard problems either (asymptotically).

97. Teresa Mendes Says:

Andrei #69:

Thank you for your comment, Andrei (my local realist brother 🙂 )

Did I understand right? The additional no-conspiracy assumption required by Bell’s theorem means that Bell tests cannot be considered a falsification test of local realism?

It doesn’t change much, for me – it doesn’t change the fact that local realism has never been experimentally rejected.

It’s a problem for quantum mechanics. The proof quantum mechanics needs for non-locality is then unavailable, if that additional assumption is accepted as absolutely required. Science would lose a good tool, in my opinion.

Everybody knows quantum mechanics is a fantastic mathematical tool. Just a mathematical tool, or how else can one explain the multiple interpretations of what it means, all with the same mathematical predictions? Physics is not Math. Math is abstract; it doesn’t always have to have a direct connection to reality. So where do we draw the line?

In my opinion, for Physics, that line is local realism. A mathematical prediction that complies with local realism is a result we can interpret as being a real thing; otherwise, no.

So, science needs Bell tests. But also needs scientists to adequately report the conclusion that can be drawn from the Bell experiments’ results.

There is no discussion about the interpretation of a Bell test, using a strict EPR-B protocol: either it is compliant with local realism, or it is not. Then we can return to that additional assumption, if needed, but never when an experiment is not a Bell test. The strict EPR-B protocol requires solving the so-called ‘locality loophole’ – it’s an experimental requirement. The ‘detection loophole’ is not an experimental problem; it matters only in the interpretation of the results, when deciding whether the test is conclusive or not. An inconclusive test is a result that says it didn’t reject local realism. Putting both ‘loopholes’ at the same level is not correct.

Do you really think “Einstein disproved by 100.000 Bellsters”, (source: IQOQI-Vienna), is an acceptable headline to inform the world on the result of the BBT 13-suite of experiments?

98. Job Says:

Scott #82: alright, just to clarify, I meant the adder that is based on the quantum Fourier transform (quant-ph/0008033), not just the classical full adder expressed in quantum gates.

That does seem like a good test for a QC. I would like to see something like that well before we get to 100 qubits, though.

It looks like the QFT-based algorithm for adding two registers A and B involves running the QFT on A, then running the adder between A and B, then running the inverse QFT on A.

Both the QFT and the adder require lots of qubit interactions, and it’s not clear whether qubit connectivity will also increase with qubit count. Is a 100 qubit QC still just a grid? If so, the respective overhead would just degrade fidelity.

There’s also the added difficulty that QFT uses lots of small rotations, and even though this can be reduced using AQFT, each rotation would still involve multiple gates.

In practice that would probably translate to a circuit depth of thousands of gates, and there are smaller and more targeted tests that we can run at that scale.

Ultimately, it’s not like addition is a difficult problem for a classical computer to solve, so the choice of 100 qubits is completely arbitrary. I would rather see addition of two 25-qubit integers with reasonably high fidelity than 100-qubit integers at 0.02% fidelity or lower.
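For readers following along, the arithmetic behind the QFT adder Job describes (quant-ph/0008033) can be checked with dense matrices at small sizes. This is only a mathematical sketch, assuming nothing about any real device: it applies the full QFT matrix, a diagonal phase multiplication (which an actual circuit would decompose into controlled rotations), and the inverse QFT.

```python
import numpy as np

def qft(n):
    """QFT matrix on n qubits: F[k, a] = exp(2*pi*i * a*k / N) / sqrt(N)."""
    N = 2**n
    k, a = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * k * a / N) / np.sqrt(N)

def draper_add_const(n, a, b):
    """Map |a> to |a+b mod 2^n> via QFT, phase rotations, inverse QFT."""
    N = 2**n
    state = np.zeros(N, dtype=complex)
    state[a] = 1.0                            # computational-basis input |a>
    F = qft(n)
    state = F @ state                         # into the Fourier basis
    state *= np.exp(2j * np.pi * b * np.arange(N) / N)  # addition is diagonal here
    state = F.conj().T @ state                # back to the computational basis
    return int(np.argmax(np.abs(state)))      # outcome is deterministic

print(draper_add_const(5, 13, 22))  # (13 + 22) mod 32 = 3
```

The many small controlled rotations Job mentions appear when that diagonal phase is decomposed into gates, which is exactly where the AQFT approximation helps.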

99. Teresa Mendes Says:

Fred #95:
Aren’t there commercial analog components in flight simulators and other devices that need real-time computing? Is that only for speed, or is there another reason?

100. fred Says:

Teresa #97

Bell himself said this in an interview

“There is a way to escape the inference of superluminal speeds and spooky action at a distance. But it involves absolute determinism in the universe, the complete absence of free will. Suppose the world is super-deterministic, with not just inanimate nature running on behind-the-scenes clockwork, but with our behavior, including our belief that we are free to choose to do one experiment rather than another, absolutely predetermined, including the ‘decision’ by the experimenter to carry out one set of measurements rather than another, the difficulty disappears. There is no need for a faster-than-light signal to tell particle A what measurement has been carried out on particle B, because the universe, including particle A, already ‘knows’ what that measurement, and its outcome, will be.”

The loophole seemed pretty obvious to him, and it’s interesting to note that he doesn’t even linger on conspiracy theories as a rebuttal; by “super-deterministic” he simply meant that free will isn’t a thing, and that’s it.
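For context, the correlations a Bell test measures are easy to state numerically. A minimal sketch, using the standard textbook CHSH angles (nothing specific to this thread): for the singlet state, quantum mechanics predicts E(a, b) = -cos(a - b), which pushes the CHSH combination to 2√2, beyond the bound of 2 that any local hidden-variable model (absent superdeterminism) must obey.

```python
import numpy as np

def E(a, b):
    """Singlet-state correlation for analyzer angles a and b (radians)."""
    return -np.cos(a - b)

# Standard CHSH measurement angles.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # 2*sqrt(2) ~ 2.828, exceeding the local-realist bound of 2
```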

101. Scott Says:

fred #100: In this discussion, the question of human free will is a complete red herring—that strikes me as the central point that the believers in “superdeterminism” fail to understand. If the world was superdeterministic, it would mean much more than that human free will was impossible. More importantly (!), it would mean that you couldn’t isolate any part of the universe from any other part, not even approximately and not even for the practical purpose of doing a scientific experiment. Sasquatch and the Loch Ness monster could forever evade detection, simply because of conspiratorial correlations (imprinted into the Big Bang) between their locations and the supposedly random choices made by the people looking for them. In such a world, the only reasonable response would be to give up and stop doing science altogether. If we keep doing science, then if we’re thinking clearly about the matter we’ve already implicitly rejected superdeterminism.

Seems apropos starting around #100
https://www.smbc-comics.com/comic/fossils-3

103. Michael MacDonald Says:

Rather than superdeterminism, I prefer to think of entangled states existing without time; the boundary conditions are “known” by the entangled states regardless of when they occur. You could also think of it as entangled particles signaling each other backwards through time (which is why it’s impossible to convey actual information this way; it would violate causality).

Even if you take this view as indistinguishable from superdeterminism, I don’t think that should have any effect on our perception of human free will or any human endeavor. Our perception of human free will is already a construct of the mechanisms of our consciousness; and we already know that the physical systems underlying those mechanisms at their most fundamental level are quite odd compared to our everyday experience.

104. fred Says:

Scott #101

I could be misunderstanding this quote from Bell, but it seems to me that his definition of “determinism” is the belief that the evolution of any inanimate clump of atoms depends only on prior causes, but somehow animate clumps of atoms escape this; hence experimenters are able to truly make independent choices that “surprise” the rest of the universe. In other words, if we were to rewind the evolution of the universe, and rerun it as it was at some point in the past, the actions of the experimenter could/would end up different each time. Even taking quantum randomness into account, this definition of determinism means that re-running the film of the universe since the big bang would each time lead to a different wave function for the universe as soon as conscious beings start appearing, the reason for it being “free will”; and the reason for this divergence of the wave function of the universe would be a total mystery outside the reach of science (because, if we could explain it by some mechanism, say, the physics of the “soul” world, we could extend physics to include it).

And Bell’s definition of super-determinism is the hypothesis that human actions are just as based on causality as any other system of atoms. Everything an experimenter does is based on prior causes. No matter how many times we rewind the film of the universe and rerun it, the actions of the experimenters would be the same again and again, i.e. the wave function of the entire universe is the same each time (or all the branches of the many-worlds theory would be exactly the same each time).

But I don’t think that the above is what you mean by super-determinism.
It seems that Bell’s definition of super-determinism is your definition of determinism.
And then you add to this that the initial conditions at the big bang were chosen just so that dinosaurs never really existed, but the evolution of the universe was such that arbitrary patches of dust happened to clump up into what looks like dinosaur bones, and we’re all fooled by it. Or any variation of this based on the scientific theory one wants to falsify (even though I doubt that you can always choose the initial conditions to lead to any arbitrary result, because you’re still limited by the unitary evolution of the wave function, and only a limited set of extraordinary coincidences are possible).

But why didn’t Bell use this as an argument? Why didn’t he bring up such conspiracies as an extra requirement in his definition of super-determinism? All he said was that free will would have to be rejected.

It’s also a fact that every atom of gas in a room is “connected” to every other atom. But science is possible insofar as we can make statistical observations about the evolution of average values. That’s the only type of science that’s possible, since we’re always part of the system. That’s also the reason why computers work: each iPhone 10 fresh off the factory line is a unique system, yet they all (almost) always come up with the same result for 2+2 (in spite of being quantum systems).

105. Scott Says:

Michael MacDonald #103: But does that view (which sounds like the “transactional interpretation”) buy you anything over the usual view? Can it explain, for example, how a QC works? Would a quantum computation involve, like, an exponential number of signals going backward in time until the computation was finished?

106. Scott Says:

fred #104: My definition of “superdeterminism” is simply “whatever that thing is, that lets you appear to violate the Bell inequality, in a way that closes all the experimental loopholes, even in a world that’s local and classical.” And I meant exactly what I said: that that thing is the same thing that can make bigfoot and the Loch Ness monster always be there except when you turn your head to look (or otherwise try to detect them). I.e., there’s no such thing that’s in any way compatible with the continued practice of science.

107. Cromwell's Rule Says:

Scott #101: Phrases like “only reasonable response” and “If we keep doing science” imply that there is no possibility that superdeterminism is true.
A position that doesn’t violate Cromwell’s rule: We don’t currently think that superdeterminism is falsifiable and so reject its place in science. We endeavor to be scientific insofar as we perceive its benefits, with the recognition that such endeavor may just be a temporary placeholder for something better (including, a belief in superdeterminism!).

108. Scott Says:

Cromwell #107: I mean, it’s possible that I’m mistaken and science doesn’t work. But I still fail to understand how it’s possible even in principle to do a scientific experiment involving random choices (i.e., virtually any scientific experiment), and learn anything from its result, in a superdeterministic universe.

109. fred Says:

Scott #106

I see your point about the specifics of QM (whether free will should be brushed aside, I don’t know, since many in the community seem somewhat obsessed with it).
But almost anything can be made to look like a conspiracy when we don’t have enough insight into it. A conspiracy may just be a clue that there’s some underlying mechanism we’ve yet to discover (and ‘t Hooft is looking for a fundamental new science to replace the conspiracy; he’s not saying he will succeed).

E.g. the initial conditions at the big bang were just so that billions of individual clumps of atoms on the earth are now organized in such a way that they all have some high-level state (i.e. a statistical property) that can be interpreted as the same answer to the question 2+2=? (a fact independent of the specific accidents of human history).
When considered purely in terms of the evolution of the totality of the atoms of the solar system (first the big bang, then the earth as a cloud of dust that slowly gathered into a ball of magma, which cooled down, etc., until now), it’s an incredible coincidence which could only be explained by some “conspiracy” involving all the atoms in the solar system influencing each other in some very delicate/intricate way… it must run really deep, and we don’t have a unique theory to explain it. Except that it must be driven by something fundamental, because it would be just too unlikely to happen based on just the right set of initial big bang conditions. Of course we could stop digging, wrap the observation into a theory (“the universe eventually always spontaneously self-organizes into computers”) and be done with it.

110. fred Says:

Scott #108

” I still fail to understand how it’s possible even in principle to do a scientific experiment involving random choices (i.e., virtually any scientific experiment), and learn anything from its result, in a superdeterministic universe.”

Let’s be even more specific – what is a choice when everything is made of atoms that don’t have a choice?
And if there’s no choice, why did brains appear?

I think the answer is in the fact that what brains do is capture statistical models of the world – even if each individual brain has zero true freedom, and its fate is sealed so to speak, the entire population of brains contains billions of instances which as a whole captures general practical truths about their complex environment, “practical” in the sense that it makes their reproduction and propagation more likely.

Science is just a refined version of a bacterium encoding that “sunlight is bad, water is good”, linking these facts to its motion because that’s what makes its survival more likely.

The underlying key mechanism of the universe is that self-duplicating clumps of atoms can appear.

111. Andrei Says:

Teresa Mendes,

“Did I understand right? The additional no-conspiracy assumption required by Bell’s theorem says that Bell tests cannot be considered a falsification test to local realism? ”

What I mean is that if a theory does not comply with one of the assumptions of Bell’s theorem, it falls outside the scope of the theorem. Classical field theories (like classical EM and GR) do not comply with the independence assumption, therefore they cannot be ruled out by Bell’s theorem. Let me describe the argument in more detail.

A Bell test consists of three systems: the source of the entangled particles and two detectors. At the quantum level these systems are large groups of charged particles (electrons and quarks). The hidden variables (the spins of the entangled particles) depend on the way the electrons and quarks in the source move. Their motion is given in classical EM by Newton’s laws + the Lorentz force law. The Lorentz force depends on the magnitude of the electric and magnetic fields at that location. In turn, the electric and magnetic fields are a (classical) superposition of the fields generated by all charged particles in the experiment (source + detectors). In conclusion, the hidden variables are not independent of the states of the detectors.

The same argument applies to any field theory, or any theory with long range forces. The only theory Bell’s theorem can rule out is rigid-body mechanics with contact forces (billiard balls).

“So, science needs Bell tests.”

No, rigid body mechanics with only contact forces cannot explain EM induction or planetary systems so we do not need Bell’s theorem to tell us it cannot be true.

“Do you really think “Einstein disproved by 100.000 Bellsters”, (source: IQOQI-Vienna), is an acceptable headline to inform the world on the result of the BBT 13-suite of experiments?”

As I tried to explain above, Bell’s theorem cannot rule out classical EM so Einstein cannot be disproved even if a perfect Bell test is performed and shown to comply with QM’s prediction.

112. matt Says:

Scott, I keep hearing from various people that it doesn’t matter that IBM could simulate this in 2 days because the “real point” is that it shows that quantum systems require very high dimensional Hilbert spaces. But, uh, that’s basically something that every physicist has known for about 90 years. If you doubt the 90 years remark, I can find you papers from 60-70 years ago where people were numerically simulating small quantum spin chains and lamenting that the exponential growth of the Hilbert space restricted them to small systems (9 spins might have been around the limit then). Further, these numerical simulations were compared to real experiments, and there was actually quite some confusion for a while due to discrepancies with the experiments until larger simulations became available.

113. Andrei Says:

Scott,

“In this discussion, the question of human free will is a complete red herring—that strikes me as the central point that the believers in “superdeterminism” fail to understand.”

I do think superdeterminism is the only reasonable option but I agree with you that “free will” is a red herring.

” If the world was superdeterministic, it would mean much more than that human free will was impossible. More importantly (!), it would mean that you couldn’t isolate any part of the universe from any other part, not even approximately and not even for the practical purpose of doing a scientific experiment.”

This is not true. In the context of Bell’s theorem (where the word “superdeterminism” appeared) superdeterminism requires that the hidden variables (the properties of the entangled particles) are not independent of the detectors’ states. So, as long as the experiment does not use entangled particles you can ignore superdeterminism.

To put it in a different way, if your experiments are compatible with QM they will be compatible with a superdeterministic interpretation of QM as well, because the predictions will be similar.

” Sasquatch and the Loch Ness monster could forever evade detection, simply because of conspiratorial correlations (imprinted into the Big Bang) between their locations and the supposedly random choices made by the people looking for them.”

As far as I know Sasquatch or Loch Ness monster are not supposed to be in an entangled state so no correlation between them and the way they are supposed to be detected is implied by superdeterminism.

” In such a world, the only reasonable response would be to give up and stop doing science altogether. If we keep doing science, then if we’re thinking clearly about the matter we’ve already implicitly rejected superdeterminism.”

Such a conclusion is based on a logical fallacy – Hasty Generalization. Superdeterminism implies that some correlations exist when entangled states are present. It says nothing about other situations. Your conclusion does not follow. Again, if you can do science given QM’s predictions you can also do science if the same predictions are explained by a local-realist, superdeterminist scenario.

114. Scott Says:

matt #112: As I said in my FAQ a month ago, I think it was important to demonstrate that a programmable device can access an exponentially large Hilbert space. I don’t mean that the device has to be BQP-complete, although of course simulating such a chip would become BQP-complete as you let the number of qubits go to infinity and the fidelity go to 1. I mean more precisely that
(1) the device can be asked to solve arbitrarily-generated instances of some general problem,
(2) the problem has a clean statement independent of the details of the device, and
(3) (vague and negotiable) we have some sort of complexity-theoretic evidence for the problem’s being classically hard.
I’ve agreed that, if your definition of “quantum supremacy” lacks one or more of these requirements, then supremacy almost certainly was achieved quite some time ago.

Anyway, maybe the shortest argument for Google’s experiment being relevant is that Gil Kalai agrees that it is! 😀

115. Cromwell's Rule Says:

Scott #108: I feel the same.
It’s a wonder that the word “superdeterminism” was even coined (and, since then, invoked in the QM community) since the idea has been discussed in other forms for ages. An example: Laplace’s mechanist thesis differs from superdeterminism only in that Laplace probably didn’t conceive of the state vector of the universe the way modern physics does (for example, that entangled states could be part of that vector).

116. Scott Says:

Cromwell’s Rule #115: No, my whole point is that “superdeterminism” really is something fundamentally different from Laplace’s mechanist thesis. I would even say: the former is self-evidently silly in a way that the latter is not.

An example of a “superdeterministic” thesis would be: the laws of physics are completely deterministic, and one of the laws is that there are no perpetual motion machines, and the way that law is enforced is that God fiddled with the initial conditions at the Big Bang, until She found one that would lead to everyone who tried to build a perpetual motion machine dying in a mysterious accident or whatever.

In other words, it gives a completely absurd non-explanation for something that already has a perfectly satisfactory explanation (in this example, thermodynamics, or in the quantum case, entanglement). The perfectly satisfactory explanation that we have requires no fine-tuning of the initial conditions, and (crucially) is actually specific to the thing being explained.

Which brings me to my response to Andrei. With “superdeterminism,” you simply assert that your cosmic conspiracy is only to be invoked to explain the one phenomenon that bothers you for whatever reason (e.g., quantum entanglement), and not to explain any of the other things that it could equally well have explained (e.g., bigfoot sightings). A scientific theory, by contrast, should intrinsically tell you (or let you calculate) which situations it’s going to apply to and which not.

117. fred Says:

In other words, it gives a completely absurd non-explanation for something that already has a perfectly satisfactory explanation (in this example, thermodynamics, or in the quantum case, entanglement)

I guess that energy as an invariant was also satisfactory enough until Emmy Noether made a profound connection between symmetry and the conservation laws.
Many people don’t seem that satisfied with QM and feel like there’s gotta be more answers at a deeper level.
Basking in a status quo isn’t going to move the problem of QM + gravity, right?
So kudos to those who have the guts to at least try! (that includes ‘t Hooft)

118. Cromwell's Rule Says:

Scott #116: If that is the accepted definition of superdeterminism (and, I guess, the reason for the “super” prefix) then you have pushed me even closer to abandoning Cromwell’s rule on this issue!

I didn’t think its definition was that strong, though. I thought it was more like: entanglement is so *uniquely* absurd that, given the intersection of everything’s light cones in the (“super”) distant past, there is a deterministic explanation for it at the intersection point which we just haven’t been able to figure out yet. By the very nature of this hypothesis, controlled experiments would be precluded.

I don’t know why entanglement is considered *that* absurd, though – why does it have to comport with standard human intuition? The workings of a bicycle boggled my mind till I learned to live with them; one observation that helped me in the reconciliation was that nothing seemed to preclude a bicycle’s existence except my obstinacy.

119. Cromwell's Rule Says:

Scott, just to clarify, I am surprised when you say that superdeterminism does not intrinsically define the limits of its applicability. Is that something that its proponents assert, or something that can be concluded from their other assertions? Either way, if true, then I can’t differentiate it from religion.
Sorry if you’ve already discussed this on top.

120. JimV Says:

Dr Bell: “There is no need for a faster-than-light signal to tell particle A what measurement has been carried out on particle B, because the universe, including particle A, already ‘knows’ what that measurement, and its outcome, will be.” (source–Wikipedia).

It seems reasonable to me that there is in fact no free will and therefore that the universe does know that experimenter A will use detector setting alpha producing measurement state aleph, and that experimenter B will get a complementary result. There could be (and probably is) some random function involved, so long as the universe “knows” what that random output will be. There’s no conspiracy to fake correlations. The correlations are due to the laws of this universe, which the universe knows.

One way to think of this is that this universe is a computer simulation being done in a much higher universe, using some pseudo-random functions. I don’t believe this myself, but it seems to me to work as a conceptual mechanism. Note that the simulation is using programmed natural laws which reduce (at our experimental scale and precision) to the ones we have deduced, including QM. A deterministic rat (or even flatworm) can find its way through a maze, so we should be able to do some science – and find the Loch Ness monster, if it existed.

121. Bennett Standeven Says:

fred #104

Bell’s own proof of his inequality involved Einstein’s notion of “elements of physical reality”; so he had to assume that Alice and Bob’s measurement angles are not determined by the same elements that determine the results of their observations. But later researchers found proofs of the inequality that don’t depend on this concept (they define realism solely by “counterfactual definiteness”), so they don’t require free will anymore. That’s why the modern definition of ‘superdeterminism’ doesn’t match Bell’s definition. In fact, the modern concept would be better described as “Omphalism”, since it involves quantum correlations being built into the initial conditions of the Universe, just like the original Omphalism involved the geological history of Earth being part of the initial Creation.

122. Andrei Says:

Scott,

“With “superdeterminism,” you simply assert that your cosmic conspiracy is only to be invoked to explain the one phenomenon that bothers you for whatever reason (e.g., quantum entanglement), and not to explain any of the other things that it could equally well have explained (e.g., bigfoot sightings).”

Not at all. Bell’s independence assumption is false in any theory with long-range forces, like classical EM, General Relativity, Newtonian gravity, etc. Any such theory that is also deterministic will be “superdeterministic” in Bell’s sense. All these theories imply correlations between distant systems as a result of those long-range fields/forces. Such examples abound: planets in planetary systems, stars in a galaxy, electrons in a wire, atoms in a crystal, etc. Those theories apply equally well to bigfoot, if such a being exists, but the observable effects will be different because, well, bigfoot is different from an electron and a camera is different from a Stern-Gerlach detector. The trajectory of an individual electron will be significantly affected by the EM fields produced by other electrons and quarks. The center of mass of a large aggregate of electrons and quarks, like bigfoot, will not be significantly affected because those effects will cancel out (objects are neutral on average). The larger the object is, the less likely it is for all those fields to arrange in such a way that we get observable consequences. Still, correlations between the motion of elementary particles inside bigfoot and inside the camera are to be expected; they are just difficult to measure.

“A scientific theory, by contrast, should intrinsically tell you (or let you calculate) which situations it’s going to apply to and which not.”

Sure. Bell’s independence assumption is false in the general case as a direct consequence of long range fields/forces but it is approximately true for large, macroscopic objects where those forces cancel to a large degree. You can ignore those forces when doing coin flips or searching for bigfoot but you cannot ignore them when properties of elementary particles or small groups of such particles are involved.

One can explain (even if only qualitatively at this time) all QM’s mysteries by focusing on the ignored long-range forces. In the two-slit experiment we have electrons sent one by one towards a barrier with two holes. A pattern corresponding to an interference pattern is obtained. This “great mystery” that presumably cannot be explained classically can be easily understood if the wrong classical model (bullets shot towards a concrete wall) is replaced by the correct model (the motion of a charged particle in the field of the charged particles inside the barrier). The electrons and quarks inside the barrier will alter the trajectory of the electron as a result of the Lorentz force. It is expected that the trajectory will be different when the geometry of the barrier changes, even if the electron passes through the same slit in the one-slit and two-slit situations, because a barrier with two holes will produce different electric and magnetic fields from a barrier with one hole.

123. b_jonas Says:

Wow! This new result in the article by Scott Aaronson and Sam Gunn is better than I hoped from your previous post. It claims that spoofing is hard for a random instance of the problem. I wish we could have such strong hardness results for other computational tasks.

124. Scott Says:

b_jonas #123: Honesty compels me to admit that there’s nothing impressive about that aspect of our result, since our hardness assumption also involves random circuits drawn from the same distribution. The result simply relates two different things you can do given a random circuit (amplitude estimation and linear cross-entropy spoofing).

125. Scott Says:

Andrei #122: No, because after Einstein, long-range forces can only propagate at the speed of light. Even before Einstein, you would’ve told a more complicated story about the instantaneous influence of long-range forces falling off with distance until it’s undetectably small, and about something of that kind being needed to get approximate isolation of different parts of the universe from other parts and, therefore, science (including whatever observations led you to believe in the long-range forces in the first place).

126. Scott Says:

Cromwell’s Rule #119 and JimV #120: Let me give a simple example to prove the general point I’m making, without needing to go all the way to the Loch Ness monster or bigfoot. Once you’re invoking superdeterminism, there’s absolutely no reason why you couldn’t also have faster-than-light communication, in addition to just Bell correlations. You’d just need to say that the initial conditions of the universe were chosen such that the recipient of the message always correctly guesses it while it’s still in transit (or maybe—why not?—before the sender even generates it). Of course the superdeterminists don’t say that, but their reason is entirely post hoc: they have no theory of their own that would let them predict that Bell correlations are possible whereas superluminal communication isn’t. An example of such a theory—one that, moreover, exactly predicts the quantity of Bell correlation that’s possible—is quantum mechanics.
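To make that last sentence quantitative: for a pair of qubits in the singlet state, quantum mechanics predicts the correlation E(a,b) = −cos(a−b) between measurement angles, and the standard CHSH combination of four such correlations reaches 2√2 (Tsirelson’s bound), while any local hidden variable theory is capped at 2. A few lines of Python check the arithmetic:

```python
import math

# Quantum (singlet-state) correlation between measurement angles a and b
def E(a, b):
    return -math.cos(a - b)

# CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b').
# Local hidden variables force |S| <= 2; quantum mechanics allows up to
# 2*sqrt(2) (Tsirelson's bound), achieved at these measurement angles:
a1, a2 = 0.0, math.pi / 2           # Alice's two settings
b1, b2 = math.pi / 4, -math.pi / 4  # Bob's two settings

S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(abs(S))  # 2.8284... = 2*sqrt(2), beating the classical bound of 2
```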

127. Andrei Says:

Scott,

“No, because after Einstein, long-range forces can only propagate at the speed of light.”

Sure, that’s the meaning of “local” in local realism. The interactions between source and detectors are local, but the result of those local interactions is the fact that the states of source and detectors are not independent. The interaction between Earth and Sun is also local, yet Earth and Sun are not independent systems.

“Even before Einstein, you would’ve told a more complicated story about the instantaneous influence of long-range forces falling off with distance until it’s undetectably small, and about something of that kind being needed to get approximate isolation of different parts of the universe from other parts and, therefore, science (including whatever observations led you to believe in the long-range forces in the first place).”

Approximate isolation is not the same thing as independence. Isolation means that you can describe a subsystem (say the Earth-Moon system) autonomously, without taking into account other distant systems (say the Sun). But to say that the motion of Earth or Moon is independent of the Sun’s is a completely different, and false, statement. As soon as you add another distant “autonomous” system, say the Saturn-Titan system, and ask about the relative motion of the Earth-Moon system and the Saturn-Titan system, it becomes clear that this autonomy (which is, as you say, crucial for doing science) has nothing to do with independence. The same goes for distant planetary systems in the same galaxy. You can describe them in isolation if you are only interested in the motion of the planets relative to their star, but you need to take the entire galactic field into account to describe the motions of the planetary systems relative to each other.

For a Bell test we are not interested in describing the source and detectors in isolation, but their relative behavior. And, just like in the case of distant planetary systems, their relative behavior can only be understood once the long-range interactions between them are taken into account, which means abandoning Bell’s independence assumption.

Increasing the distance between subsystems does give you autonomy, but not independence. Pluto is very far from the Sun compared to Mercury, but Pluto is not more “independent”. Both planets move in elliptical orbits. Increasing the distance gives you a bigger ellipse, not the random motion that independence requires.

128. Cromwell's Rule Says:

Scott #126: “…they have no theory of their own…”:
To be fair to the superdeterminists, they would probably tell you: yes our theory is currently a kluge of the type that you describe, and we realize that it’s a bug not a feature. Give us some time; no theory was born whole, i.e., without passing through a kluge state.

I would be more interested in their version of a controlled experiment, rather than whatever helps them sleep at night regarding entanglement.

129. fred Says:

Scott #126

But a superdeterminist like ‘t Hooft takes your objection seriously and doesn’t just stop there. The apparent conspiracy is only the result of a deeper mechanism (going back to simple cellular automata seems very popular in attempts to merge QM and gravity):

https://arxiv.org/abs/1405.1548

“Examples are displayed of models that are classical in essence, but can be analysed by the use of quantum techniques, and we argue that even the Standard Model, together with gravitational interactions, might be viewed as a quantum mechanical approach to analyse a system that could be classical at its core. We explain how such thoughts can conceivably be reconciled with Bell’s theorem, and how the usual objections voiced against the notion of `superdeterminism’ can be overcome, at least in principle. Our proposal would eradicate the collapse problem and the measurement problem. Even the existence of an “arrow of time” can perhaps be explained in a more elegant way than usual.”

130. JimV Says:

“Once you’re invoking superdeterminism, there’s absolutely no reason why you couldn’t also have faster-than-light communication, in addition to just Bell correlations.”

Yes, the universe (conceptually) could have been different, with different natural laws. Isn’t that true no matter what theories anyone has?

Maybe I’m missing a lot of background, in which Dr. Bell’s “absolute determinism” became inflated to “super determinism/conspiracy theory”. Just from the plain statement I quoted from Wikipedia, I don’t see any need for fine-tuning or any conspiracy theory. It just seems like another QM interpretation to me: there is no need for communication between distant entangled particles because the universe already knows what is going to happen (at least just before it does happen). I read “absolute determinism” as no free will (albeit with random number generators which are themselves under the universe’s control).

In which case, one person’s “ad hoc” is another person’s “another alternative which could possibly be the right answer, and so must be considered before we say we have considered everything”.

You know more about this than I do, so I am 98% sure you’re right. (So I’m saying there’s a chance …)

131. Filip Says:

Hi Scott,

Do you think a future quantum computer could be used to perform calculations that aid building a new even better one (i.e. better error correction and/or more qubits)?

132. dennis Says:

JimV #130: no, your 2% are actually right. This is all about interpretation. You have such a problem in every theory that fundamentally can only, ever, predict probabilities. I urge everybody to read Chapter 10.8 of “Probability Theory: The Logic of Science”.

Such a theory (like quantum theory) can, by construction, never describe the individual event. It can only describe the relative frequencies of many events.

As soon as you believe, then, that probabilities are fundamental (and not just an effective description of our state of knowledge, like Bayesians do), you get into all sorts of trouble with interpretations of entanglement and what not.

133. ppnl Says:

JimV #130

“Once you’re invoking superdeterminism, there’s absolutely no reason why you couldn’t also have faster-than-light communication, in addition to just Bell correlations.”

Yes, the universe (conceptually) could have been different, with different natural laws. Isn’t that true no matter what theories anyone has?

I think you are still missing the point. Imagine that the natural laws of the universe do allow for faster than light travel but by some massive coincidence everyone in the entire history of the universe who discovered it was struck by lightning or hit by a car before telling anyone. But if you believe in determinism then this is not a coincidence. It is simply a consequence of the initial conditions of the big bang.

Think of a cellular automaton. One way to prevent some complex pattern from appearing is to set the rules so that they disallow it. Another way is to set the initial conditions of the cells so that the pattern never appears. It isn’t the natural laws that prevent it but simply the initial state.

Superdeterminism seems to depend on the second method.

This is a general problem with determinism. There is no freedom or creativity. The whole universe plays out like a movie. The story it tells is the story encoded in the initial conditions of the universe. If that story does not involve us inventing faster than light travel, then it does not. But that does not mean that the laws driving the automaton do not allow faster than light travel. We are just playing the wrong movie.
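The two ways of preventing a pattern can be seen in a ten-line toy (my own illustration, not ‘t Hooft’s actual model): elementary cellular automaton rule 90 is perfectly capable of generating rich patterns, yet a special initial condition keeps them from ever appearing.

```python
# Elementary cellular automaton rule 90 on a ring: each new cell is the
# XOR of its two neighbors. The RULE permits complex patterns; whether
# they ever show up depends entirely on the INITIAL condition.

def step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def run(cells, steps):
    for _ in range(steps):
        cells = step(cells)
    return cells

rich_start = [0] * 15
rich_start[7] = 1        # one live cell: a Sierpinski-like pattern unfolds
quiet_start = [0] * 15   # all-zero start: nothing ever appears

print(run(rich_start, 5))   # nontrivial pattern (the rule allows it)
print(run(quiet_start, 5))  # all zeros forever (the initial state forbids it)
```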

My question for Scott is would scalable quantum computers disprove all versions of superdeterminism?

134. Scott Says:

Filip #131:

Do you think a future quantum computer could be used to perform calculations that aid building a new even better one (i.e. better error correction and/or more qubits)?

Yes. Indeed, one of the main applications that people envision for noisy, near-term quantum computers is to do experiments on error-correction that could provide useful information about how to scale up.

135. Scott Says:

ppnl #133:

My question for Scott is would scalable quantum computers disprove all versions of superdeterminism?

On the one hand, Gerard ‘t Hooft himself is on record predicting, on the basis of his superdeterministic speculations, that no quantum computer will ever outperform a classical computer. So according to him, the answer to your question is yes.

On the other hand, I’ve never understood why ‘t Hooft predicted that! It seems like a total misunderstanding of the logic of his own insane proposal. Supposing that a scalable quantum computer got built, why shouldn’t he simply say that the initial conditions of the universe must have already foreseen, e.g., which numbers the computer would be given to factor, and therefore pre-loaded the computer with the correct answers? In other words, why shouldn’t he say the exact same thing that he already did say about Bell experiments?

To summarize: as far as I can see, superdeterminism is empirically sterile and pointless; it can never make any new predictions whatsoever. According to the superdeterminists themselves, it makes a prediction that was either already falsified (by Google’s experiment) or has an excellent chance of being falsified in our lifetimes.

136. Andrei Says:

Scott,

“Once you’re invoking superdeterminism, there’s absolutely no reason why you couldn’t also have faster-than-light communication, in addition to just Bell correlations. You’d just need to say that the initial conditions of the universe were chosen such that the recipient of the message always correctly guesses it while it’s still in transit (or maybe—why not?—before the sender even generates it).”

I do not understand why you believe that the only way to get two distant systems correlated is by fine-tuning the initial conditions. Trivial counterexample: two stars orbiting each other. They can be arbitrarily far away, the interaction is local, they are not independent (they orbit their common center of mass), and this has nothing, but absolutely nothing, to do with fine-tuning the initial conditions.

In order to deny the independence assumption you only need to show that at least some states of the source are not compatible with some states of the detectors. That’s it. In classical EM this is trivial. It is easy to show that, as long as the number of charges is finite, the field configuration at the location of the source uniquely determines the charge distribution/momenta. This is much more than you need. Again, this has nothing to do with any fine-tuning; it’s just the normal behavior of an EM system.

It seems to me that by “local realism” you only think about billiard-balls (contact forces only). This is the only example where you actually need to fine-tune the initial conditions to get two distant systems in a correlated state. In field theories the situation is different.

“Gerard ‘t Hooft himself is on record predicting, on the basis of his superdeterministic speculations, that no quantum computer will ever outperform a classical computer.”

This is not true. Let me give you the exact quote (The Cellular Automaton Interpretation of Quantum Mechanics, p.79):

“Yes, by making good use of quantum features, it will be possible in principle, to build a computer vastly superior to conventional computers, but no, these will not be able to function better than a classical computer would do, if its memory sites would be scaled down to one per Planckian volume element (or, in view of the holographic principle, one memory site per Planckian surface element), and if its processing speed would increase accordingly, typically one operation per Planckian time unit of 10^−43 seconds.”

https://arxiv.org/pdf/1405.1548.pdf

It is worth noting that ‘t Hooft’s above prediction has little to do with superdeterminism per se. It is motivated by the fact that his interpretation is based on a discrete model. The CA is in fact a discrete field theory. A superdeterministic theory based on a continuous space-time would not have those limitations.

“as far as I can see, superdeterminism is empirically sterile and pointless; it can never make any new predictions whatsoever.”

Let me offer you a challenge:

Take a system of charged particles that consists of three distant subsystems. Arrange the initial parameters in any way you like. You need to fulfill two conditions:

1. The system obeys the laws of classical EM (Maxwell equations+Lorentz law+Newton’s laws)
2. At least two of the three subsystems are independent.

A failure to provide such an example would imply that classical EM “is empirically sterile and pointless; it can never make any new predictions whatsoever”.

137. Job Says:

There’s an interesting parallel to Bell’s experiment that surfaces in collaborative systems.

When you have shared memory between two clients, in order to maintain consistency without locking, non-commuting operations need to be handled in a particular way.

Conflict-free replicated data types (CRDTs) and Operational Transformation (OT) are two different resolutions.

CRDT ensures that all operations commute, but it does so at a cost, since each client needs additional state to make this work, and it gets larger over time.

OT is more interesting and it doesn’t have the same problem. It works by transforming concurrent operations. The way it would work in the context of Bell’s experiment is:
1. Alice applies measurement A to her local copy of the particle.
2. Bob applies measurement B to his local copy of the particle.
3. Alice’s and Bob’s operations are sent to each other over some communications channel.
4. Upon receiving Alice’s operation A, Bob transforms it against his earlier measurement B and applies the resulting A’ to his local copy of the particle. Alice does something similar.

The result is that Alice and Bob remain in a consistent state, though not necessarily one that is simply the result of either AB or BA. It’s actually AB’ = BA’.
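Here’s a minimal sketch of that transform step for the simplest case, two concurrent string insertions (toy helper names of my own, not any real OT library’s API):

```python
def apply_op(doc, op):
    """Insert op = (position, text) into the document string."""
    pos, text = op
    return doc[:pos] + text + doc[pos:]

def transform(op, against, against_wins):
    """Shift op's insert position past a concurrent insert `against`.
    `against_wins` breaks the tie when both insert at the same position
    (a real OT system derives this from site ids)."""
    pos, text = op
    apos, atext = against
    if apos < pos or (apos == pos and against_wins):
        pos += len(atext)
    return (pos, text)

doc = "qubit"
A = (0, "my ")   # Alice's concurrent insert
B = (5, "s")     # Bob's concurrent insert

# Alice applies A locally, then the transformed B; Bob does the mirror image.
alice = apply_op(apply_op(doc, A), transform(B, A, against_wins=True))
bob   = apply_op(apply_op(doc, B), transform(A, B, against_wins=False))
print(alice, bob)  # both sites converge: "my qubits my qubits"
```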

Normally, Alice and Bob are not “inside” the collaborative system. But suppose they are both in a simulation, such that the universe is the collaborative system. They have both entered their respective simulation pods.

Then, the communications channel is external to the universe, so Alice would receive Bob’s operations depending on how far the two pods are from each other, rather than according to the distance that separates the two within the simulation.

And it’s amazing that this would still not allow Alice and Bob to communicate at super-luminal speeds within the simulation. It’s incredible design. 🙂

When I think about Bell’s experiment in these terms, the results seem sensible, and I wonder whether local realism (as perceived from within the simulation) would require an ever-growing state like CRDT does.

Also, it would be great if QCs could efficiently determine whether f(a,b) is commutative for all input pairs, beyond what Grover’s search can do.

That would have really practical applications, though it is clearly an NP-hard problem. Maybe there is a constrained version of that problem that QCs could solve? It seems to be the kind of problem that’s almost in quantum territory.
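For reference, the classical brute-force version of that check (the search Grover’s algorithm would speed up quadratically over a finite domain) is just a scan over all unordered pairs:

```python
from itertools import combinations

def is_commutative(f, domain):
    """Brute-force check: f(a, b) == f(b, a) for every pair in the domain."""
    return all(f(a, b) == f(b, a) for a, b in combinations(domain, 2))

domain = range(8)
print(is_commutative(lambda a, b: (a + b) % 8, domain))  # True
print(is_commutative(lambda a, b: (a - b) % 8, domain))  # False
```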

138. gentzen Says:

Scott #71: Thanks for sharing Shyam Shankar’s description of how a transmon qubit works.
Also thanks for leaving the comment thread open for so long.

gentzen #70: Let me repeat the claim and the excuse for the missing explanation:

15 mK * k_B / h = 312.5 MHz where k_B is the Boltzmann constant, h is the Planck constant, and 15 mK is the temperature of 15 milliKelvin at which Sycamore was operated. Assume I would try to explain why those 312.5 MHz are a lower bound for how fast the quantum bits must be operated (or at least controlled) for extended quantum computations. …

By coincidence, I don’t have access to Lienhard Pagel’s explanation of that claim at the moment, and no longer enough time to properly write down …

Now I have access to Lienhard Pagel’s explanation again. In “Information ist Energie,” section “2.5.8 Quantencomputing,” page 44, the claim is mentioned for the first time:

Wo liegen nun die Vorteile des Quantencomputings? Zuerst sollen einige technische Vorteile genannt werden:

3. Quantensysteme, die bei Raumtemperatur funktionieren, müssen sich gegenüber dem thermischen Rauschen der Umgebung durchsetzen. Wie aus Abbildung 4.4 ersichtlich, kommen bei Raumtemperatur nur relativ hochenergetische Quanten in Frage, weil sie sonst durch die Wärmeenergie der Umgebung zerstört würden. Diese Systeme können wegen ΔEΔt ≈ h dann nur schnell sein, auch als elektronische Systeme.

Google’s translation to English feels faithful to me:

What are the advantages of quantum computing? First, some technical advantages should be mentioned:

3. Quantum systems operating at room temperature must prevail over the thermal noise of the environment. As shown in Figure 4.4, only relatively high-energy quanta are possible at room temperature because they would otherwise be destroyed by the thermal energy of the environment. Because of ΔEΔt ≈ h, these systems can only be fast, even as electronic systems.

(Figure 4.4 shows the energy and time behavior of natural and technical processes. The horizontal axis shows time, the vertical axis energy. Shown processes include supernovae, gamma-ray bursts, nuclear bombs, ultrafast lasers, CMOS, neurons, and many different types of radiation. All radiation processes lie on a straight line in that double-logarithmic plot, and below that line is the forbidden region.)

To me, Pagel’s explanation is both disappointing and fascinating at the same time. The explanation itself is not very different from my sketched explanation, only less concrete. (In a certain sense, Pagel’s explanation is stretched out over different sections and examples, which makes it hard for me to reproduce it here.) What is fascinating is that Pagel considers his claim to be an obvious advantage of quantum computing (that doesn’t need serious justification), rather than a serious technical obstacle to practical implementation of quantum computing (that might not apply to all possible implementations). This feels similar to how Svante Arrhenius estimated the atmospheric warming effect of CO2 in 1896 and “saw that this human emission of carbon would eventually lead to warming. However, because of the relatively low rate of CO2 production in 1896, Arrhenius thought the warming would take thousands of years, and he expected it would be beneficial to humanity.”

The following passage from page 104 sheds some light on Pagel’s expectations:

Es muss angemerkt werden, dass die Energiemenge E, die für ein Bit aufgewendet wird, von den technischen Gegebenheiten abhängig ist und keineswegs prinzipiellen Charakter hat. Über die Unbestimmtheitsrelation kann nun berechnet werden, wie schnell ein solches Bit mit der Energie E umgesetzt werden kann:
Δt = h / (k_B T ln 2)   (4.34)
Diese Zeit liegt bei etwa 100 Femto-Sekunden. Nun kommt die Transaktionszeit ins Spiel. Bei kürzeren Transaktionszeiten ist die Bit-Energie ohnehin größer als die thermische Energie bei Zimmertemperatur und das Bit kann sich gegenüber der thermischen Energie seiner Umwelt behaupten.

It must be noted that the amount of energy E spent on a bit depends on the technical conditions and is by no means of a fundamental nature. The uncertainty relation can now be used to calculate how fast such a bit with energy E can be switched:
Δt = h / (k_B T ln 2)   (4.34)
This time is about 100 femtoseconds. Now the transaction time comes into play. With shorter transaction times, the bit energy is anyway greater than the thermal energy at room temperature, and the bit can prevail over the thermal energy of its environment.
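Both numbers quoted in this comment can be checked in a few lines (a quick sanity check with CODATA constants, not taken from Pagel’s book):

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
h   = 6.62607015e-34  # Planck constant, J*s

# Thermal frequency scale at Sycamore's 15 mK operating temperature
f = 0.015 * k_B / h
print(f)   # ≈ 3.13e8 Hz, i.e. the 312.5 MHz quoted above

# Pagel's eq. (4.34): Δt = h / (k_B T ln 2) at room temperature T = 300 K
dt = h / (k_B * 300 * math.log(2))
print(dt)  # ≈ 2.3e-13 s, the order of Pagel's "about 100 femtoseconds"
```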

Pagel cannot possibly expect that some classical control of a quantum computer would be able to operate anywhere near a timescale of 100 femtoseconds. So he probably imagines a quantum computer as “quantum all the way down,” just as an Everettian like David Deutsch might suggest. Therefore, the extremely fast timescale doesn’t feel like a serious technical obstacle to him. The irony is that all current and probably all future quantum computers crucially depend on the interaction between a huge macroscopic classical control and extremely well isolated and calibrated quantum states (i.e. the exact opposite of “quantum all the way down”). All proposals for error correction depend on classical measurement interactions, so the Copenhagen interpretation seems very suitable for describing those implementations of quantum computation. Maybe N. David Mermin’s “Copenhagen Computation: How I Learned to Stop Worrying and Love Bohr” hints at some physical constraint with respect to implementations of quantum computers.

139. Bennett Standeven Says:

The best argument I see for thinking that superdeterminism requires some sort of miraculous fine-tuning of the universe’s initial state is that ‘t Hooft’s theory actually does so:

(from https://arxiv.org/pdf/0908.3408.pdf)
“If, on the other hand, Alice wishes to rearrange her measuring device from measuring one operator σ1 to measuring another operator σ2 that does not commute with σ1, she does something that cannot be expressed in terms of beables alone. The rotation of her device is described not by a changeable, but by a superimposable operator. Somewhere along the line, superimposable operators must have come into play. Therefore, Alice cannot make such a transition if the quasar had only been affected by a changeable operator. This particular disturbance of the quasar is not of the right kind to cause Alice (or rather her measuring device) to go into a superimposed state. From the state she is in, she cannot measure the new operator σ2.

Any perturbation that we would like to consider, would be easiest described by a superimposable operator, not just by a beable or even a changeable.

Is this contradicted by the Bell inequalities[3]? There may be different ways to circumvent such a conclusion. One is, as stated above, that a pure changeable operator would replace the beables for quasar A into other beables, and thus not allow Alice to turn her detector into the non-diagonal eigenstate needed to measure the new operator. We must then assume the quasars to be in an entangled state from the start.”

In other words, the universe needs to be in a special state in which all modifications must be changeable rather than superimposable.

Considering the alternative case where Alice’s detector settings can be altered by a changeable operator:
“The above considerations lead us to realize that the set of states we use to describe physical events are characterized by two important extensive quantities: the total energy and the total entropy. In all states that we consider, both these quantities are very small in comparison with the generically possible values. Any ontological state of our automaton, characterized by beables, is a superposition of all possible energies and all possible entropies. The state we use to describe our statistical knowledge of the universe has very low energy and very low entropy. It appears that, if in any ontological state we make one local perturbation, the energy will not change much, but the entropy increases tremendously, thus allowing particles α and β at t=t1 to enter into a modified, (dis)entangled state. If we perturb the quantum state, both energy and entropy change very little, α and β stay in the same state, but then Bell’s inequality needs not be obeyed.”

In this case, it seems we have the Many Worlds Interpretation, which is indeed local, and reproduces quantum mechanics on generic initial conditions. But it does not satisfy counterfactual definiteness (because, in ‘t Hooft’s terminology, it is not possible to determine which perturbations are changeables and which are superimposables).

@Scott, 135:
From ‘t Hooft’s earlier papers (like the one I quoted), I actually got the impression that he thought P = BQP (or at least P/qpoly = BQP/qpoly).

140. Andrei Says:

Bennett Standeven,

“The best argument I see for thinking that superdeterminism requires some sort of miraculous fine-tuning of the universe’s initial state is that ‘t Hooft’s theory actually does so”

1. This is not a good argument.
2. I do not think ‘t Hooft implies that, but more importantly:
3. It is trivial to find distant systems that are not independent, yet no fine-tuning is involved. Two orbiting stars are such an example.

Please take a look at my previous post (136) to find such examples. In fact, the only theory I can think of that does imply Bell’s independence assumption (except for fine-tuning situations) is rigid body Newtonian mechanics with contact forces only (billiard balls). Any modern theory (local field theories like classical electromagnetism, general relativity) does not allow distant systems to be independent and this is obvious from their formalism.

141. Bennett Standeven Says:

When I say “fine-tuning” I mean that the initial state has probability zero. So billiard balls needing to hit each other doesn’t count. I actually agree that the way ‘t Hooft’s theory works isn’t necessarily representative of all superdeterministic theories; just the ones people have actually suggested.

As for independence, Bell’s Theorem only requires that Alice and Bob’s measurements can be independent, not that they must be. Of course perfect independence would require fine tuning, so we have to settle for approximate independence; but we can arrange for the correlation to be negligibly small. This doesn’t have anything to do with superdeterminism, as it is simply part of the experimental protocol.

142. Andrei Says:

Bennett Standeven,

The independence assumption means that the hidden variables (say the spins of the particles) are independent of the states of the detectors, not that the states of the detectors are independent of each other. The hidden variables depend on the state of the source emitting those particles, more exactly, on the electric and magnetic field configuration at the locus of the emission.

In classical electromagnetism the fields in a certain region depend on the positions/momenta of all field sources (electrons and quarks), including those in the distant detectors. So it is a mathematical certainty that the hidden variables are NOT independent of the detectors’ states, as those detectors’ states are ultimately described in terms of the electrons and quarks inside them. This observation is generally true and can be applied to any experimental protocol, including quasars or whatever.

Alice and Bob can indeed be assumed to be independent of each other because we only care about some macroscopic states, a regime that is reasonably well described by Newtonian mechanics with contact forces only. The independence assumption fails whenever you need to describe physics in terms of fields or long-range forces, such as electromagnetic or gravitational systems. ‘t Hooft’s cellular automaton interpretation is just an example of a discrete field theory, and the reason it cannot be ruled out by Bell is the same as in the case of electromagnetism.

143. Gil’s Collegial Quantum Supremacy Skepticism FAQ | Combinatorics and more Says:

[…] last thing, Gil. Nick Read just commented that experimental evidence is gradually pointing towards you being false on the matter of […]

144. Bennett Standeven Says:

No such assumption is necessary, since the state of each detector can simply be viewed as another of the possibly-hidden variables for the associated particle. It is of course impossible for, say, Alice’s hidden variables to depend on the state of Bob’s detector, because they are spacelike separated.

Since Alice and Bob have interacted in the past (they are working on the same experiment, after all) it is highly likely their own internal states are interdependent (there is of course no direct dependence between them due to the spacelike separation). So an experimental protocol of some sort is necessary to eliminate this interdependence.

145. Andrei Says:

Bennett Standeven,

“No such assumption is necessary, since the state of each detector can simply be viewed as another of the possibly-hidden variables for the associated particle.”

I am not sure I understand your point. Do you claim that Bell’s theorem works if the hidden variables are a function of the detectors’ settings?

“It is of course impossible for, say, Alice’s hidden variables to depend on the state of Bob’s detector, because they are spacelike separated.”

It is not impossible:

1. In classical EM the electric and magnetic field configuration at Alice (A) depends on the past/retarded charge distribution/momenta at Bob (B). You can find the exact formulas here (Feynman’s lecture, equations 21.1):

http://www.feynmanlectures.caltech.edu/II_21.html

2. But the present/instantaneous state of B also depends on the past state of B (deterministic theory).

From 1 and 2 it follows that A and B are not independent, even if spacelike separated.

So, according to classical EM, the states of A and B cannot be independent. However, those are microscopic states (position/momenta of electrons/quarks, E and B fields). In order to determine the macroscopic/observable consequences you need to solve those equations which cannot be done with the computational power we have. So, we need to rely on experiment. We just record the detectors’ orientations and submit them to a statistical test and see if they are independent or not. I agree that they are independent.

Unfortunately, we cannot directly measure the hidden variables (they are hidden, right?) so we cannot be sure if they are independent of the detectors’ settings. So, Bell’s theorem cannot rule out classical electromagnetism. In other words, classical electromagnetism is a superdeterministic theory.

146. gentzen Says:

gentzen #70, #138: Not being understood may even be a blessing. I recently thought about how to better explain quantum computing to outsiders (after one quite successful and one less successful explanation). My plan for next time is to start with 3 qubits (written in Dirac notation with up and down arrows), explain the complex superposition of the 8 (computational) basis states, explain the effect of classical reversible gates on that superposition, and then drop down to a single qubit with 2 states and explain 1-qubit gates, especially the Hadamard gate and phase shift gates. Those phase shift gates are related to the differences between the energy levels of the states.
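
The planned explanation (3 qubits as 8 complex amplitudes, gates acting on the whole superposition) can be sketched in a few lines of numpy; the particular state and angles below are purely illustrative, not part of the explanation gentzen actually gave:

```python
import numpy as np

# 3 qubits = one complex superposition over the 8 computational basis states
psi = np.zeros(8, dtype=complex)
psi[0] = 1.0  # start in |000>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
P = lambda phi: np.diag([1.0, np.exp(1j * phi)])             # phase shift gate
I = np.eye(2, dtype=complex)

# Hadamard on the first qubit: |000> -> (|000> + |100>)/sqrt(2)
psi = np.kron(H, np.kron(I, I)) @ psi

# a phase shift on the first qubit only rotates the phase of the |1..> half
psi = np.kron(P(np.pi / 2), np.kron(I, I)) @ psi
```

The point of the ordering (classical reversible gates first, then 1-qubit gates) is that `np.kron` makes explicit how a single-qubit gate acts on all 8 amplitudes at once.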

My own explanation is that the difference between the energy levels of your states must be sufficiently larger than 15 mK * k_B to be safe from random thermal state flips, and those differences in energy levels lead to phase shifts that would become too big if you couldn’t control your qubits faster (or at least better) than 312.5 MHz.
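
The two numbers in this estimate are consistent with each other: converting the thermal energy at 15 mK into a frequency via E = k_B·T = h·f gives almost exactly 312.5 MHz. A quick check, using the exact SI values of the constants:

```python
k_B = 1.380649e-23    # Boltzmann constant, J/K (exact in the SI)
h   = 6.62607015e-34  # Planck constant, J*s (exact in the SI)
T   = 0.015           # 15 mK

f = k_B * T / h       # frequency whose photon energy equals k_B * T
print(f / 1e6)        # roughly 312.5 MHz
```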

I claimed that Pagel’s explanation would not be very different from my sketched explanation, but my mental picture of a quantum computer had a subtle flaw. In my mental picture, each qubit had a phase, and phase errors of the individual qubits directly changed the phase of the complex numbers describing the superposition. But that is not how things work. If I simultaneously apply 3 phase shift gates to 3 qubits (to model the phase errors), then the phases are added together first, before they change the phase of the complex numbers describing the superposition. What I don’t like about this is that now the number of qubits has an influence on the impact of the phase errors. It seems that in the worst case, if n is the number of qubits, the quotient of the computation speed by the temperature now has to be n times as big for a reliable quantum computation. Maybe one can get it down to sqrt(n) times (i.e. the average case) by clever engineering, but it is still annoying. So the better route would be to reject the assumption in my sketched explanation that the difference between the energy levels of the two states of a single qubit must be sufficiently larger than 15 mK * k_B to be safe from random thermal state flips.
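
The claim that the phases are added together first can be checked directly: applying an independent phase-error gate to each qubit multiplies the amplitude of a basis state by the product of the individual phase factors, i.e. the phases of the qubits that are set to 1 add up in the exponent. A minimal numpy check (the specific angles are arbitrary):

```python
import numpy as np

def phase_gate(phi):
    # single-qubit phase error: |0> untouched, |1> picks up exp(i*phi)
    return np.diag([1.0, np.exp(1j * phi)])

phis = [0.10, 0.20, 0.30]            # independent phase errors on 3 qubits
U = phase_gate(phis[0])
for phi in phis[1:]:
    U = np.kron(U, phase_gate(phi))  # act on all 3 qubits simultaneously

psi = np.zeros(8, dtype=complex)
psi[7] = 1.0                         # the |111> basis state
total = np.angle((U @ psi)[7])       # = 0.10 + 0.20 + 0.30: the phases add
```

So a worst-case basis state accumulates the sum of all n single-qubit phase errors, which is exactly why the number of qubits enters the error budget.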

Admittedly, I am not at all up to speed on current approaches for the physical realization of quantum computing. I am still at the level from 20 years ago, as described in chapter 7 “Quantum computers: physical realization” of Nielsen and Chuang. All of the examples presented in that chapter are obviously unable to allow scalable quantum computing:

As these examples demonstrate, coming up with a good physical realization for a quantum computer is tricky business, fraught with tradeoffs. All of the above schemes are unsatisfactory, in that none allow a large-scale quantum computer to be realized anytime in the near future. However, that does not preclude the possibility, and in fact …

Probably already Sycamore uses a better way to limit the impact of the thermal environment. (Even if anybody should be able to understand what I am saying here, please consider the possibility that people already understood those thoughts 20 years ago, and had good reasons to pursue physical realizations of scalable quantum computers nevertheless. Keep in mind that any conclusions about the feasibility of scalable quantum computing might be wrong in practice, independent of what can be proven in theory. For example, Mermin’s theorem of 1968 from the paper titled “Crystalline order in two dimensions” (often incorrectly cited as the Mermin-Wagner theorem) seems to show that 2D crystals cannot exist. But graphene and other 2D crystals do exist. And Gerhard Gentzen proved the consistency of Peano arithmetic by finite means, even though people believed that Gödel’s theorem would prove that this is impossible. I was serious when I said at Gil Kalai’s blog: “But at the current moment, the field progresses nicely, so why unnecessarily disrupt that progress?” The money poured into quantum computing and quantum information science at the moment is well spent. It is also responsible for results like Ewin Tang’s, which stay valuable even if scalable quantum computers cannot be built.)

147. gentzen Says:

gentzen #146: “So the better route would be to reject the assumption in my sketched explanation that the difference between the energy levels of the two states of a single qubit must be sufficiently bigger than 15 mK * k_B to be safe from random thermal state flips.”

The assumption is wrong for optical quantum computing, due to the dual-rail representation:

It is possible for a cavity to contain a superposition of zero or one photon, a state which could be expressed as a qubit c0 |0> + c1 |1>, but we shall do something different. Let us consider two cavities, whose total energy is ℏω, and take the two states of a qubit as being whether the photon is in one cavity (|01>) or the other (|10>). The physical state of a superposition would thus be written as c0 |01> + c1 |10>; we shall call this the dual-rail representation.

This is from section 7.4.1 in Nielsen and Chuang. The important observation for the phase shift issue is stated in section 7.4.2:

Note that the dual-rail representation is convenient because free evolution only changes |ψ> = c0 |01> + c1 |10> by an overall phase, which is undetectable.

It sounds like a voluntary decision. OK, for a single qubit, the state |00> will be abundant and useless, and |11> will be extremely rare. But for larger state spaces, any representation where each tensor product basis state (occurring in the superpositions) has the same number of 1s could have been a valid alternative too. According to Scott’s lecture notes, the name dual-rail representation arose “since the two channels, when drawn side-by-side, look like railway tracks”.
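
The quoted observation is easy to verify: both dual-rail basis states |01> and |10> contain exactly one photon, so free evolution (energy ω per photon, setting ℏ = 1) multiplies both amplitudes by the same factor, which is an unobservable global phase. A small sketch under those assumptions (the numbers are arbitrary):

```python
import numpy as np

w, t = 2.0, 0.7              # mode frequency and evolution time (hbar = 1)
# energies of the dual-rail basis states |01> and |10>: one photon each
energies = np.array([w, w])
U = np.diag(np.exp(-1j * energies * t))

# U equals exp(-i*w*t) times the identity on the dual-rail subspace, so any
# qubit state c0|01> + c1|10> only acquires an undetectable global phase
assert np.allclose(U, np.exp(-1j * w * t) * np.eye(2))
```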

Is the assumption also wrong for liquid NMR? That was the other room-temperature physical realization of a quantum computer described in Nielsen and Chuang. The way it is supposed to be operated, the frequencies are still realistic for controlling the phase shifts, but the energy gaps are way below the thermal energies; see for example section 7.7.1: “… produce an NMR signal at about 500 MHz when placed in a magnetic field of about 11.7 tesla”. So the states are not protected from thermal state flips, and are basically guaranteed to be destroyed in thermal equilibrium. The question is just how fast they will be destroyed. Apparently this has been measured experimentally, and the decay was slow compared to the 500 MHz operations. It should also be possible to estimate this theoretically based on phonon interactions. However, the Wikipedia articles on NMR quantum computing and on Chuang indicate that “the field of liquid state NMR quantum computing fell out of favor due to limitations on its scalability beyond tens of qubits due to noise” already by the end of 2002. (Will solid state NMR rise some day?)

But liquid NMR contains lessons for how to deal with the trade-off between temperature (state flips) and speed (phase shifts): we don’t need to control it for an arbitrary amount of time or an arbitrary number of qubits; it is enough to control it long enough, for a sufficient number of qubits, until quantum error correction can be applied.

This lesson shows that I was too pessimistic when I wrote: “Maybe one can get it down to sqrt(n)-times (i.e. the average case) by clever engineering”. One can get it down to O(1) times, assuming the operations required for error correction (like “measure and discard bad qubits” or “pump in fresh |0⟩ qubits”) can be done fast enough. (Scott’s lecture notes also list “apply gates to many qubits in parallel” and “do extremely fast and reliable classical computation”.)

But can one also get it down to 0-times? The dual-rail representation discussed above suggests that this might be possible. It could be regarded as a special case of “decoherence free subspaces”. Error suppression and prevention techniques also include “dynamical decoupling”, which is a generalization of the refocusing technique used in liquid NMR (one of my other lessons from liquid NMR). (My third and final lesson from liquid NMR was that quantum computing is still possible, even if the signal to noise ratio is below 1%. One just repeats the quantum computation a sufficient number of times. Not scalable, but also used in the Sycamore experiment, in a certain sense.)

I am skeptical here (whether one can get it down to 0-times). For example, the refocusing technique used in liquid NMR itself has to operate at 500 MHz, and the required accuracy of its own control scales with the number of qubits. And if we try to use a dual-rail like representation outside of the optical domain, then most single qubit errors will throw us out of the comfortable decoherence free subspace. On the positive side, each single qubit error can at most add 2 to the factor, so the technique is still good for ensuring that requirements for the speed and accuracy of the classical control don’t escalate too much.

To end this, I want to come back to my initial assumption that a temperature low compared to the energy difference between the states protects them from random state flips. This is clear for transitions from the lower energy state to the higher energy state. But why should it prevent transitions from the higher energy state to the lower energy state? Because in the solid state, heat is mostly transported by phonons, and phonons are bosons. And bosons provoke stimulated emissions (or in this case stimulated relaxations), which are well known from lasers. You might object that phonons will be unable to trigger the transitions because they carry momentum while the transition (which can be triggered by microwave photons) should involve none. You are right for a single phonon, but multiple phonons can still trigger the transition. (Which may explain why this stimulated relaxation is nevertheless a surprisingly slow process.)
