## Archive for the ‘Quantum’ Category

### Back

Saturday, April 23rd, 2022

Thanks to everyone who asked whether I’m OK! Yeah, I’ve been living, loving, learning, teaching, worrying, procrastinating, just not blogging.

Last week, Takashi Yamakawa and Mark Zhandry posted a preprint to the arXiv, “Verifiable Quantum Advantage without Structure,” that represents some of the most exciting progress in quantum complexity theory in years. I wish I’d thought of it. tl;dr they show that relative to a random oracle (!), there’s an NP search problem that quantum computers can solve exponentially faster than classical ones. And yet this is 100% consistent with the Aaronson-Ambainis Conjecture!

A student brought my attention to Quantle, a variant of Wordle where you need to guess a true equation involving 1-qubit quantum states and unitary transformations. It’s really well-done! Possibly the best quantum game I’ve seen.

Last month, Microsoft announced on the web that it had achieved an experimental breakthrough in topological quantum computing: not *quite* the creation of a topological qubit, but some of the underlying physics required for that. This followed their needing to retract their previous claim of such a breakthrough, due to the criticisms of Sergey Frolov and others. One imagines that they would’ve taken far greater care this time around. Unfortunately, a research paper doesn’t seem to be available yet. Anyone with further details is welcome to chime in.

Woohoo! Maximum flow, maximum bipartite matching, matrix scaling, and isotonic regression on posets (among many others)—all algorithmic problems that I was familiar with way back in the 1990s—are now solvable in nearly-linear time, thanks to a breakthrough by Chen et al.! Many undergraduate algorithms courses will need to be updated.
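Those courses today typically teach the augmenting-path approach to bipartite matching; here is a toy sketch of it (mine, purely for illustration, and emphatically *not* the Chen et al. algorithm, which goes through interior-point methods on the underlying flow problem):

```python
# Kuhn's augmenting-path algorithm for maximum bipartite matching.
# Illustrative textbook method only -- not the Chen et al. almost-linear-time
# algorithm, which works via interior-point methods on min-cost flow.

def max_bipartite_matching(adj, n_right):
    """adj[u] lists the right-vertices adjacent to left-vertex u."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if match_right[v] == -1 or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

# Left vertices 0,1,2; right vertices 0,1: best possible matching has size 2
print(max_bipartite_matching([[0, 1], [0], [1]], 2))  # -> 2
```

This runs in O(V·E) time; the point of the breakthrough is that max flow, and with it matching, now falls to almost-linear time.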

For those interested, Steve Hsu recorded a podcast with me where I talk about quantum complexity theory.

### Why Quantum Mechanics?

Tuesday, January 25th, 2022

In the past few months, I’ve twice injured the same ankle while playing with my kids. This, perhaps combined with covid, led me to several indisputable realizations:

- I am mortal.
- Despite my self-conception as a nerdy little kid awaiting the serious people’s approval, I am now firmly middle-aged. By my age, Einstein had completed general relativity, Turing had founded CS, won WWII, *and* proposed the Turing Test, and Galois, Ramanujan, and Ramsey had been dead for years.
- Thus, whatever I wanted to accomplish in my intellectual life, I should probably get started on it *now*.

Hence today’s post. I’m feeling a strong compulsion to write an essay, or possibly even a book, surveying and critically evaluating a century of ideas about the following question:

**Q: Why should the universe have been quantum-mechanical?**

If you want, you can divide Q into two subquestions:

**Q1: Why didn’t God just make the universe classical and be done with it? What would’ve been wrong with that choice?**

**Q2: Assuming classical physics wasn’t good enough for whatever reason, why this specific alternative? Why the complex-valued amplitudes? Why unitary transformations? Why the Born rule? Why the tensor product?**

Despite its greater specificity, Q2 is ironically the question that I feel we have a better handle on. I could spend half a semester teaching theorems that admittedly don’t *answer* Q2 as satisfyingly as Einstein answered the question “why the Lorentz transformations?,” but that at least render this particular set of mathematical choices (the 2-norm, the Born rule, complex numbers, etc.) orders-of-magnitude less surprising than one might’ve thought they were *a priori*. Q1 therefore stands, to me at least, as the more mysterious of the two questions.
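To give a taste of what those theorems trade on, here’s a toy sketch (mine, purely illustrative, not a theorem!) of how the Q2 ingredients lock together: unitary transformations preserve the 2-norm, so the Born rule’s probabilities can never leak or overflow, no matter how amplitudes interfere along the way.

```python
# Toy illustration: unitaries preserve the 2-norm, so Born-rule
# probabilities always sum to 1 -- even while amplitudes interfere.

def apply(U, psi):
    # matrix-vector product: evolve the amplitude vector by the gate U
    return [sum(U[i][j] * psi[j] for j in range(len(psi))) for i in range(len(U))]

def born_probs(psi):
    return [abs(a) ** 2 for a in psi]

s = 2 ** -0.5
H = [[s, s], [s, -s]]   # Hadamard gate, a 2x2 unitary

psi = [1.0, 0.0]        # |0>
psi = apply(H, psi)     # equal superposition of |0> and |1>
assert abs(sum(born_probs(psi)) - 1) < 1e-12   # no probability leaked

psi = apply(H, psi)     # amplitudes interfere destructively; we're back at |0>
print(born_probs(psi))  # ~ [1.0, 0.0]
```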

So, I want to write something about the space of credible answers to Q, and especially Q1, that humans can currently conceive. I want to do this for my own sake as much as for others’. I want to do it because I regard Q as one of the biggest questions ever asked, for which it seems plausible to me that there’s simply an *answer* that most experts would accept as valid once they saw it, but for which no such answer is known. And also because, besides having spent 25 years working in quantum information, I have the following qualifications for the job:

- I don’t dismiss either Q1 *or* Q2 as silly; and
- crucially, I don’t think I already know the answers, and merely need better arguments to justify them. I’m genuinely uncertain and confused.

**The purpose of this post is to invite you to share your own answers to Q in the comments section.** Before I embark on my survey project, I’d better know if there are promising ideas that I’ve missed, and this blog seems like as good a place as any to crowdsource the job.

Any answer is welcome, no matter how wild or speculative, *so long as it honestly grapples with the actual nature of QM*. To illustrate, nothing along the lines of “the universe is quantum because it needs to be holistic, interconnected, full of surprises, etc. etc.” will cut it, since such answers leave utterly unexplained why the world wasn’t simply endowed with those properties directly, rather than specifically via *generalizing the rules of probability to allow interference and noncommuting observables*.

Relatedly, whatever “design goal” you propose for the laws of physics, if the goal is satisfied by QM, but satisfied *even better* by theories that provide *even more* power than QM does—for instance, superluminal signalling, or violations of Tsirelson’s bound, or the efficient solution of NP-complete problems—then your explanation is out. This is a remarkably strong constraint.
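To make the constraint vivid, here’s a toy calculation (mine) of where QM sits in the CHSH game: strictly above the local-hidden-variable bound of 2, strictly below the algebraic maximum of 4 that hypothetical no-signalling “PR boxes” would achieve.

```python
import math

# The CHSH value S = E(a,b) + E(a,b') + E(a',b) - E(a',b') separates
# classical physics (S <= 2), QM (S <= 2*sqrt(2), Tsirelson's bound),
# and stronger-than-quantum no-signalling theories (S <= 4).

def chsh(E, a, a2, b, b2):
    return E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

# Correlations for optimally chosen measurement angles on an entangled pair:
E_qm = lambda x, y: math.cos(x - y)
S_qm = chsh(E_qm, 0.0, math.pi / 2, math.pi / 4, -math.pi / 4)

print(S_qm)   # ~ 2.828..., i.e. 2*sqrt(2): Tsirelson's bound, above classical 2
```

A “design goal” met even better by a theory with S = 4 would fail the test in the paragraph above.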

Oh, needless to say, don’t try my patience with anything about the uncertainty principle being due to floating-point errors or rendering bugs, or anything else that relies on a travesty of QM lifted from a popular article or meme! 🙂

OK, maybe four more comments to enable a more productive discussion, before I shut up and turn things over to you:

- I’m aware, of course, of the radical uncertainty about what form an answer to Q should even take. Am I asking you to psychoanalyze the will of God in creating the universe? Or, what perhaps amounts to the same thing, am I asking for the design objectives of the giant computer simulation that we’re living in? (As in, “I’m 100% fine with living inside a Matrix … I just want to understand why it’s a *unitary* matrix!”) Am I instead asking for an anthropic explanation, showing why *of course* QM would be needed if you wanted life or consciousness like ours? Am I “merely” asking for simpler or more intuitive physical principles from which QM is to be derived as a consequence? Am I asking why QM is the “most elegant choice” in some space of mathematical options … even to the point where, with hindsight, a 19th-century mathematician or physicist could’ve been convinced that *of course* this must be part of Nature’s plan? Am I asking for something else entirely? **You get to decide!** **Should you take up my challenge, this is both your privilege and your terrifying burden.**
- I’m aware, of course, of the dizzying array of central physical phenomena that rely on QM for their ultimate explanation. These phenomena range from the stability of matter itself, which depends on the Pauli exclusion principle; to the nuclear fusion that powers the sun, which depends on a quantum tunneling effect; to the discrete energy levels of electrons (and hence, the combinatorial nature of chemistry), which relies on electrons being waves of probability amplitude that can only circle nuclei an integer number of times if their crests are to meet their troughs. Important as they are, though, I don’t regard any of these phenomena as satisfying answers to Q in themselves. The reason is simply that, in each case, it would seem like child’s-play to contrive some classical mechanism to produce the same effect, were that the goal. QM just seems far too grand to have been the answer to *these* questions! An exponentially larger state space for all of reality, *plus* the end of Newtonian determinism, just to overcome the technical problem that accelerating charges radiate energy in classical electrodynamics, thereby rendering atoms unstable? It reminds me of the *Simpsons* episode where Homer uses a teleportation machine to get a beer from the fridge without needing to get up off the couch.
- I’m aware of Gleason’s theorem, and of the specialness of the 1-norm and 2-norm in linear algebra, and of the arguments for complex amplitudes as opposed to reals or quaternions, and of the beautiful work of Lucien Hardy and of Chiribella et al. and others on axiomatic derivations of quantum theory. As some of you might remember, I even discussed much of this material in *Quantum Computing Since Democritus*! There’s a *huge* amount to say about these fascinating justifications for the rules of QM, and I hope to say some of it in my planned survey! For now, I’ll simply remark that every axiomatic reconstruction of QM that I’ve seen, impressive though it was, has relied on one or more axioms that struck me as *weird*, in the sense that I’d have little trouble dismissing the axioms as totally implausible and unmotivated if I hadn’t already known (from QM, of course) that they were true. The axiomatic reconstructions *do* help me somewhat with Q2, but little if at all with Q1.
- To keep the discussion focused, in this post I’d like to exclude answers along the lines of “but what if QM is merely an approximation to something else?,” to say nothing of “a century of evidence for QM was all just a massive illusion! LOCAL HIDDEN VARIABLES FOR THE WIN!!!” We can have those debates another day—God knows that, here on *Shtetl-Optimized*, we have and we will. Here I’m asking instead: imagine that, as fantastical as it sounds, QM were not only exactly true, but (along with relativity, thermodynamics, evolution, and the tastiness of chocolate) one of the profoundest truths our sorry species had ever discovered. Why should I have *expected* that truth all along? What possible reasons to expect it have I missed?

### On tardigrades, superdeterminism, and the struggle for sanity

Monday, January 10th, 2022

*(Hopefully no one has taken that title yet!)*

I waste a large fraction of my existence just reading about what’s happening in the world, or discussion and analysis thereof, in an unending scroll of paralysis and depression. On the first anniversary of the January 6 attack, I read the recent revelations about just how close the seditionists actually came to overturning the election outcome (e.g., by pressuring just one Republican state legislature to “decertify” its electors, after which the others would likely follow in a domino effect), and how hard it now is to see a path by which democracy in the United States will survive beyond 2024. Or I read about Joe Manchin, who’s already entered the annals of history as the man who could’ve halted the slide to the abyss and decided not to. Of course, I also read about the wokeists, who correctly see the swing of civilization getting pushed terrifyingly far out of equilibrium to the right, so their solution is to push the swing terrifyingly far out of equilibrium to the *left*, and then they act shocked when their own action, having added all this potential energy to the swing, causes it to swing back *even further* to the right, as swings tend to do. (And *also* there’s a global pandemic killing millions, and the correct response to it—to authorize and distribute new vaccines as quickly as the virus mutates—is completely outside the Overton Window between Obey the Experts and Disobey the Experts, advocated by no one but a few nerds. When I first wrote this post, I forgot all about the global pandemic.) And I see all this and I am powerless to stop it.

In such a dark time, it’s easy to forget that I’m a theoretical computer scientist, mainly focused on quantum computing. It’s easy to forget that people come to this blog because they want to read about quantum computing. It’s like, who gives a crap about that anymore? What doth it profit a man, if he gaineth a few thousand fault-tolerant qubits with which to calculateth chemical reaction rates or discrete logarithms, and he loseth civilization?

Nevertheless, in the rest of this post I’m going to share some quantum-related debunking updates—not because that’s what’s at the top of my mind, but in an attempt to find my way back to sanity. Picture that: quantum mechanics (and specifically, the refutation of outlandish claims related to quantum mechanics) as the part of one’s life that’s comforting, normal, and sane.

There’s been lots of online debate about the claim to have entangled a tardigrade (i.e., water bear) with a superconducting qubit; see also this paper by Vlatko Vedral, this from CNET, this from Ben Brubaker on Twitter. So, do we now have Schrödinger’s Tardigrade: a living, “macroscopic” organism maintained coherently in a quantum superposition of two states? How could such a thing be possible with the technology of the early 21st century? Hasn’t it been a huge challenge to demonstrate even Schrödinger’s Virus or Schrödinger’s Bacterium? So then how did this experiment leapfrog (or leaptardigrade) over those vastly easier goals?

Short answer: it didn’t. The experimenters couldn’t directly measure the degree of freedom in the tardigrade that’s claimed to be entangled with the qubit. But it’s consistent with everything they report that whatever entanglement is there, it’s between the superconducting qubit and a microscopic *part* of the tardigrade. It’s *also* consistent with everything they report that there’s no entanglement at all between the qubit and *any* part of the tardigrade, just boring classical correlation. (Or rather that, if there’s “entanglement,” then it’s the Everett kind, involving not merely the qubit and the tardigrade but the whole environment—the same as we’d get by just measuring the qubit!) Further work would be needed to distinguish these possibilities. In any case, it’s of course cool that they were able to cool a tardigrade to near absolute zero and then revive it afterwards.
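For anyone who wants the distinction in formulas, here’s a toy sketch (mine, not the experiment’s actual model): a Bell state and a classical 50/50 mixture give identical statistics in the computational basis, and differ only in an off-diagonal coherence that the experiment couldn’t directly probe.

```python
# A Bell state (|00> + |11>)/sqrt(2) versus a classical 50/50 mixture of
# |00> and |11>: same diagonal (measurement) statistics, but only the Bell
# state has the off-diagonal coherence <00|rho|11> that marks entanglement.

def outer(psi):
    # |psi><psi| as a nested list (real amplitudes, so no conjugation needed)
    return [[a * b for b in psi] for a in psi]

bell = [2 ** -0.5, 0.0, 0.0, 2 ** -0.5]
rho_bell = outer(bell)

rho_mix = [[0.0] * 4 for _ in range(4)]
rho_mix[0][0] = rho_mix[3][3] = 0.5   # classical correlation, no coherence

# Both states: P(00) = P(11) = 1/2, perfectly correlated outcomes
print(rho_bell[0][0], rho_bell[3][3])   # ~ 0.5 0.5
print(rho_mix[0][0], rho_mix[3][3])     # 0.5 0.5

# Only the Bell state has coherence between |00> and |11>:
print(rho_bell[0][3])   # ~ 0.5: genuine entanglement
print(rho_mix[0][3])    # 0.0:   mere classical correlation
```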

I thank the authors of the tardigrade paper, who clarified a few of these points in correspondence with me. Obviously the comments section is open for whatever I’ve misunderstood.

People also asked me to respond to Sabine Hossenfelder’s recent video about superdeterminism, a theory that holds that quantum entanglement doesn’t actually exist, but the universe’s initial conditions were fine-tuned to stop us from choosing to measure qubits in ways that would make its nonexistence apparent: even when we *think* we’re applying the right measurements, we’re not, because the initial conditions messed with our brains or our computers’ random number generators. (See, I tried to be as non-prejudicial as possible in that summary, and it still came out sounding like a parody. Sorry!)

Sabine sets up the usual dichotomy that people argue against superdeterminism only because they’re attached to a belief in free will. She rejects Bell’s statistical independence assumption, which she sees as a mere dogma rather than a prerequisite for doing science. Toward the end of the video, Sabine mentions the objection that, without statistical independence, a demon could destroy any randomized controlled trial, by tampering with the random number generator that decides who’s in the control group and who isn’t. But she then reassures the viewer that it’s no problem: superdeterministic conspiracies will only appear when quantum mechanics would’ve predicted a Bell inequality violation or the like. Crucially, she never explains the mechanism by which superdeterminism, once allowed into the universe (*including* into macroscopic devices like computers and random number generators), will stay confined to reproducing the specific predictions that quantum mechanics already told us were true, rather than enabling ESP or telepathy or other mischief. This is *stipulated*, never explained or derived.

To say I’m not a fan of superdeterminism would be a super-understatement. And yet, nothing I’ve written previously on this blog—about superdeterminism’s gobsmacking lack of explanatory power, or about how trivial it would be to cook up a superdeterministic “mechanism” for, e.g., faster-than-light signaling—none of it seems to have made a dent. It’s all come across as obvious to the majority of physicists and computer scientists who think as I do, and it’s all fallen on deaf ears to superdeterminism’s fans.

So in desperation, let me now try another tack: going meta. It strikes me that no one who saw quantum mechanics as a profound clue about the nature of reality could ever, in a trillion years, think that superdeterminism looked like a promising route forward given our current knowledge. The only way you could think that, it seems to me, is if you saw quantum mechanics as an **anti-clue**: a red herring, actively misleading us about how the world really is. To be a superdeterminist is to say:

> OK, fine, there’s the Bell experiment, which *looks like* Nature screaming the reality of ‘genuine indeterminism, as predicted by QM,’ louder than you might’ve thought it even *logically possible* for that to be screamed. But don’t listen to Nature, listen to us! If you just drop what you thought were foundational assumptions of science, we can explain this away! Not *explain* it, of course, but explain it *away*. What more could you ask from us?

Here’s my challenge to the superdeterminists: when, in 400 years from Galileo to the present, has such a gambit *ever* worked? Maxwell’s equations were a clue to special relativity. The Hamiltonian and Lagrangian formulations of classical mechanics were clues to quantum mechanics. When has a great theory in physics ever been *grudgingly accommodated* by its successor theory in a horrifyingly ad-hoc way, rather than gloriously explained and derived?

**Update:** Oh right, and the QIP’2022 list of accepted talks is out! And I was on the program committee! And ~~they’re still planning to hold QIP in person, in March at Caltech, will you fancy that!~~ actually I have no idea—but if they’re going to move to virtual, I’m awaiting an announcement just like everyone else.

### Two new talks and an interview

Thursday, December 2nd, 2021

- A talk to UT Austin’s undergraduate math club (handwritten PDF notes) about Hao Huang’s proof of the Sensitivity Conjecture, and its implications for quantum query complexity and more. I’m *still* not satisfied that I’ve presented Huang’s beautiful proof as clearly and self-containedly as I possibly can, which probably just means I need to lecture on it a few more times.
- A Zoom talk at the QPQIS conference in Beijing (PowerPoint slides), setting out my most recent thoughts about Google’s and USTC’s quantum supremacy experiments and the continuing efforts to spoof them classically.
- An interview with me in *Communications of the ACM*, mostly about BosonSampling and the quantum lower bound for the collision problem.

Enjoy y’all!

### The Acrobatics of BQP

Friday, November 19th, 2021

Just in case anyone is depressed this afternoon and needs something to cheer them up, my students William Kretschmer and DeVon Ingram and I have finally put out a new paper:

> **Abstract:** We show that, in the black-box setting, the behavior of quantum polynomial-time (BQP) can be remarkably decoupled from that of classical complexity classes like NP. Specifically:
>
> - There exists an oracle relative to which NP^BQP ⊄ BQP^PH, resolving a 2005 problem of Fortnow. Interpreted another way, we show that AC^0 circuits cannot perform useful homomorphic encryption on instances of the Forrelation problem. As a corollary, there exists an oracle relative to which P = NP but BQP ≠ QCMA.
> - Conversely, there exists an oracle relative to which BQP^NP ⊄ PH^BQP.
> - Relative to a random oracle, PP = PostBQP is not contained in the “QMA hierarchy” QMA^QMA^QMA^…, and more generally PP ⊄ (MIP*)^(MIP*)^(MIP*)^… (!), despite the fact that MIP* = RE in the unrelativized world. This result shows that there is no black-box quantum analogue of Stockmeyer’s approximate counting algorithm.
> - Relative to a random oracle, Σ_{k+1} ⊄ BQP^{Σ_k} for every k.
> - There exists an oracle relative to which BQP = P^#P and yet PH is infinite. (By contrast, if NP ⊆ BPP, then PH collapses relative to all oracles.)
> - There exists an oracle relative to which P = NP ≠ BQP = P^#P.
>
> To achieve these results, we build on the 2018 achievement by Raz and Tal of an oracle relative to which BQP ⊄ PH, and associated results about the Forrelation problem. We also introduce new tools that might be of independent interest. These include a “quantum-aware” version of the random restriction method, a concentration theorem for the block sensitivity of AC^0 circuits, and a (provable) analogue of the Aaronson-Ambainis Conjecture for sparse oracles.
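For readers meeting Forrelation for the first time: it’s the quantity, from earlier work of mine and Ambainis, measuring how correlated one Boolean function is with the Fourier transform of another. A brute-force sketch for tiny n (illustration only; the interesting regime is exponentially large):

```python
# Forrelation of two Boolean functions f, g : {0,1}^n -> {+1,-1},
# given as lists of length 2^n:
#   Phi(f, g) = 2^(-3n/2) * sum_{x,y} f(x) * (-1)^(x.y) * g(y)

def forrelation(f, g, n):
    N = 2 ** n
    return sum(f[x] * (-1) ** bin(x & y).count("1") * g[y]
               for x in range(N) for y in range(N)) / N ** 1.5

# For constant f, the Fourier transform is a spike at y = 0,
# so Phi reduces to g[0] / sqrt(N):
print(forrelation([1, 1, 1, 1], [1, -1, 1, -1], 2))   # 0.5: g agrees with the spike
print(forrelation([1, 1, 1, 1], [-1, 1, 1, 1], 2))    # -0.5: g disagrees
```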

Incidentally, particularly when I’ve worked on a project with students, I’m often tremendously excited and want to shout about it from the rooftops for the students’ sake … but then I also don’t want to use this blog to privilege my own papers “unfairly.” Can anyone suggest a principle that I should follow going forward?

### Scott Aaronson, when reached for comment, said…

Tuesday, November 16th, 2021

**About IBM’s new 127-qubit superconducting chip:** As I told *New Scientist*, I look forward to seeing the actual details! As far as I could see, the marketing materials that IBM released yesterday take a lot of words to say absolutely nothing about what, to experts, is the single most important piece of information: namely, *what are the gate fidelities?* How deep of a quantum circuit can they apply? How have they benchmarked the chip? Right now, all I have to go on is a stats page for the new chip, ~~which reports its average CNOT error as 0.9388—in other words, close to 1, or terrible! (But see also a tweet by James Wootton, which explains that such numbers are often highly misleading when a new chip is first rolled out.) Does anyone here have more information?~~ **Update (11/17):** As of this morning, the average CNOT error has been updated to 2%. Thanks to multiple commenters for letting me know!
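For context on why I harp on gate fidelities: a crude back-of-envelope estimate (mine, not IBM’s error model) is that a circuit with m two-qubit gates, each with error eps, retains overall fidelity of only about (1 - eps)^m.

```python
# Crude independent-error model: each gate succeeds with probability 1 - eps,
# so a circuit with m such gates keeps fidelity roughly (1 - eps)^m.

def circuit_fidelity(eps, m):
    return (1 - eps) ** m

print(circuit_fidelity(0.02, 100))    # ~0.13 after just 100 CNOTs at 2% error
print(circuit_fidelity(0.9388, 2))    # ~0.004: the originally-posted error rate is hopeless
```

This is why a CNOT error near 1 versus near 0.02 is the difference between no quantum computation at all and a usable (if shallow) circuit.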

**About the new simulation of Google’s 53-qubit Sycamore chip in 5 minutes on a Sunway supercomputer (see also here):** This is an exciting step forward on the classical validation of quantum supremacy experiments, and—ironically, what currently amounts to almost the same thing—on the classical *spoofing* of those experiments. Congratulations to the team in China that achieved this! But there are two crucial things to understand. First, “5 minutes” refers to the time needed to calculate a *single* amplitude (or perhaps, several correlated amplitudes) using tensor network contraction. It doesn’t refer to the time needed to generate millions of *independent* noisy samples, which is what Google’s Sycamore chip does in 3 minutes. For the latter task, more like a week still seems to be needed on the supercomputer. (I’m grateful to Chu Guo, a coauthor of the new work who spoke in UT Austin’s weekly quantum Zoom meeting, for clarifying this point.) Second, the Sunway supercomputer has parallel processing power equivalent to approximately ten million laptops. Thus, even if we agreed that Google no longer had quantum supremacy as measured by time, it would still have quantum supremacy as measured by carbon footprint! (And this despite the fact that the quantum computer itself requires a noisy, closet-sized dilution fridge.) Even so, for me the new work underscores the point that quantum supremacy is not yet a done deal. Over the next few years, I hope that Google and USTC, as well as any new entrants to this race (IBM? IonQ? Harvard? Rigetti?), will push forward with more qubits and, even more importantly, better gate fidelities leading to higher Linear Cross-Entropy scores. Meanwhile, we theorists should try to do our part by inventing new and better protocols with which to demonstrate near-term quantum supremacy—*especially* protocols for which the classical verification is easier.
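For those wondering what the Linear Cross-Entropy score actually computes, here’s a sketch with toy numbers of my own (a 1-qubit “ideal distribution,” nothing like Sycamore’s 53 qubits):

```python
# Linear Cross-Entropy Benchmark (XEB):
#   F_XEB = 2^n * (mean ideal probability of the returned samples) - 1.
# Sampling from the ideal (lumpy) distribution scores > 0;
# uniformly random bitstrings score 0.

def linear_xeb(samples, p_ideal, n):
    return 2 ** n * sum(p_ideal[s] for s in samples) / len(samples) - 1

p_ideal = [0.75, 0.25]                       # toy 1-qubit "ideal" distribution
print(linear_xeb([0, 0, 0, 1], p_ideal, 1))  # 0.25: samples in ideal proportions
print(linear_xeb([0, 1], p_ideal, 1))        # 0.0:  uniformly random samples
```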

**About the new anti-woke University of Austin (UATX):** In general, I’m extremely happy for people to experiment with new and different institutions, and of course I’m happy for more intellectual activity in my adopted city of Austin. And, as *Shtetl-Optimized* readers will know, I’m probably more sympathetic than most to the reality of the problem that UATX is trying to solve—living, as we do, in an era when one academic after another has been cancelled for ideas that a mere decade ago would’ve been considered unexceptional, moderate, center-left. Having said all that, I wish I could feel more optimistic about UATX’s prospects. I found its website heavy on free-speech rhetoric but frustratingly light on what the new university is actually going to *do*: what courses it will offer, who will teach them, where the campus will be, etc. etc. Arguably this is all excusable for a university still in ramp-up mode, but had I been in their shoes, I might have held off on the public launch until I had at least some sample content to offer. Certainly, the fact that Steven Pinker has quit UATX’s advisory board is a discouraging sign. If UATX asks me to get involved—to lecture there, to give them advice about their CS program, etc.—I’ll consider it as I would any other request. So far, though, they haven’t.

**About the Association for Mathematical Research:** Last month, some colleagues invited me to join a brand-new society called the Association for Mathematical Research. Many of the other founders (Joel Hass, Abigail Thompson, Colin Adams, Richard Borcherds, Jeff Cheeger, Pavel Etingof, Tom Hales, Jeff Lagarias, Mark Lackenby, Cliff Taubes, …) were brilliant mathematicians who I admired, they seemed like they could use a bit of theoretical computer science representation, there was no time commitment, maybe they’d eventually do something good, so I figured why not? Alas, to say that AMR has proved unpopular on Twitter would be an understatement: it’s received the same contemptuous reception that UATX has. The argument seems to be: starting a new mathematical society, even an avowedly diverse and apolitical one, is really just an implicit claim that the existing societies, like the Mathematical Association of America (MAA) and the American Mathematical Society (AMS), have been co-opted by woke true-believers. But that’s paranoid and insane! I mean, it’s not as if an AMS blog has called for the mass resignation of white male mathematicians to make room for the marginalized, or the boycott of Israeli universities, or the abolition of the criminal justice system *(what to do about Kyle Rittenhouse though?)*. Still, even though claims of that sort of co-option are obviously far-out, rabid fantasies, yeah, I did decide to give a new organization the benefit of the doubt. AMR might well fail or languish in obscurity, just like UATX might. On the other hand, the barriers to making a positive difference for the intellectual world, the world I love, the world under constant threat from the self-certain ideologues of every side, do strike me as orders of magnitude smaller for a new professional society than they do for a new university.

### Q2B 2021

Monday, November 1st, 2021

This is a quick post to let people know that the 2021 Q2B (Quantum 2 Business) conference will be this December 7-9 at the Santa Clara Convention Center. (Full disclosure: Q2B is hosted by QC Ware, Inc., to which I’m the scientific adviser.) Barring a dramatic rise in cases or the like, I’m planning to attend to do my Ask-Me-Anything session, in what’s become an annual tradition. Notably, this will be my first in-person conference, and in fact my first professional travel of any kind, since before covid shut down the US in late March 2020. I hope to see many of you there! And if you *won’t* be at Q2B, but you’ll be in the Bay Area and would like to meet otherwise, let me know and we’ll try to work something out.

### Gaussian BosonSampling, higher-order correlations, and spoofing: An update

Sunday, October 10th, 2021

In my last post, I wrote (among other things) about an ongoing scientific debate between the group of Chaoyang Lu at USTC in China, which over the past year has been doing experiments that seek to demonstrate quantum supremacy via Gaussian BosonSampling; and the group of Sergio Boixo at Google, which had a recent paper on a polynomial-time classical algorithm to sample approximately from the same distributions. I reported the facts as I understood them at the time. Since then, though, a long call with the Google team gave me a new and different understanding, and I feel duty-bound to share that here.

A week ago, I considered it obvious that if, using a classical spoofer, you could beat the USTC experiment on a metric like total variation distance from the ideal distribution, then you would’ve completely destroyed USTC’s claim of quantum supremacy. The reason I believed *that*, in turn, is a proposition that I hadn’t given a name but that needs one, so let me call it **Hypothesis H**:

> The only way a classical algorithm to spoof BosonSampling can possibly do well in total variation distance is by correctly reproducing the high-order correlations (correlations among the occupation numbers of large numbers of modes), because that’s where the complexity of BosonSampling lies (if it lies anywhere).

Hypothesis H had important downstream consequences. Google’s algorithm, by the Google team’s own admission, does not reproduce the high-order correlations. Furthermore, because of limitations on both samples and classical computation time, Google’s paper calculates the total variation distance from the ideal distribution only on the marginal distribution on roughly 14 out of 144 modes. On that marginal distribution, Google’s algorithm does do better than the experiment in total variation distance. Google presents a claimed extrapolation to the full 144 modes, but eyeballing the graphs, it was far from clear to me what would happen: like, maybe the spoofing algorithm would continue to win, but maybe the experiment would turn around and win; who knows?

Chaoyang, meanwhile, made a clear prediction that the experiment would turn around and win, because of

- the experiment’s success in reproducing the high-order correlations,
- the admitted failure of Google’s algorithm in reproducing the high-order correlations, and
- the seeming impossibility of doing well on BosonSampling *without* reproducing the high-order correlations (Hypothesis H).

Given everything my experience told me about the central importance of high-order correlations for BosonSampling, I was inclined to agree with Chaoyang.

Now for the kicker: it seems that Hypothesis H is false. A classical spoofer could beat a BosonSampling experiment on total variation distance from the ideal distribution, without even bothering to reproduce the high-order correlations correctly.

This is true because of a combination of two facts about the existing noisy BosonSampling experiments. The first fact is that the contribution from the order-k correlations falls off like 1/exp(k). The second fact is that, due to calibration errors and the like, the experiments already show significant deviations from the ideal distribution on the order-1 and order-2 correlations.

Put these facts together and what do you find? Well, suppose your classical spoofing algorithm takes care to get the low-order contributions to the distribution exactly right. Just for that reason alone, it could already win over a noisy BosonSampling experiment, as judged by benchmarks like total variation distance from the ideal distribution, or for that matter linear cross-entropy. Yes, the experiment will beat the classical simulation on the higher-order correlations. But because those higher-order correlations are exponentially attenuated anyway, they won’t be enough to make up the difference. The experiment’s lack of perfection on the low-order correlations will swamp everything else.
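Here’s a toy numerical model (mine alone; the real papers’ distributions are far subtler) of why nailing only the low-order correlations can be enough to win:

```python
import math

# Toy model of why Hypothesis H fails: weight the order-k correlations by
# exp(-k), then compare a noisy experiment (calibration errors at low orders,
# perfect high orders) against a spoofer that nails only orders 1 and 2.

def score(acc, max_k=30):
    # acc(k) = how accurately the order-k correlations are reproduced, in [0, 1]
    return sum(math.exp(-k) * acc(k) for k in range(1, max_k + 1))

experiment = score(lambda k: 0.7 if k <= 2 else 1.0)  # errors hit the low orders
spoofer    = score(lambda k: 1.0 if k <= 2 else 0.0)  # ignores the high orders

print(experiment < spoofer)  # True: the spoofer wins on the aggregate benchmark
```

The exponential attenuation means almost all the weight sits at orders 1 and 2, so the experiment’s imperfection there outweighs its victory everywhere else.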

Granted, I still don't know for sure that this *is* what happens — that depends on whether I believe Sergio or Chaoyang about the extrapolation of the variation distance to the full 144 modes (my own eyeballs having failed to render a verdict!). But I now see that it's logically possible, maybe even plausible.

So, let's imagine for the sake of argument that Google's simulation wins on variation distance, even though the experiment wins on the high-order correlations. In that case, what would be our verdict: would USTC have achieved quantum supremacy via BosonSampling, or not?

It's clear what each side could say.

Google could say: by a metric that Scott Aaronson, the coinventor of BosonSampling, thought was perfectly adequate as late as last week — namely, total variation distance from the ideal distribution — we won. We achieved lower variation distance than USTC's experiment, and we did it using a fast classical algorithm. End of discussion. No moving the goalposts after the fact.

Google could even add: BosonSampling is a *sampling* task; it's right there in the name! The only purpose of any benchmark — whether Linear XEB or high-order correlation — is to give evidence about whether you are or aren't sampling from a distribution close to the ideal one. But that means that, if you accept that we *are* doing the latter better than the experiment, then there's nothing more to argue about.

USTC could respond: even if Scott Aaronson *is* the coinventor of BosonSampling, he's extremely far from an infallible oracle. In the case at hand, his lack of appreciation for the sources of error in realistic experiments caused him to fixate inappropriately on variation distance as the success criterion. If you want to see the quantum advantage in our system, you have to deliberately subtract off the low-order correlations and look at the high-order correlations.

USTC could add: from the very beginning, the whole point of quantum supremacy experiments was to demonstrate a clear speedup on *some* benchmark — we never particularly cared which one! That horse is out of the barn as soon as we're talking about quantum supremacy at all — something that the Google group, which itself reported the first quantum supremacy experiment in Fall 2019, again for a completely artificial benchmark, knows as well as anyone else. (The Google team even has experience with adjusting benchmarks: when, for example, Pan and Zhang pointed out that Linear XEB as originally specified is pretty easy to spoof for random 2D circuits, the most cogent rejoinder was: OK, fine then, add an extra check that the returned samples are sufficiently different from one another, which kills Pan and Zhang's spoofing strategy.) In that case, then, why isn't a benchmark tailored to the high-order correlations as good as variation distance or linear cross-entropy or any other benchmark?

Both positions are reasonable and have merit — though I confess to somewhat greater sympathy for the one that appeals to my doofosity rather than my supposed infallibility!

OK, but suppose, again for the sake of argument, that we accepted the second position, and we said that USTC gets to declare quantum supremacy as long as its experiment does better than any known classical simulation at reproducing the high-order correlations. We'd still face the question: does the USTC experiment, in fact, do better on that metric? It would be awkward if, having won the right to change the rules in its favor, USTC still lost even under the new rules.

Sergio tells me that USTC directly reported experimental data only for up to order-7 correlations, and at least individually, the order-7 correlations are easy to reproduce on a laptop (although *sampling* in a way that reproduces the order-7 correlations might still be hard — a point that Chaoyang confirms, and where further research would be great). OK, but USTC also reported that their experiment seems to reproduce up to order-19 correlations. And order-19 correlations, the Google team agrees, are hard for any currently known classical algorithm to sample consistently with.

So then, why don't we have direct data for the order-19 correlations? The trouble is simply that it would've taken USTC an astronomical amount of computation time. So instead, they relied on a statistical extrapolation from the observed strength of the lower-order correlations — there we go again with the extrapolations! Of course, if we're going to let Google rest its case on an extrapolation, then maybe it's only sporting to let USTC do the same.

You might wonder: why didn't we have to worry about any of this stuff with the *other* path to quantum supremacy, the one via random circuit sampling with superconducting qubits? The reason is that, with random circuit sampling, all the correlations except the highest-order ones are completely trivial — or, to say it another way, the reduced state of any small number of output qubits is exponentially close to the maximally mixed state. This is a real difference between BosonSampling and random circuit sampling — and even 5-6 years ago, we knew that this represented an advantage for random circuit sampling, although I now have a deeper appreciation for just how great of an advantage it is. For it means that, with random circuit sampling, it's easier to place a "sword in the stone": to say, for example, *here* is the Linear XEB score achieved by the trivial classical algorithm that outputs random bits, and lo, our experiment achieves a higher score, and lo, we challenge anyone to invent a fast classical spoofing method that achieves a similarly high score.
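For concreteness, here's a small self-contained simulation of that "sword in the stone" (a toy model of mine, with a Porter-Thomas-like distribution standing in for a real random circuit; the sizes and variable names are made up). Under the convention Linear XEB = N·⟨p(x)⟩ − 1, the trivial algorithm that outputs uniformly random strings scores near 0, while sampling from the ideal distribution scores near 1, so a claimed advantage has a clean baseline to beat.

```python
import random

random.seed(0)
N = 2 ** 12   # toy "Hilbert space" of 4096 outcomes

# Porter-Thomas-like ideal distribution: exponentially distributed weights,
# normalized, roughly as random quantum circuits produce.
raw = [random.expovariate(1.0) for _ in range(N)]
total = sum(raw)
p = [x / total for x in raw]

def linear_xeb(samples):
    """Linear cross-entropy benchmark: N times the mean ideal probability of the samples, minus 1."""
    return N * sum(p[x] for x in samples) / len(samples) - 1

k = 20_000
uniform_samples = [random.randrange(N) for _ in range(k)]  # trivial classical "sampler"
ideal_samples = random.choices(range(N), weights=p, k=k)   # sampler that nails the ideal distribution

uniform_xeb = linear_xeb(uniform_samples)
ideal_xeb = linear_xeb(ideal_samples)
print(f"uniform sampler XEB: {uniform_xeb:+.3f}")  # should land near 0
print(f"ideal sampler XEB:   {ideal_xeb:+.3f}")    # should land near 1
```

The gap between those two scores is the "stone": a spoofer has to close it with a fast classical algorithm, and there is no low-order structure to exploit, since every small marginal is essentially maximally mixed.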

With BosonSampling, by contrast, we have various metrics with which to judge performance, but so far, for none of those metrics do we have a plausible hypothesis that says "*here's* the best that any polynomial-time classical algorithm can possibly hope to do, and it's completely plausible that even a noisy current or planned BosonSampling experiment can do better than that."

In the end, then, I come back to the exact same three goals I would've recommended a week ago for the future of quantum supremacy experiments, but with all of them now even more acutely important than before:

- Experimentally, to increase the fidelity of the devices (with BosonSampling, for example, to observe a larger contribution from the high-order correlations) — a much more urgent goal, from the standpoint of evading classical spoofing algorithms, than further increasing the dimensionality of the Hilbert space.
- Theoretically, to design better ways to verify the results of sampling-based quantum supremacy experiments classically — ideally, even ways that could be applied via polynomial-time tests.
- For Gaussian BosonSampling in particular, to get a better understanding of the plausible limits of classical spoofing algorithms, and exactly how good a noisy device needs to be before it exceeds those limits.

Thanks so much to Sergio Boixo and Ben Villalonga for the conversation, and to Chaoyang Lu and Jelmer Renema for comments on this post. Needless to say, any remaining errors are my own.

### The Physics Nobel, Gaussian BosonSampling, and Dorian Abbot

Tuesday, October 5th, 2021

1. Huge congratulations to the winners of this year's Nobel Prize in Physics: Syukuro Manabe and Klaus Hasselmann for climate modelling, and separately, Giorgio Parisi for statistical physics. While I don't know the others, I had the great honor to get to know Parisi three years ago, when he was chair of the committee that awarded me the Tomassoni-Chisesi Prize in Physics, and when I visited Parisi's department at Sapienza University of Rome to give the prize lecture and collect the award. I remember Parisi's kindness, a lot of good food, and a lot of discussion of the interplay between theoretical computer science and physics. Note that, while much of Parisi's work is beyond my competence to comment on, in computer science he's very well-known for applying statistical physics methods to the analysis of survey propagation—an algorithm that revolutionized the study of random 3SAT when it was introduced two decades ago.

2. Two weeks ago, a group at Google put out a paper with a new efficient classical algorithm to simulate the recent Gaussian BosonSampling experiments from USTC in China. They argued that this algorithm called into question USTC’s claim of BosonSampling-based quantum supremacy. Since then, I’ve been in contact with Sergio Boixo from Google, Chaoyang Lu from USTC, and Jelmer Renema, a Dutch BosonSampling expert and friend of the blog, to try to get to the bottom of this. Very briefly, the situation seems to be that Google’s new algorithm outperforms the USTC experiment on one particular metric: namely, *total variation distance from the ideal marginal distribution, if (crucially) you look at only a subset of the optical modes*, *say 14 modes out of 144 total*. Meanwhile, though, if you look at the k^{th}-order correlations for large values of k, then the USTC experiment continues to win. With the experiment, the correlations fall off exponentially with k but still have a meaningful, detectable signal even for (say) k=19, whereas with Google’s spoofing algorithm, you choose the k that you want to spoof (say, 2 or 3), and then the correlations become nonsense for larger k.

Now, given that you were only ever *supposed* to see a quantum advantage from BosonSampling if you looked at the k^{th}-order correlations for large values of k, and given that we already knew, from the work of Leonid Gurvits, that *very* small marginals in BosonSampling experiments would be easy to reproduce on a classical computer, my inclination is to say that USTC’s claim of BosonSampling-based quantum supremacy still stands. On the other hand, it’s true that, with BosonSampling especially, more so than with qubit-based random circuit sampling, we currently lack an adequate theoretical understanding of what the *target* should be. That is, which numerical metric should an experiment aim to maximize, and how well does it have to score on that metric before it’s plausibly outperforming any fast classical algorithm? One thing I feel confident about is that, whichever metric is chosen—Linear Cross-Entropy or whatever else—it needs to capture the k^{th}-order correlations for large values of k. No metric that’s insensitive to those correlations is good enough.

3. Like many others, I was outraged and depressed that MIT uninvited Dorian Abbot (see also here), a geophysicist at the University of Chicago, who was slated to give the Carlson Lecture in the Department of Earth, Atmospheric, and Planetary Sciences about the atmospheres of extrasolar planets. The reason for the cancellation was that, totally unrelatedly to his scheduled lecture, Abbot had argued in *Newsweek* and elsewhere that Diversity, Equity, and Inclusion initiatives should aim for equality of opportunity rather than equality of outcomes; a Twitter mob decided to go after him in retaliation, and they succeeded. It should go without saying that it's perfectly reasonable to **disagree** with Abbot's stance, to **counterargue**—if those very concepts haven't gone the way of floppy disks. It should also go without saying that the MIT EAPS department chair is *free* to bow to social-media pressure, as he did, rather than standing on principle … just like I'm *free* to criticize him for it. To my mind, though, cancelling a scientific talk because of the speaker's centrist (!) political views completely, 100% validates the right's narrative about academia, that it's become a fanatically intolerant echo chamber. To my fellow progressive academics, I beseech thee in the bowels of Bertrand Russell: *why would you commit such an unforced error?*

Yes, one can *imagine* views (e.g., open Nazism) so hateful that they might justify the cancellation of unrelated scientific lectures by people who hold those views, as many physicists after WWII refused to speak to Werner Heisenberg. But it seems obvious to me—as it would’ve been obvious to everyone else not long ago—that no matter where a reasonable person draws the line, Abbot’s views as he expressed them in *Newsweek* don’t come within a hundred miles of it. To be more explicit still: if Abbot’s views justify deplatforming him as a planetary scientist, then **all my quantum computing and theoretical computer science lectures deserve to be cancelled too**, for the many attempts I’ve made on this blog over the past 16 years to share my honest thoughts and life experiences, to write like a vulnerable human being rather than like a university press office. While I’m sure some sneerers gleefully embrace that implication, I ask everyone else to consider how deeply they believe in the idea of academic freedom at all—keeping in mind that such a commitment *only ever gets tested* when there’s a chance someone might denounce you for it.

**Update:** Princeton’s James Madison Program has volunteered to host Abbot’s Zoom talk in place of MIT. The talk is entitled “Climate and the Potential for Life on Other Planets.” Like probably hundreds of others who heard about this only because of the attempted cancellation, I plan to attend!

**Unrelated Bonus Update:** Here’s a neat YouTube video put together by the ACM about me as well as David Silver of AlphaGo and AlphaZero, on the occasion of our ACM Prizes in Computing.