Archive for the ‘Quantum’ Category

Linkz!

Saturday, July 9th, 2022

(1) Fellow CS theory blogger (and, 20 years ago, member of my PhD thesis committee) Luca Trevisan interviews me about Shtetl-Optimized, for the Bulletin of the European Association for Theoretical Computer Science. Questions include: what motivates me to blog, who my main inspirations are, my favorite posts, whether blogging has influenced my actual research, and my thoughts on the role of public intellectuals in the age of social-media outrage.

(2) Anurag Anshu, Nikolas Breuckmann, and Chinmay Nirkhe have apparently proved the NLTS (No Low-Energy Trivial States) Conjecture! This is considered a major step toward a proof of the famous Quantum PCP Conjecture, which—speaking of one of Luca Trevisan’s questions—was first publicly raised right here on Shtetl-Optimized back in 2006.

(3) The Microsoft team has finally released its promised paper about the detection of Majorana zero modes (“this time for real”), a major step along the way to creating topological qubits. See also this live YouTube peer review—is that a thing now?—by Vincent Mourik and Sergey Frolov, the latter having been instrumental in the retraction of Microsoft’s previous claim along these lines. I’ll leave further discussion to people who actually understand the experiments.

(4) I’m looking forward to the 2022 Conference on Computational Complexity less than two weeks from now, in my … safe? clean? beautiful? awe-inspiring? … birth-city of Philadelphia. There I’ll listen to a great lineup of talks, including one by my PhD student William Kretschmer on his joint work with me and DeVon Ingram on The Acrobatics of BQP, and co-receive the CCC Best Paper Award (wow! thanks!) for that work. I look forward to meeting some old and new Shtetl-Optimized readers there.

Einstein-Bohr debate settled once and for all

Friday, July 8th, 2022

In Steven Pinker’s guest post from last week, there’s one bit to which I never replied. Steve wrote:

After all, in many areas Einstein was no Einstein. You [Scott] above all could speak of his not-so-superintelligence in quantum physics…

While I can’t speak “above all,” OK, I can speak. Now that we’re closing in on a century of quantum physics, can we finally adjudicate what Einstein and Bohr were right or wrong about in the 1920s and 1930s? (Also, how is it still even a thing people argue about?)

The core is this: when confronted with the phenomena of entanglement—including the ability to measure one qubit of an EPR pair and thereby collapse the other in a basis of one’s choice (as we’d put it today), as well as the possibility of a whole pile of gunpowder in a coherent superposition of exploding and not exploding (Einstein’s example in a letter to Schrödinger, which the latter then infamously transformed into a cat)—well, there are entire conferences and edited volumes about what Bohr and Einstein said, didn’t say, meant to say or tried to say about these matters, but in cartoon form:

  • Einstein said that quantum mechanics can’t be the final answer, it has ludicrous implications for reality if you actually take it seriously, the resolution must be that it’s just a statistical approximation to something deeper, and at any rate there’s clearly more to be said.
  • Bohr (translated from Ponderousness to English) said that quantum mechanics sure looks like a final answer and not an approximation to anything deeper, there’s not much more to be said, we don’t even know what the implications are for “reality” (if any) so we shouldn’t hyperventilate about it, and mostly we need to change the way we use words and think about our own role as observers.

A century later, do we know anything about these questions that Einstein and Bohr didn’t? Well, we now know the famous Bell inequality, the experiments that have demonstrated Bell inequality violation with increasing finality (most recently, in 2015, closing both the detector and the locality loopholes), other constraints on hidden-variable theories (e.g. Kochen-Specker and PBR), decoherence theory, and the experiments that have manufactured increasingly enormous superpositions (still, for better or worse, not exploding piles of gunpowder or cats!), while also verifying detailed predictions about how such superpositions decohere due to entanglement with the environment rather than some mysterious new law of physics.
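Since I just invoked Bell inequality violation, here's a tiny numpy illustration of the numbers at stake (my own toy sketch, nothing to do with the actual experiments): for the textbook optimal strategy on a Bell pair, the CHSH correlation value comes out to 2√2 ≈ 2.83, whereas no local hidden-variable theory can exceed 2.

```python
import numpy as np

# Pauli operators and the Bell state |Phi+> = (|00> + |11>)/sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)

def observable(theta):
    """Spin measurement along angle theta in the X-Z plane (eigenvalues +/-1)."""
    return np.cos(theta) * Z + np.sin(theta) * X

def correlation(a, b):
    """Quantum expectation value <A(a) tensor B(b)> on the Bell state."""
    M = np.kron(observable(a), observable(b))
    return phi_plus @ M @ phi_plus

# Standard optimal CHSH measurement settings for Alice and Bob
a0, a1 = 0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4

S = correlation(a0, b0) + correlation(a0, b1) + correlation(a1, b0) - correlation(a1, b1)
print(f"Quantum CHSH value: {S:.4f}")   # ~2.828 = 2*sqrt(2), Tsirelson's bound
print("Classical (local hidden variable) bound: 2")
```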

So, if we were able to send a single short message back in time to the 1927 Solvay Conference, adjudicating between Einstein and Bohr without getting into any specifics, what should the message say? Here’s my attempt:

  • In 2022, quantum mechanics does still seem to be a final answer—not an approximation to anything deeper as Einstein hoped. And yet, contra Bohr, there was considerably more to say about the matter! The implications for reality could indeed be described as “ludicrous” from a classical perspective, arguably even more than Einstein realized. And yet the resolution turns out simply to be that we live in a universe where those implications are true.

OK, here’s the point I want to make. Even supposing you agree with me (not everyone will) that the above would be a reasonable modern summary to send back in time, it’s still totally unclear how to use it to mark the Einstein vs. Bohr scorecard!

Indeed, it’s not surprising that partisans have defended every possible scoring, from 100% for Bohr (quantum mechanics vindicated! Bohr called it from the start!), to 100% for Einstein (he put his finger directly on the implications that needed to be understood, against the evil Bohr who tried to shut everyone up about them! Einstein FTW!).

Personally, I’d give neither of them perfect marks, in part because they not only both missed Bell’s Theorem, but failed even to ask the requisite question (namely: what empirically verifiable tasks can Alice and Bob use entanglement to do, that they couldn’t have done without entanglement?). But I’d give both of them very high marks for, y’know, still being Albert Einstein and Niels Bohr.

And with that, I’m proud to have said the final word about precisely what Einstein and Bohr got right and wrong about quantum physics. I’m relieved that no one will ever need to debate that tiresome historical question again … certainly not in the comments section of this post.

Computer scientists crash the Solvay Conference

Thursday, June 9th, 2022

Thanks so much to everyone who sent messages of support following my last post! I vowed there that I’m going to stop letting online trolls and sneerers occupy so much space in my mental world. Truthfully, though, while there are many trolls and sneerers who terrify me, there are also some who merely amuse me. A good example of the latter came a few weeks ago, when an anonymous commenter calling themselves “String Theorist” submitted the following:

It’s honestly funny to me when you [Scott] call yourself a “nerd” or a “prodigy” or whatever [I don’t recall ever calling myself a “prodigy,” which would indeed be cringe, though “nerd” certainly —SA], as if studying quantum computing, which is essentially nothing more than glorified linear algebra, is such an advanced intellectual achievement. For what it’s worth I’m a theoretical physicist, I’m in a completely different field, and I was still able to learn Shor’s algorithm in about half an hour, that’s how easy this stuff is. I took a look at some of your papers on arXiv and the math really doesn’t get any more advanced than linear algebra. To understand quantum circuits about the most advanced concept is a tensor product which is routinely covered in undergraduate linear algebra. Wheras in my field of string theory grasping, for instance, holographic dualities relating confirmal field theories and gravity requires vastly more expertise (years of advanced study). I actually find it pretty entertaining that you’ve said yourself you’re still struggling to understand QFT, which most people I’m working with in my research group were first exposed to in undergrad 😉 The truth is we’re in entirely different leagues of intelligence (“nerdiness”) and any of your qcomputing papers could easily be picked up by a first or second year math major. It’s just a joke that this is even a field (quantum complexity theory) with journals and faculty when the results in your papers that I’ve seen are pretty much trivial and don’t require anything more than undergraduate level maths.

Why does this sort of trash-talk, reminiscent of Luboš Motl, no longer ruffle me? Mostly because the boundaries between quantum computing theory, condensed matter physics, and quantum gravity, which were never clear in the first place, have steadily gotten fuzzier. Even in the 1990s, the field of quantum computing attracted amazing physicists—folks who definitely do know quantum field theory—such as Ed Farhi, John Preskill, and Ray Laflamme. Decades later, it would be fair to say that the physicists have banged their heads against many of the same questions that we computer scientists have banged our heads against, oftentimes in collaboration with us. And yes, there were cases where actual knowledge of particle physics gave physicists an advantage—with some famous examples being the algorithms of Farhi and collaborators (the adiabatic algorithm, the quantum walk on conjoined trees, the NAND-tree algorithm). There were other cases where computer scientists’ knowledge gave them an advantage: I wouldn’t know many details about that, but conceivably shadow tomography, BosonSampling, PostBQP=PP? Overall, it’s been what you wish every interdisciplinary collaboration could be.

What’s new, in the last decade, is that the scientific conversation centered around quantum information and computation has dramatically “metastasized,” to encompass not only a good fraction of all the experimentalists doing quantum optics and sensing and metrology and so forth, and not only a good fraction of all the condensed-matter theorists, but even many leading string theorists and quantum gravity theorists, including Susskind, Maldacena, Bousso, Hubeny, Harlow, and yes, Witten. And I don’t think it’s just that they’re too professional to trash-talk quantum information people the way commenter “String Theorist” does. Rather it’s that, because of the intellectual success of “It from Qubit,” we’re increasingly participating in the same conversations and working on the same technical questions. One particularly exciting such question, which I’ll have more to say about in a future post, is the truth or falsehood of the Quantum Extended Church-Turing Thesis for observers who jump into black holes.

Not to psychoanalyze, but I’ve noticed a pattern wherein, the more secure a scientist is about their position within their own field, the readier they are to admit ignorance about the neighboring fields, to learn about those fields, and to reach out to the experts in them, to ask simple or (as it usually turns out) not-so-simple questions.


I can’t imagine any better illustration of these tendencies than the 28th Solvay Conference on the Physics of Quantum Information, which I attended two weeks ago in Brussels on my 41st birthday.

As others pointed out, the proportion of women is not as high as we all wish, but it’s higher than in 1911, when there was exactly one: Madame Curie herself.

It was my first trip out of the US since before COVID—indeed, I’m so out of practice that I nearly missed my flights in both directions, in part because of my lack of familiarity with the COVID protocols for transatlantic travel, as well as the immense lines caused by those protocols. My former adviser Umesh Vazirani, who was also at the Solvay Conference, was proud.

The Solvay Conference is the venue where, legendarily, the fundamentals of quantum mechanics got hashed out between 1911 and 1927, by the likes of Einstein, Bohr, Planck, and Curie. (Einstein complained, in a letter, about being called away from his work on general relativity to attend a “witches’ sabbath.”) Remarkably, it’s still being held in Brussels every few years, and still funded by the same Solvay family that started it. The once-every-few-years schedule has, we were constantly reminded, been interrupted only three times in its 110-year history: once for WWI, once for WWII, and now once for COVID (this year’s conference was supposed to be in 2020).

This was the first ever Solvay conference organized around the theme of quantum information, and apparently, the first ever that counted computer scientists among its participants (me, Umesh Vazirani, Dorit Aharonov, Urmila Mahadev, and Thomas Vidick). There were four topics: (1) many-body physics, (2) quantum gravity, (3) quantum computing hardware, and (4) quantum algorithms. The structure, apparently unchanged since the conference’s founding, is this: everyone attends every session, without exception. They sit around facing each other the whole time; no one ever stands to lecture. For each topic, two “rapporteurs” introduce the topic with half-hour prepared talks; then there are short prepared response talks as well as an hour or more of unstructured discussion. Everything everyone says is recorded in order to be published later.


Daniel Gottesman and I were the two rapporteurs for quantum algorithms: Daniel spoke about quantum error-correction and fault-tolerance, and I spoke about “How Much Structure Is Needed for Huge Quantum Speedups?” The link goes to my PowerPoint slides, if you’d like to check them out. I tried to survey 30 years of history of that question, from Simon’s and Shor’s algorithms, to huge speedups in quantum query complexity (e.g., glued trees and Forrelation), to the recent quantum supremacy experiments based on BosonSampling and Random Circuit Sampling, all the way to the breakthrough by Yamakawa and Zhandry a couple months ago. The last slide hypothesizes a “Law of Conservation of Weirdness,” which after all these decades still remains to be undermined: “For every problem that admits an exponential quantum speedup, there must be some weirdness in its detailed statement, which the quantum algorithm exploits to focus amplitude on the rare right answers.” My title slide also shows DALL-E2‘s impressionistic take on the title question, “how much structure is needed for huge quantum speedups?”:
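To give a flavor of the “structure” in question (this is my own toy illustration, not a slide from the talk), here's a brute-force classical computation of the Forrelation quantity Φ(f,g): a quantity that a quantum computer can estimate with a single query to f and g, but that looks hopelessly “global” to any classical algorithm making few queries. A pair where g tracks the sign of f's Fourier transform scores around 0.8; an unrelated pair scores close to 0.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
N = 2 ** n

def forrelation(f, g):
    """Phi(f,g) = 2^(-3n/2) * sum_{x,y} f(x) * (-1)^(x.y) * g(y),
       where x.y is the inner product of the bit strings x, y mod 2."""
    xs = np.arange(N)
    signs = (-1.0) ** np.array([[bin(x & y).count("1") for y in xs] for x in xs])
    return f @ signs @ g / N ** 1.5

# A random Boolean function f, and g chosen as the SIGN of f's Fourier transform.
f = rng.choice([-1.0, 1.0], size=N)
fhat = np.array([sum(f[x] * (-1) ** bin(x & y).count("1") for x in range(N))
                 for y in range(N)]) / N
g_forrelated = np.where(fhat >= 0, 1.0, -1.0)
g_random = rng.choice([-1.0, 1.0], size=N)

print(f"Forrelated pair:  Phi ~ {forrelation(f, g_forrelated):.3f}")  # ~0.8 (~sqrt(2/pi))
print(f"Independent pair: Phi ~ {forrelation(f, g_random):.3f}")      # ~0, within ~2^(-n/2)
```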

The discussion following my talk was largely a debate between me and Ed Farhi, reprising many debates he and I have had over the past 20 years: Farhi urged optimism about the prospect for large, practical quantum speedups via algorithms like QAOA, pointing out his group’s past successes and explaining how they wouldn’t have been possible without an optimistic attitude. For my part, I praised the past successes and said that optimism is well and good, but at the same time, companies, venture capitalists, and government agencies are right now pouring billions into quantum computing, in many cases—as I know from talking to them—because of a mistaken impression that QCs are already known to be able to revolutionize machine learning, finance, supply-chain optimization, or whatever other application domains they care about, and to do so soon. They’re genuinely surprised to learn that the consensus of QC experts is in a totally different place. And to be clear: among quantum computing theorists, I’m not at all unusually pessimistic or skeptical, just unusually willing to say in public what others say in private.

Afterwards, one of the string theorists said that Farhi’s arguments with me had been a highlight … and I agreed. What’s the point of a friggin’ Solvay Conference if everyone’s just going to agree with each other?


Besides quantum algorithms, there was naturally lots of animated discussion about the practical prospects for building scalable quantum computers. While I’d hoped that this discussion might change the impressions I’d come with, it mostly confirmed them. Yes, the problem is staggeringly hard. Recent ideas for fault-tolerance, including the use of LDPC codes and bosonic codes, might help. Gottesman’s talk gave me the insight that, at its core, quantum fault-tolerance is all about testing, isolation, and contact-tracing, just for bit-flip and phase-flip errors rather than viruses. Alas, we don’t yet have the quantum fault-tolerance analogue of a vaccine!
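To spell out that analogy in the simplest toy case (my example, not Daniel's): the 3-qubit repetition code protects against bit-flip errors by measuring only parity checks, “tests” that locate the error without ever looking at the protected data, and phase-flip errors are handled the same way in the conjugate basis. Here's a minimal classical sketch of the bit-flip part:

```python
import random

def encode(bit):
    """3-qubit (here: 3-bit) repetition code: protects against a single bit flip.
       (Phase flips are handled identically in the Hadamard-rotated basis.)"""
    return [bit, bit, bit]

def syndrome(block):
    """Parity checks Z1Z2 and Z2Z3: they reveal WHERE an error struck,
       without revealing the encoded bit itself -- the 'test and trace' step."""
    return (block[0] ^ block[1], block[1] ^ block[2])

def correct(block):
    """Look up which bit the syndrome implicates, and flip it back."""
    flipped = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(block))
    if flipped is not None:
        block[flipped] ^= 1
    return block

block = encode(1)
block[random.randrange(3)] ^= 1     # a single bit-flip error strikes at random
print("Syndrome:", syndrome(block)) # identifies the culprit without reading the data
print("Decoded: ", correct(block)[0])
```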

At one point, I asked the trapped-ion experts in open session if they’d comment on the startup company IonQ, whose stock price recently fell precipitously in the wake of a scathing analyst report. Alas, none of them took the bait.

On a different note, I was tremendously excited by the quantum gravity session. Netta Engelhardt spoke about her and others’ celebrated recent work explaining the Page curve of an evaporating black hole using Euclidean path integrals—and by questioning her and others during coffee breaks, I finally got a handwavy intuition for how it works. There was also lots of debate, again at coffee breaks, about Susskind’s recent speculations on observers jumping into black holes and the quantum Extended Church-Turing Thesis. One of my main takeaways from the conference was a dramatically better understanding of the issues involved there—but that’s a big enough topic that it will need its own post.

Toward the end of the quantum gravity session, the experimentalist John Martinis innocently asked what actual experiments, or at least thought experiments, had been at issue for the past several hours. I got a laugh by explaining to him that, while the gravity experts considered this too obvious to point out, the thought experiments in question all involve forming a black hole in a known quantum pure state, with total control over all the Planck-scale degrees of freedom; then waiting outside the black hole for ~10^70 years; collecting every last photon of Hawking radiation that comes out and routing them all into a quantum computer; doing a quantum computation that might actually require exponential time; and then jumping into the black hole, whereupon you might either die immediately at the event horizon, or else learn something in your last seconds before hitting the singularity, which you could then never communicate to anyone outside the black hole. Martinis thanked me for clarifying.


Anyway, I had a total blast. Here I am amusing some of the world’s great physicists by letting them mess around with GPT-3.

Back: Ahmed Almheiri, Juan Maldacena, John Martinis, Aron Wall. Front: Geoff Penington, me, Daniel Harlow. Thanks to Michelle Simmons for the photo.

I also had the following exchange at my birthday dinner:

Physicist: So I don’t get this, Scott. Are you a physicist who studied computer science, or a computer scientist who studied physics?

Me: I’m a computer scientist who studied computer science.

Physicist: But then you…

Me: Yeah, at some point I learned what a boson was, in order to invent BosonSampling.

Physicist: And your courses in physics…

Me: They ended at thermodynamics. I couldn’t handle PDEs.

Physicist: What are the units of h-bar?

Me: Uhh, well, it’s a conversion factor between energy and time. (*)

Physicist: Good. What’s the radius of the hydrogen atom?

Me: Uhh … not sure … maybe something like 10^-15 meters?

Physicist: OK fine, he’s not one of us.

(The answer, it turns out, is more like 10^-10 meters. I’d stupidly substituted the radius of the nucleus—or, y’know, a positively-charged hydrogen ion, i.e. proton. In my partial defense, I was massively jetlagged and at most 10% conscious.)

(*) Actually h-bar is a conversion factor between energy and 1/time, i.e. frequency, but the physicist accepted this answer.


Anyway, I look forward to attending more workshops this summer, seeing more colleagues who I hadn’t seen since before COVID, and talking more science … including branching out in some new directions that I’ll blog about soon. It does beat worrying about online trolls.

My first-ever attempt to create a meme!

Wednesday, April 27th, 2022

Back

Saturday, April 23rd, 2022

Thanks to everyone who asked whether I’m OK! Yeah, I’ve been living, loving, learning, teaching, worrying, procrastinating, just not blogging.


Last week, Takashi Yamakawa and Mark Zhandry posted a preprint to the arXiv, “Verifiable Quantum Advantage without Structure,” that represents some of the most exciting progress in quantum complexity theory in years. I wish I’d thought of it. tl;dr they show that relative to a random oracle (!), there’s an NP search problem that quantum computers can solve exponentially faster than classical ones. And yet this is 100% consistent with the Aaronson-Ambainis Conjecture!


A student brought my attention to Quantle, a variant of Wordle where you need to guess a true equation involving 1-qubit quantum states and unitary transformations. It’s really well-done! Possibly the best quantum game I’ve seen.


Last month, Microsoft announced on the web that it had achieved an experimental breakthrough in topological quantum computing: not quite the creation of a topological qubit, but some of the underlying physics required for that. This followed their needing to retract their previous claim of such a breakthrough, due to the criticisms of Sergey Frolov and others. One imagines that they would’ve taken far greater care this time around. Unfortunately, a research paper doesn’t seem to be available yet. Anyone with further details is welcome to chime in.


Woohoo! Maximum flow, maximum bipartite matching, matrix scaling, and isotonic regression on posets (among many others)—all algorithmic problems that I was familiar with way back in the 1990s—are now solvable in nearly-linear time, thanks to a breakthrough by Chen et al.! Many undergraduate algorithms courses will need to be updated.


For those interested, Steve Hsu recorded a podcast with me where I talk about quantum complexity theory.

Why Quantum Mechanics?

Tuesday, January 25th, 2022

In the past few months, I’ve twice injured the same ankle while playing with my kids. This, perhaps combined with covid, led me to several indisputable realizations:

  1. I am mortal.
  2. Despite my self-conception as a nerdy little kid awaiting the serious people’s approval, I am now firmly middle-aged. By my age, Einstein had completed general relativity, Turing had founded CS, won WWII, and proposed the Turing Test, and Galois, Ramanujan, and Ramsey had been dead for years.
  3. Thus, whatever I wanted to accomplish in my intellectual life, I should probably get started on it now.

Hence today’s post. I’m feeling a strong compulsion to write an essay, or possibly even a book, surveying and critically evaluating a century of ideas about the following question:

Q: Why should the universe have been quantum-mechanical?

If you want, you can divide Q into two subquestions:

Q1: Why didn’t God just make the universe classical and be done with it? What would’ve been wrong with that choice?

Q2: Assuming classical physics wasn’t good enough for whatever reason, why this specific alternative? Why the complex-valued amplitudes? Why unitary transformations? Why the Born rule? Why the tensor product?

Despite its greater specificity, Q2 is ironically the question that I feel we have a better handle on. I could spend half a semester teaching theorems that admittedly don’t answer Q2, as satisfyingly as Einstein answered the question “why the Lorentz transformations?,” but that at least render this particular set of mathematical choices (the 2-norm, the Born Rule, complex numbers, etc.) orders-of-magnitude less surprising than one might’ve thought they were a priori. Q1 therefore stands, to me at least, as the more mysterious of the two questions.
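For readers who want a concrete handle on what “why the 2-norm, why unitaries” even means, here's a tiny numpy sketch of the dichotomy those theorems revolve around (my illustration, not one of the theorems themselves): classical probability is the theory of nonnegative vectors with 1-norm 1 evolving by stochastic matrices, QM is the theory of complex vectors with 2-norm 1 evolving by unitary matrices, and the Born rule is the bridge from the latter back to the former.

```python
import numpy as np

rng = np.random.default_rng(1)

# Classical probability: nonnegative vectors of 1-norm 1, evolved by stochastic matrices.
p = rng.random(4); p /= p.sum()
S = rng.random((4, 4)); S /= S.sum(axis=0)          # columns sum to 1 (column-stochastic)
print(np.sum(S @ p))                                # still 1: the 1-norm is preserved

# Quantum mechanics: complex vectors of 2-norm 1, evolved by unitary matrices.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))  # random unitary
print(np.linalg.norm(U @ psi))                      # still 1: the 2-norm is preserved

# Born rule: measuring |psi> in the computational basis gives outcome i with prob |psi_i|^2,
# which is once again a probability vector of 1-norm 1.
print(np.abs(psi) ** 2, np.sum(np.abs(psi) ** 2))
```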

So, I want to write something about the space of credible answers to Q, and especially Q1, that humans can currently conceive. I want to do this for my own sake as much as for others’. I want to do it because I regard Q as one of the biggest questions ever asked, for which it seems plausible to me that there’s simply an answer that most experts would accept as valid once they saw it, but for which no such answer is known. And also because, besides having spent 25 years working in quantum information, I have the following qualifications for the job:

  • I don’t dismiss either Q1 or Q2 as silly; and
  • crucially, I don’t think I already know the answers, and merely need better arguments to justify them. I’m genuinely uncertain and confused.

The purpose of this post is to invite you to share your own answers to Q in the comments section. Before I embark on my survey project, I’d better know if there are promising ideas that I’ve missed, and this blog seems like as good a place as any to crowdsource the job.

Any answer is welcome, no matter how wild or speculative, so long as it honestly grapples with the actual nature of QM. To illustrate, nothing along the lines of “the universe is quantum because it needs to be holistic, interconnected, full of surprises, etc. etc.” will cut it, since such answers leave utterly unexplained why the world wasn’t simply endowed with those properties directly, rather than specifically via generalizing the rules of probability to allow interference and noncommuting observables.

Relatedly, whatever “design goal” you propose for the laws of physics, if the goal is satisfied by QM, but satisfied even better by theories that provide even more power than QM does—for instance, superluminal signalling, or violations of Tsirelson’s bound, or the efficient solution of NP-complete problems—then your explanation is out. This is a remarkably strong constraint.

Oh, needless to say, don’t try my patience with anything about the uncertainty principle being due to floating-point errors or rendering bugs, or anything else that relies on a travesty of QM lifted from a popular article or meme! 🙂

OK, maybe four more comments to enable a more productive discussion, before I shut up and turn things over to you:

  1. I’m aware, of course, of the radical uncertainty about what form an answer to Q should even take. Am I asking you to psychoanalyze the will of God in creating the universe? Or, what perhaps amounts to the same thing, am I asking for the design objectives of the giant computer simulation that we’re living in? (As in, “I’m 100% fine with living inside a Matrix … I just want to understand why it’s a unitary matrix!”) Am I instead asking for an anthropic explanation, showing why of course QM would be needed if you wanted life or consciousness like ours? Am I “merely” asking for simpler or more intuitive physical principles from which QM is to be derived as a consequence? Am I asking why QM is the “most elegant choice” in some space of mathematical options … even to the point where, with hindsight, a 19th-century mathematician or physicist could’ve been convinced that of course this must be part of Nature’s plan? Am I asking for something else entirely? You get to decide! Should you take up my challenge, this is both your privilege and your terrifying burden.
  2. I’m aware, of course, of the dizzying array of central physical phenomena that rely on QM for their ultimate explanation. These phenomena range from the stability of matter itself, which depends on the Pauli exclusion principle; to the nuclear fusion that powers the sun, which depends on a quantum tunneling effect; to the discrete energy levels of electrons (and hence, the combinatorial nature of chemistry), which relies on electrons being waves of probability amplitude that can only circle nuclei an integer number of times if their crests are to meet their troughs. Important as they are, though, I don’t regard any of these phenomena as satisfying answers to Q in themselves. The reason is simply that, in each case, it would seem like child’s-play to contrive some classical mechanism to produce the same effect, were that the goal. QM just seems far too grand to have been the answer to these questions! An exponentially larger state space for all of reality, plus the end of Newtonian determinism, just to overcome the technical problem that accelerating charges radiate energy in classical electrodynamics, thereby rendering atoms unstable? It reminds me of the Simpsons episode where Homer uses a teleportation machine to get a beer from the fridge without needing to get up off the couch.
  3. I’m aware of Gleason’s theorem, and of the specialness of the 1-norm and 2-norm in linear algebra, and of the arguments for complex amplitudes as opposed to reals or quaternions, and of the beautiful work of Lucien Hardy and of Chiribella et al. and others on axiomatic derivations of quantum theory. As some of you might remember, I even discussed much of this material in Quantum Computing Since Democritus! There’s a huge amount to say about these fascinating justifications for the rules of QM, and I hope to say some of it in my planned survey! For now, I’ll simply remark that every axiomatic reconstruction of QM that I’ve seen, impressive though it was, has relied on one or more axioms that struck me as weird, in the sense that I’d have little trouble dismissing the axioms as totally implausible and unmotivated if I hadn’t already known (from QM, of course) that they were true. The axiomatic reconstructions do help me somewhat with Q2, but little if at all with Q1.
  4. To keep the discussion focused, in this post I’d like to exclude answers along the lines of “but what if QM is merely an approximation to something else?,” to say nothing of “a century of evidence for QM was all just a massive illusion! LOCAL HIDDEN VARIABLES FOR THE WIN!!!” We can have those debates another day—God knows that, here on Shtetl-Optimized, we have and we will. Here I’m asking instead: imagine that, as fantastical as it sounds, QM were not only exactly true, but (along with relativity, thermodynamics, evolution, and the tastiness of chocolate) one of the profoundest truths our sorry species had ever discovered. Why should I have expected that truth all along? What possible reasons to expect it have I missed?

On tardigrades, superdeterminism, and the struggle for sanity

Monday, January 10th, 2022

(Hopefully no one has taken that title yet!)

I waste a large fraction of my existence just reading about what’s happening in the world, or discussion and analysis thereof, in an unending scroll of paralysis and depression. On the first anniversary of the January 6 attack, I read the recent revelations about just how close the seditionists actually came to overturning the election outcome (e.g., by pressuring just one Republican state legislature to “decertify” its electors, after which the others would likely follow in a domino effect), and how hard it now is to see a path by which democracy in the United States will survive beyond 2024. Or I read about Joe Manchin, who’s already entered the annals of history as the man who could’ve halted the slide to the abyss and decided not to. Of course, I also read about the wokeists, who correctly see the swing of civilization getting pushed terrifyingly far out of equilibrium to the right, so their solution is to push the swing terrifyingly far out of equilibrium to the left, and then they act shocked when their own action, having added all this potential energy to the swing, causes it to swing back even further to the right, as swings tend to do. (And also there’s a global pandemic killing millions, and the correct response to it—to authorize and distribute new vaccines as quickly as the virus mutates—is completely outside the Overton Window between Obey the Experts and Disobey the Experts, advocated by no one but a few nerds. When I first wrote this post, I forgot all about the global pandemic.) And I see all this and I am powerless to stop it.

In such a dark time, it’s easy to forget that I’m a theoretical computer scientist, mainly focused on quantum computing. It’s easy to forget that people come to this blog because they want to read about quantum computing. It’s like, who gives a crap about that anymore? What doth it profit a man, if he gaineth a few thousand fault-tolerant qubits with which to calculateth chemical reaction rates or discrete logarithms, and he loseth civilization?

Nevertheless, in the rest of this post I’m going to share some quantum-related debunking updates—not because that’s what’s at the top of my mind, but in an attempt to find my way back to sanity. Picture that: quantum mechanics (and specifically, the refutation of outlandish claims related to quantum mechanics) as the part of one’s life that’s comforting, normal, and sane.


There’s been lots of online debate about the claim to have entangled a tardigrade (i.e., water bear) with a superconducting qubit; see also this paper by Vlatko Vedral, this from CNET, this from Ben Brubaker on Twitter. So, do we now have Schrödinger’s Tardigrade: a living, “macroscopic” organism maintained coherently in a quantum superposition of two states? How could such a thing be possible with the technology of the early 21st century? Hasn’t it been a huge challenge to demonstrate even Schrödinger’s Virus or Schrödinger’s Bacterium? So then how did this experiment leapfrog (or leaptardigrade) over those vastly easier goals?

Short answer: it didn’t. The experimenters couldn’t directly measure the degree of freedom in the tardigrade that’s claimed to be entangled with the qubit. But it’s consistent with everything they report that whatever entanglement is there, it’s between the superconducting qubit and a microscopic part of the tardigrade. It’s also consistent with everything they report that there’s no entanglement at all between the qubit and any part of the tardigrade, just boring classical correlation. (Or rather that, if there’s “entanglement,” then it’s the Everett kind, involving not merely the qubit and the tardigrade but the whole environment—the same as we’d get by just measuring the qubit!) Further work would be needed to distinguish these possibilities. In any case, it’s of course cool that they were able to cool a tardigrade to near absolute zero and then revive it afterwards.
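For readers who want to see the distinction between entanglement and “boring classical correlation” in the simplest possible terms, here's a toy numpy sketch (mine, with no connection to the actual experimental data): in the two-qubit case, the Peres-Horodecki partial-transpose test cleanly separates a Bell pair from a classically correlated mixture, even though the two give identical measurement statistics in the computational basis.

```python
import numpy as np

def partial_transpose(rho):
    """Transpose the second qubit's indices of a 2-qubit density matrix."""
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

def is_entangled(rho):
    """For two qubits, a negative eigenvalue of the partial transpose
       is equivalent to entanglement (Peres-Horodecki criterion)."""
    return np.linalg.eigvalsh(partial_transpose(rho)).min() < -1e-12

# A Bell pair: genuinely entangled.
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
bell = np.outer(phi, phi.conj())

# "Boring classical correlation": an equal mixture of |00><00| and |11><11|.
classical = np.zeros((4, 4)); classical[0, 0] = classical[3, 3] = 0.5

print(is_entangled(bell))       # True
print(is_entangled(classical))  # False
```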

I thank the authors of the tardigrade paper, who clarified a few of these points in correspondence with me. Obviously the comments section is open for whatever I’ve misunderstood.


People also asked me to respond to Sabine Hossenfelder’s recent video about superdeterminism, a theory that holds that quantum entanglement doesn’t actually exist, but the universe’s initial conditions were fine-tuned to stop us from choosing to measure qubits in ways that would make its nonexistence apparent: even when we think we’re applying the right measurements, we’re not, because the initial conditions messed with our brains or our computers’ random number generators. (See, I tried to be as non-prejudicial as possible in that summary, and it still came out sounding like a parody. Sorry!)

Sabine sets up the usual dichotomy that people argue against superdeterminism only because they’re attached to a belief in free will. She rejects Bell’s statistical independence assumption, which she sees as a mere dogma rather than a prerequisite for doing science. Toward the end of the video, Sabine mentions the objection that, without statistical independence, a demon could destroy any randomized controlled trial, by tampering with the random number generator that decides who’s in the control group and who isn’t. But she then reassures the viewer that it’s no problem: superdeterministic conspiracies will only appear when quantum mechanics would’ve predicted a Bell inequality violation or the like. Crucially, she never explains the mechanism by which superdeterminism, once allowed into the universe (including into macroscopic devices like computers and random number generators), will stay confined to reproducing the specific predictions that quantum mechanics already told us were true, rather than enabling ESP or telepathy or other mischief. This is stipulated, never explained or derived.

To say I’m not a fan of superdeterminism would be a super-understatement. And yet, nothing I’ve written previously on this blog—about superdeterminism’s gobsmacking lack of explanatory power, or about how trivial it would be to cook up a superdeterministic “mechanism” for, e.g., faster-than-light signaling—none of it seems to have made a dent. It’s all come across as obvious to the majority of physicists and computer scientists who think as I do, and it’s all fallen on deaf ears to superdeterminism’s fans.

So in desperation, let me now try another tack: going meta. It strikes me that no one who saw quantum mechanics as a profound clue about the nature of reality could ever, in a trillion years, think that superdeterminism looked like a promising route forward given our current knowledge. The only way you could think that, it seems to me, is if you saw quantum mechanics as an anti-clue: a red herring, actively misleading us about how the world really is. To be a superdeterminist is to say:

OK, fine, there’s the Bell experiment, which looks like Nature screaming the reality of ‘genuine indeterminism, as predicted by QM,’ louder than you might’ve thought it even logically possible for that to be screamed. But don’t listen to Nature, listen to us! If you just drop what you thought were foundational assumptions of science, we can explain this away! Not explain it, of course, but explain it away. What more could you ask from us?

Here’s my challenge to the superdeterminists: when, in 400 years from Galileo to the present, has such a gambit ever worked? Maxwell’s equations were a clue to special relativity. The Hamiltonian and Lagrangian formulations of classical mechanics were clues to quantum mechanics. When has a great theory in physics ever been grudgingly accommodated by its successor theory in a horrifyingly ad-hoc way, rather than gloriously explained and derived?


Update: Oh right, and the QIP’2022 list of accepted talks is out! And I was on the program committee! And they’re still planning to hold QIP in person, in March at Caltech, fancy that! Actually, I have no idea: if they do move to virtual, I’m awaiting an announcement just like everyone else.

Two new talks and an interview

Thursday, December 2nd, 2021
  1. A talk to UT Austin’s undergraduate math club (handwritten PDF notes) about Hao Huang’s proof of the Sensitivity Conjecture, and its implications for quantum query complexity and more. I’m still not satisfied that I’ve presented Huang’s beautiful proof as clearly and self-containedly as I possibly can, which probably just means I need to lecture on it a few more times.
  2. A Zoom talk at the QPQIS conference in Beijing (PowerPoint slides), setting out my most recent thoughts about Google’s and USTC’s quantum supremacy experiments and the continuing efforts to spoof them classically.
  3. An interview with me in Communications of the ACM, mostly about BosonSampling and the quantum lower bound for the collision problem.

Enjoy y’all!

The Acrobatics of BQP

Friday, November 19th, 2021

Just in case anyone is depressed this afternoon and needs something to cheer them up, students William Kretschmer, DeVon Ingram, and I have finally put out a new paper:

The Acrobatics of BQP

Abstract: We show that, in the black-box setting, the behavior of quantum polynomial-time (BQP) can be remarkably decoupled from that of classical complexity classes like NP. Specifically:

– There exists an oracle relative to which NP^BQP ⊄ BQP^PH, resolving a 2005 problem of Fortnow. Interpreted another way, we show that AC^0 circuits cannot perform useful homomorphic encryption on instances of the Forrelation problem. As a corollary, there exists an oracle relative to which P=NP but BQP≠QCMA.

– Conversely, there exists an oracle relative to which BQP^NP ⊄ PH^BQP.

– Relative to a random oracle, PP=PostBQP is not contained in the “QMA hierarchy” QMA^QMA^QMA^…, and more generally PP ⊄ (MIP*)^(MIP*)^(MIP*)^… (!), despite the fact that MIP*=RE in the unrelativized world. This result shows that there is no black-box quantum analogue of Stockmeyer’s approximate counting algorithm.

– Relative to a random oracle, Σ_{k+1} ⊄ BQP^{Σ_k} for every k.

– There exists an oracle relative to which BQP=P^#P and yet PH is infinite. (By contrast, if NP⊆BPP, then PH collapses relative to all oracles.)

– There exists an oracle relative to which P=NP≠BQP=P^#P.

To achieve these results, we build on the 2018 achievement by Raz and Tal of an oracle relative to which BQP⊄PH, and associated results about the Forrelation problem. We also introduce new tools that might be of independent interest. These include a “quantum-aware” version of the random restriction method, a concentration theorem for the block sensitivity of AC^0 circuits, and a (provable) analogue of the Aaronson-Ambainis Conjecture for sparse oracles.

Incidentally, particularly when I’ve worked on a project with students, I’m often tremendously excited and want to shout about it from the rooftops for the students’ sake … but then I also don’t want to use this blog to privilege my own papers “unfairly.” Can anyone suggest a principle that I should follow going forward?

Scott Aaronson, when reached for comment, said…

Tuesday, November 16th, 2021

About IBM’s new 127-qubit superconducting chip: As I told New Scientist, I look forward to seeing the actual details! As far as I could see, the marketing materials that IBM released yesterday take a lot of words to say absolutely nothing about what, to experts, is the single most important piece of information: namely, what are the gate fidelities? How deep of a quantum circuit can they apply? How have they benchmarked the chip? Right now, all I have to go on is a stats page for the new chip, which reports its average CNOT error as 0.9388—in other words, close to 1, or terrible! (But see also a tweet by James Wootton, which explains that such numbers are often highly misleading when a new chip is first rolled out.) Does anyone here have more information? Update (11/17): As of this morning, the average CNOT error has been updated to 2%. Thanks to multiple commenters for letting me know!
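As a crude back-of-envelope for why gate fidelity is the headline number (my own toy depolarizing-style estimate, with made-up parameters, not anything from IBM's data): the fidelity of a whole circuit decays roughly like (1-ε)^G with G two-qubit gates and per-gate error ε, so ε essentially sets how many gates you can usefully apply before the output is mostly noise.

```python
import math

def gates_before_fidelity_drops_to(target, eps):
    """Roughly how many gates until whole-circuit fidelity ~ (1-eps)^G falls below `target`.
       (Crude estimate: ignores circuit structure, error correction, and error type.)"""
    return int(math.log(target) / math.log(1 - eps))

for eps in [0.02, 0.01, 0.001]:
    g = gates_before_fidelity_drops_to(0.5, eps)
    print(f"per-gate error {eps:>6}: ~{g:>4} gates before circuit fidelity drops below 1/2")
```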

About the new simulation of Google’s 53-qubit Sycamore chip in 5 minutes on a Sunway supercomputer (see also here): This is an exciting step forward on the classical validation of quantum supremacy experiments, and—ironically, what currently amounts to almost the same thing—on the classical spoofing of those experiments. Congratulations to the team in China that achieved this! But there are two crucial things to understand. First, “5 minutes” refers to the time needed to calculate a single amplitude (or perhaps, several correlated amplitudes) using tensor network contraction. It doesn’t refer to the time needed to generate millions of independent noisy samples, which is what Google’s Sycamore chip does in 3 minutes. For the latter task, more like a week still seems to be needed on the supercomputer. (I’m grateful to Chu Guo, a coauthor of the new work who spoke in UT Austin’s weekly quantum Zoom meeting, for clarifying this point.) Second, the Sunway supercomputer has parallel processing power roughly equivalent to ten million laptops. Thus, even if we agreed that Google no longer had quantum supremacy as measured by time, it would still have quantum supremacy as measured by carbon footprint! (And this despite the fact that the quantum computer itself requires a noisy, closet-sized dilution fridge.) Even so, for me the new work underscores the point that quantum supremacy is not yet a done deal. Over the next few years, I hope that Google and USTC, as well as any new entrants to this race (IBM? IonQ? Harvard? Rigetti?), will push forward with more qubits and, even more importantly, better gate fidelities leading to higher Linear Cross-Entropy scores. Meanwhile, we theorists should try to do our part by inventing new and better protocols with which to demonstrate near-term quantum supremacy—especially protocols for which the classical verification is easier.
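For readers wondering what a Linear Cross-Entropy score actually is: it's the average of the ideal output probabilities of the bitstrings a device produces, rescaled so that uniformly random guessing scores about 0 and faithful sampling from the ideal distribution scores about 1. Here's a toy numpy sketch (mine, with a Porter-Thomas-style stand-in for the output distribution of a real random circuit):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 12
N = 2 ** n

# Stand-in for the ideal output distribution of a random circuit:
# Porter-Thomas-style probabilities (exponentially distributed, normalized).
p_ideal = rng.exponential(size=N)
p_ideal /= p_ideal.sum()

def linear_xeb(samples):
    """Linear cross-entropy benchmark: 2^n * E[p_ideal(sample)] - 1.
       For a Porter-Thomas-style ideal distribution, this is ~1 for faithful
       samples and ~0 for uniformly random bitstrings."""
    return N * p_ideal[samples].mean() - 1

faithful = rng.choice(N, size=100_000, p=p_ideal)   # a "perfect" sampler
spoofer = rng.integers(N, size=100_000)             # a spoofer with no information

print(f"faithful sampler: XEB ~ {linear_xeb(faithful):.3f}")  # ~1
print(f"uniform spoofer:  XEB ~ {linear_xeb(spoofer):.3f}")   # ~0
```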

About the new anti-woke University of Austin (UATX): In general, I’m extremely happy for people to experiment with new and different institutions, and of course I’m happy for more intellectual activity in my adopted city of Austin. And, as Shtetl-Optimized readers will know, I’m probably more sympathetic than most to the reality of the problem that UATX is trying to solve—living, as we do, in an era when one academic after another has been cancelled for ideas that a mere decade ago would’ve been considered unexceptional, moderate, center-left. Having said all that, I wish I could feel more optimistic about UATX’s prospects. I found its website heavy on free-speech rhetoric but frustratingly light on what the new university is actually going to do: what courses it will offer, who will teach them, where the campus will be, etc. etc. Arguably this is all excusable for a university still in ramp-up mode, but had I been in their shoes, I might have held off on the public launch until I had at least some sample content to offer. Certainly, the fact that Steven Pinker has quit UATX’s advisory board is a discouraging sign. If UATX asks me to get involved—to lecture there, to give them advice about their CS program, etc.—I’ll consider it as I would any other request. So far, though, they haven’t.

About the Association for Mathematical Research: Last month, some colleagues invited me to join a brand-new society called the Association for Mathematical Research. Many of the other founders (Joel Hass, Abigail Thompson, Colin Adams, Richard Borcherds, Jeff Cheeger, Pavel Etingof, Tom Hales, Jeff Lagarias, Mark Lackenby, Cliff Taubes, …) were brilliant mathematicians who I admired, they seemed like they could use a bit of theoretical computer science representation, there was no time commitment, maybe they’d eventually do something good, so I figured why not? Alas, to say that AMR has proved unpopular on Twitter would be an understatement: it’s received the same contemptuous reception that UATX has. The argument seems to be: starting a new mathematical society, even an avowedly diverse and apolitical one, is really just an implicit claim that the existing societies, like the Mathematical Association of America (MAA) and the American Mathematical Society (AMS), have been co-opted by woke true-believers. But that’s paranoid and insane! I mean, it’s not as if an AMS blog has called for the mass resignation of white male mathematicians to make room for the marginalized, or the boycott of Israeli universities, or the abolition of the criminal justice system (what to do about Kyle Rittenhouse though?). Still, even though claims of that sort of co-option are obviously far-out, rabid fantasies, yeah, I did decide to give a new organization the benefit of the doubt. AMR might well fail or languish in obscurity, just like UATX might. On the other hand, the barriers to making a positive difference for the intellectual world, the world I love, the world under constant threat from the self-certain ideologues of every side, do strike me as orders of magnitude smaller for a new professional society than they do for a new university.