Archive for the ‘Complexity’ Category

Updatez

Tuesday, August 2nd, 2022
  1. On the IBM Qiskit blog, there’s an interview with me about the role of complexity theory in the early history of quantum computing. Not much new for regular readers, but I’m very happy with how it came out—thanks so much to Robert Davis and Olivia Lanes for making it happen! My only quibble is with the sketch of my face, which might create the inaccurate impression that I no longer have teeth.
  2. Boaz Barak pointed me to a Twitter thread of DALL-E paintings of people using quantum computers, in the styles of many of history’s famous artists. While the motifs are unsurprising (QCs look like regular computers but glowing, or maybe like giant glowing atoms), highly recommended as another demonstration of the sort of thing DALL-E does best.
  3. Dan Spielman asked me to announce that the National Academy of Sciences is seeking nominations for the Held Prize in combinatorial and discrete optimization. The deadline is October 3.
  4. I’m at the NSF Workshop on Quantum Advantage and Next Steps at the University of Chicago. My talk yesterday was entitled “Verifiable Quantum Advantage: What I Hope Will Be Done” (yeah yeah, I decided to call it “advantage” rather than “supremacy” in deference to the name of the workshop). My PowerPoint slides are here. Meanwhile, this morning was the BosonSampling session. The talk by Chaoyang Lu, leader of USTC’s experimental BosonSampling effort, was punctuated by numerous silly memes and videos, as well as the following striking sentence: “only by putting the seven dragon balls together can you unlock the true quantum computational power.”
  5. Gavin Leech lists and excerpts his favorite writings of mine over the past 25 years, while complaining that I spend “a lot of time rebutting fleeting manias” and “obsess[ing] over political flotsam.”

On black holes, holography, the Quantum Extended Church-Turing Thesis, fully homomorphic encryption, and brain uploading

Wednesday, July 27th, 2022

I promise you: this post is going to tell a scientifically coherent story that involves all five topics listed in the title. Not one can be omitted.

My story starts with a Zoom talk that the one and only Lenny Susskind delivered for the Simons Institute for Theory of Computing back in May. There followed a panel discussion involving Lenny, Edward Witten, Geoffrey Penington, Umesh Vazirani, and your humble shtetlmaster.

Lenny’s talk led up to a gedankenexperiment involving an observer, Alice, who bravely jumps into a specially-prepared black hole, in order to see the answer to a certain computational problem in her final seconds before being ripped to shreds near the singularity. Drawing on earlier work by Bouland, Fefferman, and Vazirani, Lenny speculated that the computational problem could be exponentially hard even for a (standard) quantum computer. Despite this, Lenny repeatedly insisted—indeed, he asked me again to stress here—that he was not claiming to violate the Quantum Extended Church-Turing Thesis (QECTT), the statement that all of nature can be efficiently simulated by a standard quantum computer. Instead, he was simply investigating how the QECTT needs to be formulated in order to be a true statement.

I didn’t understand this, to put it mildly. If what Lenny was saying was right—i.e., if the infalling observer could see the answer to a computational problem not in BQP, or Bounded-Error Quantum Polynomial-Time—then why shouldn’t we call that a violation of the QECTT? Just like we call Shor’s quantum factoring algorithm a likely violation of the classical Extended Church-Turing Thesis, the thesis saying that nature can be efficiently simulated by a classical computer? Granted, you don’t have to die in order to run Shor’s algorithm, as you do to run Lenny’s experiment. But why should such implementation details matter from the lofty heights of computational complexity?

Alas, not only did Lenny never answer that in a way that made sense to me, he kept trying to shift the focus from real, physical black holes to “silicon spheres” made of qubits, which would be programmed to simulate the process of Alice jumping into the black hole (in a dual boundary description). Say what? Granting that Lenny’s silicon spheres, being quantum computers under another name, could clearly be simulated in BQP, wouldn’t this still leave the question about the computational powers of observers who jump into actual black holes—i.e., the question that we presumably cared about in the first place?

Confusing me even further, Witten seemed almost dismissive of the idea that Lenny’s gedankenexperiment raised any new issue for the QECTT—that is, any issue that wouldn’t already have been present in a universe without gravity. But as to Witten’s reasons, the most I understood from his remarks was that he was worried about various “engineering” issues with implementing Lenny’s gedankenexperiment, involving gravitational backreaction and the like. Ed Witten, now suddenly the practical guy! I couldn’t even isolate the crux of disagreement between Susskind and Witten, since after all, they agreed (bizarrely, from my perspective) that the QECTT wasn’t violated. Why wasn’t it?

Anyway, shortly afterward I attended the 28th Solvay Conference in Brussels, where one of the central benefits I got—besides seeing friends after a long COVID absence and eating some amazing chocolate mousse—was a dramatically clearer understanding of the issues in Lenny’s gedankenexperiment. I owe this improved understanding to conversations with many people at Solvay, but above all Daniel Gottesman and Daniel Harlow. Lenny himself wasn’t there, other than in spirit, but I ran the Daniels’ picture by him afterwards and he assented to all of its essentials.

The Daniels’ picture is what I want to explain in this post. Needless to say, I take sole responsibility for any errors in my presentation, as I also take sole responsibility for not understanding (or rather: not doing the work to translate into terms that I understood) what Susskind and Witten had said to me before.


The first thing you need to understand about Lenny’s gedankenexperiment is that it takes place entirely in the context of AdS/CFT: the famous holographic duality between two types of physical theories that look wildly different. Here AdS stands for anti-de-Sitter: a quantum theory of gravity describing a D-dimensional universe with a negative cosmological constant (i.e. hyperbolic geometry), one where black holes can form and evaporate and so forth. Meanwhile, CFT stands for conformal field theory: a quantum field theory, with no apparent gravity (and hence no black holes), that lives on the (D-1)-dimensional boundary of the D-dimensional AdS space. The staggering claim of AdS/CFT is that every physical question about the AdS bulk can be translated into an equivalent question about the CFT boundary, and vice versa, with a one-to-one mapping from states to states and observables to observables. So in that sense, they’re actually the same theory, just viewed in two radically different ways. AdS/CFT originally came out of string theory, but then notoriously “swallowed its parent,” to the point where nowadays, if you go to what are still called “string theory” meetings, you’re liable to hear vastly more discussion of AdS/CFT than of actual strings.

Thankfully, the story I want to tell won’t depend on fine details of how AdS/CFT works. Nevertheless, you can’t just ignore the AdS/CFT part as some technicality, in order to get on with the vivid tale of Alice jumping into a black hole, hoping to learn the answer to a beyond-BQP computational problem in her final seconds of existence. The reason you can’t ignore it is that the whole beyond-BQP computational problem we’ll be talking about, involves the translation (or “dictionary”) between the AdS bulk and the CFT boundary. If you like, then, it’s actually the chasm between bulk and boundary that plays the starring role in this story. The more familiar chasm within the bulk, between the interior of a black hole and its exterior (the two separated by an event horizon), plays only a subsidiary role: that of causing the AdS/CFT dictionary to become exponentially complex, as far as anyone can tell.

Pause for a minute. Previously I led you to believe that we’d be talking about an actual observer Alice, jumping into an actual physical black hole, and whether Alice could see the answer to a problem that’s intractable even for quantum computers in her last moments before hitting the singularity, and if so whether we should take that to refute the Quantum Extended Church-Turing Thesis. What I’m saying now is so wildly at variance with that picture, that it had to be repeated to me about 10 times before I understood it. Once I did understand, I then had to repeat it to others about 10 times before they understood. And I don’t care if people ridicule me for that admission—how slow Scott and his friends must be, compared to string theorists!—because my only goal right now is to get you to understand it.

To say it again: Lenny has not proposed a way for Alice to surpass the complexity-theoretic power of quantum computers, even for a brief moment, by crossing the event horizon of a black hole. If that was Alice’s goal when she jumped into the black hole, then alas, she probably sacrificed her life for nothing! As far as anyone knows, Alice’s experiences, even after crossing the event horizon, ought to continue to be described extremely well by general relativity and quantum field theory (at least until she nears the singularity and dies), and therefore ought to be simulatable in BQP. Granted, we don’t actually know this—you can call it an open problem if you like—but it seems like a reasonable guess.

In that case, though, what beyond-BQP problem was Lenny talking about, and what does it have to do with black holes? Building on the Bouland-Fefferman-Vazirani paper, Lenny was interested in a class of problems of the following form: Alice is given as input a pure quantum state |ψ⟩, which encodes a boundary CFT state, which is dual to an AdS bulk universe that contains a black hole. Alice’s goal is, by examining |ψ⟩, to learn something about what’s inside the black hole. For example: does the black hole interior contain “shockwaves,” and if so how many and what kind? Does it contain a wormhole, connecting it to a different black hole in another universe? If so, what’s the volume of that wormhole? (Not the first question I would ask either, but bear with me.)

Now, when I say Alice is “given” the state |ψ⟩, this could mean several things: she could just be physically given a collection of n qubits. Or, she could be given a gigantic table of 2^n amplitudes. Or, as a third possibility, she could be given a description of a quantum circuit that prepares |ψ⟩, say from the all-0 initial state |0^n⟩. Each of these possibilities leads to a different complexity-theoretic picture, and the differences are extremely interesting to me, so that’s what I mostly focused on in my remarks in the panel discussion after Lenny’s talk. But it won’t matter much for the story I want to tell in this post.

However |ψ⟩ is given to Alice, the prediction of AdS/CFT is that |ψ⟩ encodes everything there is to know about the AdS bulk, including whatever is inside the black hole—but, and this is crucial, the information about what’s inside the black hole will be pseudorandomly scrambled. In other words, it works like this: whatever simple thing you’d like to know about parts of the bulk that aren’t hidden behind event horizons—is there a star over here? some gravitational lensing over there? etc.—it seems that you could not only learn it by measuring |ψ⟩, but learn it in polynomial time, the dictionary between bulk and boundary being computationally efficient in that case. (As with almost everything else in this subject, even that hasn’t been rigorously proven, though my postdoc Jason Pollack and I made some progress this past spring by proving a piece of it.) On the other hand, as soon as you want to know what’s inside an event horizon, the fact that there are no probes that an “observer at infinity” could apply to find out, seems to translate into the requisite measurements on |ψ⟩ being exponentially complex to apply. (Technically, you’d have to measure an ensemble of poly(n) identical copies of |ψ⟩, but I’ll ignore that in what follows.)

In more detail, the relevant part of |ψ⟩ turns into a pseudorandom, scrambled mess: a mess that it’s plausible that no polynomial-size quantum circuit could even distinguish from the maximally mixed state. So, while in principle the information is all there in |ψ⟩, getting it out seems as hard as various well-known problems in symmetric-key cryptography, if not literally NP-hard. This is way beyond what we expect even a quantum computer to be able to do efficiently: indeed, after 30 years of quantum algorithms research, the best quantum speedup we know for this sort of task is typically just the quadratic speedup from Grover’s algorithm.
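To put a number on “quadratic speedup”: for unstructured search over N = 2^n possibilities (a standard stand-in for brute-forcing a scrambled or cryptographic problem, and not anything specific to the black-hole setup), the textbook scalings are:

```latex
% Standard query scaling for unstructured search over N = 2^n candidates.
\[
  T_{\mathrm{classical}} = \Theta(N) = \Theta\!\bigl(2^{n}\bigr),
  \qquad
  T_{\mathrm{Grover}} = \Theta\!\bigl(\sqrt{N}\bigr) = \Theta\!\bigl(2^{n/2}\bigr).
\]
```

Grover halves the exponent but leaves the running time exponential, which is why a merely quadratic speedup does nothing to make the decoding problem feasible.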

So now you understand why there was some hope that Alice, by jumping into a black hole, could solve a problem that’s exponentially hard for quantum computers! Namely because, once she’s inside the black hole, she can just see the shockwaves, or the volume of the wormhole, or whatever, and no longer faces the exponentially hard task of decoding that information from |ψ⟩. It’s as if the black hole has solved the problem for her, by physically instantiating the otherwise exponentially complex transformation between the bulk and boundary descriptions of |ψ⟩.

Having now gotten your hopes up, the next step in the story is to destroy them.


Here’s the fundamental problem: |ψ⟩ does not represent the CFT dual of a bulk universe that contains the black hole with the shockwaves or whatever, and that also contains Alice herself, floating outside the black hole, and being given |ψ⟩ as an input.  Indeed, it’s unclear what the latter state would even mean: how do we get around the circularity in its definition? How do we avoid an infinite regress, where |ψ⟩ would have to encode a copy of |ψ⟩ which would have to encode a copy of … and so on forever? Furthermore, who created this |ψ⟩ to give to Alice? We don’t normally imagine that an “input state” contains a complete description of the body and brain of the person whose job it is to learn the output.

By contrast, a scenario that we can define without circularity is this: Alice is given (via physical qubits, a giant table of amplitudes, an obfuscated quantum circuit, or whatever) a pure quantum state |ψ⟩, which represents the CFT dual of a hypothetical universe containing a black hole.  Alice wants to learn what shockwaves or wormholes are inside the black hole, a problem plausibly conjectured not to have any ordinary polynomial-size quantum circuit that takes copies of |ψ⟩ as input.  To “solve” the problem, Alice sets into motion the following sequence of events:

  1. Alice scans and uploads her own brain into a quantum computer, presumably destroying the original meat brain in the process! The QC represents Alice, who now exists only virtually, via a state |φ⟩.
  2. The QC performs entangling operations on |φ⟩ and |ψ⟩, which correspond to inserting Alice into the bulk of the universe described by |ψ⟩, and then having her fall into the black hole.
  3. Now in simulated form, “Alice” (or so we assume, depending on our philosophical position) has the subjective experience of falling into the black hole and observing what’s inside.  Success! Given |ψ⟩ as input, we’ve now caused “Alice” (for some definition of “Alice”) to have observed the answer to the beyond-BQP computational problem.

In the panel discussion, I now model Susskind as having proposed the scenario of steps 1-3, Witten as going along with steps 1-2 but rejecting step 3 or not wanting to discuss it, and me as having made valid points about the computational complexity of simulating Alice’s experience in steps 1-3, while being radically mistaken about what the scenario was (I still thought an actual black hole was involved).

An obvious question is whether, having learned the answer, “Alice” can now get the answer back out to the “real, original” world. Alas, the expectation is that this would require exponential time. Why? Because otherwise, this whole process would’ve constituted a subexponential-time algorithm for distinguishing random from pseudorandom states using an “ordinary” quantum computer! Which is conjectured not to exist.

And what about Alice herself? In polynomial time, could she return from “the Matrix,” back to a real-world biological body? Sure she could, in principle—if, for example, the entire quantum computation were run in reverse. But notice that reversing the computation would also make Alice forget the answer to the problem! Which is not at all a coincidence: if the problem is outside BQP, then in general, Alice can know the answer only while she’s “inside the Matrix.”

Now that hopefully everything is crystal-clear and we’re all on the same page, what can we say about this scenario?  In particular: should it cause us to reject or modify the QECTT itself?


Daniel Gottesman, I thought, offered a brilliant reductio ad absurdum of the view that the simulated black hole scenario should count as a refutation of the QECTT. Well, he didn’t call it a “reductio,” but I will.

For the reductio, let’s forget not only about quantum gravity but even about quantum mechanics itself, and go all the way back to classical computer science.  A fully homomorphic encryption scheme, the first example of which was discovered by Craig Gentry in 2009, lets you do arbitrary computations on encrypted data without ever needing to decrypt it.  It has both an encryption key, for encrypting the original plaintext data, and a separate decryption key, for decrypting the final answer.
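Just to fix the interface in code: here’s a deliberately tiny sketch, using unpadded textbook RSA, which is homomorphic only for multiplication and is of course neither Gentry’s scheme nor secure. All the parameters below are toy values chosen for illustration; the point is just the separation between a public encryption key (which Alice has) and a private decryption key (which she doesn’t).

```python
# Toy sketch of the homomorphic-evaluation interface. NOT Gentry's FHE and NOT
# secure: unpadded textbook RSA, which happens to be homomorphic for multiplication.
# Whoever holds only the public key (n, e) can compute on ciphertexts; only the
# holder of the private exponent d can read the result.

p, q = 61, 53                        # toy primes, far too small for real use
n = p * q                            # public modulus (3233)
e = 17                               # public encryption exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private decryption exponent (2753)

def encrypt(m):
    """Encrypt with the public key only."""
    return pow(m, e, n)

def decrypt(c):
    """Decrypt with the private key."""
    return pow(c, d, n)

# Two plaintexts, encrypted by whoever owns the data.
c1, c2 = encrypt(7), encrypt(11)

# Homomorphic step: multiplying ciphertexts yields a ciphertext of the product,
# without anyone decrypting along the way.
c_prod = (c1 * c2) % n

assert decrypt(c_prod) == 7 * 11     # only the key-holder ever sees 77
```

A full-blown FHE scheme extends this kind of trick from a single operation to arbitrary circuits, which is exactly what the brain-upload reductio below relies on.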

Now suppose Alice has some homomorphically encrypted top-secret emails, which she’d like to read.  She has the encryption key (which is public), but not the decryption key.

If the homomorphic encryption scheme is secure against quantum computers—as the schemes discovered by Gentry and later researchers currently appear to be—and if the QECTT is true, then Alice’s goal is obviously infeasible: decrypting the data will take her exponential time.

Now, however, a classical version of Lenny comes along, and explains to Alice that she simply needs to do the following:

  1. Upload her own brain state into a classical computer, destroying the “meat” version in the process (who needed it?).
  2. Using the known encryption key, homomorphically encrypt a computer program that simulates (and thereby, we presume, enacts) Alice’s consciousness.
  3. Using the homomorphically encrypted Alice-brain, together with the homomorphically encrypted input data, do the homomorphic computations that simulate the process of Alice’s brain reading the top-secret emails.

The claim would now be that, inside the homomorphic encryption, the simulated Alice has the subjective experience of reading the emails in the clear.  Aha, therefore she “broke” the homomorphic encryption scheme! Therefore, assuming that the scheme was secure even against quantum computers, the QECTT must be false!

According to Gottesman, this is almost perfectly analogous to Lenny’s black hole scenario.  In particular, they share the property that “encryption is easy but decryption is hard.”   Once she’s uploaded her brain, Alice can efficiently enter the homomorphically encrypted world to see the solution to a hard problem, just like she can efficiently enter the black hole world to do the same.  In both cases, however, getting back to her normal world with the answer would then take Alice exponential time.  Note that in the latter case, the difficulty is not so much about “escaping from a black hole,” as it is about inverting the AdS/CFT dictionary.

Going further, we can regard the AdS/CFT dictionary for regions behind event horizons as, itself, an example of a fully homomorphic encryption scheme—in this case, of course, one where the ciphertexts are quantum states.  This strikes me as potentially an important insight about AdS/CFT itself, even if that wasn’t Gottesman’s intention. It complements many other recent connections between AdS/CFT and theoretical computer science, including the view of AdS/CFT as a quantum error-correcting code, and the connection between AdS/CFT and the Max-Flow/Min-Cut Theorem (see also my talk about my work with Jason Pollack).

So where’s the reductio?  Well, when it’s put so starkly, I suspect that not many would regard Gottesman’s classical homomorphic encryption scenario as a “real” challenge to the QECTT.  Or rather, people might say: yes, this raises fascinating questions for the philosophy of mind, but at any rate, we’re no longer talking about physics.  Unlike with (say) quantum computing, no new physical phenomenon is being brought to light that lets an otherwise intractable computational problem be solved.  Instead, it’s all about the user herself, about Alice, and which physical systems get to count as instantiating her.

It’s like, imagine Alice at the computer store, weighing which laptop to buy. Besides weight, battery life, and price, she definitely does care about processing power. She might even consider a quantum computer, if one is available. Maybe even a computer with a black hole, wormhole, or closed timelike curve inside: as long as it gives the answers she wants, what does she care about the innards? But a computer whose normal functioning would (pessimistically) kill her or (optimistically) radically change her own nature, trapping her in a simulated universe that she can escape only by forgetting the computer’s output? Yeah, I don’t envy the computer salesman.

Anyway, if we’re going to say this about the homomorphic encryption scenario, then shouldn’t we say the same about the simulated black hole scenario?  Again, from an “external” perspective, all that’s happening is a giant BQP computation.  Anything beyond BQP that we consider to be happening, depends on adopting the standpoint of an observer who “jumps into the homomorphic encryption on the CFT boundary”—at which point, it would seem, we’re no longer talking about physics but about philosophy of mind.


So, that was the story! I promised you that it would integrally involve black holes, holography, the Quantum Extended Church-Turing Thesis, fully homomorphic encryption, and brain uploading, and I hope to have delivered on my promise.

Of course, while this blog post has forever cleared up all philosophical confusions about AdS/CFT and the Quantum Extended Church-Turing Thesis, many questions of a more technical nature remain. For example: what about the original scenario? Can we argue that the experiences of bulk observers can be simulated in BQP, even when those observers jump into black holes? Also, what can we say about the complexity class of problems to which the simulated Alice can learn the answers? Could she even solve NP-complete problems in polynomial time this way, or at least invert one-way functions? More broadly, what’s the power of “BQP with an oracle for applying the AdS/CFT dictionary”—once or multiple times, in one direction or both directions?

Lenny himself described his gedankenexperiment as exploring the power of a new complexity class that he called “JI/poly,” where the JI stands for “Jumping In” (to a black hole, that is). The nomenclature is transparently ridiculous—“/poly” means “with polynomial-size advice,” which we’re not talking about here—and I’ve argued in this post that the “JI” is rather misleading as well. If Alice is “jumping” anywhere, it’s not into a black hole per se, but into a quantum computer that simulates a CFT that’s dual to a bulk universe containing a black hole.

In a broader sense, though, to contemplate these questions at all is clearly to “jump in” to … something. It’s old hat by now that one can start in physics and end up in philosophy: what else is the quantum measurement problem, or the Boltzmann brain problem, or anthropic cosmological puzzles like whether (all else equal) we’re a hundred times as likely to find ourselves in a universe with a hundred times as many observers? More recently, it’s also become commonplace that one can start in physics and end in computational complexity theory: quantum computing itself is the example par excellence, but over the past decade, the Harlow-Hayden argument about decoding Hawking radiation and the complexity = action proposal have made clear that it can happen even in quantum gravity.

Lenny’s new gedankenexperiment, however, is the first case I’ve seen where you start out in physics, and end up embroiled in some of the hardest questions of philosophy of mind and computational complexity theory simultaneously.

Linkz!

Saturday, July 9th, 2022

(1) Fellow CS theory blogger (and, 20 years ago, member of my PhD thesis committee) Luca Trevisan interviews me about Shtetl-Optimized, for the Bulletin of the European Association for Theoretical Computer Science. Questions include: what motivates me to blog, who my main inspirations are, my favorite posts, whether blogging has influenced my actual research, and my thoughts on the role of public intellectuals in the age of social-media outrage.

(2) Anurag Anshu, Nikolas Breuckmann, and Chinmay Nirkhe have apparently proved the NLTS (No Low-Energy Trivial States) Conjecture! This is considered a major step toward a proof of the famous Quantum PCP Conjecture, which—speaking of one of Luca Trevisan’s questions—was first publicly raised right here on Shtetl-Optimized back in 2006.

(3) The Microsoft team has finally released its promised paper about the detection of Majorana zero modes (“this time for real”), a major step along the way to creating topological qubits. See also this live YouTube peer review—is that a thing now?—by Vincent Mourik and Sergey Frolov, the latter having been instrumental in the retraction of Microsoft’s previous claim along these lines. I’ll leave further discussion to people who actually understand the experiments.

(4) I’m looking forward to the 2022 Conference on Computational Complexity less than two weeks from now, in my … safe? clean? beautiful? awe-inspiring? … birth-city of Philadelphia. There I’ll listen to a great lineup of talks, including one by my PhD student William Kretschmer on his joint work with me and DeVon Ingram on The Acrobatics of BQP, and co-receive the CCC Best Paper Award (wow! thanks!) for that work. I look forward to meeting some old and new Shtetl-Optimized readers there.

Computer scientists crash the Solvay Conference

Thursday, June 9th, 2022

Thanks so much to everyone who sent messages of support following my last post! I vowed there that I’m going to stop letting online trolls and sneerers occupy so much space in my mental world. Truthfully, though, while there are many trolls and sneerers who terrify me, there are also some who merely amuse me. A good example of the latter came a few weeks ago, when an anonymous commenter calling themselves “String Theorist” submitted the following:

It’s honestly funny to me when you [Scott] call yourself a “nerd” or a “prodigy” or whatever [I don’t recall ever calling myself a “prodigy,” which would indeed be cringe, though “nerd” certainly —SA], as if studying quantum computing, which is essentially nothing more than glorified linear algebra, is such an advanced intellectual achievement. For what it’s worth I’m a theoretical physicist, I’m in a completely different field, and I was still able to learn Shor’s algorithm in about half an hour, that’s how easy this stuff is. I took a look at some of your papers on arXiv and the math really doesn’t get any more advanced than linear algebra. To understand quantum circuits about the most advanced concept is a tensor product which is routinely covered in undergraduate linear algebra. Wheras in my field of string theory grasping, for instance, holographic dualities relating confirmal field theories and gravity requires vastly more expertise (years of advanced study). I actually find it pretty entertaining that you’ve said yourself you’re still struggling to understand QFT, which most people I’m working with in my research group were first exposed to in undergrad 😉 The truth is we’re in entirely different leagues of intelligence (“nerdiness”) and any of your qcomputing papers could easily be picked up by a first or second year math major. It’s just a joke that this is even a field (quantum complexity theory) with journals and faculty when the results in your papers that I’ve seen are pretty much trivial and don’t require anything more than undergraduate level maths.

Why does this sort of trash-talk, reminiscent of Luboš Motl, no longer ruffle me? Mostly because the boundaries between quantum computing theory, condensed matter physics, and quantum gravity, which were never clear in the first place, have steadily gotten fuzzier. Even in the 1990s, the field of quantum computing attracted amazing physicists—folks who definitely do know quantum field theory—such as Ed Farhi, John Preskill, and Ray Laflamme. Decades later, it would be fair to say that the physicists have banged their heads against many of the same questions that we computer scientists have banged our heads against, oftentimes in collaboration with us. And yes, there were cases where actual knowledge of particle physics gave physicists an advantage—with some famous examples being the algorithms of Farhi and collaborators (the adiabatic algorithm, the quantum walk on conjoined trees, the NAND-tree algorithm). There were other cases where computer scientists’ knowledge gave them an advantage: I wouldn’t know many details about that, but conceivably shadow tomography, BosonSampling, PostBQP=PP? Overall, it’s been what you wish every interdisciplinary collaboration could be.

What’s new, in the last decade, is that the scientific conversation centered around quantum information and computation has dramatically “metastasized,” to encompass not only a good fraction of all the experimentalists doing quantum optics and sensing and metrology and so forth, and not only a good fraction of all the condensed-matter theorists, but even many leading string theorists and quantum gravity theorists, including Susskind, Maldacena, Bousso, Hubeny, Harlow, and yes, Witten. And I don’t think it’s just that they’re too professional to trash-talk quantum information people the way commenter “String Theorist” does. Rather it’s that, because of the intellectual success of “It from Qubit,” we’re increasingly participating in the same conversations and working on the same technical questions. One particularly exciting such question, which I’ll have more to say about in a future post, is the truth or falsehood of the Quantum Extended Church-Turing Thesis for observers who jump into black holes.

Not to psychoanalyze, but I’ve noticed a pattern wherein, the more secure a scientist is about their position within their own field, the readier they are to admit ignorance about the neighboring fields, to learn about those fields, and to reach out to the experts in them, to ask simple or (as it usually turns out) not-so-simple questions.


I can’t imagine any better illustration of these tendencies than the 28th Solvay Conference on the Physics of Quantum Information, which I attended two weeks ago in Brussels on my 41st birthday.

As others pointed out, the proportion of women is not as high as we all wish, but it’s higher than in 1911, when there was exactly one: Madame Curie herself.

It was my first trip out of the US since before COVID—indeed, I’m so out of practice that I nearly missed my flights in both directions, in part because of my lack of familiarity with the COVID protocols for transatlantic travel, as well as the immense lines caused by those protocols. My former adviser Umesh Vazirani, who was also at the Solvay Conference, was proud.

The Solvay Conference is the venue where, legendarily, the fundamentals of quantum mechanics got hashed out between 1911 and 1927, by the likes of Einstein, Bohr, Planck, and Curie. (Einstein complained, in a letter, about being called away from his work on general relativity to attend a “witches’ sabbath.”) Remarkably, it’s still being held in Brussels every few years, and still funded by the same Solvay family that started it. The once-every-few-years schedule has, we were constantly reminded, been interrupted only three times in its 110-year history: once for WWI, once for WWII, and now once for COVID (this year’s conference was supposed to be in 2020).

This was the first ever Solvay conference organized around the theme of quantum information, and apparently, the first ever that counted computer scientists among its participants (me, Umesh Vazirani, Dorit Aharonov, Urmila Mahadev, and Thomas Vidick). There were four topics: (1) many-body physics, (2) quantum gravity, (3) quantum computing hardware, and (4) quantum algorithms. The structure, apparently unchanged since the conference’s founding, is this: everyone attends every session, without exception. They sit around facing each other the whole time; no one ever stands to lecture. For each topic, two “rapporteurs” introduce the topic with half-hour prepared talks; then there are short prepared response talks as well as an hour or more of unstructured discussion. Everything everyone says is recorded in order to be published later.


Daniel Gottesman and I were the two rapporteurs for quantum algorithms: Daniel spoke about quantum error-correction and fault-tolerance, and I spoke about “How Much Structure Is Needed for Huge Quantum Speedups?” The link goes to my PowerPoint slides, if you’d like to check them out. I tried to survey 30 years of history of that question, from Simon’s and Shor’s algorithms, to huge speedups in quantum query complexity (e.g., glued trees and Forrelation), to the recent quantum supremacy experiments based on BosonSampling and Random Circuit Sampling, all the way to the breakthrough by Yamakawa and Zhandry a couple months ago. The last slide hypothesizes a “Law of Conservation of Weirdness,” which after all these decades still remains to be undermined: “For every problem that admits an exponential quantum speedup, there must be some weirdness in its detailed statement, which the quantum algorithm exploits to focus amplitude on the rare right answers.” My title slide also shows DALL-E2‘s impressionistic take on the title question, “how much structure is needed for huge quantum speedups?”:

The discussion following my talk was largely a debate between me and Ed Farhi, reprising many debates he and I have had over the past 20 years: Farhi urged optimism about the prospect for large, practical quantum speedups via algorithms like QAOA, pointing out his group’s past successes and explaining how they wouldn’t have been possible without an optimistic attitude. For my part, I praised the past successes and said that optimism is well and good, but at the same time, companies, venture capitalists, and government agencies are right now pouring billions into quantum computing, in many cases—as I know from talking to them—because of a mistaken impression that QCs are already known to be able to revolutionize machine learning, finance, supply-chain optimization, or whatever other application domains they care about, and to do so soon. They’re genuinely surprised to learn that the consensus of QC experts is in a totally different place. And to be clear: among quantum computing theorists, I’m not at all unusually pessimistic or skeptical, just unusually willing to say in public what others say in private.

Afterwards, one of the string theorists said that Farhi’s arguments with me had been a highlight … and I agreed. What’s the point of a friggin’ Solvay Conference if everyone’s just going to agree with each other?


Besides quantum algorithms, there was naturally lots of animated discussion about the practical prospects for building scalable quantum computers. While I’d hoped that this discussion might change the impressions I’d come with, it mostly confirmed them. Yes, the problem is staggeringly hard. Recent ideas for fault-tolerance, including the use of LDPC codes and bosonic codes, might help. Gottesman’s talk gave me the insight that, at its core, quantum fault-tolerance is all about testing, isolation, and contact-tracing, just for bit-flip and phase-flip errors rather than viruses. Alas, we don’t yet have the quantum fault-tolerance analogue of a vaccine!

At one point, I asked the trapped-ion experts in open session if they’d comment on the startup company IonQ, whose stock price recently fell precipitously in the wake of a scathing analyst report. Alas, none of them took the bait.

On a different note, I was tremendously excited by the quantum gravity session. Netta Engelhardt spoke about her and others’ celebrated recent work explaining the Page curve of an evaporating black hole using Euclidean path integrals—and by questioning her and others during coffee breaks, I finally got a handwavy intuition for how it works. There was also lots of debate, again at coffee breaks, about Susskind’s recent speculations on observers jumping into black holes and the quantum Extended Church-Turing Thesis. One of my main takeaways from the conference was a dramatically better understanding of the issues involved there—but that’s a big enough topic that it will need its own post.

Toward the end of the quantum gravity session, the experimentalist John Martinis innocently asked what actual experiments, or at least thought experiments, had been at issue for the past several hours. I got a laugh by explaining to him that, while the gravity experts considered this too obvious to point out, the thought experiments in question all involve forming a black hole in a known quantum pure state, with total control over all the Planck-scale degrees of freedom; then waiting outside the black hole for ~10^70 years; collecting every last photon of Hawking radiation that comes out and routing them all into a quantum computer; doing a quantum computation that might actually require exponential time; and then jumping into the black hole, whereupon you might either die immediately at the event horizon, or else learn something in your last seconds before hitting the singularity, which you could then never communicate to anyone outside the black hole. Martinis thanked me for clarifying.


Anyway, I had a total blast. Here I am amusing some of the world’s great physicists by letting them mess around with GPT-3.

Back: Ahmed Almheiri, Juan Maldacena, John Martinis, Aron Wall. Front: Geoff Penington, me, Daniel Harlow. Thanks to Michelle Simmons for the photo.

I also had the following exchange at my birthday dinner:

Physicist: So I don’t get this, Scott. Are you a physicist who studied computer science, or a computer scientist who studied physics?

Me: I’m a computer scientist who studied computer science.

Physicist: But then you…

Me: Yeah, at some point I learned what a boson was, in order to invent BosonSampling.

Physicist: And your courses in physics…

Me: They ended at thermodynamics. I couldn’t handle PDEs.

Physicist: What are the units of h-bar?

Me: Uhh, well, it’s a conversion factor between energy and time. (*)

Physicist: Good. What’s the radius of the hydrogen atom?

Me: Uhh … not sure … maybe something like 10^-15 meters?

Physicist: OK fine, he’s not one of us.

(The answer, it turns out, is more like 10^-10 meters. I’d stupidly substituted the radius of the nucleus—or, y’know, a positively-charged hydrogen ion, i.e. proton. In my partial defense, I was massively jetlagged and at most 10% conscious.)

(*) Actually h-bar is a conversion factor between energy and 1/time, i.e. frequency, but the physicist accepted this answer.


Anyway, I look forward to attending more workshops this summer, seeing more colleagues who I hadn’t seen since before COVID, and talking more science … including branching out in some new directions that I’ll blog about soon. It does beat worrying about online trolls.

Back

Saturday, April 23rd, 2022

Thanks to everyone who asked whether I’m OK! Yeah, I’ve been living, loving, learning, teaching, worrying, procrastinating, just not blogging.


Last week, Takashi Yamakawa and Mark Zhandry posted a preprint to the arXiv, “Verifiable Quantum Advantage without Structure,” that represents some of the most exciting progress in quantum complexity theory in years. I wish I’d thought of it. tl;dr they show that relative to a random oracle (!), there’s an NP search problem that quantum computers can solve exponentially faster than classical ones. And yet this is 100% consistent with the Aaronson-Ambainis Conjecture!


A student brought my attention to Quantle, a variant of Wordle where you need to guess a true equation involving 1-qubit quantum states and unitary transformations. It’s really well-done! Possibly the best quantum game I’ve seen.


Last month, Microsoft announced on the web that it had achieved an experimental breakthrough in topological quantum computing: not quite the creation of a topological qubit, but some of the underlying physics required for that. This followed their needing to retract their previous claim of such a breakthrough, due to the criticisms of Sergey Frolov and others. One imagines that they would’ve taken far greater care this time around. Unfortunately, a research paper doesn’t seem to be available yet. Anyone with further details is welcome to chime in.


Woohoo! Maximum flow, maximum bipartite matching, matrix scaling, and isotonic regression on posets (among many others)—all algorithmic problems that I was familiar with way back in the 1990s—are now solvable in nearly-linear time, thanks to a breakthrough by Chen et al.! Many undergraduate algorithms courses will need to be updated.


For those interested, Steve Hsu recorded a podcast with me where I talk about quantum complexity theory.

Two new talks and an interview

Thursday, December 2nd, 2021
  1. A talk to UT Austin’s undergraduate math club (handwritten PDF notes) about Hao Huang’s proof of the Sensitivity Conjecture, and its implications for quantum query complexity and more. I’m still not satisfied that I’ve presented Huang’s beautiful proof as clearly and self-containedly as I possibly can, which probably just means I need to lecture on it a few more times.
  2. A Zoom talk at the QPQIS conference in Beijing (PowerPoint slides), setting out my most recent thoughts about Google’s and USTC’s quantum supremacy experiments and the continuing efforts to spoof them classically.
  3. An interview with me in Communications of the ACM, mostly about BosonSampling and the quantum lower bound for the collision problem.

Enjoy y’all!

The Acrobatics of BQP

Friday, November 19th, 2021

Just in case anyone is depressed this afternoon and needs something to cheer them up, students William Kretschmer, DeVon Ingram, and I have finally put out a new paper:

The Acrobatics of BQP

Abstract: We show that, in the black-box setting, the behavior of quantum polynomial-time (BQP) can be remarkably decoupled from that of classical complexity classes like NP. Specifically:

  – There exists an oracle relative to which NP^BQP ⊄ BQP^PH, resolving a 2005 problem of Fortnow. Interpreted another way, we show that AC^0 circuits cannot perform useful homomorphic encryption on instances of the Forrelation problem. As a corollary, there exists an oracle relative to which P=NP but BQP≠QCMA.

  – Conversely, there exists an oracle relative to which BQP^NP ⊄ PH^BQP.

  – Relative to a random oracle, PP=PostBQP is not contained in the “QMA hierarchy” QMA^QMA^QMA^…, and more generally PP ⊄ (MIP*)^(MIP*)^(MIP*)^… (!), despite the fact that MIP*=RE in the unrelativized world. This result shows that there is no black-box quantum analogue of Stockmeyer’s approximate counting algorithm.

  – Relative to a random oracle, Σ_{k+1}^P ⊄ BQP^{Σ_k^P} for every k.

  – There exists an oracle relative to which BQP = P^#P and yet PH is infinite. (By contrast, if NP⊆BPP, then PH collapses relative to all oracles.)

  – There exists an oracle relative to which P = NP ≠ BQP = P^#P.

To achieve these results, we build on the 2018 achievement by Raz and Tal of an oracle relative to which BQP⊄PH, and associated results about the Forrelation problem. We also introduce new tools that might be of independent interest. These include a “quantum-aware” version of the random restriction method, a concentration theorem for the block sensitivity of AC^0 circuits, and a (provable) analogue of the Aaronson-Ambainis Conjecture for sparse oracles.

Incidentally, particularly when I’ve worked on a project with students, I’m often tremendously excited and want to shout about it from the rooftops for the students’ sake … but then I also don’t want to use this blog to privilege my own papers “unfairly.” Can anyone suggest a principle that I should follow going forward?

Scott Aaronson, when reached for comment, said…

Tuesday, November 16th, 2021

About IBM’s new 127-qubit superconducting chip: As I told New Scientist, I look forward to seeing the actual details! As far as I could see, the marketing materials that IBM released yesterday take a lot of words to say absolutely nothing about what, to experts, is the single most important piece of information: namely, what are the gate fidelities? How deep of a quantum circuit can they apply? How have they benchmarked the chip? Right now, all I have to go on is a stats page for the new chip, which reports its average CNOT error as 0.9388—in other words, close to 1, or terrible! (But see also a tweet by James Wootton, which explains that such numbers are often highly misleading when a new chip is first rolled out.) Does anyone here have more information? Update (11/17): As of this morning, the average CNOT error has been updated to 2%. Thanks to multiple commenters for letting me know!

About the new simulation of Google’s 53-qubit Sycamore chip in 5 minutes on a Sunway supercomputer (see also here): This is an exciting step forward on the classical validation of quantum supremacy experiments, and—ironically, what currently amounts to almost the same thing—on the classical spoofing of those experiments. Congratulations to the team in China that achieved this! But there are two crucial things to understand. First, “5 minutes” refers to the time needed to calculate a single amplitude (or perhaps, several correlated amplitudes) using tensor network contraction. It doesn’t refer to the time needed to generate millions of independent noisy samples, which is what Google’s Sycamore chip does in 3 minutes. For the latter task, more like a week still seems to be needed on the supercomputer. (I’m grateful to Chu Guo, a coauthor of the new work who spoke in UT Austin’s weekly quantum Zoom meeting, for clarifying this point.) Second, the Sunway supercomputer has parallel processing power equivalent to approximately ten million of your laptop. Thus, even if we agreed that Google no longer had quantum supremacy as measured by time, it would still have quantum supremacy as measured by carbon footprint! (And this despite the fact that the quantum computer itself requires a noisy, closet-sized dilution fridge.) Even so, for me the new work underscores the point that quantum supremacy is not yet a done deal. Over the next few years, I hope that Google and USTC, as well as any new entrants to this race (IBM? IonQ? Harvard? Rigetti?), will push forward with more qubits and, even more importantly, better gate fidelities leading to higher Linear Cross-Entropy scores. Meanwhile, we theorists should try to do our part by inventing new and better protocols with which to demonstrate near-term quantum supremacy—especially protocols for which the classical verification is easier.

About the new anti-woke University of Austin (UATX): In general, I’m extremely happy for people to experiment with new and different institutions, and of course I’m happy for more intellectual activity in my adopted city of Austin. And, as Shtetl-Optimized readers will know, I’m probably more sympathetic than most to the reality of the problem that UATX is trying to solve—living, as we do, in an era when one academic after another has been cancelled for ideas that a mere decade ago would’ve been considered unexceptional, moderate, center-left. Having said all that, I wish I could feel more optimistic about UATX’s prospects. I found its website heavy on free-speech rhetoric but frustratingly light on what the new university is actually going to do: what courses it will offer, who will teach them, where the campus will be, etc. etc. Arguably this is all excusable for a university still in ramp-up mode, but had I been in their shoes, I might have held off on the public launch until I had at least some sample content to offer. Certainly, the fact that Steven Pinker has quit UATX’s advisory board is a discouraging sign. If UATX asks me to get involved—to lecture there, to give them advice about their CS program, etc.—I’ll consider it as I would any other request. So far, though, they haven’t.

About the Association for Mathematical Research: Last month, some colleagues invited me to join a brand-new society called the Association for Mathematical Research. Many of the other founders (Joel Hass, Abigail Thompson, Colin Adams, Richard Borcherds, Jeff Cheeger, Pavel Etingof, Tom Hales, Jeff Lagarias, Mark Lackenby, Cliff Taubes, …) were brilliant mathematicians who I admired, they seemed like they could use a bit of theoretical computer science representation, there was no time commitment, maybe they’d eventually do something good, so I figured why not? Alas, to say that AMR has proved unpopular on Twitter would be an understatement: it’s received the same contemptuous reception that UATX has. The argument seems to be: starting a new mathematical society, even an avowedly diverse and apolitical one, is really just an implicit claim that the existing societies, like the Mathematical Association of America (MAA) and the American Mathematical Society (AMS), have been co-opted by woke true-believers. But that’s paranoid and insane! I mean, it’s not as if an AMS blog has called for the mass resignation of white male mathematicians to make room for the marginalized, or the boycott of Israeli universities, or the abolition of the criminal justice system (what to do about Kyle Rittenhouse though?). Still, even though claims of that sort of co-option are obviously far-out, rabid fantasies, yeah, I did decide to give a new organization the benefit of the doubt. AMR might well fail or languish in obscurity, just like UATX might. On the other hand, the barriers to making a positive difference for the intellectual world, the world I love, the world under constant threat from the self-certain ideologues of every side, do strike me as orders of magnitude smaller for a new professional society than they do for a new university.

Gaussian BosonSampling, higher-order correlations, and spoofing: An update

Sunday, October 10th, 2021

In my last post, I wrote (among other things) about an ongoing scientific debate between the group of Chaoyang Lu at USTC in China, which over the past year has been doing experiments that seek to demonstrate quantum supremacy via Gaussian BosonSampling; and the group of Sergio Boixo at Google, which had a recent paper on a polynomial-time classical algorithm to sample approximately from the same distributions.  I reported the facts as I understood them at the time.  Since then, though, a long call with the Google team gave me a new and different understanding, and I feel duty-bound to share that here.

A week ago, I considered it obvious that if, using a classical spoofer, you could beat the USTC experiment on a metric like total variation distance from the ideal distribution, then you would’ve completely destroyed USTC’s claim of quantum supremacy.  The reason I believed that, in turn, is a proposition that I hadn’t given a name but needs one, so let me call it Hypothesis H:

The only way a classical algorithm to spoof BosonSampling can possibly do well in total variation distance, is by correctly reproducing the high-order correlations (correlations among the occupation numbers of large numbers of modes) — because that’s where the complexity of BosonSampling lies (if it lies anywhere).

Hypothesis H had important downstream consequences.  Google’s algorithm, by the Google team’s own admission, does not reproduce the high-order correlations.  Furthermore, because of limitations on both samples and classical computation time, Google’s paper calculates the total variation distance from the ideal distribution only on the marginal distribution on roughly 14 out of 144 modes.  On that marginal distribution, Google’s algorithm does do better than the experiment in total variation distance.  Google presents a claimed extrapolation to the full 144 modes, but eyeballing the graphs, it was far from clear to me what would happen: like, maybe the spoofing algorithm would continue to win, but maybe the experiment would turn around and win; who knows?

Chaoyang, meanwhile, made a clear prediction that the experiment would turn around and win, because of

  1. the experiment’s success in reproducing the high-order correlations,
  2. the admitted failure of Google’s algorithm in reproducing the high-order correlations, and
  3. the seeming impossibility of doing well on BosonSampling without reproducing the high-order correlations (Hypothesis H).

Given everything my experience told me about the central importance of high-order correlations for BosonSampling, I was inclined to agree with Chaoyang.

Now for the kicker: it seems that Hypothesis H is false.  A classical spoofer could beat a BosonSampling experiment on total variation distance from the ideal distribution, without even bothering to reproduce the high-order correlations correctly.

This is true because of a combination of two facts about the existing noisy BosonSampling experiments.  The first fact is that the contribution from the order-k correlations falls off like 1/exp(k).  The second fact is that, due to calibration errors and the like, the experiments already show significant deviations from the ideal distribution on the order-1 and order-2 correlations.

Put these facts together and what do you find?  Well, suppose your classical spoofing algorithm takes care to get the low-order contributions to the distribution exactly right.  Just for that reason alone, it could already win over a noisy BosonSampling experiment, as judged by benchmarks like total variation distance from the ideal distribution, or for that matter linear cross-entropy.  Yes, the experiment will beat the classical simulation on the higher-order correlations.  But because those higher-order correlations are exponentially attenuated anyway, they won’t be enough to make up the difference.  The experiment’s lack of perfection on the low-order correlations will swamp everything else.
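To see how that arithmetic could play out, here’s a toy numerical sketch. Every number in it is invented purely for illustration, and the “score” is a crude weighted sum of per-order errors rather than the actual total variation distance on the experimental distribution; the only structural assumption carried over from above is that the order-k contribution is attenuated like 1/exp(k).

```python
import math

# Toy model (all numbers made up for illustration) of how a spoofer that nails
# only the low-order correlations can still win on an aggregate distance-style
# score, once the order-k contributions fall off like 1/exp(k).

K = 20                                              # truncate the expansion at order 20
weight = [math.exp(-k) for k in range(1, K + 1)]    # assumed attenuation of order-k terms

# Hypothetical per-order relative errors:
exp_err   = [0.20 if k <= 2 else 0.02 for k in range(1, K + 1)]   # noisy experiment: calibration
                                                                  # errors on orders 1-2, high
                                                                  # orders reproduced well
spoof_err = [0.00 if k <= 2 else 1.00 for k in range(1, K + 1)]   # classical spoofer: exact on
                                                                  # orders 1-2, useless beyond

def aggregate(errors):
    """Crude proxy for a total-variation-style distance: weighted sum of per-order errors."""
    return sum(w * e for w, e in zip(weight, errors))

print("experiment:", round(aggregate(exp_err), 3))    # ~0.102
print("spoofer:   ", round(aggregate(spoof_err), 3))  # ~0.079  (smaller, so the spoofer "wins")
```

With those made-up numbers, the spoofer’s aggregate deviation comes out smaller than the experiment’s, even though the spoofer is hopeless on every correlation of order 3 and above.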

Granted, I still don’t know for sure that this is what happens — that depends on whether I believe Sergio or Chaoyang about the extrapolation of the variation distance to the full 144 modes (my own eyeballs having failed to render a verdict!).  But I now see that it’s logically possible, maybe even plausible.

So, let’s imagine for the sake of argument that Google’s simulation wins on variation distance, even though the experiment wins on the high-order correlations.  In that case, what would be our verdict: would USTC have achieved quantum supremacy via BosonSampling, or not?

It’s clear what each side could say.

Google could say: by a metric that Scott Aaronson, the coinventor of BosonSampling, thought was perfectly adequate as late as last week — namely, total variation distance from the ideal distribution — we won.  We achieved lower variation distance than USTC’s experiment, and we did it using a fast classical algorithm.  End of discussion.  No moving the goalposts after the fact.

Google could even add: BosonSampling is a sampling task; it’s right there in the name!  The only purpose of any benchmark — whether Linear XEB or high-order correlation — is to give evidence about whether you are or aren’t sampling from a distribution close to the ideal one.  But that means that, if you accept that we are doing the latter better than the experiment, then there’s nothing more to argue about.

USTC could respond: even if Scott Aaronson is the coinventor of BosonSampling, he’s extremely far from an infallible oracle.  In the case at hand, his lack of appreciation for the sources of error in realistic experiments caused him to fixate inappropriately on variation distance as the success criterion.  If you want to see the quantum advantage in our system, you have to deliberately subtract off the low-order correlations and look at the high-order correlations.

USTC could add: from the very beginning, the whole point of quantum supremacy experiments was to demonstrate a clear speedup on some benchmark — we never particularly cared which one!  That horse is out of the barn as soon as we’re talking about quantum supremacy at all, something the Google group, which itself reported the first quantum supremacy experiment in Fall 2019 (again for a completely artificial benchmark), knows as well as anyone else.  (The Google team even has experience with adjusting benchmarks: when, for example, Pan and Zhang pointed out that Linear XEB as originally specified is pretty easy to spoof for random 2D circuits, the most cogent rejoinder was: OK, fine then, add an extra check that the returned samples are sufficiently different from one another, which kills Pan and Zhang’s spoofing strategy.)  Why, then, isn’t a benchmark tailored to the high-order correlations just as good as variation distance or linear cross-entropy or any other benchmark?

Both positions are reasonable and have merit — though I confess to somewhat greater sympathy for the one that appeals to my doofosity rather than my supposed infallibility!

OK, but suppose, again for the sake of argument, that we accepted the second position, and we said that USTC gets to declare quantum supremacy as long as its experiment does better than any known classical simulation at reproducing the high-order correlations.  We’d still face the question: does the USTC experiment, in fact, do better on that metric?  It would be awkward if, having won the right to change the rules in its favor, USTC still lost even under the new rules.

Sergio tells me that USTC directly reported experimental data only for up to order-7 correlations, and at least individually, the order-7 correlations are easy to reproduce on a laptop (although sampling in a way that reproduces the order-7 correlations might still be hard—a point that Chaoyang confirms, and where further research would be great). OK, but USTC also reported that their experiment seems to reproduce correlations up to order 19. And the Google team agrees that no currently known classical algorithm can sample in a way consistent with those order-19 correlations.

So then, why don’t we have direct data for the order-19 correlations?  The trouble is simply that it would’ve taken USTC an astronomical amount of computation time.  So instead, they relied on a statistical extrapolation from the observed strength of the lower-order correlations — there we go again with the extrapolations!  Of course, if we’re going to let Google rest its case on an extrapolation, then maybe it’s only sporting to let USTC do the same.
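I don’t know the details of USTC’s fitting procedure; the sketch below, with invented numbers, just illustrates the generic kind of extrapolation at issue: fit the decay of the observed low-order signal and project it out to order 19.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measured correlation strengths at low orders (invented numbers
# that decay roughly exponentially in k, plus a little noise).
k_obs = np.arange(2, 8)
signal_obs = 0.5 * np.exp(-0.6 * k_obs) * rng.normal(1.0, 0.05, size=k_obs.size)

# Fit log(signal) as a linear function of k, then extrapolate to k = 19.
slope, intercept = np.polyfit(k_obs, np.log(signal_obs), deg=1)
print(f"extrapolated order-19 signal: {np.exp(intercept + slope * 19):.2e}")
```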

You might wonder: why didn’t we have to worry about any of this stuff with the other path to quantum supremacy, the one via random circuit sampling with superconducting qubits?  The reason is that, with random circuit sampling, all the correlations except the highest-order ones are completely trivial — or, to say it another way, the reduced state of any small number of output qubits is exponentially close to the maximally mixed state.  This is a real difference between BosonSampling and random circuit sampling—and even 5-6 years ago, we knew that this represented an advantage for random circuit sampling, although I now have a deeper appreciation for just how great of an advantage it is.  For it means that, with random circuit sampling, it’s easier to place a “sword in the stone”: to say, for example, here is the Linear XEB score achieved by the trivial classical algorithm that outputs random bits, and lo, our experiment achieves a higher score, and lo, we challenge anyone to invent a fast classical spoofing method that achieves a similarly high score.
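To make the “sword in the stone” concrete, here’s a minimal sketch of the Linear XEB benchmark in Python.  It uses a Porter-Thomas-style toy distribution (probabilities drawn from an exponential distribution and normalized) as a stand-in for a real random circuit’s output distribution; that stand-in, the qubit count, and the sample sizes are all assumptions for illustration, not a model of any actual device.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 12                      # number of qubits in the toy example
N = 2 ** n

# Stand-in for a random circuit's output distribution: Porter-Thomas-like,
# i.e. probabilities drawn from an exponential distribution and normalized.
# (A toy assumption, not a simulation of any actual circuit.)
p_ideal = rng.exponential(size=N)
p_ideal /= p_ideal.sum()

def linear_xeb(samples):
    """Linear cross-entropy benchmark: F = N * mean(p_ideal(x)) - 1."""
    return N * p_ideal[samples].mean() - 1

num_samples = 200_000

# "Experiment": samples drawn from the ideal distribution itself.
ideal_samples = rng.choice(N, size=num_samples, p=p_ideal)

# Trivial classical spoofer: output uniformly random bit strings.
uniform_samples = rng.integers(0, N, size=num_samples)

print(f"Linear XEB, ideal sampler:   {linear_xeb(ideal_samples):.3f}")   # ~1
print(f"Linear XEB, uniform sampler: {linear_xeb(uniform_samples):.3f}") # ~0
```

The trivial spoofer scores about 0, the ideal sampler scores about 1, and the challenge to the world is then to beat the experiment’s score with any fast classical algorithm.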

With BosonSampling, by contrast, we have various metrics with which to judge performance, but so far, for none of those metrics do we have a plausible hypothesis that says “here’s the best that any polynomial-time classical algorithm can possibly hope to do, and it’s completely plausible that even a noisy current or planned BosonSampling experiment can do better than that.”

In the end, then, I come back to the exact same three goals I would’ve recommended a week ago for the future of quantum supremacy experiments, but with all of them now even more acutely important than before:

  1. Experimentally, to increase the fidelity of the devices (with BosonSampling, for example, to observe a larger contribution from the high-order correlations) — a much more urgent goal, from the standpoint of evading classical spoofing algorithms, than further increasing the dimensionality of the Hilbert space.
  2. Theoretically, to design better ways to verify the results of sampling-based quantum supremacy experiments classically — ideally, even ways that could be applied via polynomial-time tests.
  3. For Gaussian BosonSampling in particular, to get a better understanding of the plausible limits of classical spoofing algorithms, and exactly how good a noisy device needs to be before it exceeds those limits.

Thanks so much to Sergio Boixo and Ben Villalonga for the conversation, and to Chaoyang Lu and Jelmer Renema for comments on this post. Needless to say, any remaining errors are my own.

The Physics Nobel, Gaussian BosonSampling, and Dorian Abbot

Tuesday, October 5th, 2021

1. Huge congratulations to the winners of this year’s Nobel Prize in Physics: Syukuro Manabe and Klaus Hasselmann for climate modelling, and separately, Giorgio Parisi for statistical physics. While I don’t know the others, I had the great honor to get to know Parisi three years ago, when he was chair of the committee that awarded me the Tomassoni-Chisesi Prize in Physics, and when I visited Parisi’s department at Sapienza University of Rome to give the prize lecture and collect the award. I remember Parisi’s kindness, a lot of good food, and a lot of discussion of the interplay between theoretical computer science and physics. Note that, while much of Parisi’s work is beyond my competence to comment on, in computer science he’s very well-known as a co-inventor of survey propagation, a statistical-physics-inspired algorithm that revolutionized the study of random 3SAT when it was introduced two decades ago.


2. Two weeks ago, a group at Google put out a paper with a new efficient classical algorithm to simulate the recent Gaussian BosonSampling experiments from USTC in China. They argued that this algorithm called into question USTC’s claim of BosonSampling-based quantum supremacy. Since then, I’ve been in contact with Sergio Boixo from Google, Chaoyang Lu from USTC, and Jelmer Renema, a Dutch BosonSampling expert and friend of the blog, to try to get to the bottom of this. Very briefly, the situation seems to be that Google’s new algorithm outperforms the USTC experiment on one particular metric: namely, total variation distance from the ideal marginal distribution, if (crucially) you look at only a subset of the optical modes, say 14 modes out of 144 total. Meanwhile, though, if you look at the kth-order correlations for large values of k, then the USTC experiment continues to win. With the experiment, the correlations fall off exponentially with k but still have a meaningful, detectable signal even for (say) k=19, whereas with Google’s spoofing algorithm, you choose the k that you want to spoof (say, 2 or 3), and then the correlations become nonsense for larger k.

Now, given that you were only ever supposed to see a quantum advantage from BosonSampling if you looked at the kth-order correlations for large values of k, and given that we already knew, from the work of Leonid Gurvits, that very small marginals in BosonSampling experiments would be easy to reproduce on a classical computer, my inclination is to say that USTC’s claim of BosonSampling-based quantum supremacy still stands. On the other hand, it’s true that, with BosonSampling especially, more so than with qubit-based random circuit sampling, we currently lack an adequate theoretical understanding of what the target should be. That is, which numerical metric should an experiment aim to maximize, and how well does it have to score on that metric before it’s plausibly outperforming any fast classical algorithm? One thing I feel confident about is that, whichever metric is chosen—Linear Cross-Entropy or whatever else—it needs to capture the kth-order correlations for large values of k. No metric that’s insensitive to those correlations is good enough.
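For concreteness, here’s a sketch of what “capturing the kth-order correlations” can mean operationally.  It estimates k-mode connected correlations (joint cumulants of the detector-click variables) from an array of 0/1 samples, assuming threshold detectors for simplicity.  This is my own illustration, not the analysis code of either group, and the toy data, with correlations injected through a shared latent variable, is of course made up.

```python
import math
from itertools import combinations
import numpy as np

def set_partitions(items):
    """Generate all partitions of a small list of indices."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for partition in set_partitions(rest):
        # Put `first` into each existing block in turn...
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        # ...or into a block of its own.
        yield [[first]] + partition

def connected_correlation(clicks, modes):
    """Order-k connected correlation (joint cumulant) of the 0/1 click
    variables for the given modes, estimated from a (samples, modes) array
    via the standard moment-to-cumulant formula."""
    def moment(block):
        return np.prod(clicks[:, block], axis=1).mean()
    total = 0.0
    for partition in set_partitions(list(modes)):
        b = len(partition)
        total += ((-1) ** (b - 1) * math.factorial(b - 1)
                  * np.prod([moment(block) for block in partition]))
    return total

# Toy data: 100k shots on 8 threshold detectors, with correlations injected
# through a shared latent variable.  Purely illustrative, not real data.
rng = np.random.default_rng(1)
latent = rng.random((100_000, 1)) ** 2
clicks = (rng.random((100_000, 8)) < 0.2 + 0.5 * latent).astype(np.int8)

for k in (2, 3, 4):
    vals = [connected_correlation(clicks, m) for m in combinations(range(8), k)]
    print(f"order-{k}: mean |connected correlation| = {np.mean(np.abs(vals)):.4f}")
```

In a real comparison, the question is how much signal survives in quantities like these as k grows, which is exactly where the experiment is supposed to retain its edge over the spoofer.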


3. Like many others, I was outraged and depressed that MIT uninvited Dorian Abbot (see also here), a geophysicist at the University of Chicago, who was slated to give the Carlson Lecture in the Department of Earth, Atmospheric, and Planetary Sciences about the atmospheres of extrasolar planets. The reason for the cancellation was that, in a matter totally unrelated to his scheduled lecture, Abbot had argued in Newsweek and elsewhere that Diversity, Equity, and Inclusion initiatives should aim for equality of opportunity rather than equality of outcomes; a Twitter mob decided to go after him in retaliation, and it succeeded. It should go without saying that it’s perfectly reasonable to disagree with Abbot’s stance and to counterargue—if those very concepts haven’t gone the way of floppy disks. It should also go without saying that the MIT EAPS department chair is free to bow to social-media pressure, as he did, rather than standing on principle … just like I’m free to criticize him for it. To my mind, though, cancelling a scientific talk because of the speaker’s centrist (!) political views completely, 100% validates the right’s narrative about academia, that it’s become a fanatically intolerant echo chamber. To my fellow progressive academics, I beseech thee in the bowels of Bertrand Russell: why would you commit such an unforced error?

Yes, one can imagine views (e.g., open Nazism) so hateful that they might justify the cancellation of unrelated scientific lectures by people who hold those views, as many physicists after WWII refused to speak to Werner Heisenberg. But it seems obvious to me—as it would’ve been obvious to everyone else not long ago—that no matter where a reasonable person draws the line, Abbot’s views as he expressed them in Newsweek don’t come within a hundred miles of it. To be more explicit still: if Abbot’s views justify deplatforming him as a planetary scientist, then all my quantum computing and theoretical computer science lectures deserve to be cancelled too, for the many attempts I’ve made on this blog over the past 16 years to share my honest thoughts and life experiences, to write like a vulnerable human being rather than like a university press office. While I’m sure some sneerers gleefully embrace that implication, I ask everyone else to consider how deeply they believe in the idea of academic freedom at all—keeping in mind that such a commitment only ever gets tested when there’s a chance someone might denounce you for it.

Update: Princeton’s James Madison Program has volunteered to host Abbot’s Zoom talk in place of MIT. The talk is entitled “Climate and the Potential for Life on Other Planets.” Like probably hundreds of others who heard about this only because of the attempted cancellation, I plan to attend!

Unrelated Bonus Update: Here’s a neat YouTube video put together by the ACM about me as well as David Silver of AlphaGo and AlphaZero, on the occasion of our ACM Prizes in Computing.