Quantum computing motte-and-baileys
Saturday, December 28th, 2019

In the wake of two culture-war posts—the first on the term “quantum supremacy,” the second on the acronym “NIPS”—it’s clear that we all need to cool off with something anodyne and uncontroversial. Fortunately, this holiday season, I know just the thing to bring everyone together: groaning about quantum computing hype!
When I was at the Q2B conference in San Jose, I learned about lots of cool stuff that’s happening in the wake of Google’s quantum supremacy announcement. I heard about the 57-qubit superconducting chip that the Google group is now building, following up on its 53-qubit one; and also about their first small-scale experimental demonstration of my certified randomness protocol. I learned about recent progress on costing out the numbers of qubits and gates needed to do fault-tolerant quantum simulations of useful chemical reactions (IIRC, maybe a hundred thousand qubits and a few hours’ worth of gates—scary, but not Shor’s algorithm scary).
I also learned about two claims about quantum algorithms, made by startups, that are being widely misinterpreted. The basic pattern is one that I’ve come to know well over the years, and which you could call a science version of the motte-and-bailey. (For those not up on nerd blogosphere terminology: in medieval times, the motte was a dank castle to which you’d retreat while under attack; the bailey was the desirable land that you’d farm once the attackers left.)
To wit:
- Startup makes claims that have both a true boring interpretation (e.g., you can do X with a quantum computer) and a false exciting interpretation (e.g., you can do X with a quantum computer, and it would actually make sense to do this, because you’ll get an asymptotic speedup over the best known classical algorithm).
- Lots of business and government people get all excited, because they assume the false exciting interpretation must be true (or why else would everyone be talking about this?). Some of those people ask me for comment.
- I look into it, perhaps by asking the folks at the startup. The startup folks clarify that they meant only the true boring interpretation. To be sure, they’re actively exploring the false exciting interpretation—whether some parts of it might be true after all—but they’re certainly not making any claims about it that would merit, say, a harsh post on Shtetl-Optimized.
- Satisfied that I’ve gotten to the bottom of things, I send the startup folks on their merry way.
- Yet many people continue to seem as excited as if the false exciting interpretation had been shown to be true. They continue asking me questions that presuppose its truth.
Our first instance of this pattern is the recent claim, by Zapata Computing, to have set a world record for integer factoring (1,099,551,473,989 = 1,048,589 × 1,048,601) with a quantum computer, by running a QAOA/variational algorithm on IBM’s superconducting device. Gosh! That sure sounds a lot better than the 21 that’s been factored with Shor’s algorithm, doesn’t it?
I read the Zapata paper that this is based on, entitled “Variational Quantum Factoring,” and I don’t believe that a single word in it is false. My issue is something the paper omits: namely, that once you’ve reduced factoring to a generic optimization problem, you’ve thrown away all the mathematical structure that Shor’s algorithm cleverly exploits, and that makes factoring asymptotically easy for a quantum computer. And hence there’s no reason to expect your quantum algorithm to scale any better than brute-force trial division (or in the most optimistic scenario, trial division enhanced with Grover search). On large numbers, your algorithm will be roundly outperformed even by classical algorithms that do exploit structure, like the Number Field Sieve. Indeed, the quantum computer’s success at factoring the number will have had little or nothing to do with its being quantum at all—a classical optimization algorithm would’ve served as well. And thus, the only reasons to factor a number on a quantum device in this way would seem to be things like calibrating the device.
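To make that last point concrete, here is a minimal classical sketch (my own illustration, not anything from the Zapata paper): plain trial division, the brute-force baseline just mentioned, recovers the record factorization almost instantly, since the smaller factor is only about 2^20.

```python
# A brute-force classical baseline (my illustration, not Zapata's method):
# trial division finds the smaller ~20-bit factor after roughly half a
# million divisions, with no quantum hardware and no cleverness whatsoever.

def trial_division(n: int) -> int:
    """Return the smallest nontrivial divisor of a composite n."""
    if n % 2 == 0:
        return 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2
    return n  # n was prime after all

N = 1_099_551_473_989
p = trial_division(N)
q = N // p
assert p * q == N
print(p, q)  # expected: 1048589 1048601
```

Shor’s algorithm, by contrast, is interesting precisely because it beats not only this baseline but also the Number Field Sieve asymptotically, by exploiting the period-finding structure that the optimization reformulation throws away.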
Admittedly, to people who work in quantum algorithms, everything above is so obvious that it doesn’t need to be said. But I learned at Q2B that there are interested people for whom this is not obvious, and even comes as a revelation. So that’s why I’m saying it.
Again and again over the past twenty years, I’ve seen people reinvent the notion of a “simpler alternative” to Shor’s algorithm: one that cuts out all the difficulty of building a fault-tolerant quantum computer. In every case, the trouble, typically left unstated, has been that these alternatives also cut out the exponential speedup that’s Shor’s algorithm’s raison d’être.
Our second example today of a quantum computing motte-and-bailey is the claim, by Toronto-based quantum computing startup Xanadu, that Gaussian BosonSampling can be used to solve all sorts of graph problems, like graph isomorphism, graph similarity, and densest subgraph. As the co-inventor of BosonSampling, I can think of few things that would warm my heart more than finding an actual application for that model (besides quantum supremacy experiments and, perhaps, certified random number generation). But I still regard this as an open problem—if by “application” we mean outperforming what you could’ve done classically.
In papers (see for example here, here, here), members of the Xanadu team have given all sorts of ways to take a graph, and encode it into an instance of Gaussian BosonSampling, in such a way that the output distribution will then reveal features of the graph, like its isomorphism type or its dense subgraphs. The trouble is that so far, I’ve seen no indications that this will actually lead to quantum algorithms that outperform the best classical algorithms, for any graph problems of practical interest.
In the case of Densest Subgraph, the Xanadu folks use the output of a Gaussian BosonSampler to seed (that is, provide an initial guess for) a classical local search algorithm. They say they observe better results this way than if they seed that classical local search algorithm with completely random initial conditions. But of course, the real question is: could we get equally good results by seeding with the output of some classical heuristic? Or by solving Densest Subgraph with a different approach entirely? Given how hard it’s turned out to be just to verify that the outputs of a BosonSampling device come from such a device at all, it would seem astonishing if the answer to these questions wasn’t “yes.”
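For concreteness, here is a toy sketch of the control experiment that question asks for; the graph model, subgraph size, and swap-based local search below are all my own arbitrary choices, not anything from the Xanadu papers. It seeds the same greedy search for a dense k-vertex subgraph once with random vertex subsets and once with a cheap classical heuristic, namely the k highest-degree vertices.

```python
# Toy control experiment (hypothetical setup, not Xanadu's): seed the same
# local search for a dense k-subgraph with (a) random subsets and (b) the
# k highest-degree vertices, and compare the subgraph densities found.

import random
from itertools import combinations

def random_graph(n, p, rng):
    """Erdos-Renyi G(n, p) as a dict of adjacency sets."""
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def density(adj, S):
    """Fraction of possible edges present inside the vertex set S."""
    S = set(S)
    edges_inside = sum(1 for u in S for v in adj[u] if v in S) // 2
    return edges_inside / (len(S) * (len(S) - 1) / 2)

def local_search(adj, seed_set, rng, iters=3000):
    """Greedy swaps: replace a vertex of S with an outside vertex
    whenever doing so increases the density of S."""
    S = set(seed_set)
    best = density(adj, S)
    for _ in range(iters):
        u = rng.choice(sorted(S))
        v = rng.choice([w for w in adj if w not in S])
        T = (S - {u}) | {v}
        d = density(adj, T)
        if d > best:
            S, best = T, d
    return best

rng = random.Random(0)
n, k = 60, 10
adj = random_graph(n, 0.15, rng)
# Plant a denser subgraph on vertices 0..k-1 so there is something to find.
for u, v in combinations(range(k), 2):
    if rng.random() < 0.8:
        adj[u].add(v)
        adj[v].add(u)

random_score = max(local_search(adj, rng.sample(range(n), k), rng) for _ in range(5))
degree_seed = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k]
degree_score = local_search(adj, degree_seed, rng)
print("best of 5 random seeds:", random_score)
print("degree-heuristic seed: ", degree_score)
```

Nothing about this toy settles the question, of course; the point is only that the fair comparison is against classical seeding heuristics of this kind, not against purely random initial conditions.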
In the case of Graph Isomorphism, the situation is even clearer. There, the central claim made by the Xanadu folks is that given a graph G, they can use a Gaussian BosonSampling device to sample a probability distribution that encodes G’s isomorphism type. So, isn’t this “promising” for solving GI with a quantum computer? All you’d need to do now is invent some fast classical algorithm that could look at the samples coming from two graphs G and H, and tell you whether the probability distributions were the same.
Except, not really. While the Xanadu paper never says so, if all you want is to sample a distribution that encodes a graph’s isomorphism type, that’s easy to do classically! (I even put this on the final exam for my undergraduate Quantum Information Science course a couple weeks ago.) Here’s how: given as input a graph G, just output G but with its vertices randomly permuted. Indeed, this will even provide a further property, better than anything the BosonSampling approach has been shown to provide (or than it probably does provide): namely, if G and H are not isomorphic, then the two probability distributions will not only be different but will have disjoint supports. Alas, this still leaves us with the problem of distinguishing which distribution a given sample came from, which is as hard as Graph Isomorphism itself. None of these approaches, classical or quantum, seem to lead to any algorithm that’s subexponential time, let alone competitive with the “Babai approach” of thinking really hard about graphs.
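For anyone who wants it spelled out, here is the classical sampler just described, in a few lines of Python (the edge-list representation and the canonical sorting are my own arbitrary choices): output the input graph with its vertices uniformly randomly relabeled. The resulting distribution depends only on the graph’s isomorphism type, and non-isomorphic graphs yield distributions with disjoint supports.

```python
# Classical sampler whose output distribution encodes a graph's isomorphism
# type: just output the graph with its vertices uniformly randomly permuted.

import random

def sample_relabeled(edges, n, rng=random):
    """Return a uniformly random relabeled copy of the graph on vertices
    0..n-1 given by `edges`, as a canonically sorted tuple of edges."""
    perm = list(range(n))
    rng.shuffle(perm)
    relabeled = {tuple(sorted((perm[u], perm[v]))) for (u, v) in edges}
    return tuple(sorted(relabeled))

# Example: samples from a 4-cycle are always graphs isomorphic to C4,
# and could never be confused with samples from, say, a 4-vertex star.
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(sample_relabeled(C4, 4))
```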
All of this stuff falls victim to what I regard as the Fundamental Error of Quantum Algorithms Research: namely, to treat it as “promising” that a quantum algorithm works at all, or works better than some brute-force classical algorithm, without asking yourself whether there are any indications that your approach will ever be able to exploit interference of amplitudes to outperform the best classical algorithm.
Incidentally, I’m not sure exactly why, but in practice, a major red flag that the Fundamental Error is about to be committed is when someone starts talking about “hybrid quantum/classical algorithms.” By this they seem to mean: “outside the domain of traditional quantum algorithms, so don’t judge us by the standards of that domain.” But I liked the way someone at Q2B put it to me: every quantum algorithm is a “hybrid quantum/classical algorithm,” with classical processors used wherever they can be, and qubits used only where they must be.
The other thing people do, when challenged, is to say “well, admittedly we have no rigorous proof of an asymptotic quantum speedup”—thereby brilliantly reframing the whole conversation, to make people like me look like churlish theoreticians insisting on an impossible and perhaps irrelevant standard of rigor, blind to some huge practical quantum speedup that’s about to change the world. The real issue, of course, is not that they haven’t given a proof of a quantum speedup (in either the real world or the black-box world); rather, it’s that they’ve typically given no reasons whatsoever to think that there might be a quantum speedup, compared to the best classical algorithms available.
In the holiday spirit, let me end on a positive note. When I did the Q&A at Q2B—the same one where Sarah Kaiser asked me to comment on the term “quantum supremacy”—one of my answers touched on the most important theoretical open problems about sampling-based quantum supremacy experiments. At the top of the list, I said, was whether there’s some interactive protocol by which a near-term quantum computer can not only exhibit quantum supremacy, but prove it to a polynomial-time-bounded classical skeptic. I mentioned that there was one proposal for how to do this, in the IQP model, due to Bremner and Shepherd, from way back in 2008. I said that their proposal deserved much more attention than it had received, and that trying to break it would be one obvious thing to work on. Little did I know that, literally while I was speaking, a paper was being posted to the arXiv, by Gregory Kahanamoku-Meyer, that claims to break Bremner and Shepherd’s protocol. I haven’t yet studied the paper, but assuming it’s correct, it represents the first clear progress on this problem in years (albeit of a negative kind). Cool!!