Update (Dec. 16): If you’re still following this, please check out an important comment by Alex Selby, the discoverer of Selby’s algorithm, which I discussed in the post. Selby queries a few points in the Google paper: among other things, he disagrees with their explanation of why his classical algorithm works so well on D-Wave’s Chimera graph (and with their prediction that it should stop working for larger graphs), and he explains that Karmarkar-Karp is not the best known classical algorithm for the Number Partitioning problem. He also questions whether simulated annealing is the benchmark against which everything should be compared (on the grounds that “everything else requires fine-tuning”), pointing out that SA itself typically requires lots of tuning to get it to work well.
Update (Dec. 11): MIT News now has a Q&A with me about the new Google paper. I’m really happy with how the Q&A turned out; people who had trouble understanding this blog post might find the Q&A easier. Thanks very much to Larry Hardesty for arranging it.
Meanwhile, I feel good that there seems to have been actual progress in the D-Wave debate! In previous rounds, I had disagreed vehemently with some of my MIT colleagues (like Ed Farhi and Peter Shor) about the best way to respond to D-Wave’s announcements. Today, though, at our weekly group meeting, there was almost no daylight between any of us. Partly, I’m sure, it’s that I’ve learned to express myself better; partly it’s that the “trigger” this time was a serious research paper by a group separate from D-Wave, rather than some trash-talking statement from Geordie Rose. But mostly it’s that, thanks to the Google group’s careful investigations, this time pretty much anyone who knows anything agrees about all the basic facts, as I laid them out in this blog post and in the Q&A. All that remains are some small differences in emotional attitude: e.g., how much of your time do you want to spend on a speculative, “dirty” approach to quantum computing (which is far ahead of everyone else in terms of engineering and systems integration, but which still shows no signs of an asymptotic speedup over the best classical algorithms, which is pretty unsurprising given theoretical expectations), at a time when the “clean” approaches might finally be closing in on the long-sought asymptotic quantum speedup?
Another Update: Daniel Lidar was nice enough to email me an important observation, and to give me permission to share it here. Namely, the D-Wave 2X has a minimum annealing time of 20 microseconds. Because of this, the observed running times for small instance sizes are artificially forced upward, making the growth rate in the machine’s running time look milder than it really is. (Regular readers might remember that exactly the same issue plagued previous D-Wave vs. classical performance comparisons.) Correcting this would certainly decrease the D-Wave 2X’s predicted speedup over simulated annealing, in extrapolations to larger numbers of qubits than have been tested so far (although Daniel doesn’t know by how much). Daniel stresses that he’s not criticizing the Google paper, which explicitly mentions the minimum annealing time—just calling attention to something that deserves emphasis.
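To see concretely how a floor on the annealing time flattens an observed scaling curve, here's a minimal sketch (with hypothetical numbers of my own choosing, not data from the paper):

```python
# Illustration with hypothetical numbers: a hardware floor on the
# annealing time masks the true scaling at small problem sizes.
import math

T_FLOOR = 20e-6  # the D-Wave 2X's minimum annealing time: 20 microseconds

def true_time(n, a=1e-9, b=0.05):
    """A made-up 'true' solver time that grows exponentially with size n."""
    return a * math.exp(b * n)

for n in (100, 200, 400, 600, 800):
    t = true_time(n)
    observed = max(t, T_FLOOR)  # every small instance reports ~20 us
    print(f"n={n:3d}  true={t:.2e}s  observed={observed:.2e}s")
```

Every instance below the crossover size reports the same ~20 microseconds, so a curve fit through the observed points understates the growth rate, which is exactly why correcting for the floor would shrink the extrapolated speedup over simulated annealing.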
In retrospect, I should’ve been suspicious, when more than a year went by with no major D-Wave announcements that everyone wanted me to react to immediately. Could it really be that this debate was over—or not “over,” but where it always should’ve been, in the hands of experts who might disagree vehemently but are always careful to qualify speedup claims—thereby freeing up the erstwhile Chief D-Wave Skeptic for more “””rewarding””” projects, like charting a middle path through the Internet’s endless social justice wars?
Nope.
As many of you will have seen by now, on Monday a team at Google put out a major paper reporting new experiments on the D-Wave 2X machine. (See also Hartmut Neven’s blog post about this.) The predictable popularized version of the results—see for example here and here—is that the D-Wave 2X has now demonstrated a factor-of-100-million speedup over standard classical chips, thereby conclusively putting to rest the question of whether the device is “truly a quantum computer.” In the comment sections of one of my previous posts, D-Wave investor Steve Jurvetson even tried to erect a victory stele, by quoting Karl Popper about falsification.
In situations like this, the first thing I do is turn to Matthias Troyer, who’s arguably the planet’s most balanced, knowledgeable, trustworthy interpreter of quantum annealing experiments. Happily, in collaboration with Ilia Zintchenko and Ethan Brown, Matthias was generous enough to write a clear 3-page document putting the new results into context, and to give me permission to share it on this blog. From a purely scientific standpoint, my post could end right here, with a link to their document.
Then again, from a purely scientific standpoint, the post could’ve ended even earlier, with the link to the Google paper itself! For this is not a case where the paper hides some crucial issue that the skeptics then need to ferret out. On the contrary, the paper’s authors include some of the most careful people in the business, and the paper explains the caveats as clearly as one could ask. In some sense, then, all that’s left for me or Matthias to do is to tell you what you’d learn if you read the paper!
So, OK, has the D-Wave 2X demonstrated a factor-10^8 speedup or not? Here’s the shortest answer that I think is non-misleading:
Yes, there’s a factor-10^8 speedup that looks clearly asymptotic in nature, and there’s also a factor-10^8 speedup over Quantum Monte Carlo. But the asymptotic speedup is only if you compare against simulated annealing, while the speedup over Quantum Monte Carlo is only constant-factor, not asymptotic. And in any case, both speedups disappear if you compare against other classical algorithms, like that of Alex Selby. Also, the constant-factor speedup probably has less to do with quantum mechanics than with the fact that D-Wave built extremely specialized hardware, which was then compared against a classical chip on the problem of simulating the specialized hardware itself (i.e., on Ising spin minimization instances with the topology of D-Wave’s Chimera graph). Thus, while there’s been genuine, interesting progress, it remains uncertain whether D-Wave’s approach will lead to speedups over the best known classical algorithms, let alone to speedups over the best known classical algorithms that are also asymptotic or also of practical importance. Indeed, all of these points also remain uncertain for quantum annealing as a whole.
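To make the benchmark concrete: an Ising spin minimization instance is a set of couplings J_ij and fields h_i, and the goal is to find spins s_i ∈ {-1,+1} minimizing E(s) = Σ J_ij s_i s_j + Σ h_i s_i. Here’s a bare-bones version of that energy function plus a generic simulated-annealing solver (my own sketch, not the paper’s tuned implementation; the real benchmarks use couplings on D-Wave’s Chimera graph and carefully chosen schedules, rather than the random instance and linear schedule below):

```python
# A bare-bones sketch: the Ising energy function, and generic simulated
# annealing with single-spin flips and a linear temperature schedule.
import math
import random
from collections import defaultdict

def ising_energy(s, J, h):
    """E(s) = sum_(i,j) J_ij s_i s_j + sum_i h_i s_i, with each s_i in {-1, +1}."""
    return (sum(w * s[i] * s[j] for (i, j), w in J.items())
            + sum(hi * si for hi, si in zip(h, s)))

def simulated_annealing(J, h, n, sweeps=2000, t_hot=5.0, t_cold=0.05, seed=0):
    rng = random.Random(seed)
    nbrs = defaultdict(list)  # neighbor lists, so a flip costs O(degree)
    for (i, j), w in J.items():
        nbrs[i].append((j, w))
        nbrs[j].append((i, w))
    s = [rng.choice([-1, 1]) for _ in range(n)]
    for sweep in range(sweeps):
        # Linear temperature schedule from t_hot down to t_cold.
        t = t_hot + (t_cold - t_hot) * sweep / max(sweeps - 1, 1)
        for i in range(n):
            local = h[i] + sum(w * s[j] for j, w in nbrs[i])
            delta = -2 * s[i] * local  # energy change if spin i is flipped
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                s[i] = -s[i]
    return s, ising_energy(s, J, h)

# Toy usage on a random instance (the real benchmarks use Chimera-structured J):
rng = random.Random(1)
n = 16
J = {(i, j): rng.choice([-1.0, 1.0])
     for i in range(n) for j in range(i + 1, n) if rng.random() < 0.3}
h = [0.0] * n
print(simulated_annealing(J, h, n)[1])
```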
To expand a bit, there are really three separate results in the Google paper:
- The authors create Chimera instances with tall, thin energy barriers blocking the way to the global minimum, by exploiting the 8-qubit “clusters” that play such a central role in the Chimera graph. In line with a 2002 theoretical prediction by Farhi, Goldstone, and Gutmann (a prediction we’ve often discussed on this blog), they then find that on these special instances, quantum annealing reaches the global minimum exponentially faster than classical simulated annealing, and that the D-Wave machine realizes this advantage. As far as I’m concerned, this completely nails down the case for computationally-relevant collective quantum tunneling in the D-Wave machine, at least within the 8-qubit clusters. On the other hand, the authors point out that there are other classical algorithms, like that of Selby (building on Hamze and de Freitas), which group together the 8-bit clusters into 256-valued mega-variables, and thereby get rid of the energy barrier that kills simulated annealing. These classical algorithms are found empirically to outperform the D-Wave machine (a toy version of the cluster trick is shown in the first code sketch after this list). The authors also match the D-Wave machine’s asymptotic performance (though not the leading constant) using Quantum Monte Carlo, which (despite its name) is a classical algorithm often used to find quantum-mechanical ground states.
- The authors make a case that the ability to tunnel past tall, thin energy barriers—i.e., the central advantage that quantum annealing has been shown to have over classical annealing—might be relevant to at least some real-world optimization problems. They do this by studying a classic NP-hard problem called Number Partitioning, where you’re given a list of N positive integers, and your goal is to partition the integers into two subsets whose sums differ from each other by as little as possible. Through numerical studies on classical computers, they find that quantum annealing (in the ideal case) and Quantum Monte Carlo should both outperform simulated annealing, by roughly equal amounts, on random instances of Number Partitioning. Note that this part of the paper doesn’t involve any experiments on the D-Wave machine itself, so we don’t know whether calibration errors, encoding loss, etc. will kill the theoretical advantage over simulated annealing. But even if not, this still wouldn’t yield a “true quantum speedup,” since (again) Quantum Monte Carlo is a perfectly-good classical algorithm, whose asymptotics match those of quantum annealing on these instances. (See the second code sketch after this list for what Number Partitioning looks like concretely.)
- Finally, on the special Chimera instances with the tall, thin energy barriers, the authors find that the D-Wave 2X reaches the global optimum about 10^8 times faster than Quantum Monte Carlo running on a single-core classical computer. But, extremely interestingly, they also find that this speedup does not grow with problem size; instead it simply saturates at ~10^8. In other words, this is a constant-factor speedup rather than an asymptotic one. Now, obviously, solving a problem “only” 100 million times faster (rather than asymptotically faster) can still have practical value! But it’s crucial to remember that this constant-factor speedup is only observed for the Chimera instances—or in essence, for “the problem of simulating the D-Wave machine itself”! If you wanted to solve something of practical importance, you’d first need to embed it into the Chimera graph, and it remains unclear whether any of the constant-factor speedup would survive that embedding. In any case, while the paper isn’t explicit about this, I gather that the constant-factor speedup disappears when one compares against (e.g.) the Selby algorithm, rather than against QMC.
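To make the “mega-variable” idea from the first result concrete, here’s a toy version of the cluster trick (my own illustration, not Selby’s actual code): holding a cluster’s neighbors fixed, exactly minimize over all 2^8 = 256 joint configurations of the cluster’s 8 spins, so that no energy barrier inside the cluster ever has to be climbed spin-by-spin.

```python
# A toy version (not Selby's actual code) of treating an 8-spin Chimera
# cluster as a single 256-valued mega-variable: conditioned on its (fixed)
# neighbors, exactly minimize the cluster's energy by brute-force
# enumeration of all 2^8 joint configurations.
from itertools import product

def best_cluster_state(cluster, J, h, s):
    """cluster: list of 8 spin indices; J: dict (i, j) -> coupling;
    h: dict index -> field; s: current global assignment, index -> +/-1."""
    inside = set(cluster)
    best_cfg, best_e = None, float("inf")
    for cfg in product([-1, 1], repeat=len(cluster)):
        trial = dict(s)
        trial.update(zip(cluster, cfg))
        # Terms not touching the cluster are the same for every cfg,
        # so only edges and fields that touch the cluster matter here.
        e = sum(w * trial[i] * trial[j] for (i, j), w in J.items()
                if i in inside or j in inside)
        e += sum(h[i] * trial[i] for i in cluster)
        if e < best_e:
            best_cfg, best_e = cfg, e
    return dict(zip(cluster, best_cfg))
```

Sweeping this exact update over the Chimera unit cells is what removes the tall, thin barriers: simulated annealing must flip the 8 spins one at a time, passing through high-energy intermediate states, whereas the cluster update jumps directly between the wells.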
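And for the second result: Number Partitioning is easy to state in code. Here, for concreteness, is the Karmarkar-Karp differencing heuristic that Selby mentions in the Dec. 16 update at the top (a standard textbook version, my own sketch): repeatedly commit the two largest numbers to opposite sides of the partition, replacing them by their difference; the one number left at the end is the achieved gap between the two subset sums.

```python
# Karmarkar-Karp differencing heuristic for Number Partitioning: returns
# the achieved difference between the two subset sums (not the partition).
import heapq

def karmarkar_karp(numbers):
    # heapq is a min-heap, so store negated values to pop the largest first.
    heap = [-x for x in numbers]
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)        # largest remaining number
        b = -heapq.heappop(heap)        # second largest
        heapq.heappush(heap, -(a - b))  # commit a and b to opposite sides
    return -heap[0] if heap else 0

# It's only a heuristic: karmarkar_karp([8, 7, 6, 5, 4]) returns 2,
# while the true optimum is 0 (e.g. {8, 7} vs. {6, 5, 4}).
```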
So then, what do I say to Steve Jurvetson? I say—happily, not grudgingly!—that the new Google paper provides the clearest demonstration so far of a D-Wave device’s capabilities. But then I remind him of all the worries the QC researchers had from the beginning about D-Wave’s whole approach: the absence of error-correction; the restriction to finite-temperature quantum annealing (moreover, using “stoquastic Hamiltonians”), for which we lack clear evidence for a quantum speedup; the rush for more qubits rather than better qubits. And I say: not only do all these worries remain in force, they’ve been thrown into sharper relief than ever, now that many of the side issues have been dealt with. The D-Wave 2X is a remarkable piece of engineering. If it’s still not showing an asymptotic speedup over the best known classical algorithms—as the new Google paper clearly explains that it isn’t—then the reasons are not boring or trivial ones. Rather, they seem related to fundamental design choices that D-Wave made over a decade ago.
The obvious question now is: can D-Wave improve its design, in order to get a speedup that’s asymptotic, and that holds against all classical algorithms (including QMC and Selby’s algorithm), and that survives the encoding of a “real-world” problem into the Chimera graph? Well, maybe or maybe not. The Google paper returns again and again to the subject of planned future improvements to the machine, and how they might clear the path to a “true” quantum speedup. Roughly speaking, if we rule out radical alterations to D-Wave’s approach, there are four main things one would want to try, to see if they helped:
- Lower temperatures (and thus, longer qubit lifetimes, and smaller spectral gaps that can be safely gotten across without jumping up to an excited state).
- Better calibration of the qubits and couplings (and thus, ability to encode a problem of interest, like the Number Partitioning problem mentioned earlier, to greater precision).
- The ability to apply “non-stoquastic” Hamiltonians. (D-Wave’s existing machines are all limited to stoquastic Hamiltonians, defined as Hamiltonians all of whose off-diagonal entries are real and non-positive. While stoquastic Hamiltonians are easier from an engineering standpoint, they’re also the easiest kind to simulate classically, using algorithms like QMC—so much so that there’s no consensus on whether it’s even theoretically possible to get a true quantum speedup using stoquastic quantum annealing. This is a subject of active research; the definition is restated as a short code check after this list.)
- Better connectivity among the qubits (thereby reducing the huge loss that comes from taking problems of practical interest, and encoding them in the Chimera graph).
(Note that “more qubits” is not on this list: if a “true quantum speedup” is possible at all with D-Wave’s approach, then the 1000+ qubits that they already have seem like more than enough to notice it.)
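Since “stoquastic” carries a lot of weight in the third item above, here is the definition restated as code (a sketch using NumPy, for a Hamiltonian written out as a matrix; note that stoquasticity depends on the choice of basis, which here is the computational basis):

```python
# The definition of "stoquastic," restated as code: in a given basis, a
# Hamiltonian is stoquastic if all its off-diagonal entries are real and
# non-positive. (A sketch; stoquasticity depends on the choice of basis.)
import numpy as np

def is_stoquastic(H, tol=1e-12):
    H = np.asarray(H)
    off_diag = H[~np.eye(H.shape[0], dtype=bool)]  # all off-diagonal entries
    return bool((np.abs(off_diag.imag) < tol).all()
                and (off_diag.real <= tol).all())

# Example: a transverse-field driver term -x * sigma_x (with x >= 0), the
# kind D-Wave's machines implement, is stoquastic; +x * sigma_x is not.
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
print(is_stoquastic(-1.0 * sigma_x))  # True
print(is_stoquastic(+1.0 * sigma_x))  # False
```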
Anyway, these are all, of course, things D-Wave knows about and will be working on in the near future. As well they should! But to repeat: even if D-Wave makes all four of these improvements, we still have no idea whether they’ll see a true, asymptotic, Selby-resistant, encoding-resistant quantum speedup. We just can’t say for sure that they won’t see one.
In the meantime, while it’s sometimes easy to forget during blog-discussions, the field of experimental quantum computing is a proper superset of D-Wave, and things have gotten tremendously more exciting on many fronts within the last year or two. In particular, the group of John Martinis at Google (Martinis is one of the coauthors of the Google paper) now has superconducting qubits with orders of magnitude better coherence times than D-Wave’s qubits, and has demonstrated rudimentary quantum error-correction on 9 of them. They’re now talking about scaling up to ~40 super-high-quality qubits with controllable couplings—not in the remote future, but in, like, the next few years. If and when they achieve that, I’m extremely optimistic that they’ll be able to show a clear quantum advantage for something (e.g., some BosonSampling-like sampling task), if not necessarily something of practical importance. IBM Yorktown Heights, which I visited last week, is also working (with IARPA funding) on integrating superconducting qubits with many-microsecond coherence times. Meanwhile, some of the top ion-trap groups, like Chris Monroe’s at the University of Maryland, are talking similarly big about what they expect to be able to do soon. The “academic approach” to QC—which one could summarize as “understand the qubits, control them, keep them alive, and only then try to scale them up”—is finally bearing some juicy fruit.
(At last week’s IBM conference, there was plenty of D-Wave discussion; how could there not be? But the physicists in attendance—I was almost the only computer scientist there—seemed much more interested in approaches that aim for longer-lasting qubits, fault-tolerance, and a clear asymptotic speedup.)
I still have no idea when and if we’ll have a practical, universal, fault-tolerant QC, capable of factoring 10,000-digit numbers and so on. But it’s now looking like only a matter of years until Gil Kalai, and the other quantum computing skeptics, will be forced to admit they were wrong—which was always the main application I cared about anyway!
So yeah, it’s a heady time for QC, with many things coming together faster than I’d expected (then again, it was always my personal rule to err on the side of caution, and thereby avoid contributing to runaway spirals of hype). As we stagger ahead into this new world of computing—bravely, coherently, hopefully non-stoquastically, possibly fault-tolerantly—my goal on this blog will remain what it’s been for a decade: not to prognosticate, not to pick winners, but merely to try to understand and explain what has and hasn’t already been shown.
Update (Dec. 10): Some readers might be interested in an economic analysis of the D-Wave speedup by commenter Carl Shulman.
Another Update: Since apparently some people didn’t understand this post, here are some comments from a Y-Combinator thread about the post that might be helpful:
(1) [T]he conclusion of the Google paper is that we have probable evidence that with enough qubits and a big enough problem it will be faster for a very specific problem compared to a non-optimal classical algorithm (we have ones that are for sure better).
This probably sounds like a somewhat useless result (quantum computer beats B-team classical algorithm), but it is in fact interesting because D-Wave’s computers are designed to perform quantum annealing and they are comparing it to simulated annealing (the somewhat analogous classical algorithm). However, they only found evidence of a constant speedup (i.e. one that 4000 qubits wouldn’t help with), though a large one, compared to a somewhat better algorithm (Quantum Monte Carlo, which is ironically not a quantum algorithm), and they still can’t beat an even better classical algorithm (Selby’s) at all, even in a way that won’t scale.
Scott’s central thesis is that although it is possible there could be a turning point past 2000 qubits where the D-Wave will beat our best classical alternative, none of the data collected so far suggests that. So it’s possible that a 4000-qubit D-Wave machine will exhibit this trend, but there is no evidence of it (yet) from examining a 2000-qubit machine. Scott’s central gripe with D-Wave’s approach is that they don’t have even a pie-in-the-sky theoretical reason to expect this to happen, and scaling up quantum computers without breaking the entire process is much harder than for classical computers, so making them even bigger doesn’t seem like a solution.
(2) D-Wave machines are NOT gate-model quantum computers; they call their machines quantum annealing machines. It is not known what complexity class of problems quantum annealing machines can solve efficiently, or whether that class is equivalent to what classical machines can solve efficiently.
The result shows that the D-Wave machine is asymptotically faster than the Simulated Annealing algorithm (yay!), which suggests that it is executing the Quantum Annealing algorithm. However, the paper also explicitly states that this does not mean that the D-Wave machine is exhibiting a ‘quantum speedup’. To show that, they would need it to outperform the best known classical algorithm, which, as the paper acknowledges, it does not.
What the paper does seem to be showing is that the machine in question is actually fundamentally quantum in nature; it’s just not clear yet that the type of quantum computer it is represents an improvement over classical ones.
(3) [I]t isn’t called out in the linked blog since by now Scott probably considers it basic background information, but D-Wave only solves a very particular problem, and it is not entirely clear either that it solves that problem better than a classical algorithm can, or that encoding real problems into that problem won’t end up costing you all of the gains. Really pragmatic applications are still a ways into the future. It’s hard to imagine what they might be when we’re still so early in the process, and still have no good idea what either the practical or theoretical limits are.
(4) The popular perception of quantum computers as “doing things in parallel” is very misleading. A quantum computer lets you perform computation on a superposed state while maintaining that superposition. But that only helps if the structure of the problem lets you somehow “cancel out” the incorrect results leaving you with the single correct one. [There’s hope for the world! –SA]
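To put a few lines of code behind comment (4), here is the simplest example of amplitudes canceling (my illustration, not the commenter’s): apply a Hadamard gate twice to |0⟩, and the two computational paths leading to |1⟩ interfere destructively.

```python
# Interference in a few lines: apply the Hadamard gate twice to |0>.
# The two computational paths into |1> have amplitudes +1/2 and -1/2,
# so they cancel, and measurement returns |0> with certainty.
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard gate
state = np.array([1.0, 0.0])                          # start in |0>
print(H @ state)        # ~[0.707, 0.707]: equal superposition of |0>, |1>
print(H @ (H @ state))  # ~[1.0, 0.0]: the |1> amplitudes interfered away
```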