Archive for the ‘Speaking Truth to Parallelism’ Category

Speaking truth to parallelism

Thursday, February 15th, 2007

Today The Economist put out this paragon of hard-hitting D-Wave journalism. At first I merely got angry — but then I asked myself what Winston Churchill, Carl Sagan, or Chip ‘n Dale’s Rescue Rangers would do. So let’s see if The Economist prints the following.

SIR —

In a remarkably uncritical article about D-Wave’s announcement of the “world’s first practical quantum computer” (Feb. 15), you gush that “[i]n principle, by putting a set of entangled qubits into a suitably tuned magnetic field, the optimal solution to a given NP-complete problem can be found in one shot.” This is simply incorrect. Today it is widely believed that quantum computers could not solve NP-complete problems in a reasonable amount of time. Indeed, the view of quantum computers as able to “try all possible solutions in parallel,” and then instantly choose the correct one, is fundamentally mistaken. Since measurement outcomes in quantum mechanics are random, one can only achieve a computational speedup by carefully exploiting the phenomenon known as quantum interference. And while it is known how to use interference to achieve dramatic speedups for a few problems — such as factorising large integers, and thereby breaking certain cryptographic codes — those problems are much more specialised than the NP-complete problems.

Over the past few days, many news outlets have shown a depressing willingness to reprint D-Wave’s marketing hype, without even attempting to learn why most quantum computing researchers are sceptical. I expected better from The Economist.

Scott Aaronson
Institute for Quantum Computing
University of Waterloo

I thought ‘factorising,’ ‘specialised,’ and ‘sceptical’ were a nice touch.
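The interference that the letter appeals to can be seen in even a one-qubit toy calculation (an illustrative sketch, not part of the original letter): apply a Hadamard gate twice to the state |0⟩, and the two computational “paths” leading to |1⟩ carry amplitudes of opposite sign and cancel exactly — something probabilities, being nonnegative, can never do.

```python
import math

h = 1 / math.sqrt(2)   # Hadamard amplitude

# One Hadamard on |0> yields equal amplitudes on |0> and |1>.
a0, a1 = h, h

# A second Hadamard recombines the two paths.  The amplitudes
# reaching |1> have opposite signs and cancel exactly:
b0 = h * a0 + h * a1   # constructive interference
b1 = h * a0 - h * a1   # destructive interference

assert abs(b0 - 1.0) < 1e-9   # all amplitude returns to |0>
assert b1 == 0.0              # |1> is never observed
```

Engineering such cancellations among the wrong answers, rather than “trying everything at once,” is where genuine quantum speedups come from.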

A Final Thought (2/16): As depressing as it is to see a lazy magazine writer, with a single harebrained sentence, cancel out your entire life’s work twenty times over, some good may yet come from this travesty. Where before I was reticent in the war against ignorant misunderstandings of quantum computing, now I have been radicalized — much as 9/11 radicalized Richard Dawkins in his war against religion. We, the quantum complexity theorists, are far from helpless in this fight. We, too, can launch a public relations campaign. We can speak truth to parallelism. So doofuses of the world: next time you excrete the words “NP-complete,” “solve,” and “instantaneous” anywhere near one another, brace yourselves for a Bennett-Bernstein-Brassard-Blitzirani the likes of which the multiverse has never seen.

Update (2/16): If you read the comments, Geordie Rose responds to me, and I respond to his response. Also see the comments on my earlier D-Wave post for more entertaining and ongoing debate.

The Orion Quantum Computer Anti-Hype FAQ

Friday, February 9th, 2007

Grudgingly offered for your reading pleasure, and in the vain hope of forestalling further questions.

Q: Thanks to D-Wave Systems — a startup company that’s been in the news lately for its soon-to-be-unveiled “Orion” quantum computer — is humanity now on the verge of being able to solve NP-complete problems in polynomial time?

A: No. We’re also not on the verge of being able to build perpetual-motion machines or travel faster than light.

Q: But couldn’t quantum computers try all possible solutions in parallel, and thereby solve NP-complete problems in a heartbeat?

A: Yes, if the heart in question were beating exponentially slowly.

Otherwise, no. Contrary to widespread misconception, a quantum computer could not “try all possible solutions in parallel” in the sense most people mean by this. In particular, while quantum computers would apparently provide dramatic speedups for a few “structured” problems (such as factoring integers and simulating physical systems), it’s conjectured that they couldn’t solve NP-complete problems in polynomial time.

Q: But isn’t factoring an NP-complete problem?

A: Good heavens, no! While factoring is believed to be intractable for classical computers, it’s not NP-complete, unless some exceedingly unlikely things happen in complexity theory. Where did you get the idea that factoring was NP-complete? (Now I know how Richard Dawkins must feel when someone asks him to explain, again, how “life could have arisen by chance.”)

Q: How could the people at D-Wave not understand that quantum computers couldn’t solve NP-complete problems in polynomial time?

A: To his credit, Geordie Rose (the founder of D-Wave) does understand this; see here for example. And yet, essentially every article I’ve read about D-Wave gives readers exactly the opposite impression. The charitable explanation is that the D-Wave folks are being selectively quoted or misquoted by journalists seeking to out-doofus one another. If so, one hopes that D-Wave will try harder in the future to avoid misunderstandings.

Q: But even if it gave only polynomial speedups (as opposed to exponential ones), couldn’t the adiabatic quantum computer that D-Wave built still be useful for industrial optimization problems?

A: D-Wave’s current machine is said to have sixteen qubits. Even assuming it worked perfectly, with no decoherence or error, a sixteen-qubit quantum computer would be about as useful for industrial optimization problems as a roast-beef sandwich.
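To put a number on the roast-beef comparison (a back-of-the-envelope sketch, not from the original post, with a toy objective function of my own invention): everything a sixteen-qubit optimizer could conceivably search over is brute-forced classically in a blink.

```python
import time

# A sixteen-qubit optimizer addresses a search space of only
# 2**16 = 65536 candidates -- small enough that a classical
# machine sweeps all of them near-instantly.
def best_assignment(score):
    """Exhaustively maximize `score` over all 16-bit assignments."""
    return max(range(2 ** 16), key=score)

start = time.perf_counter()
winner = best_assignment(lambda x: bin(x).count("1"))  # toy objective
elapsed = time.perf_counter() - start

assert winner == 2 ** 16 - 1   # all-ones maximizes the bit count
```

Any speedup, quadratic or otherwise, only matters once the search space outgrows what exhaustive classical search handles trivially.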

Q: But even if it wasn’t practically useful, wouldn’t a 16-qubit superconducting quantum computer still be a major scientific advance?

A: Yes, absolutely.

Q: So, can D-Wave be said to have achieved that goal?

A: As Dave Bacon pointed out earlier, it’s impossible to answer that question without knowing more about how their machine works. With sixteen qubits, a “working demo” doesn’t prove anything. The real questions are: how high are the fidelities, and what are the prospects for scalability?

Q: But clearly D-Wave isn’t going to give away its precious trade secrets just to satisfy some niggling academics! Short of providing technical specifics, what else could they do to make computer scientists take them seriously?

A: Produce the prime factors of

1847699703211741474306835620200164403018549
3386634101714717857749106516967111612498593
3768430543574458561606154457179405222971773
2524660960646946071249623720442022269756756
6873784275623895087646784409332851574965788
4341508847552829818672645133986336493190808
4671990431874381283363502795470282653297802
9349161558118810498449083195450098483937752
2725705257859194499387007369575568843693381
2779613089230392569695253261620823676490316
036551371447913932347169566988069.
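What makes this a good challenge is that a claimed answer is trivial to verify classically; a minimal checker (the helper name is my own — any claimed factors would simply be multiplied back together):

```python
# The challenge number quoted above, split across lines for readability.
N = int(
    "1847699703211741474306835620200164403018549"
    "3386634101714717857749106516967111612498593"
    "3768430543574458561606154457179405222971773"
    "2524660960646946071249623720442022269756756"
    "6873784275623895087646784409332851574965788"
    "4341508847552829818672645133986336493190808"
    "4671990431874381283363502795470282653297802"
    "9349161558118810498449083195450098483937752"
    "2725705257859194499387007369575568843693381"
    "2779613089230392569695253261620823676490316"
    "036551371447913932347169566988069"
)

def looks_like_factorization(p, q, n=N):
    """A claimed factorization checks out iff both factors are
    nontrivial and their product is exactly n."""
    return 1 < p < n and 1 < q < n and p * q == n

# Anyone can reject trivial or wrong claims instantly:
assert not looks_like_factorization(1, N)       # trivial factor
assert not looks_like_factorization(2, N // 2)  # N is odd
```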

Q: Alright, what else could they do?

A: Avoid talking like this:

The system we are currently deploying, which we call Trinity, is a capability-class supercomputer specifically designed to provide extremely rapid and accurate approximate answers to arbitrarily large NP-complete problems … Trinity has a front-end software interface, implemented in a combination of Java and C, that allows a user to easily state any NP-complete problem of interest. After such a problem has been stated the problem is compiled down to the machine language of the processors at the heart of the machine. These processors then provide an answer, which is shuttled back to the front end and provided to the user. This capability can of course be called remotely and/or as a subroutine of some other piece of software.

Or to translate: “Not only have we built a spaceship capable of reaching Pluto in a few hours, our spaceship also has tinted windows and deluxe leather seats!” If I were them, I’d focus more on the evidence for their core technological claims, given that those claims are very much what’s at issue.

Q: While Dave Bacon also expressed serious doubts about the Orion quantum computer, he seemed more enthusiastic than you are. Why?

A: Generous and optimistic by nature, Dave strives to give others the benefit of the doubt (as the Chinese restaurant placemat would put it). Furthermore, as Quantum Pontiff, he’s professionally obligated to love the quantum sinner and forgive the wayward quantum sheep. And these are all wonderful qualities to have. On the other hand, when the hype surrounding some topic crosses a certain threshold, a pricklier approach is arguably called for.

Q: If D-Wave fizzles out, will many journalists and policymakers then conclude that quantum computing is bunk?

A: It doesn’t seem unlikely.

Q: What would it take to get these people to recognize the most elementary distinctions?

A: That’s the question, isn’t it?

Update (2/13): Lawrence Ip, my friend from Berkeley who now works at Google, went to the D-Wave “launch” in person and kindly provided the following report.

I just came back from the D-Wave announcement. Unfortunately I had to leave at the start of the Q&A session.

Here’s what I took away from it (minus the marketing fluff):

– They claim to solve integer programming problems on their system. Geordie Rose was careful to explicitly say that they are only hoping to see a quadratic speedup. Herb Martin (the CEO) wasn’t quite as careful in his opening remarks but then he’s the “suit”. Geordie said that their current chip is not a universal QC (presumably because their space of Hamiltonians is limited) but with some work they expect to be able to make it universal.

– Geordie compared their approach to the efforts in academia as similar to Celera versus the Human Genome Project. He said they were trying to get something that would scale fast and worry about the quality of the qubits later. He contrasted this with the academic community’s efforts to achieve fine control over qubits before scaling up. They said they hope to reach 1024 qubits by the end of 2008.

– They demoed 3 “real-world” problems where they used their system as essentially a blackbox IP solver.
– Searching for protein matches. Given a protein try to find the closest match in a protein database. They serially fed all the items from the database to the chip and asked it to score the match against the given protein. They said it was solving a maximum independent set problem.
– Finding the best seating arrangement at a wedding reception. Given constraints on which people can’t be seated together and who wants to sit together, find the optimal arrangement.
– Solving a Sudoku puzzle.

– At one point Geordie quoted you. He excerpted a paragraph from your SIGACT article (the one where you talk about generating Shakespeare’s 38th play) and mentioned your proposal that the intractability of NP-hard problems be taken as a physical law. As far as I can remember, the only other computer scientist he quoted was Gordon Moore, so you’re in good company. 🙂

Reasons to believe II: quantum edition

Friday, September 8th, 2006

At Greg Kuperberg’s request, I’ve decided to follow my Ten Reasons To Believe P!=NP with…

Thirteen Reasons Why I’d Be Surprised If Quantum Computing Were Fundamentally Impossible

So that there’s no question about exactly where I stand, I’ll start out by repeating, for the ten billionth time, the Official Scott Aaronson Quantum Computing Position Statement.

  • It’s entirely conceivable that quantum computing will turn out to be impossible for a fundamental reason.
  • This would be much more interesting than if it’s possible, since it would overturn our most basic ideas about the physical world.
  • The only real way to find out is to try to build a quantum computer.
  • Such an effort seems to me at least as scientifically important as (say) the search for supersymmetry or the Higgs boson.
  • I have no idea — none — how far it will get in my lifetime.

I now offer thirteen arguments to support the above views.

  1. The Obvious Argument. Quantum mechanics has been the foundation for all non-gravitational physics since 1926. Hoping that it would “just go away” has been one of the most consistently losing strategies in the history of science. If physicists and engineers didn’t take quantum mechanics seriously as a description of the world, they wouldn’t have been able to invent the laser, transistor, or classical computer. For that matter, they wouldn’t be able to explain why all the atoms in the universe don’t instantly disintegrate. Now, if you start with quantum mechanics, and write down the model of computation that directly flows from it, what do you end up with? BQP: Bounded-Error Quantum Polynomial-Time.
  2. The Experimental Argument. Ten years ago, one wouldn’t have been able to do much more than mount a general defense of quantum mechanics. But by now, liquid-NMR quantum computers have been built that not only factored 15 into 3 x 5 with small probability of error, but also searched 8-item databases. I’ve seen some of the machines that performed these staggering computational feats right here in Waterloo; they look like big-ass cylinders with the word “Bruker” on them. Seriously, while liquid-NMR (at least for now) doesn’t seem to be scalable, there’s been lots of recent work on solid-state NMR, photonics, and ion traps, all of which (if I’m not mistaken) are up to at least 3 qubits. While I don’t think the experimentalists are anywhere close to succeeding, these are smart people who haven’t been sitting on their asses (or if they have, then no doubt hard at work at a lab table or something).
  3. The Better-Shor-Than-More Argument. Why do skeptics always assume that, if quantum mechanics turns out to be only approximate, then whatever theory supersedes it will reinstate the Extended Church-Turing Thesis? Why isn’t it just as likely, a priori, that the new theory would yield even more computational power than BQP? This isn’t merely a logical point: to the extent that people have tried to propose serious alternatives to quantum mechanics (where “serious” means “agreeing with known experiments”), those alternatives often do involve more computational power than BQP.
  4. The Sure/Shor Argument. If you believe quantum mechanics is going to break down before nontrivial quantum computing becomes possible, then you must believe there’s some point where it will break down — some level of size, or complexity, or whatever, at which it will cease to be a useful description of the world. What is that point? In other words, where is the line — possibly a fuzzy, asymptotic, resource-dependent line — that puts the quantum states that have already been observed on one side, and the quantum states that arise in Shor’s factoring algorithm on the other? In a paper I wrote three years ago, I called such a line a “Sure/Shor separator,” and challenged skeptics to come up with some example of what it might be. I even tried to get the ball rolling by studying such separators myself. My idea was that having a Sure/Shor separator could motivate further research: once they knew where the “barrier” was, the experimentalists could set to work trying to cross it; then, if they succeeded, the skeptics could come back with a new barrier, and so on. Unfortunately, no skeptic has yet risen to the challenge. It’s not hard to see why: if you start with the many-particle entangled states that have already been observed (for example, by the Zeilinger group and by Ghosh et al.) and then throw in a few closure properties, you quickly end up with — well, the set of all quantum states. Coming up with a “reasonable” set of states that includes Sure states but doesn’t include Shor states turns out to be an extremely hard problem.
  5. The Linearity Argument. In my experience, at least 70% of all objections to quantum computing boil down to the idea that a quantum computer would be a “souped-up analog computer” — a machine that would store information not in voltage differences or the positions of pulleys, but instead in exponentially small amplitudes. From this idea it follows readily that, just as “old-school” analog computers have always run up against scalability problems, so too will quantum computers. To see why the analogy fails, think about classical probabilities. If you flip a coin a thousand times, you’ll end up with a probability distribution over outcomes that requires real numbers of order 2^(-1000) to describe. Does it follow from this that classical probabilistic computers are really analog computers in disguise, or that classical probability theory must be a mere approximation to some deeper, underlying theory? Of course not — for, unlike voltages or pulleys, probabilities evolve in time by means of norm-preserving linear transformations, which are insensitive to small errors. Well, quantum amplitudes also evolve by means of norm-preserving linear transformations, and this is what makes them behave like probabilities with respect to error, and not like the state variables of an analog computer.
  6. The Fault-Tolerance Argument. Among the many nontrivial consequences of this linearity, there’s one that probably counts as a separate argument: the Threshold Theorem. This theorem states that even if a quantum computer is subject to noise, we can still use it to do universal computation, provided we have parallel processing and a supply of fresh qubits, and provided the error rate is at most ε per qubit per time step, for some constant ε>0 independent of the length of the computation. The original lower bound on ε was about 10^(-6), but recently Knill and others have brought it up to 1-3% under plausible assumptions. Many quantum computing researchers talk about this theorem as the knight in shining armor who rode in unexpectedly to vindicate all their hopes. They’re entitled to do so, but to me, the theorem has always felt more like a beautiful, detailed working-out of something that couldn’t possibly have been false. (And not just because it’s a theorem.)
  7. The What-A-Waste Argument. Why do I say that the threshold theorem “couldn’t possibly have been false”? Well, suppose quantum mechanics were an accurate description of reality, yet quantum computing was still impossible for some fundamental reason. In that case, we’d have to accept that Nature was doing a staggering amount of quantum computation that could never be “extracted,” even in principle. Indeed, even assuming that life is (and always will be) confined to the vicinity of one planet, the resulting computational waste would make the waste of 10^11 uninhabited galaxies look like chickenfeed. I don’t deny that such a possibility is logically consistent, but my complexity-theoretic instincts rebel against it.
  8. The Non-Extravagance Argument. In my opinion, if quantum computers could solve NP-complete problems in polynomial time, then there really would be grounds for regarding them as physically extravagant. Like coming up with theories that allow causality violations and superluminal signalling, coming up with models of computation that can simulate NP, #P, and PSPACE is just too easy. It’s not interesting. The interesting task is to come up with a model of computation that’s stronger than the usual ones (P, BPP, and P/poly), but not so strong that it encompasses NP-complete problems. If it weren’t for BQP, I don’t think I’d have any clear idea of what such a model could look like. (Sure, we have problems and complexity classes below NP, but that’s different from a full-fledged model of computation.)
  9. The Turn-The-Tables Argument. If building quantum computers that outperform classical ones is fundamentally impossible, then it must be possible to write classical computer programs that efficiently simulate any quantum system found in Nature. And yet, even though this way of looking at the question is perfectly equivalent, there’s a reason quantum computing skeptics avoid it. Namely: as soon as you frame the issue this way, the skeptics are the ones who look like wild-eyed technological optimists — believing we’ll be able to simulate superconductors and quark-gluon plasmas on an ordinary desktop PC! The “staid,” “conservative” position is that such a simulation won’t be possible — or, equivalently, that the systems being simulated have more computational power than the PC doing the simulating.
  10. The Island-In-Theoryspace Argument. String theorists have been ridiculed for claiming that string theory is “too beautiful to be wrong.” But as Peter Woit points out in his fascinating new book, this is not at all a bad argument. It’s a fine argument; the real question is whether string theory — with its perturbation series, ten dimensions of which six are compactified for unknown reasons, landscape of vacua, etc. — really is as beautiful as its proponents think it is. At the risk of breaking my vow, let me hasten to say that I’m in no position to judge. What I do know is that there’s something mathematically unique about quantum mechanics: how it takes advantage of special properties of the L2 norm that fail for other p-norms, how the parameter-counting for mixed states that works perfectly with complex numbers fails with real numbers and quaternions, and so on. Crucially, it seems all but impossible to change quantum mechanics while retaining its nice properties. More so than general relativity or any other theory we have, quantum mechanics gives every indication of being an island in theoryspace.
  11. The Only-Game-In-Town Argument. However one feels about the alternatives to string theory — loop quantum gravity, spin foams, twistors, and so on — at least each one has a “developer base,” a community of physicists who are actively trying to make it work. By contrast, I don’t know of any picture of the world in which quantum computing is impossible, that’s being actively developed by any research community anywhere. (Gerard ‘t Hooft and Stephen Wolfram are not research communities.) All the skepticism of quantum computing that I’m aware of is purely negative in character.
  12. The Historical Argument. If the above arguments are sound, then why haven’t people already succeeded in building quantum computers? It’s been what, ten years already? Some historical perspective might be helpful here: in Samuel Johnson’s The History of Rasselas, Prince of Abissinia, written in 1759, Johnson has one of his characters give a correct explanation of why heavier-than-air flying machines should be physically possible, and then build a test plane that promptly plummets into a lake. Johnson was safe in ridiculing the idea; it would be another 144 years before Kitty Hawk. Closer to our topic, Darwin wrote in his autobiography about an eccentric loon of his acquaintance, who dreamed of building an engine to automate routine human thought. Though the loon — a certain Charles Babbage — hadn’t run afoul of any fundamental theory, his proposal to build a classical computer was a century ahead of its time. Since the 1600s, science has often been generations ahead of technology. History gives us no reason at all to assume that a technology will be discovered to be compatible with known laws of physics at about the same time as it becomes possible to implement.
  13. The Trademark-Twist Argument. This last argument is the hardest one to articulate, but possibly the most compelling to my mind. In my view, Nature has been telling us, over and over and over, that our everyday intuitions will match the physical world if and only if we first apply a little “twist” to them. Often this twist involves an unusual symmetry, or switching from the L1 to the L2 norm, or inserting negative or complex numbers where our intuition says that only nonnegative real numbers would make sense. We see such a twist in special relativity, in the metric that’s not positive definite but instead has a (-1,1,1,1) signature. We see it in the -1 phase that the universe picks up when you swap a fermion with its identical twin. We see it in the fact that, to rotate an electron back to where it was, you have to turn it not 360° but 720°. We see it in the Dirac equation. We see it, of course, in quantum mechanics itself. And what is BQP, if not P=BPP with Nature’s trademark little twist?
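The claim at the heart of Argument 5 — that norm-preserving linear maps cannot amplify a small error, whether the norm is the 1-norm of probabilities or the 2-norm of amplitudes — can be checked directly (a minimal numerical sketch, not from the original post; the matrices and perturbations are arbitrary examples of my own choosing):

```python
import math

def apply(M, v):
    """Apply a 2x2 matrix M to a 2-vector v."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Stochastic case: the columns of S sum to 1, so S preserves total
# probability and cannot stretch the 1-norm distance between two
# nearby distributions.
S = [[0.9, 0.2],
     [0.1, 0.8]]
p     = [0.50, 0.50]
p_err = [0.51, 0.49]            # slightly mis-set distribution
d_before = sum(abs(a - b) for a, b in zip(p, p_err))
d_after  = sum(abs(a - b) for a, b in zip(apply(S, p), apply(S, p_err)))
assert d_after <= d_before + 1e-12

# Quantum case: the Hadamard gate is unitary, so it preserves the
# 2-norm distance between two nearby amplitude vectors exactly.
h = 1 / math.sqrt(2)
H = [[h,  h],
     [h, -h]]
a     = [1.0, 0.0]
a_err = [0.999, math.sqrt(1 - 0.999 ** 2)]   # slightly mis-prepared state
e_before = math.dist(a, a_err)
e_after  = math.dist(apply(H, a), apply(H, a_err))
assert abs(e_after - e_before) < 1e-12
```

An analog computer's state variables enjoy no such protection: an uncontrolled nonlinearity can double a small voltage error at every step.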

Pipin’-hot learnin’ theorems

Thursday, August 17th, 2006

I’ve decided to “launch” my latest paper on the blogosphere even before posting it to quant-ph — so that you, my loyal readers, can be the very first to lay eyes on it. (True fact: as I was writing, I didn’t look once at the screen.) Comments more than welcome.

The Learnability of Quantum States [PS] [PDF]
Scott Aaronson

Abstract: Let me warn you up-front, this is one big-ass mother of a paper. It’s got learning. It’s got quantum. It’s got philosophy. It’s got weird complexity classes (naturally). It’s even got experimental physics applications (don’t worry, I showered afterward). And dude. When I say “experimental,” I’m not talking wormholes or anthropic postselection. I’m talking stuff that you, the quantum state tomographer, can try in your lab today. And no, this is not the real abstract.

Update (8/20): I’ve posted a slightly revised version, mostly in response to the comments I received here.