Scientific American article is out!

After three years of procrastination and delays, my 8-page feature article on “The Limits of Quantum Computers” has finally appeared in the March issue of Scientific American. Once I get permission, I’ll post a plain-text version on my website. In the meantime, you can buy the online issue for US$5.00 from SciAm’s website, in which case you get colorful sidebars and graphics (including a bearded, white-lab-coated cartoon scientist holding quantum computers and complexity class inclusion diagrams), as well as an interview with Jeff Kimble about the unfortunate movie “Jumper”, and other articles about “the end of cosmology”, prediction markets (Robin Hanson and his “futarchy” get a mention), and the disastrous overfishing of the bluefin tuna (the kind used for toro sushi).

Update (2/18): By popular demand, I’m posting a rough early draft (PDF) of my article online. Read at your own risk!

So, what was it like to write for Scientific American? Exhausting, excruciating, and ultimately worth it. As a general rule, SciAm (probably like all large-circulation magazines) rewrites articles so extensively that the person listed in the byline is less the “writer” than the “content consultant.” Almost every sentence in my article bears the scars of battle (some that I won, more that I didn’t). Yet I have to concede that, when something was really cringe-inducingly wrong, SciAm was willing to listen and make changes — and besides, they did a great job with the cartoons. I’m happy with the end result. Thanks to George Musser, the editor who solicited the article and gave me lots of feedback in the early stages, and to Graham Collins, who finally saw it into print.

A few specific comments for your amusement:

  • In an earlier draft, the cartoons adorning my article were “all balding white guy, all the time” (supposedly, because of the need to keep a “consistent character” throughout the article). I demanded some sort of cartoon-diversity. After a heated discussion among the editors — in which, I’m told, the name of Larry Summers was invoked — they finally agreed to add a cartoon black woman. To those who think I’m a male chauvinist pig: how many brownie points do I get?
  • No, the crystal ball with floating ψ’s and φ’s, mounted atop a keyboard, is not an accurate depiction of what a quantum computer would look like. Having toured some actual QC labs, though, I had to admit it worked better graphically than a lab table piled high with tinfoil, lasers, and assorted pipes.
  • The topic label of the article is “Information Technology.” I pleaded with them to change the topic to “Computer Science,” but to no avail. Apparently the problem was that in the table of contents, the previous two articles were labeled “Prediction Science” and “Brain Science.”
  • The complexity class inclusion diagram on page 67 was a key concession I did win. (Apparently some editors felt a Venn diagram with P, NP, BQP, and PSPACE would be way too complicated, even for readers who regularly gobble down heaping helpings of M-theory.) As you can imagine, exposing people to this stuff seemed pretty important to me: this is apparently the first time P, NP, and NP-completeness have been explained at any length in Scientific American since articles by Knuth and by Lewis and Papadimitriou in the 1970’s.
  • In the author bio on page 67, the description of me as a “high school dropout” is a slight exaggeration, but there’s no other short term for what I am (see here for more).
  • I had nothing to do with the sidebar on page 68, about Vernor Vinge’s novel A Fire Upon the Deep. I’ve never read that (or anything else by Vinge for that matter).
  • My original draft included explanations of both the polynomial and adversary methods for quantum lower bounds, with references to BBCMdW and Ambainis. Shockingly, all of that was cut, while the part about time machines was greatly expanded.
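The P-versus-NP distinction behind that inclusion diagram can be made concrete with a toy sketch (my own illustration, not from the article; the tiny instance and the helper names are made up): verifying a proposed solution to a satisfiability instance takes polynomial time, which is what puts the problem in NP, while the only obvious way to *find* a solution is to search exponentially many possibilities.

```python
from itertools import product

# A tiny SAT instance: each clause lists literals, where literal k
# refers to variable |k| and a negative sign means negation.
clauses = [(1, 2, -3), (-1, 3, 2), (-2, -3, 1)]
n_vars = 3

def satisfies(assignment, clauses):
    # Checking a candidate assignment is fast (polynomial in the
    # instance size); this efficient verifiability is what puts
    # satisfiability in NP.
    return all(any(assignment[abs(k)] == (k > 0) for k in clause)
               for clause in clauses)

# ...whereas the obvious way to *find* a satisfying assignment
# tries all 2**n of them.
found = None
for bits in product([False, True], repeat=n_vars):
    candidate = dict(enumerate(bits, start=1))
    if satisfies(candidate, clauses):
        found = candidate
        break

print(found is not None)  # True: this little instance is satisfiable
```

Whether the exponential search can always be avoided is, of course, exactly the P vs. NP question.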

During the hairiest parts of the editing process, I was reminded of a passage in Anita and Solomon Feferman’s biography of the great logician Alfred Tarski, which described Tarski’s writing of an article for Scientific American (the only popular article he ever wrote).

Usually the Scientific American articles are heavily edited; many are rewritten and some even ghostwritten, but Wisnovsky [Tarski’s editor] knew better than to tamper with Tarski’s work and did not — except for his usage of ‘which’ and ‘that’. It seemed to him that Tarski did a 180-degree reversal of these words, so he changed every ‘which’ to ‘that’ and every ‘that’ to ‘which’ and sent the proofs to Tarski, who changed everything back to the way it had been. Wisnovsky got out Fowler’s Dictionary of Modern English Usage, the house bible, and called Tarski on the telephone. “I asked if I could read him the relevant passage on ‘that’ and ‘which’ and he said, ‘yes’. It goes on for pages, but he listened very patiently until I finished. Then he said, ‘Well, you see, that is Fowler. I am Tarski.’ The minute he said that I caved in. I felt cut off at the knees and I gave up trying to make any changes at all.”

Yet, while the “Tarski approach” to magazine writing is a tempting one, here’s the final irony. I looked up Tarski’s actual article from 1969, and it badly needed an editor.

125 Responses to “Scientific American article is out!”

  1. harrison Says:

    Hmm. Is there anything prohibiting you from posting the original material explaining the polynomial and adversary methods? I mean, Nielsen and Chuang’s explanation of the polynomial method’s fine and dandy, but (1) it’s hard to explain to the guy sitting next to me on the airplane and (2) they don’t even cover the adversary method, since it was pioneered after the text was printed.

    Or will that be posted with the plain-text version?

  2. Brian Borchers Says:

    Vernor Vinge is now a full time writer, but previously he was a CS professor at San Diego State University. His most important science fiction works are probably the novella “True Names” and the Hugo winning novel “A Fire Upon the Deep.” Although he’s known for setting many of his works in a future where information technology has become omnipresent and humanity is approaching a technological singularity (an idea that he invented), there’s not really much computer science in his novels.

  3. JimV Says:

    You’re such a good writer yourself that I find it impossible (or at least NP-hard) to believe you wouldn’t enjoy reading Vinge.

  4. cody Says:

    Scott, I tend to like the way you write a whole lot to begin with. I hope the editing isn’t as heavy as you claim. Regardless, harrison has a good point: would they be okay with you posting your whole original version? Or a version after whatever normal editing process you use yourself? Actually, I know you’re busy and this isn’t that important. I look forward to reading the magazine soon.

  5. Job Says:

    Read the article. ’Twas good. The idea that P≠NP is a fundamental principle of our universe makes sense to me. I’d expect this principle to manifest itself in some physical limitation of our universe (such as the speed-of-light limit, or conservation of energy), where a polynomial-time algorithm for solving an NP-complete problem would let us illegally overcome that physical barrier. While that wouldn’t be a proof exactly, a demonstration that P=NP violates currently accepted theories of physics would at least delegate the problem to someone else. Darn it, I say let the physicists worry about P vs. NP for a while… I can’t take it anymore!!

  6. Oy Says:

    “they finally agreed to add a cartoon black woman… how many brownie points do I get?”

    I’d say this has to win the award for most untimely use of the phrase “brownie points”.

    Scott, you walked into that one.

  7. Gabriel Says:

    Wow! My subscription to Scientific American ended in February. I’ll go buy the new one at the newsstand.

    I got bored of Scientific American. They almost never write about computer science or math.

  8. Matt Says:

    Interesting stuff. Your diagram led me to google PSPACE, which led me to the Complexity Zoo– which is, um, complicated.

  9. Brian Hayes Says:

    “… this is apparently the first time P, NP, and NP-completeness have been explained in Scientific American since an article by Donald Knuth in 1978.”

    For the record, Knuth’s article appeared in April 1977. He has also written entertainingly on the experience of becoming sausage in the S.A. meatgrinder. (Disclosure: I was then a component of that meatgrinder.)

    Harry Lewis and Christos Papadimitriou wrote an article titled “The Efficiency of Algorithms” that appeared a few months later, in January 1978. Is it really true that P and NP have not been explained in S.A.’s pages in the 30 years since then? A search of the S.A. web site for “NP” yields a dozen hits, citing articles published between 1994 and 2006. I haven’t actually looked at those articles, but some of the titles suggest that a sentence or two on P vs. NP might well be found inside.

  10. Scott Says:

    Brian: Thanks for the info; I just corrected the entry! John Rennie is the one who told me that he couldn’t recall a SciAm article on these topics within the last few decades. I know there have been scattered mentions of P and NP; let me know if you find anything more than that.

  11. Bob Hearn Says:

    NP-completeness of games and puzzles was briefly a hot topic again after Richard Kaye’s proof that Minesweeper is NP-complete, which appeared in Sci. Am. in 2000.

    (Incidentally, I maintain that in fact the natural decision question for Minesweeper is actually coNP-complete.)

  12. Louis Says:

    Scott, you win. Epic win.

    I know there’s been a vaguely recent article on the difficulty of Sudokus, but I don’t think it talked much at all about P vs. NP save to mention that Sudoku was something called NP-complete, but that that didn’t matter for constant-size puzzles.

  13. Arun Says:

    From the preprint: By the time we reach 100 letters, there are already more possibilities than there are atoms in a vat of 26^100 atoms.

    Scott, are you the guy who wrote the essay beginning, “John was as tall as a six-foot-three-inch-tall tree”?
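    Circular phrasing aside, the count itself is easy to sanity-check; here is a quick sketch (using the standard rough figure of ~10^80 atoms in the observable universe):

```python
# Number of 100-letter strings over a 26-letter alphabet,
# versus the usual rough estimate of ~10**80 atoms in the
# observable universe.
possibilities = 26 ** 100

print(len(str(possibilities)))   # 142 digits, i.e. possibilities ~ 10**141
print(possibilities > 10 ** 80)  # True, with ~10**61 to spare
```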

  14. tcne Says:


    Fascinating, that which changes stays unchanged, T(A)ar(on)s(on)ki style! Smiley

    Fascinating still that you are still at it, P vs. NP I mean.

    Referring to “Ten Signs”, an earlier blog post: what are your thoughts, in terms of probability, on someone saying, “I can settle the notorious problem with elementary math on a single page, 10-point Times Roman”? Any reason NOT to deny it offhand?


  15. asdf Says:

    Scott, you absolutely have to read A Fire Upon The Deep. It is a necessity.

  16. Scott Says:

    tcne: The only reasons I see not to deny such a claim offhand are that

    (1) the number of proofs that can be written on a single page in 10-point Roman is astronomical, and

    (2) if a proof of P≠NP were ever found, no doubt it would be summarizable in retrospect on a single page. (Not using what most people would consider “elementary math,” though.)

    Indeed, it might even be possible to say enough on one page to enable experts to reconstruct a proof of P≠NP. It’s an interesting question whether this could be done with the known proofs of Fermat and Poincaré. We know that Perelman presented sixty pages, from which experts were able to reconstruct what for a saner person would have been a 400-page proof.

    So, I think it ultimately comes down to the question of sanity. Even if it were possible to say enough about a P≠NP proof on one page to let experts reconstruct it, a sane person wouldn’t present the proof that way, because he or she would know that the new ideas would take time to assimilate (otherwise they wouldn’t be new).

  17. asdf Says:

    I should have added in response to Brian Borchers: A Fire Upon The Deep is not a CS novel in the sense of treating CS ideas in detail, but its grounding in ideas that parallel the stuff you’re interested in is unmistakeable. One could view it as about the consequences of having the tractability of different complexity classes depend on what region of the galaxy you’re in. E.g. quantum computers are physically realizable in the outer regions of the galaxy but not in the inner parts, because of some kind of interference emanating from the core. He doesn’t describe it like that but you can see it clearly. Also the basic plot is about what happens when a malevolent entity with hypercomputational powers tries to take over the galaxy.

  18. Scott Says:

    you absolutely have to read A Fire Upon The Deep. It is a necessity.

    Alright, it’s ordered.

  19. asdf Says:

    Cool, looking forward to your review :).

  20. Scott Says:

    Scott, are you the guy who wrote the essay beginning, “John was as tall as a six-foot-three-inch-tall tree”?

    Nope, wasn’t me. I think it was the guy who wrote the essay beginning, “John was as tall as a six-foot-three-inch-tall tree.”

  21. Bob Hearn Says:

    If you have read no Vinge, you might want to start with the novella True Names, which is available online at

    As Wikipedia says,

    “True Names was the science fiction novella which brought Vernor Vinge to prominence in 1981. It was one of the earliest stories to present a fully fleshed-out concept of cyberspace, which would later be central to stories in the cyberpunk genre. Because of this, it is often referenced as a seminal work of the genre. The story also contains elements of transhumanism, anarchism, and even hints about The Singularity.”

  22. cody Says:

    Scott: thank you very much for posting that rough draft; I’m glad I didn’t have to wait and overcome my procrastination to get to a bookstore or library. I have a couple of questions as a result, though. Do real computer scientists really worry about P vs. NP? I like to waste time failing to find efficient ways of solving the subset-sum problem, but that’s because I don’t enjoy proof-writing enough to seriously ponder how to show P≠NP. I was under the impression that everyone serious feels pretty strongly that P≠NP (though now I see the lack of a proof could still keep you up).

    My other question isn’t so important; I was curious, what is your typical approach to writing? Do you normally have an editor, use friends/colleagues, or revise things yourself?

  23. Scott Says:

    Hi Cody,

    1. You can buy the issue online — you don’t have to go to a bookstore or library.

    2. Yes, real computer scientists (at least theorists) certainly worry about P vs. NP. Not about the answer, of course — most of us believe P≠NP the same way we believe much more mundane things about the world — but about how to prove it.

    3. Blog entries I typically write myself, without anyone else seeing them before they go live (which more than once caused me to say things I regretted…). In other cases I often have readers and/or editors, both to infuriate me and to protect me from myself. In the case of the SciAm article, about six people went over it and gave me useful feedback.

  24. Raoul Ohio Says:

    1. I agree with Cody that subset sum is an excellent way to waste time and sharpen your programming skills. I favor “Bottom Up Crunch” algorithms, optimized by pruning the search space with whatever ad hoc methods pop into my head; “Recursion is your friend”. You can also try to calculate the probability of a size-n solution existing for a given set of data and target sum.
    2. While NP-complete problems are all equivalent in one sense, many very different structures turn up when you try to program them.
    3. I was just looking for an algorithm remark I read in the last year, something about linear programming and dynamic programming being “the only two bright lights in whatever”. Does anybody know where I read that?
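For the curious, here is a minimal sketch of the bottom-up style Raoul describes (my illustration, not his code; it assumes positive integers, and the pruning is the simplest possible kind):

```python
def subset_sum(nums, target):
    """Decide whether some subset of nums sums to target.

    Bottom-up: grow the set of reachable subset sums one element
    at a time, pruning any partial sum that overshoots the target
    (valid because all inputs are assumed to be positive).
    """
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
        if target in reachable:  # early exit once a solution appears
            return True
    return target in reachable

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5 = 9
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```

The running time here is pseudo-polynomial (bounded by the number of distinct reachable sums, at most target + 1), which is exactly why subset sum feels tractable on small instances while remaining NP-complete in general.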

  25. Connelly Barnes Says:

    I’d wait until there’s an experimentally tested theory of quantum gravity before claiming that no physical computer can solve an NP-complete problem in polynomial time. Otherwise, I think the claim is too speculative, because one would be venturing into completely uncharted terrain within string theory, which is itself untested and not yet understood.

    Your results for quantum computing are neat! Thanks for the article, and for writing for the popular audience, to get this information out.

  26. Scott Says:

    Connelly, I hope the article made clear that any claim about the computational limitations of quantum gravity is necessarily conjectural, but also the grounds one might have for conjecturing: basically, the same grounds one might have for conjecturing that a quantum theory of gravity will still satisfy causality and the Second Law. No, we don’t yet have a QG theory, but physics has given us some clear principles which will either continue to hold in the QG regime — in which case we can make some definite predictions about what will and won’t be possible — or else fail, which in my opinion would be much more interesting than a QG theory itself.

  27. Jonathan Vos Post Says:

    Strange. It took me less than 10 hours of writing and re-writing to get my first Least Publishable Unit in Scientific American:

    “This sentence contains ten words, eighteen syllables, and sixty-four letters.”

    [Jonathan Vos Post, Scientific American, reprinted in “Metamagical Themas: Questing for the Essence of Mind and Pattern”, by Douglas R. Hofstadter, paperback reprint March 1996, pp.26-27]

    Your mistake was in writing the lower density but higher-effort Feature Article format. Which I shall buy. Maybe more than one copy, so as to give one away to a non-subscribing friend. On the other hand, you got paid.

    Second the motion regarding Vernor Vinge. He is one of the major science fiction authors such as Iain Banks, Greg Egan, Charles Stross, John Scalzi, Cory Doctorow, Ian Stewart, Marvin Minsky (novel collaboration with Harry Harrison), and others with Computer Science experience who really Get It.

  28. Chris Says:

    Good lord the comments on Slashdot regarding the draft of your article are making me laugh out loud and facepalm simultaneously.

  29. Stas Says:

    Good lord the comments on Slashdot regarding the draft of your article are making me laugh out loud and facepalm simultaneously.
    Yes, the crowd didn’t get the point. Maybe “What Google cannot find” was actually a better piece of Scott’s writing to publicize widely…

  30. Scott Says:

    Wow, 116 Slashdot comments so far, of which exactly one shows evidence of having read either the article or the draft that was linked to. A beautiful illustration of how interacting agents can pool their ignorance to produce something dumber than the sum of its parts.

  31. Gil Kalai Says:

    While it is believed that BQP does not contain NP, the diagram in your paper shows that BQP contains stuff well outside NP, and that BQP is only known to be contained in PSPACE. How far up the hierarchy towards PSPACE is it known that BQP is not contained? Are the quantum algorithms well outside NP useful, or should they make us excited?

  32. Scott Says:

    Gil, great question! The best we know is that BQP is contained in PP (or actually a subclass of PP called AWPP). We don’t know whether BQP is contained in the polynomial hierarchy, and this is widely recognized as one of the biggest open problems of the field. We have oracles relative to which BQP⊄MA (for example, the complement of Simon’s problem), but after 15 years of work, we still don’t even have an oracle relative to which BQP⊄AM.

    For complexity theorists, I think this situation is tremendously exciting — there’s so much research still to do! But for “end users,” it’s not clear to me that it’s particularly exciting, since we still don’t have any candidate for a non-oracle, non-promise problem in BQP\NP. In other words, we still don’t know what (if anything) is to BQP\NP as factoring is to BQP\P. Partly because of the failure to find such a problem, I would give even odds that in the “real,” unrelativized world, BQP⊆AM∩coAM (which would then imply BQP⊆NP∩coNP under a derandomization assumption).

  33. John Armstrong Says:

    AUGH! I hadn’t even looked at Slashdot in years, but you people tempted me. It’s horrible! Worse than I ever knew it before! These people have no idea what they’re talking about, but still run off spouting their pompous opinions and claiming to be experts. Pain!

  34. Chris Says:

    My favorites from the Slashdot crowd:

    “NP complete is solved by nature (Score:3, Interesting)
    When you see an electrical arc, you are seeing Nature taking the path of least resistance between two points. Interestingly, if you build a “maze” which requires the arc to pass around a corner, it does just that. In fact, the arc, given sufficient power, can find its way through a maze. Not only can it solve a maze, it can solve it for the shortest path essentially instantly.

    NP completeness is only a problem for us because we don’t understand the mechanics of the solution. It is not an unsolvable problem, just one that we don’t know how to solve yet.”


    “Stupid much? (Score:2, Informative)
    Quantum computers have already proven they can do factoring (which is an NP-complete problem) in polynomial time. We know for sure that this is not a “violation of the laws of physics,” because it has already been done!”

  35. Scott Says:

    Quantum computers have already proven they can do factoring (which is an NP-complete problem) in polynomial time.

    That’s pretty impressive even for Slashdot: three entirely separate confusions (factoring vs. NP-complete problems, small vs. large instances, and experiments vs. asymptotics) in a single sentence.

  36. Nagesh Adluru Says:

    Awesome, Scott. Now, when my friends and people around me ask what I have to do with complexity theory when I cannot “work” in that field, I can say I am an “end user”, or at least a naive one :)

  37. Frédéric Says:

    asdf, Scott

    Here is an interesting usenet post about computer science and physics in A Fire Upon The Deep:

  38. asdf Says:

    The Barak Perlmutter post was interesting. Also, here is a song written by an acquaintance of mine that Vinge fans here might enjoy (it will help if you’ve read the book first):



    I’d better say not to take the quantum-computing comments about the book too seriously; they’re just one reader’s interpretation. The book is one of the best space operas ever, but the computation-related aspects are there to move the plot; it’s not a mathematical tract presented as fiction or anything like that (e.g., Surreal Numbers).

  39. sp Says:

    If a quantum computer is impossible (and it surely is), then that is no more interesting than if it were possible, because quantum mechanics isn’t wrong; what’s wrong is the attempt to build a quantum computer, and some of the conclusions about how quantum mechanics could be used for quantum computation. Quantum-mechanical processes are very unpredictable and totally noisy. If it is impossible to isolate qubits from noise, then how can you say that the impossibility of a quantum computer violates quantum mechanics? It would just mean that quantum mechanics never acts like a quantum computer, because quantum mechanics always acts totally noisily.

  40. Alex Mikunov Says:

    Scott, your great article doesn’t seem to talk about algorithms for quantum simulation, which promise substantial speed-ups relative to analogous classical algorithms.

  41. Scott Says:

    sp: If quantum computing is impossible, I would say that quantum mechanics as physicists have taught and understood it for 80 years is wrong. Think about it this way: if quantum mechanics can’t be used to efficiently sample any probability distribution beyond what a classical computer can efficiently sample (forget for the moment about deciding all languages in BQP), then there must be an efficient classical algorithm to simulate the outputs of a quantum process. But that, in turn, means that there’s some complete description of interacting physical systems that’s vastly more efficient than writing down the entangled wavefunction (i.e., that requires polynomially many rather than exponentially many bits). And this, I think, would be the biggest change in physicists’ understanding of QM since the 1920’s: it wouldn’t be a local hidden-variable theory, but it would be something even more astounding, a “succinct” hidden-variable theory. I think such a theory would, for example, overturn David Deutsch’s famous argument for the many-worlds interpretation (“where is all the computation happening, if not in parallel universes?”). The answer would be: it’s happening in a single universe that stores the succinct representation of the wavefunction.

    You’ll notice that in the above argument, I never once mentioned noise. That’s not an accident. For either the noise is enough to enable an efficient classical simulation of the quantum computer, in which case you’re in the situation above, or else it isn’t, in which case some sort of nontrivial quantum computing is possible by definition.
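The exponential bookkeeping at stake in Scott’s argument is easy to make concrete; here is a toy sketch (the ~10^80 atom count is the usual rough estimate for the visible universe):

```python
def amplitudes(n_qubits):
    # A general entangled state of n qubits is specified by one
    # complex amplitude per basis state |b1 ... bn>, i.e. 2**n numbers.
    return 2 ** n_qubits

# A laptop can hold a statevector for ~30 qubits; by 300 qubits the
# amplitude count already exceeds the ~10**80 atoms in the visible
# universe.
print(amplitudes(30))              # 1073741824
print(amplitudes(300) > 10 ** 80)  # True
```

A “succinct” hidden-variable theory, in the sense above, would amount to a polynomially sized stand-in for this exponentially long vector.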

  42. Scott Says:

    Alex: It’s in the published version (there’s even a highlighted quote on p. 65: “The ‘killer app’ for quantum computers will most likely be simulating quantum physics”).

  43. Gil Kalai Says:

    Scott, thanks for the answer regarding complexity. Indeed it is a fascinating problem. I will be happy to believe that BQP is sufficiently orthogonal to the classical computing notions that there are problems in it outside the entire polynomial hierarchy.

    Regarding your answer to sp: shouldn’t all the wonderful things you mention (from “then there must be…” on) hold, by the same token, even if QC is possible, unless there are already in nature phenomena like quantum fault tolerance or quantum error correction that enable fault-tolerant quantum computing?

  44. Michael Bacon Says:

    “If quantum computing is impossible . . .”


    I thought that quantum computations had been carried out — although on a small scale and involving only a handful of qubits — in a number of experiments and so had been proven to be possible, at least in principle. Is this not correct, or is the situation more complicated?

  45. Michael Bacon Says:

    Upon further reflection, perhaps your point was that quantum theory as previously understood would be wrong — something like correctly predicting an eclipse by using Ptolemy’s geocentric solar system with epicycles to compute the motions of the Sun, Moon, and the visible planets.

  46. Jonathan Vos Post Says:

    “The ‘killer app’ for quantum computers will most likely be simulating quantum physics” — which is, modulo netspeak, exactly what Richard Feynman declared when he became the great-grandfather of Quantum Computing (more or less simultaneously with being the great-grandfather of Nanotechnology). The field hasn’t gone quite where he expected, but his central vision endures, and seems more plausible than ever.

  47. Scott Says:

    Gil and Michael: The answer to both of your questions is that by a “quantum computer,” here I mean one that does something that a classical computer would take an astronomical amount of time to simulate.

  48. Gil Kalai Says:

    I understand that you refer to computationally superior quantum computer, Scott. My question was different.

    You drew a lot of terrific consequences from the impossibility of (computationally superior) quantum computers regarding the ability of classical computers to simulate evolution of quantum processes in nature.

    However, adopting your line of reasoning, these terrific consequences do not really require that quantum computers are impossible. They only require that in evolution of quantum processes in nature we do not witness something like the type of quantum error-correction and fault tolerance which are required for building computationally superior quantum computers.

    Isn’t it?

  49. Michael Bacon Says:

    ” . . . by a “quantum computer,” here I mean one that does something that a classical computer would take an astronomical amount of time to simulate.”


    This clarifies your point. However, isn’t it equally valid to define a “quantum computer” as one that does something (because of the quantum nature of the world) in fewer computational steps than a classical computer could — even if it’s as simple as, say, factoring the number 15? Assuming that quantum physics, as we understand it, is correct, wouldn’t this prove that quantum computation is in principle possible? Thanks.

  50. Scott Says:

    Michael: The difficulty is, how do you even define “number of steps” for such tiny problems? Do you include the time needed to synthesize the molecule, to repeat the experiment enough times to see a signal, to do the classical postprocessing, etc., etc.? If so, then the “number of steps” will probably be vastly greater than for a classical computer to factor 15. (Indeed, such difficulties are a large part of the reason why computer scientists talk about asymptotics in the first place.)

    Still, I do agree that current experiments are useful proofs of principle — not because of any crude comparison of running times, but because of what we learn about the physics aspects like controlling decoherence.

  51. Scott Says:

    Gil: I find your way of talking bizarre. Quantum fault-tolerance is not something we expect to find in Nature (though it would be hugely interesting if we did); it’s something we hope to engineer. Just like we didn’t find fault-tolerant, general-purpose classical computers lying around on beaches.

  52. Michael Bacon Says:


    Thanks for the explanation — that makes sense.

    Regarding “fault-tolerance,” from a philosophical standpoint, if one assumes that the world is quantum in nature, that our understanding of quantum physics is generally correct and that there isn’t some “succinct representation of the wavefunction” at work in the classical universe that we observe, would it be correct to conjecture that “[q]uantum fault-tolerance” is at work in nature even if we can not (as yet) engineer it on a large scale?

  53. Gil Kalai Says:

    Right, but if something like quantum error-correction is required to achieve computationally superior computing, and it is not present in nature, then there is no reason to suppose that general-purpose quantum computers are needed to simulate natural quantum processes. Your nice description of the ability of classical computers to simulate natural processes may apply whether or not computationally superior QC can be built.

    (We do find fault-tolerant computing devices on beaches. Next time you visit the beach, think about it.)

  54. Michael Bacon Says:

    ” . . . We do find fault tolerant computing devices on beaches.”


    But they were “engineered” after all — weren’t they?

  55. Scott Says:

    Gil and Michael: Really interesting questions!

    One response is that, if you just want to do something that seems hard to simulate (as opposed to solving a specific problem like factoring), then the machinery of quantum fault-tolerance doesn’t seem to be necessary. And the evidence is that there are, in Nature, many phenomena that physicists can’t currently simulate, essentially because of the astronomical size of the wavefunction. These include quark confinement, high-temperature superconductivity, various interesting phenomena in magnetic spin lattices, possibly structure formation in the very early universe… (physicist lurkers: care to add more examples?)

    Of course, for each of these examples, someone might have a physics insight that renders a quantum computer unnecessary — just like, for any specific SAT instance we care about, someone might have a combinatorial insight that renders an efficient algorithm for NP-complete problems unnecessary. The point is that a quantum computer would give us a general-purpose way to simulate all of these systems, without having to worry about the details.

    PS. Regarding beaches, I was careful to say “general-purpose”! Clams, crabs, and humans all compute, but all are likely to balk if you try to feed them an arbitrary input. Indeed, that’s exactly the distinction I was making above in the quantum setting.

  56. Gil Kalai Says:

    Hi Scott

    These are good examples, and they really point to some fundamental issues. There are many things we cannot simulate even in areas which are completely classical. For example, we do not know how to simulate many phenomena in biology.

    Complexity theory is a very important scientific discovery mainly of the last half century, and it gives a whole new way to think about various scientific issues. Complexity theory gives some formal meaning to the statement “it seems hard to simulate”. But it is not the only aspect of simulating natural phenomena. (In a sense, complexity theory just kicks in once we have a good mathematical model to start with.)

    Your idea above is that if quantum computers are impossible, then simulating physics should not be harder, in principle, than simulating classical natural phenomena. This seems quite a reasonable idea. Of course, it does not tell us how precisely we will do it and how “easy” it will be. (A philosophical argument why protein folding should be computationally feasible is nice. But it does not come close to a solution.)

    My complementary idea is that if we want to simulate realistic noisy quantum processes (including all your examples) there is no a priori reason to expect that the full power of quantum computers will be needed. If these processes do not involve any phenomena similar to the highly entangled QEC states or any fault tolerance it is not clear why fault tolerance is required for their simulations.

    But I do agree with you that the question whether (full-power) QC is possible and the (somewhat different) question whether (full-power) QC is needed to simulate physics in nature are both very important.

    Now, what formally do you mean, Scott, by “general-purpose”?

    (BTW, Michael, are you related to Dave Bacon?)

  57. Anonymous Says:

    A minor quibble:
    The problem of matching spouses was much older than Edmonds’s algorithm when only heterosexual pairs are allowed: the Hungarian method, given by Kuhn building on the work of Kőnig and Egerváry, solved that case.
    Edmonds gave a polynomial-time algorithm when homosexual pairing is also allowed. As you mention, this pairing makes the problem much more difficult, but still keeps it polytime.

  58. Gil Kalai Says:

    “would it be correct to conjecture that ‘quantum fault-tolerance’ is at work in nature even if we can not (as yet) engineer it on a large scale?”

    Michael, this is an excellent way to put it!!

  59. Michael Bacon Says:


    No relation to Dave Bacon — at least as far as I know — I don’t ski well enough to be a close relative 🙂

  60. Scott Says:

    Gil and Michael: It’s a great question why you should need fault-tolerance to simulate physical processes that don’t themselves involve fault-tolerance. Without thinking about it more deeply, the best answer I can give is that exactly the same issue seems to arise in classical computation. Why do we find it so useful to use almost-noiseless, fault-tolerant computers to simulate things as noisy and chaotic as (say) the earth’s climate, or the airflow around a wing? Whatever explanation you prefer, it seems the same explanation should apply in the quantum case.

    PS. When I said “general-purpose,” I just meant universal (able to perform any polytime computation specified in the input).

  61. sean barrett Says:

    This is an interesting debate.

    One area that’s of current research interest is the possibility of “simulating” various condensed-matter systems in other quantum systems where one has more control over the interactions (e.g. using cold atoms in optical lattices to simulate the Hubbard model). Such an experiment would no doubt be of enormous benefit in getting a better handle on the quantum many-body problem. However, I think it would be a bit of a stretch to call such an experiment a “quantum computer”: to draw an analogy with classical physics, there is a long tradition of this sort of model building, but there’s clearly a distinction between a wind tunnel and a classical digital computer.

    In terms of simulating other quantum systems, it seems to me that one advantage a quantum computer (in the sense that QC researchers usually mean) would have over a “model experiment” is that in a QC one can, as far as we understand, implement fault tolerance, so that the noise the physical elements of the computer “feel” is not transferred to the simulation.

    So pretty much what Scott said.

  62. sp Says:

    If quantum computing is impossible, does protein folding then use fault-tolerance and/or QEC? I think that no, and that it is not up to quantum mechanics.

    “These include quark confinement, high-temperature superconductivity, various interesting phenomena in magnetic spin lattices, possibly structure formation in the very early universe… (physicist lurkers: care to add more examples?)”
    Quark existence is not proved. Formation of the early universe is also unknown… Maybe high-temperature superconductivity doesn’t need simulating at all, because you just raise/cool the temperature of the metal and it is a superconductor or not depending on temperature. About various interesting phenomena in magnetic spin lattices I am also skeptical… I just don’t believe that quantum mechanics can do something which can’t be simulated easily with a classical computer. All quantum phenomena are not really quantum phenomena, or they can be simulated classically.

    The Bell theorem showing seems an amazing non-classical thing, that entanglement is a faster-than-light phenomenon or that particles know how to behave each time in each experiment, and it’s fine to me; I believe that this phenomenon can be explained with a single universe and classical physics and math, and that human intelligence just can’t understand how it can be, because it doesn’t know everything… So I admit that maybe quantum mechanics can create some phenomena whose simulation needs exponential computation power, but more probably all those phenomena are too noisy and can be simulated classically.

  63. rrtucci Says:

    sp, 95% of what you are saying doesn’t make any sense. It’s as if you pretended to be an expert in French, among native speakers, after taking only one day of classes. Sorry, but someone had to tell you.

  64. Gil Says:

    Dear rr,

    I beg to disagree; I think that the main obstacle in understanding sp is the English.

    For example, if I understand correctly: in the first half of his last comment he expresses the (reasonable) idea that the difficulty in the examples Scott mentioned (high temperature superconductivity, the early universe, etc.) is that the physics is not sufficiently well understood, and not that a superior computational power is necessary for simulations. (Of course, with a handy QC you can try a lot of possibilities and may get somewhere even without understanding the physics to start with.)

    In the second half sp said that he admits that maybe computationally superior QC are creatable but he tends more to think that because of the noise they are not.

    Difficulty with English is a very serious obstacle in expressing one’s ideas, and weblogs are probably good places to practice.

  65. Michael Bacon Says:

    “Bell theorem showing seems amazing non classical thing, that entanglement is faster than light phenomena or that particles know how each time behave in each experiment and it’s fine to me, I believe, that this phenomena can be explained with single universe and classical physics and math . . .”


    My favorite attempt to address the issues noted above (in particular, in Einstein–Podolsky–Rosen experiments and in quantum teleportation) was contained in a paper by Patrick Hayden and David Deutsch.

    Basically, it contends that all information in quantum systems is, notwithstanding Bell’s theorem, localised. Measuring or otherwise interacting with a quantum system has no effect on distant systems from which it is dynamically isolated, even if they are entangled. Using the Heisenberg picture to analyse quantum information processing makes this locality explicit, and reveals that in Einstein–Podolsky–Rosen experiments and in quantum teleportation, quantum information is transmitted through “classical” (i.e. decoherent) information channels.

    Others with a greater understanding than I have of this type of contention are better placed to address whether it is in whole or part correct. Nevertheless, it is an interesting effort to use quantum information processing concepts to explain some of the apparent “spooky” effects of quantum physics.

  66. Michael Bacon Says:

    Sorry, my last post should have been addressed to “sp”:(

  67. rrtucci Says:

    Gil said:
    “In the second half sp said that he admits that maybe computationally superior QC are creatable but he tends more to think that because of the noise they are not.”

    Yes, a reasonable belief until proven otherwise. This is why, despite giving Scott an A++ for his article, I think the following paragraph is BS. If QCs prove impossible, it doesn’t necessarily mean that quantum mechanics is wrong. That’s like saying that if my car doesn’t start, the only possible cause is that Newton’s equations or thermodynamics are breaking down.

    “The consensus today is that, if quantum computing is shown to be impossible, this will be much more interesting from a fundamental physics standpoint than if it’s shown to be possible. For it would mean that quantum mechanics, the bedrock theory of physics for the past 80 years, is wrong or incomplete.”

  68. Scott Says:

    That’s like saying that if my car doesn’t start, the only possible cause is that Newton’s equations or Thermodynamics are breaking down.

    No, it’s not. It’s like saying that, if cars were shown to be impossible in principle, then some part of current physical theory (which predicts, among many other things, that cars are possible in principle) would have to be wrong. Please read the passage again; I chose my words carefully!

  69. rrtucci Says:

    Suppose someone finds a type of noise that can’t be reduced beyond a certain point (like a zero-point noise), that makes QCs “impossible”. This does not mean that QM is wrong or incomplete.

  70. Nagesh Adluru Says:

    Man, I am neither a QC expert nor a CT expert, but I got what Scott was saying. It’s QM “as physicists have understood it” that could be wrong or incomplete, not that QM is the wrong line of thought.

  71. Scott Says:

    rr: firstly, we now have strong evidence that any such “zero point noise” would have to be fairly large (more than a few percent per qubit per time step) to make QC impossible in principle — so that if it existed, then it probably would’ve been detected already. But leaving that aside, if a zero point noise did exist that made QC impossible, then (by the argument in my comment #41) there would have to be a succinct (polynomial-size) description of the wavefunction sufficient to predict the outcomes of all experiments. And if that were the case, then I would say without any hesitation that QM (as physicists have understood it for the last 80 years) was incomplete — all that time physicists were thinking about exponentially large wavefunctions, when actually there was a much more fundamental polynomial-size description! Maybe you disagree, but if it’s just a disagreement over semantics (i.e., what it means for QM to be “incomplete”) then there’s nothing further to argue about.

  72. Joshua Schraiber Says:

    Scott, I haven’t read an article but a question has been bugging me for the longest time, and I’m procrastinating, so if you don’t mind, I’d like to ask it… it has to do with the NP-completeness of protein folding.

    Okay, to be clear, I’m neither a biochemist nor a computer scientist, so I might be wrong in some basic assumptions here, so let me set them out:

    1) NP-complete problems cannot be solved efficiently in general
    2) Protein folding is NP-complete

    But when you denature a protein and put it in a test tube, it will fold back up into the correct position almost instantly, right? This is where I’m confused, because it seems like that shouldn’t be able to happen. Clearly I’m missing something here, because I know I’m not pointing out something that has been overlooked by minds far greater than my own.

  73. Gil Says:

    Dear Scott and rr

    Regarding Scott’s statement, I think that it reflects a reasonable sentiment/opinion. It is very hard to translate what Scott wrote into a factual statement that we can test or even properly state. There is a pretty large (but not complete) consensus among experts who work in this field that QC is possible at least in principle. Given that, it is not clear what it precisely means to ask “how interesting is the possibility that they are impossible.” Counterfactuals are quite an interesting and tricky subject. So what Scott wrote seems not a scientific, objective evaluation of the consensus, but a legitimate way to express his own opinion and to influence others.

    My personal view is that the possibility that QC are impossible in principle is very interesting, (and that this possibility has a good chance to be correct,) but I think Scott uses superlatives that I would not have used. The possibility that QC can be built is also very interesting. (Probably even more interesting.)

  74. Gil Kalai Says:

    Scott: “Gil and Michael: It is a great question why you should need fault-tolerance to simulate physical processes that don’t themselves involve fault-tolerance.”

    It is great that you liked our problem, Scott. (Albeit, it looked bizarre to you at first.)

    Needs and necessities.

    Maybe using the word “needed” was confusing. Certainly a noiseless QC will be very useful for solving physics problems (like those you mentioned), and, as you said, the same device will work for many problems.

    An NP solver would be very useful for solving protein folding problems and many other problems. But we do not think that an NP solver is necessary (in principle) to solve protein folding problems. In order to have a detailed understanding of why protein folding is computationally easier, we probably need to understand this specific problem much better.

    Problems with our problem.

    I agree with you that in the case of quantum processes it seems that classical simulation would look miraculous, and I agree with Sean that “in a QC one can, as far as we understand, implement fault tolerance.” It is not easy to come up with error models that will not allow QEC and FT; but it is an interesting challenge and I am working in this direction. (Maybe the adiabatic context is also a good place to look.)

    The classical analogy.

    Your analogy, Scott, with classical computation seems, as always, nice. One clear distinction is that the analog of Michael Bacon’s conjecture (comment 52) has a clear yes answer.

    Suppose that somebody had suggested classical computers and fault tolerance centuries ago, and Francis Bacon, the ancient ancestor of Dave Bacon and Michael Bacon, had made in response the following conjecture:

    “Would it be correct to conjecture that ‘fault-tolerance’ is at work in nature even if we can not (as yet) engineer it on a large scale?”

    Of course, unlike the QC case, we see fault tolerance and FT computation everywhere, on the beaches, in the fields and in the streets, and in the hills. We never never miss them. So this is apparently a difference. There are others.


    A closely related question to the one we discuss is whether stable non-Abelian anyons exist (or can exist). Many physicists are working on constructing them or on developing devices to detect them. When I asked a physicist, after hearing him lecture about them, whether he thinks that constructing non-Abelian anyons might be impossible, he replied that the Hamiltonians are so simple and elegant that they must exist. In any case, while there are some skeptics, there is no systematic research (as far as he knew) aimed at showing that stable non-Abelian anyons cannot be constructed.

    If somebody comes up with a good explanation of why such anyons cannot be constructed, this will be a big deal and may be related to very nice stuff. (Still, I wouldn’t go as far as you, saying that this will mean that what QM people have told us since the ’20s is wrong.)

    But, in any case, trying to develop a mathematical scenario in which stable non-Abelian anyons cannot exist looks like a nice problem.

  75. sp Says:

    For me it is very strange why scientists, not knowing precisely every concept of quantum mechanics, say that quantum mechanics isn’t classical and that exponential computation power is needed to simulate it. Because Scott is saying that if quantum computing is impossible then the understanding of quantum mechanics is wrong. So if only a working quantum computer can show that quantum mechanics is valid, then how, without a quantum computer, do physicists dare claim that they know how quantum mechanics works and that an exponential wavefunction is needed to simulate it?

    If quantum computing is impossible, does this mean that the Bell theorem is wrong, that entanglement doesn’t exist, and maybe superposition also? I know only of successful experiments with entangled photons, but with them it is impossible to make a working quantum computer… So maybe it’s fine with entanglement of photons, and with single-photon superposition in a Mach–Zehnder interferometer. Quantum teleportation was made ~57% successful instead of 50% in the classical case. But maybe quantum teleportation also has no relation with an exponentially large wave-function. So if nobody has proofs about the exponentially large wavefunction of quantum mechanics, then Scott’s answer makes sense and physicists were believing all the time in an unproved theory of quantum mechanics…
    Joshua Schraiber, about protein folding there are very many theories, and one of them claims that protein folding solves NP problems, and another that proteins fold into a local energy minimum, or it is just a not-well-enough-understood natural process, which can have a simple genetic or polynomial-algorithm answer…

  76. sp Says:

    On the other hand, whether 80 years of quantum mechanics is wrong or not depends on how you interpret quantum mechanics. If you take a piece (a small part, a few particles) of quantum mechanics, then it seems that you can build a quantum computer, because you have only those few isolated particles; but if you look at quantum mechanics in general, with many particles, then it seems that they make too strong noise on each other. So entanglement is a separate phenomenon, which doesn’t relate to any possibility to build a quantum computer. QM or QC is like quarks: it seems you can build a quantum computer (find quarks), because entanglement and maybe other phenomena show that you can do it (other particles consist of quarks and leptons), but when you try to build it, the noise doesn’t let you do it (quarks weren’t obtained).
    Anyway I think that a quantum computer will never be built, that handy (few-qubit) QCs are unproved to be QCs, and I don’t see any reason to believe that a quantum computer will ever be possible, which doesn’t fit with intuition about the normal classical world… And quantum mechanics will stay the same as it is now, but everybody will just stop talking about quantum simulation…

  77. Joshua Schraiber Says:

    I just realized I meant to say “THE article” in my post. Of course I’ve read “an” article… I’m not that out of the loop

  78. cody Says:

    sp, does relativity fit into your intuition about the classical world?

    and Scott, i’ll probably buy the magazine sooner or later, because it’s always fun to have that physical copy, but i’m not a big fan of buying online magazine articles. not sure why really.

    and Joshua, i’m not a biologist or computer scientist either, though i imagine, if protein folding is NP-complete, and they really fold themselves back up, there might still be more going on. for example, NP problems aren’t necessarily hard if you have a small enough instance, and genetic solutions are not always optimal (it seems that prions are a non-optimal folding, at least in the sense that they are undesirable). though i guess the jury is still out on all this. also, with the number of unknowns in the chemistry and biology of proteins, there might be other mechanisms we don’t yet understand that make the problem more manageable, like catalysts or something.

    i don’t really know much about protein folding or what is/isn’t known about their chemistry and physics, so i probably shouldn’t even be speculating.

  79. Daniel Says:

    Thank you Scott for that great article; it was probably the most interesting one in SciAm in recent years.
    One question that I have is: what is the relation between a bit and a qubit from an information-content point of view?
    – A similarly interesting article was published in SciAm 8/2003, entitled “Are You a Hologram?”. There, the authors claim that the maximum possible information content in a region of space depends on its surface area, not on its volume. However, they measure the information content in classical bits. If quantum computers exist, it would probably be more interesting to know the maximum information content in qubits. So is there a theory that connects the information content of bits and qubits?

  80. cody Says:

    also, does anyone know how much is currently known about error detection/correction in biological chemistry, like protein folding? i guess in the end, the folding seems to be correct the vast majority of the time… maybe there are some biologists or chemists or protein-folding aficionados out there who can set me straight?

  81. Scott Says:

    Joshua, protein folding is not a mystery at all, only an apparent one. The key point is that proteins specifically evolved to fold quickly and reliably — or in other words, to encode easy instances of the protein folding problem! If they didn’t, then they wouldn’t be good proteins. Yet even so, we still sometimes see proteins that can fold in non-optimal ways — that’s exactly what happens with prions.

    If you specifically engineered a gigantic protein whose minimum-energy folding configuration encoded a proof of the Riemann Hypothesis, then there’s no reason whatsoever to suppose it would reach its minimum-energy configuration (just as prions don’t).

    For more, see my survey article NP-complete Problems and Physical Reality.
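    To make the “easy instances” point concrete, here is a toy sketch (my own illustration, with subset-sum standing in for folding, and the function names made up for the example): subset-sum is NP-complete in general, yet “superincreasing” instances, where each weight exceeds the sum of all the smaller ones, fall to a trivial greedy scan, loosely as evolved proteins encode easy folding instances.

    ```python
    def greedy_subset_sum(weights, target):
        """Solve subset-sum in linear time, ASSUMING the weights are
        superincreasing (each weight > sum of all smaller weights)."""
        chosen = []
        remaining = target
        for w in sorted(weights, reverse=True):
            if w <= remaining:       # safe to take greedily: the smaller
                chosen.append(w)     # weights can't add up to w anyway
                remaining -= w
        return chosen if remaining == 0 else None

    # A superincreasing instance: 1 < 2, 1 + 2 < 4, 1 + 2 + 4 < 8, ...
    print(greedy_subset_sum([1, 2, 4, 8, 16, 32], 41))  # → [32, 8, 1]
    ```

    The same problem with arbitrary weights has no known polynomial-time algorithm; it’s the instance, not the problem statement, that is easy here.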

  82. Scott Says:

    what is the relation between a bit and a qubit from an information-content point of view?

    Daniel: If you ask how many classical bits can be reliably stored in and read out from a single qubit, then the answer is “just one.”

    If you ask how many classical bits are needed to teleport a single qubit, then the answer is “two.”

    If you ask how many classical bits are needed to specify the state of a single qubit exactly, then the answer is “infinitely many.”

    So, the real answer is that you need to say operationally what you mean by “information content”! For more see my talk in Iceland about this.
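    To put rough numbers on the second and third answers (my own toy arithmetic, nothing deeper): a pure qubit state is a point (θ, φ) on the Bloch sphere, so writing it down to precision ε takes about 2·log₂(1/ε) bits, which grows without bound as ε → 0, even though a measurement still yields just one bit.

    ```python
    import math

    def bits_to_describe(eps):
        """Classical bits to record theta and phi each to precision eps."""
        return 2 * math.ceil(math.log2(1.0 / eps))

    def prob_of_outcome_one(theta):
        """Born rule: Pr[outcome 1] for cos(theta/2)|0> + sin(theta/2)|1>."""
        return math.sin(theta / 2.0) ** 2

    print(bits_to_describe(1e-3))  # 20 bits for ~3-digit precision per angle
    print(bits_to_describe(1e-9))  # 60 bits; no finite eps gives the exact state
    ```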

  83. Michael Welford Says:

    I would like to expand on something that Scott said about exponential complexity.

    If we do a thermodynamic analysis of a system of 10,000 gas molecules, we are likely to cite a number like n to the 10,000th power. We are not impressed that this number is greater than the number of elementary particles in the observable universe, because we are not counting actual physical objects. We are instead counting all the possible states the system could hypothetically be in.

    If instead we analyze a highly entangled system of 10,000 qubits, we are again likely to cite a number like 4 to the 10,000th power. But now we stand in awe, because this number counts all of the possible states the system is trying out simultaneously while waiting for the measurement that will collapse the system into 10,000 ordinary bits.

    Let’s be clear. A modest sized quantum system waiting to be measured has an internal complexity greater than the outward (measurable) complexity of the observable universe.

    And as Scott says, this complexity has been an essential part of quantum theory for generations. I remember many years ago reading Dirac and simply not believing him when he described the multidimensional tensors necessary to describe a quantum state. (Confession: Dirac was way over my head back then and still is.)

    The thing I don’t understand is why physicists go to such lengths to hide this complexity from the general public. Feynman barely hinted at it. Scott confined his discussion here to a parenthetical clause. I mean this is utterly amazing stuff. The first physicist to write a book about this has a guaranteed best seller.
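    A back-of-envelope sketch of that exponentiality (my own arithmetic, assuming 16 bytes per complex amplitude): merely storing the state vector, 2^n amplitudes for n qubits, outruns any conceivable classical memory long before n reaches 10,000.

    ```python
    def statevector_bytes(n_qubits, bytes_per_amplitude=16):
        """Memory to store all 2**n complex amplitudes of an n-qubit pure state."""
        return (2 ** n_qubits) * bytes_per_amplitude

    for n in (10, 30, 50):
        print(n, statevector_bytes(n))
    # 10 qubits: ~16 KB; 30 qubits: ~17 GB; 50 qubits: ~18 petabytes.
    # By n = 10,000 the amplitude count dwarfs the ~10**80 particles
    # in the observable universe.
    ```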

  84. Scott Says:

    Michael: I couldn’t agree with you more strongly about exponentiality (combined with the interference that makes it manifest) being the most striking thing about quantum mechanics! You might have missed it, but I’ve been trying to shout that particular message from the rooftops for years — on this blog, in my public talks, in the SciAm article (at least in the published version, there’s a whole page about it)…

    I’ve also been trying to correct various misconceptions that immediately arise once people realize that there’s this exponentiality in our best description of the world.

  85. cody Says:

    “Let’s be clear. A modest sized quantum system waiting to be measured has an internal complexity greater than the outward (measurable) complexity of the observable universe.”

    Michael, i think that’s one of the best short descriptions of this stuff i have ever heard (read). it just sort of sums up all these ideas which are usually presented in very confusing manners.

  86. Michael Welford Says:

    I’m glad, relieved even, that you agree with me. It didn’t seem like all of the commenters were really getting what you were saying. I think this may be a case where the professionals don’t realize the profound conceptual breakthroughs that they themselves have made. Think back on the hours of hard study you had to go through to get the intuition and physical insight you have now. I think you need to realize that the level of expertise of the readers, even of a physics blog, won’t approach yours in your own specialty.

    The exponential monster in your article draft seemed a clear and dramatic metaphor, but even that is only really meaningful to someone who has experience equivalent to reading Powers of Ten.

    I think I owe the comparison to the number of particles in the universe to Mermin. But I haven’t found the article.

    And that bit about hiding the complexity was a bit facetious.

  87. Gil Says:

    “I couldn’t agree with you more strongly about exponentiality being the most striking thing about quantum mechanics!”

    This is a very interesting point of view. Historically, what are the origins of regarding exponentiality as such a major aspect of QM? Is it part of the early approaches from the beginning of the last century, or is this mainly part of the impact of the quantum information/quantum computing revolutions?

  88. Scott Says:

    Gil: If you read Schrödinger (for example), he clearly understood this aspect of QM even in 1926. I think Dirac and von Neumann did also (anyone who’s studied them care to comment?). On the other hand, it seems likely to me that Bohr did not understand it, and he’s the one who shaped most physicists’ thinking about QM for most of the century.

  89. Markk Says:

    I want to re-raise Daniel’s point (79) about the Holographic Principle and the infinite number of classical bits needed to specify a qubit. I have always felt this inconsistency. If you can give the number of quantum states in a volume as a finite function of the surface area, which is what the Holographic Principle people seem to do (for example, in Max Tegmark’s arguments that there must be zillions of copies of ourselves), then this seems to contradict the “exponentialness” of QM. If there are only the limited number of visible outcomes that the H.P. would seem to imply (if it were true), then the “infinite” bits are really a sham: they could be collapsed into some finite number of, say, superstates. If not, then even with a finite number of output states there would be infinitely many probability distributions over them, and the H.P. would fall.

  90. sp Says:

    In 1926, spin and the Pauli matrices for it were invented. Maybe entanglement a few years later… If for two entangled photons you need 2 classical bits for each photon, does that prove that the wave-function is exponential? If not, then physicists just create their own imaginary theory of some strange math and physics. And then other “heroes” based on this imaginary theory create their own new theory about quantum computing. To me it is interesting: can there exist more imaginary theories, which with n “qubits” can solve NP problems? Grover’s algorithm seems optimal, but maybe other superalgorithms for non-quantum imaginary theories can exist, which can solve those NP problems which are unsolvable even for a working quantum computer?

  91. Scott Says:

    Markk: The holographic principle is actually an upper bound on Hilbert space dimension — and hence, on the number of qubits rather than the number of classical bits. On the other hand, as I was saying before, an upper bound of n qubits does imply an upper bound of n on the number of classical bits that can be reliably stored and retrieved.

    Mathematically, there’s no problem whatsoever reconciling the continuity of a (finite-dimensional) Hilbert space with its finite capacity to store information — the problem is just in people’s heads.
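    As a sketch of how the continuum coexists with finite capacity (my own illustration, using the standard Helstrom bound for distinguishing two pure states from one copy): the best success probability is (1 + √(1 − |⟨a|b⟩|²))/2, so nearby points on the continuum are almost useless for storing distinct messages.

    ```python
    import math

    def helstrom_success(angle):
        """Optimal success probability for distinguishing |0> from
        cos(angle)|0> + sin(angle)|1>, given a single copy (Helstrom bound)."""
        overlap = math.cos(angle)
        return 0.5 * (1.0 + math.sqrt(1.0 - overlap ** 2))

    print(helstrom_success(math.pi / 2))  # orthogonal states: perfectly distinguishable
    print(helstrom_success(0.0))          # identical states: a coin flip, 0.5
    ```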

  92. Scott Says:

    sp, when Schrödinger proposed wave mechanics in 1926, it really was an audacious extrapolation from what was known at the time. But by now, QM has been spectacularly confirmed in every single instance where physicists have been able to test it — even in experiments involving hundreds or thousands of entangled particles (the Zeilinger group’s buckyball experiments, the SQUID experiments, experiments on spin lattices in condensed-matter physics…). I think you owe it to yourself to learn about some of the actual modern tests of QM before dismissing it as an “imaginary theory of some strange math and physic.”

  93. Matteo Martini Says:

    do you agree with Jonathan Vos Post that “The ‘killer app’ for quantum computers will most likely be simulating quantum physics”?
    If not, where do you think the areas of application of quantum computers will be?
    Sorry if you have already covered the topic..

  94. Scott Says:

    Matteo: That was my sentence, not Jonathan’s! Yes, I agree with the sentence that I wrote! And I think it’s time to start a new thread…

  95. Greg Egan Says:

    Scott wrote (#82):

    If you ask how many classical bits are needed to specify the state of a single qubit exactly, then the answer is “infinitely many.”

    One thing I find interesting about this is that it seems it’s bound up with the classical context in which degrees of freedom in the real world are necessarily embedded: at the very least, the continuous geometry of spacetime. What forces us to use a whole Hilbert space to specify an electron’s spin state is the continuum of classical angles.

    If we could break the universe down into truly minimal units of information, stripped of that macroscopic context, I wonder if the basic unit would still be a qubit, or whether it would be something smaller. Of course it’s possible to describe toy universes with just a few discrete degrees of freedom and model them with Hilbert spaces, but that doesn’t prove that the fundamental building blocks of the real universe work that way.

  96. Amy Hoffman Says:

    Scott, I just wanted to say I’m so happy for you and the wonderful life and career you’ve made for yourself. I don’t know that this is the right place to make this comment since it’s not specifically about your article, but I wanted to let you know.

  97. Deirdre LeVine Says:

    Scott, all of your hard work has paid off. We are so proud. But does P=NP?

  98. Scott Says:

    Amy: Thanks!

    Deirdre: P≠NP (of course I can’t prove it, but I also can’t prove the Roman Empire existed, and I’m about equally confident of both).

  99. Matteo Martini Says:

    sorry for the confusion.
    The last thing I would ask you, sorry for being pedantic: what exactly do you mean by simulation of quantum physics?
    For which particular tasks do you think quantum computers can provide a consistent speedup compared with conventional computers?
    I mean, conventional computers are already used to analyze data coming from the LHC.
    Why and how do you think quantum computers would do better?

  100. sp Says:

    Matteo Martini, I think the quantum-simulation tasks for a quantum computer would be simulating the buckyball experiments, the SQUID experiments, and the experiments on spin lattices in condensed-matter physics.
    “But by now, QM has been spectacularly confirmed in every single instance where physicists have been able to test it — even in experiments involving hundreds or thousands of entangled particles”
    100 or 1000 particles means the wave function has complexity 2^100 ≈ 10^30 or 2^1000 ≈ 10^300. Such a simulation is too hard for a supercomputer, even with just 100 particles. So how were they able to check that such experiments really have an exponential wave function? The one misleading answer would be that the wave function is not classical, so it must be quantum-exponential. But maybe it is classical, just different from what they think… A SQUID of, say, 30 particles is also hard to check, because how is it possible to make a superconductor of 30 atoms? The other experiments mentioned may be better suited to checking whether the wave function is exponential or not… Anyway, if quantum computing is impossible, does that mean that all the mentioned experiments (whose description needs an exponential wave function) are invalid or not?
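
    To put those exponents in perspective, here is a rough back-of-the-envelope sketch (assuming a dense state vector stored at 16 bytes per complex amplitude):

```python
# Classical memory needed to store an n-particle wavefunction as a dense
# vector of 2**n complex amplitudes, at 16 bytes per amplitude
# (two 64-bit floats: real and imaginary parts).
def wavefunction_bytes(n):
    return (2 ** n) * 16

print(wavefunction_bytes(30) // 2 ** 30)   # → 16 (GiB): 30 particles already strain a workstation
print(wavefunction_bytes(100))             # ~2 * 10**31 bytes: hopeless for 100 particles
```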

  101. Scott Says:

    Matteo, by simulating quantum physics I mean simulating quantum physics. The main other known applications of quantum computers are Grover search and breaking most forms of public-key cryptography.
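
    For the curious, Grover search can be simulated classically for tiny instances. A minimal numpy sketch (illustrative only, with a 4-qubit search space and an arbitrarily chosen marked item):

```python
import numpy as np

def grover(n_qubits, marked):
    """Simulate Grover search over N = 2**n_qubits items for one marked item."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))       # start in the uniform superposition
    for _ in range(int(np.pi / 4 * np.sqrt(N))):
        state[marked] *= -1                  # oracle: flip the marked amplitude's sign
        state = 2 * state.mean() - state     # diffusion: reflect about the mean
    return np.abs(state) ** 2                # measurement probabilities

probs = grover(4, marked=5)
print(probs[5] > 0.9)   # → True: found with high probability after ~sqrt(N) iterations
```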

  102. Scott Says:

    sp, the point is that no one has come up with a theory — even a hypothetical toy theory — that could explain the buckyball and SQUID experiments (or any of thousands of other experiments) and does not also involve exponentially-long vectors. As I pointed out in my “Multilinear formulas and skepticism of quantum computing” paper, if QM skeptics wanted to advance science, then the thing to do would be to propose such a theory. Arguing about it like a lawyer is not helpful. And I really don’t want to rehash this debate.

  103. Gil Says:

    Upon further reflection, I tend to agree with rr that there are no grounds for Scott’s rhetoric identifying QC skepticism with QM skepticism. If you refer to QM as the basic mathematical language (say, of noncommutative probability) developed last century, then there is no reason to believe that this language is wrong or incomplete even if (computationally superior) QC is impossible in principle. If you are referring to the ongoing efforts to write the laws of physics using this language, then for sure this is an incomplete effort, but I do not think that QC being impossible conflicts in a substantial way with physicists’ intuitions regarding QM. (Not directly related to QC.)

    Returning to the question of Michael and me: if it is indeed true that all the physics we see around us can be simulated by “unprotected quantum processes,” namely quantum processes that do not involve quantum error correction (e.g., those based on highly entangled states), this suggests that “unprotected quantum computers” are sufficient, in principle, to simulate our physical world. One difficulty with this suggestion is that it requires a solid proposal of what “unprotected quantum computers” are, and that after such a proposal is made, we will need some effort to understand their computational power. But still, pursuing this suggestion does not require any formulation outside QM, nor much that goes contrary to common intuition, except the emerging intuition based on the belief that (computationally superior) QC are possible.

  104. Stas Says:

    This discussion is starting to push on concepts that break down if pushed too hard. I agree that it’s a good time for a new thread, which I’m eagerly awaiting from Scott :).

  105. asdf Says:

    I don’t see why use of error correction is so objectionable. Classical computers rely intensively on error correction and redundancy to stay reliable. They represent data as analog voltages, magnetization levels, etc. and in the case of modern hard drives and digital radio (e.g. cellular phones), they come close to the Shannon limits for how much data they can store given the signal-to-noise ratio of the channel. Shannon’s noisy channel coding theorem is fundamental to data communications. It’s entirely unsurprising if something similar applies in QC.
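
    To make asdf’s point concrete, the simplest classical example is the 3-bit repetition code, which corrects any single bit-flip by majority vote:

```python
# 3-bit repetition code: the simplest classical error-correcting code.
def encode(bit):
    return [bit] * 3             # send three copies of the bit

def decode(bits):
    return int(sum(bits) >= 2)   # majority vote recovers the original

noisy = [1, 0, 1]                # codeword for 1 with the middle bit flipped
print(decode(noisy))             # → 1: a single bit-flip is corrected
```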

  106. Minu Says:

    Dear Dr Scott,

    It was a pleasure reading your post about the lecture that you delivered at DEI, Dayalbagh in January this year. It is, of course, a bit late to be commenting on that, so I thought I would comment on your latest post instead.

    Today, in His discourses during the Satsang [or congregation] at Dayalbagh, our revered Huzur Prof. P. S. Satsangi mentioned your post and your experience here; and so I googled your name and here I am.

    Having read your blog and some of the comments therein, I wish to try giving you a clearer picture about our religion. If you like, you could visit our site and check out the sub-links:

    PS: It was nice to know that the best food you have had here, was at Dayalbagh.

  107. Douglas Knight Says:

    Isn’t there some question to which the answer is: a qubit is worth sqrt(2) classical bits?

  108. Scott Says:

    Douglas: I don’t know. Is there?

  109. Gil Kalai Says:

    Reading Scott’s article carefully (still the draft posted here), let me say that IMHO it really represents an extraordinary achievement: it tells in a lovely way the many scientific stories that are interlaced in the QC saga, it is accessible to a wide audience (and is entertaining), and yet even experts will probably find in the article unfamiliar stuff and new angles on familiar things. It is certainly educational and gives much food for thought. Great job!

  110. John Sidles Says:

    Scott says: no one has come up with a theory — even a hypothetical toy theory — that could explain the buckyball and SQUID experiments (or any of thousands of other experiments) and does not also involve exponentially-long vectors.

    … except, that is, for geometric quantum mechanics. Ashtekar and Schilling wrote a long article on this point of view.

    To put it another way, Scott, isn’t it true that every quantum mechanical experiment that has ever been done can be simulated on a classical computer? So clearly, exponentially-long vectors are not required to explain existing quantum mechanical experiments!

    The main challenge in embracing geometric quantum mechanics is that many of the tools of traditional quantum mechanics no longer are well-defined (the density matrix, for example). Broadly speaking, everything quantum-mechanical that used to be globally defined has to be redefined in purely local terms.

    But perhaps this is good news?

    For example, embracing the purely local geometry of general relativity means giving up right triangles and Pythagoras’ Theorem. This is bothersome, for sure, but what physics and mathematics gains from general relativity is much more valuable than what we lose.

    We had a vigorous lunchtime discussion of this at last week’s Gordon Conference on Mechanical Systems in the Quantum Regime. The consensus of the table was that if experiments show that quantum mechanical state-spaces are compact and dynamical (like GR) rather than unbounded and Euclidean (like Newtonian physics) … well … that’s good news!

    On the other hand, if the state-space of QM really is exactly Euclidean with exponentially many dimensions (which seems kind of implausible when you say it that way) … well … that’s good news too! 🙂

  111. piers i. lewis Says:


    Dear Scott Aaronson:

    I read your essay on the limits of Quantum Computers with great interest. I especially like the uses to which you put the finding that no computer of any kind can solve NP-complete problems quickly. All sorts of fanciful schemes, such as time-travel, are thereby ruled out.

    The NP-completeness theorem, if that’s what it is, would seem to join a large and apparently growing body of similar restrictions: the speed of light as an absolute limit, the uncertainty principle, Gödel’s incompleteness theorem, Turing’s ‘halting’ theorem (I don’t think I’ve stated that properly, but you will know what I’m referring to), and there are others, I’m sure. These restrictions on what’s rationally possible strike me as central in a deep way to what modernity is all about, though I’m not sure I could easily explain why that should be so. Somebody should write a book about this. How about you?

    So far as I know, David Hume is the first secular philosopher to propose any sort of logical restriction on reason. In his Treatise of Human Nature (1739), Hume severed the connection between reason and morality by showing that statements about matters of fact do not logically entail statements of moral obligation, approval or disapproval. Naturally, Hume’s argument did not meet with widespread acclaim. Here it is:

    I cannot forbear adding to these reasonings an observation, which may, perhaps, be found of some importance. In every system of morality, which I have hitherto met with, I have always remark’d, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surpriz’d to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, `tis necessary that it shou’d be observ’d and explain’d; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention wou’d subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceiv’d by reason.

    If you think anything I’ve said is worthy of note, please let me know. I can be reached by email at

    Piers I. Lewis

  112. Scott Says:

    Piers: Thanks for the Hume quote!

    No, it’s not a theorem, just a proposed principle.

  113. asdf Says:

    Piers, you might look at Cristian Calude’s book about complexity, or one of Greg Chaitin’s. I haven’t read either one myself, but going by the blurbs they should be fairly accessible.

    Scott, if you’re still reading this thread, I wonder what you think of the stuff those guys (Calude and Chaitin) write.

  114. baka Says:

    Hello, I read your article and wondered about the ‘time travel’ algorithm. My question is: is it possible to solve harder questions (like PSPACE, or even arbitrary decidable ones) with the same trick? Or is there some special property which only NP problems have?

    PS: I don’t have the strength to look through all the comments here, so this might have been asked somewhere already; if so, sorry about that.
    The topic looks quite interesting, so I wish you would write a post explaining a little bit more for someone who is not very familiar with quantum computing (like me).

  115. Scott Says:

    Baka: Using the time travel trick, you can do PSPACE, but no more than PSPACE. This is a result of myself and John Watrous, which we have yet to write up.

  116. Scott Says:

    asdf: You’d have to point me to some specific writings. I’ve enjoyed many things Chaitin wrote (and also enjoyed arguing with him in person), even though we seem to disagree about everything (including whether what I just wrote is accurate). I haven’t read much by Calude.

  117. Gil Says:

    One fascinating thing about Scott’s (and others’) papers and writings is the attempt to draw very concrete, even technical, conclusions from very general philosophical arguments. This is nice, although one can wonder what the power of such philosophical considerations is in deciding rather technical questions. (I suppose this is another interesting philosophical issue.) Let me disagree with the interesting argument of Scott in comment #41.

    (My “counter argument” below is just regarding Scott’s argument that can be read as QC being “obviously possible” in the context of QM.)

    For scientific computing we can consider two major issues:

    A) What can be done,

    B) What is the required computational complexity.

    Thus, in the context of modeling and simulation, computational complexity is sort of a second-order matter (albeit, extremely important).

    Scott’s argument can be rephrased as follows: First he argues that computationally superior quantum computers being impossible (in principle) means that the computational complexity required to simulate physics is described by classical computing. Let’s take this part for granted.

    Next Scott claims that this means that quantum physics is vastly simpler than we think. There is a miraculous polynomial description of exponentially long vectors.

    However, there is an alternative interpretation:

    The computational complexity required to simulate physics being described by classical computing means that quantum physics is much harder than we think. We will not be able to describe (or prescribe, by computers or by experiments) quantum processes whose simulation requires superior computational complexity.

    I do not think these two interpretations are the same. (I am not sure whether either of them contradicts early intuition about QM, but the second one, especially, does not seem to.)

    (Here is an example: the computational complexity required for us to predict the weather the day after tomorrow, based on measurements conducted tomorrow, is more than what is required to predict the weather 100 years from now based on the same measurements. That is because the latter task is harder.)

  118. John Sidles Says:

    That is interesting philosophy-based informatic physics, Gil!

    But the remainder of this post outlines a philosophy-based physics that is pretty much exactly opposite. So maybe the overall lesson is that philosophy-based quantum-informatic physics does not unveil “the truth” but rather unveils an overview of all the possibilities (this overview being just as valuable).

    We’ll begin by revisiting a “spooky informatic mystery of classical physics” whose main virtue is that it is an *obvious* mystery. Presumably when the spooky mysteries of quantum physics are unveiled, they too will seem obvious, so hopefully our time is not wasted when we review these obvious classical mysteries.

    Our spooky classical informatic mystery concerns von Neumann machines. We start with a von Neumann machine that has a one hour power supply. Assuming (classically) that matter is infinitely divisible, we instruct the machine to build two duplicates of itself, each half the size of the original, with double the clock speed.

    Well, you can see how this game goes … after two hours we have constructed an infinite binary tree of von Neumann machines. By including an extra gigabyte of RAM in each machine, and a communication link between nodes, we can search the graph for short proofs of P≠NP, or in general, solve any NP-complete problem we want.
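
    The timing claim is just a geometric series: if generation k takes (1/2)^k hours to build (half the size, double the clock speed), the infinite tree is finished in sum over k of (1/2)^k = 2 hours. A one-line check:

```python
# Each generation is half the size with double the clock speed, so
# generation k takes (1/2)**k hours; the total converges to 2 hours.
total = sum(0.5 ** k for k in range(60))   # partial sum of 1 + 1/2 + 1/4 + ...
print(total)                               # → 2.0 (to machine precision)
```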

    So on purely philosophical-informatic grounds, we (reluctantly) conclude that perhaps matter is not infinitely divisible. Too bad!

    Let’s weaken our infinite-divisibility assumption. Basically, let’s substitute exponentially-great state-space dimensions for infinite divisibility. Specifically, let’s assume that the state-space of matter is exponentially large (and complex … it is a quantum state-space).

    Bolting on some other assumptions (like linearity and POVM-based measurement) we find that in this restricted quantum state-space we can (presumably) no longer solve NP-complete problems, but we can still solve some classes of problems that are non-classically hard (like database search and factoring).

    According to the rules of philosophy-based informatic physics, we must conclude (reluctantly) that the quantum state-space of matter is not exponentially large … because the ultra-conservative quantum-informatic view is that spooky capabilities like non-classically fast database searching are just too “informatically weird” for Nature to embrace.

    What could this smaller (but still quantum) state-space be? For reasons that Scott set forth very clearly in his Scientific American article, we have to be careful. We cannot assume that the state-space is linear and the dynamics is nonlinear … this leads to NP-complete computing and/or non-causal communication. But we *can* assume that the quantum state-space is low-dimension and curved, and that the dynamics is simply ordinary quantum mechanics projected onto this curved space.

    AFAICT, none of the traditional informatic no-go theorems apply in this latter case. Perhaps someone can comment upon this?

    In any case, we are therefore led (by this highly uncertain philosophical physics reasoning) to entertain the idea of a world in which quantum state-space is curved and dynamical, much like the state-space of general relativity. IMHO this would be a perfectly fun world to live in … which is one of the main goals of philosophy. 🙂

    Even supposing that Nature’s quantum space is exactly linear, exponentially large, and non-dynamical (which seems like a mighty big assumption when we say it that way), it surely is a very important fact about quantum systems that they can be well-approximated by a curved state-space of lower-dimensionality (as the quantum chemists are teaching us).

    To return to a classical analogy, even if *sometimes* the simplex algorithm requires exponentially many steps to run, the fact that *usually* it requires only polynomially many steps, is a very important fact about the world, from both a mathematical and practical point of view.

  119. Gil Says:

    Dear John, I also tend to doubt whether philosophical arguments like the one Scott presented in comment #41 can be useful in discussing concrete technical issues. But such arguments are often interesting, and their potential relevance (in particular cases and in general) is also interesting. (Often, I find it hard to understand such an argument; e.g., I do not understand how you deduced that the “space of states” is curved and low-dimensional, and what precisely this means. I also do not understand the analogy with the simplex algorithm.)

    In the case of Scott’s argument, I think I understand what he claims:

    “if quantum mechanics can’t be used to efficiently sample any probability distribution beyond what a classical computer can efficiently sample (forget for the moment about deciding all languages in BQP), then there must be an efficient classical algorithm to simulate the outputs of a quantum process.”

    What I say is that by “a quantum process” we should mean “a quantum process that can be simulated to start with” (and not “any quantum process we can imagine”).

    Scott continues:

    “But that, in turn, means that there’s some complete description of interacting physical systems that’s vastly more efficient than writing down the entangled wavefunction (i.e., that requires polynomially many rather than exponentially many bits).”

    And, also here, when Scott talks about “interacting physical system” he really refers to “interacting physical systems that we can describe to start with” (not anything that we can imagine.) So this paragraph can be read as a reference to the inability to describe complex quantum systems and not to the existence of a short description for an arbitrary quantum system.

  120. John Sidles Says:

    Hi Gil! A concrete example of Scott’s

    description of interacting physical systems that’s vastly more efficient than writing down the entangled wavefunction

    is the set of states known to quantum chemists as Slater determinants, which are manipulated by algorithms that are collectively known as Löwdin Rules.
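
    As a toy illustration of the idea (not the Löwdin machinery itself, and with made-up one-dimensional orbitals), a Slater determinant packs an antisymmetric n-particle amplitude into polynomially many single-particle orbitals:

```python
import numpy as np
from math import factorial, sqrt

# Toy Slater determinant for n fermions in made-up 1-D orbitals.
# Real quantum chemistry uses molecular orbitals; the structure is the same:
# an n-particle amplitude from an n x n determinant, not a 2**n-long vector.
def orbital(i, x):
    return x ** i * np.exp(-x ** 2 / 2)   # polynomial times a Gaussian

def slater_amplitude(xs):
    n = len(xs)
    M = np.array([[orbital(i, x) for x in xs] for i in range(n)])
    return np.linalg.det(M) / sqrt(factorial(n))

a = slater_amplitude([0.3, 1.1, -0.7])
b = slater_amplitude([1.1, 0.3, -0.7])   # same positions, two particles swapped
print(np.isclose(a, -b))                 # → True: antisymmetric under exchange
```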

    If we consider these Slater determinants as geometric objects, they turn out to be objects that are known to differential geometers as Grassmannians, which have the interesting curvature property of being Einstein manifolds (their metric tensor is proportional to their Ricci tensor).

    IMHO, it is pretty astonishing how little is known about these quantum state-space manifolds from a QIT point of view, given that the fundamental articles on quantum chemistry are among the most-cited in all of the scientific literature.

    A good starting point is a recent pre-print by Beylkin, Mohlenkamp, and Pérez.

  121. Scott Says:

    Gil, I find it interesting that you see me, specifically, as drawing technical conclusions from philosophical arguments. Physicists might see your a priori argumentation against quantum fault-tolerance in the same light… 🙂

  122. John Sidles Says:

    I did a search on this thread for the word “noise” and found two posts that I thought were nicely complementary.

    Gil (Kalai) said:

    If we want to simulate realistic noisy quantum processes (including all [Scott’s] examples) there is no a priori reason to expect that the full power of quantum computers will be needed. If these processes do not involve any phenomena similar to the highly entangled QEC states or any fault tolerance it is not clear why fault tolerance is required for their simulations.

    Scott (Aaronson) said:

    Why do we find it so useful to use almost-noiseless, fault-tolerant computers to simulate things as noisy and chaotic as (say) the earth’s climate, or the airflow around a wing?

    To reconcile these two points of view (and to suggest an explicit answer to Scott’s question): large-scale classical simulations typically *do* include either an explicitly random element (that “state of sin” called a random number generator) or else an explicitly information-destroying element (like artificial viscosity in turbulent fluid flow).

    In consequence, conceiving classical simulations as being “almost-noiseless” is somewhat misleading. The mathematical reality is that *most* of the effort and sophistication in the design of classical simulation algorithms is focussed upon the efficient destruction of irrelevant information!

    What are we to make of this? My own view pretty much matches Gil’s (as I understand it) — to me it seems plausible that noisy quantum systems are not algorithmically harder to simulate than noisy classical systems. In particular, the role that artificial viscosity plays in classical simulations is played in quantum systems by the realization of noise processes as covert measurement processes.

    In consequence, just as fluid-mechanical simulations need not encode microscopic turbulent eddies, quantum simulations need not encode exponentially high-order correlations.

    From this point of view, zero-viscosity fluid flow and zero-noise quantum dynamics both are computationally intractable, for much the same underlying reasons. Conversely, viscous fluid flow, and noisy quantum systems, both can be simulated with (possibly immense) classical resources.

    From this point of view, simulating quantum systems is easier than simulating classical system for the common-sense reason that the mathematics of quantum mechanics has more sophisticated tools for information destruction! 🙂

  123. Gil Says:

    Dear Scott, I just referred to comment #46 and to some of your other writings (e.g., reasons to believe I/II, and various others). This was not meant as criticism but as a compliment. When I said it is fascinating, I really meant it. (I also meant to say that I doubt whether philosophical arguments can be useful in deciding technical matters. I do think they are useful in discussing these matters.) Of course, you are also famous for your papers where you draw technical conclusions from technical arguments.

    I like such very general arguments, and I do make them myself. E.g., my suggestion (#48) that realistic quantum processes in nature do not involve quantum error-correction/fault-tolerance, and that therefore computationally superior QC are not required for their simulation, is also a “philosophical,” non-technical suggestion. (And you offered some counter-arguments in #55 and #60, to which I offered some counter-counter-arguments.) As I said, such arguments of a very general nature are interesting, and so are attempts to make counter-arguments and to find a methodology for discussing them. So if you see some problem with my recent counter-argument #117, I’d be happy to hear it.

    As for my “a priori” argumentation “against” quantum fault tolerance you are absolutely correct. Physicists (and CS people and mathematicians) might see (and actually do see) my argumentation in the same light, and to a large extent I agree.

    Part of the challenge is to translate rather general ideas into technical, mathematically formulated conjectures. Even then, it is very difficult to distinguish conditions that would cause QEC/FT to fail from conclusions drawn from the (hypothetical) failure of QEC/FT. (This is what Greg referred to as tautological kibitzing, a term I actually like, as mathematics is a sort of tautological kibitzing.)

    In the end, experimental QEC or some other experimental physics will decide these issues (and as there is a wide consensus now that QC are possible, this may well be the outcome). But it would be nice to develop, in mathematical language, an appealing story of how QC fail, a little like the highly developed, terrific story of successful QC. (And it is also nice to discuss these matters from a philosophical point of view, as you and I do.)

  124. Scott Says:

    Gil, no offense taken and none intended. I wish all QC skeptics were as reasonable as you…

  125. Heatherrr Says:

    When I realized that all NP problems have one characteristic in common, I was, like, “Oh, my God!”