“So You Think Quantum Computing Is Bunk?”

On Wednesday, I gave a fun talk with that title down the street at Microsoft Research New England.  Disappointingly, no one in the audience seemed to think quantum computing was bunk (or if they did, they didn’t speak up): I was basically preaching to the choir.  My PowerPoint slides are here.  There’s also a streaming video here, but watch it at your own risk—my stuttering and other nerdy mannerisms seemed particularly bad, at least in the short initial segment that I listened to.  I really need media training.  Anyway, thanks very much to Boaz Barak for inviting me.

68 Responses to ““So You Think Quantum Computing Is Bunk?””

  1. Ross Snider Says:

    The “many worlds versus many words” bit was very awesome. I think “many words” is my new favorite interpretation of QM. Seems to have the smallest Kolmogorov complexity of any of the interpretations to me.

  2. James Gallagher Says:

    Hi Scott,

    You don’t need to be scientifically radical or have a revolution in Quantum Mechanics to explain why QC might not work. In fact a rather modest interpretation will do – suppose Nature is a UNIQUE path in a “Many Worlds” Universe – with the path/branches randomly selected by Nature (So that the wavefunction of the universe is “collapsing” randomly every ~10^-43 secs for example)

    Now you have quantum superpositions but you don’t have “many worlds”, and since you have a universe evolving along a unique path, Nature can’t factorise large numbers any better than a classical computer can.

  3. Attila Szasz Says:

    @James #2

    I seriously hope you meant it as a joke/parody of skeptics.

  4. Pangloss Says:

    Was this a job talk? Sure seems like it…

  5. Scott Says:

    Pangloss #4: No, I assure you a job talk looks very different.

  6. Gil Kalai Says:

    Nice lecture, Scott. Since my point of view is briefly mentioned in your lecture let me add a link to my own recent presentation at MIT’s quantum information seminar entitled “Why quantum computers cannot work and how.”

    Two remarks: First, I try to put on formal grounds not only assertions where Scott and I disagree but also assertions where we both seem to agree like: “It indeed seems possible that ultimately, “pure” BosonSampling will not scale beyond a few dozen photons. To go significantly beyond that, you would want quantum error-correction, but if you have that, you could probably just build a universal QC instead.”

    Second, a non-trivial number of Kalais were positioned in the lecture hall to rapidly confront any threshold being passed or other troubles or abuse. As I said, the talk was nice and there was no need to put them into action.

  7. Scott Says:

    Thanks so much, Gil!

  8. Rahul Says:


    Loved those slides! Great.

    Especially loved the clarity of your:

    (1) Does impossibility of QC mean breakdown of QM?
    The short answer is: No

    (2) If computationally superior quantum computers are not
    possible, does it mean that, in principle, classical computation
    suffices to simulate any physical process?
    The short answer is: Yes

  9. James Gallagher Says:

    Hi Attila #3

    Joke? Parody? I would have referred to deterministic hidden variables or superdeterminism if I was looking for laughs. 🙂

    No, I’m serious, it’s not so crazy is it? We allow Nature to do all the quantum jumps (that ever occur) and evolve the universe wavefunction via Schrödinger evolution every time a random jump occurs, so that the whole universe knows what’s just happened.

    We just have to assume the evolution steps (i.e. each quantum jump) occur at a sufficiently fine time resolution (say in steps of ~Planck time on average) to be consistent with current experimental observations.

    ‘t Hooft suggests something similar but without the randomness (so I’m not clear how he gets superpositions at all); his evolution operator is just a gigantic permutation matrix operating on the entire universe state vector (this may be slightly oversimplified, but I think it’s the gist).

    My suggestion is really not so different from the many-worlds interpretation, but introduces discrete time evolution seeded by random quantum jumps (the explicit randomness enables us to get rid of the multiple branches and allows us to have a unique evolution path in Hilbert space).

    Also we now have no measurement-induced “collapse” (nature does all the collapsing, regardless of our existence)


  10. Scott Says:

    James #2 and #9: If there were actually “dynamical collapses” roughly once per Planck time, then we wouldn’t see interference between different branches of the wavefunction over timescales much longer than the Planck time. But of course, we do see such interference (even, as I said in my talk, over ~15 minutes = ~10^46 Planck times!). And accounting for such interference is the entire reason why we need to describe Nature using complex vectors in Hilbert space in the first place. So it’s your proposal itself that collapses ~1 Planck time after being proposed. 🙂
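    (A quick back-of-envelope check of that ~10^46 figure, sketched in Python; the Planck time value is the standard ~5.4×10^-44 s:)

```python
import math

# How many Planck times fit into ~15 minutes of observed coherence?
planck_time = 5.39e-44          # seconds (approximate standard value)
coherence = 15 * 60             # 15 minutes, in seconds

ratio = coherence / planck_time
print(f"~10^{round(math.log10(ratio))} Planck times")   # ~10^46
```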

  11. James Gallagher Says:



    Imagine the universe consists of a zillion states, each represented by a complex number. Now suppose any of those states can randomly change complex phase. Now, whenever a state changes its complex phase, you evolve the universe via a unitary operator (so the whole universe gets updated depending on what the previous single random change was).

    There, you have a single path evolution with superpositions

    WE DON’T KNOW WHAT THE EVOLUTION PATH IS UNTIL WE MEASURE IT!!!! (It is in a superposition of all possible evolutions)

    So measurement just REVEALS knowledge about the state-vector/wave-function – nothing else

    Of course you will get all the usual quantum interference effects – this model will only depart from standard QM at small time scales, which we currently can’t measure. But it also won’t allow exponential speed-up with QC algorithms – since the “actual” superpositions are constantly being destroyed.

    The key point is that a fundamental random seeded evolution is equivalent to superpositions in MW, but that doesn’t mean the superpositions have an actual ontological existence, like in MW (well, they do, but only for minute time steps).


  12. Scott Says:

    James, if we indeed “get all the usual quantum interference effects,” then for that very reason we can do QC, which is conceptually just like the double-slit experiment, except with a large number of particles interacting in a complicated way. Nothing in a QC depends on what’s happening at the Planck scale—or rather, if it did, then the discovery of that would be one of the great revolutions in the history of physics.

    Here’s a simple sanity check: what’s the simplest experiment for which your model predicts an outcome different from that predicted by standard quantum mechanics?
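    (To make the double-slit analogy above concrete, here is a minimal toy calculation, with made-up numbers: classical probabilities for the two paths simply add, while quantum amplitudes add before squaring and so can cancel:)

```python
# Two paths to a detector, double-slit style.

# Classically, each path contributes probability 1/2; contributions add.
p_classical = 0.5 + 0.5                 # detector fires with prob. 1.0

# Quantum-mechanically, each path contributes an amplitude of
# magnitude 1/sqrt(2); with a relative phase of pi, they cancel.
amp = (1 / 2**0.5) + (-1 / 2**0.5)
p_quantum = abs(amp) ** 2               # 0.0: destructive interference

print(p_classical, p_quantum)
```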

  13. James Gallagher Says:

    The simplest experiment must be one which relies on the ontological existence of the superpositions. So Quantum Suicide would probably be the simplest, but this is not practical. So in fact, we must look at another quantum scenario that requires the superpositions to exist – and that is in Quantum Computing algorithms, I do not think there is a simpler test of the ontological status of superpositions.

    In the probabilistically seeded discrete time evolution model, the superpositions exist mathematically (so most predictions of the model will be the same as for MWI), but I think you are incorrect to say QC algorithms follow from the basic postulates of QM – they do not; e.g. QC doesn’t work with deterministic hidden-variable models which are consistent with the basic postulates of QM.

    Do we get anything extra from the random seeded discrete time model to make it worth considering? Yes, we get a speed of light limit (but note that the time steps need not be constant like in a deterministic cellular automaton model; they may just average ~10^-43 secs (or whatever))

  14. John Sidles Says:

    Scott opines: “One historical analogy: People thought Charles Babbage’s idea was cute but would never work in practice. And they were right — for ~130 years!”

    The quantum-Babbage analogy can be traced back to the early days of the quantum computing literature, e.g., in Andrew Steane’s highly-recommended 1997 survey Quantum Computing we find:

    “The father of computer science is arguably Alan Turing (1912-1954), and its prophet is Charles Babbage (1791-1871). Babbage conceived of most of the essential elements of a modern computer, though in his day there was not the technology available to implement his ideas.”

    What have we learned since 1997?


    Post-1997 Lesson Learned #1  The STEM capabilities of Babbage’s era were entirely adequate to build the machines that Babbage envisioned

    As inarguable evidence, at least two working Babbage engines have been constructed to Babbage’s design, using only nineteenth century technologies (amazing video here).


    Post-1997 Lesson Learned #2  The STEM capabilities of the modern era are entirely inadequate to build the machines that the QIST Roadmaps envisioned

    As inarguable evidence, over the last three decades, a level of global STEM investment that would have sufficed to build many generations of Babbage machines has not sufficed (arguably) to advance beyond even first-generation quantum computers.


    A modest suggestion  From a 21st century perspective, the appropriate historical exemplar for quantum computer development is not Babbage’s quest to build a computing engine, but rather Isaac Newton’s quest to synthesize a Philosopher’s Stone (see, e.g., Newton’s Lapis Philosophicus cum suis rotis elementaribus).

    The father of all perfection in ye whole world is here. Its force or power is entire if it be converted into earth. Seperate thou ye earth from ye fire, ye subtile from the gross sweetly wth great indoustry.

    It ascends from ye earth to ye heaven & again it desends to ye earth and receives ye force of things superior & inferior.

    By this means you shall have ye glory of ye whole world & thereby all obscurity shall fly from you. Its force is above all force. for it vanquishes every subtile thing & penetrates every solid thing. So was ye world created.

    From this are & do come admirable adaptations …

    Newton’s pragmatical intent, and his experiment-oriented research program, and his informatic regard for alchemical principles, all are strikingly echoed in the QIST Roadmaps:

    Quantum computation (QC) holds out tremendous promise for efficiently solving some of the most difficult problems in computational science …

    It should be possible to reach the “quantum computer-science test-bed regime”—if challenging requirements for the precision of elementary quantum operations and physical scalability can be met.

    The quantum computer-science test-bed destination that we envision in this roadmap will open up fascinating, powerful new computational capabilities.

    The journey ahead will be challenging but it is one that will lead to unprecedented advances in both fundamental scientific understanding and practical new technologies.

    From a modern perspective, we appreciate the good-and-sound STEM reasons why Newton’s alchemical roadmap could not be pursued to completion (the atomic foundations of chemical processes, the symplectic foundations of separation processes, and the evolutionary foundations of biological processes all being largely absent from the STEM worldview of Newton’s generation).

    Are there comparably good-and-sound STEM reasons why the QIST Roadmap cannot be pursued to completion? Is there comparably much still to learn regarding the fundamental mathematics and physics of chemical, separatory, and biological processes? In a nutshell, is our present appreciation of QC/QIT “alchemically” limited, in consequence of inadequate mathematical foundations and over-idealized dynamical assumptions? These are open questions!

    As for the QC/QIT/Babbage analogy … well … it served a useful purpose in decades past, but nowadays it is obsolete. The clear STEM lesson of recent decades is that the QC/QIT/alchemy analogy is apropos.

    Or at the very least, the lessons of Newton’s alchemical science are usefully thought-provoking for QC/QIT researchers! 🙂

  15. Scott Says:

    James #13: It’s not sufficient to give “Quantum Computing algorithms” as your answer. For as you know, quantum algorithms like Grover’s and Shor’s have been implemented, on up to 7 qubits, and the predictions of QM have always been perfectly confirmed. So the burden is on you to say: which quantum algorithm, and on what number of qubits? Where will this supposed breakdown of QM first become detectable, and why?

  16. Scott Says:

    John Sidles #14: I find it perfectly plausible that, in the year 2300 or whatever, some hobbyist will go back and implement quantum computing with “mere 1997 technology,” just to show it could’ve been done. Everything’s easier in retrospect, and the fact that someone built the Analytical Engine recently is only weak evidence that Babbage realistically could’ve built it.

  17. John Sidles Says:

    Scott opines: “Everything’s easier in retrospect, and the fact that someone built the Analytical Engine recently is only weak evidence that Babbage realistically could’ve built it.”

    Hmmm … visitors to IBM’s archives learn that by 1853 Georg and Edvard Scheutz built and operated two Babbage-style difference engines … isn’t this prima facie proof that “Babbage realistically could’ve built [his engine]?”

    Whereas in striking contrast, we now appreciate that — for reasons that seem simple now, but were far from obvious in the late seventeenth century — Isaac Newton’s alchemical roadmap could not realistically hope to transmute base metals into gold, or to extend life, or to cure diseases, or indeed to achieve any of the main objectives of the alchemical roadmap.

    The QIST Roadmap’s goals are reasonably explicit:

    It is known that it should be possible to reach the “quantum computer-science test-bed regime” … the ten-year (2012) goal would extend QC into the “architectural/algorithmic regime,” involving a quantum system of such complexity that it is beyond the capability of classical computers to simulate.

    Now it is natural to wonder whether the QIST vision of a “quantum computer-science test-bed” more closely resembles Babbage’s Engine or Newton’s Lapis Philosophicus? … to wonder whether the QIST obstructions reside mainly in engineering, to be overcome by ingenuity? … to wonder whether the QIST obstructions are showing us that our present understanding of quantum dynamics embodies “alchemical” misapprehensions, to be remediated (we hope!) by new foundations in mathematics and physical science?

    Conclusion  Attractive arguments can be advanced to support all possible answers to these tough questions. And these uncertainties are very good news for young STEM professionals!

  18. Scott Says:

    John Sidles: Well, I understand that there were major differences between the Difference Engine and the Analytical Engine. Among other things, only the latter would’ve been Turing-complete.

  19. DMcM Says:

    I have a query about the many-worlds interpretation. It’s probably a misunderstanding on my part, but here seems like a good place to ask.

    Let’s say you have a device which measures the spin of a particle. A dial on the device will point to “+” if the spin is up and “-” if the spin is down. After measurement you basically find that the reduced (internal environment of the device traced out) density matrix of the “equipment and particle” system has become almost exactly diagonal, with the two diagonal pieces:
    “Particle spin up, Dial at +”
    “Particle spin down, Dial at -”

    The Many-World’s interpretation takes this to represent two worlds. The two points I don’t understand are:

    (a) What is the meaning of the probabilities on the diagonal if they aren’t both 1/2? Is it the probability of finding yourself in one of the universes? Surely that could only be 1/2 though, since there are only two?

    (b) Why interpret it as two different worlds? When a density matrix becomes diagonal, you essentially have a superselection rule generated by the environment. Different values of the Dial observable cannot be placed in superposition, just like electric charge in QED. In QED the way states factor over the algebra of observables means that kets which are a linear combination of two different electrically charged states are actually just classical probabilistic mixtures.
    Aren’t the dial values in the apparatus now just in a classical mixture, and so shouldn’t we interpret the values along the diagonal of the density matrix in the same way as the 1/6 probability values for a fair die roll?

    I’ve probably expressed myself badly, hopefully somebody can clear up my confusion.

  20. Scott Says:

    DMcM #19: What you’re asking about are precisely two of the most common criticisms of MWI! I recommend reading any pro- or anti-MWI paper on the web (or maybe some commenter here feels like taking a swing). Briefly, what an MWI proponent would say to your questions is:

    (a) The meaning of the probabilities is something like “fraction of world-volume.” Operationally, the probabilities simply give you the odds at which you should bet on ending up in one world as opposed to the other world.

    (b) The reason MWI proponents want to interpret the branches as two different worlds is that the formalism of QM—the same formalism needed to explain the double-slit experiment, etc.—does seem to imply that both of the worlds “exist” in some sense (for example, the simplest digital computer simulation of QM would record an amplitude for both of the worlds). And in principle, one could even imagine an experiment in which two macroscopically-distinct worlds—with observers having registered different experiences, etc.—interfered with one another. This was David Deutsch’s point in the early 80s: he argued that, if such an experiment were successfully carried out, then “collapse interpretations” (to whatever extent they’re well-defined) would seem to be ruled out experimentally, and the ability of conscious beings to have superpositions of different memories and experiences would’ve been experimentally confirmed. (Not surprisingly, other people argue that even then, one still wouldn’t have established the reality of many worlds. Do you see what I meant by “Many Worlds vs. Many Words”? 🙂 )
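    (For what it’s worth, the measurement setup in #19 is easy to play with numerically. A minimal NumPy sketch, assuming an idealized one-qubit “environment”: entangling the system with it and tracing the environment out yields exactly the diagonal reduced density matrix described above:)

```python
import numpy as np

# System qubit starts in (|0> + |1>)/sqrt(2); a CNOT copies its value
# into an "environment" qubit, modeling measurement/decoherence.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
zero = np.array([1.0, 0.0])
state = np.kron(plus, zero)             # joint state in the |s e> basis

cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
state = cnot @ state                    # now (|00> + |11>)/sqrt(2)

# Full density matrix, reshaped to indices (s, e, s', e'), then the
# environment index is traced out to get the reduced density matrix.
rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
rho_sys = np.einsum('iaja->ij', rho)

print(rho_sys)                          # diagonal: diag(1/2, 1/2)
```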

  21. PS Says:

    Minor bibliographical point: I had a vague recollection that I had heard something similar to the “many worlds vs. many words” quip before. It turns out Max Tegmark said something very similar in the abstract of one of his papers. Quote below:

    It is argued that since all the above-mentioned approaches to nonrelativistic quantum mechanics give identical cookbook prescriptions for how to calculate things in practice, practical-minded experimentalists, who have traditionally adopted the “shut-up-and-calculate interpretation”, typically show little interest in whether cozy classical concepts are in fact real in some untestable metaphysical sense or merely the way we subjectively perceive a mathematically simpler world where the Schrodinger equation describes everything – and that they are therefore becoming less bothered by a profusion of worlds than by a profusion of words.

  22. James Gallagher Says:

    Scott #15

    I mean any QC algorithm that demonstrates exponential (or any beyond-classical) speedup. With the small-qubit scenarios we can’t be sure that we have true “Quantum Computation” – there may be subtle details of the experimental setup that we have overlooked, i.e. small-qubit demonstrations are not a “proof” that large-scale QC is possible (even in principle).

    If an exponential (or similar) speed-up of a calculation beyond what is considered possible classically (I would be convinced by a large factorisation, even if not proved to be outside P) is ever demonstrated, then, like the Monkees, I’m a Believer.

    I actually do want QC to work, but I reckon, perhaps controversially, that evolution would have discovered it and implemented it, if it were possible. Same reason I don’t believe in levitation and telepathy. (Evolution has exhaustively probed reality at earth conditions for us, but obviously I realise evolution couldn’t make use of Bose-Einstein condensates, lasers etc.)

  23. anonymous Says:

    My personal view as a man from the street: the situation with QC seems to be similar to the flight to Mars; the timeline is always somewhere in the near, not so near, then distant future. But while I am sure that eventually NASA will send someone to Mars (if the USA doesn’t bomb North Korea first with some of their “tactical” weapons), simply because they need to revive the dying enthusiasm for space/space missions/star wars and the like, QC, as time passes, looks more and more like philosopher’s stone, with alchemists pronouncing strange formulas, calling the entire universe (or perhaps many universes) to help them, taking money from broke barons and counts in order to see at last the golden glitter among the ashes in the retort…

  24. Clayton Says:

    Thanks as always for the slides. Could I trouble you to post these as .pdfs, though? Some of us don’t have Powerpoint on our machines. Thanks!

  25. Audun Says:

    You mention that if quantum computation were impossible, there would exist efficient classical algorithms for simulating “realistic quantum systems”.

    There has actually been a lot of activity in this area recently, at least if you take “realistic systems” to mean many-particle systems with local interactions. In that case, the low-energy states feature a small amount of entanglement, which allows an approximate, polynomial description of the system, again allowing low-energy initial states to be time-evolved efficiently (at least for short timescales). See e.g. [1].

    Of course, one could prepare high-energy excitations which would be hard to simulate. Is this what happens in Shor’s and Grover’s algorithms?

  26. Audun Says:

    Forgot the reference:
    [1]: G. Vidal: Efficient classical simulation of slightly entangled quantum computations. DOI: 10.1103/PhysRevLett.91.147902
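    (The “slightly entangled” criterion in Vidal’s paper is quantified by Schmidt rank, which is easy to compute for small examples via the SVD. A minimal NumPy sketch with toy two-qubit states: a product state has a single nonzero Schmidt coefficient, while a Bell state has two:)

```python
import numpy as np

def schmidt_coeffs(state, d_a, d_b):
    """Schmidt coefficients of a bipartite pure state on C^d_a x C^d_b:
    the singular values of the state vector reshaped into a matrix."""
    return np.linalg.svd(state.reshape(d_a, d_b), compute_uv=False)

# Product state |00>: Schmidt rank 1, trivially cheap to represent.
product = np.array([1.0, 0.0, 0.0, 0.0])
print(schmidt_coeffs(product, 2, 2))    # one nonzero coefficient

# Bell state (|00> + |11>)/sqrt(2): Schmidt rank 2, the maximally
# entangled case for two qubits.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(schmidt_coeffs(bell, 2, 2))       # two equal coefficients
```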

  27. James Gallagher Says:

    arXiv link for the paper Audun has mentioned:


    (for a moment there I thought it was by Gore Vidal)

  28. Scott Says:

    anonymous #23: If building a QC would be as easy (at least from a technological standpoint) as sending humans to Mars, then I guess this debate is over!

    Your position seems to be not that we can’t do such things, but that we shouldn’t, since it “looks more and more like philosopher’s stone, with alchemists pronouncing strange formulas, calling the entire universe (or perhaps many universes) to help them, taking money from broke barons and counts in order to see at last the golden glitter among the ashes in the retort” [sic].

    In that case, an obvious question arises that you don’t address: what about the people who built classical computers, or the Internet, or airplanes, or modern medicine? Should they also have turned back from their Faustian bargain?

  29. Scott Says:

    Clayton #24: If you go to the streaming video page, they made a PDF of the slides.

  30. Scott Says:

    James Gallagher #22: You’ve refuted your own “evolutionary argument” against QC, but to pile on additional examples — evolution also didn’t “discover” nuclear weapons (or any other use of nuclear energy), gunpowder, the wheel, … so that argument is 100% invalid, and shouldn’t even enter the discussion.

    Regarding experiments with small numbers of qubits: well, whatever “subtle details of the experimental setup” were overlooked, must have just so conspired that the experiments all perfectly confirmed QM! What do you make of the fact that experiments with thousands or millions of entangled particles—for example, involving Bose-Einstein condensates, high-temperature superconductors, Josephson junctions, and the like—also perfectly confirm QM, whenever we can actually carry them out? Not only have you not answered my “Sure/Shor separator” challenge, you haven’t yet even understood the challenge. (Which, of course, makes you no different from the vast majority of casual QC skeptics…)

  31. Neel Krishnaswami Says:

    Maybe this is a really dumb question, but I’ve always wondered: why is classical computing possible? That is, Liouville’s theorem says that the action of a conservative Hamiltonian on a phase space preserves its volume. As a result, it seems like globally the universe cannot perform classical computations, since no information can be destroyed, and so you can’t delete bits. Locally, you can build a dissipative system, of course, but it seems a bit weird that a classical universe as a whole can only perform reversible computations?

  32. Scott Says:

    Audun #25 and #26: I’m familiar with Guifre’s beautiful results, which turned quantum computing ideas on their head, to push the boundaries of which quantum systems people could efficiently simulate using classical computers. Nothing in those results suggests a route to killing quantum computation, as I imagine Guifre himself would be the first to tell you. Most obviously, his results mainly apply to slightly-entangled 1-dimensional spin lattices—in 2 dimensions and higher, people already don’t know what to do! But even aside from that, no one knows of any physical principle that would necessarily keep you in the “slightly-entangled” regime that Guifre’s algorithms can handle. Indeed, I think that the same sorts of states that were counterexamples to the “tree size is a Sure/Shor separator” conjecture would also be counterexamples for Schmidt rank, matrix product states, and the other things Guifre considers.

  33. DMcM Says:

    Scott #20, thanks for the explanation! I’ll read some more on the topic as you suggested.

  34. Scott Says:

    Neel #31: Why the universe can do classical computation is not a dumb question at all! Indeed, it’s something that QC skeptics typically take for granted, but shouldn’t.

    (A related point, which I should’ve mentioned in my talk but didn’t: it turns out to be extremely hard to design a physically-plausible noise model that would only kill QC, and not also kill classical computation!)

    If it’s reversibility that you’re worried about, then I’ll simply point out that it’s been known since the 80s that reversible computers can simulate non-reversible ones with only a constant-factor slowdown. Yes, it’s “weird” that the universe can only do reversible computations (at least if you consider the universe as a whole, rather than e.g., the part accessible to any one observer limited by the speed of light in an expanding universe), but I have nothing to add to your correct reasoning that leads to that conclusion! 🙂
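    (The constant-factor result Scott cites rests on embedding irreversible gates into reversible ones; the classic example is NAND built from a Toffoli gate whose target bit is preset to 1. A minimal sketch:)

```python
def toffoli(a, b, c):
    """Reversible Toffoli (CCNOT) gate: flips c iff a and b are both 1.
    It is its own inverse, so no information is ever destroyed."""
    return a, b, c ^ (a & b)

def reversible_nand(a, b):
    """NAND computed on the Toffoli's third wire, with the target
    ancilla preset to 1 -- an irreversible gate embedded reversibly."""
    _, _, out = toffoli(a, b, 1)
    return out

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", reversible_nand(a, b))
```

    Since NAND is universal for classical circuits, every irreversible circuit can be rebuilt this way from Toffoli gates plus ancilla bits, which is the core of the constant-factor-slowdown simulation.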

  35. Scott Says:

    PS #21:

      I had a vague recollection that I had heard something similar to the “many worlds vs. many words” quip before. It turns out Max Tegmark said something very similar…

    Yeah, I’ve heard various people use that quip—I’m not sure whether it was Tegmark or someone else who invented it, but it certainly wasn’t me.

  36. James Gallagher Says:

    Scott #30

    I understand the challenge; I’m saying that most QM experiments do not rely on the “actual” existence/persistence of superpositions to explain the measured outcome of the experiment. Whereas exponentially sped-up QC algorithms definitely do depend on the “actual” existence/persistence of the superpositions.

    The common argument says decoherence explains the problem with getting persistent superpositions, but I say no, Nature itself doesn’t have persistent superpositions (she wouldn’t be so wasteful as to try to evolve exponentially increasing information!). Mathematically they exist, and any QM experiment so far carried out has results which are a prediction of the mathematical framework – where the superpositions are only a mathematical tool for a calculation.

    But for your exponentially sped-up QC algorithms you will need these superpositions to really exist/persist; even if you are just using the Copenhagen instrumentalist interpretation, you won’t get exponentially sped-up quantum algorithms if, as I suggest, the superpositions are just a mathematical construction. (Except in a very unlikely probabilistic scenario whereby the evolution happens to give the required outcome from an exponential number of possibilities.)

    And btw, evolution did discover the wheel and even room-temperature engines (~gunpowder) at the microscopic scale, so stop dissing evolution; it’s much cleverer than you guys trying to create laboratory quantum computers.

  37. John Sidles Says:

    Scott asserts “No one knows of any physical principle that would necessarily keep you in the “slightly-entangled” regime that Guifre’s algorithms can handle.”

    Please allow me to commend to Shtetl Optimized readers Fernando Brandao and Michał Horodecki’s recent Exponential Decay of Correlations Implies Area Law (arXiv:1206.2947v2, 2012) which begins

    Quantum states of many particles are fundamental to our understanding of many-body physics. Yet they are extremely daunting objects, requiring in the worst case an exponential number of parameters in the number of subsystems to be even approximately described. How then can multiparticle quantum states be useful for giving predictions to physical observables? The intuitive explanation, based on several decades of developments in condensed matter physics and more recently also on complementary input from quantum information theory, is that physically relevant quantum states, defined as the ones appearing in nature, are usually much simpler than generic quantum states. In this paper we prove a new theorem about quantum states that gives further justification to this intuition.

    Brandao and Horodecki go on to reference earlier work by David DiVincenzo, Debbie Leung and Barbara Terhal Quantum data hiding (arXiv:quant-ph/0103098v1, 2001), and by Patrick Hayden, Debbie Leung, and Andreas Winter, Aspects of generic entanglement (arXiv:quant-ph/0407049v2, 2004) which introduces the notion of quantum data hiding states.

    It seems entirely credible (to me) that further development of these ideas — accompanied by further beautiful theorems of course! — may inexorably convey our understanding toward a world that is governed by

    The 21st century QM/QIT/QED thermodynamical quenching postulates  Low energy/non-gravitational experiments are governed [to an excellent approximation] by quantum electrodynamical (QED) field theory, and similarly [to an excellent approximation] the finite-dimensional Hilbert-space dynamics of quantum information theory (QIT) can be experimentally realized as a limiting case of QED dynamics. However, for n-qubit systems, arbitrary unitary evolution upon QIT state-spaces of effective dimension 2^n cannot be scalably realized, in consequence of QED coupling to the vacuum, which dynamically ensures that in the large-n limit, all QED systems asymptotically acquire a well-defined local entropy density, such that the thermodynamically extended “quantum data hiding” states that are essential to scalable computations, are generically quenched in the large-n limit by the dissipative dynamical processes that are intrinsic to QED.

    Of course, the more conservative members of the engineering/chemistry/biology/experimental physics communities already know that the QM/QIT/QED thermodynamical quenching postulates are true, in Feynman’s pragmatic sense that “a very great deal more truth can become known than can be proven!”

  38. Scott Says:

    James #36: I don’t know how you explain even (say) the double-slit experiment, or the Bell inequality, without invoking superposition in very much the same way that QC invokes them. Words like “ontological” or “really exist” are irrelevant to this discussion: since we’re talking about the outcomes of actual experiments, the only question is what role superpositions play in the theory we use to predict those outcomes. And this is where you’re caught on the horns of a dilemma:

    (1) If superpositions are “just a statistical tool” for describing some deeper, “classical” layer of reality, then how do you explain known phenomena like Bell inequality violation, or the behavior of spin lattices or Bose-Einstein condensates—all of which appear to force the “classical” layer, if it exists, to be so weird that it’s basically just a restatement of quantum mechanics?

    (2) If superpositions are more than such a statistical tool—if there is no “deeper, classical layer”—then how do you rule out QC?

    My position is the following: before I could take QC skepticism seriously, I’d need an answer to either (1) or (2) that nontrivially engages my intellect, and deals with all the obvious objections that a physicist would raise.
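
    For readers who want to see the Bell-inequality point in (1) concretely: the CHSH correlation value that quantum mechanics predicts can be checked in a few lines. Here is a minimal numpy sketch (an illustration only, using the standard optimal measurement angles for the singlet state):

```python
import numpy as np

# Pauli matrices and the two-qubit singlet state (|01> - |10>)/sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def spin(theta):
    # Spin observable along angle theta in the X-Z plane
    return np.cos(theta) * Z + np.sin(theta) * X

def E(a, b):
    # Correlation <psi| A(a) (x) B(b) |psi>
    return psi @ np.kron(spin(a), spin(b)) @ psi

# CHSH combination with the standard angles a=0, a'=pi/2, b=pi/4, b'=-pi/4
S = E(0, np.pi/4) + E(0, -np.pi/4) + E(np.pi/2, np.pi/4) - E(np.pi/2, -np.pi/4)
print(abs(S))  # 2*sqrt(2) ~ 2.828, above the local-hidden-variable bound of 2
```

    No “classical layer” assigning local pre-existing values can push |S| above 2; quantum mechanics saturates 2√2 (Tsirelson’s bound), which is why (1) is so hard to answer.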

  39. Scott Says:

    Incidentally, regarding Nature having invented “wheels” or even “gunpowder” (!) at the molecular scale: well, if those are the rules you want to play by, then Nature has also invented “quantum computing” at the molecular scale! See the recent work on the FMO photosynthetic complex, and its interpretation in terms of a quantum walk algorithm. Or the use of quantum entanglement in the internal compasses of European robins. These things are at least as much “quantum computers” as the metabolism going on in my stomach is a “gun”! 🙂
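
    The “quantum walk” language here is concrete: a continuous-time quantum walk spreads ballistically, with position spread growing linearly in t, versus the diffusive sqrt(t) spread of a classical random walk, and that speedup is the whole interest of the photosynthesis story. A toy numpy sketch on a line graph, purely illustrative and not a model of any real pigment complex:

```python
import numpy as np

n = 41                      # sites on a line; the walker starts in the middle
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # adjacency matrix

# Continuous-time quantum walk: |psi(t)> = exp(-i*A*t)|start>,
# computed via the eigendecomposition of the (symmetric) adjacency matrix
w, V = np.linalg.eigh(A)

def walk(t):
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    psi = U[:, n // 2]      # column = evolved initial basis state
    return np.abs(psi) ** 2  # position distribution

x = np.arange(n)
for t in (2.0, 4.0):
    p = walk(t)
    mean = p @ x
    print(round(float(np.sqrt(p @ (x - mean) ** 2)), 2))  # ~2.83, then ~5.66
```

    On the infinite line the quantum walker’s standard deviation is exactly sqrt(2)·t, so doubling t doubles the spread; a classical random walker would need four times as long.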

  40. James Gallagher Says:

    Scott #38

    Aye, there’s the rub.

    There is no classical layer.

    I’m saying Nature is fundamentally random, but she only evolves in a SINGLE probabilistically determined path. So all the other “branches” of the evolution don’t exist.

    Fundamental randomness is a funny thing, and it was never properly introduced to QM, even with the Copenhagen Interpretation.

    I’m explicitly inserting fundamental randomness at the tiniest level. In fact I’m saying the only reason the universe evolves at all is because of these microscopic random “jumps”, and then Nature does some bookkeeping by unitarily evolving the entire universe state vector, so we don’t just have chaotic random evolution.

    What I’m saying is perhaps confusing; I believe it’s not just a trivial interpretation of things.

    I just wanted to point out that a small adjustment to our interpretation of QM might explain why QC can’t work while everything else seems to conform perfectly well to standard quantum mechanics.

    Of course this model satisfies Bell violations – unless there is a branch in MWI that doesn’t.

    An easier refutation of this model might come from showing that the EM spectrum is continuous at a much finer resolution than Planck timescales, perhaps from gamma-ray bursts or similar.

  41. Attila Szasz Says:

    James #36:

    [..] so stop dissing evolution, it’s much cleverer than you guys trying to create laboratory quantum computers

    At this point I can’t resist mentioning this, sort of half-jokingly: there actually is some evidence that European robins are better at maintaining entangled qubit pairs in a controlled manner than any expensive laboratory equipment at present.

  42. James Gallagher Says:

    Scott & Attilla

    That photosynthesis and robin research is pretty stunning.

    But I think it’s better as a refutation of Tegmark’s simple argument that the “warm” brain can’t do any interesting quantum stuff than as an argument for quantum computing – i.e., I don’t think “entanglement robustness” = “quantum computing”.

    But I will be at least as happy as Scott if it’s proved that photosynthesis or Robins are doing QC! 🙂

  43. Attila Szasz Says:


    We were playing by your rules, as Scott puts it; these weren’t arguments for QC, just a few funny remarks against your quite silly (OK, you can say “controversial”) evolutionary reasoning.
    Personally, I’m getting the impression that discussing the correct spelling of my first name would make for just as scientifically rewarding and profound a conversation as this one.

  44. Scott Says:

    James #40: Sorry to keep harping, but setting aside the question of whether your model is true, you haven’t even explained your model. All the words about “fundamental randomness” and “only one branch being real” are useless to me.

    Are you claiming that, whenever a measurement is made, the universe conspires to make it look like it was obeying QM—but if it has to perform too much computation in order to do so, then it will fail, and the conspiracy will be unmasked? If so, then exactly how much computation is too much? You still haven’t answered the question: will we start seeing this breakdown of QM at 10 qubits? 15? 20? And what will the breakdown look like? In other words: what exactly is it that you predict a QC will do, if not working as QM says it will work? Will it “crash the universe”? Or produce an error message from God? 🙂 Or will it just produce random garbage? What kind of random garbage?

  45. James Gallagher Says:

    Sorry about the spelling, Attila.

    Scott, you’re being ridiculous.

    A minor modification to the interpretation of QM will not cause the universe to crash.

    I have suggested that the discrete time evolution model can more easily be attacked by investigations of the continuity of the EM spectrum, rather than by waiting for you bumbling guys to construct something that REALLY should do a quantum computation.

    I don’t want your money, btw, I just want the truth.

  46. James Gallagher Says:

    In answer to your question, properly understood, a “QC” will fail at 2 qubits.

  47. Scott Says:

    Sorry James, you’re hereby banned from this blog for 2 years, by reason of trollishness and evasiveness.

    Hearing the sirens outside my window, responding to the explosions at the Boston marathon, really causes me to reevaluate my priorities, and debating ignoramuses on the blogosphere isn’t one of them.

  48. Audun Says:

    I certainly didn’t mean to suggest that Vidal’s results do anything to contradict quantum computation; I am merely trying to understand their connection to the issues discussed here.

    Note that similar techniques can be applied in order to simulate two-dimensional systems (search for PEPS, or “projected entangled pair states”).

    As for what would keep you in the “slightly entangled regime”: it has been shown that ground states (and also, I think, low-energy states) of local Hamiltonians can be represented by matrix product states, which means they are only slightly entangled. Of course, how time development enters this is a different story.
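
    To make “slightly entangled” concrete: the bond dimension an MPS needs across a given cut is just the number of significant Schmidt coefficients there, which one SVD will count. A toy numpy sketch (my own illustration, not taken from the papers under discussion):

```python
import numpy as np

def schmidt_rank(state, dim_left, dim_right, tol=1e-10):
    # Reshape the bipartite state vector into a dim_left x dim_right matrix;
    # its singular values are the Schmidt coefficients across the cut.
    s = np.linalg.svd(state.reshape(dim_left, dim_right), compute_uv=False)
    return int(np.sum(s > tol))

# Product state |00>: rank 1, so a bond dimension of 1 suffices
product = np.kron([1.0, 0.0], [1.0, 0.0])
# Bell state (|00> + |11>)/sqrt(2): rank 2, maximal for one qubit per side
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

print(schmidt_rank(product, 2, 2))  # 1
print(schmidt_rank(bell, 2, 2))     # 2
```

    A “slightly entangled” n-qubit state is one whose Schmidt rank across every cut stays polynomial in n rather than the generic 2^(n/2), and that is exactly what makes the classical MPS simulation tractable.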

  49. James Gallagher Says:

    Yeah, I remember almost starting a fight with someone recycling bottles just after I’d put my baby daughter to bed. The protectiveness instinct passes once they’re a few years old.

  50. Audun Says:

    The proof I mentioned is in the article “MPS represent ground states faithfully” by Verstraete and Cirac (arXiv: http://arxiv.org/abs/cond-mat/0505140 ). Of course, it is limited to 1D. It makes no mention of excitations, but it seems plausible that something similar would apply to low-lying excitations, and numerics say yes.

    I am very sad to hear about the bombs, by the way 🙁

  51. John Sidles Says:

    Audun remarks (#50) that “MPS represent ground states faithfully” by Verstraete and Cirac (arXiv:cond-mat/0505140, 2005) is limited to 1D …

    That is true of the theorems, but on the other hand it is striking that Verstraete and Cirac provide three pages of physical intuitions (leading up to their Eq. 1) that apply in arbitrary dimension. It is natural to wonder whether physical principles that can be rigorously proved in one dimension may plausibly also be true in higher dimensions.

    This provides an occasion to quote from Sergei Novikov’s The role of integrable models in the development of mathematics (1991):

    It is very difficult to carry out Hilbert’s program and to write theoretical physics in an axiomatic style. … The development of physics is more rapid than the flux of theorems which try to axiomatize it.

    Nowadays Novikov’s maxim aptly describes the accelerating application of QIT methods to the simulation of physical dynamics.

  52. Audun Says:

    So if it is true that the low-energy spectra of realistic systems are only slightly entangled, this should mean that any quantum computational speedup must start with a highly excited state. Could this be a helpful perspective on the difficulties of building a QC?

  53. Scott Says:

    Audun #52: Except that’s not true in general. I’m told that one of Kitaev’s many great discoveries involves realistic 2D systems whose ground states exhibit “topological order” (i.e., are highly entangled in some sense). I don’t understand it, but hopefully someone else here can explain it.

  54. Audun Says:

    Duh, I forgot about those! Well, there goes my “unique perspective” 🙂

  55. Daniel Freeman Says:

    Scott #53 and Audun #54:

    How realistic Kitaev’s topological systems are depends on who you ask.

    You have to perform pretty clever tricks to physically realize Kitaev’s toric code Hamiltonian (which isn’t capable of universal quantum computation – it’s just a “protected” memory, which actually isn’t so protected, because it’s unstable at any finite temperature). This usually involves all sorts of lasers and shuttling of particles around.

    For his topological system capable of universal computation, I’m not sure any physical proposals exist. There’s been a lot of noise about fractional quantum Hall systems, but no one has demonstrated the right type of non-abelian anyons, to my knowledge, nor the necessary control over them.

  56. Anon Says:

    Hey Scott, just wanted to let you know that as a layman (electrical engineer) I find your arguments with ignoramuses very illuminating. I don’t blame you for getting frustrated with aggressively ignorant kooks, but it tends to prompt you to explain how the debate is framed at the topmost level, which I appreciate immensely.

  57. Mike Says:

    I agree with Anon@56. Just do what you think is right, but keep on blogging, comrade. We’ll all be the better for it — hopefully. 😉

  58. jonas Says:

    In #34, Scott says “(A related point, which I should’ve mentioned in my talk but didn’t: it turns out to be extremely hard to design a physically-plausible noise model that would only kill QC, and not also kill classical computation!)”

    Interesting. Is this only because we don’t know enough about QC error correction to be able to prove that a certain noise definitely does not allow QC?

  59. John Sidles Says:

    Scott says “It turns out to be extremely hard to design a physically-plausible noise model that would only kill QC, and not also kill classical computation!”

    Steve Simon voices similarly broad claims in his (terrific!) on-line CSSQI video lecture Topological Quantum Computing: “You can argue that all noise processes are local!”

    However, these broad claims are far more commonly encountered in QIT talks, and in blog comments, than in the peer-reviewed literature. For example, no such claims are advanced in the much-referenced survey upon which Steve Simon’s talk is based — much-referenced because it’s terrific! — namely Nayak, Simon, Stern, Freedman and Das Sarma, “Non-Abelian Anyons and Topological Quantum Computation” (http://arxiv.org/abs/0707.1889).

    The reason is simple: Nature provides plenty of physically-plausible noise models that are problematic for QC precisely because they are generically non-local.

    Disc drives provide an example that is both familiar and instructive: the classical memory is composed of thermodynamically stable magnetic domains in the platter. As the read-sensor flies overhead, each platter-bit transiently “sees” magnetic images of adjacent platter-bits in the conduction-band of the read-sensor, and is (non-locally) perturbed by those images.

    Fortunately — or rather, in virtue of careful design — the platter-memory bits are self-correcting, in that the platter itself constitutes a thermal reservoir that continuously reads-and-corrects each platter bit. Yet even at the classical level, non-local memory errors are ubiquitously present — in both magnetic and electrostatic memories — so much so that the unix command “memtest all 1” will run more than a dozen tests that assess vulnerability to both non-local and pattern-dependent noise.

    Needless to say, similar interactions dynamically entangle photon sources, photon detectors, and optical interferometer modes, such that none can be assessed in isolation from the other two. This is why assessing/demonstrating the scalability of n-photon coherent sources is comparably difficult to assessing/demonstrating the scalability of n-qubit memories, or of n-qubit computations.

    Can quantum memories be as robustly and scalably self-healing as classical memories, both in principle and, if so, (hopefully someday!) in practice? That is an open question, about which the invited speakers at QStart 2013 no doubt will have much to say.

  60. Peter Shor Says:

    The quantum Babbage analogy goes back at least to late April 1994, in the first public talk I gave about the factoring algorithm, at the ANTS conference. Specifically, in his introduction Len Adleman compared me to both Charles Babbage and Leonardo da Vinci (probably the most flattering introduction I will ever receive).

    To be specific, he showed the plans for Babbage’s analytical engine, and showed one of Leonardo da Vinci’s drawings, and said that Babbage’s invention was a hundred years ahead of its time, but da Vinci’s was conceptually impossible.

  61. Peter Shor Says:

    Slight revision (my memory is faulty). The ANTS conference was at Cornell, May 6-9, 1994.

  62. Scott Says:

    Anon #56 and Mike #57: Thanks so much for the encouragement!

  63. Scott Says:

    jonas #58:

      Is this only because we don’t know enough about QC error correction to be able to prove that a certain noise definitely does not allow QC?

    I don’t know, but I very much doubt it. The way I think about it, the reason it’s so hard to kill QC without also killing classical computation is that, to get universal QC, there are basically only two requirements:
    (1) A universal set of classical operations.
    (2) A single “quantum” operation, such as the Hadamard gate.
    And yes, it’s possible to design noise models (for example, pure dephasing noise) that target only (2) without targeting (1). But because they require specifying the basis in which the classical computation will be allowed to take place unmolested, such noise models tend to be pretty contrived. Most noise models will either kill both (1) and (2) (in which case they don’t even allow classical computation), or neither (in which case they allow QC).
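
    A toy numpy sketch of the dephasing example (my own illustration, nothing rigorous): pure dephasing in the computational basis leaves classical bits untouched, while destroying the interference that a pair of Hadamards needs.

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard gate

def dephase(rho):
    # Pure dephasing: keep only the diagonal of the density matrix
    # (equivalently, apply Z with probability 1/2).
    return np.diag(np.diag(rho))

# Requirement (1): a classical basis state survives dephasing exactly
rho0 = np.outer([1.0, 0.0], [1.0, 0.0])
print(np.allclose(dephase(rho0), rho0))  # True

# Requirement (2): without noise, H followed by H returns |0>;
# dephasing in between turns the output into a fair coin flip.
plus = H @ np.array([1.0, 0.0])                   # |+> = H|0>
clean = H @ np.outer(plus, plus) @ H
noisy = H @ dephase(np.outer(plus, plus)) @ H
print(np.round(np.diag(clean), 3))   # [1. 0.]
print(np.round(np.diag(noisy), 3))   # [0.5 0.5]
```

    This also shows the contrived part: the noise model has to single out the computational basis; noise in a generic basis would scramble requirement (1) as well.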

  64. jonas Says:

    Re Scott #63: OK, thank you for the explanation.

  65. James Gallagher Says:

    To answer your question:

    I believe that QC experiments will begin to fail with higher probability as the number of qubits increases. The reason is that the fraction of possible “unitary” evolution paths on which the discrete Fourier transform etc. is useful becomes smaller and smaller as more qubits are involved. I’m suggesting that this is a fundamental limitation of Nature, not of engineering. I think this will be very hard to demonstrate convincingly until a few hundred qubits are reached, since at low numbers of qubits, assuming that superpositions are not persistent in reality has little probabilistic effect on the predicted outcomes of the quantum algorithms.
    I.e., at low qubit numbers the probability is very high that the algorithms will work (the universe evolves on a path where the algorithm is effective) – and any failures will be so rare as not to be distinguishable from experimental error.

    Now, you might say that this is too vague and not much of a claim, but if I were able to claim anything stronger, then I would certainly have to be wrong, since it would imply that a lot of well-established QM results shouldn’t work in practice either.

    That’s why I suggested that any “discrete” time* evolution model could better be attacked by observations of things like the recent huge gamma-ray burst GRB 130427A.
    (*Note that time is continuous, and so are the amplitude phases.)

    But perhaps a quantitative calculation of the probability of failure wrt qubit number for a specific algorithm would be possible – then I guess people would be more interested.

  66. Ben Standeven Says:

    If your figures are right, you’re mistaken about what to look at. A gamma ray burst tends to produce photons in the 10-100 GeV range; this is around 30 orders of magnitude below the Planck scale, and a couple of orders below the output of a high-end particle accelerator. But as hard as it is to build, say, a 512 qubit quantum computer, it is still child’s play compared to building a Planck-scale particle accelerator.

    That said, in the unlikely event that I am understanding your model correctly, the best thing to look at would be quantum Zeno effect experiments, since in your theory wavefunctions will collapse even in the absence of observation.

  67. Ben Standeven Says:

    Whoops! It’s the input to a high-end accelerator that is in the 1-10 TeV range, not the output; the latter is probably lower.
    But not enough lower that it would matter to my argument, I think.
    On the other hand, it seems as if cosmic rays can sometimes run up to 10^21 eV, so only 20 orders below the Planck level.

  68. Gavin Heathcote Says:

    Your mannerisms were fine, good talk.