D-Wave Open Thread

A bunch of people have asked me to comment on D-Wave’s release of its 1000-qubit processor, and a paper by a group including Cathy McGeoch saying that the machine is 1 or 2 orders of magnitude faster (in annealing time, not wall-clock time) than simulated annealing running on a single-core classical computer.  It’s even been suggested that the “Scott-signal” has been shining brightly for a week above Quantham City, but that Scott-man has been too lazy and out-of-shape even to change into his tights.

Scientifically, it’s not clear if much has changed.  D-Wave now has a chip with twice as many qubits as the last one.  That chip continues to be pretty effective at finding its own low-energy states: indeed, depending on various details of definition, the machine can even find its own low-energy states “faster” than some implementation of simulated annealing running on a single-core chip.  Of course, it’s entirely possible that Matthias Troyer or Sergei Isakov or Troels Ronnow or someone like that will be able to find a better implementation of simulated annealing that closes even the modest gap—as happened the last time—but I’ll give the authors the benefit of the doubt that they put good-faith effort into optimizing the classical code.
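To make the classical side of such a comparison concrete, here is a bare-bones single-spin-flip simulated annealer for an Ising instance.  This is a minimal sketch of my own, with an arbitrary linear schedule; the tuned implementations at issue in the paper are of course far more sophisticated.

```python
import math
import random

# Minimal single-spin-flip simulated annealing for an Ising model
#   E(s) = sum_{(i,j)} J_ij s_i s_j + sum_i h_i s_i,   s_i in {-1,+1}.
# The sweep count and linear beta schedule are illustrative assumptions.
def simulated_annealing(h, J, sweeps=1000, beta0=0.1, beta1=3.0):
    spins = {i: random.choice([-1, 1]) for i in h}
    nbrs = {i: [] for i in h}
    for (i, j), Jij in J.items():
        nbrs[i].append((j, Jij))
        nbrs[j].append((i, Jij))
    for t in range(sweeps):
        beta = beta0 + (beta1 - beta0) * t / sweeps  # ramp up inverse temperature
        for i in spins:
            # Energy change from flipping spin i.
            dE = -2 * spins[i] * (h[i] + sum(Jij * spins[j] for j, Jij in nbrs[i]))
            # Metropolis rule: always accept downhill moves, sometimes uphill ones.
            if dE <= 0 or random.random() < math.exp(-beta * dE):
                spins[i] = -spins[i]
    return spins

# A frustrated antiferromagnetic triangle: no assignment satisfies all three edges.
h = {0: 0.0, 1: 0.0, 2: 0.0}
J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}
print(simulated_annealing(h, J))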

More importantly, I’d say it remains unclear whether any of the machine’s performance on the instances tested here can be attributed to quantum tunneling effects.  In fact, the paper explicitly states (see page 3) that it’s not going to consider such questions, and I think the authors would agree that you could very well see results like theirs, even if what was going on was fundamentally classical annealing.  Also, of course, it’s still true that, if you wanted to solve a practical optimization problem, you’d first need to encode it into the Chimera graph, and that reduction entails a loss that could hand a decisive advantage to simulated annealing, even without the need to go to multiple cores.  (This is what I’ve described elsewhere as essentially all of these performance comparisons taking place on “the D-Wave machine’s home turf”: that is, on binary constraint satisfaction problems that have precisely the topology of D-Wave’s Chimera graph.)

But, I dunno, I’m just not feeling the urge to analyze this in more detail.  Part of the reason is that I think the press might be getting less hyper-excitable these days, thereby reducing the need for a Chief D-Wave Skeptic.  By this point, there may have been enough D-Wave announcements that papers realize they no longer need to cover each one like an extraterrestrial landing.  And there are more hats in the ring now, with John Martinis at Google seeking to build superconducting quantum annealing machines but with ~10,000x longer coherence times than D-Wave’s, and with IBM Research and some others also trying to scale superconducting QC.  The realization has set in, I think, that both D-Wave and the others are in this for the long haul, with D-Wave currently having lots of qubits, but with very short coherence times and unclear prospects for any quantum speedup, and Martinis and some others having qubits of far higher quality, but not yet able to couple enough of them.

The other issue is that, on my flight from Seoul back to Newark, I watched two recent kids’ movies that were almost defiant in their simple, unironic, 1950s-style messages of hope and optimism.  One was Disney’s new live-action Cinderella; the other was Brad Bird’s Tomorrowland.  And seeing these back-to-back filled me with such positivity and good will that, at least for these few hours, it’s hard to summon my usual crusty self.  I say, let’s invent the future together, and build flying cars and jetpacks in our garages!  Let a thousand creative ideas bloom for how to tackle climate change and the other crises facing civilization!  (Admittedly, mass-market flying cars and jetpacks are probably not a step forward on climate change … but, see, there’s that negativity coming back.)  And let another thousand ideas bloom for how to build scalable quantum computers—sure, including D-Wave’s!  Have courage and be kind!

So yeah, if readers would like to discuss the recent D-Wave paper further (especially those who know something about it), they’re more than welcome to do so in the comments section.  But I’ve been away from Dana and Lily for two weeks, and will endeavor to spend time with them rather than obsessively reloading the comments (let’s see if I succeed).

As a small token of my goodwill, I enclose two photos from my last visit to a D-Wave machine, which occurred when I met with some grad students in Waterloo this past spring.  As you can see, I even personally certified that the machine was operating as expected.  But more than that: surpassing all reasonable expectations for quantum AI, this model could actually converse intelligently, through a protruding head resembling that of IQC grad student Sarah Kaiser.

[Two photos: “dwave” and “dwave2”]

94 Responses to “D-Wave Open Thread”

  1. Phylliida Says:

If you were put at the head of an entire research team tasked with making a working quantum computer, and given the necessary funding, what approach would you take?

  2. Scott Says:

Phylliida #1: I would give the money to Martinis, Wineland, and a few other such people and let them figure out what to do with it! (I’m guessing it would mostly involve superconducting and ion-trap QC—in both of which the main challenge remains what it’s been for the past 15+ years: to further push down decoherence rates with a bunch of integrated qubits, to the point where you can start doing fault-tolerance. I’m sure more money could make things happen faster, as with most things.)

  3. Joshua Zelinsky Says:

    Part of the reason is that I think the press might be getting less hyper-excitable these days, thereby reducing the need for a Chief D-Wave Skeptic. By this point, there may have been enough D-Wave announcements that papers realize they no longer need to cover each one like an extraterrestrial landing.

    I don’t know if the press is getting less excitable, but the general public definitely seems to be about as excitable and hyperbolic as ever. See for example this Reddit thread https://www.reddit.com/r/Futurology/comments/3hpsgr/some_preliminary_benchmarks_on_dwaves_1000_qubit/

  4. Scott Says:

    Joshua #3: If I want to maintain my positive, hopeful, generous, broad-minded attitude as long as possible, probably I shouldn’t even look at that thread…

  5. Douglas Knight Says:

    The realization has set in, I think, that both D-Wave and the others are in this for the long haul

What led you to believe that about D-Wave?

  6. Scott Says:

    Douglas #5: Well, the last few times I talked to journalists about this, the knowledge level seemed slightly higher than in the past—the points had gotten across that, yes, there is expert skepticism for good reasons, and yes, this is a long-term undertaking, which might or might not ever be able to give you a significant advantage over classical computers for real-world optimization problems—and if it can, then probably not for a long time. The days seemed gone when every D-Wave announcement meant “Day Zero” of a new quantum era where no previous rules applied, and where I would only be called for pro forma “due diligence” on a pre-decided narrative that was fundamentally false.

    On reflection, though, I wonder how much of this is just pure selection bias—my no longer looking up the sorts of things that I know would make me rip my hair out (cf. comment #4), and the journalists who write such things no longer bothering to call me?

  7. Douglas Knight Says:

    Oh, I thought you meant that you believed that D-Wave was in it for the long haul.

  8. Job Says:

Is there an implementation of quantum annealing as a circuit using universal quantum gates?

  9. F. Marshall Says:

All of this made me wonder, how is the book ‘Speaking Truth to Parallelism’ coming along?

  10. Scott Says:

    Job #8: Yes, it’s straightforward to simulate quantum annealing in the circuit model, using a trick called “Trotterization” (which just means: approximating a continuous Hamiltonian by the product of a large number of small discrete rotations, on 1 or 2 qubits at a time). The converse direction, proved by Aharonov et al. in 2004, is much more interesting: any quantum circuit can be simulated in the adiabatic model with only polynomial slowdown, provided you’re willing to allow a final Hamiltonian whose ground state is a superposition over the entire history of the circuit-model computation.
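    To make the “Trotterization” trick concrete, here is a toy single-qubit numerical check (a sketch of my own, not code from any of the papers discussed): approximate evolution under H = A + B by many alternating short steps under A and B separately.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # First-order Trotterization: approximate exp(-i(A+B)t) by many
    # alternating short steps exp(-iA dt) exp(-iB dt).
    X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
    Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

    t, n = 1.0, 1000   # total evolution time, number of Trotter steps
    dt = t / n

    exact = expm(-1j * (X + Z) * t)
    step = expm(-1j * X * dt) @ expm(-1j * Z * dt)
    trotter = np.linalg.matrix_power(step, n)

    # First-order Trotter error scales like O(t^2 / n); here it's ~1e-3.
    print(np.linalg.norm(exact - trotter))
    ```

    (For the converse direction, the final Hamiltonian in Aharonov et al.’s construction has as its ground state the “history state” (1/√(T+1)) Σ_{t=0}^{T} |t⟩⊗|ψ_t⟩, where |ψ_t⟩ is the circuit’s state after t gates.)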

  11. Scott Says:

    F. Marshall #9: Sorry, still working on the manuscript! A few more months at least before I send it back to MIT Press.

  12. Graeme Says:

    I think you’re right about the improving press coverage: the journalists seem to be a lot savvier than five years ago.

  13. Job Says:

    The converse direction, proved by Aharonov et al. in 2004, is much more interesting: any quantum circuit can be simulated in the adiabatic model with only polynomial slowdown, provided you’re willing to allow a final Hamiltonian whose ground state is a superposition over the entire history of the circuit-model computation.

That is interesting, and I wasn’t aware that the adiabatic model could simulate a universal QC with polynomial overhead. I’m assuming from that last requirement that this would not be practical.

    Yes, it’s straightforward to simulate quantum annealing in the circuit model, using a trick called “Trotterization”

    I wonder if there is a classical algorithm which, given an optimization problem, can produce a quantum circuit to solve it using Trotterization – e.g. is there a description of how to do this somewhere?

I realize that, in practice, the adiabatic model is more viable than a universal QC, so implementing quantum annealing using a circuit is not very smart, but I personally would find it easier to study and understand.

For example, at this time I understand that a classical machine probably can’t solve Simon’s problem for input functions that use lots of Toffoli gates, even though it can do pretty well for many other functions. I would like to understand which optimization instances are hard classically, using the gate model, and how D-Wave (or eventually Martinis) is doing with those.

  14. Raoul Ohio Says:

One data point concurring with Scott’s point that journalists are getting smarter about this: in years past I had discussed with the Register’s writers their “born yesterday” repeating of D-Wave’s claims. They seem a lot more with it now:

    http://www.theregister.co.uk/2015/08/27/dwave_whether_or_not_its_quantum_its_faster/

15. The D-Wave 2X machine surpasses 1000 useful qubits | Science | La Ciencia de la Mula Francis Says:

[…] (and famous critic of D-Wave Systems) Scott Aaronson, “D-Wave Open,” Shtetl-Optimized, 26 Aug 2015. “The (extra) cost of encoding an optimization problem of interest […]

  16. Rolf Says:

    Is the Sarah/D-Wave hybrid a walking/talking Turing test?

    Re: “… I enclose two photos from my last visit to a D-Wave machine, which occurred when I met with some grad students in Waterloo this past spring. As you can see, I even personally certified that the machine was operating as expected. But more than that: surpassing all reasonable expectations for quantum AI, this model could actually converse intelligently, through a protruding head resembling that of IQC grad student Sarah Kaiser.”

  17. ed Says:

    The pictures are great – right there is this year’s Halloween costume for sure!

  18. Arko Says:

    Scott… umm, you need to exercise.

  19. Scott Says:

Arko #18: Tsk, tsk. Can’t a guy blog about quantum computing without these sorts of body-shaming, paunch-shaming remarks denigrating his physique? Your comment is all the more offensive and problematic by virtue of being accurate. Do you have any idea how it feels, in a culture saturated with patriarchal ideals of the fit, in-shape male, to be out of breath after mere minutes of chasing your toddler around the apartment?

  20. Scott Says:

    ed #17: The students told me the device was, indeed, originally built as a Halloween costume, and only later repurposed for accosting me.

  21. Arko Says:

    Scott #19: My cousin sister had gone out of shape. Her twin toddlers dragged her back into it. She would tell you that it’s like “chasing chickens”. 😉

  22. John Sidles Says:

    Scott aspires (comment #4)  “I want to maintain my positive, hopeful, generous, broad-minded attitude as long as possible.”

    My wife set me a similar task more than a year ago, in service of a book that she is writing upon the topic of hope, that task being to read all works — fiction and nonfiction alike — in a charitable spirit.

    As a roadmap in service of this “positive, hopeful, generous, broad-minded” objective, Alexander Pope’s couplets have stood the test of time:

    An Essay on Criticism (abridged)
       by Alexander Pope (1711)

    Whoever thinks a faultless Piece to see,
    Thinks what ne’er was, nor is, nor e’er shall be.
    —-
    A perfect Judge will read each Work of Wit
    With the same Spirit that its Author writ.
    —-
    ‘Tis not a Lip, or Eye, we Beauty call,
    But the joint Force and full Result of all.
    —-
    Those oft are Stratagems which Errors seem,
    Nor is it Homer Nods, but We that Dream.

Not hamartia, but anagnorisis  The 21st century’s emerging community of laughing quantum philosophers envisions — and perhaps even crucially fosters — a planet upon which skeptical mathematical postulates (Kalai), optimistic scientific visions (Aaronson), and transformative technological capacities (D-Wave) are concurrently appreciated, and even compatibly realized, as universal, natural, and performative.

  23. Nick Read Says:

    A couple of questions:

    1) Is “quantum annealing” now supposed to be synonymous with the “quantum adiabatic algorithm”? I thought they were supposed to be somewhat distinct, and have been annoyed that some of the press unhelpfully conflated the two. The point being that D-Wave is the former, not the latter (according to e.g. Troyer—hope I’m not putting words in Matthias’s mouth, if he’s reading).

    2) Any idea whether all optimization problems can be embedded in the Chimera graph? Certainly not all planar problems are NP-complete, and Chimera is close to planar in some sense. Indeed, in some ways problems on the Chimera graph look “easier” than on general graphs. For example, finding the ground state of an Ising spin glass on a Chimera has a poly time approx. algorithm, like the planar case. See arXiv:1306.6943, which is cited by Vazirani et al, who make additional remarks about this.

    Alternatively, if the answer to question 2 is yes, can it be done efficiently—that seems to be what you mentioned.

  24. fred Says:

    Fasting twice a week is easier than exercise.

  25. Raoul Ohio Says:

    John, how does, say, Ayn Rand come across in a charitable spirit?

  26. Paul B. Says:

    @Nick Read, #23

Regarding your second question: certainly — Chimera was explicitly designed to be able to embed complete graphs with quadratic overhead (see, e.g., section 1.C of http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6802426), so a large enough chip can embed any Ising spin energy minimization problem.
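    To make the topology concrete, here is a sketch of the ideal Chimera graph following the published description (my own construction; real chips differ in qubit yield and indexing): an m × m grid of K_{4,4} unit cells, with the “vertical” half of each cell coupled to the corresponding qubits in the cell below, and the “horizontal” half to those in the cell to the right.

    ```python
    import networkx as nx

    def chimera_graph(m, k=4):
        """Ideal Chimera C_m: an m x m grid of K_{k,k} cells. Node labels are
        (row, col, orientation, index), with orientation 0 = vertical, 1 = horizontal."""
        G = nx.Graph()
        for i in range(m):
            for j in range(m):
                for a in range(k):
                    # Intra-cell K_{k,k} couplers.
                    for b in range(k):
                        G.add_edge((i, j, 0, a), (i, j, 1, b))
                    # Inter-cell couplers: vertical qubits down, horizontal qubits right.
                    if i + 1 < m:
                        G.add_edge((i, j, 0, a), (i + 1, j, 0, a))
                    if j + 1 < m:
                        G.add_edge((i, j, 1, a), (i, j + 1, 1, a))
        return G

    print(chimera_graph(8).number_of_nodes())   # 512: the previous generation's ideal size
    print(chimera_graph(12).number_of_nodes())  # 1152: roughly the new ~1000-qubit chip
    ```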

Ising spin minimization was shown to be NP-hard both with local fields on planar graphs and without fields on non-planar graphs; see Barahona ’82: http://www.yaroslavvb.com/papers/barahona-on.pdf

    Hope this helps!

  27. John Sidles Says:

    Raoul Ohio asks “How does Ayn Rand come across in a charitable spirit?”

    Here is a serious answer to your serious (but peripheral) question. And yes, there is some “quantum” in this answer!

    A starting point  One good starting point to an appreciation of Ayn Rand is Yannick Grannec’s Goddess of Small Victories (2014), which is about the marriage of Kurt Gödel and Adele Porkert. Here Grannec presents a peripeteiac study of the difficulties and rewards of marriage to a charismatic, rationalizing person who is severely afflicted with a personality spectrum-disorder.

    Gödel and Rand contrasted  One major difference (obviously) between Gödel and Rand, is that Gödel’s professional works are proving to have enduring mathematical and philosophical value; Rand’s not so much. Still in their private lives, neither Gödel nor Rand was easy to live with.

    Conclusion  Just as mathematically “there is no royal road to geometry”, there is medically and morally no royal road to dealing effectively with personality disorders in the “positive, hopeful, generous, broad-minded attitude” that Scott commends.

    Because for all too many disorders, the long-term medical prognosis is just plain poor, in the present century as sadly as in all prior centuries.

    Charity and hope  And yet progress is feasible, and we have ample quantum reasons to foresee that this progress will accelerate in the present century. In this respect the “other Scott A” weblog Slate Star Codex, and also the family-oriented weblog Out of the Fog, are recommended resources (by me at least).

  28. Scott Says:

    Nick #23: Regarding your first question, I always thought of the adiabatic algorithm as just the zero-temperature limit of quantum annealing, or (conversely) of quantum annealing as the adiabatic algorithm at nonzero temperature.

  29. Paul B. Says:

    @ Scott #28 and Nick #23:

    Scott, thanks for your perspective! I was going to offer a similar distinction, but I am glad that you gave yours first, and we agree on this semantical point! 🙂

An alternative way to put it is that your “Intel Core whatever” is *not* a “Turing machine” (for starters, it does not have an infinite tape, or, anymore, any tape at all) — but it is a real-life embodiment of an abstract concept…

    Of course, it is just my private way of thinking about those things…

    Cheers!

  30. Greg Kuperberg Says:

    Nick – “2) Any idea whether all optimization problems can be embedded in the Chimera graph? Certainly not all planar problems are NP-complete, and Chimera is close to planar in some sense.”

The first answer is yes. The Chimera sequence of graphs is sufficiently non-planar that you can embed any graph in them. All you need for this is a planar graph with a sufficient supply of non-planar bridges. For instance, if the planar graph is periodic in both directions and has at least one bridge in its repeating cell, that’s usually enough. Besides, planarity by itself does not usually prevent NP-completeness; to escape it you need planarity plus a relatively tame local optimization problem. One way to see this is that many kinds of planar interactions make it possible to build an equivalent of a bridge.

    The second answer is that this doesn’t help D-Wave’s claims very much, because the reduction is wasteful. They like to talk as if exponential speedup is their realistic goal. However, even if their qubits were better than not-all-that-quantum, you can only reasonably expect a quadratic speedup for quantum annealing for a general NP-hard problem. (Because the best that it can be is a version of Grover’s algorithm.) Even though the standard convention for complexity classes like BQP and NP is to take all polynomial equivalence for granted, most of the topic of computer algorithms in practice is about polynomial accelerations. These reductions from something else to the Chimera-type graphs can easily negate the acceleration that you were after.

    For example, this is why a paper on D-Wave and Ramsey numbers topped out at R(2,8): How many people do you need in a room to guarantee that at least 2 are friends or at least 8 are mutual strangers? You should carefully consider the “difficulty” of this problem. Yes, it’s as dumb as it sounds. Not just because noisy quantum annealing isn’t magic, but also because of the overhead when they embedded this into their “Chimera” graph.

  31. Job Says:

    Greg #30,

    However, even if their qubits were better than not-all-that-quantum, you can only reasonably expect a quadratic speedup for quantum annealing for a general NP-hard problem. (Because the best that it can be is a version of Grover’s algorithm.)

    If D-Wave could match the performance provided by Grover’s algorithm for an NP-Hard problem, then wouldn’t the speedup be exponential given that the search domain is exponential in size?

I would take O(2^(n/2) + n^2) over O(2^n). Am I thinking about this wrong?

  32. Rahul Says:

    D-Wave’s news release is like the announcement of a super duper new version of a homeopathic medicine that’s twice as potent as the current homeopathic product.

    If you don’t first convince me that the principle behind homeopathy works, then why should I care?!

It gets worse if the price tag on the super duper drug is one million dollars, whereas the boring, common pharma medicine (aka a single-core classical computer) gets the same effect for $500.

  33. Scott Says:

Job #31: First, just as a point of terminology, we would never call an improvement from 2^n to 2^(n/2) “exponential.” One looks not at the ratio, but at the new running time as a function of the old running time (in this case it’s the square root).

More to the point, if you suffered a quadratic blowup in the reduction and then got a Grover-type speedup, then your new running time would not be 2^(n/2) + n^2. It would be 2^(n^2/2), which not only kills any speedup but is much worse than 2^n. If you’re getting only a Grover-type speedup, then in order to make the whole thing worthwhile, the blowup in the reduction has to be by less than a factor of 2.
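    To spell out the arithmetic (a sketch; the maps $n \mapsto n^2$ and $n \mapsto cn$ are just the illustrative blowups discussed above): after a quadratic blowup, the Grover-sped-up running time is

    $$\sqrt{2^{n^2}} \;=\; 2^{n^2/2} \;\gg\; 2^n,$$

    losing even to brute force on the original instance, whereas with only a linear blowup $n \mapsto cn$,

    $$\sqrt{2^{cn}} \;=\; 2^{cn/2} \;<\; 2^n \quad\Longleftrightarrow\quad c < 2.$$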

  34. Job Says:

    Thanks for clearing that up.

    I wasn’t aware that the reduction step would increase the size of the input that significantly.

Also, I would have accepted an improvement from 2^n to 2^(n/2) as exponential, so I was definitely interpreting this incorrectly.

  35. Nick Read Says:

    Paul B. #26, Scott #28, Greg #30: thanks for your answers.
    (I was well aware of the Barahona reference.) Also, there are many problems with that quantum adiabatic algorithm for Ramsey numbers work, one being the lack of any estimate for the runtime.

    Rahul #32 re homeopathy: wouldn’t that be *half* as potent? 🙂

    Best of luck to various with Ayn Rand, English poetry, and getting exercise.

  36. Greg Kuperberg Says:

    Nick – I know a convincing non-quantum algorithm to calculate the Ramsey number R(2,n), and I can even give you a sharp estimate for the runtime.

  37. Nick Read Says:

Lol. I know it too. But the theory paper was for all R(m,n), so the question is for that case.

  38. Nick Read Says:

More seriously, do you think calculating Ramsey numbers R(m,n) is even in QMA? (First we should make sense of the question; presumably rephrase it as a decision problem: greater or less than some bound.)

  39. William Hird Says:

    Fred # 24:
Actually, it is theoretically easier (according to fasting expert Arnold Ehret) to get in shape by exercising as opposed to fasting, because fasting can introduce waste products into the bloodstream faster than exercising can, so you have to be more careful when fasting.

  40. Greg Kuperberg Says:

    Nick – Bounding R(m,n) is a typical example of a “sparse” subproblem of a more general algorithmic problem that is easier to analyze. To begin with, the max clique problem is NP-complete. Now consider a generalization of the Ramsey problem where you have a partly colored complete graph on r vertices, and you want to know whether there is a coloring of the remaining edges that has neither a red m-clique nor a blue n-clique. This is in Sigma_2P, the analogue of NP where you have a one-round contest between a prover and a disprover. I have not checked, but I would guess off the top of my head that this is Sigma_2P-complete.

However, in the standard Ramsey problem, you are not given the creative freedom of a partially colored complete graph; you are given a blank complete graph. It looks just as difficult as the other cases, but you are now robbed of the tools to easily create reductions.

For counting problems rather than two-move problems, the answer is typically #P-complete when the input is creative; for instance, the number of matchings or dimer states of a graph is #P-complete. But you often see sparse special cases of these problems in combinatorics and stat mech, which still look difficult, but where no one knows how to prove hardness, not even conditionally relative to standard complexity-theory conjectures.

  41. Paul B. Says:

    @ Greg #30

    Glad that we agree on the fine line between “almost planar” (perceived as bad) and “able to embed non-trivial problems” (perceived as good) graphs!

But I would have to respectfully disagree with your (seeming, at least to my not necessarily educated eye!) conflation of space and time complexity when you assume that the N^2 penalty in *qubit count* in embedding somehow translates to corresponding losses in runtime, cancelling any Grover-like improvements — it could well be true, but I cannot immediately see why, sorry…

    When you analyse run times of a normal gate model QC, you do not count as a penalty that it takes (eventually) thousands of real life qubits to implement an error-corrected abstract one, right?

    Just curious!

  42. Greg Kuperberg Says:

But I would have to respectfully disagree with your (seeming, at least to my not necessarily educated eye!) conflation of space and time complexity when you assume that the N^2 penalty in *qubit count* in embedding somehow translates to corresponding losses in runtime, cancelling any Grover-like improvements — it could well be true, but I cannot immediately see why, sorry…

It isn’t a theorem that it will always work this way. It can’t be, because when you reduce A to B, it is always possible that you will land in an easier sector of problem B that runs faster than B does in general. But it is also not a theorem that you will always get this ace in the hole. If graph embeddability were all it took, then you could take any spin frustration model on a graph and replace each edge with a long ferromagnetic chain, without paying any annealing penalty.
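    To illustrate the gadget alluded to here, a naive sketch (my own; real minor-embedders must also choose chain strengths carefully, which is exactly where the annealing penalty shows up) of stretching one logical coupler into a ferromagnetically bound chain, in the convention E = Σ J_ij s_i s_j:

    ```python
    def replace_edge_with_chain(J, u, v, k, J_ferro=-2.0):
        """Replace the logical coupler J[(u, v)] by a length-k chain of auxiliary
        spins bound by strong ferromagnetic couplings (J < 0 favors agreement,
        with E = sum J_ij s_i s_j). A sketch, not a real embedder."""
        J_uv = J.pop((u, v))
        chain = [u] + [("aux", u, v, i) for i in range(k)] + [v]
        for a, b in zip(chain, chain[1:-1]):
            J[(a, b)] = J_ferro        # bind consecutive chain spins to agree
        J[(chain[-2], v)] = J_uv       # the original interaction acts at the far end
        return J

    # Example: stretch one edge of a triangle into a 3-spin chain.
    J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}
    print(replace_edge_with_chain(J, 0, 2, k=3))
    ```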

    When you analyse run times of a normal gate model QC, you do not count as a penalty that it takes (eventually) thousands of real life qubits to implement an error-corrected abstract one, right?

    Of course, yes, but there are two exponential whammies that work in your favor. One is that the overhead is polylogarithmic, not polynomial, even if it is “eventually” thousands. The other is that for some important problems, the quantum acceleration is exponential.

    Certainly one thing that I’m never going to do is take a quantum acceleration that’s only predicted to be polynomial in theory, and then talk as if it’s exponential as if that’s acceptable salesmanship.

  43. Greg Kuperberg Says:

    While we are at it, exactly because of these reducibility issues, the so-called “benchmark” in this D-Wave paper, as in the last one, is almost self-simulation. It’s like arguing that a musician is brilliant until and unless Weird Al makes a parody that’s outright better than the original song.

  44. Raoul Ohio Says:

    First contribution of W. A. Yankovic to Quantum theory duly noted. Not to be confused with “Weird AI”, a popular course in CS departments.

    I propose honoring the event with a new unit of measurement for speedup of computations. If a D-Wave machine takes time T_{DW} to do problem X, and a PC costing 1.0E5 times less takes time T_{PC5} for problem X, then we say the speed up is (T_{PC5} / T_{DW}) Weird Al’s.

    Following the lead of the controlled fusion community, when a $1E7 DW machine can match a $1E2 PC (rating = 1.0 Weird Al), they will have achieved “Weird Al Breakeven”.

  45. Greg Kuperberg Says:

    If a D-Wave machine takes time T_{DW} to do problem X

    The key point is that the success case hasn’t been “problem X” for X an independent variable. Make that “If a D-Wave machine takes time T_DW to do problem DW…”

  46. John Sidles Says:

    Scott opines “I think the press might be getting less hyper-excitable these days, thereby reducing the need for a Chief D-Wave Skeptic.”

Please allow me to concur with this sentiment. Indeed, I have followed Scott’s lead in posting at Nature’s weblog a “simple, unironic, 1950s-style message of hope and optimism”, titled A Tale of Two Preprints: Readings in 21st century quantum discourse.

    Conclusion  The quantum research community was blessed in August 2015 with an abundance of high-quality preprints (including D-Wave’s preprint). Now the quantum research community stands most urgently in need of “simple, unironic, 1950s-style” discourse that is grounded in “hope and optimism”, and filled (hopefully non-ironically) with “positivity and good will” that helps us to “invent the future together.”

    Preprints like D-Wave’s (and others, per the Nature comment) severely challenge our critical faculties. To quote Pope again (as in comment #22)

    Nature affords at least a glimm’ring light;
    The lines, tho’ touch’d but faintly, are drawn right.
    But as the slightest sketch, if justly trac’d,
    Is by ill colouring but the more disgrac’d,
    So by false learning is good sense defac’d.

    Thank you, Scott Aaronson, for reminding Shtetl Optimized readers of the vital necessity — and severe difficulty — of Pope’s “justly trac’d” scientific discourse that is (in Scott’s phrasing) simple, unironic, and grounded in hope and optimism.

  47. Nick Read Says:

    So Greg, which of Weird Al’s musical parodies did you think was better than the original?

  48. John Sidles Says:

    Nick Read wonders “Which of Weird Al’s musical parodies was better than the original?”

Combinatorially minded Shtetl Optimized readers will perceive that Bob Dylan’s Tombstone Blues is ingeniously bettered by Weird Al’s “Bob.” So yes, it can happen. 🙂

  49. Garrett Says:

    I’m not up on the latest on quantum computing – I deal in classical computing.
    That having been said, has anybody managed to do something practical with a QC yet? By which I mean solve a problem where the time to key in the problem/data and get an output is less than the amount of time required to do it with pencil-and-paper?

  50. Scott Says:

    Garrett #49: Yes, if you’re willing to count simulations of one quantum system by another (and you compare against the running times of the best known classical simulation methods, rather than the best possible, which is not known).

  51. Greg Kuperberg Says:

    So Greg, which of Weird Al’s musical parodies did you think was better than the original?

    For instance, Word Crimes is decent, while the original “Blurred Lines” is a waste of time in comparison.

  52. Raoul Ohio Says:

    #48:

    John, once again I am impressed with the breadth of your scholarship. “Bob” is great, and demonstrates the flexibility of Dylan’s style.

  53. John Sidles Says:

    Garrett (# 49)  “Has anybody managed to do something practical with a QC yet?”

Scott (# 50)  “Yes, if you’re willing to count simulations of one quantum system by another.”

Lewis Carroll (in Sylvie and Bruno Concluded, 1893)  “We now use the country itself, as its own map, and I assure you it does nearly as well.”

    The point  The empirical capabilities of quantum simulation algorithms that require only classical computational capabilities have been accelerating at a Moore’s Law pace that equals or surpasses even the most optimistic quantum computing capabilities that were set forth in roadmaps of past decades.

    Conclusion  Complexity theory provides good reason to foresee that efficient quantum simulation by classical computation is infeasible in general. And yet in practice — and especially with ongoing cumulative improvements in algorithmic accuracy and efficiency — the quantum simulation problems that nature poses in biology, chemistry, and condensed-matter systems are proving to be less computationally demanding than many quantum researchers foresaw (say) a decade ago.

    For how long can the present Moore’s Law improvement in efficient quantum simulation by classical computation be sustained? And mathematically speaking, how is it that this sustained improvement is even possible?

    For young researchers especially, these are crucial questions, to which no one (at present) can give reliable answers. Which is very good news for young researchers!

  54. John Sidles Says:

    Raoul Ohio is misled “John, once again I am impressed with the breadth of your scholarship.”

    In regard to my “scholarship” — which isn’t scholarship at all, but rather an obsession with quantum implications for 21st century medical practice — Weird Al’s Devo tribute Dare to be Stupid is more to the point, because it shows us that:

    • Weird Al is cooler than Devo, and
    • Devo is cooler than Mick, yet paradoxically
    • Mick (with David) is cooler than Weird Al

    Conclusion  Al and Devo and Mick remind us that perhaps the Kalai/Harrow optimist/skeptic quantum debate won’t be settled very soon, or have a black-and-white outcome.

  55. Charles Says:

    @John #54: That would be an instance of Condorcet’s paradox. 🙂

  56. fred Says:

    John #53

Do you have any advice or good news for old researchers?

  57. fred Says:

A good article on this:
    http://arstechnica.com/science/2015/09/d-wave-unveils-new-quantum-computing-benchmark-and-its-fast/

  58. Raoul Ohio Says:

    Fred 56:

Here is some news I (and many others) would love to see: someone with deep pockets (Google, Apple, Oracle, MS, FB, …) who would appreciate some good cred in the tech community becoming a champion for LaTeX. With a well-funded team, there might be a standardized, cleaned-up, enhanced LaTeX 3 before the world ends.

    Or, at least a NetBeans plugin for LaTeX. I am about 10X more productive writing Java than LaTeX due to NetBeans support (on the fly syntax checking, large project navigation, easy refactoring, …). On the fly syntax checking and instant display would be a huge help on long technical documents.

  59. wolfgang Says:

    @Scott #50

    > simulations of one quantum system by another

But for this purpose we already have powerful quantum computers, and they are routinely used to calculate e.g. the spectra of atoms, molecules, etc.
They typically consist of a quantum system surrounded by (semi)classical measurement devices.
They have been available at many universities for many years, and are called labs.

  60. Scott Says:

    wolfgang #59: So, whether you call a physical system a “computer” is partly a question of knowledge and intent. I’d call it a computer if

(a) you completely understand the underlying physics of this system,
(b) you don’t much care about its behavior in and of itself, but
(c) you’re using it to learn about the behavior of some other system, which you do care about, and which is too hard to measure directly, and which is “isomorphic” to this system in some relevant respect.

    We can call the system a “useful quantum computer” if we also have

    (d) there’s no known efficient classical method to calculate the same spectra (or whatever else it is).

    My point was that nowadays, systems satisfying (a)-(d) certainly exist (and the fact of their existence considerably complicates the question of “how long to a quantum computer?”). Of course, what we have today is still very far from universal, programmable QCs.

  61. wolfgang Says:

    @Scott #60

In general a QC would consist of N particles + (semi)classical devices of all kinds, and is supposed to simulate a system which consists of n particles.

    Garrett in #49 used the criterion that the QC should be more efficient than a classical computer or pencil-and-paper.

    I wanted to make the point that the QC is only interesting if it is also more efficient than a conventional lab setup of the n particles + measurement devices.

    And it is not clear to me that this is achievable in general, since N > n.

  62. Jay Says:

    Raoul,

LaTeX is to physicists what stethoscopes are to physicians. Arguably useful for some very specific applications. Ridiculously useless to most of those who proudly use it.

  63. Raoul Ohio Says:

    D-Wave update from Ars:

    http://arstechnica.com/science/2015/09/d-wave-unveils-new-quantum-computing-benchmark-and-its-fast/

    This is a good illustration of the improved sophistication of reporting on the issue. Of course, ArsTechnica is probably the best tech news site going.

  64. Gil Kalai Says:

    “nowadays, systems satisfying (a)-(d) certainly exist”

    Scott, what are the most convincing examples of such systems?

  65. fred Says:

    wolfgang #59

    I guess in the broad sense of an “analog” quantum machine, that’s true.
    But just like “analog” machines aren’t digital computers (i.e. based on bits and boolean logic), to have a true QC, the main ingredient is to realize qubits as an abstraction, no?

  66. John Sidles Says:

    Scott remarks “I’d call it [a device] a computer if (a) you completely understand the underlying physics of this system …”

    This remark provides illumination to the concluding chapters of a great many modern textbooks on quantum dynamics.

Harrow-Kalai/optimist-skeptic compatibility  If we suppose that the canonical postulates of quantum electrodynamics (QED) accurately describe “the underlying physics” of our computing devices — including crucially the superposition principle holding on a Hilbert space of exponential dimension, and supposing too the computational irrelevance of short-range Standard Model forces and long-range gravitational forces — still it can logically be the case (and even empirically is the case) that Gil Kalai’s postulates hold for QED-world computation devices.

Because QED is special …  The lossy flow of energy and entropy toward spatial infinity (which so vexes quantum cryptographers and BosonSamplers) suggests alternative dynamical descriptions of QED dynamics that are formally equivalent to Hilbert-space QED dynamics, but admit PTIME simulation of (inevitably lossy) large-scale computation processes.

Exemplary visions  The new Zeng-Chen-Zhou-Wen textbook Quantum Information Meets Quantum Matter, as featured on Caltech’s weblog Quantum Frontiers (available in draft at arXiv:1508.02595), is one of many recent quantum textbooks whose concluding chapter (titled “Outlook: a unification of information and matter — from qubits”) admits a Kalai-favorable reading. Specifically, Quantum Information Meets Quantum Matter grounds Kalai-compatible quantum physics emergently in tensor-network descriptions of condensed-matter systems.

    Common reflections  What these textbooks have in common is reflections like the following:

    “We would like to point out that each of the four revolutions [Newton/Maxwell/Einstein/Dirac] totally changed the way how we view our world. The changes are so dramatic that we have to introduce new mathematical languages to described our new pictures of world. […] Many-qubit entanglement is so rich that it is beyond our comprehension and beyond the capacity of our current mathematical language.”
     — Zeng-Chen-Zhou-Wen, Quantum Information Meets Quantum Matter (2015)

    “It is absolutely intolerable to use [the term] analytical geometry for linear algebra with coordinates, still called analytical geometry in the elementary books. Analytical geometry in this sense never existed. There are only people who do linear algebra badly, by taking coordinates and this they call analytical geometry. Out with them! Everyone knows that analytical geometry is the theory of analytical spaces, one of the deepest and most difficult theories of all mathematics.”
     — Jean A. Dieudonne, The Work of Nicholas Bourbaki (1970)

    “In the course of preparing this book I have been fortunate to have had many discussions with computer scientists, applied mathematicians, engineers, physicists, and chemists. Often the beginnings of these conversations were very stressful to all involved. […] Don’t use coordinates unless someone holds a pickle to your head!”
     — Joseph Landsberg, Tensors: Geometry and Applications (2012)

    Conclusion  21st century researchers will continue to view quantum dynamics through the traditional lens of linear algebra … and increasingly they will view quantum dynamics through a burgeoning set of ever-more-universal nonlinear noncoordinate algebraic lenses too. Viewed through these new naturally nonlinear lenses, Kalai’s quantum postulates can be more readily appreciated — mathematically, scientifically, technologically, and pedagogically (and even medically) — as beautiful, plausible, and performative.

    And this is how, in a 21st century QED-world, quantum optimists and quantum skeptics can both be right.

  67. Scott Says:

    wolfgang #61: The points you’re missing are that

    (1) the system you’re trying to study may be experimentally super-hard to access (quark-gluon plasmas, the early universe, etc.), whereas the simulator is easy to access.

    (2) even if you can access the system you’re trying to study, you may not know its underlying Hamiltonian, so studying the system itself doesn’t tell you whether or not the system matches your theoretical model of the system, which is what you wanted to know.

  68. Graeme Says:

    Scott #60, could you cite some examples?

Only a few experiments I know of claim (d), and these are all very much of the “simulating itself” analog variety, with fairly limited tunability of parameters. Impressive as these experiments are, most people won’t call such a device a computer, since it can answer only a single question about its own behavior, as a function of a limited number of parameters.

Even the claims of outperforming classical devices (which are rare!) should be taken with a grain of salt—just like with D-Wave, if we have a dedicated device to answer a particular question, we should also (at least!) compare to specialized algorithms designed to answer the same question. Generally people haven’t put time into developing these algorithms because (again, like D-Wave!) it’s not clear that the questions being answered actually satisfy criterion (c), of being of interest.

As you mention, this is a slippery issue, especially since it depends on whether you think a particular question a simulator can answer is interesting, and on whether a simulator is rich enough to call a computer rather than an experiment (i.e., whether the isomorphism you mention is sufficiently nontrivial). But at this point, very few people would agree that there exist devices achieving (a)-(d). That’s why there’s so much excitement about the possibility of achieving “quantum supremacy”—we’re not there yet.

  69. John Sidles Says:

    Scott says (in #60) “My point was that nowadays, systems satisfying (a)-(d) certainly exist.”

    Those system traits being (as I understand them):

    [a]  we know the physics of the computation process,
    [b]  the result is abstracted from the process,
    [c]  the result has practical implications, and
    [d]  the result is hard for classical Turing machines.

    Scott’s claim can be read in either of two ways: as asserting [a-d] severally, versus asserting [a-d] jointly. Considered severally, each assertion is true, yet considered jointly, there are ample grounds for uncertainty in regard to whether all four can be demonstrated simultaneously, in any single computational process.

  70. Scott Says:

    John #69: No, I never asserted [c] or [d]. I didn’t say “practical” implications; I just said that one system is being used to simulate a different system that you care about more (possibly for pure scientific reasons). And I didn’t say the result is hard to simulate on a classical computer in an absolute sense, only that it outperforms the best classical methods people were able to think of so far. (Of course, here one has the ‘D-Wave problem,’ that this could simply depend on how hard people worked on it. But, crucially, at least we’re now in a realm where one theoretically expects exponential speedups, rather than at best polynomial speedups.)

    Obviously, the issue is to demonstrate all of (a)-(d) in a single process, not to demonstrate them “severally.” But my understanding, from Matthias Troyer and other physicists I talked to, was that current quantum simulations do indeed achieve this. I’m going to try to find references.

  71. Matthias Troyer Says:

    Graeme #68, Scott #60: Scott asked me to provide some examples since I might have mentioned it to him at some point. These are all examples of special purpose analog devices, which – just by the definition of their nature – somehow “simulate themselves” and are limited to certain applications. The examples I will give are quantum simulators using ultracold atomic gases, which are the most advanced quantum simulators at the moment because they are easy to scale.

As long as classical computers can still simulate the same system, one gets into the tricky question of how to compare (time, cost, power, …), which I don’t want to get into, since if one has to ask such questions, one has not reached clear quantum superiority yet. To mention some papers: for the bosonic systems in http://arxiv.org/abs/0905.4882 the classical simulation seems easier, and there are also mean-field theories whose accuracy is better than the quantum simulator’s. For the fermions in http://arxiv.org/abs/1110.3747 one might argue that the quantum simulation gives smaller error bars, but then again I can think of other classical algorithms which – using a supercomputer – can get the same or better results.

The most convincing work so far might be http://arxiv.org/abs/1101.2659, which looks at the dynamics of a correlated Bose gas. There the quantum simulation agrees with state-of-the-art classical methods as long as they work, but the quantum simulation reaches longer times. The reason is a growing entanglement entropy, which at some point causes the classical simulations to become unreliable. This is, however, one of the first demonstrations of a quantum simulator providing results that we don’t have a classical algorithm for. How useful the results are is a matter of debate (some criticize that the Bose gas in this study equilibrates quickly, and that the equilibrium state can again be simulated efficiently), but so is the question of whether achieving the target energy in D-Wave’s new time-to-target metric is useful.

  72. John Sidles Says:

    In an outstandingly lucid post (#71), Matthias Troyer reminds us

    “Growing entanglement entropy […] at some point causes the classical simulations to become unreliable”

    Crafting STEAM-compatible definitions of Matthias Troyer’s terms “growing”, entanglement, entropy, “at some point”, classical, simulations, and unreliable — a task that (to many researchers) seemed straightforward ten or twenty years ago — nowadays is commonly appreciated as a challenge that demands Bourbakian levels of mathematical and scientific talent and commitment.

    “These theories [that Bourbaki seeks to set forth] have arrived at the point that they can be outlined in an entirely rational way. […] Bourbaki’s aim [is] to gather from the diverse processes used by mathematicians whatever can be shaped into a coherent theory, logically arranged, easily set forth and easily used. […] The work method used by Bourbaki is a terribly long and painful one. […] Certain foreigners, invited as spectators to Bourbaki meetings, always come out with the impression that it is a gathering of madmen. They could not imagine how these people, shouting — sometimes three of four at the same time — about mathematics, could ever come up with something intelligent. It is perhaps a mystery but everything calms down in the end.”
     — Jean Dieudonné,
         The work of Nicholas Bourbaki, (1970)

    Conclusion  Quantum information theory has “gathered its Bourbakian maniacs”; nowadays their “shouting” is vigorous, and it can reasonably be foreseen that “everything will calm down in the end.”

    We can hope (with good reason and ample historical precedent) that our appreciation of quantum dynamics will be deepened and widened by this arduous/ardorous collectively Bourbakian discourse.

  73. wolfgang Says:

    @Scott #67

Yes, but then we are talking about a scalable, universal QC or something very similar. I thought you had something else in mind with your comment #50, because obviously we are far away from simulating the early universe with a QC.

  74. Scott Says:

    wolfgang #73: Given how uniform the early universe was, it’s far from obvious to me that you couldn’t usefully simulate some aspect of it by some other quantum system, without needing a universal QC.

    The last example Matthias #71 provided—the correlated Bose gas—seems to pretty clearly achieve my criteria (a), (b), and (d), but it’s open to interpretation and argument whether it achieves (c) (that is, whether we should take the quantum system to be telling us about some “other” quantum system, or only about itself).

  75. wolfgang Says:

    @Scott #74

    I once heard a talk by Frank Tipler (the Omega point Tipler), which basically reduced the early universe to the quantum mechanics of a harmonic oscillator (physicists like to reduce everything to the harmonic oscillator).
    So I am sure one can simulate *that* – but I am not so sure if we would get anything beyond what we put in.

  76. Matthias Troyer Says:

Scott #74: I would argue that (c) is also satisfied. This is not just a gas of bosons doing what they want to do, but a gas of atoms cooled into a regime where all that matters is their mass and scattering length, so that – in that limit – they behave like bosonic point particles and the rest of their structure no longer matters. Then one adds an artificial optical lattice to mimic a lattice model, and tunes the parameters so that the effective low-energy model of the cold atoms in the optical lattice can be mapped to that of the lattice model one wants to “solve”. For me this qualifies as an optical-lattice quantum emulator. This is not a “normal” phase of rubidium, but a gas in a (continuum) box that got engineered into a corner where it now mimics a quantum lattice model. If you don’t let this count, then I don’t know if you would let any analog quantum emulation count.

  77. Scott Says:

    Matthias #76: Thanks so much for the explanation!

  78. Matthias Troyer Says:

John #72: Take a look at that paper; the issue is much simpler. There is a certain time where the classical data stops, simply because one could no longer calculate it. It’s as simple as that. The sentences you quote are just musings as to why it stops there, without going into details.

  79. John Sidles Says:

    Matthias Troyer #78, it was pleasing to see Ulrich Schollwöck listed among the authors of (the outstanding article that you cited in #71) “Probing the relaxation towards equilibrium in an isolated strongly correlated 1D Bose gas” (arXiv:1101.2659v1 [cond-mat.quant-gas], Nature Physics, 2012), because Prof. Schollwöck is also the sole author of (the much-cited review) “The density-matrix renormalization group (DMRG)” (arXiv:cond-mat/0409292v1, Reviews of Modern Physics, 2005).

    Together these two articles illustrate, specifically and concretely, the point that my “Bourbakian” comment (#72) was making generally:

    “The experiments can be seen as a self-sustained dynamical quantum simulation where the simulation effort is the same for each value of [experiment-time] t and each set of parameters. As long as the time-evolution is not perturbed by experimental imperfections, this dynamical quantum simulator outperforms any continuous-time numerical simulation for which the classical effort increases with t. Simulation methods on classical computers such as matrix-product state-based time-dependent DMRG used here suffer from an extensive increase in entanglement entropy which limits the relaxation times accessible in the calculations.”
      — from Probing relaxation … (2012)
    ——
    “Generally, all correlations in matrix-product states are either long-ranged or purely exponentially decaying. They are thus not suited to describe critical behavior in the thermodynamic limit. […] But this statement has to be qualified; DMRG still works for rather large finite systems […] Extrapolations to the infinite-size limit […] are highly controversial.”
      — from DMRG (2005)

    Slow-downs both experimental and computational   Thermodynamical systems nearing critical points exhibit both critical slowing and divergent correlation-lengths, which impede both experimental and numerical quantum simulations (especially in a QED-world that is computationally afflicted by infrared divergences and light-speed transport), to such an extent that no logically rigorous case — in fact not even a convincing heuristic case (as it seems to me) — can be made that either method has an inherent exponential advantage over the other.

    When performative is transformative  It is entirely plausible that experimental quantum simulations may exhibit polynomial speedups relative to numerical quantum simulations, and of course “mere” polynomial speedups can in practice be technologically transformational (fast algorithms for discrete fourier transforms are an example). Needless to say, any such polynomial advantage foreseen for ion-trap quantum simulations can with comparable plausibility be foreseen for Josephson-junction quantum simulations (like D-Wave’s).

Simulation as a Knuthian “art”  Bourbaki’s views are also relevant to the incredible mathematical ingenuity — amply documented in Schollwöck’s RMP survey — that practitioners of numerical simulation methods like DMRG, MPS, and TNS are exhibiting:

    “Despite the extraordinarily penetrating theorems that have been proved, one cannot say that we have a general method of attack. We have several of them, and one always has the impression that one is working like a craftsman, by accumulating a series of stratagems.”

The real optimists  Are we approaching an epoch in which the “craftsmanlike ingenuity” associated to numerical quantum simulation methods can be distilled into a “coherent theory, logically arranged, easily set forth and easily used”? People who believe in the feasibility of this quantum Bourbakian synthesis are (as it seems to me) the real optimists in the optimist-vs-skeptic debate!

    Three conclusions  (1) Increasing respect for ion-trap simulation logically correlates with increasing respect for D-Wave simulation and with increasing respect for numerical simulation; (2) in all three disciplines, “mere” polynomial performative improvements are associated to transformational practical consequences; and (3) Bourbakian readings of the STEAM literature assist the appreciation of conclusions (1) and (2).

  80. Gil Kalai Says:

Regarding Matthias’s very interesting comment (#71):

For the correlated 1D Bose gas, Matthias mentioned that the classical simulation had become unreliable beyond a certain time (due to growing entanglement entropy). This may look like a manifestation of quantum supremacy (i.e., a quantum dynamics too hard for classical simulation), but an alternative explanation would be that the dynamics of the experimental system is actually described by a certain perturbation (a noisy version, if you wish) of the dynamics described by the authors of the paper. If so, once the accumulated inaccuracy (or “noise”) reaches a certain level, the original simulation becomes unreliable. However, this says nothing about the hardness of the correct evolution describing the experiment, which may well be computationally simpler than the one used for the simulation.

  81. John Sidles Says:

Hype-ocalypse Now?

    Appearing today …

• Scientific American, Cryptographers brace for quantum revolution: Encryption fix begins in preparation for arrival of futuristic computers

• Nature, Online security braces for quantum revolution: Encryption fix begins in preparation for arrival of futuristic computers.

    … eerily similar tag-lines?

Natural questions  In the event that D-Wave’s noisy quantum machines can factor big keys in reasonable times — perhaps via proprietary post-Shor algorithms? — then can post-Hilbert classical simulations of “D-Waveian” dynamics factor these keys too? With only polynomial overhead?

    Conclusion  Hmmm … perhaps it is prudent to change over … even at a cost that (as it seems to me) will greatly exceed the cost of all the QIT research that has ever been funded.

  82. fred Says:

    Gil #80
    Is that fundamentally different from, say, a classical numerical simulation of any chaotic system (the three-body problem, the double pendulum)?
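    A minimal numerical sketch of the sensitivity fred describes, using the Lorenz system in place of the three-body problem or the double pendulum simply because its equations are compact (assumes numpy and scipy): two trajectories started \(10^{-10}\) apart separate to order one well before \(t = 40\).

    import numpy as np
    from scipy.integrate import solve_ivp

    # Integrate the Lorenz system twice from initial conditions 1e-10 apart;
    # the separation grows exponentially (positive Lyapunov exponent), so a
    # fixed-precision simulation loses predictive power after a finite horizon.
    def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    t_eval = np.linspace(0, 40, 4001)
    a = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0],
                  t_eval=t_eval, rtol=1e-10, atol=1e-12)
    b = solve_ivp(lorenz, (0, 40), [1.0 + 1e-10, 1.0, 1.0],
                  t_eval=t_eval, rtol=1e-10, atol=1e-12)

    sep = np.linalg.norm(a.y - b.y, axis=0)
    print(f"separation at t=10: {sep[1000]:.2e}, at t=40: {sep[-1]:.2e}")

    This is the classical phenomenon fred is asking Gil to compare against.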

  83. John Sidles Says:

    Fred #82, one Kalai-compatible answer to your question begins with a fine article by Stephen Jordan, Keith Lee, and John Preskill, titled “Quantum algorithms for quantum field theories” (Science, 2012).

    Collapsing the quantum hype-ocalypse  The Jordan/Lee/Preskill article is accompanied by an admirably circumspect (as it seems to me) Science Perspective article by Philipp Hauke, Luca Tagliacozzo, and Maciej Lewenstein titled “Speeding up quantum field theories”. This article concludes:

    “Last but not least, there have been several recent examples where advances in quantum information and quantum computation, such as the understanding of the entanglement structure for relevant quantum states, have been exploited to design more efficient classical algorithms [e.g., the recently developed tensor network algorithms].

    These developments give hope that classical simulations of quantum field theories in 1 + 1 dimensions, and perhaps even in some instances of 2 + 1 dimensions, might be efficient. Again, the techniques developed by Jordan et al. may turn out to be useful for such investigations.”

    The referenced “tensor network algorithms” are reviewed by (e.g.) Schollwöck’s “The density-matrix renormalization group” (2005) and “The density-matrix renormalization group in the age of matrix product states” (2011).

    Physical origins of Kalai-compatibility  One Kalai-compatible consideration is that real-world accelerator experiments are invariably accompanied by electromagnetic radiation, and these electromagnetic processes are invariably afflicted (in QED field theory) by zero-mass infrared divergences that require mathematical treatment beyond the unitary Trotter decompositions that the finite-mass Jordan/Lee/Preskill analysis assumes … and it is precisely the lossy dynamics associated to these divergences that Schollwöck-style classical simulations efficiently exploit, both directly and indirectly (the latter by tuning their non-linear, non-Hilbert algebraic state-space geometry).

    Conclusion  The zero-mass infrared entanglements that are inescapably associated to all QED-world scattering experiments substantially alter both the questions that can be experimentally answered and the mathematical techniques appropriate to simulating those experiments … and these alterations are naturally compatible — qualitatively at the very least — with Gil Kalai’s quantum postulates.

    The main lesson  For quantum optimists and quantum skeptics alike, there’s abundant motive and opportunity to do good research.

  84. John Sidles Says:

    Scott pines for the fjords (#4) “I want to maintain my positive, hopeful, generous, broad-minded attitude as long as possible.”

    As this “D-Wave Open Thread” winds down, here are two similarly-titled survey articles that are exemplary of quantum informatic discourse that is “positive, hopeful, generous, and broad-minded”.

    •  “What is a quantum simulator?” (2014) by Tomi Johnson, Stephen Clark, and Dieter Jaksch, and

    •  “Can one trust quantum simulators?” (2012) by Philipp Hauke, Fernando Cucchietti, Luca Tagliacozzo, Ivan Deutsch and Maciej Lewenstein.

    The survey “What is a quantum simulator?” (with 127 references) is the shorter and less technical of the two, whereas “Can one trust quantum simulators?” (with 211 references) is the longer and more technical; the latter is a sequel to “Speeding up quantum field theories” (of #83).

    Conclusion  These articles survey a banquet of open quantum research questions that skeptics and optimists alike will find to be deliciously palatable.

    Fun factoid  Based upon the publication rate of quantum-physics articles prior to WWII, it’s plausible that the 337 articles (including duplicates) in the bibliographies of these two “simulation survey” articles substantially exceed in number all the quantum-physics articles that John von Neumann read in his lifetime.

  85. Peter Shor Says:

    Gil Kalai says, about simulating a 1D Bose gas:

    If so, once the accumulated inaccuracy (or “noise”) reaches a certain level, the original simulation becomes unreliable. However, this says nothing about the hardness of the correct evolution describing the experiment, which may well be computationally simpler than the one used for the simulation.

    In which case, the quantum simulation is still superior, because it’s simulating the original system using the correct “noise”, whilst we have no idea what the correct “noise” is or how to simulate it classically.

    It seems to me that now you actually have a way to verify your “noise theory” of the impossibility of quantum computation: show that it gives the correct results for this quantum simulation.

  86. John Sidles Says:

    Shtetl Optimized readers who appreciate (as I do) gauge-field dynamics as an avatar of Kalai-postulate dynamics will enjoy the considerations raised by Basham, Brown, Ellis, and Love’s “Energy correlations in electron-positron annihilation in quantum chromodynamics: Asymptotically free perturbation theory” (Physical Review D, 1979). This 36-year-old paper admirably sets the stage for numerous still-open problems in gauge-field dynamics, some of which are set forth in the Clay Institute’s Millennium Prize problem “Quantum Yang-Mills Theory” (the Clay Institute sometimes calls this problem “Yang–Mills and Mass Gap”).

    After many decades of wrestling with the difficult mathematics and difficult physics of quantum electrodynamics (QED) and quantum chromodynamics (QCD), it is very far from evident to anyone (including me) that their low-energy limit is entirely understood, either mathematically or physically.

    The low-energy limit of QCD is problematic because of the notoriously tough (Millennium Prize) question: how are quarks and gluons dynamically confined? The low-energy limit of QED is problematic because of a comparably tough question: how can quantum information be spatially localized (when we want it to be localized) and spatially delocalized (when we want it to be delocalized)?

    Historical confection  Willard Lamb’s article “Anti-Photon” (Applied Physics B, 1995) hilariously begins

    “The author does not like the use of the word “photon”, which dates from 1926. In his view, there is no such thing as a photon. Only a comedy of errors and historical accidents led to its popularity among physicists and optical scientists. […]

    At the first of the 1960’s Rochester Coherence Conferences, I suggested that a license be required for use of the word “photon”, and offered to give such a license to properly qualified people.

    My records show that nobody working in Rochester, and very few other people elsewhere, ever took out a license to use the word ‘photon’.”

    The common-sense need for Lamb’s (jocular) “photon license” has been amply demonstrated by five decades of difficult-yet-steady progress in understanding the subtle quantum dynamics of low-energy “gluons” and the even subtler (as it seems to many, including me) quantum informatics of low-energy “photons” … both near black-hole horizons and in flat-space laboratories.

    Conclusion  The Kalai postulates can be appreciated as mathematical postulates regarding the quantum informatic transport properties of low-energy, unconfined, massless gauge fields. The low-energy quantum informatic properties of quantum electrodynamics (QED) are comparably mysterious, both physically and mathematically, to the low-energy confinement properties of quantum chromodynamics (QCD).

  87. Gil Kalai Says:

    Hi Peter, thanks, as always, for the comment. I was away for a few (holiday) days.

    “It seems to me that now you actually have a way to verify your “noise theory” of the impossibility of quantum computation: show that it gives the correct results for this quantum simulation.”

    Absolutely! There are two (possibly related) things that we can check. With Guy Kindler we considered a very simple form of noise for bosons that amounts to exponential decay of high Hermite-Fourier coefficients. Perhaps we can try this here: we can make this decay constant in time, adjust the rate to fit the witnessed breakdown of the simulation, and then check how good it is (see the toy sketch after the next paragraph).

    The second thing we can try is to take the Bose-Hubbard model written in the paper and apply some standard noise to it (or even try my smoothed variation; here, since there is no fault-tolerance in place, it will not make a big difference). Again I would guess that the net effect will be that of suppressing some higher-order terms, and this will make the evolution computationally simpler.
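    A toy sketch of the first suggestion (a paraphrase assuming numpy, not Kalai and Kindler’s actual construction): damp the \(n\)-th basis coefficient of a state by \(e^{-\gamma n}\), constant in time, and watch the overlap with the ideal state fall as the rate \(\gamma\) grows.

    import numpy as np

    # Exponentially damp high basis coefficients -- a crude stand-in for
    # "exponential decay of high Hermite-Fourier coefficients" -- and then
    # renormalize, so the result is again a valid (pure) state vector.
    def damp(coeffs, gamma):
        n = np.arange(len(coeffs))
        noisy = coeffs * np.exp(-gamma * n)
        return noisy / np.linalg.norm(noisy)

    rng = np.random.default_rng(1)
    ideal = rng.standard_normal(32) + 1j * rng.standard_normal(32)
    ideal /= np.linalg.norm(ideal)

    for gamma in (0.01, 0.05, 0.2):
        fidelity = abs(np.vdot(ideal, damp(ideal, gamma))) ** 2
        print(f"gamma={gamma}: fidelity with the ideal state = {fidelity:.4f}")

    Fitting \(\gamma\) to the observed breakdown time, and then checking the damped model’s predictions against the experiment, is the check proposed above.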

    The term “the correct results” can be a tricky issue here, but we can hope for:

    a) a better result compared to the simulations in the paper, or even

    b) a result as good as the experimental inaccuracy witnessed in the experiment itself. (I am sure there is a technical term for it, but I don’t remember it.)

    Anyway, it will be fun to think about these suggestions together; in fact, I also wrote Matthias about it a couple of weeks ago.

    ” In which case, the quantum simulation is still superior, because it’s simulating the original system using the correct “noise”, whilst we have no idea what the correct “noise” is or how to simulate it classically.”

    Right, it is hard to argue that the experiment itself simulates its own behavior better. (It will be harder for me on my computer to simulate an unknown process that you run on your computer.) But this has no bearing (I think) on the issue of quantum computational supremacy, and the issue here does not seem to be the “quantumness” of the noise. In some cases, the issue might be the large number of extra parameters needed to describe the noise (which is somewhat related to fred’s #82).

    In any case, I think we may well get some extra mileage by adding noise and replacing the Bose-Hubbard dynamics with a stochastic version (which may very well also be computationally simpler).

    Of course, there might be some specific reasons I am not aware of to think that the paper we discuss manifests some “quantum supremacy”. I will certainly be interested to hear what they are.

    (BTW, a group of us here at HUJI (Nadav Katz, Ronnie Kozlov, Dorit Aharonov, and me) are meeting once in a while to discuss the best potential evidence for “quantum supremacy” in available experimental quantum physics. We do not insist that it be about one system emulating another one. We chatted about a few potential candidates (Rubidium was mentioned 🙂, but not the specific experiment that Matthias described). It looks to us that certain QED computations for the hydrogen atom are still the most promising. There too, my hope would be that certain noisy perturbations of the system will allow better (and more efficient) simulation. In some cases, certain existing effective computations are doing some similar things.)

  88. John Sidles Says:

    In regard to “quantum supremacy” and high-precision simulations (see Gil Kalai’s #87), NIST’s James Sims and Stanley Hagstrom are among the many authors who routinely publish high-precision classical simulations of multi-electron systems (see, e.g., “Hylleraas-configuration-interaction study of the \(2{}^{2}S\) ground state of neutral lithium and the first five excited \({}^{2}S\) states” (2009)).

    Despite these quantum systems’ strong dynamical nonlinearity and formally infinite Hilbert-space dimension, relative precisions of order \(10^{-9}\) are routinely achieved, and there are no fundamental obstructions (AFAICT) to pushing the precision very much further. Perhaps hundred-digit simulation precision is computationally feasible? Nobody knows.

    It will be a while (obviously) before quantum simulations can hope to demonstrate comparable precision.

    What’s going on? Why are the limits to the precision and efficiency of classical quantum simulation so un-obvious?

    The concluding three sections of the recent (fourth) edition of Gene Golub and Charles van Loan’s Matrix Computations (2013) survey some of the computational advances that ground high-precision quantum simulations.

    12.3 Kronecker Product Computations
    12.4 Tensor Unfoldings and Contractions
    12.5 Tensor Decompositions and Iterations

    What we need to know about matrix computations is growing exponentially! Naturally it is impossible to provide in-depth coverage of all the great new advances and research trends. […]

    Our aim in these few pages is simply to give a snapshot of the “inner loop” linear algebra that is associated to a few of these methods and to build intuition for this increasingly important area of high-dimensional scientific computing.

    Sixty-two references follow, chief among them Tamara Kolda and Brett Bader’s SIAM Review “Tensor Decompositions and Applications” (2009).

    Kolda and Bader’s SIAM Review article has already accumulated more than two thousand citations, which is indicative of the burgeoning vigor of tensor-space research.
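    As a small taste of the “inner loop” linear algebra that section 12.3 covers, here is the classic vec identity \((A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^{T})\), which avoids ever forming the \(mn \times mn\) Kronecker product (a minimal numpy sketch, not code from Golub and van Loan):

    import numpy as np

    # The "vec trick": (A kron B) vec(X) = vec(B X A^T). Forming the
    # Kronecker product costs O(m^2 n^2) memory; the identity reduces the
    # matrix-vector product to two small matrix-matrix multiplications.
    rng = np.random.default_rng(0)
    m, n = 50, 60
    A = rng.standard_normal((m, m))
    B = rng.standard_normal((n, n))
    X = rng.standard_normal((n, m))

    x = X.reshape(-1, order="F")                     # column-major vec(X)
    y_naive = np.kron(A, B) @ x                      # the expensive way
    y_trick = (B @ X @ A.T).reshape(-1, order="F")   # two small products

    assert np.allclose(y_naive, y_trick)

    Identities of this kind are one reason tensor-network state spaces can be manipulated at a cost that scales with the factors rather than with the full Hilbert-space dimension.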

    Entangled questions

    • How is it that (Kalai-compatible?) tensor-space dynamical systems allow efficient quantum simulation in principle?

    • Why are high-precision tensor-space quantum simulations proving to be so (relatively) easy to demonstrate in practice?

    • How is it that Hilbert-space dynamical systems allow scalable quantum computation in principle?

    • Why are scalable quantum computations proving to be so (relatively) difficult to demonstrate in practice?

    Conclusion  It’s plausible that answering any one of these four entangled questions in depth will require that all four be answered in depth.

    Surveys like Kolda and Bader’s and textbooks like Golub and van Loan’s are a very good start … but they are only a start.

  89. Gil Kalai Says:

    Dear Peter,

    A few more comments regarding the 1-D Bose gas experiment that Matthias mentioned, and on how my “noise theory” might be related to the difficulty of classically simulating the experiment.

    A) As for the 1-D strongly coupled Bose gas system that Matthias mentioned, we have to choose between two explanations for the difficulty (witnessed at some time) in simulating the experiment via the Bose-Hubbard evolution:

    a) This expresses some sort of quantum computational supremacy of the experimental process!

    b) This expresses errors which accumulate in time.

    I do not see good reasons to prefer a) over b), as Matthias, Scott, and you seem to do. (The second possibility requires neither “my noise” nor a skeptical stance on QC in general.) Maybe I am missing something here.

    B) Now, to my theory. My theory comes in two levels, or scales if you wish. One scale is represented by my work with Guy Kindler, where we identify some behavior that characterizes a very simple noise model for non-interacting bosons, and that may well extend to other noise models for bosons and, more generally, to “small quantum computers”. It will be a good idea to see, as you suggested, whether this gives a better classical (in fact, low-complexity) simulation of the 1-D Bose gas experiment.

    The behavior that Guy and I describe for noisy quantum systems at the small scale gives (I think) an explanation for why you will not be able to reach the starting point for quantum error-correction, or any manifestation of quantum supremacy, at a larger scale.

    C) My other (and older) noise theory takes for granted that noise accumulates (or that fault-tolerance is absent or impossible), and tries to model noisy quantum circuits (and more general systems) based on this assumption. (B gives a possible picture supporting the assumption.)

    While (under this assumption) universal quantum circuits are beyond reach and cannot be achieved even for a small number of qubits, quantum circuits remain a powerful framework and model for quantum evolutions. Here (when the device leading to the evolution is not specified) we cannot expect an explicit noise model of the form “\(d\rho/dt = \cdots\)”, but only implicit modeling in terms of conditions that the entire noisy process must satisfy. The two of us discussed my smoothing gymnastics for this purpose at length in this thread. (And Aram Harrow and I discussed at length other conjectural implicit conditions on the noise.) Again, this part of my work is not so relevant to the 1-D Bose gas, since the system does not have any fault-tolerance component to start with. You may explain the gap between the classical simulation and the actual experimental behavior by using the very standard noise models for quantum circuits of the threshold theorem and earlier work (also your FT work, and also works by the first-generation QC skeptics).
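    For contrast, the kind of explicit noise model that cannot be expected here is the standard Markovian (Lindblad) master equation,

    \[
    \frac{d\rho}{dt} \;=\; -\frac{i}{\hbar}\,[H,\rho] \;+\; \sum_k \gamma_k \left( L_k\,\rho\,L_k^\dagger \;-\; \tfrac{1}{2}\left\{ L_k^\dagger L_k,\, \rho \right\} \right),
    \]

    with Hamiltonian \(H\), jump operators \(L_k\), and rates \(\gamma_k\); Gil’s implicit conditions constrain the noisy process as a whole without ever committing to such a generator.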

  90. Shehab Says:

    From the 2015 publications of the Martinis group, it is not obvious what new architectural approach, apart from using their superconducting qubits, is going to be used. I don’t expect everything to be published, though.

  91. fred Says:

    Scott is quoted here

    http://spectrum.ieee.org/tech-talk/computing/hardware/how-much-power-will-quantum-computing-need

  92. anon Says:

    Hi Scott, sorry for going back to this old post, but I have a question.
    Maybe somebody else proposed this already, but, along the lines of D-Wave: what about making a dedicated chip which is really a controllable Ising lattice? Basically this would be a classical version of D-Wave’s quantum (??) one. Such a system could even be built on a board which could then be inserted into a normal PC.
    So basically you realize the annealing on a real physical system instead of in a program, and it should be much faster. It’s kind of like doing your pegs-and-soapy-water experiment in reality instead of running a variational calculation program.
    Does it make sense, or does the complexity of translating the problem into an Ising minimization kill this idea too?
    Best, K.
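    A minimal software stand-in for what anon’s board would do in hardware (a sketch assuming numpy; the inputs J_h, J_v, h are hypothetical encodings of a problem instance): Metropolis-style simulated annealing on an open-boundary 2-D Ising lattice with energy \(E(s) = -\sum_{\langle ij \rangle} J_{ij}\, s_i s_j - \sum_i h_i s_i\).

    import numpy as np

    # Metropolis simulated annealing on an n-by-n open-boundary Ising lattice.
    # J_h[i, j] couples spins (i, j)-(i, j+1); J_v[i, j] couples (i, j)-(i+1, j).
    def anneal(J_h, J_v, h, sweeps=2000, T0=3.0, seed=0):
        rng = np.random.default_rng(seed)
        n = h.shape[0]
        s = rng.choice([-1, 1], size=(n, n))
        for t in range(sweeps):
            T = max(T0 * (1 - t / sweeps), 1e-3)  # linear cooling schedule
            for i in range(n):
                for j in range(n):
                    field = h[i, j]  # local field from h and the neighbours
                    if j > 0:
                        field += J_h[i, j - 1] * s[i, j - 1]
                    if j < n - 1:
                        field += J_h[i, j] * s[i, j + 1]
                    if i > 0:
                        field += J_v[i - 1, j] * s[i - 1, j]
                    if i < n - 1:
                        field += J_v[i, j] * s[i + 1, j]
                    dE = 2 * s[i, j] * field  # energy cost of flipping (i, j)
                    if dE <= 0 or rng.random() < np.exp(-dE / T):
                        s[i, j] *= -1
        return s

    n = 16
    rng = np.random.default_rng(1)
    spins = anneal(rng.standard_normal((n, n)), rng.standard_normal((n, n)),
                   rng.standard_normal((n, n)))

    A dedicated board would execute the inner double loop in parallel analog hardware; the encoding question anon raises is visible even here, since an arbitrary problem must first be mapped onto the lattice’s J and h.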

  93. Steve Jurvetson Says:

    Ok, is 10^8 notable?

    Unprecedented perhaps?

    I imagine you are pondering today’s announcement by Google Research: http://googleresearch.blogspot.ca/2015/12/when-can-quantum-annealing-win.html

  94. Quantum Supercomputers Entice Wall Street Vowing Higher Returns - Government Jobs Today Says:

    […] stirred Scott Aaronson, an associate professor at the Massachusetts Institute of Technology, to write that while researchers don’t entirely know the characteristics of the machine, “D-Wave and the […]