D-Wave Open Thread
A bunch of people have asked me to comment on D-Wave’s release of its 1000-qubit processor, and a paper by a group including Cathy McGeoch saying that the machine is 1 or 2 orders of magnitude faster (in annealing time, not wall-clock time) than simulated annealing running on a single-core classical computer. It’s even been suggested that the “Scott-signal” has been shining brightly for a week above Quantham City, but that Scott-man has been too lazy and out-of-shape even to change into his tights.
Scientifically, it’s not clear if much has changed. D-Wave now has a chip with twice as many qubits as the last one. That chip continues to be pretty effective at finding its own low-energy states: indeed, depending on various details of definition, the machine can even find its own low-energy states “faster” than some implementation of simulated annealing running on a single-core chip. Of course, it’s entirely possible that Matthias Troyer or Sergei Isakov or Troels Ronnow or someone like that will be able to find a better implementation of simulated annealing that closes even the modest gap—as happened the last time—but I’ll give the authors the benefit of the doubt that they put good-faith effort into optimizing the classical code.
More importantly, I’d say it remains unclear whether any of the machine’s performance on the instances tested here can be attributed to quantum tunneling effects. In fact, the paper explicitly states (see page 3) that it’s not going to consider such questions, and I think the authors would agree that you could very well see results like theirs, even if what was going on was fundamentally classical annealing. Also, of course, it’s still true that, if you wanted to solve a practical optimization problem, you’d first need to encode it into the Chimera graph, and that reduction entails a loss that could hand a decisive advantage to simulated annealing, even without the need to go to multiple cores. (This is what I’ve described elsewhere as essentially all of these performance comparisons taking place on “the D-Wave machine’s home turf”: that is, on binary constraint satisfaction problems that have precisely the topology of D-Wave’s Chimera graph.)
But, I dunno, I’m just not feeling the urge to analyze this in more detail. Part of the reason is that I think the press might be getting less hyper-excitable these days, thereby reducing the need for a Chief D-Wave Skeptic. By this point, there may have been enough D-Wave announcements that papers realize they no longer need to cover each one like an extraterrestrial landing. And there are more hats in the ring now, with John Martinis at Google seeking to build superconducting quantum annealing machines but with ~10,000x longer coherence times than D-Wave’s, and with IBM Research and some others also trying to scale superconducting QC. The realization has set in, I think, that both D-Wave and the others are in this for the long haul, with D-Wave currently having lots of qubits, but with very short coherence times and unclear prospects for any quantum speedup, and Martinis and some others having qubits of far higher quality, but not yet able to couple enough of them.
The other issue is that, on my flight from Seoul back to Newark, I watched two recent kids’ movies that were almost defiant in their simple, unironic, 1950s-style messages of hope and optimism. One was Disney’s new live-action Cinderella; the other was Brad Bird’s Tomorrowland. And seeing these back-to-back filled me with such positivity and good will that, at least for these few hours, it’s hard to summon my usual crusty self. I say, let’s invent the future together, and build flying cars and jetpacks in our garages! Let a thousand creative ideas bloom for how to tackle climate change and the other crises facing civilization! (Admittedly, mass-market flying cars and jetpacks are probably not a step forward on climate change … but, see, there’s that negativity coming back.) And let another thousand ideas bloom for how to build scalable quantum computers—sure, including D-Wave’s! Have courage and be kind!
So yeah, if readers would like to discuss the recent D-Wave paper further (especially those who know something about it), they’re more than welcome to do so in the comments section. But I’ve been away from Dana and Lily for two weeks, and will endeavor to spend time with them rather than obsessively reloading the comments (let’s see if I succeed).
As a small token of my goodwill, I enclose two photos from my last visit to a D-Wave machine, which occurred when I met with some grad students in Waterloo this past spring. As you can see, I even personally certified that the machine was operating as expected. But more than that: surpassing all reasonable expectations for quantum AI, this model could actually converse intelligently, through a protruding head resembling that of IQC grad student Sarah Kaiser.
Comment #1 August 26th, 2015 at 6:44 pm
If you were put in charge of an entire research team tasked with building a working quantum computer, and given the necessary funding, what approach would you take?
Comment #2 August 26th, 2015 at 7:35 pm
Phyliida #1: I would give the money to Martinis, Wineland, and a few other such people and let them figure out what to do with it! (I’m guessing it would mostly involve superconducting and ion-trap QC—in both of which the main challenge remains what it’s been for the past 15+ years: to further push down decoherence rates with a bunch of integrated qubits, to the point where you can start doing fault-tolerance. I’m sure more money could make things happen faster, as with most things.)
Comment #3 August 26th, 2015 at 8:19 pm
Part of the reason is that I think the press might be getting less hyper-excitable these days, thereby reducing the need for a Chief D-Wave Skeptic. By this point, there may have been enough D-Wave announcements that papers realize they no longer need to cover each one like an extraterrestrial landing.
I don’t know if the press is getting less excitable, but the general public definitely seems to be about as excitable and hyperbolic as ever. See for example this Reddit thread https://www.reddit.com/r/Futurology/comments/3hpsgr/some_preliminary_benchmarks_on_dwaves_1000_qubit/
Comment #4 August 26th, 2015 at 8:55 pm
Joshua #3: If I want to maintain my positive, hopeful, generous, broad-minded attitude as long as possible, probably I shouldn’t even look at that thread…
Comment #5 August 26th, 2015 at 9:07 pm
What led you to believe that about D-Wave?
Comment #6 August 26th, 2015 at 10:24 pm
Douglas #5: Well, the last few times I talked to journalists about this, the knowledge level seemed slightly higher than in the past—the points had gotten across that, yes, there is expert skepticism for good reasons, and yes, this is a long-term undertaking, which might or might not ever be able to give you a significant advantage over classical computers for real-world optimization problems—and if it can, then probably not for a long time. The days seemed gone when every D-Wave announcement meant “Day Zero” of a new quantum era where no previous rules applied, and where I would only be called for pro forma “due diligence” on a pre-decided narrative that was fundamentally false.
On reflection, though, I wonder how much of this is just pure selection bias—my no longer looking up the sorts of things that I know would make me rip my hair out (cf. comment #4), and the journalists who write such things no longer bothering to call me?
Comment #7 August 27th, 2015 at 1:22 am
Oh, I thought you meant that you believed that D-Wave was in it for the long haul.
Comment #8 August 27th, 2015 at 3:17 am
Is there an implementation of Quantum annealing as a circuit using universal quantum gates?
Comment #9 August 27th, 2015 at 5:58 am
All of this made me wonder: how is the book ‘Speaking Truth to Parallelism’ coming along?
Comment #10 August 27th, 2015 at 6:26 am
Job #8: Yes, it’s straightforward to simulate quantum annealing in the circuit model, using a trick called “Trotterization” (which just means: approximating a continuous Hamiltonian by the product of a large number of small discrete rotations, on 1 or 2 qubits at a time). The converse direction, proved by Aharonov et al. in 2004, is much more interesting: any quantum circuit can be simulated in the adiabatic model with only polynomial slowdown, provided you’re willing to allow a final Hamiltonian whose ground state is a superposition over the entire history of the circuit-model computation.
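As a toy illustration of that Trotterization trick (a single-qubit sketch of my own devising, not code from any paper or library): approximate exp(-it(X+Z)) by alternating many small X- and Z-rotations, and watch the error shrink as the number of slices r grows.

```python
import math

# Minimal 2x2 complex-matrix helpers (kept dependency-free on purpose).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lin(a, A, b, B):  # elementwise a*A + b*B
    return [[a * A[i][j] + b * B[i][j] for j in range(2)] for i in range(2)]

I2 = [[1, 0], [0, 1]]
X  = [[0, 1], [1, 0]]
Z  = [[1, 0], [0, -1]]

def exp_pauli(theta, P):
    # e^{-i theta P} = cos(theta) I - i sin(theta) P, valid since P^2 = I.
    return lin(math.cos(theta), I2, -1j * math.sin(theta), P)

def trotter_error(t, r):
    # Exact evolution: (X+Z)^2 = 2I, so
    # e^{-it(X+Z)} = cos(sqrt(2) t) I - i sin(sqrt(2) t) (X+Z)/sqrt(2).
    s = math.sqrt(2)
    exact = lin(math.cos(s * t), I2, -1j * math.sin(s * t) / s, lin(1, X, 1, Z))
    # Trotterized evolution: (e^{-iXt/r} e^{-iZt/r})^r, i.e. r alternating
    # small rotations, as in the continuous-to-discrete approximation above.
    step = matmul(exp_pauli(t / r, X), exp_pauli(t / r, Z))
    U = I2
    for _ in range(r):
        U = matmul(U, step)
    return max(abs(U[i][j] - exact[i][j]) for i in range(2) for j in range(2))

print(trotter_error(1.0, 10), trotter_error(1.0, 100))  # error shrinks roughly like 1/r
```

For real multi-qubit Hamiltonians the same idea applies term by term, with the error controlled by the commutators of the terms.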
Comment #11 August 27th, 2015 at 6:27 am
F. Marshall #9: Sorry, still working on the manuscript! A few more months at least before I send it back to MIT Press.
Comment #12 August 27th, 2015 at 9:38 am
I think you’re right about the improving press coverage: the journalists seem to be a lot savvier than five years ago.
Comment #13 August 27th, 2015 at 10:23 am
That is interesting; I wasn’t aware that the adiabatic model could simulate a universal QC with only polynomial overhead. I’m assuming from that last requirement that this would not be practical.
I wonder if there is a classical algorithm which, given an optimization problem, can produce a quantum circuit to solve it using Trotterization: e.g., is there a description of how to do this somewhere?
I realize that, in practice, the adiabatic model is more viable than a universal QC, so implementing quantum annealing as a circuit is not very smart, but I personally would find it easier to study and understand.
For example, at this time I understand that a classical machine probably can’t solve Simon’s problem for input functions that use lots of Toffoli gates, even though it can do pretty well for many other functions. I would like to understand which optimization instances are hard classically, using the gate model, and how D-Wave (or eventually Martinis) are doing with those.
Comment #14 August 27th, 2015 at 11:51 am
One data point concurring with Scott’s point that journalists are getting smarter about this. In years past I had discussed with the Register’s writers their “born yesterday” repetition of D-Wave’s claims. They seem a lot more with it now:
http://www.theregister.co.uk/2015/08/27/dwave_whether_or_not_its_quantum_its_faster/
Comment #15 August 27th, 2015 at 12:37 pm
[…] (and famous critic of D-Wave Systems) Scott Aaronson, “D-Wave Open,” Shtetl-Optimized, 26 Aug 2015. “The overhead of encoding an optimization problem of interest […]
Comment #16 August 27th, 2015 at 4:13 pm
Is the Sarah/D-Wave hybrid a walking/talking Turing test?
Re: “… I enclose two photos from my last visit to a D-Wave machine, which occurred when I met with some grad students in Waterloo this past spring. As you can see, I even personally certified that the machine was operating as expected. But more than that: surpassing all reasonable expectations for quantum AI, this model could actually converse intelligently, through a protruding head resembling that of IQC grad student Sarah Kaiser.”
Comment #17 August 27th, 2015 at 9:20 pm
The pictures are great – right there is this year’s Halloween costume for sure!
Comment #18 August 27th, 2015 at 10:41 pm
Scott… umm, you need to exercise.
Comment #19 August 28th, 2015 at 2:36 am
Arko #18: Tsk, tsk. Can’t a guy blog about quantum computing without these sorts of body-shaming, paunch-shaming remarks denigrating his physique? Your comment is all the more offensive and problematic by virtue of being accurate. Having absorbed patriarchal cultural ideals of the fit, in-shape male, do you have any idea how it feels to be out of breath after mere minutes of chasing your toddler around the apartment?
Comment #20 August 28th, 2015 at 3:07 am
ed #17: The students told me the device was, indeed, originally built as a Halloween costume, and only later repurposed for accosting me.
Comment #21 August 28th, 2015 at 3:30 am
Scott #19: My cousin had gotten out of shape. Her twin toddlers dragged her back into it. She would tell you that it’s like “chasing chickens”. 😉
Comment #22 August 28th, 2015 at 8:47 am
My wife set me a similar task more than a year ago, in service of a book that she is writing upon the topic of hope, that task being to read all works — fiction and nonfiction alike — in a charitable spirit.
As a roadmap in service of this “positive, hopeful, generous, broad-minded” objective, Alexander Pope’s couplets have stood the test of time:
Not hamartia, but anagnorisis The 21st century’s emerging community of laughing quantum philosophers envisions — and perhaps even crucially fosters — a planet upon which skeptical mathematical postulates (Kalai), optimistic scientific visions (Aaronson), and transformative technological capacities (D-Wave) are concurrently appreciated, and even compatibly realized, as universal, natural, and performative.
Comment #23 August 28th, 2015 at 11:05 am
A couple of questions:
1) Is “quantum annealing” now supposed to be synonymous with the “quantum adiabatic algorithm”? I thought they were supposed to be somewhat distinct, and have been annoyed that some of the press unhelpfully conflated the two. The point being that D-Wave is the former, not the latter (according to e.g. Troyer—hope I’m not putting words in Matthias’s mouth, if he’s reading).
2) Any idea whether all optimization problems can be embedded in the Chimera graph? Certainly not all planar problems are NP-complete, and Chimera is close to planar in some sense. Indeed, in some ways problems on the Chimera graph look “easier” than on general graphs. For example, finding the ground state of an Ising spin glass on a Chimera has a poly time approx. algorithm, like the planar case. See arXiv:1306.6943, which is cited by Vazirani et al, who make additional remarks about this.
Alternatively, if the answer to question 2 is yes, can it be done efficiently—that seems to be what you mentioned.
Comment #24 August 28th, 2015 at 12:18 pm
Fasting twice a week is easier than exercise.
Comment #25 August 28th, 2015 at 3:26 pm
John, how does, say, Ayn Rand come across in a charitable spirit?
Comment #26 August 28th, 2015 at 4:40 pm
@Nick Read, #23
Regarding your second question, certainly — Chimera was explicitly designed to be able to embed complete graphs with quadratic overhead (see, e.g., section 1.C of http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6802426), so a large enough chip can embed any Ising spin energy-minimization problem.
Ising spin minimization with local fields was shown to be NP-hard even on planar graphs (and without fields on non-planar ones); see Barahona ’82: http://www.yaroslavvb.com/papers/barahona-on.pdf
Hope this helps!
Comment #27 August 28th, 2015 at 6:34 pm
Here is a serious answer to your serious (but peripheral) question. And yes, there is some “quantum” in this answer!
A starting point One good starting point to an appreciation of Ayn Rand is Yannick Grannec’s Goddess of Small Victories (2014), which is about the marriage of Kurt Gödel and Adele Porkert. Here Grannec presents a peripeteiac study of the difficulties and rewards of marriage to a charismatic, rationalizing person who is severely afflicted with a personality spectrum-disorder.
Gödel and Rand contrasted One major difference (obviously) between Gödel and Rand, is that Gödel’s professional works are proving to have enduring mathematical and philosophical value; Rand’s not so much. Still in their private lives, neither Gödel nor Rand was easy to live with.
Conclusion Just as mathematically “there is no royal road to geometry”, there is medically and morally no royal road to dealing effectively with personality disorders in the “positive, hopeful, generous, broad-minded attitude” that Scott commends.
Because for all too many disorders, the long-term medical prognosis is just plain poor, in the present century as sadly as in all prior centuries.
Charity and hope And yet progress is feasible, and we have ample quantum reasons to foresee that this progress will accelerate in the present century. In this respect the “other Scott A” weblog Slate Star Codex, and also the family-oriented weblog Out of the Fog, are recommended resources (by me at least).
Comment #28 August 28th, 2015 at 9:26 pm
Nick #23: Regarding your first question, I always thought of the adiabatic algorithm as just the zero-temperature limit of quantum annealing, or (conversely) of quantum annealing as the adiabatic algorithm at nonzero temperature.
Comment #29 August 28th, 2015 at 10:49 pm
@ Scott #28 and Nick #23:
Scott, thanks for your perspective! I was going to offer a similar distinction, but I am glad that you gave yours first, and we agree on this semantical point! 🙂
An alternative way to put it is that your “Intel Core whatever” is *not* a “Turing machine” (for starters, it does not have an infinite tape, or, anymore, any tape at all) — but it is a real-life embodiment of an abstract concept…
Of course, it is just my private way of thinking about those things…
Cheers!
Comment #30 August 29th, 2015 at 12:41 am
Nick – “2) Any idea whether all optimization problems can be embedded in the Chimera graph? Certainly not all planar problems are NP-complete, and Chimera is close to planar in some sense.”
The first answer is yes. The Chimera sequence of graphs is sufficiently non-planar that you can embed any graph in them. All you need for this is a planar graph with a sufficient supply of non-planar bridges. For instance, if the planar graph is periodic in both directions and has at least one bridge in its repeating cell, that’s usually enough. Besides, planarity by itself does not usually prevent NP-completeness; you need planarity and a relatively tame local optimization problem. One way to see it is that many planar interactions make it possible to build an equivalent of a bridge.
The second answer is that this doesn’t help D-Wave’s claims very much, because the reduction is wasteful. They like to talk as if exponential speedup is their realistic goal. However, even if their qubits were better than not-all-that-quantum, you can only reasonably expect a quadratic speedup for quantum annealing for a general NP-hard problem. (Because the best that it can be is a version of Grover’s algorithm.) Even though the standard convention for complexity classes like BQP and NP is to take all polynomial equivalence for granted, most of the topic of computer algorithms in practice is about polynomial accelerations. These reductions from something else to the Chimera-type graphs can easily negate the acceleration that you were after.
For example, this is why a paper on D-Wave and Ramsey numbers topped out at R(2,8): How many people do you need in a room to guarantee that at least 2 are friends or at least 8 are mutual strangers? You should carefully consider the “difficulty” of this problem. Yes, it’s as dumb as it sounds. Not just because noisy quantum annealing isn’t magic, but also because of the overhead when they embedded this into their “Chimera” graph.
Comment #31 August 29th, 2015 at 9:19 am
Greg #30,
If D-Wave could match the performance provided by Grover’s algorithm for an NP-Hard problem, then wouldn’t the speedup be exponential given that the search domain is exponential in size?
I would take O(2^(n/2) + n^2) over O(2^n). Am I thinking about this wrong?
Comment #32 August 29th, 2015 at 11:35 am
D-Wave’s news release is like the announcement of a super duper new version of a homeopathic medicine that’s twice as potent as the current homeopathic product.
If you don’t first convince me that the principle behind homeopathy works, then why should I care?!
It gets worse if the price tag on the super duper drug is one million dollars, whereas the boring, common pharma medicine (aka a single-core classical computer) gets the same effect for $500.
Comment #33 August 29th, 2015 at 11:39 am
Job #31: First, just as a point of terminology, we would never call an improvement from 2^n to 2^(n/2) “exponential.” One looks not at the ratio, but at the new running time as a function of the old running time (in this case it’s the square root).
More to the point, if you suffered a quadratic blowup in the reduction and then got a Grover-type speedup, then your new running time would not be 2^(n/2) + n^2. It would be 2^(n^2/2), which not only kills any speedup but is much worse than 2^n. If you’re getting only a Grover-type speedup, then in order to make the whole thing worthwhile, the blowup in the reduction has to be by less than a factor of 2.
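A back-of-the-envelope check of that arithmetic (my own throwaway sketch; the function names are invented for illustration): comparing the exponents of 2 directly makes it obvious how badly a quadratic blowup swamps a square-root speedup.

```python
# All quantities are exponents of 2 (i.e., running time = 2**exponent).
def brute_force_exponent(n):
    return n  # classical brute force on the original instance: 2^n

def grover_after_blowup_exponent(n):
    # The reduction blows the instance up quadratically (n -> n^2), then a
    # Grover-type speedup takes the square root: 2^(n^2) -> 2^(n^2 / 2).
    return n * n // 2

for n in (4, 10, 20):
    print(n, brute_force_exponent(n), grover_after_blowup_exponent(n))
# Already at n = 4 the "sped-up" exponent (8) exceeds the brute-force one (4),
# and the gap widens as n grows.
```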
Comment #34 August 29th, 2015 at 12:24 pm
Thanks for clearing that up.
I wasn’t aware that the reduction step would increase the size of the input that significantly.
Also, I would have accepted an improvement from 2^n to 2^(n/2) as exponential, so I was definitely interpreting this incorrectly.
Comment #35 August 29th, 2015 at 12:54 pm
Paul B. #26, Scott #28, Greg #30: thanks for your answers.
(I was well aware of the Barahona reference.) Also, there are many problems with that quantum adiabatic algorithm for Ramsey numbers work, one being the lack of any estimate for the runtime.
Rahul #32 re homeopathy: wouldn’t that be *half* as potent? 🙂
Best of luck to various with Ayn Rand, English poetry, and getting exercise.
Comment #36 August 29th, 2015 at 2:07 pm
Nick – I know a convincing non-quantum algorithm to calculate the Ramsey number R(2,n), and I can even give you a sharp estimate for the runtime.
Comment #37 August 29th, 2015 at 4:17 pm
Lol. I know it too. But the theory paper was for all R(m,n), so the question is for that case.
Comment #38 August 29th, 2015 at 4:23 pm
More seriously, do you think calculating Ramsey numbers R(m,n) is even in QMA? (First we should make sense of the question; presumably rephrase it as a decision problem: greater or less than some bound.)
Comment #39 August 29th, 2015 at 6:20 pm
Fred #24:
Actually, it is theoretically easier (according to fasting expert Arnold Ehret) to get in shape by exercising as opposed to fasting, because fasting can introduce waste products into the bloodstream faster than exercising can, so you have to be more careful when fasting.
Comment #40 August 29th, 2015 at 6:30 pm
Nick – Bounding R(m,n) is a typical example of a “sparse” subproblem of a more general algorithmic problem that is easier to analyze. To begin with, the max clique problem is NP-complete. Now consider a generalization of the Ramsey problem where you have a partly colored complete graph on r vertices, and you want to know whether there is a coloring of the remaining edges that has neither a red m-clique nor a blue n-clique. This is in Sigma_2P, the analogue of NP where you have a one-round contest between a prover and a disprover. I have not checked, but I would guess off the top of my head that this is Sigma_2P-complete.
However, in the standard Ramsey problem, you are not given the creative freedom of a partially colored complete graph; you are given a blank complete graph. It looks as difficult as the other cases, but you are now robbed of the tools to easily create reductions.
For counting problems rather than two-move problems, the answer is typically #P-complete when the input is creative; for instance the number of matchings or dimer states of a graph is #P-complete. But you often see sparse special cases of these problems in combinatorics and stat mech, so that it still looks difficult but no one knows how to prove it, not even conditionally relative to standard complexity theory conjectures.
Comment #41 August 29th, 2015 at 11:14 pm
@ Greg #30
Glad that we agree on the fine line between “almost planar” (perceived as bad) and “able to embed non-trivial problems” (perceived as good) graphs!
But I would have to respectfully disagree with your (seeming, at least to my not necessarily educated eye!) conflation of space and time complexity, when you assume that an N^2 penalty in *qubit count* for the embedding somehow translates to corresponding losses in runtime, cancelling any Grover-like improvements — it could well be true, but I cannot immediately see why, sorry…
When you analyse run times of a normal gate model QC, you do not count as a penalty that it takes (eventually) thousands of real life qubits to implement an error-corrected abstract one, right?
Just curious!
Comment #42 August 30th, 2015 at 2:31 pm
It isn’t a theorem that it will always work this way. It can’t be, because when you reduce A to B, it is always possible that you will land in an easier sector of problem B, one that runs faster than B does in general. But neither is it a theorem that you will always get this ace in the hole. If all it took were embeddability of graphs, then you could take any spin frustration model on a graph and replace each edge with a long ferromagnetic chain without any annealing penalty.
Of course, yes — but there are two exponential whammies that work in your favor. One is that the error-correction overhead is polylogarithmic, not polynomial, even if it is “eventually” thousands of physical qubits per logical one. The other is that, for some important problems, the quantum acceleration is exponential.
Certainly one thing that I’m never going to do is take a quantum acceleration that’s only predicted to be polynomial in theory, and then talk about it as if it were exponential, as though that’s acceptable salesmanship.
Comment #43 August 30th, 2015 at 2:40 pm
While we are at it, exactly because of these reducibility issues, the so-called “benchmark” in this D-Wave paper, as in the last one, is almost self-simulation. It’s like arguing that a musician is brilliant until and unless Weird Al makes a parody that’s outright better than the original song.
Comment #44 August 31st, 2015 at 2:00 am
First contribution of W. A. Yankovic to Quantum theory duly noted. Not to be confused with “Weird AI”, a popular course in CS departments.
I propose honoring the event with a new unit of measurement for speedup of computations. If a D-Wave machine takes time T_{DW} to do problem X, and a PC costing 1.0E5 times less takes time T_{PC5} for problem X, then we say the speed up is (T_{PC5} / T_{DW}) Weird Al’s.
Following the lead of the controlled fusion community, when a $1E7 DW machine can match a $1E2 PC (rating = 1.0 Weird Al), they will have achieved “Weird Al Breakeven”.
Comment #45 August 31st, 2015 at 10:11 am
The key point is that the success case hasn’t been “problem X” for X an independent variable. Make that “If a D-Wave machine takes time T_DW to do problem DW…”
Comment #46 August 31st, 2015 at 11:03 am
Please allow me to concur with this sentiment. Indeed, I have followed Scott’s lead in posting to Nature’s weblog a “simple, unironic, 1950s-style message of hope and optimism”, titled A Tale of Two Preprints: Readings in 21st century quantum discourse.
Conclusion The quantum research community was blessed in August 2015 with an abundance of high-quality preprints (including D-Wave’s preprint). Now the quantum research community stands most urgently in need of “simple, unironic, 1950s-style” discourse that is grounded in “hope and optimism”, and filled (hopefully non-ironically) with “positivity and good will” that helps us to “invent the future together.”
Preprints like D-Wave’s (and others, per the Nature comment) severely challenge our critical faculties. To quote Pope again (as in comment #22)
Thank you, Scott Aaronson, for reminding Shtetl Optimized readers of the vital necessity — and severe difficulty — of Pope’s “justly trac’d” scientific discourse that is (in Scott’s phrasing) simple, unironic, and grounded in hope and optimism.
Comment #47 August 31st, 2015 at 7:09 pm
So Greg, which of Weird Al’s musical parodies did you think was better than the original?
Comment #48 August 31st, 2015 at 8:34 pm
Combinatorially minded Shtetl Optimized readers will perceive that Bob Dylan’s “Tombstone Blues” is ingeniously bettered by Weird Al’s “Bob.” So yes, it can happen. 🙂
Comment #49 August 31st, 2015 at 9:38 pm
I’m not up on the latest on quantum computing – I deal in classical computing.
That having been said, has anybody managed to do something practical with a QC yet? By which I mean solve a problem where the time to key in the problem/data and get an output is less than the amount of time required to do it with pencil-and-paper?
Comment #50 August 31st, 2015 at 9:43 pm
Garrett #49: Yes, if you’re willing to count simulations of one quantum system by another (and you compare against the running times of the best known classical simulation methods, rather than the best possible, which is not known).
Comment #51 August 31st, 2015 at 11:03 pm
For instance, Word Crimes is decent, while the original “Blurred Lines” is a waste of time in comparison.
Comment #52 August 31st, 2015 at 11:36 pm
#48:
John, once again I am impressed with the breadth of your scholarship. “Bob” is great, and demonstrates the flexibility of Dylan’s style.
Comment #53 September 1st, 2015 at 4:56 am
Garrett (# 49) “Has anybody managed to do something practical with a QC yet?”
Scott (# 50) “Yes, if you’re willing to count simulations of one quantum system by another.”
Lewis Carroll (in Sylvie and Bruno Concluded, 1893) “We now use the country itself, as its own map, and I assure you it does nearly as well.”
The point The empirical capabilities of quantum simulation algorithms that require only classical computational capabilities have been accelerating at a Moore’s Law pace that equals or surpasses even the most optimistic quantum computing capabilities that were set forth in roadmaps of past decades.
Conclusion Complexity theory provides good reason to foresee that efficient quantum simulation by classical computation is infeasible in general. And yet in practice — and especially with ongoing cumulative improvements in algorithmic accuracy and efficiency — the quantum simulation problems that nature poses in biology, chemistry, and condensed-matter systems are proving to be less computationally demanding than many quantum researchers foresaw (say) a decade ago.
For how long can the present Moore’s Law improvement in efficient quantum simulation by classical computation be sustained? And mathematically speaking, how is it that this sustained improvement is even possible?
For young researchers especially, these are crucial questions, to which no one (at present) can give reliable answers. Which is very good news for young researchers!
Comment #54 September 1st, 2015 at 6:30 am
In regard to my “scholarship” — which isn’t scholarship at all, but rather an obsession with quantum implications for 21st century medical practice — Weird Al’s Devo tribute Dare to be Stupid is more to the point, because it shows us that:
• Weird Al is cooler than Devo, and
• Devo is cooler than Mick, yet paradoxically
• Mick (with David) is cooler than Weird Al
Conclusion Al and Devo and Mick remind us that perhaps the Kalai/Harrow optimist/skeptic quantum debate won’t be settled very soon, or have a black-and-white outcome.
Comment #55 September 1st, 2015 at 11:57 pm
@John #54: That would be an instance of Condorcet’s paradox. 🙂
Comment #56 September 2nd, 2015 at 10:40 am
John #53
Do you have any advice or good news for old researchers?
Comment #57 September 2nd, 2015 at 12:26 pm
a good article on this
http://arstechnica.com/science/2015/09/d-wave-unveils-new-quantum-computing-benchmark-and-its-fast/
Comment #58 September 2nd, 2015 at 3:54 pm
Fred 56:
Here is some news I (and many others) would love to see: someone with deep pockets (Google, Apple, Oracle, MS, FB, …) who would appreciate some good cred in the tech community becoming a champion for LaTeX. With a well-funded team, there might be a standardized, cleaned-up, enhanced LaTeX 3 before the world ends.
Or, at least, a NetBeans plugin for LaTeX. I am about 10X more productive writing Java than LaTeX due to NetBeans support (on-the-fly syntax checking, large-project navigation, easy refactoring, …). On-the-fly syntax checking and instant display would be a huge help on long technical documents.
Comment #59 September 2nd, 2015 at 5:45 pm
@Scott #50
> simulations of one quantum system by another
But for this purpose we already have powerful quantum computers, and they are used routinely to calculate, e.g., the spectra of atoms, molecules, etc.
They typically consist of a quantum system surrounded by (semi)classical measurement devices.
They have been available at many universities for many years and are called labs.
Comment #60 September 2nd, 2015 at 7:06 pm
wolfgang #59: So, whether you call a physical system a “computer” is partly a question of knowledge and intent. I’d call it a computer if
(a) you completely understand the underlying physics of this system,
(b) you don’t much care about its behavior in and of itself, but
(c) you’re using it to learn about the behavior of some other system, which you do care about, and which is too hard to measure directly, and which is “isomorphic” to this system in some relevant respect.
We can call the system a “useful quantum computer” if we also have
(d) there’s no known efficient classical method to calculate the same spectra (or whatever else it is).
My point was that nowadays, systems satisfying (a)-(d) certainly exist (and the fact of their existence considerably complicates the question of “how long to a quantum computer?”). Of course, what we have today is still very far from universal, programmable QCs.
Comment #61 September 2nd, 2015 at 9:19 pm
@Scott #60
In general a QC would consist of N particles + (semi)classical devices of all kinds, and is supposed to simulate a system which consists of n particles.
Garrett in #49 used the criterion that the QC should be more efficient than a classical computer or pencil-and-paper.
I wanted to make the point that the QC is only interesting if it is also more efficient than a conventional lab setup of the n particles + measurement devices.
And it is not clear to me that this is achievable in general, since N > n.
Comment #62 September 2nd, 2015 at 10:06 pm
Raoul,
LaTeX is to physicists what stethoscopes are to physicians. Arguably useful for some very specific applications. Ridiculously useless to most of those who proudly use it.
Comment #63 September 3rd, 2015 at 12:00 am
D-Wave update from Ars:
http://arstechnica.com/science/2015/09/d-wave-unveils-new-quantum-computing-benchmark-and-its-fast/
This is a good illustration of the improved sophistication of reporting on the issue. Of course, ArsTechnica is probably the best tech news site going.
Comment #64 September 3rd, 2015 at 12:23 am
“nowadays, systems satisfying (a)-(d) certainly exist”
Scott, what are the most convincing examples of such systems?
Comment #65 September 3rd, 2015 at 9:22 am
wolfgang #59
I guess in the broad sense of an “analog” quantum machine, that’s true.
But just like “analog” machines aren’t digital computers (i.e. based on bits and boolean logic), to have a true QC, the main ingredient is to realize qubits as an abstraction, no?
Comment #66 September 3rd, 2015 at 3:48 pm
This remark provides illumination to the concluding chapters of a great many modern textbooks on quantum dynamics.
Harrow-Kalai/optimist-skeptic compatibility If we suppose that the canonical postulates of quantum electrodynamics (QED) accurately describe “the underlying physics” of our computing devices — including crucially the superposition principle holding on a Hilbert space of exponential dimension, and supposing too the computational irrelevance of short-range Standard Model forces and long-range gravitational forces — still it can logically be the case (and even empirically is the case) that Gil Kalai’s postulates hold for QED-world computation devices.
Because QED is special … The lossy flow of energy and entropy toward spatial infinity (which so vexes quantum cryptographers and BosonSamplers) suggests alternative dynamical descriptions of QED dynamics that are formally equivalent to Hilbert-space QED dynamics, but admit PTIME simulation of (inevitably lossy) large-scale computation processes.
Exemplary visions The concluding chapter of the new Zeng-Chen-Zhou-Wen textbook Quantum Information Meets Quantum Matter, as featured on Caltech’s weblog Quantum Frontiers (available in draft at arXiv:1508.02595), is one of many recent quantum textbooks whose concluding chapter (titled “Outlook: a unification of information and matter — from qubits”) admits a Kalai-favorable reading. Specifically, Quantum Information Meets Quantum Matter grounds Kalai-compatible quantum physics emergently in tensor-network descriptions of condensed-matter systems.
Common reflections What these textbooks have in common is reflections like the following:
Conclusion 21st century researchers will continue to view quantum dynamics through the traditional lens of linear algebra … and increasingly they will view quantum dynamics through a burgeoning set of ever-more-universal nonlinear noncoordinate algebraic lenses too. Viewed through these new naturally nonlinear lenses, Kalai’s quantum postulates can be more readily appreciated — mathematically, scientifically, technologically, and pedagogically (and even medically) — as beautiful, plausible, and performative.
And this is how, in a 21st century QED-world, quantum optimists and quantum skeptics can both be right.
Comment #67 September 4th, 2015 at 9:22 am
wolfgang #61: The points you’re missing are that
(1) the system you’re trying to study may be experimentally super-hard to access (quark-gluon plasmas, the early universe, etc.), whereas the simulator is easy to access.
(2) even if you can access the system you’re trying to study, you may not know its underlying Hamiltonian, so studying the system itself doesn’t tell you whether or not the system matches your theoretical model of the system, which is what you wanted to know.
Comment #68 September 4th, 2015 at 11:32 am
Scott #60, could you cite some examples?
Only a few experiments I know of claim d), and these are all very much of the “simulating itself” analog variety with fairly limited tunability of parameters. Impressive as these experiments are, most people won’t call this a computer since the device can answer only a single question about its own behavior as a function of a limited number of parameters.
Even the claims of outperforming classical devices (which are rare!) should be taken with a grain of salt—just like with D-Wave, if we have a dedicated device to answer a particular question, we should also (at least!) compare to specialized algorithms designed to answer the same question. Generally people haven’t put time into developing these algorithms because (again, like D-Wave!) it’s not clear that the questions being answered actually satisfy criterion c), that the question being answered is of interest.
As you mention, this is a slippery issue. This is especially true, since it depends on whether you think a particular question a simulator can answer is interesting, and whether a simulator is rich enough to call a computer rather than an experiment (i.e., whether the isomorphism you mention is sufficiently nontrivial). But at this point, very few people would agree that there exist devices achieving a-d. That’s why there’s so much excitement about the possibility of achieving “quantum supremacy”—we’re not there yet.
Comment #69 September 5th, 2015 at 1:22 am
Those system traits being (as I understand them):
[a] we know the physics of the computation process,
[b] the result is abstracted from the process,
[c] the result has practical implications, and
[d] the result is hard for classical Turing machines.
Scott’s claim can be read in either of two ways: as asserting [a-d] severally, versus asserting [a-d] jointly. Considered severally, each assertion is true, yet considered jointly, there are ample grounds for uncertainty in regard to whether all four can be demonstrated simultaneously, in any single computational process.
Comment #70 September 5th, 2015 at 4:25 am
John #69: No, I never asserted [c] or [d]. I didn’t say “practical” implications; I just said that one system is being used to simulate a different system that you care about more (possibly for pure scientific reasons). And I didn’t say the result is hard to simulate on a classical computer in an absolute sense, only that it outperforms the best classical methods people were able to think of so far. (Of course, here one has the ‘D-Wave problem,’ that this could simply depend on how hard people worked on it. But, crucially, at least we’re now in a realm where one theoretically expects exponential speedups, rather than at best polynomial speedups.)
Obviously, the issue is to demonstrate all of (a)-(d) in a single process, not to demonstrate them “severally.” But my understanding, from Matthias Troyer and other physicists I talked to, was that current quantum simulations do indeed achieve this. I’m going to try to find references.
Comment #71 September 5th, 2015 at 7:36 am
Graeme #68, Scott #60: Scott asked me to provide some examples since I might have mentioned it to him at some point. These are all examples of special purpose analog devices, which – just by the definition of their nature – somehow “simulate themselves” and are limited to certain applications. The examples I will give are quantum simulators using ultracold atomic gases, which are the most advanced quantum simulators at the moment because they are easy to scale.
As long as classical computers can still simulate the same system, one gets to the tricky question of how to compare (time, cost, power, …), which I don’t want to get into, since if one has to ask such questions one has not reached clear quantum superiority yet. To mention some papers: for bosonic systems, in http://arxiv.org/abs/0905.4882 the classical simulation seems easier, and there are also mean-field theories that are better than the accuracy of the quantum simulator. For fermions, in http://arxiv.org/abs/1110.3747 one might argue that the quantum simulation gives smaller error bars, but then again I can think of other classical algorithms which – using a supercomputer – can get the same or better results.
The most convincing work so far might be http://arxiv.org/abs/1101.2659, which looks at the dynamics of a correlated Bose gas. There the quantum simulation agrees with state of the art classical methods as long as they work, but the quantum simulation reaches longer times. The reason is a growing entanglement entropy which at some point causes the classical simulations to become unreliable. This is however one of the first demonstrations of a quantum simulator providing results that we don’t have a classical algorithm for. How useful the results are is a matter of debate (some criticize that the Bose gas in this study equilibrates quickly and the equilibrium state can again be simulated efficiently), but so is the question of whether achieving the target energy in D-Wave’s new time to target metric is useful.
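The role of growing entanglement entropy here can be made concrete with a small numpy sketch (illustrative only, not from the thread): the classical cost of matrix-product-state methods like those used in that paper scales with the Schmidt rank across a cut, which grows exponentially with the entanglement entropy. The sketch computes the bipartite von Neumann entropy of a 10-qubit state via an SVD; a product state needs rank 1, while a random (highly entangled) state needs nearly the full rank.

```python
import numpy as np

def entanglement_entropy(psi, n_left, n_right):
    """Von Neumann entropy (in bits) across a left/right cut of a pure state."""
    # Reshape the amplitude vector into a (2^n_left x 2^n_right) matrix and
    # take its singular values; their squares are the Schmidt weights.
    m = psi.reshape(2**n_left, 2**n_right)
    s = np.linalg.svd(m, compute_uv=False)
    p = s**2
    p = p[p > 1e-12]  # drop numerically-zero weights before taking the log
    return float(-np.sum(p * np.log2(p)))

n = 5  # 2n = 10 qubits in total
product = np.zeros(2**(2 * n)); product[0] = 1.0  # |00...0>, unentangled

rng = np.random.default_rng(0)
psi = rng.normal(size=2**(2 * n)) + 1j * rng.normal(size=2**(2 * n))
psi /= np.linalg.norm(psi)  # a random state is near-maximally entangled

print(entanglement_entropy(product, n, n))  # ~ 0 (rank-1 Schmidt decomposition)
print(entanglement_entropy(psi, n, n))      # close to the maximum of n = 5 bits
```

A tensor-network simulation must keep a bond dimension of at least \(2^S\) to represent a state with cut entropy \(S\), which is why linear-in-time entropy growth translates into exponentially growing classical cost.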
Comment #72 September 5th, 2015 at 10:09 am
In an outstandingly lucid post (#71), Matthias Troyer reminds us
Crafting STEAM-compatible definitions of Matthias Troyer’s terms “growing”, entanglement, entropy, “at some point”, classical, simulations, and unreliable — a task that (to many researchers) seemed straightforward ten or twenty years ago — nowadays is commonly appreciated as a challenge that demands Bourbakian levels of mathematical and scientific talent and commitment.
Conclusion Quantum information theory has “gathered its Bourbakian maniacs”; nowadays their “shouting” is vigorous, and it can reasonably be foreseen that “everything will calm down in the end.”
We can hope (with good reason and ample historical precedent) that our appreciation of quantum dynamics will be deepened and widened by this arduous/ardorous collectively Bourbakian discourse.
Comment #73 September 5th, 2015 at 10:41 am
@Scott #67
Yes, but then we are talking about a scalable, universal QC or something very similar; but I thought you had something else in mind with your comment #50, because obviously we are far away from simulating the early universe with a QC.
Comment #74 September 5th, 2015 at 11:11 am
wolfgang #73: Given how uniform the early universe was, it’s far from obvious to me that you couldn’t usefully simulate some aspect of it by some other quantum system, without needing a universal QC.
The last example Matthias #71 provided—the correlated Bose gas—seems to pretty clearly achieve my criteria (a), (b), and (d), but it’s open to interpretation and argument whether it achieves (c) (that is, whether we should take the quantum system to be telling us about some “other” quantum system, or only about itself).
Comment #75 September 5th, 2015 at 11:27 am
@Scott #74
I once heard a talk by Frank Tipler (the Omega point Tipler), which basically reduced the early universe to the quantum mechanics of a harmonic oscillator (physicists like to reduce everything to the harmonic oscillator).
So I am sure one can simulate *that* – but I am not so sure if we would get anything beyond what we put in.
Comment #76 September 5th, 2015 at 12:21 pm
Scott #74: I would argue that c) is also satisfied. This is not just a gas of bosons doing what they want to do, but a gas of atoms that are cooled into a regime where all that matters is their mass and scattering length, so that – in that limit – they behave like bosonic point particles and the rest of their structure no longer matters. Then one adds an artificial optical lattice to mimic a lattice model, and tunes the parameters so that the effective low-energy model of the cold atoms in an optical lattice can be mapped to that of the lattice model one wants to “solve”. For me this qualifies as an optical-lattice quantum emulator. This is not a “normal” phase of rubidium but a gas in a (continuum) box that got engineered into a corner where it now mimics a quantum lattice model. If you don’t let this count, then I don’t know if you would let any analog quantum emulation count.
Comment #77 September 5th, 2015 at 1:13 pm
Matthias #76: Thanks so much for the explanation!
Comment #78 September 5th, 2015 at 1:28 pm
John #72: take a look at that paper; the issue is much simpler. There is a certain time where the classical data stops, simply because one could no longer calculate it. It’s as simple as that. The words you write about are simply musings as to why it stops there, without going into details.
Comment #79 September 5th, 2015 at 3:10 pm
Matthias Troyer #78, it was pleasing to see Ulrich Schollwöck listed among the authors of (the outstanding article that you cited in #71) “Probing the relaxation towards equilibrium in an isolated strongly correlated 1D Bose gas” (arXiv:1101.2659v1 [cond-mat.quant-gas], Nature Physics, 2012), because Prof. Schollwöck is also the sole author of (the much-cited review) “The density-matrix renormalization group (DMRG)” (arXiv:cond-mat/0409292v1, Reviews of Modern Physics, 2005).
Together these two articles illustrate, specifically and concretely, the point that my “Bourbakian” comment (#72) was making generally:
Slow-downs both experimental and computational Thermodynamical systems nearing critical points exhibit both critical slowing and divergent correlation-lengths, which impede both experimental and numerical quantum simulations (especially in a QED-world that is computationally afflicted by infrared divergences and light-speed transport), to such an extent that no logically rigorous case — in fact not even a convincing heuristic case (as it seems to me) — can be made that either method has an inherent exponential advantage over the other.
When performative is transformative It is entirely plausible that experimental quantum simulations may exhibit polynomial speedups relative to numerical quantum simulations, and of course “mere” polynomial speedups can in practice be technologically transformational (fast algorithms for discrete Fourier transforms are an example). Needless to say, any such polynomial advantage foreseen for ion-trap quantum simulations can with comparable plausibility be foreseen for Josephson-junction quantum simulations (like D-Wave’s).
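The FFT example above can be made concrete with a short illustrative sketch (not from the thread): the naive discrete Fourier transform costs O(n²) operations while the FFT costs O(n log n), and at practical sizes that “merely” polynomial gap is enormous.

```python
import numpy as np

# Operation-count ratio of the naive DFT (O(n^2)) versus the FFT (O(n log n)).
# At n = 2**20 the ratio n / log2(n) is about 52,000 -- the difference between
# seconds and days of computation, from a "mere" polynomial speedup.
n = 2**20
print(n**2 / (n * np.log2(n)))  # = n / log2(n) = 52428.8

# Sanity check on a small input: a directly-summed DFT agrees with numpy's FFT.
x = np.random.default_rng(0).normal(size=8)
dft = np.array([sum(x[k] * np.exp(-2j * np.pi * j * k / 8) for k in range(8))
                for j in range(8)])
print(np.allclose(dft, np.fft.fft(x)))  # True
```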
Simulation as a Knuthian “art” Bourbaki’s views also are relevant to the incredible mathematical ingenuity — which is amply documented in Schollwöck’s RMP survey — that practitioners of numerical simulation methods like DMRG, MPS, TNS (etc.) are exhibiting:
The real optimists Are we approaching an epoch in which the “craftsmanlike ingenuity” associated to numerical quantum simulation methods can be distilled to a “coherent theory, logically arranged, easily set forth and easily used”? People who believe in the feasibility of this quantum Bourbakian synthesis are (as it seems to me) the real optimists in the optimist-vs-skeptic debate!
Three conclusions (1) Increasing respect for ion-trap simulation logically correlates with increasing respect for D-Wave simulation and with increasing respect for numerical simulation; (2) in all three disciplines, “mere” polynomial performative improvements are associated to transformational practical consequences; and (3) Bourbakian readings of the STEAM literature assist the appreciation of conclusions (1) and (2).
Comment #80 September 7th, 2015 at 2:31 am
Regarding Matthias very interesting comment (#72).
For the correlated 1D Bose gas, Matthias mentioned that the classical simulation had become unreliable at a certain time (due to growing entanglement entropy). This may look like a manifestation of quantum supremacy (i.e., a quantum dynamics too hard for classical simulation), but an alternative explanation would be that the dynamics of the experimental system is actually described by a certain perturbation (a noisy version, if you wish) of the dynamics described by the authors of the paper. If so, once the accumulated inaccuracy (or “noise”) reaches a certain level, the original simulation becomes unreliable. However, this says nothing about the hardness of the correct evolution describing the experiment, which may well be computationally simpler than the one used for the simulation.
Comment #81 September 9th, 2015 at 8:16 pm
Hype-ocalypse Now?
Appearing today …
… eerily similar tag-lines?
Natural questions In the event that D-Wave’s noisy quantum machines can factor big keys in reasonable times — perhaps via proprietary post-Shor algorithms? — then can post-Hilbert classical simulations of “D-Waveian” dynamics factor these keys too? With only polynomial overhead?
Conclusion Hmmm … perhaps it is prudent to change-over … even at a cost that (as it seems to me) will greatly exceed the cost of all of the QIT research that has ever been funded.
Comment #82 September 10th, 2015 at 10:33 am
Gil #80
Is that fundamentally different from, say, a classical numerical simulation of any chaotic system (three bodies, double pendulum)?
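Fred’s analogy can be illustrated with a small sketch (purely illustrative): in a chaotic system, two trajectories starting a hair’s breadth apart diverge exponentially, so any fixed-precision classical simulation eventually stops tracking the “true” trajectory, whatever the underlying physics.

```python
# Sensitivity to initial conditions in the chaotic logistic map x -> r*x*(1-x).
# At r = 4 nearby trajectories separate by roughly a factor of 2 per step,
# so a 1e-12 initial gap reaches order-1 separation within ~40 iterations.

def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-12)  # perturb the initial condition slightly
gaps = [abs(x - y) for x, y in zip(a, b)]
print(gaps[1])     # still ~ 1e-12: the trajectories start essentially identical
print(max(gaps))   # order 1: they have fully decorrelated by the end
```

Whether the Bose-gas case is “fundamentally different” from this is exactly the question Gil raises: exponential divergence from small perturbations is a classical phenomenon and, by itself, is no evidence of quantum hardness.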
Comment #83 September 10th, 2015 at 6:36 pm
Fred #82, one Kalai-compatible answer to your question begins with a fine article by Stephen Jordan and Keith Lee and John Preskill, titled “Quantum algorithms for quantum field theories” (Science, 2012).
Collapsing the quantum hype-ocalypse The Jordan/Lee/Preskill article is accompanied by an admirably circumspect (as it seems to me) Science Perspective article by Philipp Hauke and Luca Tagliacozzo and Maciej Lewenstein titled “Speeding up quantum field theories”. This article concludes:
The referenced “tensor network algorithms” are reviewed by (e.g.) Schollwöck’s “The density-matrix renormalization group” (2005) and “The density-matrix renormalization group in the age of matrix product states” (2011).
Physical origins of Kalai-compatibility One Kalai-compatible consideration is that real-world accelerator experiments invariably are accompanied by electromagnetic radiation, and these electromagnetic processes are invariably afflicted (in QED field theory) by zero-mass infrared divergences that require mathematical treatment beyond the unitary Trotter-decompositions that the finite-mass Jordan/Lee/Preskill analysis assumes … and it is precisely the lossy dynamics associated to these divergences that Schollwöck-style classical simulations efficiently exploit, both directly and indirectly (the latter by tuning their non-linear non-Hilbert algebraic state-space geometry).
Conclusion The zero-mass infrared entanglements that are inescapably associated to all QED-world scattering experiments substantially alter both the questions that can be experimentally answered and the mathematical techniques appropriate to simulating those experiments … and these alterations are naturally compatible — qualitatively at the very least — with Gil Kalai’s quantum postulates.
The main lesson For quantum optimists and quantum skeptics alike, there’s abundant motive and opportunity to do good research.
Comment #84 September 12th, 2015 at 9:40 am
As this “D-Wave Open Thread” winds down, here are two similarly-titled survey articles that are exemplary of quantum informatic discourse that is “positive, hopeful, generous, and broad-minded”.
• “What is a quantum simulator?” (2014) by Tomi Johnson, Stephen Clark, and Dieter Jaksch, and
• “Can one trust quantum simulators?” (2012) by Philipp Hauke, Fernando Cucchietti, Luca Tagliacozzo, Ivan Deutsch and Maciej Lewenstein.
The survey “What is a quantum simulator?” (with 127 references) is relatively shorter and less technical, whereas the survey “Can one trust quantum simulators?” (with 211 references) is relatively longer and more technical; the latter is a sequel to “Speeding up quantum field theories” (of #83).
Conclusion These articles survey a banquet of open quantum research questions that skeptics and optimists alike will find to be deliciously palatable.
Fun factoid Based upon publication rate of quantum physics articles prior to WWII, it’s plausible that the 337 articles (including duplicates) in the bibliographies of these two “simulation survey” articles substantially exceed in number all of the quantum physics articles that John von Neumann read in his lifetime.
Comment #85 September 14th, 2015 at 7:01 pm
Gil Kalai says, about simulating a 1D Bose gas:
In which case, the quantum simulation is still superior, because it’s simulating the original system using the correct “noise”, whilst we have no idea what the correct “noise” is or how to simulate it classically.
It seems to me that now you actually have a way to verify your “noise theory” of the impossibility of quantum computation: show that it gives the correct results for this quantum simulation.
Comment #86 September 15th, 2015 at 10:47 am
Shtetl Optimized readers who appreciate (as I do) gauge-field dynamics as an avatar of Kalai-postulate dynamics will enjoy the considerations raised by Basham, Brown, Ellis, and Love’s “Energy correlations in electron-positron annihilation in quantum chromodynamics: Asymptotically free perturbation theory” (Physical Review D, 1979). This 36-year-old paper admirably sets the stage for numerous still-open problems in gauge-field dynamics, some of which are set forth in the Clay Institute’s Millennium Prize problem “Quantum Yang-Mills Theory” (the Clay Institute sometimes calls this problem “Yang–Mills and Mass Gap”).
After many decades of wrestling with the difficult mathematics and difficult physics of quantum electrodynamics (QED) and quantum chromodynamics (QCD), it is very far from evident to anyone (including me) that their low-energy limit is entirely understood, either mathematically or physically.
The low-energy limit of QCD is problematic because of the notoriously tough (Millennium Prize) question, how are quarks and gluons dynamically confined? The low-energy limit of QED is problematic because of a comparably tough question, how can quantum information be spatially localized (when we want it to be localized) and spatially delocalized (when we want it to be delocalized)?
Historical confection Willard Lamb’s article “Anti-Photon” (Applied Physics B, 1995) hilariously begins
The common-sense need for Lamb’s (jocular) “photon license” has been amply demonstrated by five decades of difficult-yet-steady progress in understanding the subtle quantum dynamics of low-energy “gluons” and the even subtler (as it seems to many, including me) quantum informatics of low-energy “photons” … both near black-hole horizons and in flat-space laboratories.
Conclusion The Kalai postulates can be appreciated as mathematical postulates regarding the quantum informatic transport properties of low-energy, unconfined, massless gauge fields. The low-energy quantum informatic properties of quantum electrodynamics (QED) are comparably mysterious, both physically and mathematically, to the low-energy confinement properties of quantum chromodynamics (QCD).
Comment #87 September 16th, 2015 at 11:57 am
Hi Peter, thanks, as always, for the comment. I was away for a few (holiday) days.
“It seems to me that now you actually have a way to verify your “noise theory” of the impossibility of quantum computation: show that it gives the correct results for this quantum simulation.”
Absolutely! There are two (possibly related) things that we can check. With Guy Kindler we considered a very simple form of noise for bosons that amounts to exponential decay of high Hermite–Fourier coefficients. Perhaps we can try this here. We can make this decay constant in time and adjust the rate to fit the witnessed breakdown of the simulation, and then check how good it is.
The second thing we can try is to take the Bose-Hubbard model written in the paper and apply some standard noise to it. (or even try my smoothed variation; here, since there is no fault-tolerance in place it will not make a big difference.) Again I would guess that the net effect will be that of suppressing some higher order terms and this will make the evolution computationally simpler.
The term “the correct results” can be a tricky issue here, but we can hope for
a) a better result compared to the simulations in the paper, or even
b) a result as good as the experimental inaccuracy witnessed in the experiment itself. (I am sure there is a technical term for it, but I don’t remember it.)
Anyway, it will be fun to think about these suggestions together, and in fact, I also wrote Matthias about it a couple weeks ago.
” In which case, the quantum simulation is still superior, because it’s simulating the original system using the correct “noise”, whilst we have no idea what the correct “noise” is or how to simulate it classically.”
Right, it is hard to argue that the experiment itself simulates its own behavior better. (It will be harder for me on my computer to simulate an unknown process that you run on your computer.) But this has no bearing (I think) on the issue of quantum computational supremacy, and the issue here does not seem to be the “quantumness” of the noise. In some cases, the issue might be the large number of extra parameters needed to describe the noise (which is somewhat related to Fred (#82)).
In any case, I think we may well get some extra mileage by adding noise and replacing the Bose-Hubbard dynamics with a stochastic version (which may very well also be computationally simpler).
Of course, there might be some specific reasons I am not aware of to think that the paper we discuss manifests some “quantum supremacy”. I will certainly be interested to hear what they are.
(BTW, a group of us here at HUJI (Nadav Katz, Ronnie Kozlov, Dorit Aharonov, and me) are meeting once in a while to discuss the best potential evidence for “quantum supremacy” in available experimental quantum physics. We do not insist that it is about one system emulating another one.
We chatted about a few potential candidates (rubidium was mentioned 🙂 but not the specific experiment that Matthias described). It looks to us that certain QED computations for the hydrogen atom are still most promising. There too, my hope would be that certain noisy perturbations of the system will allow better (and more efficient) simulation. In some cases, certain existing effective computations are doing some similar things.)
Comment #88 September 18th, 2015 at 8:43 am
In regard to “quantum supremacy” and high-precision simulations (see Gil Kalai’s #87), NIST’s James Sims and Stanley Hagstrom are among many authors who routinely publish high-precision classical simulations of multi-electron systems (see e.g. “Hylleraas-configuration-interaction study of the \(2{}^{2}S\) ground state of neutral lithium and the first five excited \({}^{2}S\) states” (2009)).
Despite these quantum systems’ strong dynamical nonlinearity and formally infinite Hilbert-space dimension, relative precisions of order \(10^{-9}\) are routinely achieved, and there are no fundamental obstructions (AFAICT) to achieving much smaller numerical errors. Perhaps hundred-digit simulation precision is computationally feasible? Nobody knows.
It will be a while (obviously) before quantum simulations can hope to demonstrate comparable precision.
What’s going on? Why are the limits to the precision and efficiency of classical quantum simulation so un-obvious?
The concluding three sections of the recent (fourth) edition of Gene Golub and Charles van Loan’s Matrix Computations (2013) survey some of the computational advances that ground high-precision quantum simulations.
Sixty-two references follow, chief among them Tamara Kolda and Brett Bader’s SIAM Review “Tensor Decompositions and Applications” (2009).
Kolda and Bader’s SIAM Review already has accumulated more than two thousand citations; this is indicative of the burgeoning vigor of tensor-space research.
Entangled questions
• How is it that (Kalai-compatible?) tensor-space dynamical systems allow efficient quantum simulation in principle?
• Why are high-precision tensor-space quantum simulations proving to be so (relatively) easy to demonstrate in practice?
• How is it that Hilbert-space dynamical systems allow scalable quantum computation in principle?
• Why are scalable quantum computations proving to be so (relatively) difficult to demonstrate in practice?
Conclusion It’s plausible that answering any one of these four entangled questions in-depth will require that all four be answered in-depth:
Surveys like Kolda and Bader’s and textbooks like Golub and van Loan’s are a very good start … but they are only a start.
Comment #89 September 19th, 2015 at 2:30 pm
Dear Peter,
A few more comments Regarding the 1-D Bose gas experiment that Matthias mentioned, and how my “noise theory” might be related to the difficulty of classically simulating the experiment.
A) As for the 1-D strongly coupled Bose gas system that Matthias mentioned, we have to choose between two explanations for the difficulty (witnessed at some time) in simulating the experiment via the Bose-Hubbard evolution:
a) It expresses some sort of quantum computational supremacy of the experimental process!
b) It expresses errors which accumulate in time.
I do not see good reasons to prefer a) over b), as Matthias, Scott, and you seem to do. (The second possibility requires neither “my noise” nor a skeptical stance on QC in general.) Maybe I am missing something here.
B) Now, to my theory. My theory comes in two levels, or scales if you wish. One scale is represented by my work with Guy Kindler, where we identify some behavior that characterizes a very simple noise model for non-interacting bosons and that may well extend to other noise models for bosons and, more generally, “small quantum computers”. It will be a good idea to see, as you suggested, if this gives a better classical (in fact, low-complexity) simulation of the 1-D Bose gas experiment.
This behavior that Guy and I describe for noisy quantum systems at the small scale gives (I think) an explanation for why you will not be able to reach the starting point for quantum error-correction, or any manifestation of quantum supremacy, at a larger scale.
C) My other (and older) noise theory takes for granted that noise accumulates (or that fault-tolerance is absent or impossible), and tries to model noisy quantum circuits (and more general systems) based on this assumption. (B gives a possible picture supporting the assumption.)
While (under this assumption) universal quantum circuits are beyond reach and cannot be achieved even for a small number of qubits, quantum circuits remain a powerful framework and model for quantum evolutions. Here (when the device leading to the evolution is not specified), we cannot expect an explicit noise model of the form “d rho / dt = ” but only implicit modeling in terms of conditions that the entire noisy process must satisfy. The two of us discussed my smoothing gymnastics for this purpose at length in this thread. (And Aram Harrow and I discussed at length other conjectural implicit conditions on the noise.) Again, this part of my work is not so relevant to the 1-D Bose gas, since the system does not have any fault-tolerance component to start with. You may explain the gap between classical simulation and the actual experimental behavior using the very standard noise models for quantum circuits from the threshold theorem and earlier work (also your FT work, and also works by the first generation of QC skeptics).
Comment #90 September 25th, 2015 at 3:55 pm
From the 2015 publications of the Martinis Group, it is not obvious what new architectural approaches, apart from using their superconducting qubits, are going to be used. I don’t expect all of them to be published, though.
Comment #91 October 5th, 2015 at 10:56 am
Scott is quoted here
http://spectrum.ieee.org/tech-talk/computing/hardware/how-much-power-will-quantum-computing-need
Comment #92 October 30th, 2015 at 5:12 pm
Hi Scott, sorry for going back to this old post but I have a question.
Maybe somebody else has proposed this already, but, along the lines of D-Wave: what about making a dedicated chip which is really a controllable Ising lattice? Basically this would be a classical version of D-Wave’s quantum (??) one. Such a system could even be built on a board which could then be inserted into a normal PC.
So basically you realize the annealing on a real physical system instead of in a program, and it should be much faster. It’s like doing your pegs-and-soapy-water experiment in reality instead of running a variational calculation program.
Does it make sense, or does the complexity of translating the problem into an Ising minimization kill this idea too?
Best, K.
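For reference, the classical annealing process K. proposes to do in hardware is what simulated annealing does in software. Here is a minimal sketch of simulated annealing for an Ising model (a generic illustration only; the function name, cooling schedule, and toy couplings are all made up for this example, and this is an arbitrary graph, not D-Wave’s Chimera topology):

```python
import math
import random

def anneal_ising(J, h, n, sweeps=1000, t_start=3.0, t_end=0.05, seed=0):
    """Simulated annealing for the Ising energy
    E(s) = sum_{(i,j)} J_ij s_i s_j + sum_i h_i s_i, with s_i in {-1, +1}.
    J: dict mapping index pairs (i, j) to couplings; h: list of local fields."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]

    # Precompute each spin's neighbors so single-flip energy changes are cheap.
    nbrs = [[] for _ in range(n)]
    for (i, j), Jij in J.items():
        nbrs[i].append((j, Jij))
        nbrs[j].append((i, Jij))

    for sweep in range(sweeps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (sweep / max(1, sweeps - 1))
        for i in range(n):
            # Energy change from flipping spin i:
            # dE = -2 * s_i * (h_i + sum_j J_ij * s_j)
            dE = -2 * s[i] * (h[i] + sum(Jij * s[j] for j, Jij in nbrs[i]))
            # Metropolis rule: always accept downhill moves; accept uphill
            # moves with probability exp(-dE / t).
            if dE <= 0 or rng.random() < math.exp(-dE / t):
                s[i] = -s[i]
    return s

# Toy example: a 4-spin ferromagnetic chain (negative J favors alignment).
J = {(0, 1): -1.0, (1, 2): -1.0, (2, 3): -1.0}
h = [0.0] * 4
spins = anneal_ising(J, h, 4)
print(spins)  # all four spins end up aligned: [1, 1, 1, 1] or [-1, -1, -1, -1]
```

The point of the comparison is that a dedicated analog board would perform something like the inner loop physically, in parallel across all spins, rather than one flip at a time in software; the encoding cost K. asks about is the same in either case.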
Comment #93 December 9th, 2015 at 12:22 am
Ok, is 10^8 notable?
Unprecedented perhaps?
I imagine you are pondering today’s announcement by Google Research: http://googleresearch.blogspot.ca/2015/12/when-can-quantum-annealing-win.html
Comment #94 December 9th, 2015 at 9:30 am
[…] stirred Scott Aaronson, an associate professor at the Massachusetts Institute of Technology, to write that while researchers don’t entirely know the characteristics of the machine, “D-Wave and the […]