D-Wave: Truth finally starts to emerge

Wrap-Up (June 5): This will be my final update on this post (really!!), since the discussion seems to have reached a point where not much progress is being made, and since I’d like to oblige the commenters who’ve asked me to change the subject.  Let me try to summarize the main point I’ve been trying to get across this whole time.  I’ll call the point (*).

(*) D-Wave founder Geordie Rose claims that D-Wave has now accomplished its goal of building a quantum computer that, in his words, is “better at something than any other option available.”  This claim has been widely and uncritically repeated in the press, so that much of the nerd world now accepts it as fact.  However, the claim is not supported by the evidence currently available.  It appears that, while the D-Wave machine does outperform certain off-the-shelf solvers, simulated annealing codes have been written that outperform the D-Wave machine on its own native problem when run on a standard laptop.  More research is needed to clarify the issue, but in the meantime, it seems worth knowing that this is where things currently stand.

In the comments, many people tried repeatedly to change the subject from (*) to various subsidiary questions.  For example: isn’t it possible that D-Wave’s current device will be found to provide a speedup on some other distribution of instances, besides the one that was tested?  Even if not, isn’t it possible that D-Wave will achieve a genuine speedup with some future generation of machines?  Did it make business sense for Google to buy a D-Wave machine?  What were Google’s likely reasons?  What’s D-Wave’s current value as a company?  Should Cathy McGeoch have acted differently, in the type of comparison she agreed to do, or in how she communicated about its results?  Should I have acted differently, in my interaction with McGeoch?

And, I’m afraid to say, I jumped into the discussion of all of those questions—because, let’s face it, there are very few subjects about which I don’t have an opinion, or at least a list of qualified observations to make.  In retrospect, I now think that was a mistake.  It would have been better to sidestep all the other questions—not one of which I really know the answer to, and each of which admits multiple valid perspectives—and just focus relentlessly on the truth of assertion (*).

Here’s an analogy: imagine that a biotech startup claimed that, by using an expensive and controversial new gene therapy, it could cure patients at a higher rate than with the best available conventional drugs—basing its claim on a single clinical trial.  Imagine that this claim was widely repeated in the press as an established fact.  Now imagine that closer examination of the clinical trial revealed that it showed nothing of the kind: it compared against the wrong drugs.  And imagine that a more relevant clinical trial—mostly unmentioned in the press—had also been done, and discovered that when you compare to the right drugs, the drugs do better.  Imagine that someone wrote a blog post bringing all of this to public attention.

And now imagine that the response to that blogger was the following: “aha, but isn’t it possible that some future clinical trial will show an advantage for the gene therapy—maybe with some other group of patients?  Even if not, isn’t it possible that the startup will manage to develop an effective gene therapy sometime in the future?  Betcha didn’t consider that, did you?  And anyway, at least they’re out there trying to make gene therapy work!  So we should all support them, rather than relentlessly criticizing.  And as for the startup’s misleading claims to the public?  Oh, don’t be so naïve: that’s just PR.  If you can’t tune out the PR and concentrate on the science, that’s your own damn problem.  In summary, the real issue isn’t what some clinical trial did or didn’t show; it’s you and your hostile attitude.”

In a different context, these sorts of responses would be considered strange, and the need to resort to them revealing.  But the rules for D-Wave are different.

(Interestingly, in excusing D-Wave’s statements, some commenters explicitly defended standards of intellectual discourse so relaxed that, as far as I could tell, just about anything anyone could possibly say would be OK with them—except of course for what I say on this blog, which is not OK!  It reminds me of the central tenet of cultural relativism: that there exist no universal standards by which any culture could ever be judged “good” or “bad,” except that Western culture is irredeemably evil.)

Update (June 4): Matthias Troyer (who, unfortunately, still can’t comment here for embargo reasons) has asked me to clarify that it’s not he, but rather his postdoc Sergei Isakov, who deserves the credit for actually writing the simulated annealing code that outperformed the D-Wave machine on the latter’s own “home turf” (i.e., random QUBO instances with the D-Wave constraint graph).  The quantum Monte Carlo code, which also did quite well at simulating the D-Wave machine, was written by Isakov together with another of Matthias’s postdocs, Troels Rønnow.

Update (June 3): See Cathy McGeoch’s response (here and here), and my response to her response.

Yet More Updates (June 2): Alex Selby has a detailed new post summarizing his comparisons between the D-Wave device (as reported by McGeoch and Wang) and his own solver—finding that his solver can handily outperform the device and speculating about the reasons why.

In other news, Catherine McGeoch spoke on Friday at the MIT quantum group meeting.  Incredibly, she spoke for more than an hour, without once mentioning the USC results that found that simulated annealing on a standard laptop (when competently implemented) handily outperformed the D-Wave machine, or making any attempt to reconcile those results with hers and Wang’s.  Instead, McGeoch used the time to enlighten the assembled experts about what quantum annealing was, what an exact solver was, etc. etc., then repeated the speedup claims as if the more informative comparisons simply didn’t exist.  I left without asking questions, not wanting to be the one to instigate an unpleasant confrontation, and—I’ll admit—questioning my own sanity as a result of no one else asking about the gigantic elephant in the room.

More Updates (May 21): Happy 32nd birthday to me!  Among the many interesting comments below, see especially this one by Alex Selby, who says he’s written his own specialist solver for one class of the McGeoch and Wang benchmarks that significantly outperforms the software (and D-Wave machine) tested by McGeoch and Wang on those benchmarks—and who provides the Python code so you can try it yourself.

Also, Igor Vernik asked me to announce that on July 8th, D-Wave will be giving a technical presentation at the International Superconducting Electronics Conference in Cambridge.  See here for more info; I’ll be traveling then and won’t be able to make it.  I don’t know whether the performance comparisons to Matthias Troyer’s and Alex Selby’s code will be among the topics discussed, or if there will be an opportunity to ask questions about such things.

In another exciting update, John Smolin and Graeme Smith posted a paper to the arXiv tonight questioning even the “signature of quantumness” part of the latest D-Wave claims—the part that I’d been ~98% willing to accept, even as I relayed evidence that cast enormous doubt on the “speedup” part. Specifically, Smolin and Smith propose a classical model that they say can explain the “bimodal” pattern of success probabilities observed by the USC group as well as quantum annealing can. I haven’t yet had time to read their paper or form an opinion about it, but I’d be very interested if others wanted to weigh in.   Update (May 26): The USC group has put out a new preprint responding to Smolin and Smith, offering additional evidence for quantum behavior in the D-Wave device that they say can’t be explained using Smolin and Smith’s model.

Update (May 17): Daniel Lidar emailed me to clarify his views about error-correction and the viability of D-Wave’s approach.  He invited me to share his clarification with others—something that I’m delighted to do, since I agree with him wholeheartedly.  Without further ado, here’s what Lidar says:

I don’t believe D-Wave’s approach is scalable without error correction.  I believe that the incorporation of error correction is a necessary condition in order to ever achieve a speedup with D-Wave’s machines, and I don’t believe D-Wave’s machines are any different from other types of quantum information processing in this regard.  I have repeatedly made this point to D-Wave over several years, and I hope that in the future their designs will allow more flexibility in the incorporation of error correction.

Lidar also clarified that he not only doesn’t dispute what Matthias Troyer told me about the lack of speedup of the D-Wave device compared to classical simulated annealing in their experiments, but “fully agrees, endorses, and approves” of it—and indeed, that he himself was part of the team that did the comparison.

In other news, this Hacker News thread, which features clear, comprehending discussions of this blog post and the backstory that led up to it, has helped to restore my faith in humanity.


Two years ago almost to the day, I announced my retirement as Chief D-Wave Skeptic.  But—as many readers predicted at the time—recent events (and the contents of my inbox!) have given me no choice except to resume my post.  In an all-too-familiar pattern, multiple rounds of D-Wave-related hype have made it all over the world before the truth has had time to put its pants on and drop its daughter off at daycare.  And the current hype is a particular shame, because once one slices through all the layers of ugh—the rigged comparisons, the “dramatic announcements” that mean nothing, the lazy journalists cherry-picking what they want to hear and ignoring the inconvenient bits—there really has been a huge scientific advance this past month in characterizing the D-Wave devices.  I’m speaking about the experiments on the D-Wave One installed at USC, the main results of which finally appeared in April.  Two of the coauthors of this new work—Matthias Troyer and Daniel Lidar—were at MIT recently to speak about their results, Troyer last week and Lidar this Tuesday.  Intriguingly, despite being coauthors on the same paper, Troyer and Lidar have very different interpretations of what their results mean, but we’ll get to that later.  For now, let me summarize what I think their work has established.

Evidence for Quantum Annealing Behavior

For the first time, we have evidence that the D-Wave One is doing what should be described as “quantum annealing” rather than “classical annealing” on more than 100 qubits.  (Note that D-Wave itself now speaks about “quantum annealing” rather than “quantum adiabatic optimization.”  The difference between the two is that the adiabatic algorithm runs coherently, at zero temperature, while quantum annealing is a “messier” version in which the qubits are strongly coupled to their environment throughout, but still maintain some quantum coherence.)  The evidence for quantum annealing behavior is still extremely indirect, but despite my “Chief Skeptic” role, I’m ready to accept what the evidence indicates with essentially no hesitation.
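
For concreteness, both procedures sweep through Hamiltonians of (schematically) the following form, turning off a transverse field while turning on the Ising couplings (the schedule functions below are generic placeholders, not D-Wave’s actual ones):

    H(t) = A(t) ( -\sum_i \sigma^x_i ) + B(t) ( \sum_{i<j} J_{ij} \sigma^z_i \sigma^z_j + \sum_i h_i \sigma^z_i )

with A(t) decreasing to zero and B(t) increasing from zero over the course of the anneal.  The adiabatic algorithm traverses this path coherently at zero temperature; quantum annealing traverses the same path while in contact with a thermal environment.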

So what is the evidence?  Basically, the USC group ran the D-Wave One on a large number of randomly generated instances of what I’ll call the “D-Wave problem”: namely, the problem of finding the lowest-energy configuration of an Ising spin glass, with nearest-neighbor interactions that correspond to the D-Wave chip’s particular topology.  Of course, restricting attention to this “D-Wave problem” tilts the tables heavily in D-Wave’s favor, but no matter: scientifically, it makes a lot more sense than trying to encode Sudoku puzzles or something like that.  Anyway, the group then looked at the distribution of success probabilities when each instance was repeatedly fed to the D-Wave machine.  For example, would the randomly-generated instances fall into one giant clump, with a few outlying instances that were especially easy or especially hard for the machine?  Surprisingly, they found that the answer was no: the pattern was strongly bimodal, with most instances either extremely easy or extremely hard, and few instances in between.  Next, the group fed the same instances to Quantum Monte Carlo: a standard classical algorithm that uses Wick rotation to find the ground states of “stoquastic Hamiltonians,” the particular type of quantum evolution that the D-Wave machine is claimed to implement.  When they did that, they found exactly the same bimodal pattern that they found with the D-Wave machine.  Finally they fed the instances to a classical simulated annealing program—but there they found a “unimodal” distribution, not a bimodal one.  So, their conclusion is that whatever the D-Wave machine is doing, it’s more similar to Quantum Monte Carlo than it is to classical simulated annealing.
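
To make the classical side of the comparison concrete, here’s a minimal sketch of simulated annealing on a random Ising instance.  To be clear, this is nothing like the optimized code discussed later in this post; the graph, cooling schedule, and parameters are all placeholders.

    import math, random

    def random_instance(edges):
        # Random +/-1 couplings J_ij on a fixed graph (a stand-in for the chip's topology).
        return {e: random.choice([-1, 1]) for e in edges}

    def simulated_anneal(J, n, sweeps=1000, beta0=0.1, beta1=3.0):
        spins = [random.choice([-1, 1]) for _ in range(n)]
        nbrs = {i: [] for i in range(n)}
        for (i, j), Jij in J.items():
            nbrs[i].append((j, Jij))
            nbrs[j].append((i, Jij))
        for sweep in range(sweeps):
            # Linear schedule from high temperature (beta0) to low temperature (beta1).
            beta = beta0 + (beta1 - beta0) * sweep / max(sweeps - 1, 1)
            for i in range(n):
                # Energy change from flipping spin i, where E = sum of J_ij * s_i * s_j.
                dE = -2 * spins[i] * sum(Jij * spins[j] for j, Jij in nbrs[i])
                if dE <= 0 or random.random() < math.exp(-beta * dE):
                    spins[i] = -spins[i]
        return spins

Repeating such runs many times per instance, and histogramming the per-instance success probabilities (the fraction of runs that reach the ground state), gives the kind of “unimodal” simulated-annealing pattern just described.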

Curiously, we don’t yet have any hint of a theoretical explanation for why Quantum Monte Carlo should give rise to a bimodal distribution, while classical simulated annealing should give rise to a unimodal one.  The USC group simply observed the pattern empirically (as far as I know, they’re the first to do so), then took advantage of it to characterize the D-Wave machine.  I regard explaining this pattern as an outstanding open problem raised by their work.

In any case, if we accept that the D-Wave One is doing “quantum annealing,” then despite the absence of a Bell-inequality violation or other direct evidence, it’s reasonably safe to infer that there should be large-scale entanglement in the device.  I.e., the true quantum state is no doubt extremely mixed, but there’s no particular reason to believe we could decompose that state into a mixture of product states.  For years, I tirelessly repeated that D-Wave hadn’t even provided evidence that its qubits were entangled—and that, while you can have entanglement with no quantum speedup, you can’t possibly have a quantum speedup without at least the capacity to generate entanglement.  Now, I’d say, D-Wave finally has cleared the evidence-for-entanglement bar—and, while they’re not the first to do so with superconducting qubits, they’re certainly the first to do so with so many superconducting qubits.  So I congratulate D-Wave on this accomplishment.  If this had been advertised from the start as a scientific research project—“of course we’re a long way from QC being practical; no one would ever claim otherwise; but as a first step, we’ve shown experimentally that we can entangle 100 superconducting qubits with controllable couplings”—my reaction would’ve been, “cool!”  (Similar to my reaction to any number of other steps toward scalable QC being reported by research groups all over the world.)

No Speedup Compared to Classical Simulated Annealing

But of course, D-Wave’s claims—and the claims being made on its behalf by the Hype-Industrial Complex—are far more aggressive than that.  And so we come to the part of this post that has not been pre-approved by the International D-Wave Hype Repeaters Association.  Namely, the same USC paper that reported the quantum annealing behavior of the D-Wave One, also showed no speed advantage whatsoever for quantum annealing over classical simulated annealing.  In more detail, Matthias Troyer’s group spent a few months carefully studying the D-Wave problem—after which, they were able to write optimized simulated annealing code that solves the D-Wave problem on a normal, off-the-shelf classical computer, about 15 times faster than the D-Wave machine itself solves the D-Wave problem!  Of course, if you wanted even more classical speedup than that, then you could simply add more processors to your classical computer, for only a tiny fraction of the ~$10 million that a D-Wave One would set you back.

Some people might claim it’s “unfair” to optimize the classical simulated annealing code to take advantage of the quirks of the D-Wave problem.  But think about it this way: D-Wave has spent ~$100 million, and hundreds of person-years, optimizing the hell out of a special-purpose annealing device, with the sole aim of solving this one problem that D-Wave itself defined.  So if we’re serious about comparing the results to a classical computer, isn’t it reasonable to have one professor and a few postdocs spend a few months optimizing the classical code as well?

As I said, besides simulated annealing, the USC group also compared the D-Wave One’s performance against a classical implementation of Quantum Monte Carlo.  And maybe not surprisingly, the D-Wave machine was faster than a “direct classical simulation of itself” (I can’t remember how many times faster, and couldn’t find that information in the paper).  But even here, there’s a delicious irony.  The only reason the USC group was able to compare the D-Wave One against QMC at all is that QMC is efficiently implementable on a classical computer!  (Albeit probably with a large constant overhead compared to running the D-Wave annealer itself—hence the superior performance of classical simulated annealing over QMC.)  This means that, if the D-Wave machine can be understood as reaching essentially the same results as QMC (technically, “QMC with no sign problem”), then there’s no real hope for using the D-Wave machine to get an asymptotic speedup over a classical computer.  The race between the D-Wave machine and classical simulations of the machine would then necessarily be a cat-and-mouse game, a battle of constant factors with no clear asymptotic victor.  (Some people might conjecture that it will also be a “Tom & Jerry game,” the kind where the classical mouse always gets the better of the quantum cat.)
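
For the curious, here’s a rough sketch of what “classically implementing QMC” means for a stoquastic transverse-field Ising Hamiltonian: the standard Suzuki-Trotter trick of trading the quantum system for P coupled classical replicas.  This is only a toy version, under my own simplifications (no measurement machinery, no annealing of the transverse field, arbitrary parameters), and certainly not the USC group’s actual code.

    import math, random

    def qmc_sweep(slices, nbrs, gamma, beta):
        # One Metropolis sweep of path-integral QMC for H = sum J_ij Z_i Z_j - gamma * sum X_i.
        # `slices` holds P replicas of the spin configuration (the imaginary-time direction);
        # `nbrs[i]` lists (j, J_ij) pairs, as in the simulated-annealing sketch above.
        P, n = len(slices), len(slices[0])
        # Ferromagnetic coupling between adjacent replicas, from the Suzuki-Trotter decomposition.
        k_perp = -0.5 * math.log(math.tanh(beta * gamma / P))
        for k in range(P):
            up, down = slices[(k + 1) % P], slices[(k - 1) % P]
            for i in range(n):
                # Local contribution of spin (i, k) to the effective classical action.
                local = (beta / P) * sum(Jij * slices[k][j] for j, Jij in nbrs[i]) \
                        - k_perp * (up[i] + down[i])
                dS = -2 * slices[k][i] * local  # action change from flipping this spin
                if dS <= 0 or random.random() < math.exp(-dS):
                    slices[k][i] *= -1

The point is just that every step here is ordinary classical arithmetic, which is why the D-Wave machine’s similarity to QMC cuts against hopes for an asymptotic speedup.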

At this point, it’s important to give a hearing to three possible counterarguments to what I’ve written above.

The first counterargument is that, if you plot both the runtime of simulated annealing and the runtime of the D-Wave machine as functions of the instance size n, you find that, while simulated annealing is faster in absolute terms, it can look like the curve for the D-Wave machine is less steep.  Over on the blog “nextbigfuture”, an apparent trend of this kind has been fearlessly extrapolated to predict that with 512 qubits, the D-Wave machine will be 10 billion times faster than a classical computer.  But there’s a tiny fly in the ointment.  As Troyer carefully explained to me last week, the “slow growth rate” of the D-Wave machine’s runtime is, ironically, basically an artifact of the machine being run too slowly on small values of n.  Run the D-Wave machine as fast as it can run for small n, and the difference in the slopes disappears, with only the constant-factor advantage for simulated annealing remaining.  In short, there seems to be no evidence, at present, that the D-Wave machine is going to overtake simulated annealing for any instance size.

The second counterargument is that the correlation between the two “bimodal distributions”—that for the D-Wave machine and that for the Quantum Monte Carlo simulation—is not perfect.  In other words, there are a few instances (not many) that QMC solves faster than the D-Wave machine, and likewise a few instances that the D-Wave machine solves faster than QMC.  Not surprisingly, the latter fact has been eagerly seized on by the D-Wave boosters (“hey, sometimes the machine does better!”).  But Troyer has a simple and hilarious response to that.  Namely, he found that his group’s QMC code did a better job of correlating with the D-Wave machine, than the D-Wave machine did of correlating with itself!  In other words, calibration errors seem entirely sufficient to explain the variation in performance, with no need to posit any special class of instances (however small) on which the D-Wave machine dramatically outperforms QMC.

The third counterargument is just the banal one: the USC experiment was only one experiment with one set of instances (albeit, a set one might have thought would be heavily biased toward D-Wave).  There’s no proof that, in the future, it won’t be discovered that the D-Wave machine does something more than QMC, and that there’s some (perhaps specially-designed) set of instances on which the D-Wave machine asymptotically outperforms both QMC and Troyer’s simulated annealing code.  (Indeed, I gather that folks at D-Wave are now assiduously looking for such instances.)  Well, I concede that almost anything is possible in the future—but “these experiments, while not supporting D-Wave’s claims about the usefulness of its devices, also don’t conclusively disprove those claims” is a very different message than what’s currently making it into the press.

Comparison to CPLEX is Rigged

Unfortunately, the USC paper is not the one that’s gotten the most press attention—perhaps because half of it inconveniently told the hypesters something they didn’t want to hear (“no speedup”).  Instead, journalists have preferred a paper released this week by Catherine McGeoch and Cong Wang, which reports that quantum annealing running on the D-Wave machine outperformed the CPLEX optimization package running on a classical computer by a factor of ~3600, on Ising spin problems involving 439 bits.  Wow!  That sounds awesome!  But before rushing to press, let’s pause to ask ourselves: how can we reconcile this with the USC group’s result of no speedup?

The answer turns out to be painfully simple.  CPLEX is a general-purpose, off-the-shelf exact optimization package.  Of course an exact solver can’t compete against quantum annealing—or for that matter, against classical annealing or other classical heuristics!  Noticing this problem, McGeoch and Wang do also compare the D-Wave machine against tabu search, a classical heuristic algorithm.  When they do so, they find that an advantage for the D-Wave machine persists, but it becomes much, much smaller (they didn’t report the exact time comparison).  Amusingly, they write in their “Conclusions and Future Work” section:

It would of course be interesting to see if highly tuned implementations of, say, tabu search or simulated annealing could compete with Blackbox or even QA [i.e., the D-Wave machines] on QUBO [quadratic unconstrained binary optimization] problems; some preliminary work on this question is underway.

As I said above, at the time McGeoch and Wang’s paper was released to the media (though maybe not at the time it was written?), the “highly tuned implementation” of simulated annealing that they ask for had already been written and tested, and the result was that it outperformed the D-Wave machine on all instance sizes tested.  In other words, their comparison to CPLEX had already been superseded by a much more informative comparison—one that gave the “opposite” result—before it ever became public.  For obvious reasons, most press reports have simply ignored this fact.
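
For readers who haven’t met it, tabu search is itself nothing exotic: a greedy local search that forbids recently-flipped variables for a while, so that it can climb out of local minima.  Here’s a minimal single-bit-flip version for QUBO minimization; the tenure and iteration counts are arbitrary placeholders, nothing like the tuned implementations at issue.

    import random

    def flip_delta(Q, x, i):
        # Objective change from flipping bit i, where f(x) = sum over (a, b) of Q[a, b] * x_a * x_b.
        flip = 1 - 2 * x[i]
        d = Q.get((i, i), 0) * flip
        for (a, b), q in Q.items():
            if a == b:
                continue
            if a == i:
                d += q * x[b] * flip
            elif b == i:
                d += q * x[a] * flip
        return d

    def tabu_search(Q, n, iters=2000, tenure=10):
        x = [random.randint(0, 1) for _ in range(n)]
        f = sum(q * x[a] * x[b] for (a, b), q in Q.items())
        best_x, best_f = x[:], f
        tabu_until = [0] * n
        for it in range(1, iters + 1):
            move, move_d = None, None
            for i in range(n):
                d = flip_delta(Q, x, i)
                # A tabu move is allowed only if it would beat the best solution seen ("aspiration").
                if (tabu_until[i] <= it or f + d < best_f) and (move_d is None or d < move_d):
                    move, move_d = i, d
            if move is None:
                continue
            x[move] = 1 - x[move]
            f += move_d
            tabu_until[move] = it + tenure
            if f < best_f:
                best_x, best_f = x[:], f
        return best_x, best_f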

Troyer, Lidar, and Stone Soup

Much of what I’ve written in this post, I learned by talking to Matthias Troyer—the man who carefully experimented with the D-Wave machine and figured out how to beat it using simulated annealing, and whom I regard as probably the world’s #1 expert right now on what exactly the machine does.  Troyer wasn’t shy about sharing his opinions, which, while couched in qualifications, tended toward the extremely skeptical.  For example, Troyer conjectured that, if D-Wave ultimately succeeds in getting a speedup over classical computers in a fair comparison, then it will probably be by improving coherence and calibration, incorporating error-correction, and doing other things that “traditional,” “academic” quantum computing researchers had said all along would need to be done.

As I said, Daniel Lidar is another coauthor on the USC paper, and also recently visited MIT to speak.  Lidar and Troyer agree on the basic facts—yet Lidar noticeably differed from Troyer, in trying to give each fact the most “pro-D-Wave spin” it could possibly support.  Lidar spoke at our quantum group meeting, not about the D-Wave vs. simulated annealing performance comparison (which he agrees with), but about a proposal of his for incorporating quantum error-correction into the D-Wave device, together with some experimental results.  He presented his proposal, not as a reductio ad absurdum of D-Wave’s entire philosophy, but rather as a positive opportunity to get a quantum speedup using D-Wave’s approach.

So, to summarize my current assessment of the situation: yes, absolutely, D-Wave might someday succeed—ironically, by adapting the very ideas from “the gate model” that its entire business plan has been based on avoiding, and that D-Wave founder Geordie Rose has loudly denigrated for D-Wave’s entire history!  If that’s what happens, then I predict that science writers, and blogs like “nextbigfuture,” will announce from megaphones that D-Wave has been vindicated at last, while its narrow-minded, theorem-obsessed, ivory-tower academic naysayers now have egg all over their faces.  No one will care that the path to success—through quantum error-correction and so on—actually proved the academic critics right, and that D-Wave’s “vindication” was precisely like that of the deliciousness of stone soup in the old folktale.  As for myself, I’ll probably bang my head on my desk until I sustain so much brain damage that I no longer care either.  But at least I’ll still have tenure, and the world will have quantum computers.

The Messiah’s Quantum Annealer

Over the past few days, I’ve explained the above to at least six different journalists who asked.  And I’ve repeatedly gotten a striking response: “What you say makes sense—but then why are all these prestigious people and companies investing in D-Wave?  Why did Bo Ewald, a prominent Silicon Valley insider, recently join D-Wave as president of its US operations?  Why the deal with Lockheed Martin?  Why the huge deal with NASA and Google, just announced today?  What’s your reaction to all this news?”

My reaction, I confess, is simple.  I don’t care—I actually told them this—if the former Pope Benedict has ended his retirement to become D-Wave’s new marketing director.  I don’t care if the Messiah has come to Earth on a flaming chariot, not to usher in an age of peace but simply to spend $10 million on D-Wave’s new Vesuvius chip.  And if you imagine that I’ll ever care about such things, then you obviously don’t know much about me.  I’ll tell you what: if peer pressure is where it’s at, then come to me with the news that Umesh Vazirani, or Greg Kuperberg, or Matthias Troyer is now convinced, based on the latest evidence, that D-Wave’s chip asymptotically outperforms simulated annealing in a fair comparison, and does so because of quantum effects.  Any one such scientist’s considered opinion would mean more to me than 500,000 business deals.

The Argument from Consequences

Let me end this post with an argument that several of my friends in physics have explicitly made to me—not in the exact words below but in similar ones.

“Look, Scott, let the investors, government bureaucrats, and gullible laypeople believe whatever they want—and let D-Wave keep telling them whatever’s necessary to stay in business.  It’s unsportsmanlike and uncollegial of you to hold D-Wave’s scientists accountable for whatever wild claims their company’s PR department might make.  After all, we’re in this game too!  Our universities put out all sorts of overhyped press releases, but we don’t complain because we know that it’s done for our benefit.  Besides, you’d doubtless be trumpeting the same misleading claims, if you were in D-Wave’s shoes and needed the cash infusions to survive.  Anyway, who really cares whether there’s a quantum speedup yet or no quantum speedup?  At least D-Wave is out there trying to build a scalable quantum computer, and getting millions of dollars from Jeff Bezos, Lockheed, Google, the CIA, etc. etc. to do so—resources more of which would be directed our way if we showed a more cooperative attitude!  If we care about scalable QCs ever getting built, then the wise course is to celebrate what D-Wave has done—they just demonstrated quantum annealing on 100 qubits, for crying out loud!  So let’s all be grownups here, focus on the science, and ignore the marketing buzz as so much meaningless noise—just like a tennis player might ignore his opponent’s trash-talking (‘your mother is a whore,’ etc.) and focus on the game.”

I get this argument: really, I do.  I even concede that there’s something to be said for it.  But let me now offer a contrary argument for the reader’s consideration.

Suppose that, unlike in the “stone soup” scenario I outlined above, it eventually becomes clear that quantum annealing can be made to work on thousands of qubits, but that it’s a dead end as far as getting a quantum speedup is concerned.  Suppose the evidence piles up that simulated annealing on a conventional computer will continue to beat quantum annealing, if even the slightest effort is put into optimizing the classical annealing code.  If that happens, then I predict that the very same people now hyping D-Wave will turn around and—without the slightest acknowledgment of error on their part—declare that the entire field of quantum computing has now been unmasked as a mirage, a scam, and a chimera.  The same pointy-haired bosses who now flock toward quantum computing, will flock away from it just as quickly and as uncomprehendingly.  Academic QC programs will be decimated, despite the slow but genuine progress that they’d been making the entire time in a “parallel universe” from D-Wave.  People’s contempt for academia is such that, while a D-Wave success would be trumpeted as its alone, a D-Wave failure would be blamed on the entire QC community.

When it comes down to it, that’s the reason why I care about this matter enough to have served as “Chief D-Wave Skeptic” from 2007 to 2011, and enough to resume my post today.  As I’ve said many times, I really, genuinely hope that D-Wave succeeds at building a QC that achieves an unambiguous speedup!  I even hope the academic QC community will contribute to D-Wave’s success, by doing careful independent studies like the USC group did, and by coming up with proposals like Lidar’s for how D-Wave could move forward.  On the other hand, in the strange, unlikely event that D-Wave doesn’t succeed, I’d like people to know that many of us in the QC community were doing what academics are supposed to do, which is to be skeptical and not leave obvious questions unasked.  I’d like them to know that some of us simply tried to understand and describe what we saw in front of us—changing our opinions repeatedly as new evidence came in, but disregarding “meta-arguments” like my physicist friends’ above.  The reason I can joke about how easy it is to bribe me is that it’s actually kind of hard.

689 Responses to “D-Wave: Truth finally starts to emerge”

  1. Dave Bacon Says:

    Wow, ripping out the red ink. That’s like, almost CAPS LOCK. Fun times, fun post 😉

  2. Dave Bacon Says:

    Oh and “if you wanted even more classical speedup than that, then you could simply add more cores to your classical computer” seems a bit off. Did you really mean that?

  3. Scott Says:

    Dave: Thanks, changed “cores” to “processors.” Basic point remains unchanged.

  4. Mehmet Ozan Kabak Says:

    Great post with a lot of interesting information. I definitely agree with your criticisms of the Amherst paper. It would be interesting to see if it is possible to come up with a set of “ground rules” for sensible benchmarking/comparison of QC candidates and classical alternatives. I think we might see more results like this, especially in the absence of such “ground rules” that are agreed upon by the majority of the community.

    On the other hand, given that benchmarking is a controversial subject even for implementations of purely classical algorithms, I don’t know how realistic a proposal I’ve made.

  5. Michael Trick’s Operations Research Blog : Quantum computing and operations research Says:

    […] sure to check out Scott Aaronson’s take on this, as he resumes his role as “Chief D-Wave […]

  6. Dave Bacon Says:

    Oh and “QMC is efficiently implementable on a classical computer!” Don’t quite understand that one either. Certainly QMC is not efficient on all problem instances. Probably the most interesting paper I’ve read since I left quantum computing was Matt Hastings’ result showing that, for stoquastic Hamiltonians having a small quantum gap, the QMC simulation may take exponentially long to reach equilibrium: http://arxiv.org/abs/1302.5733

  7. Michael Bacon Says:

    Dave,

    You’re circling the heard, picking off the weak one at a time. What about the main thrust of the post? 😉

  8. Michael Bacon Says:

    Or, “herd” Ah well

  9. X Says:

    When you say that D-Wave solves a problem equivalent to ‘QMC (technically, “QMC with no sign problem”)’, do you mean that they cannot solve QMC with a sign problem or that their solution eliminates the sign problem from QMC?

  10. Scott Says:

    Elder Clown #6: Yes, the Freedman-Hastings result shows that QMC can take exponential time to simulate the adiabatic algorithm (at least in some very special examples), but QMC certainly doesn’t need exponential time to simulate itself! 🙂 And Troyer’s conjecture, if I understood him correctly, is that the current generation of D-Wave machines can be understood as not doing anything that QMC can’t also do.

  11. Rahul Says:

    Bravo Scott! Great post.

    D-Wave deserves this post and I hope more people pay heed.

    D-Wave’s a scam. Pure and simple. Any attempt to add nuance to what they are doing is distracting.

  12. Skeptic Says:

    “I predict that the very same people now hyping D-Wave will turn around and—without the slightest acknowledgment of error on their part—declare that the entire field of quantum computing has now been unmasked as a mirage, a scam, and a chimera.”

    This is the part that undoes anything of value in all your other posting. You are essentially saying that all the good that comes from media attention and funding being directed at QC is outweighed by your own personal and baseless suspicions as to the actions of a nameless group of people in a hypothetical scenario. That’s even more of a reach than D-Wave’s marketing claims!

  13. Robin B-K Says:

    Scott,

    Great post, and I’m glad to have you back in your skeptical saddle.

    Now, a technical point about that intriguing discrepancy between unimodal and bimodal distributions of success probabilities (Fig. 2 in arxiv/1304.4595). I agree that explaining this in detail is an urgent problem (in the sense that we should study it RIGHT NOW), but I doubt that it’s “an outstanding open problem” in the sense of having a deep and mysterious explanation.

    Why? Because the QMC distribution isn’t just bimodal. It’s specifically got a peak at p=0 and a peak at p=1, with a broad flat distribution in between. There’s a pretty simple explanation for this (although I don’t know if it’s correct!):
    (1) Each instance (k) requires a certain amount of time (T_k) to solve. T_k is broadly distributed (over the ensemble of instances) across a range [0..Tmax].
    (2) If you run QMC for T<>T_k, it succeeds. For T close to T_k, it’s random.

    This is actually a really plausible model, since all it’s saying is that different instances require (widely) different running times. The “mystery” is mostly an artifact of plotting a histogram of success probabilities (not a very natural thing for algorithms) at fixed time (rather than a histogram of running times). I’ll be surprised if this isn’t roughly what’s happening. In fact, the mystery to me is why SA has a Gaussian distribution. It implies that all instances of the “D-Wave problem” are of roughly equal hardness for SA, which surprises me.
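
    Here’s a ten-line numerical version of that model, with made-up distributions and parameters, purely to show the histogram shape it predicts:

        import math, random

        # Instances have broadly (log-uniformly) distributed required times T_k; at a fixed
        # budget T, the success probability is a smooth step around T = T_k.  All parameters
        # here are invented, just to illustrate the shape.
        T = 1.0
        probs = []
        for _ in range(10000):
            log_Tk = random.uniform(-3, 3)  # broad spread of instance hardness
            p = 1 / (1 + math.exp(-4 * (math.log(T) - log_Tk)))  # ~0 if T << T_k, ~1 if T >> T_k
            probs.append(p)
        # Histogramming `probs` gives peaks at 0 and 1 with a broad, flat middle.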

  14. Robin B-K Says:

    Argh. I forgot about what WordPress does to inequality signs. In post #13, part (2) of the explanation should read:

    “(2) If you run QMC for T much less than T_k, it fails. For T much greater than T_k, it succeeds. For T close to T_k, it’s random.”

  15. Petter Says:

    Are data for typical “Dwave problems” publicly available? I’d like to try writing classical optimizers.

  16. Michael Bacon Says:

    Skeptic@12,

    “This is the part that undoes anything of value in all your other posting.”

    And, the above is the part that undoes anything of value in what should have been your point that Scott may have been a little over the top in the paragraph you quote. You are essentially saying that all the good that comes from Scott clearly highlighting the scientific facts and circumstances currently at work here is outweighed by your own personal and baseless overreaction to one albeit overly dramatic statement on Scott’s part. Why, I’d go so far as to say that your comment is even more of a reach than D-Wave’s marketing claims and Scott’s comment combined 😉

  17. Rahul Says:

    Finally they fed the instances to a classical simulated annealing program—but there they found a “unimodal” distribution, not a bimodal one. So, their conclusion is that whatever the D-Wave machine is doing, it’s more similar to Quantum Monte Carlo than it is to classical simulated annealing.

    Is there a chance that there’s a similar bug in both implementations that gives the bimodal distribution? Or is that too unlikely? Are the source codes of USC / D-Wave both open for examination?

  18. kingtoots Says:

    Scott (if I can be so informal),

    I agree with you about 90%, so I say this with love, HOWEVER, a plain-text reading of your blog entry suggests (and I know that you will probably disagree with this w.p. 1) that your 2 big complaints are about things I don’t understand why you’d care about. You are a very smart guy, so this confuses me. You seem to care who is paying for the research and whether or not people will apologize if things don’t pan out.

    Do you really think that funding from a university or the government would somehow be better than a bunch of billionaires putting what is “chump change” to them into a highly risky design? Perhaps you think failure will somehow poison the well for future QM-based computers. I can assure you that your concern is quaint but not really serious. I know that when you are making $80,000 a year, $100 million looks like a lot, but believe me, it ain’t.

    As for people switching sides of a debate and not really paying a price for it or apologizing, well, what can I say. Welcome to the real world. If this is a major concern of yours then you will be very disappointed throughout life, so stay angry my friend.

  19. kingtoots Says:

    Ok, I re-read my post (damn that I didn’t do it before I posted), and it sounds a little crazy. I didn’t mean MAJOR complaints. I read your post, and your SUBSTANTIVE technical complaints are the 90% I agree with. It was just the other stuff I mentioned that was kind of off-putting compared to the rest of the complaints.

  20. Andy McKenzie Says:

    I think that the case of stem cells, and in particular the fraud done by Dr. Hisashi Moriguchi and its backlash, shows how valuable it is for a scientific field to have contributors like Scott. See here for context: http://www.ipscell.com/2012/10/the-moriguchi-ips-transplant-fable-key-lessons-where-do-we-go-from-here/

  21. blazespinnaker Says:

    Ya know, I think the whole foundation of science is based on skepticism. Without it, we might as well just believe in magic.

    I think what Troyer and Aaronson are doing here is terrific.

  22. David Poulin Says:

    From my discussion with Troyer, Robin’s explanation is correct. The instances with high success probability correspond to systems that have a spectral gap at all times, while the low success probability instances have a very small gap (critical) for some value of the transverse field. Since the runtime should be inversely proportional to the gap, this boils down to Robin’s point. While fig. 3 in their paper suggests that all instances become gapless for some value (0.23) of the transverse field, this is caused by a symmetry and does not affect the runtime. The point is that solutions to the Ising problem always come in pairs: flipping all the bits of a solution yields another solution. So the level crossing at 0.23 will cause the computer to pick one of these two solutions at random. It is the second critical point that slows down the computation, and this one is only seen in hard instances.

    So while the mechanism that causes failures is not mysterious, identifying problem instances that will have a large gap at all times (modulo symmetries) is a key problem.

  23. Steve Dekorte Says:

    “a normal, off-the-shelf classical computer, about 15 times faster than the D-Wave machine itself solves the D-Wave problem!”

    This is a good point but I’d be interested to know which technology, in the long run, will do more useful computation per unit energy. Would also like to see a comparison with analog electronic computing implemented on MEMS.

  24. Adam Says:

    Sorry, I’m confused. Which paper has a Bell Violation? Or a mixed state entanglement measure that indicates bipartite entanglement? Is this a slide from the talk?

  25. bjuba Says:

    It’s important to recognize that Scott’s fears are justified by the parallels of these events to the genesis of one of the many “hype-and-bust” cycles experienced by Artificial Intelligence (see: AI Winter). Unmanaged expectations today inevitably lead to the disappearance of funding for everybody 5-10 years later.

  26. Shehab Says:

    I did a quick review of the Amherst paper and was trying to build a superficial mental picture of what gave the D-Wave machine its advantage. Probably each term in the series of the Ising Hamiltonian had a dedicated processing element (a magnetic field acting upon individual pairs of neighbors in parallel) in the case of the D-Wave machine. On the other hand, the computations on the classical computers had to compete for resources (I assume the number of processors was way smaller than the number of unit computational blocks, probably generated by a divide-and-conquer strategy). On top of that, if there were recursive approaches, they might also have contributed to the classical computing time through stack overhead and the time for combining the solutions of sub-problems. I don’t see the scientific point of carrying out this research project. It is more like a poorly designed benchmarking test with fundamentally flawed initial assumptions. I have failed to imagine a scenario where the results will be useful.

  27. Gareth McCaughan’s summary of what we know about D-wave’s quantum computer | Math and Code Says:

    […] is making waves (I know, I know) with its quantum computer. Scott Aaronson’s post about the latest on the D-wave saga was linked on Hacker News. Gareth McCaughan has given a very […]

  28. sflammia Says:

    Robin (#13) and David (#22) nailed it. I think Graeme and John are planning another “Pretending to…” article in their ongoing series to this effect.

  29. Scott Says:

    X #9:

      do you mean that they cannot solve QMC with a sign problem or that their solution eliminates the sign problem from QMC?

    Well, D-Wave’s current architecture only implements “stoquastic” Hamiltonians. And by definition, stoquastic Hamiltonians are those with no sign problem, meaning that indeed, the machine can’t directly simulate QMC with a sign problem.

  30. blazespinnaker Says:

    Interesting rebuttal by Geordie at his blog on dwave:

    The majority of that post is simply factually incorrect.

    As one example, Troyer hasn’t even had access yet to the system Cathy benchmarked (the Vesuvius-based system). (!) Yes Rainier could be beat by dedicated solvers — it was really slow! Vesuvius can’t (at least for certain types of problems). Another is he thinks we only benchmarked against cplex (not true) and he thinks cplex is just an exact solver (not true). These types of gross misunderstanding permeate the whole thing.

    I used to find this stuff vaguely amusing in an irritating kind of way. Now I just find it boring and I wonder why anyone listens to a guy who’s been wrong exactly 100% of the time about everything. Update your priors, people!!

    If you want to know what’s really going on, listen to the real experts. Like Hartmut.

    http://googleresearch.blogspot.ca/2013/05/launching-quantum-artificial.html

  31. Scott Says:

    Adam #24:

      Sorry, I’m confused. Which paper has a Bell Violation? Or a mixed state entanglement measure that indicates bipartite entanglement?

    As far as I know, it remains the case that no one has directly verified the presence of entanglement in the D-Wave device, for example by violating a Bell inequality. (Though other groups, like one at Yale, have demonstrated Bell inequality violations and even 3-qubit entanglement with superconducting qubits like D-Wave’s.) Furthermore, Mohammad Amin of D-Wave told me that their hardware simply isn’t set up for measuring Bell violations, and that D-Wave isn’t particularly interested in modifying their hardware so that it would be.

    Yet despite that, all the experts I spoke to told me that it’s hard to understand how you could be doing quantum annealing (like the USC experiments suggest is being done) without the intermediate states being entangled. So, despite the lack of conclusive evidence, I’m willing to give D-Wave the benefit of the doubt on this point. After all, I’m never a stickler just for the sake of stickling! 🙂

  32. Scott Says:

    Skeptic #12 and kingtoots #18: If you read my post, you’ll see that when I talked to journalists and various “pro-D-Wave” people (which I’ve been doing all week), I kept trying to explain the technical issues (e.g., speedup compared to CPLEX, but no speedup in a fair comparison against simulated annealing). They were the ones who kept offering me “sociological” replies (“sure, but what about the multimillion-dollar deals with Lockheed and Google? what about the need to maintain a positive, collegial tone so that we can get some of that sweet moolah too?”) It’s for that reason that I felt the need to formulate my own “sociological” reply—and having done so, I thought it might be interesting to share on my blog.

  33. hackenkaus Says:

    “Yes Rainier could be beat by dedicated solvers — it was really slow! ” Well of course! Where would anyone have got the impression Rainier was ‘sposed to be fast?

  34. Robin B-K Says:

    Adam #24 and Scott #31:

    The case for entanglement is actually pretty direct now, thanks to not-yet-published results. At the AQC workshop in March, Federico Spedalieri presented work based on an application of arxiv:1208.0860. They did a pretty good analysis of measurements on the D-Wave machine, and more or less convinced me that the entanglement question is resolved. It’s not quite as good as a Bell violation, but it’s pretty good — you show that a large set of measurement data isn’t consistent with any separable state, even though you can’t identify which non-separable state it is (or how to distill entanglement).

    (Technical note: actually, their data wasn’t quite statistically significant, but only for a very technical reason. It’s fixable, and I believe their conclusion).

    This doesn’t seem to be published yet, but the talk at AQC is written up pretty well in a New Scientist article (http://www.newscientist.com/article/dn23251-controversial-quantum-computer-aces-entanglement-tests.html). I spent a while discussing it with the author, and I think he got it fairly right.

    Ironically, I don’t think this is really that big a deal. It’s a nice benchmark, but at the same time… the relationship between nonseparability and computational speedup is pretty shaky. It’s sure as heck not sufficient, and Van den Nest’s result shows that (quantitatively) it’s very close to not necessary. I feel sort of bad for saying this — the science community has been beating D-Wave up for not showing entanglement, and now that they’ve shown it, I think we’re going to (correctly) say “It’s not really that important”.

  35. Michael Vassar Says:

    This http://www.overcomingbias.com/2013/05/high-road-doubts.html
    seems perfectly timed to me.

  36. Scott Says:

    blazespinnaker #30 and hackenkaus #33: LOL! Geordie’s reply was noticeably pricklier than usual. I suppose it’s easy to laugh off a complexity-theorist pipsqueak spouting his ivory-tower skepticism—but when that pipsqueak is reporting the conclusions of an expert condensed-matter physicist who studied the machine in depth and then wrote simulated-annealing code that outperformed it, it’s no longer as “vaguely amusing.”

    Yes, I said in my post that the McGeoch-Wang paper compared against other classical algorithms (including tabu search), not just against the CPLEX exact solver. But am I mistaken that the claim of a “3600-fold speedup” that’s being trumpeted all over the press does refer to the totally-useless comparison against the exact solver?

    Now, as for the difference between the 128-qubit machine and the 512-qubit one: based on the results Troyer explained to me (some of which, unfortunately, are not yet public), I see Geordie’s bet and raise it. Give Troyer and his postdocs full access to the Vesuvius-based system. If they can’t outperform it classically within a year or so, I will eat my words.

  37. Scott Says:

    Robin B-K #34: I think I’ve been pretty consistent that entanglement is necessary but not sufficient for a quantum speedup. I see entanglement as nothing more than a sanity check—and as such, as an obvious hurdle than one should clear before one even starts talking about quantum speedups. And as soon as it looked like D-Wave had cleared that hurdle, I immediately acknowledged it and congratulated them, without even putting up a fight!

    Regarding entanglement being a necessary condition, I stand by the way I put it in my post:

      while you can have entanglement with no quantum speedup, you can’t possibly have a quantum speedup without at least the capacity to generate entanglement

    What I meant was simply that, if you can’t apply entangling operations, then your QC can be trivially simulated by keeping track of the state of each qubit separately. Now, it’s true that no one has ruled out the technical possibility that you could get a speedup with entangling operations that “just so happen” to leave your state a separable mixed state at every time step. And that’s a wonderful open problem, one I worked on myself back at Berkeley. But it’s also true that we have no actual examples of quantum algorithms—as in zero, not one—that get an asymptotic speedup without generating entangled states.
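
    To spell out the triviality: with no entangling operations, n qubits can be stored as n separate 2-dimensional vectors, so the simulation cost scales linearly rather than exponentially. A toy sketch, using single-qubit Hadamards only:

        import math

        H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
             [1 / math.sqrt(2), -1 / math.sqrt(2)]]

        def apply_1q(gate, qubit):
            # Apply a 2x2 gate to one qubit's amplitude vector.
            a, b = qubit
            return [gate[0][0] * a + gate[0][1] * b,
                    gate[1][0] * a + gate[1][1] * b]

        n = 1000
        qubits = [[1.0, 0.0] for _ in range(n)]       # |00...0>, as n independent states
        qubits = [apply_1q(H, q) for q in qubits]     # one gate layer: O(n) work, not O(2^n)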

    Yes, I know there are papers that claim you can get a speedup with “almost no entanglement,” but everything in those papers hinges on the exact definition of “almost.” For example, I claim that I can implement Shor’s algorithm in a QC 99.9% of whose qubits are completely unentangled! I’ll simply run Shor on the other 0.1%. 🙂

  38. AJ Says:

    So how long before “A Method to Simulate Quantum Annealing” is patented? Sounds like a new algorithm class with a variety of uses.

  39. Harsh But True Says:

    The part of this I have trouble explaining to my engineer buddies is how a group or author would allow a claim of “faster” to even appear in a paper about “asymptotically faster” — except in the section labeled “experimental data”. These are professional complexity theorists right?

  40. Scott Says:

    AJ #38: Yes, Matthias was joking that maybe he could sell his classical simulation of the D-Wave machine to Google and Lockheed for tens of millions of dollars as well! My advice to him is to market his simulation as an even newer and cooler form of quantum computing: one so advanced that it can be run entirely on a classical computer. 🙂

  41. Scott Says:

    Harsh But True #39: What the hell are you talking about? Which group? Which paper?

  42. blazespinnaker Says:

    Was there any discussion about general purpose CPU versus GPU versus ASIC?

    When are we going to see Troyer’s work on the arXiv? 🙂

  43. Scott Says:

    blazespinnaker #42: The USC paper is right here; it’s been on the arXiv for a month. But it represents the combined views of all the authors, not Troyer’s views alone.

    Yes, I think the CPU versus GPU issue did come up at some point, but I can’t remember what was said about it.

  44. Marshall Eubanks Says:

    Excellent post. Thanks for the insightful report.

  45. Vadim Says:

    Question, and I’ll understand if you choose to leave it alone or have no opinion, but do you think D-Wave knows that they’re selling snake oil, or are they hoping they might have something?

  46. Scott Says:

    Vadim #45: Based on repeated meetings with them (including a visit to their HQ in Vancouver), I think they’re genuinely hoping they might have something.

  47. Miquel Ramirez Says:

    Scott’s concerns regarding intellectual (and material) honesty are right on topic. I’d invite everyone to remember the effect of the “stem cell scandal” that broke out after that South Korean scientist was caught red-handed doing something not that different from all this spin. Perverse incentives should be avoided at all times, no matter how many people are made uncomfortable.

    Those who wave the comparison with CPLEX in people’s faces are totally discredited, and clearly don’t know what they’re talking about. To be honest, those giving any value to that announcement should go back to undergrad school and revisit the properties of Hill Climbing vs. Branch & Bound.

    The comparison with Tabu Search is more interesting, nonetheless. As Scott says, the D-WAVE guys spent a ridiculous amount of money to achieve a slight speedup, which I suspect is due to the heuristics implicit in D-WAVE’s construction. I wouldn’t be surprised if, tested on a different benchmark against a “problem-agnostic” or “domain-independent” Tabu Search algorithm (and after spending another ton of cash to reconfigure the machine), such an advantage would vanish.

  48. Scott Says:

    OK, one more comment before I go to sleep. I find it amusing that Geordie is now trying to spread the meme that I’ve been “wrong exactly 100% of the time about everything.”

    I admit that I’ve been wrong in underestimating human credulity and herd behavior: for example, I wouldn’t have predicted in 2007 that D-Wave would be selling multiple devices for tens of millions of dollars before demonstrating any speedup over classical computers in a fair comparison. And where I’ve been wrong on this or that technical issue, I’ve corrected myself as soon as I understood the error. But on the main scientific issues, where exactly have I been wrong? I pointed out, for example, that D-Wave hadn’t yet demonstrated entanglement in their qubits, let alone a quantum speedup (contrary to what was being claimed in the press), and that it was very important for them to do these sorts of sanity checks. Now that it looks like they finally have demonstrated entanglement, are my earlier statements revealed to have been incorrect?

  49. Pete Says:

    Scott, it seems from McGeoch-Wang that on QUBO they did test Tabu and AK as well as CPLEX, but CPLEX was the only one aside from Rainier that found any optimal solution whatsoever. So your statement that “an exact solver can’t compete against quantum annealing—or for that matter, against classical annealing or other classical heuristics!” doesn’t make sense here. CPLEX did the best out of the other solvers, not the worst.

  50. D-Wave: Aaronson stands his ground. | Gordon's shares Says:

    […] Link. He persuades me. […]

  51. dan miller Says:

    Lurker here, just hopping in to say that what you fear, the lemmings jumping off the boat as quickly as they jumped on, is exactly correct. It’s a healthy, justified fear.

    The antidote? Slowly, meticulously educating the smartest of the pointy-hairs — the ones with some intellectual chops who maybe just got swept up in the hype. Once they are educated to the subtle realities, they may be resistant to the urge to simply jump ship when the wind turns. Some of them — maybe very few — will stay on for the long haul.

    And guess what? Those few, who were carried in by the wave of hype, but stayed for the reality of progress — those are the ones you really want. Need, in fact — to help you complete the mission the long way around.

  52. D-Wave Quantum Computer - Page 2 Says:

    […] d-wave is in the news again. Research Blog: Launching the Quantum Artificial Intelligence Lab But temper your excitement with a bit of skepticism – D-Wave: Truth finally starts to emerge […]

  53. Neil Johnson Says:

    I am not enough of an expert on QC to enter the debate on the technical merits of the D-Wave machine, but I was amused to see Bo Ewald (Civil Engineering major) named to run another tech company. After he became CEO at Cray, it nearly failed. After he later became CEO at SGI, it went on to fail. Then he went on to E-Stamps, which also failed. Then he joined Linux Networks, which again went on to failure. Perhaps D-Wave may be his chance to shine, but the record so far is clear.

  54. Septimana Mirabilis – Major Quantum Information Technology Breakthroughs | Wavewatching Says:

    […] 2: Scott Aaronson finally weighs in, and as Robert Tucci predicted in the comments, he resumed his sceptical […]

  55. Henning Dekant Says:

    “If this had been advertised from the start as a scientific research project …”

    Well, but that’s where the rubber meets the road. It isn’t; the company needs to raise funds, and the way to do this is by painting a very big picture and feeding everything through a megaphone.

    So I come down on the side of your physics friends, although I can see your point. My stance is that such blowback can be mitigated by educating people on all the various different approaches to QC.

    Anyhow, no worries …

    “… I’ll probably bang my head on my desk until I sustain so much brain damage that I no longer care either. But at least I’ll still have tenure, and the world will have quantum computers.”

    Kinda the spirit, but with less concussion please 🙂

  56. blazespinnaker Says:

    Interesting, apparently Geordie removed the part where he says:

    “Yes Rainier could be beat by dedicated solvers — it was really slow! Vesuvius can’t (at least for certain types of problems).”

    from his blog comment.

    Does this mean that he’s actually not sure that Vesuvius can’t be beat? Or simply doesn’t want to risk taking that stand yet?

    Perhaps Scott’s challenge made him a bit nervous.

  57. blazespinnaker Says:

    Laugh, was just reading this on the New York Times:

    “In tests last September, an independent researcher found that for some types of problems the quantum computer was 3,600 times faster than traditional supercomputers. ”

    Supercomputers? Ahhh…

    http://bits.blogs.nytimes.com/2013/05/16/google-buys-a-quantum-computer/

    I guess the fact-checking department at the NYT Bits blog was absent today.

  58. Kumar Says:

    I’d never heard of D-Wave until this morning, and the news reports looked rather suspicious. I landed on your blog after doing a Google search.

    It would be really great if you could write an article on this topic on your blog, providing some context. Or, if you or somebody else has already written such an article, please share a link with us.

    It is great that professors like you have a blog. People who are not computer scientists, but are a little tech-savvy, can make good use of this information source.

    Thanks!

  59. Henning Dekant Says:

    Kumar #58, Scott already did quite a number on D-Wave’s brand equity. I blogged about it here; that’ll give you pointers to some of the back-story. Since I am a huge fan of both Scott and D-Wave, I was happy that the hatchet was buried a while ago, and I hope that, despite this dust-up, it’ll stay that way.

    Scott has of course every right to ask the question that he is raising; I just hope it’ll yield constructive results rather than another round of tribal identity affirmation (academic QC CS versus enterprising QC pragmatists).

    Full disclaimer: I am not affiliated with D-Wave, nor am I a CS academic, or an academic of any sort (I hold a physics degree though).

  60. Rahul Says:

    @Henning Dekant #59:

    Can you elaborate on what made you a fan of D-Wave?

    Knowing that you have a physics degree, I wanted to learn more about what attracts you to D-Wave.

  61. Eliezer Yudkowsky Says:

    I was wondering what was going on. I mean, not too strongly, but I was wondering what was going on. Thanks for clearing it up!

    Also, the field of QC will be announced to have been disproven when D-Wave fails, regardless of anything you do. I’m sorry to have to be the one to tell you this.

  62. Robert Rand Says:

    Scott,

    I think it’s worth distinguishing between the two types of what you refer to above as “sociological replies”, one of which strikes me as very valid, and the other not valid at all.

    For the “Argument from Consequences” there is a very simple response: “I’m a f***ing scientist you @#$%^&!!!” Or, if you prefer “I like to tell people the truth and this is a good opportunity to educate people about my field”. We do not all have to be PR machines.

    The “Argument from Google” is a very different one, albeit one that should be far more compelling to your readers than to you. “Why is Google doing this?” is not a question of “peer pressure”, it’s a serious question. You basically acknowledge as much when you say that you’d take account of Umesh Vazirani asserting that D-Wave machines demonstrate asymptotic speedup. And that’s not because you’re afraid of Umesh, it’s because you respect him as a computer scientist, and if he’s convinced of something, that means it’s worth giving it another look. (And in your case, you could still be justified in saying that it’s nonsense, but virtually none of your readers are in a position to say the same – they have neither the time nor ability to do the research you’ve done.)

    Google and NASA are not simply giant bags of money. I know very little about Computer Science and Quantum Mechanics research at NASA, but people assume they have credible scientists in almost all areas. And if Google were throwing a tremendous amount of resources at a certain problem in Machine Learning, I’d take a second look at it. Besides Peter Norvig and Sebastian Thrun, two of its public faces, Google hires countless Machine Learning researchers from the best groups in the country. And given the size and scope of Google Research, there’s no reason for me to expect that they don’t have experts on Quantum Computing. (And there’s no reason to think they don’t employ some of the best people in the field: Microsoft employs some of the best people in PL.)

    So there are a few answers that you can give:

    The easiest is: Google doesn’t have anyone with expertise in the field. Here’s your list of experts, you won’t find anyone from Google.

    Alternatively, you can acknowledge that a problem exists, for the questioner: “Look, Dr. Einstein is a friend, and a very intelligent man. But if you read through everything I’ve written, and everything Heisenberg etc. have written, you will recognize that Quantum Mechanics is complete, and Dr. Einstein’s objections are unfounded. You probably will not be able to do all this, in which case you will have to await a scientific consensus. That has no bearing on the position I’m advancing, however.”

    I think there are simpler answers: This is a small investment for Google, it may well be a shot in the dark, and they may reject many of D-Wave’s claims but have hope for the device nonetheless.

    But it’s not a vacuous question.

  63. Rahul Says:

    @Robert Rand #62:

    Some more possible answers:

    (1) If you are a manager at Google, you do a cost-benefit analysis. Say D-Wave turns out to be a dud: we wasted a few million on it, no big deal, nobody will ever notice. OTOH, if this turns out to be the next big thing and we lag behind, I’ll have hell to pay.

    (2) Google’s smart decision-makers see through the current PR hype but see potential in D-Wave’s staff, or have been told confidentially about something in the works that will be novel and a game-changer. So pumping in money is a way to nurture D-Wave, so that something potentially good might come out of it.

    (3) Someone like Lockheed might consider this purely a strategic PR decision: cheap publicity with which to awe their stakeholders. “Look, we are on the cutting edge!”

  64. Joe Fitzsimons Says:

    Am I the only one who thinks that Fig 2D of arxiv:1304.4595 looks like a weighted sum of the distributions in Fig 2A and Fig 2B? And isn’t that exactly what we would expect for a machine with finite coherence length?
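
    (For anyone who wants to check that eyeball test quantitatively: here’s a minimal sketch of fitting the best convex combination w·A + (1−w)·B to D by least squares. The arrays below are made-up stand-ins, not digitized data from the paper.)

      import numpy as np

      # Made-up stand-ins for the digitized histograms of Figs 2A, 2B and 2D
      # of arXiv:1304.4595 (each normalized to sum to 1). Replace these with
      # real digitized bin heights to run the actual test.
      A = np.array([0.30, 0.20, 0.10, 0.05, 0.02, 0.01, 0.02, 0.05, 0.10, 0.15])
      B = np.array([0.01, 0.03, 0.08, 0.15, 0.25, 0.25, 0.12, 0.06, 0.03, 0.02])
      D = 0.6 * A + 0.4 * B  # pretend "Fig 2D"

      # Minimizing ||D - (w*A + (1-w)*B)||^2 over w has a closed form:
      diff = A - B
      w = float(np.dot(D - B, diff) / np.dot(diff, diff))
      w = min(max(w, 0.0), 1.0)  # clip to a valid mixture weight

      residual = np.linalg.norm(D - (w * A + (1 - w) * B))
      print(f"best-fit weight w = {w:.3f}, residual = {residual:.3g}")

    A small residual at some intermediate w would support the “weighted sum” reading; a large one would argue against it.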

  65. Marcin Says:

    This version made me particularly sad. http://www.bbc.co.uk/news/science-environment-22554494

  66. Dan Brown Says:

    In my new novel, The D-Vinci Code, MIT complexologist Aaron Scottson fights an international conspiracy of dunces in a race against time to prevent the activation of a Canadian quantum computer, Vitruvius – the culmination of centuries-old plans by the shadowy secret society Algorithmica, who dream of a new order in which the average person could be “Mozart in the morning and Gauss in the afternoon, without ever becoming musician or mathematician”. In pursuit of their utopian scheme, they care nothing that Vitruvius will destroy privacy and national security, and collapse the world economy and the polynomial hierarchy. To see Scottson’s next step in the war against this ruthless and ingenious foe, stay tuned to your local science tabloid!

  67. wolfgang Says:

    Marcin #65
    >> This version made me particularly sad

    I noticed this description:

    “Effectively, it can try all possible solutions at the same time and then select the best”.

  68. Chris Says:

    Couple of good articles out today and yesterday.

    If I’ve understood correctly, the D-Wave chip basically solves Quadratic Unconstrained Binary Optimization (QUBO) problems, which are “not exactly in CPLEX’s sweet spot”. Apparently CPLEX excels once constraints are introduced, but otherwise regularly has its ass kicked by special-purpose algorithms.

    Also pointed out is that, in general, the QUBO problem must first be mapped onto a Chimera graph – an instance of the subgraph isomorphism problem. It seems like a bit of an issue for “compilation” to be an NP-complete problem, even though certain instances can always be solved in practice.
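
    (For readers who haven’t met QUBO before: an instance is just a real symmetric matrix Q, and the goal is to find a binary vector x minimizing x^T Q x. Below is a minimal simulated-annealing sketch on a random toy instance; it’s only meant to illustrate the problem format, and bears no resemblance to Troyer’s optimized code or to D-Wave’s native Ising instances.)

      import numpy as np

      rng = np.random.default_rng(0)

      # Random toy QUBO instance: minimize x^T Q x over x in {0,1}^n.
      n = 32
      Q = rng.normal(size=(n, n))
      Q = (Q + Q.T) / 2  # symmetrize

      def energy(x):
          return x @ Q @ x

      # Plain single-bit-flip simulated annealing with geometric cooling.
      x = rng.integers(0, 2, size=n)
      E = energy(x)
      best_E = E
      T = 2.0
      for sweep in range(500):
          for _ in range(n):
              i = rng.integers(n)
              x[i] ^= 1                  # propose flipping one bit
              dE = energy(x) - E
              if dE <= 0 or rng.random() < np.exp(-dE / T):
                  E += dE                # accept the flip
                  best_E = min(best_E, E)
              else:
                  x[i] ^= 1              # reject: flip the bit back
          T *= 0.99                      # cool down
      print("best energy found:", best_E)

    Serious annealing codes compute dE incrementally instead of re-evaluating the full quadratic form each flip, which is part of how an optimized annealer on a laptop gets its speed.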

  69. Scott Says:

    Robert Rand #62: Regarding the question of why NASA, Google, and other prestigious organizations are putting all this money behind D-Wave—I agree that it’s an extremely interesting question, and I don’t claim to understand all the sociological forces at work. However, I can tell you three things that might be relevant to your question:

    1. There’s a single individual at Google—Hartmut Neven—who came to Google by way of an acquisition, and who’s the one who’s been pushing D-Wave hyper-aggressively there for years. If I were a Google exec, and if everything I knew about quantum computing were filtered through Hartmut Neven, I’d be pretty gung-ho about D-Wave too!

    (Come to think of it, if the Google execs ever do decide they’d like to search the Internet for other perspectives on D-Wave besides Hartmut’s, I know a product that would be perfect for them… 😉 )

    2. Rahul #63 makes an extremely relevant point: that the results of a business cost-benefit analysis can look totally different from the results of an academic “truth analysis.” Imagine yourself in command of a $100-billion empire, when one of your subordinates comes babbling to you about a new startup that’s building interdimensional flux quantum jigga-transponders. A perfectly reasonable response would be: “I don’t understand this, I don’t have time to understand it, in fact it sounds a bit fishy, so what the hell, throw $10 million at it just in case. Next agenda item!”

    3. Regarding government agencies, I got a somewhat-disturbing look at some of the forces at work when a US military official once cornered me at a conference, to ask my opinion about whether the military should be investing in a D-Wave device. While not giving him a flat “no,” I explained various reasons for skepticism, some common misconceptions, things D-Wave had yet to demonstrate, etc. (the same stuff I’d been saying on my blog). The official listened attentively, but then finally interrupted me to say:

    “Well, if not D-Wave, then who is now building a quantum computer that we could buy?”

    I answered this question with a question: “supposing you had a quantum computer, what would you use it for?”

    The official didn’t have an answer to that question, and didn’t even seem to have considered it much. As far as I could tell, quantum computing really was just a “new, shiny, exciting thing” that his agency had to be seen as getting on board with, for fear of being left behind.

  70. Rahul Says:

    Scott #69:

    Hartmut Neven is Director of Engineering at Google (or Google Research?) though. Sounds pretty high up.

    Are we sure it is only one guy promoting D-Wave at Google, or is Hartmut merely the public face of it all, since he happens to be Director of Engineering? (I’m playing devil’s advocate here for a bit.)

    In any case, Hartmut hardly seems a QC guy. Wikipedia lists his turf as “computational neurobiology, robotics and computer vision.” Google Research very likely has a larger team of core QC people: they were probably sold on this before the proposal went to Hartmut.

    OTOH, another possibility is that Hartmut is playing the pure PR hype game. Note that Hartmut’s last two assignments were as CEO and CTO. So it’s likely he’s wearing more of a pure manager hat than a scientist / engineer / research-manager hat.

  71. blazespinnaker Says:

    There’s another, I think simpler, explanation: Vesuvius can actually solve (provably, and very quickly) certain types of AI problems that are very interesting to Google.

    Perhaps D-Wave can take that $10M and develop their tech to build something even faster (maybe using the gate model that Geordie denigrated).

    Even Scott admits that’s possible. It’s a gamble, but most investments are, and I doubt Google cares about which quantum approach is used, as long as it works.

  72. Rahul Says:

    @blazespinnaker #71:

    What sort of practical AI problems does Vesuvius solve quickly, and how does it compare with classical computational ways of solving these problems?

    The machine might be more interesting than I initially thought….

  73. Wayne Tedrow, Jr. Says:

    I wish somebody from D-Wave would defend the company on this post. Whenever Scott talks about D-Wave, all I can think is “sour grapes.”

  74. Ian T. Durham Says:

    Scott: Minor correction. Geordie jacked the price up 50%. It’s now $15 mil and not $10 mil. 😉

  75. blazespinnaker Says:

    Rahul check this out: http://googleresearch.blogspot.ca/2013/05/launching-quantum-artificial.html

    There’s no debate that what D-Wave is trying to do is very important. And now, thanks to McGeoch-Wang, we know that they can actually do it very fast.

    The only questions that remain are:

    a) Will its quantum behavior provide any significant speedup over classical computing?
    b) If it does, will D-Wave have to use the gate model that Geordie called “rotten” to get that speedup?

  76. Scott Says:

    Wayne #73: If you actually read the comments, you’ll see that Geordie did (briefly, dismissively) respond, and I responded to his response. Though if all you can think is “Sour Grapes,” then maybe you should reread the post from the top, keeping your eyes peeled for the actual arguments?

  77. Scott Says:

    Ian #74: Thanks, correction acknowledged! (Though maybe Geordie would offer a 33% discount to the Messiah in his flaming chariot? 😉 )

  78. Davide Says:

    Scott, I did not analyze Troyer’s work deeply; however, I would like to ask you for a clarification which is somehow connected to the “third counterargument”. Isn’t Troyer’s optimization, and his no-speedup result, restricted to a very particular class of problems addressable with the D-Wave machine (i.e., couplings of ±1, a non-planar graph…)?
    Or does he have an indication of a no-go theorem?

  79. Henning Dekant Says:

    Scott #48, in all fairness I think Geordie’s “100% wrong” comment has to be taken in the context of the thread, as “100% wrong on D-Wave.”

    Nobody can be 100% wrong on everything even if they try 🙂

    Also I’ve been told you only get tenure at MIT when you are consistently wrong about 80% of the time 😉

  80. Henning Dekant Says:

    Rahul #60, here are, in no particular order, some of the things that I find attractive about D-Wave (sorry that this is not better organized – I am a bit hard pressed for time right now).

    1) Josephson junctions have always been my favorite quantum system 🙂
    2) The engineering that went into leveraging existing lithographic foundry techniques for the Nb chip integration
    3) Their pragmatic approach seems to me to be close in spirit to Feynman’s original idea of a quantum simulator (and of course I adore Feynman)
    4) The Ising model that D-Wave tried to implement was what suckered me into computing in the first place. It can be used to model Hopfield neural networks, and that’s how my path diverted from pure physics. So I find D-Wave’s claims that it could be leveraged for this kind of machine learning quite plausible.
    5) The departure from silicon. Obviously related to (3) – it’s just exciting to see a commercial computing device that runs on something other than a semiconductor.
    6) The adiabatic nature of the machine. Obviously that’ll hold more or less for all QC approaches, but I think it can’t come a moment too early. Big data is a reality; in my view the demand for computing is beginning to outpace Moore’s law, and power consumption for our data centers is becoming a really pressing issue. At this point, if D-Wave’s machines could already solve problems with an overall smaller CO2 footprint than conventional machines, then they are already a winner in my book.

  81. Debbie Says:

    Scott, thanks for your writing here. It seems that your endless conversations with journalists aren’t totally wasted:
    http://www.economist.com/news/science-and-technology/21578027-first-real-world-contests-between-quantum-computers-and-standard-ones-faster?fsrc=rss|sct
    Debbie

  82. wolfgang Says:

    #70 “Are we sure it is only one guy promoting D-Wave at Google”

    Well, there is this Pontiff guy who used to be in QC research and now works for Google. He wrote about Ising models on his blog…
    I wonder what kind of device he is working on in Google’s basement …

    ps: I’m kidding.

  83. mkatkov Says:

    Dan Brown #66 ++

  84. Josh Says:

    As an occasional reader who just happened to find this discussion going on, I have a pretty naive question: What is the meaning of a “100-qubit quantum computer” vs. a quantum computer with a different number of qubits? Is it analogous to the difference between 64-bit and 32-bit classical processors? It seems like it’s not, since there isn’t necessarily any reason to expect performance to scale with the length of a memory address you can handle.

  85. Rahul Says:

    If I were one of those Journalists bugging Scott here’s one question I’d ask:

    D-Wave seems to have filed (and been granted?) a lot of patents. What do you think about those?

    https://www.google.com/search?q=inassignee:%22D-Wave+Systems,+Inc.%22&tbm=pts&ei=T3SWUYrAK8fmrAf6qoGoAQ&start=0&sa=N&biw=1280&bih=681

    At the very least, is that good evidence that they have something novel and non-obvious up their sleeve? Or not? Or have they fooled the patent office like the rest of us?

    Or is it novel yet not useful? (Yes, I do realize only a very small percentage of filed patents are even faintly useful.)

    Comments?

  86. Rahul Says:

    Henning #80:

    As a non-expert, here’s a naive question about your point 6): One of the documents on the D-Wave website says: “Pulse Tube fridge requires ~15 kW supply power”.

    Am I misinterpreting this? That sounds like a lot of power for what it’s doing. Or is this a one-time startup power requirement? What were you basing your energy comparison with conventional computing on?

    Or do you expect power to scale favorably as it grows larger?

  87. Douglas Knight Says:

    blazespinnaker #56: probably he edited it not to retract the positive claim, but to retract the negative claim. Why should he admit that Rainier is slow, just because it’s been proved?

  88. Henning Dekant Says:

    Rahul #86, I expect most of the power to be consumed when the thing is being cooled down. I don’t know their power consumption during operation, nor the set-up power to get there, so my speculation is based on the adiabatic nature of the box. I hope we will get some papers in the not too distant future that shed light on this aspect of the machine.

    Because the chip’s state develops adiabatically when it’s processing a problem, I’d expect the thing to scale better with regard to power consumption than conventional hardware. This will of course factor into the TCO considerations and be an equally important benchmark.

    I would also expect that a machine upgrade will mostly require swapping out the chip, so that the large upfront investment for the cooling apparatus is a one-time charge. Obviously I am not privy to D-Wave’s pricing strategy (nor am I certain about the differences in cooling requirements for different chip generations), but that’s the kind of pitch I’d be going for.

  89. Henning Dekant Says:

    Scott #48: I thought of a simple proof that shows that Geordie’s statement that you are “100% wrong on everything” is clearly false:

    If he were right, I conjecture you could not possibly tie your shoelaces correctly in the morning. With shoes tied together, you’d always come late to class, or would have to appear barefoot.

    Both conditions, barefoot lecturing as well as notorious tardiness, would have prevented you from gaining tenure at MIT.

    Hence Geordie is clearly wrong on this. (eom) 🙂

  90. Henning Dekant Says:

    Josh #84, due to entanglement, the state space of a true QC grows exponentially with the number of qubits, so its power has the potential to scale exponentially as well.

  91. blazespinnaker Says:

    Douglas@87, perhaps. Anyways, TBH I wouldn’t be surprised if we could beat Vesuvius with a classical solver. Nor would I particularly care.

    The question really is: can we beat Vesuvius’s descendants?

    Henning@80, sure solving the power problem is nice, but really don’t we want a fundamental inflection point in solving these problems?

    For big N, it’s just an evolutionary step (low power consumption or not) unless we can fundamentally alter the nature of the computational complexity of the problem via quantum behavior.

    Perhaps if we weren’t solving a narrow set of problems, an evolutionary step would be cool, but because the problem set IS so narrow, I think we need a leap forward to be interesting.

  92. LZ Says:

    Anyway, even if (IF) the D-Wave One could outperform every possible classical algorithm, with that topology you probably can’t encode much of anything useful: 128 qubits are not so few, but D-Wave’s graph is pretty sparse. If the problem you want to solve needs a lot of interactions between the qubits, then it is very likely that you’ll end up wasting A LOT of qubits as “couplings”, and so you’ll be able to encode only instances of, say, 15 variables.
    And I don’t know how they want to scale up the architecture, but if they continue stuffing the plane with K_{4,4} graphs, I don’t know how much effective scaling they’ll achieve.
    I know this is probably a much simpler problem to deal with than, for example, maintaining entanglement for the entire evolution, but it is another argument showing that today, with this architecture, you can’t do anything very useful at all.
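
    (To put rough numbers on this: if I’m remembering the standard clique-embedding construction correctly, a c×c Chimera graph, with 8c² physical qubits, can host a fully connected problem on about 4c+1 logical variables, using chains of roughly c+1 physical qubits each. Treat that formula as a recollection of the minor-embedding literature, not gospel; under that assumption, the back-of-envelope count looks like this.)

      # Logical (fully connected) variables vs. physical qubits on a c x c
      # Chimera graph, ASSUMING the clique embedding K_{4c+1} -> C_c with
      # chains of length c+1 (a recollection -- check the literature).
      for c in [4, 8, 12, 16]:
          physical = 8 * c * c    # qubits in C_c
          logical = 4 * c + 1     # largest all-to-all problem that fits
          chain = c + 1           # physical qubits per logical variable
          print(f"C_{c}: {physical:5d} qubits -> K_{logical} (chains of {chain})")

    So the 128-qubit chip (c = 4) tops out around 17 fully connected variables, consistent with the “say, 15” above, and the quadratic overhead persists as c grows.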

  93. blazespinnaker Says:

    It’d be interesting if there were an open-source project using Amazon GPU clusters to simulate D-Wave. Perhaps Matthias could be convinced to release his work…

  94. Scott Says:

    Josh #84:

      As an occasional reader who just happened to find this discussion going on, I have a pretty naive question: What is the meaning of a “100-qubit quantum computer” vs. a quantum computer with a different number of qubits? Is it analogous to the difference between 64-bit and 32-bit classical processors?

    Yeah, that is a pretty naive question! 🙂 (But don’t worry, we all start that way. When I was a kid, I imagined “32-bit” meant that each pixel on the screen was 1/32×1/32 inch!)

    The meanings are indeed completely different in the two contexts. With a classical computer, “32-bit” means that the millions or billions of bits you have are organized into chunks of 32 each. With existing quantum computers, by contrast, “32-bit” means that the entire quantum computer literally has only 32 bits—or rather, 32 qubits (!). Of course, there are many more bits in the classical computer that’s controlling the QC.
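
    (To make the difference concrete: a classical simulation that naively stores the full state of n qubits needs 2^n complex amplitudes. A quick sketch of the memory involved:)

      # Memory to naively store an n-qubit state vector:
      # 2^n amplitudes at 16 bytes each (complex128).
      for n in [10, 32, 64, 100]:
          amps = 2 ** n
          print(f"{n:3d} qubits: {amps:.3e} amplitudes, ~{amps * 16:.3e} bytes")

    Already at 64 qubits you’d need hundreds of exabytes just to write the state down, which is one way to see why qubit counts and bit counts are not remotely comparable.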

  95. Douglas Knight Says:

    Rahul, here Rose seems to be pretty explicit that the 15kW is operating power, not initialization power. Other documents claim that they operate at 10 mK, so of course they need lots of power. But in the first link he says that, yes, it will scale well: the same 15kW fridge will work for the foreseeable future.

    Henning, thank you for your honesty.

    blazespinnaker, if you don’t care, why did you say Rose’s comment was interesting? Why did you ask about his edit?

  96. Stephan Says:

    Rahul #70: As near as I can tell, as someone somewhat hooked into the academic and SF Bay Area quantum computing network, Hartmut is indeed the main proponent of quantum computing at Google. Google has indeed hired some quantum computing experts, but the person I know (an excellent scientist) started last month, at which point buying the D-Wave machine was already decided. I don’t think they’ve had a team of quantum computing experts for very long.

    At the least, I can tell you that they had no experts a few years ago when they hosted the (somewhat shameful) “Google Workshop on Quantum Biology” that I attended. If they had had the quantum computing experts they have on staff now back then, they would not have invited Stuart Hameroff and his ilk to spout their pseudo-science about quantum consciousness.

  97. Douglas Knight Says:

    For comparison, here is classical power consumption: when McGeoch-Wang failed to beat D-Wave, they used 7 chips, each 80W (not including RAM, etc.). The many-author group that beat it used a 240W GPU. This was 5x as fast as an 80W CPU.
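
    (Filling in the arithmetic, with the caveat that these are the rough nameplate figures quoted in this thread, not measured energy-to-solution numbers:)

      # Rough power comparison, using the approximate figures quoted upthread.
      dwave_fridge_w = 15_000    # "Pulse Tube fridge requires ~15 kW"
      cpu_setup_w = 7 * 80       # McGeoch-Wang: 7 chips at ~80 W each
      gpu_w = 240                # the GPU used by the group that beat it

      for name, watts in [("D-Wave fridge", dwave_fridge_w),
                          ("7 x 80W CPUs", cpu_setup_w),
                          ("240W GPU", gpu_w)]:
          print(f"{name:14s}: {watts:6d} W  (fridge draws {dwave_fridge_w / watts:.0f}x this)")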

  98. Scott Says:

    Henning #89:

      I thought of a simple proof that shows that Geordie’s statement that you are “100% wrong on everything” is clearly false:

      If he were right, I conjecture you could not possibly tie your shoelaces correctly in the morning. With shoes tied together, you’d always come late to class, or would have to appear barefoot.

    Alas, while I agree with your conclusion, the following consideration shows that there must be a flaw in your proof: I only ever wear slip-ons, with no laces! I hate shoelaces. 🙂

  99. Peter W. Shor Says:

    Hi Scott,

    Re your argument

    In more detail, Matthias Troyer’s group spent a few months carefully studying the D-Wave problem—after which, they were able to write optimized simulated annealing code that solves the D-Wave problem on a normal, off-the-shelf classical computer, about 15 times faster than the D-Wave machine itself solves the D-Wave problem! Of course, if you wanted even more classical speedup than that, then you could simply add more processors to your classical computer, for only a tiny fraction of the ~$10 million that a D-Wave One would set you back.

    This is exactly like arguing that if you look at the Wright Brothers’ first flight at Kitty Hawk, they could have gone farther, faster, and much more cheaply if they had just used an automobile. It’s just not the right comparison. D-Wave’s money was not spent only to build this current device; you have to consider that from their viewpoint, it’s just one step on the pathway to a much more complicated and useful device.

    The real question is whether the D-Wave Five (or Six) quantum annealing computer to be built by D-Wave several years from now will be able to solve useful problems faster than a classical machine. I don’t know the answer to that question, but that is the question you should be asking.

  100. Douglas Knight Says:

    If you had $10 million to bet on the future of quantum computers, would it be a better investment to buy equity in D-Wave, or to buy a pulse-fridge sculpture that you can call “the first quantum computer,” regardless of whether D-Wave succeeds?

    The Wright Brothers’ plane is a lot more valuable today than their company.

  101. Scott Says:

    Peter #99:

      This is exactly like arguing that if you look at the Wright Brothers’ first flight at Kitty Hawk, they could have gone farther, faster, and much more cheaply if they had just used an automobile.

    I respectfully disagree. The Wright brothers’ plane definitely did something that a car couldn’t do: namely, it flew through the air on its own power. To make your analogy work, I think we’d need to imagine that the plane never left the ground, but the Wright brothers and their media supporters eagerly pointed out that
    (a) the plane did drive across the ground faster than a car whose engine had been removed and that had to be pushed from behind, and
    (b) Henry Ford and Andrew Carnegie are pouring money into this venture, so why isn’t that reason enough to accept that the plane is either flying now or will be flying soon?

    Now, faced with such terrible arguments, I think it would be understandable (but wrong) if the skeptics overreacted, and said that the Wright brothers will obviously never succeed, that nothing they say should ever be taken seriously again, etc. etc. And yet, in this analogy, that’s not what I’m saying at all! All I’m saying is that, if the Wright brothers are ever to succeed, then they’ll first need to solve what have long been recognized by experts as the technical problems of flight (in their case, 3-axis control, a problem that they did solve; in D-Wave’s case, probably quantum error-correction). It’s interesting that even the “pro-D-Wave” Daniel Lidar completely agrees with my assessment there. Yet Geordie conspicuously doesn’t! Had he been in the Wright brothers’ shoes, I could imagine him saying that the “so-called experts’” obsession with 3-axis control was silly and misguided, since all you really need to fly is to tie some wings to your arms and leap off a cliff.

  102. Gabriel Says:

    Scott has clearly discovered an algorithm that can create anti-D-Wave arguments longer than the lifetime of the universe.

    Or if you do give a good argument, he will respectfully disagree and give some fatuous alternatives to your argument that he can make fun of.

  103. Henning Dekant Says:

    Scott #98, thank you for pointing out this loophole in my proof.

    I thought I had to go back and try to come up with a completely different approach, but then I realized that we can at least salvage a partial result from this exchange.

    You correctly pointed out the flaw in my reasoning. So we can establish that you got that right, and so even under Geordie’s extreme assumption that you are always wrong 100% of the time, going forward we can now clearly state that your error rate will be lower than unity.

    Now, I am convinced that this is symmetric under time reversal; I just need to formalize this hunch …

  104. Henning Dekant Says:

    Douglas Knight #95, not quite sure what to make of your compliment.

    Don’t really have a dog in this race. Well, I have a bet going that D-Wave will eventually outperform conventional gear, but I played my bet a bit smaller than Scott: I only have some maple syrup to lose 🙂

  105. Josh Says:

    Scott #94, thanks! That clarifies things immensely!

  106. asdf Says:

    Hype aside, it sounds like this thing is not a useful computation device per se, but it’s arguably of value as experimental lab equipment with which interesting discoveries can be made.

    Analogy: say they instead made a hype-free announcement of a gate-based QC with 80 qubits, that they can demonstrate running Shor’s algorithm unreliably, i.e. they can factor 20-digit numbers with it, usually on the first or second attempt. Each attempt involves spending several hours charging up the flux turbo-capacitors and consuming a few liters of liquid helium before the computation even starts, so it’s not competitive with classical computers. They are offering these machines for $10 million each. The MIT physics department and CSAIL are interested in going in on one to play with: is this a good use of MIT’s research funds? I don’t know for sure, but it sounds plausible to me.

  107. Greg Kuperberg Says:

    Hell no it wouldn’t be a good use of MIT’s research funds, unless you think that D-Wave by itself is more important than all other QCQI work at MIT put together. D-Wave might well agree with such an implication; I sure wouldn’t.

    Google is a different case. If you’re the director of engineering at Google, then $10 million isn’t all that much money. If they throw away $10 million, then they can call it a long shot that didn’t work. If my math department threw away $10 million, it would be a devastating scandal.

  108. Scott Says:

    asdf #106: Yes! I tried to make it clear that if D-Wave could simply communicate honestly and accurately about what it had and hadn’t done, and if journalists could in turn report about it honestly, then I’d have nothing further to complain about. Trying to do quantum annealing with ~100 qubits is great considered purely as a physics experiment.

    On the other hand, there’s one enormous difference from your hypothetical scenario involving the use of Shor’s algorithm to factor 20-digit numbers. Namely, with Shor’s algorithm, at least we can be confident that there’s a huge asymptotic speedup over any known classical algorithm to be had. Whereas in the D-Wave case, whether you can ever get an asymptotic speedup over classical using stoquastic quantum annealing is precisely the question now at issue. We don’t have any strong theoretical evidence that you can get a speedup, and the existence of the Quantum Monte Carlo algorithm provides some evidence that you can’t. So it’s really a question to be decided by experiment. Which, again, makes me happy that someone is doing the requisite experiment, but is an extremely different picture than the one currently making it into the press!

  109. Greg Price Says:

    Just to correct one point that has confused Greg Kuperberg #107 as well as Rahul #70: Hartmut Neven is not *the* Director of Engineering at Google, he is *a* Director of Engineering.

    In Silicon Valley, “Director” is a generic middle-management title. At a rough estimate (based on working a summer there years ago), Google has one Director for every 50-100 engineers. At another rough estimate, that makes about 100-400 Directors of Engineering. Neven probably reports to a VP, who reports to a Senior VP, who reports to the CEO; perhaps one more layer than that.

  110. John Sidles Says:

    Here are two Fermi calculations:

    ————————-

    Fermi Estimate 1: The Valuation of Lustre

    Google’s market capitalization of $300B, discounted at 6% per annum, is about equal to NASA’s yearly budget of $18B. Very broadly, the opportunity cost of keeping NASA and Google both running (as contrasted with investing in other enterprises) is about $40B per year.

    Let us suppose that “lustre” enhances the enterprise-value of NASA and Google by about 5%, in which case it makes sense for NASA and Google to invest about $2B per year in lustre sustainment.

    The annual cost of investing in D-WAVE’s computer is about 1/1000 of this value of lustre sustainment. So the break-even time for a D-WAVE investment to sustain the lustre of NASA and Google is about eight hours!

    Fermi Conclusion 1  D-WAVE was a terrific investment for NASA and Google!
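
    (In code, for the arithmetic-minded; every input is the rough figure assumed above:)

      # Fermi Estimate 1, spelled out.
      google_cap = 300e9      # Google market capitalization, $
      discount = 0.06         # per annum
      nasa_budget = 18e9      # NASA yearly budget, $

      opportunity = google_cap * discount + nasa_budget  # ~$36B/yr, call it $40B
      lustre = 0.05 * 40e9                               # 5% -> $2B/yr of lustre sustainment
      dwave_spend = lustre / 1000                        # ~$2M/yr on D-WAVE

      breakeven_hours = (dwave_spend / lustre) * 365 * 24
      print(f"break-even: ~{breakeven_hours:.0f} hours")  # prints ~9 hours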

    ————————-

    Fermi Estimate 2: The Valuation of Insurance

    In regard to quantum computing, let us hope for the best, yet prepare for the worst:

    Bad (Good?) News I  Quantum computing turns out to be infeasible in practice because spatially-localized, finite-dimensional, error-protected Hilbert spaces turn out to be incompatible with Nature’s true low-energy dynamics: QED field theory. Bad news (or is it?) for the QIT experimental community!

    Bad (Good?) News II  Quantum computing turns out to be infeasible in principle because the QED field theory of Bad News I turns out to be the algebraic “resolution” of an underlying varietal manifold that is the true state-space of Nature … a state-space that does not support quantum computing. Bad news (or is it?) for the QIT theory community!

    Bad (Good?) News III  Quantum computing turns out to be irrelevant in applications because the Moore’s Law pace of advances in classical computation hardware is sustained through 2050 and beyond. Bad news (or is it?) for D-WAVE!

    Fermi Conclusion 2  The outcomes of “Bad News I–III” are actually good news for QIT experimentalists, QIT theorists, and QIT businesses. D-WAVE is performing an exceptionally valuable service, in assisting the world to a more rapid appreciation that the worst news in STEM enterprises is not bad news, but rather no news.

    For which lesson, this appreciation and thanks are extended to D-WAVE *and* to Shtetl Optimized!

  111. Henning Dekant Says:

    Scott #108, your comment nicely illustrates why I see this mostly as a collision of cultures. You don’t get seed capital by proposing a physics experiment for the sake of scientific edification (as much as I like the latter). You have to sell a vision, a dream, and you have to believe in it in order to sell it well. Hype is wired into the DNA of this process.

    They may or may not have a truly useful device at this point, but they will never stop trying to soundly beat conventional computing as long as their funding lasts.
    (Of course it’ll be ironic if they incorporate quantum error correction to get there – but whatever it takes is fine by me).

    IMHO it’s fair to say they are much closer to proving their merits than you would have assumed when they first caught your attention.

  112. Scott Says:

    Henning #111: Yes, it’s a collision of cultures, no argument there! What’s striking to me is the number of people on the Internet currently tripping over themselves to make allowances for the hype-and-exaggeration-filled startup culture, yet who make no allowances whatsoever for the truth-valuing culture of science—and who utterly fail to see the irony in that. (You yourself are a welcome exception!)

    It’s true that D-Wave has made impressive engineering progress since 2007, and I’m happy to acknowledge that. But as for “proving the merits” of stoquastic quantum annealing as a means to getting quantum speedups … well, I’d say we’re closer now to finding the speedups if they’re there to be had, but also closer to ruling them out if they’re not there! As I said in the post, I’m bracing myself for bitter attacks on academic QC researchers as the most egregiously-wrong people in the history of humankind, whichever of those two outcomes turns out to be the case. 🙂

  113. Greg Kuperberg Says:

    Henning – Yeah, no kidding, some people are in the business of selling dreams.

    Other Greg – Thanks for the clarification, and you’re right, I didn’t interpret Neven’s appointment at Google correctly. Even so, I suspect that a director in charge of 50-100 engineers at Google is in charge of a lot, and has spending powers at a different scale from a university research group.

  114. asdf Says:

    Scott #108, thanks, that clears up some issues that hadn’t quite sunk in for me.

    Greg #107, hmm, I was thinking of this hypothetical $10M machine as competing with MIT buying another particle accelerator or supercomputer cluster or something like that. They do buy or build stuff like that regularly. Steady streams of new PhDs and journal publications, and the occasional Nobel Prize, emerge from the resulting research.

  115. Greg Kuperberg Says:

    asdf – Universities don’t pay for their own particle accelerators. Sometimes they pay for their own data centers and sometimes they don’t. This one would be very much in the particle accelerator category.

    I would not in good conscience advise the National Science Foundation to pay for one of these machines, neither for MIT nor for my own university. Just like Scott, I can believe that D-Wave’s research program might eventually be worthwhile. Nevertheless, it’s an expensive project with middling experimental results and a world record of hype, and I don’t see how it’s cost effective. (In fact, hype that at the beginning contradicted established theorems in mathematics.) There are other experimental groups who are doing great work, better than this stuff and with a lot less funding. (Which is not to say that they are poorly funded, just not at the level of D-Wave.)

  116. DIY Says:

    I was thinking of building a quantum computer in my garage, but something is putting me off.

    Even if D-Wave never builds a real quantum computer, will they be able to use their patent portfolio* to thwart anyone else’s efforts to build a real quantum computer?

    *see Rahul Comment #85

  117. Grasshopper Says:

    Re: Greg #115

    And, if I may ask, where do particle accelerators come from? Are they assembled from scratch by graduate students in their “spare” time, while paying for classes at MIT, etc.?

    It’s easy to diss D-Wave for “selling dreams”, and be proud of yourself for being part of a “truth-valuing culture”, but where did your funding come from? Taken from taxpayers (at gunpoint, some would say)? Or entirely from your own funds, or voluntary contributions? Why are you so eager to give advice on how other people should spend their money?

    At least D-Wave has taken entirely voluntary investments in the company, with no coercion whatsoever! Win or lose, it’s not your money or effort, so why are you so negative?

    And yes, it’s not as pure as it sounds: D-Wave did accept investments from banks that did benefit from government largesse, bailouts, and taxation-by-inflation, which *is* a moral problem for me — but I am not sure that *you* would consider that a “problem”…

  118. Greg Kuperberg Says:

    Grasshopper – I am well aware of the terminology of libertarianism, such as referring to taxes in general as armed robbery, since I used to be a libertarian. The terminology is dubious, but that debate would be off on a tangent from this thread.

    Even on its own terms, your argument doesn’t work. Because a good chunk of D-Wave’s bread and butter comes from Lockheed, and that’s fighter jet money. It comes from a fighter jet budget that could become the biggest boondoggle in the history of government contracting. If investors expect to be repaid by customers such as Lockheed, that’s just as much taken from taxpayers, “at gunpoint” to use your hysterical phrase, as any of my funding. In fact more so, since it’s more money.

    Beyond that, a free market is supposed to include a free market of opinion, right? How well does a free market work if all product reviews are nicey-nicey?

  119. Grasshopper Says:

    Greg #118:

    Thanks for replying as a human person! Honestly, it’s appreciated, a lot!

    I do think that healthy debate is very important and not to be squished by *any* means, indeed! Which includes the possibility of both sides of the debate keeping their pre-debate convictions! 🙂

    I would be curious to know what journey one would have to take to become a “former libertarian”, but this discussion does not belong on this particular thread, agreed!

    And I can see that you consider about 10% of D-Wave’s total funds over the life of the company coming from LM “a good chunk” — but it also has to cover the costs of actually delivering the good(s), right? So it’s not the same as a direct investment with a promise of more. A comparable chunk could also be called “eyeball impression money”, right?

    Unfortunately, we live in a world where only military/national-security types can afford “the first of everything” (as a side note, you probably were not entirely funded by the NIH or NSF, right?) — and among their suppliers LM is not the worst; see their carbon nanotube desalination plant, or fusion attempts…

    As to D-Wave investors’ exit strategy — who knows… 😉

  120. Greg Kuperberg Says:

    Grasshopper – Regardless of how D-Wave uses the money, Lockheed’s payment to D-Wave is at least $10 million, and that is in effect public money.

    As for myself, all of my research grants as a university faculty member have been from the NSF, except that some time ago I also held a Sloan.

  121. Scott Says:

    Rahul #85:

      D-Wave seems to have filed (granted?) a lot of patents. What do you think about those?

    I regard today’s patent system (at least in the US) as something between tragedy and farce: a way for swarms of patent trolls to brandish written descriptions of cringingly-obvious ideas, as a way to intimidate anyone else from ever doing anything to implement the ideas. The only purpose of having a patent system is to “encourage innovation,” but the system we have now does precisely the opposite—so much so that we might be better off were all patents abolished entirely.

    So from a scientific perspective, in some sense I’d be swayed even less by whatever patents D-Wave has than by its media coverage or its business deals.

    As for the practical impact of D-Wave’s patents, though, DIY #116 asks an interesting question:

      Even if D-Wave never builds a real quantum computer, will they be able to use their patent portfolio to thwart anyone else’s efforts to build a real quantum computer?

    I have no idea! If the answer were yes, though, then I confess that this is one way D-Wave could both repay its investors, and have a lasting impact on the future of quantum computing.

  122. Scott Says:

    Gabriel #102:

      Scott has clearly discovered an algorithm that can create anti-D Wave arguments longer than the lifetime of the universe.

      Or if you do give a good argument he will respectfully disagree and give some fatuous alternatives to your argument that he can make fun of.

    For the benefit of other readers, it might be worth pointing out that this argument-free comment was apparently posted by Gabriel Durkin, coordinator of the new NASA Ames Quantum Laboratory that’s receiving the D-Wave machine.

  123. Random Person Says:

    Anyway, can we get either slides or video of the talks? That would be nice.

  124. Rahul Says:

    To back up from the controversy a bit: Is there anyone else (in academia or industry) doing anything similar to what DWave is attempting?

    What’s the background of academic research in the area of quantum annealing experiments or adiabatic optimization? Who are the leading researchers in these specific areas and what are their opinions about DWave? Are there other SQUID based / non-gate / non-error-correcting attempts at QC?

  125. Rahul Says:

    I read that apparently 20% of the compute-time on the D-Wave machine purchased by Lockheed-USC was reserved for outside researchers.

    How come we are not seeing more publications / blog posts / presentations etc. about third-party runs on the D-Wave hardware? Is there some sort of embargo? Or did I miss these articles?

  126. Scott Says:

    Random Person #123: There was no video of the talks. If you want slides, you’ll have to ask Matthias and/or Daniel!

  127. John Says:

    Here is a recent Daniel Lidar talk about experiments on the D-Wave One.

  128. John Says:

    Here is another recent video of a talk about the D-Wave One given at Princeton University by two people from USC.

  129. Greg Kuperberg Says:

    In this USC talk linked at comment #127, Daniel Lidar is quite the Pollyanna. He freely admits to a lot of bad news about the D-Wave chip — like that it is incapable of any more than classical error correction. (Actually, I wouldn’t know if that’s true; that’s what the talk says.) But he keeps a serious and optimistic tone, and along with it the conviction that the whole expensive show is worthwhile.

    I won’t accuse him of saying anything false. He is clearly eager to give the truth, the whole truth, and nothing but the truth; he also knows more about D-Wave than I do. The problem is that his interpretation of the truth seems crazy to me. In this talk, he’s the kind of guy who, if he saw an expensive house on fire, might say “there is a lot of news about this estate for both good and bad reasons” or “this is an exciting day to be a volunteer firefighter”.

    If the first D-Wave box at USC cost $10 million, and if the second one is $15 million, then together that makes $25 million. Measured in dollars, this is the biggest quantum computing project that I have ever heard of. But bigger is not always better.

  130. Spotting Black Swans with Data Science and Quantum Stuff | Pink Iguana Says:

    […] Aaronson, Shtetl-Optimized, D-Wave: Truth finally starts to emerge, here. […]

  131. Peter W. Shor Says:

    Greg Kuperberg criticizes the D-Wave chip on the grounds that it is incapable of more than classical error correction. This is actually a big drawback of both quantum adiabatic and quantum annealing algorithms: nobody knows how to do quantum error correction for them.

    On the other hand, do quantum annealing algorithms really need more than classical error correction? You can certainly do classical error correction so that the energy of the final state is corrected. And given that we have thermal fluctuations anyway, would quantum error correction do any good?

    For quantum adiabatic computers to be universal, we really do need to do quantum error correction on them, and figuring out whether/how we can do that is a major open problem in the field.

  132. John Says:

    For comment #128, here is the link from the events calendar at Princeton for this talk on Dec 5, 2012.

    http://www.pppl.gov/events/adiabatic-quantum-computing-d-wave-one

  133. John Sidles Says:

    Greg Kuperberg says: “This is the biggest quantum computing project that I have ever heard of.”

    —————-

    Moreover, by the (reasonably) objective metric “demonstrated computational power” ⊗ “demonstrated scalability of design”, D-Wave’s quantum computing project is the most successful such project to date … by far!

    Why is that, the world wonders?

    Recently, my wife was amused by a disquieting radio tag-line: “Mother Teresa: She’s Not Who You Think She Is!”, and it is evident that D-Wave’s technical successes inspire similarly disquieting STEM reflections: “Quantum Computing: Perhaps It’s Not What We Think It Is?”

    The comments on Shtetl Optimized show us that reasonable people may differ in regarding the reflections that D-Wave inspires as reasonable and even beneficial, versus deluded and even harmful. My assessment trends strongly toward “reasonable and beneficial” … largely because the STEM debate regarding D-Wave’s achievements is illuminating and just plain fun.

  134. John Says:

    For comment #132, they also provide a link to the PowerPoint slides for the presentation.

    http://www.pppl.gov/sites/pppl/files/WC05DEC2012_FSpedalieri_0.pptx

  135. Greg Kuperberg Says:

    Peter – Of course, it’s a good question whether quantum error correction is needed for a special-purpose quantum annealing device. You could equally well ask whether any error correction is needed. After all, the whole idea of annealing is that noise is your friend, not your enemy, until you reach the lowest temperature.

    Nonetheless Lidar, who is the leading advocate of D-Wave among university faculty, devoted an entire talk to the thesis that D-Wave’s quantum annealing needs error correction because T1 (the noise time scale) is small. Okay, not quite the entire talk, because at the end there is a slide with the statement that “thermal relaxation is helpful”. I don’t see how to reconcile that with the statement at the beginning, “we’d like to error-correct this machine”.

    I think that this is what Scott meant by a reductio ad absurdum of D-Wave’s technical philosophy.

  136. Greg Kuperberg Says:

    John Sidles – No, you’re just wrong. D-Wave has not actually demonstrated any computational power whatsoever, beyond reinventing analog computation that was first made practical by Vannevar Bush 80 years ago. They have no evidence that they are any more than a quantum cover band for a classical computation song album. They have proven that their guitar strings do something quantum, but that says nothing about what song is played or how well it’s played; it only says something about the instruments that play it.

    If this were an ordinary university lab project with 6-figure funding, I would say more power to them and bon courage. As the best-funded experimental QC project in the world, a project with 8-figure spending sponsored by fighter jets and by Silicon Valley, and with substantial public press, it just doesn’t make sense.

  137. Peter W. Shor Says:

    Hi Scott,

    If you take any specific optimization problem, and have a crack programmer spend eight months writing special-purpose optimization software for it, you probably will be able to beat CPLEX. By the criteria you are using, this would mean CPLEX was worthless. However, businesses still pay tons of money for CPLEX. (Not millions of dollars, but it’s remarkably expensive.)

    Let me say that I am not disagreeing with your verdict on the D-Wave machine (i.e., I totally agree that they have given no evidence that future models of their machine will be useful). I am saying that parts of your argument supporting this verdict are completely specious. I have this tendency to argue with people when they use bogus logic to support a result, even if I think the result is correct. And I think your logic here is bogus.

  138. Rahul Says:

    Shor #137:

    I think you are getting it wrong here. The point is not that CPLEX is good, nor that a crack programmer could beat CPLEX on a specific problem given enough time.

    The point is that benchmarking D-Wave against CPLEX isn’t fair. D-Wave is a special-purpose machine and CPLEX is a general toolkit. A special-purpose quantum machine, tuned to be good at a specific problem, should be run in a fair test against a special-purpose classical machine (or algorithm) that is also allowed to be tuned for that particular problem.

    That’s the crux of the issue, so I think Scott’s argument is perfectly valid. Allowing the highly tuned D-Wave to compete against general-purpose CPLEX gives D-Wave an unfair advantage.

  139. Scott Says:

    Rahul #138: Yes, thanks, exactly!

  140. Peter W. Shor Says:

    Why couldn’t D-Wave in the future build a general-purpose chip with more qubits and more general connections, one that works on a broader range of problems? You are comparing a machine with 128 quantum bits to a machine with billions of bits. Which one do you think would win?

    I agree that D-Wave has no evidence that their future machines will be able to solve general problems well. And in fact, they have no evidence that quantum annealing works better than classical computing on any reasonable class of problems.

    But looking at this prototype D-Wave machine as a special-purpose machine built to solve the problem of putting it into its native ground state is being totally unfair to D-Wave. It is not D-Wave’s business model to build special-purpose machines for each problem they need to solve. Their business model seems to be to eventually build general-purpose quantum annealers, and hope that some useful class of problems where they work better than classical computers is discovered. And while they have no evidence that their business model will succeed, I don’t believe that anybody else has any solid evidence that it will fail.

  141. Bram Cohen Says:

    So the latest news, when stripped of the hype, has some good news and some bad news for D-Wave:

    On the good side, there’s indirect evidence that they really do have some entanglement, because their machine completely succeeds half the time and completely fails the other half of the time, as predicted by simulation. Of course, it’s also possible that there’s a physical phenomenon which simply makes it fail half the time, but people seem to agree that the way it fails does provide strong evidence that real entanglement is going on.

    On the bad side, they’ve basically backed off from claiming to do Quantum Adiabatic Computation in favor of Quantum Monte Carlo, in the interests of engineering practicality. But it turns out that QMC can be simulated classically with only a constant factor slowdown!

    On the whole, I’d say that’s horrible news!

    In response to Shor’s pooh-poohing of the comparison to an optimized digital computer implementation: the D-Wave machine is not a ‘purely’ quantum device, but also might have unknown amounts of analog classical computational capability. Carrying over the analogy, it’s not like comparing a car to an airplane; it’s like comparing a car to a very funny-looking car-like thing which happens to have wings, and then speculating about whether the wings had anything to do with the funny-looking device’s speed, or whether the fact that it’s bigger and made out of more expensive materials might have something to do with it.

    The wall-clock-time approach to comparing stochastic algorithms has been the gold standard for classical algorithms since we started playing that game. There have been many times when someone could draw a chart seemingly indicating that something would eventually get faster, if only it could run on better hardware. In every case I’m aware of where there wasn’t an actual asymptotic analysis backing up the argument, that phenomenon simply evaporated once the greater computational power became available.

    One can argue about whether the D-Wave device is really apples-to-apples in hardware terms with an off-the-shelf digital computer, and it isn’t clear. But given how fast computers are, an alternative approach with even a slight asymptotic advantage should be able to overcome a massive constant-factor disadvantage; and given the budget which went into D-Wave’s chips, if their goal had been to outperform a general-purpose computer on a specific problem, they could very, very easily have done so. In the case of Google and image analysis, which is apparently how they justified buying the D-Wave machine, I’m quite certain that they could get much, much better results building hardware acceleration for neural nets. It’s ironic and a little bizarre that Google is making such an investment in custom hardware at all, because company-wide Google has a policy of keeping all hardware commodity.
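    To spell out the arithmetic behind that claim with hypothetical numbers: suppose the special-purpose device enjoys a million-fold constant-factor advantage but scales one polynomial degree worse than the commodity machine, say
    $$
    T_{special}(n) = 10^{-6}\, n^3, \qquad T_{commodity}(n) = n^2.
    $$
    Then the commodity machine wins for every $n > 10^6$: the constant factor postpones the crossover, but never prevents it.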

  142. Scott Says:

    Rahul #124:

      Is there anyone else (in academia or industry) doing anything similar to what DWave is attempting?

    There are groups all over the world working on QC using superconducting qubits — including at Yale (Robert Schoelkopf’s group), UC Santa Barbara (John Martinis’s group), and IBM. Of course, none of those groups have anywhere near the amount of money D-Wave has. However, the Schoelkopf group and others did manage to do something (namely, directly demonstrating entanglement in 2-3 superconducting qubits, using Bell/GHZ violations) that D-Wave hasn’t yet done.

    Others should feel free to add more details about the current state of “superconducting QC beyond D-Wave”—I’d be interested in a summary as well!

  143. Jay Says:

    Peter #140,

    On a side note, to an outsider this sounds as if D-Wave said “Earth is flat”, Scott said “Earth is spherical”, and you said “No Scott, Earth is an oblate spheroid.” Sure, you’re one step closer to the truth, but this Scott-Peter distance seems so tiny compared to the Scott-D-Wave distance.

    By the way, one question, please: are you willing to call the D-Wave machine a quantum computer, or do you think that still has to be demonstrated, or do you think we should reserve the term for machines able to implement any quantum algorithm at will?

  144. Scott Says:

    Peter #140:

      while they have no evidence that their business model will succeed, I don’t believe that anybody else has any solid evidence that it will fail.

    I actually agree with that assessment of the situation! The trouble is, what you wrote above is not at all the picture that’s making it into the press. The picture making it into the press is that McGeoch and Wang proved a 3600x speedup compared to classical computers, and that Google and NASA are moving ahead because of the proven practical benefit and their desire to exploit it now to solve their practical problems. My post was intended as a corrective to that cringe-inducing hype.

  145. Greg Kuperberg Says:

    Peter – If your point is that Scott’s analysis of D-Wave vs CPLEX could have been better honed, then I suppose that you’re right. The McGeoch-Wang benchmark certainly was rigged, as is clear from their own preprint. And the announcement has confused a number of journalists. But a proper explanation of the issue could be a little different than Scott’s first draft of one.

    But look at the bigger picture. Do you want to dispute bogus logic even when people are correct, or only when they are incorrect? If the former, then I think your reference to the Wright Brothers is not just debatable, but ill-advised.

    The reason that the Wright Brothers are famous is that, at least until 1908, they had the best grasp of the theory of aeronautics. The reason that D-Wave is irritating is that they show so little respect for the theory of quantum computing. They hardly seem to care what a quantum computer should be, as long as their product “works”. If they just say here is some entanglement and there is some performance, then that’s not any better than inventing an “airplane” which is actually a motorcycle with winglets.

    Not to mention Geordie Rose’s spiteful attitude when push comes to shove: his badmouthing of the “gate model” and his description of Hartmut Neven as “the real expert”. Did the Wright Brothers say that the Army Signal Corps of Engineers were the real experts, and that Octave Chanute was wrong about everything? (Actually, after 1908 the Wrights did say equally foolish things, but that’s another story.)

    I’m relieved that you agree that in the large, Scott is right about D-Wave. (Because for a moment you gave me a twinge of doubt about my professional sanity.) Now look at what Lidar is saying through the USC Engineering public relations office — not his seminar talks which are more guarded. It seems very possible that if he does well enough at this effort, then someone at MIT might do the same thing and get MIT to house its own D-Wave white elephant. (Again, to be precise, I have no firsthand knowledge that the D-Wave device is a white elephant, rather that Lidar’s talk makes it look like one.) What would you think of an outcome like that?

  146. Rahul Says:

    Scott #112 says:

    What’s striking to me is the number of people on the Internet currently tripping over themselves to make allowances for the hype-and-exaggeration-filled startup culture, yet who make no allowances whatsoever for the truth-valuing culture of science—and who utterly fail to see the irony in that.

    I think Scott’s giving academics a little too much credit here. I’ve read tons of proposals for grants filled with the same hype and exaggeration Scott derides here.

    Let’s assign blame where due. But not all startups run on hype, nor are all academics saints. I wish every academic was as honest and upfront as Scott is about QC, but that isn’t reality, unfortunately.

  147. Greg Kuperberg Says:

    Peter – Your last comment crossed places with mine. You say, “Looking at this prototype D-Wave machine as a special-purpose machine built to solve the problem of putting it into its native ground state is being totally unfair to D-Wave.” In other words, as I understand you, you think that it’s unfair to accuse D-Wave of claiming a rigged success with a hobbled prototype.

    I disagree; I think that’s exactly what they’re doing. I thought so when I first saw Geordie Rose violate the Holevo-Nayak bound in 2007. He only had 16 qubits, yet he pressed a button that said “Quantum solver” that solved a Sudoku.

    Lest one suppose that things have changed, I just found this headline from last August: “D-Wave quantum computer solves protein folding problem.” This was not CNN or even the New York Times, this was a Nature Magazine blog. Lest one suppose that it was only that Nature garbled the story, the article itself says this: “I just got off the phone with Colin Williams, the director of business development at D-Wave…”

    I agree that there is no solid evidence that their business model of building a useful quantum computer is doomed. But I agree only because that’s not even falsifiable. As Scott says, it could always turn out to be the parable of the stone soup. The parable has come true many times in the business world.

  148. AJ Says:

    Peter #131:
    “This is actually a big drawback of both quantum adiabatic and quantum annealing algorithms: nobody knows how to do quantum error correction for them.”

    Couldn’t they just add a parity qubit? 🙂 I suppose this is unworkable as the sum of the qubits would be fuzzy.

  149. Peter W. Shor Says:

    There are a few questions people could be asking.

    (a) Can quantum annealing with thermal noise at a temperature higher than the gap solve useful problems faster than a classical computer?

    (b) Can D-Wave build a general-purpose quantum annealing machine in the next few generations of their machines?

    (c) Can the rather contrived problems the current D-Wave machine is solving be solved faster by using massive amounts of classical computing power running a special-purpose program that was written by an amazingly good programmer?

    My personal opinion is that the answer to (a) is likely to be “no” (in which case D-Wave is probably doomed to fail), but that there is some small chance that the answer to (b) is “yes”.

    The answer to (c) is totally irrelevant to the question of whether D-Wave has a reasonable business model.

    And just because D-Wave has released a massive amount of nonsensical hype, it doesn’t mean that it is excusable to answer their hype with a nonsensical argument. To use Scott Aaronson’s own words, that is degrading to the “truth-valuing culture of science.”

  150. Michael Bacon Says:

    Peter@149,

    I guess I got lost in the back and forth, but could you please explain which of Scott’s “nonsensical” arguments you are referring to? I know I thought Scott’s Argument from Consequences was a little dramatic for my taste, and I said so, but that’s really only a quibble. It has nothing to do with the technical arguments, which I suspect mainly interest you. I really would like to see this in one place. Thanks.

  151. Greg Kuperberg Says:

    Peter – “And just because D-Wave has released a massive amount of nonsensical hype it doesn’t mean that it is excusable to answer their hype with a nonsensical argument.”

    I revisited your comment #99 to better understand your point. You’re saying that it’s a red herring that Matthias Troyer provided a classical sniff hound that is far better at finding Easter eggs than D-Wave’s sniff hound. Yes, I agree that if Scott had meant this as a standalone argument against D-Wave, then it would be a red herring for the reasons that you say.

    I don’t think that’s what he meant; in any case I feel edified by a different interpretation of his remarks. It seems like a red herring because it’s meant to answer another red herring, which is that D-Wave “beat” CPLEX. It did not actually beat CPLEX at all, because CPLEX looks for the best Easter egg in the field and proves that it’s the best one. That’s a lot harder than finding a good Easter egg that merely has some chance of being the best one. Even more so given that the field is by design D-Wave’s ideal dog park, a graph with exactly the connectivity of D-Wave’s network of qubits. As interpreted by journalists, the McGeoch-Wang comparison is a red herring the size of a mako shark. It may be unavoidable that the best refutation in context is another distracting fish.

    Also, re this question: “Can D-Wave build a general-purpose quantum annealing machine in the next few generations of their machines?” I would agree yes, they have some small chance to succeed. I would even say, if you hold fixed whether everyone else succeeds, that I wish them well. But I would also say, if you hold constant the number of QC research teams who succeed, then D-Wave hardly deserves to be one of them.

  152. Scott Says:

    Rahul #146:

      I’ve read tons of proposals for grants filled with the same hype and exaggeration Scott derides here … not all startups run on hype and nor are all academics saints.

    I couldn’t agree more! In fact I made the same point in the part of my post headed “The Argument from Consequences.” I know academics who I regard as basically used-car salespeople, and folks in industry who I regard as orders of magnitude more “truth-valuing.” But I was speaking not about individuals, or even about entire cultures, but simply about people’s perceptions of the cultures.

    It seems hard to deny that the reason many people want D-Wave to succeed—why they eagerly trumpet any scrap of evidence in D-Wave’s favor while making excuses for its bad behavior—is that they yearn to see the pointy-headed elitist academics, with all their “theory” and “principles,” get their comeuppance by scrappy outsiders who ignore the principles and just build something in their garage that works. (And as usual, they don’t care if the “scrappy outsiders” are actually much richer and better-connected than the “elite academics”—think of how George W. Bush and Dick Cheney won(?) two elections by portraying themselves as outsiders!) I find it impossible to explain the fervor for D-Wave that one finds on blogs like “nextbigfuture” without positing something like that. Every time “learned professors” make a confident prediction that fails, people can gratefully reassure themselves that their own failure to understand whatever the professors were prattling about is not a real handicap at all. Hardly anyone cheers (or even notices or remembers for too long!) when the “learned professors” turn out to be right.

    Not only do I consider this dynamic crucial to understanding people’s reactions to D-Wave—why they consistently lower the accuracy-bar for the company (“that’s just PR, ignore it”) even while raising the bar for its critics—but it seems like a “unifying principle” that can explain an otherwise puzzling phenomenon. Logically, the people who think D-Wave is about to revolutionize the world, and the people who think the whole idea of quantum computing is a fraud, wouldn’t seem to have too much in common with each other! Nor would the people who think P≠NP is so obvious that the quest to “prove” it is a farce have much in common with the people who think P=NP. Yet despite their internal differences, these warring factions might all be united by a hunger to see those “ivory-tower CS theorists with their inscrutable complexity classes” brought down a peg. Or at any rate, that’s my current conjecture for why they all use such similar language when attacking me! 😀

  153. James Gallagher Says:

    You’re winning the debate by a mile in the D-Wave case, and (with some helpers) brilliantly so – but I don’t think you should extrapolate that to winning the debate about your views on nature/maths/philosophy overall. 😉

  154. Greg Kuperberg Says:

    Scott — “It seems hard to deny that the reason many people want D-Wave to succeed is that they yearn to see the pointy-headed elitist academics get their comeuppance by scrappy outsiders who ignore principles and just build something in their garage that works.”

    To be sure, this is a general cultural bias, particularly in the US. In the movies, it’s very often the scrappy outsider who beats those elitist academics. Think Good Will Hunting. For one thing, it’s usually a more interesting story than the reverse: that the product of the meritocracy might be straightforward and expert (and middle class), while the scrappy outsider might be unreliable and a jerk. (Although come to think of it, that’s what happens in Jaws! Kudos to Spielberg for that.)

    But if many people think this way, especially die-hard libertarians and die-hard D-Wave fans, I don’t think it’s most people. For the most part, D-Wave simply exploits people’s fascination with quantum computing. They want to co-opt you, and they’d rather pretend that everyone is on the same page.

    It’s tricky because it’s surely better not to chase down D-Wave to pour cold water on them. However, when journalists come calling, which they do from time to time, it could be very useful to have a coordinated response. Interested science journalists shouldn’t just have one skeptical contact; they should have several to confirm the message.

  155. Noon Says:

    Greg #154: Yeah, but that suggests a website like “what do researchers think about X”, and it would surely contain a very standard set of answers based on the class the researcher is in: if they’re famous and settled, it’ll be “I’m skeptical, it’s overhyped”; if they’re younger and eager for work, it’ll be “They’ve not addressed these things A, B, C, which I happen to know a lot about”; and so on and so forth. I don’t think it would be a particularly surprising thing to do, considering how predictable it would be.

  156. Henning Dekant Says:

    Greg Kuperberg, #129

    If the first D-Wave box at USC cost $10 million, and if the second one is $15 million, then together that makes $25 million. Measured in dollars, this is the biggest quantum computing project that I have ever heard of.

    Compared to the IT budgets of Fortune 500 companies this is peanuts, which goes to show that QC research really needs to tap corporate funding.

  157. Greg Kuperberg Says:

    Henning – To be precise, D-Wave is the biggest single quantum computing project that I have heard of, as measured in dollars. It might not be bigger than Microsoft’s total investment (through Station Q), or IBM’s investment, or Mike Lazaridis’ investment through the Perimeter Institute and IQC. What Google, Lockheed, and Jurvetson et al are doing that no one else is doing is putting so many eggs into one basket. As I said, bigger is not always better.

    While I am at it, I cannot help but critique two points on your most recent blog post: “D-Wave made some major news when the first peer reviewed paper to conclusively demonstrate that their machine can drastically outperform conventional hardware was recently announced.” Nope, this is just wrong and Scott is right. The paper demonstrates that D-Wave can outperform conventional hardware, in a rigged contest in which the conventional hardware runs conventional software that is also held to a higher standard. In a fair contest the conventional hardware (and reasonably conventional software) won, even though it was an Easter egg hunt in the ideal dog park for D-Wave.

    “Apparently, my back of the envelope calculation from last year, that was based on the D-Wave One performance of a brute force calculation of Ramsey numbers, wasn’t completely off.” Actually, the Ramsey number calculation could have been one of the lamest benchmarks in the history of computation. With the exception of R(3,3), which is a recurring Mensa puzzle, they calculated R(2,n), as in: “How many people do you need at a party so that at least 2 are friends, or at least n are mutual strangers?” Duh…
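    (Indeed, $R(2,n) = n$: in any party of $n$ people, either some two are friends, or else all $n$ are mutual strangers; and $n-1$ people don’t suffice, since a party of all strangers contains no group of $n$.)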

  158. Blazespinnaker Says:

    All fun aside, a lot of this is actually Troyer’s and Lidar’s efforts being reported by a middleman. I’d love to hear it straight from them if possible.

    Scott is unquestionably a genius, but Troyer and Lidar are the ones with the direct knowledge.

  159. Michael Bacon Says:

    James@153,

    “I don’t think you should extrapolate that to winning the debate about your views on nature/maths/philosophy overall”

    Still, it’s some evidence, no? A brick in the wall, so to speak? Although, I try to avoid “induction” whenever possible.

    But, what is your point exactly? What views of Scott’s regarding nature, maths and philosophy do you feel compelled to warn him (and the rest of us) against 😉 ?

  160. Scott Says:

    Blazespinnaker #158: I won’t dispute any part of your last sentence. 😉 So, for starters, I’d strongly suggest reading the USC paper! Unfortunately, Troyer and Lidar (and Boixo, and the others) are apparently prohibited right now from directly commenting on blogs about their paper, for journal embargo reasons.

  161. James Gallagher Says:

    Michael #159

    He thinks Quantum Mechanics was pretty well understood in 1927 at the Solvay Conference

  162. Rahul Says:

    Two interesting observations from the blog of Michael Trick, an Operations Research expert familiar with CPLEX. The more I read about this, the more I’m convinced that the Vesuvius-versus-CPLEX race was rigged.

    (1) The comparator is primarily CPLEX, a code designed for finding and proving optimal solutions. I have refereed 3 papers in the last month alone that give heuristics that are orders of magnitude faster than CPLEX in finding good solutions. Of course, CPLEX is generally infinitely faster in proving optimality (since it wins whenever it proves optimality). Beating CPLEX by orders of magnitude on some problems in finding good solutions (particularly with short times) does not require fancy computing.

    (2) The problems chosen are not exactly in CPLEX’s sweet spot. Consider maximum weighted 2-SAT, where the goal is to maximize the weight of satisfied clauses in a satisfiability problem, where each clause has exactly two variables. The natural linear relaxation of this problem results in a solution with all variables taking on value .5, which gives no guidance to the branch-and-bound search. Now CPLEX has a lot of stuff going on to get around these problems, but there are certainly NP-hard problems for which CPLEX has better relaxations, and hence better performance. And if you take any sort of problem and add complicating constraints (say, max weighted 2-SAT with restrictions on the number of TRUE values in various subsets of variables), I’ll put my money on CPLEX.

    http://mat.tepper.cmu.edu/blog/?p=1786
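    To make the second quoted point concrete, here is the natural LP relaxation being referred to, written out as a standard formulation (an illustration only, not taken from Trick’s post or the McGeoch-Wang paper). Represent a positive literal by $x_i$ and a negated one by $1 - x_i$; then the relaxation of max weighted 2-SAT is
    $$
    \max \sum_c w_c z_c \quad \text{s.t.} \quad z_c \le \ell_{c,1} + \ell_{c,2} \ \text{ for each clause } c = (\ell_{c,1} \vee \ell_{c,2}), \qquad 0 \le x_i \le 1, \ \ 0 \le z_c \le 1.
    $$
    Setting every $x_i = 1/2$ makes $\ell_{c,1} + \ell_{c,2} = 1$ for every clause, so $z_c = 1$ is feasible and the relaxation attains the full weight $\sum_c w_c$ regardless of the instance. The LP bound is vacuous, which is exactly the “no guidance to the branch-and-bound search” complaint.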

  163. Greg Kuperberg Says:

    Not to mention that CPLEX is written in C, which is compiled into machine language, which is itself interpreted by microcode. If someone took the trouble to directly implement simulated annealing in microcode using the D-Wave connectivity graph — or even better, specialized VLSI — it could be incredibly fast, for all I know a million times faster than the D-Wave device. The CPLEX benchmark truly is a very conventional use of general-purpose computers, one that also accomplishes more by certifying the exact solution. It’s nothing like what a classical special-purpose device can do in principle.
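    For concreteness, here is a minimal Python sketch of the kind of native simulated annealing under discussion (toy code with made-up parameters; not Troyer’s implementation, let alone microcode or VLSI). The point is how little work a single spin update takes when the coupling graph is as sparse as D-Wave’s:

      import math
      import random

      def simulated_annealing(h, J, n, sweeps=1000, beta0=0.1, beta1=3.0):
          # Minimize the Ising energy E(s) = sum_i h[i]*s[i] + sum_{(i,j)} J[(i,j)]*s[i]*s[j]
          # over spins s[i] in {-1, +1}. J maps edges (i, j) with i < j to couplings;
          # its sparsity pattern plays the role of the hardware connectivity graph.
          nbrs = [[] for _ in range(n)]
          for (i, j), Jij in J.items():
              nbrs[i].append((j, Jij))
              nbrs[j].append((i, Jij))
          s = [random.choice((-1, 1)) for _ in range(n)]
          for t in range(sweeps):
              # Linear schedule from hot (beta0) to cold (beta1).
              beta = beta0 + (beta1 - beta0) * t / max(sweeps - 1, 1)
              for i in range(n):
                  # Energy change from flipping spin i; touches only its neighbors.
                  dE = -2 * s[i] * (h[i] + sum(Jij * s[j] for j, Jij in nbrs[i]))
                  # Metropolis rule: always accept downhill, sometimes accept uphill.
                  if dE <= 0 or random.random() < math.exp(-beta * dE):
                      s[i] = -s[i]
          return s

    On a degree-6, Chimera-style graph the inner sum touches at most six neighbors, so a spin flip costs a handful of multiply-adds; push that loop into microcode or an FPGA and the per-flip cost drops toward nanoseconds, which is the point above.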

  164. blazespinnaker Says:

    I think Matthias is saying a few things in private talks that’d be interesting to hear in public 🙂

    I have a small question, Scott… are there significant patents around error correction? What are the issues with building error correction into D-Wave’s machines as they scale?

    Geordie seems open to it:

    http://dwave.wordpress.com/2013/05/08/first-ever-head-to-head-win-in-speed-for-a-quantum-computer/#comments

    Historically, I’m sure academics will get credit for the ideas, and D-Wave will get credit for actually making it happen (no small feat, btw).

  165. Rahul Says:

    Greg #163

    One easy test may be an FPGA port if possible. Those things are fast and not as expensive as etching your own chip.

    Hell, it might be worth testing D-Wave against a simulated annealing algorithm, or against a simulation of D-Wave itself, coded onto an FPGA.

  166. Henning Dekant Says:

    Greg Kuperberg, #157

    Yes, I wrote that the paper showed that ” … their machine can drastically outperform conventional hardware …” when I first learned about it.

    But I didn’t explicitly state that the contest wasn’t rigged 😉

    Although, I must admit I invested some trust in academic honesty, peer review and all that jazz … i.e. assuming that it wasn’t.

    Nevertheless, I am quietly resigned to the fact that I may soon have to ship some maple syrup, as it seems to be becoming more and more apparent that hand-crafted algorithms on a GPU may still be able to outperform a D-Wave machine (meaning I will lose the bet that I placed on the Canadian home team).

    Yet, although I may lose the syrup I may yet win some cheese at some later time.

    BTW, of course the Ramsey calculation wasn’t useful; my point was that it was supposedly equivalent to a brute-force graph search, and the question was how much conventional CPU power you’d require to run such a dumb algorithm. It’s not a shocker to me to lose my syrup to a hand-optimized algorithm (see, there’s a reason I didn’t go all in like Scott with his $100K QC bet).

    In the end, of course, it all comes down to the question of whether D-Wave has a path to actually outcompeting our current hardware paradigm. My only difference with the critical CS crowd is that I am happy to cheer them on – which doesn’t mean that they won’t have to deliver.

  167. blazespinnaker Says:

    “It’s absolutely possible that adding active error correction might help at some point, maybe even in the next generation. If that is the case, we’ll certainly try anything anyone can think of to make the processors work better! In the specific case of exhibiting scaling differences over conventional algorithms, I’d bet we don’t need error correction (at least at the current snapshot) but at the next level (say 2000+ qubits) maybe we might. If we do, no problem — we’ll find a way!”

    http://dwave.wordpress.com/2013/05/08/first-ever-head-to-head-win-in-speed-for-a-quantum-computer/#comment-23435

  168. Henning Dekant Says:

    Rahul #162, my understanding is that the CPLEX algorithm was not used to prove that it found the global optimum, but only to find a suitable solution comparable to what the D-Wave chip was expected to deliver.

    I haven’t had time yet to study the paper in more depth, but I am sure that the authors themselves will eventually address the accusation that it was rigged. After all, implying that a scientist essentially sold off his or her integrity is no joking matter.

  169. Henning Dekant Says:

    In #168, that was of course supposed to be “CPLEX software” not “CPLEX algorithm” – the latter sells the product a bit short 🙂

  170. Greg Kuperberg Says:

    Henning – I have no reason to believe that McGeoch and Wang are really dishonest, nor that there was any real failure of peer review just because their paper was published. When Scott or I say that their benchmark was “rigged”, that is a summary of certain points that are completely clear in the paper. The issue is not whether the paper says anything false, but rather how to interpret its findings. A lot of people have simply read too much into this paper.

    This is the way that academia works. A paper can be published just because it seems trustworthy and readers might be interested, and not necessarily because it’s important. It’s a way to communicate, not necessarily a way to declare any victory. Say that you believe that a certain molehill is actually a mountain. Then you can write all kinds of papers explaining your theory of molehills and mountains, or photographing the molehill from different angles, or your experience climbing the molehill. Or your evidence that the molehill is much larger than it first appears, which in your opinion might possibly make it a mountain. Arguably this kind of research is lame; or maybe it’s cool research in a tongue-in-cheek sort of way. But it’s not dishonest as long as you properly describe what you did and saw.

    The McGeoch-Wang paper and Daniel Lidar’s USC seminar are exactly in that spirit. I have no reason to doubt either one; it seems to me that they both describe a molehill. In and of themselves, they’re passable. The problem is all of the song and dance about mountains outside of the seminar and the paper, including from Daniel Lidar himself.

  171. Greg Kuperberg Says:

    Rahul #165 – Why don’t we just not and say we did. Troyer already made his point.

  172. Scott Says:

    blazespinnaker #164:

      I think Matthias is saying a few things in private talks that’d be interesting to hear in public 🙂

    You might be right 🙂

      are there significant patents around error correction?

    The basic ideas of quantum error-correction were discovered by academics (Shor, Aharonov, Ben-Or, Preskill, Gottesman, Knill, Laflamme…), and none of them are patented (this is wrong — see below). As for whether there are any patents around more specific QEC techniques (by whom?)—that’s an interesting question to which I don’t know the answer.

      What are the issues with building error correction into Dwave as they scale?

    I’m not sure! But Lidar has obviously thought a lot about it; I’d watch his talk for starters.

      Historically, I’m sure academics will get credit for the ideas, and Dwave will get credit for actually making it happen (no small feat, btw).

    I’m not sure of any of that. But my attitude is a lot like Greg Kuperberg’s: I would much rather that D-Wave builds a useful QC than that no one does! On the other hand, given the godlike power to pick the most “deserving” group to win that race, surely I’d pick NIST or Waterloo or Yale or Bristol or Innsbruck or Queensland or Singapore or any of the countless others around the world who’ve appreciated the theory of QC from the beginning, rather than denigrating and dismissing it only to (probably) grudgingly incorporate it later.

  173. Scott Says:

    Henning #168: Greg #170 makes an extremely important point. I’m not accusing McGeoch and Wang of any sort of academic dishonesty. I’m sure they did their experiments carefully and reported them accurately. It’s just that the USC experiments seem so much more informative about the questions at issue, that it’s hard not to see laziness or bias in journalists repeating the McG/W-derived “3600x speedup” soundbite while ignoring or downplaying the comparison to optimized simulated annealing code. And yes, it’s possible that McG/W themselves are pained by journalists’ oversimplifications or misrepresentations of what they said, in which case I can only sympathize with them.

  174. Rahul Says:

    Greg #171

    Rahul #165 – Why don’t we just not and say we did. Troyer already made his point.

    Sorry, that was too cryptic for me. 🙂

    What do you mean?

  175. Scott Says:

    Rahul #174: I assume Greg meant that the results of such an FPGA experiment are so obvious in advance (yes, of course, you can get a big further speedup that way) that the actual doing of the experiment would be more publicity stunt than science.

    While he has a point, I’d add that this might be a publicity stunt worth doing!

  176. Rahul Says:

    Greg #170 and Scott #173:

    This is the way that academia works.

    While true, it’s rather sad. Sometimes referees and editors ought to exercise more discretion in not letting the wrong / silly / misleading / irrelevant questions be asked in the first place.

    After all, we don’t publish just about any comparison in journals, do we? So why let silly comparisons gain credibility? They might be trustworthy, yet silly.

    In this specific case, how much more work would it have been for McGeoch to time D-Wave against tabu search or some proper heuristic, which Scott indicates may be more relevant? In fact, if they have done it (it seems so), the referees ought to insist that they publish the actual numbers.

    More importantly, the McGeoch work hasn’t yet been published (in the peer-reviewed sense), has it? Maybe the lacunae will eventually get caught by peer review.

    If so, what we need is for scientists to imprint a big red stamp (for journalists) on draft paper uploads saying “This may look like a published paper, yet for now these are merely my ramblings.”

    Frankly, I often lose mental track of the fact that what I’m reading is an arXiv upload and not (yet) a peer-reviewed paper.

  177. Rahul Says:

    A naive question I had:

    Why do Lockheed and Google buy these machines but then site them at USC and NASA? What’s the rationale there? The $15M price tag comes entirely from Lockheed / Google, right?

    Is this common?

    My concern was, if Lockheed / Google really saw a game-changing strategic advantage here, wouldn’t they rather site these babies on their own campuses? Why don’t they?

    A part of me wonders whether these are just akin to donations. But from D-Wave’s viewpoint it sounds a lot nicer to release a PR note saying “Lockheed bought one” than “USC bought one”.

  178. John Sidles Says:

    Scott believes (wrongly)  “The basic ideas of quantum error-correction were discovered by academics (Shor, Aharonov, Ben-Or, Preskill, Gottesman, Knill, Laflamme), and none of them are patented.”

    Some early patents that collectively encompass pretty much all aspects of “traditional” quantum computing (including error correction) are:

    • John von Neumann, “Non-linear capacitance or inductance switching, amplifying and memory organs”, US 2815488, Apr 28, 1954

    • Charles H. Bennett, “Interferometric quantum cryptographic key distribution system”, May 25, 1993, US 5307410 A

    • David P. DiVincenzo, “Three dot computing elements”, US 5530263 A, Aug 16, 1994

    • Peter W. Shor, “Method for reducing decoherence in quantum computer memory”, US 5768297 A, Oct 26, 1995

    • Daniel Gottesman, “Quantum error-correcting codes and devices”, US 6128764 A, Feb 6, 1996

    • Ralph Godwin Devoe, “Parallel architecture for quantum computers using ion trap arrays”, US 5793091 A, Dec 13, 1996

    • Isaac Liu Chuang, “Nuclear magnetic resonance quantum computing method with improved solvents”, US 6218832 B1, Feb 16, 1999

    • Bruce Kane, “Electron devices for single electron and nuclear spin measurement”, US 6369404 B1, Sep 17, 1997

    Historically, the key elements of quantum computing were disclosed in patent applications before they were disclosed in the literature. Moreover, patent applications are (required to be) “full enabling disclosures” and for this reason they commonly are more instructive to read than the literature of the same date.

    Summary  D-Wave’s device represents a technological path that was not foreseen in patent filings by pioneering QIT researchers (e.g., Bennett, DiVincenzo, Shor, Devoe, Chuang, or Kane).

    To the best of my (fallible) technological appreciation, D-Wave’s devices infringe upon only *one* of the above patents: John von Neumann’s 1954 “Non-linear capacitance or inductance switching, amplifying and memory organs”.

    He knew a lot, that guy von Neumann, eh? Good on yah, D-Wave!

  179. Scott Says:

    John Sidles #178: Wow, I stand corrected on the patents.

  180. Henning Dekant Says:

    Greg #170 and Scott #173: I may not appreciate all the subtleties here, but it seems to me that Rahul #176 has a point.

    If I understand correctly, the claim that the benchmarks are “rigged” comes down to selecting a meaningless benchmark that allows the machine to come out ahead.

    But then this is not to be regarded as “academically dishonest”, because all of this is disclosed in the fine print.

    So now I am left utterly confused. It looks to me like this is pretty much the same ethical standard that a used-car salesman would be held to.

  181. Michael Bacon Says:

    James@161,

    “He thinks Quantum Mechanics was pretty well understood in 1927 at the Solvay Conference”

    Very cryptic indeed, but I’m afraid that I’ve never been that good at Rorschach Tests.

    So, let’s try this again. What is your point exactly? What views of Scott’s regarding nature, maths and philosophy do you feel compelled to warn him (and the rest of us) against?

    And, while we’re at it, what do these ostensible failings have to do with D-Wave’s claimed achievements, and Scott’s analysis of them?

  182. Greg Kuperberg Says:

    Henning – Yes and no. I wouldn’t call McGeoch and Wang used-car salesmen, for two reasons. First, they aren’t salesmen! These are just two academics, as far as I know. Second, there is no “fine print” in their paper; it’s all ordinary print. The title and the abstract are both very neutral, and the text of the paper is a straightforward description of their test comparisons. If a book is influential, you can’t really say that a third-party summary of the book is a sleazy sales pitch and the actual pages of the book are “fine print”.

    The McGeoch-Wang benchmark is pretty lame, but I wouldn’t even call it meaningless. If D-Wave had implemented a mathematically rigorous quantum algorithm, like Shor’s algorithm, then the benchmark would have been very encouraging. But that’s not what they did; they don’t think that mathematical rigor counts for anything.

    In my view, the Ramsey number benchmark was worse; it was so trivial as to be practically meaningless. They were solving for graphs with no edges — where do you go from there, printing “Hello, world” as a benchmark?

    On the other hand, D-Wave really is selling something, it really does trumpet a sensational interpretation of McGeoch-Wang, and what was ordinary print when written by McGeoch and Wang becomes fine print for D-Wave. They really are car salesmen who trade in half-truths.

  183. Greg Kuperberg Says:

    Rahul #177 – This part of the story is actually fairly reasonable; if only journalists interpreted it for what it is. Lockheed and Google are admitting that this D-Wave hardware does not in fact give them any clear strategic advantage. They are lending these devices away to the non-corporate world in order to sponsor basic research. This basic research could make them look good — except when it makes them look silly. And they could learn something from it that gives them some competitive advantage.

    Many large, stable companies are willing to sponsor and share basic research. It is a bit like sharing study notes in a class that you know has a grade curve. Even though globally you want a competitive advantage, locally you earn good will. You’re also not giving away the whole game, since you understand your own study notes better than other people do.

    However, the way that Google, Lockheed, and for that matter USC are putting so many eggs into just one quantum basket is bizarre. They have all been distracted by hype. It’s almost a no-brainer that they ought to fund broader research programs if they want to go quantum.

  184. Henning Dekant Says:

    Greg, I think for most journos and lazy bloggers it’s safe to say that anything below the abstract is considered fine print 🙂

  185. Henning Dekant Says:

    Greg, riddle me this: if D-Wave was grossly misrepresenting the content of the McGeoch and Wang paper, wouldn’t the authors’ academic integrity compel them to dispel the distortions?

    Again, I may be missing some of the semantics here, but it seems to me that the authors don’t have any issue with supporting the contention that D-Wave can, given the appropriate constraints, outperform conventional hardware.

    It seems to me that Scott and you argue that these constraints have been rigged to make D-Wave look good.

    If the latter is a correct summation, then I can’t help but conclude that this reflects poorly on the authors’ integrity.

  186. Bill Kaminsky Says:

    I believe John Sidles’ last comment about QC patents deserves a clarification, specifically his closing paragraph:

    Summary D-Wave’s device represents a technological path that was not foreseen in patent filings by pioneering QIT researchers (e.g., Bennett, DiVincenzo, Shor, Devoe, Chuang, or Kane).

    Note that the operative phrase is “foreseen in patent filings by pioneering QIT researchers”. This is a much different thing than merely “foreseen” by said researchers.

    I know this well because from late 2001 to mid 2004, work on a superconducting architecture for adiabatic quantum optimization was put forth by:

    1) Seth Lloyd (definitely a theoretical QC pioneer if I may say so),

    2) Terry Orlando (definitely an experimental superconducting-qubit QC pioneer, if I may say so), and

    3) yours truly (a one-time PhD student of Seth’s who deserves no particular praise as a “pioneer”, but will happily accept whatever trickles off in his general direction 😉 ).

    We foresaw things perfectly clearly. We merely didn’t pursue patents for our work, which you can read for free at the arXiv:

    W.M. Kaminsky and S. Lloyd, Scalable Architecture for Adiabatic Quantum Computing of NP-Hard Problems, http://arxiv.org/abs/quant-ph/0211152

    W.M. Kaminsky, S. Lloyd, and T.P. Orlando, Scalable Superconducting Architecture for Adiabatic Quantum Computation http://arxiv.org/abs/quant-ph/0403090

    I can’t speak for Seth and Terry why patents weren’t pursued.

    My personal take is that a patent would’ve been morally wrong. I mostly mean “morally wrong” in the specific sense that absent the nitty-gritty device technical details of an actual implementation, any patent would really be just a “blocking patent” making the people willing to do the hard nitty-gritty work of making the darned thing kowtow to people who didn’t want to do the hard nitty-gritty work of making the darned thing.* That said, I do in part mean “morally wrong” in the context of broader “copyleft” notions regarding how much theoretical math/science knowledge should ever become “intellectual property”.

    (Once again, all these opinions are guaranteed only to be my own, not Seth’s or Terry’s. For example, Seth and Terry might also add that the PhD student they were working with at the time was going through a lot of personal stuff and wasn’t being terribly productive in terms of writing stuff… indeed, they might even say that this erstwhile PhD student is clinically neurotic when it comes to formal writing. But again, that’s just me surmising. I can’t speak for them. 😉 )

    Footnotes:

    * Which isn’t to say we (especially Terry) weren’t involved in doing hard nitty-gritty work in making other darned things… e.g., experiments showing Rabi oscillations, Ramsey fringes, and other quintessential 1-qubit coherent phenomena in superconducting devices… stuff that fast-forwarding to 2013 in concert with other academic groups leads to academic superconducting QC researchers probably having qubits with coherence times 1, if not 2, orders of magnitude longer than those likely present in the D-Wave One chips and D-Wave Two chips… and thus stuff that, if it were in the D-Wave One and D-Wave Two chips, might substantially enhance their performance.**

    ** Or not… your humble commenter has since become quite enamored of the hypothesis that whenever any specific adiabatic quantum optimization / quantum annealing scheme is efficient on a specific optimization instance, one can construct a specific classical annealing scheme from the quantum one that captures much of that efficiency. I have a truly wonderful (and not-even-all-that-handwaving) proof of this, but it’s too long to put in this blog comment. 🙂

    Last but not least… obligatory Full Disclosure: During late 2008 and early 2009, I worked as an off-site consultant for D-Wave. In other words, I got $$$ directly from ’em. I have not received $$$ from them since early 2009. Beyond that de jure disclosure, my de facto disclosure vis-a-vis D-Wave is that I:

    (1) wish them all the best in their endeavor to actually make the darned thing,

    (2) wouldn’t be averse to getting $$$ from them in the future, depending on how things evolve

    (3) think their PR department could take it down a few notches without really hurting their fundraising and thus really should take it down *at least* a few notches.

  187. James Gallagher Says:

    Michael #181

    I think I’ve encroached on Scott’s blog enough for now.

    Scott and his collaborators have brilliantly debunked D-Wave and that is to be applauded – ESPECIALLY because these things must be done right. It’s hugely important that D-Wave doesn’t have a big influence on academia’s perception of quantum physics research.

  188. Henning Dekant Says:

    Bill #186 wrote:

    “… your humble commenter has since become quite enamoured of the hypothesis that whenever any specific adiabatic quantum optimization / quantum annealing scheme is efficient on a specific optimization instance, one can construct a specific classical annealing scheme from the quantum one that captures much of that efficiency. I have a truly wonderful (and not-even-all-that-handwaving) proof of this, but it’s too long to put in this blog comment.”

    This sounds very interesting. Did you already write this up in a paper?

  189. Henning Dekant Says:

    James G. #187, I don’t think there is much to worry about regarding D-Wave having “a big influence on academia’s perception of quantum physics research.”

    Now, as to Scott’s concern that it may color the perception of QC research, that’s certainly possible 🙂

  190. Greg Kuperberg Says:

    Henning – I think that you’re a little bit fixated on the word “rigged”. If a marathon race allows paraplegic athletes to use wheelchairs, then is the race “rigged”? Not really; the race just is what it is, even though competitive wheelchair racers are faster than runners. What if someone uses such a race to claim that he can cure paraplegia? Then that really is rigged.

    Are McGeoch and Wang ethically obliged to disavow D-Wave’s distorted interpretation of their results? I agree that ideally they would. However, you can’t really require researchers to chase down every wrong interpretation of their work. D-Wave is making other people jump through hoops. Even if McGeoch did jump through that hoop of disavowal, D-Wave could shout her down with a megaphone like they have done other times. (In fact that’s exactly what they did to Scott in 2007. They misinterpreted a quote from him, then shouted him down when he corrected them.)

    Does this create a loophole in which a paper like McGeoch-Wang is technically objective but its authors can be biased? Yes it does, but scientists are only human and, while it is an intellectual lapse, it does not rise to the level of a breach of ethics. A key point is that there is a difference between letting someone else spew hype by citing your work — even if you quietly hold some of the same bias — and spewing hype yourself.

  191. Henning Dekant Says:

    Greg #190, I have only been following this closely for the last two years, but it is quite apparent that Scott held his own in the 2007 brouhaha; D-Wave’s megaphone isn’t all that effective.

    I am certainly hoping that McGeoch or Wang will give their perspective on this new controversy.

    What I am interested in is gauging (in $) the level at which D-Wave’s machine currently performs (and no, I am not expecting this to already be at $15M).

    Generally, what I want to track is the amount you’d have to pay to solve a D-Wave-friendly problem with comparable performance on commodity hardware, and how this $ amount changes from one chip generation to the next.

  192. Rahul Says:

    Greg Kuperberg #183

    However, the way that Google, Lockheed, and for that matter USC are putting so many eggs into just one quantum basket is bizarre. They have all been distracted by hype. It’s almost a no-brainer that they ought to fund broader research programs if they want to go quantum.

    I don’t blame Google / Lockheed though. There are no other good “baskets” from an applied perspective. Apart from D-Wave, nobody in QC is promising anything even faintly practical or close to application.

    Whether it’s a dud or not aside (and that’s very crucial), D-Wave is already offering APIs for image recognition and sentiment analysis, whereas the rest of the flock is still struggling to factor 15. If D-Wave’s hype did indeed turn out not to be hype (personally, I doubt it), then D-Wave is miles ahead of the rest of them.

    A better image-classification widget immediately pays off for Google (quantum, hybrid or classical, irrespective). OTOH, $10M pumped into various “baskets” in the hope of factoring 50 instead of 15 is scientifically exhilarating, no doubt, yet commercially useless.

    Funds for the “broader QC research programs” are more NSF territory than Google / Lockheed.

  193. Greg Kuperberg Says:

    Sorry, I spoke too soon about Catherine McGeoch. She is showing some of the same symptoms as Daniel Lidar. What is only a molehill in the technical paper or seminar becomes a mountain in the press release, and not merely as quoted by other people.

    “On one of the tests we have run, it is faster, much faster,” said computer scientist Catherine McGeoch of Amherst (Mass.) College, who announced the results at a computer science meeting on Thursday. “Roughly 3,600 times faster than its nearest competitor.”

    Sorry, CPLEX is not the nearest competitor. It’s only the nearest of the few competitors in McGeoch’s own tests. Troyer’s code is the nearest competitor: it is the code that does something similar to what D-Wave’s chip does, except without any quantum probability, and without the tremendous advantage of a special-purpose chip fab.

    Is this an unethical sound bite? It is somewhere between mediocre and disingenuous. A phrase such as “breach of ethics” is more of a legal standard and it’s better to stick to the science. It would be nice if McGeoch could explain why she maintains that CPLEX is “the nearest competitor” while several of us disagree.

    http://www.lohud.com/usatoday/article/2216255

  194. Greg Kuperberg Says:

    Henning – “What I am interested in, is to gauge (in $) at what level D-Wave’s machine currently performs”

    At the practical level, the answer is $0. Even if someone gives you a D-Wave for free, you can’t do anything practical with it that you can’t do faster with the laptop that you need anyway to use the D-Wave. It can’t even beat a cheap smartphone. The one thing that it can do, almost as well as Troyer’s code, is an artificial computational annealing problem on a graph with exactly the same connectivity as the D-Wave qubits. (If they should even be called qubits. It is mostly established that they are not just bits, but maybe they should still be called dwits.)

    Again, if the D-Wave machine had been a demonstration of something theoretically certified, which it isn’t, that would have been different. Then, as Peter Shor says, speed would not have been the point at all.

    Rahul – “There are no other good “baskets” from an applied perspective. Apart from D-Wave nobody in QC is promising anything even faintly practical or close to application.”

    But promises are just words. What the academic laboratories are doing is much closer to actually being practical: by my estimate, 30 years away in their case vs. infinitely many years in the case of D-Wave. However, in both cases that’s just a guess and not a prediction.

  195. Rahul Says:

    Does Catherine McGeoch get paid by D-Wave for this work? I didn’t see a Conflict of Interest statement in that paper. So can I assume she doesn’t get any reward?

    Are such things not conventional in CS? I do see an acknowledgement to D-Wave, but that’s not the same.

  196. Henning Dekant Says:

    Greg #194, that’s not what I want to track, as I spelled out at the end of my comment. So I repeat it here:

    I want to track the amount you’d have to pay to solve a D-Wave-friendly problem with comparable performance on commodity hardware, and how this $ amount changes from one chip generation to the next.

    Your stance seems to be more extreme than Scott’s, who appears to agree with Daniel Lidar that there is a chance this architecture may eventually deliver.

    That’s what I want to closely follow. Your assertion that D-Wave is worthless seems to be a foregone conclusion on your part, so you obviously don’t think there is any merit in keeping an eye on this.

    Hence, I think we finally identified the impasse, and don’t have to continue this back and forth.

  197. James Says:

    Henning #196:

    You are exactly right about Greg. Unlike Scott, who is quite reasonable although sometimes over the top, Greg seems to have decided a long time ago that D-Wave is nothing but trash. Nothing will change his mind, so you might as well try reasoning with a monkey.

    I am just wondering how come Greg hasn’t yet presented to us on this post his favorite anecdote about how nobody mentions D-Wave at QIP except over beer at the social event. His utterly irrelevant story has been amusing to me so many times in the past 🙂 Let’s hear it again, Greg!

  198. interested Says:

    I’m not a physicist, but, looking on, I get the impression that D-Wave has already accomplished its goal, which is to secure a place in history (justifiably or not) as the first maker of a practical quantum computer. Moreover, it’s one where a scientist might get computing time! Even if their computer doesn’t currently beat a dedicated implementation, it seems to beat a general-purpose one (at least in some cases), which is what’s usually available.

    I had a certain mixed-integer LP problem a while ago for which I tried GLPK and CPLEX, but the results were not as good as I had hoped. It would be really cool if my problem could be tackled by the DWave computer (or if I could simply say that I used a quantum computer for it!) In any case, using a totally different computer architecture to tackle a practical problem like that is something worthy of attention, and so I’ll continue to keep an eye on this.

    It’s not shocking that the D-Wave machine can only tackle a very limited scope of problems right now. Looking at the history of classical computing, people thought it was necessary to build a different computer for each type of problem, so it was surprising that a general-purpose machine could be built to tackle all sorts of problems. Perhaps the history of practical quantum computing is starting the same way. (On the other hand, I expect a lot of this is just hype, simply because I’ve been witnessing increasingly more of it in science recently; why should D-Wave be any different? :)

  199. Bill Kaminsky Says:

    [Hmmm… Seems WordPress didn’t like my previous submission. At the risk of redundancy, allow me to resubmit in chunks.]

    To Henning @ #188:

    “… your humble commenter has since become quite enamoured of the hypothesis that whenever any specific adiabatic quantum optimization / quantum annealing scheme is efficient on a specific optimization instance, one can construct a specific classical annealing scheme from the quantum one that captures much of that efficiency. I have a truly wonderful (and not-even-all-that-handwaving) proof of this, but it’s too long to put in this blog comment.”

    This sounds very interesting. Did you already write this up in a paper?

    Of course not! I’m clinically neurotically paralyzed about formal writing!! But strangely, I ain’t neurotically paralyzed about informal blog commenting.

    Because of this strange quirk of mine (i.e., commenting on Scott’s blog being a way around my writing neuroses), I’ll beg Scott’s indulgence to possibly hijack this commenting thread with an outline of incomplete, original research… (to be continued)

  200. Bill Kaminsky Says:

    (…continued from #199)

    A la Pascal, here’s as short a version as I can write quickly.

    Please forgive me for any buzzwords I don’t explain well enough in the Background section that follows. I do swear any such buzzwords are in more-or-less standard use in the sub-community of condensed matter theorists who study classical and quantum spin glasses.

    ***BACKGROUND***

    Adiabatic quantum optimization, quantum annealing, and classical annealing can all be formalized as stochastic motion over a “Thouless-Anderson-Palmer [‘TAP’ for short] free energy landscape.”

    To briefly get technical, for a system of N spin-1/2 particles with given sets of individual fields $\{ h_i \}$ and pairwise couplings $\{ J_{ij} \}$ at a given inverse temperature $\beta \equiv 1/T$, the TAP free energy is a real-valued function of the *magnetizations* of the spins (i.e., their individual expectation values along the x-, y-, and z-axes $\{ m_{1x}, m_{1y}, m_{1z}, \ldots, m_{Nx}, m_{Ny}, m_{Nz} \}$). Namely, the TAP free energy is defined such that the probability of the system exhibiting a given set of magnetizations, when it has the aforementioned fields, couplings, and temperature, is directly proportional to
    $$
    \exp[- \beta F_{TAP}(m_{1x}, m_{1y}, m_{1z}, \ldots, m_{Nx}, m_{Ny}, m_{Nz}; \{ h_i \}, \{ J_{ij} \}, \beta)]
    $$
    As such, the equilibrium *phases* of the spin system correspond to the global minima of $F_{TAP}$. *Metastable* states are local minima of $F_{TAP}$.

    [Sidenote: Yes, I use TeX in my informal blog commenting. Doesn’t everyone?! 😉 ]

    Note that the TAP landscape corresponding to an algorithm, be it classical or quantum, is *dynamic*. $F_{TAP}$ with respect to the magnetizations varies as you vary the fields, couplings, and temperature. For example, the starting point of classical annealing is a high-temperature landscape taking the form of a simple “funnel” landscape to one global minimum, and the starting point of quantum annealing / adiabatic quantum optimization is a low-or-zero-temperature-but-high-transverse-fields landscape also taking the form of a simple “funnel” to one global minimum. As temperature is reduced in the classical case and as transverse field is reduced in the quantum case, this one initial global minimum can furcate into global “daughter” minima (a continuous, aka “2nd order”, phase transition). Alternatively, it can be overtaken by a minimum that nucleated somewhere else on the landscape and then subsequently decreased in free energy faster than it (a discontinuous, aka “1st order”, phase transition). And there are also more complicated situations, like when the global minimum furcates but the “daughters” grow to have different free energies (an initially continuous phase transition that’ll require a subsequent discontinuous phase transition if you want to move between the daughter phases).

    **In any case, the hope for quantum annealing / adiabatic quantum optimization is really that by building a quantum annealer, you get to move on a TAP landscape that is *qualitatively simpler* than the ones you’re able to move on classically.** By “qualitatively simpler,” I mean a landscape where you can take many fewer and shorter discontinuous jumps in between simple, greedy continuous gradient descents on your way to the global minimum at the algorithm endpoint (i.e., essentially zero temperature and zero transverse fields).

    ***MY CONTRIBUTION (which I’ll merely assert without proof… so take with a sizable grain of skeptic’s salt)***

    This hope is misplaced. Transverse Ising models which are “densely connected” (i.e., each spin is coupled to a finite fraction of all the others) permit one to calculate $F_{TAP}$ via a convergent series expansion with 1/(# of coupled-neighbors) as your small parameter. Now, on the one hand, densely connected Ising models (with or without transverse field) possess ground states complicated enough to pose NP-complete problems. In other words, finding the location of a global minimum of $F_{TAP}$ for such densely connected Ising models is NP-complete. Yet on the other hand, thanks to the aforementioned convergent series expansion, merely evaluating the $F_{TAP}$ associated with some given set of magnetizations, fields, couplings, and temperature to some necessary level of accuracy is quite classically tractable.

    **Thus, you don’t need to build a quantum computer to be able to move on such a quantum TAP landscape!**

    One can efficiently make what I’d call a “meta” stochastic *classical* algorithm that directly moves about on the quantum TAP landscape (i.e., one that stochastically moves over magnetizations according to $F_{TAP}$’s that account for quantum fluctuations), even if one could never efficiently make what I’d call a “direct” classical simulation of the quantum algorithm by, say, Path-Integral Monte Carlo, because the Monte Carlo paths would never sample the relevant low-energy portion of configuration space efficiently.
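
    For concreteness, here is a minimal sketch of what “stochastic motion over the landscape” means in the simplest classical, zero-transverse-field case: a toy Metropolis walk on the *magnetizations* using the standard classical TAP free energy for an SK-type model (all names, conventions, and parameter choices below are mine and purely illustrative). The “meta” algorithm I have in mind would swap in the quantum $F_{TAP}$ obtained from the series expansion above.

        import numpy as np

        def f_tap(m, J, h, beta):
            # Classical TAP free energy for an SK-type Ising model:
            # mean-field energy + single-spin entropy + Onsager reaction term.
            # m: magnetizations in (-1, 1); J: symmetric couplings, zero diagonal; h: fields.
            energy = -0.5 * m @ J @ m - h @ m
            entropy = np.sum((1 + m) / 2 * np.log((1 + m) / 2)
                             + (1 - m) / 2 * np.log((1 - m) / 2))
            onsager = -(beta / 4) * np.sum(J ** 2 * np.outer(1 - m ** 2, 1 - m ** 2))
            return energy + entropy / beta + onsager

        def walk_on_landscape(J, h, beta=2.0, steps=20000, step=0.1, seed=0):
            # Stochastic motion over the TAP landscape: Metropolis moves on the
            # magnetizations (not the spins), accepted with probability e^{-beta * dF}.
            rng = np.random.default_rng(seed)
            n = len(h)
            m = rng.uniform(-0.1, 0.1, n)  # start near the paramagnetic point
            f = f_tap(m, J, h, beta)
            for _ in range(steps):
                trial = m.copy()
                i = rng.integers(n)
                trial[i] = np.clip(trial[i] + rng.normal(0.0, step), -0.999, 0.999)
                f_trial = f_tap(trial, J, h, beta)
                if rng.random() < np.exp(min(0.0, -beta * (f_trial - f))):
                    m, f = trial, f_trial
            return m, f

    (None of this is competitive with anything, of course; the point is only that the object being explored is the landscape of magnetizations, which is exactly where the quantum-vs-classical comparison lives.)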

  201. Blazespinnaker Says:

    Hopefully Lidar is patenting his work on error correction for D-Wave. Perhaps he can force D-Wave to cross-license all their work so we can replicate what they have done 🙂

  202. Scott Says:

    Henning #196 and James #197: No, I completely agree with Greg’s answer to the question he was asked. If you asked me to estimate the commercial value of the current D-Wave devices—subtracting out PR, scientific altruism, and other such “externalities”—then I would also say roughly $0. (Well, not exactly $0, since the cooling system, black panels, and other components that ship with the device surely have some value, and since even buying a laptop and installing Troyer’s code on it would cost ~$600.)

    On the other hand, I certainly wouldn’t estimate the value of D-Wave as a company at $0, since as I mentioned before, there’s some probability (which reasonable people can disagree about) that they’ll eventually manage to overcome the technical problems of getting a quantum speedup, and also since they seem to hold many patents that could conceivably be valuable someday.

  203. Henning Dekant Says:

    Bill K. #199, if Scott doesn’t indulge you, feel free to post at my blog (linked above at my name).

    I have a WP plug-in that allows for LaTeX in comments like this: $\sum_{1}^{\infty} \tau$

    So depending on how far you want to take it that may help – but of course we can always keep it as informal as your neurosis requires 🙂

  204. Greg Kuperberg Says:

    Henning – Your questions are a moving target. First you ask how the machine currently performs, in units of dollars. Now you want to know about whether its architecture will ever be useful, which is a different question.

    Yes, Lidar and Scott and I all agree that D-Wave’s architecture could eventually be useful, but Lidar believes much more than that. He believes that his hosting of a D-Wave device (or two now) is a massive quantum leap into the future for his university. That is a very different claim from simply “could eventually be useful”. There are many different senses of the word “could”. Prince Andrew “could” be King of England one day, and so “could” I, but it’s not the same “could”.

    Anyway, if I understand correctly, Troyer can replicate the solution to D-Wave’s ideal annealing problem with roughly $1,000 of off-the-shelf classical hardware. And with the same performance scaling.

  205. Scott Says:

    interested #198: It sounds like you and D-Wave could be a match made in heaven! You don’t care about getting an actual speedup over a classical computer, in a comparison where even 1/1000th of the effort has been put into optimizing the classical code as has been put into optimizing the QC. Instead, in your words, you mostly want to “simply say” that you used a quantum computer to solve your mixed-integer LP. Well, D-Wave can’t currently give you a speedup, but it is more than happy to let you say you used a quantum computer (slightly reminiscent of those companies that, for the right price, will let you say that such-and-such a star is named after you). Do you happen to have $15 million lying around? 🙂

  206. Henning Dekant Says:

    Scott, you just made my point: assuming a going IT consulting rate of $150/h for the development of Matthias Troyer’s code and a couple of man-months of work, we get to about the same replacement value as a workstation with a commercial CPLEX license, i.e. about $10K–20K (assuming that this will indeed do the trick for where D-Wave’s machines are at this point).

    Now, it will be interesting to know the commercial equivalence price for each subsequent chip generation.

    After all, I think we can all agree that the chip versions before D-Wave One’s RAINIER were at best interesting toys (full of promise for some and full of baloney for others).

    So what I want to track is the commercial replacement value for each subsequent doubling of qubits.

  207. James Says:

    Scott and Greg:

    How about also factoring in the cost of the human labor that went into writing Troyer’s code, as well as the opportunity cost of the time it took to develop the whole thing before it could be run in a head-to-head comparison with the D-Wave machine?

    And if we keep going into ridiculousness, how about also factoring in the billions of dollars that it has taken humanity to develop the current classical computing technology over several decades?

  208. Greg Kuperberg Says:

    interested – “I’m not a physicist, but, looking on, I get the impression that DWave has already accomplished its goal, which is to secure a place in history (justifiably or not) as the first maker of a practical quantum computer.”

    Unfortunately yes, that’s possible. There are several cautionary tales in history for that sort of thing, people securing their place in history without justification.

    For example, Jonas Salk secured his place in history as the man who found the vaccine for polio. What he actually did was trumpet a vaccine that was not ready for prime time. Other foundations cut off funding for other polio researchers — they thought that the problem was solved. Salk’s vaccine was deployed in the United States and had some positive effect, but it also took credit for a cyclical downturn in polio. It also trampled the experimental environment for testing other vaccines. After about a decade the US accepted incontrovertible proof that it should use Sabin’s vaccine instead. Salk was initially smug about having beaten everyone else to the punch, and he made his case with false magnanimity. By the end, when it was ever clearer that Sabin was right, Salk accused Sabin of endangering the public, complained that the government wasn’t consulting him, etc. Nonetheless, in the history books Salk mostly still won.

    Yes, D-Wave might already have secured its place in history just like Jonas Salk did. Probably not, but maybe. That’s what I’m afraid of.

  209. Henning Dekant Says:

    Greg #208, if this is your concern, then it seems to me that your energy is misdirected.

    If that’s what’s driving you, then the only rational way forward is Daniel Lidar’s approach, i.e. trying to help them succeed.

  210. Scott Says:

    Henning #206: On the other hand, now that Matthias and his group have already written the code, there’s a good chance you could get it from them for free! But the very fact that it comes down to stuff like that illustrates why I think the entire question of the “value” of the D-Wave device is a premature one.

  211. Henning Dekant Says:

    Scott #210, I have to disagree on that: getting a handle on the value and its possible trajectory going forward is essential now that the device can be purchased.

    It will be interesting to see how generic Matthias’s code is once I can get my hands on the paper.

    Ironically, if he gives it away for free, D-Wave could integrate it into the emulator for their development environment 🙂

  212. Henning Dekant Says:

    Bill K. #199, to make it more readable (the math is rendered), and in order not to overburden this comment thread, I copied your quick write-up to a page on my blog.

  213. Alex Selby Says:

    Out of curiosity I tried writing a simple QAP solver to see how well the software solvers in McGeoch and Wang (http://www.cs.amherst.edu/ccm/cf14-mcgeoch.pdf) were performing.
    (Program and results are at https://github.com/alex1770/myfirstqapsolver)

    This may be considered “cheating” (or “dedicated”), as it isn’t a general-purpose constraint satisfaction solver, but to even it up somewhat, it is a very simple, short, unoptimised program written in Python (not particularly known for its speed), and it only runs for one minute per problem instead of the full 30 minutes.
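
    (For the flavor of what such a solver looks like, here is a minimal sketch of a stochastic swap-based QAP annealer; it is not the linked program, whose details differ, and every name and parameter here is just illustrative.)

        import math, random

        def qap_cost(A, B, p):
            # QAP objective: sum_{i,j} A[i][j] * B[p[i]][p[j]] over a permutation p.
            n = len(p)
            return sum(A[i][j] * B[p[i]][p[j]] for i in range(n) for j in range(n))

        def anneal_qap(A, B, iters=100000, t0=10.0, t1=0.01, seed=0):
            # Simulated annealing over permutations: pairwise swaps as moves,
            # geometric cooling from temperature t0 down to t1.
            rng = random.Random(seed)
            n = len(A)
            p = list(range(n))
            rng.shuffle(p)
            cost = qap_cost(A, B, p)
            best_p, best_cost = p[:], cost
            for k in range(iters):
                t = t0 * (t1 / t0) ** (k / iters)
                i, j = rng.sample(range(n), 2)
                p[i], p[j] = p[j], p[i]
                new_cost = qap_cost(A, B, p)  # a real solver would evaluate the swap delta in O(n)
                if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
                    cost = new_cost
                    if cost < best_cost:
                        best_p, best_cost = p[:], cost
                else:
                    p[i], p[j] = p[j], p[i]  # reject: undo the swap
            return best_p, best_cost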

    (There is a slightly puzzling total computation time given in the paper, which suggests Blackbox/D-Wave spent 6*24*60/33 = 261 minutes per problem rather than the stated 30. I must be missing something obvious here.)

    Comparing the results from this run with the best from McGeoch–Wang (the best of the software and hardware solvers), this program gets 6 answers the same, 2 worse and 25 better. That would reverse the score from “D-Wave found 24 of the best that TABU didn’t; TABU found 5 of the best that D-Wave didn’t” to “D-Wave found 0, 1, or 2 of the best that the QAP solver didn’t; the QAP solver found 25–33 of the best that D-Wave didn’t”. (You can’t tell exactly, because they haven’t specified which solver scored what on each problem. This is a pity, since it would be nice to see if D-Wave does better on the easier or harder problems.)

    Or to put it another way, to cause the results to be comparable, you have to restrict the running time of the python program to about 1 or 2 seconds.

    The authors say in section 3 that you can’t compare a dedicated program with a general-purpose constraint satisfaction solver. The problem with this is that it leaves the rules of the game unclear: things are going to depend heavily on how you encode the original problem so that it works on your particular solver, and on how much of your specialist knowledge you implicitly used in doing so.

    This could apply particularly to the (unspecified) 5.5-fold compacting transformation that is used for the benefit of the Blackbox (D-Wave-based) solver, but not for the software solvers. It seems possible that this gives a significant advantage, and may arguably be similar to using a dedicated program (how do you get such a large reduction without understanding something about the nature of the problem?). The authors say they didn’t have time to apply the same transformation for the benefit of the software solvers, but I don’t see how this section of the paper is complete without that experiment. In any case, I’m surprised they weren’t so eaten up with curiosity about the result that they delayed publication a little.

  214. Rahul Says:

    Two questions from a non-physicist:

    (1) Is it possible to build a physical annealing based computer that is not quantum in nature? Non-quantum systems left to themselves seek ground states too, right? Or is the quantum nature somehow fundamental to an annealing based physical computing machine? Would a physical classical annealing computer be too slow or otherwise impractical? Just curious.

    (2) What sort of problems can and cannot be mapped to an Ising model based annealing optimizer?

  215. Peter w. Shor Says:

    Hi Scott. I’ve figured out my real complaint about your blog post, and I think I can boil it down into terms that you can understand.

    You guys aren’t taking into account the difference between capital cost and marginal cost. Suppose I build a new wind-powered generator plant for the cost of several hundred million dollars and run it for a day or so. You then divide the cost by the amount of electricity produced, and say that the power costs a hundred dollars per kilowatt-hour, conclude that the plant is a failure, and laugh at me.

    It should be obvious that your analysis is completely wrong. That’s because it doesn’t take into account the difference between capital cost and marginal cost.

    You’re doing exactly the same thing here. The hundreds of millions of dollars put into the development of the D-Wave chip is capital cost. The months that Matthias Troyer put into developing his simulation code is marginal cost. You can’t compare them without being wrong on a fundamental level. Once you’ve bought a D-Wave machine, you don’t need to buy another one to run a different problem on it. But once you’ve hired a crack programmer to write very fast special-purpose code for one problem, good luck getting it to run on a different problem.

    (Of course, if the wind-powered generator plant doesn’t actually produce any electricity and never will, then it would indeed be a failure. But look back at what you wrote … that’s not your actual argument.)

  216. Peter w. Shor Says:

    And in fact, to make money, D-Wave’s machines don’t have to be faster than special-purpose code written for a specific problem; they just have to be faster than off-the-shelf software for solving problems (they also have to reduce the cost of their chips by an order of magnitude or two, but I feel certain that if they make more than three or four, it won’t cost them $15 million per chip).

  217. Scott Says:

    Peter #215, #216: I’m afraid your argument makes no sense whatsoever to me. Matthias’s code is precisely as “special-purpose” as the D-Wave machine itself is! The machine solves Ising spin minimization problems with a particular, fixed constraint graph. Matthias’s code solves Ising spin minimization problems with the same constraint graph. If you want the D-Wave machine to solve other optimization problems, then you first have to map those other problems onto D-Wave’s problem. But having done so, you could also feed the result to Matthias’s code.

    So the relevant question is simply: does there exist any probability distribution D over instances of the D-Wave problem, for which the machine outperforms Matthias’s code? I touched on exactly that question in the post, where I tried to make clear that the situation is this:

    There’s no proof that such a D can’t exist, and maybe D-Wave will even manage to find one (I gather they’re searching for one now). But at present, we don’t know any example of such a D. Moreover, the “uniform distribution” over instances of the D-Wave problem would seem naively like a distribution very highly tilted in D-Wave’s favor (after all, the instances have all the “structure” their machine is designed for, and none that it isn’t designed for). So if Matthias’s code outperforms the machine even there, then it’s hard to see why we’d expect the machine to do better elsewhere. But again, I freely confess that that’s not an airtight argument.

    Summary: If your defense of D-Wave boils down to, “No one has found any situation where D-Wave’s machine (which is one fixed thing) outperforms Matthias’s code (which is another fixed thing), but nor has anyone proved that no such situation could possibly exist,” then I’m fine with that! I’ll simply note that that’s a vastly weaker defense than most people reading the recent D-Wave news would imagine could be mounted.

  218. John Sidles Says:

    Peter W. Shor says (#149): “There are a few questions people could be asking…” [Peter’s well-posed and thought-provoking list of questions follows.]

    Please let me say that Peter’s #149 was (to my mind) the single most productive comment that has been posted on this long (and too-often rancorous) D-Wave thread.

    Isn’t it plausible that — like many scientific advances — the chief merit of D-Wave’s achievements will turn out to be, not the new answers to existing questions that these achievements provide, or the new and greater computational capabilities,  … but rather new and better questions?

    Hooke’s microscopes focused scientific and mathematical attention upon good questions like “How does blood flow? How do tissues grow?” Galileo’s telescopes focused scientific and mathematical attention upon good questions like “How does the moon control the motion of the tides? How does the sun control the motion of planets?”

    Peter Shor’s list of questions that D-Wave’s work inspires is excellent … and no doubt could be extended to a longer list of new and fundamental questions! A Shtetl Optimized discussion of “Questions inspired by D-Wave” would be a terrific stimulus to creative thought by the QIT community.

    It is of course neither necessary, nor feasible, nor even desirable that everyone focus upon the same questions. A great merit of D-Wave’s achievements is the diversity of the questions that these achievements inspire.

  219. Scott Says:

    John #218: After reading your comment, I tried to think of a single example of an interesting new research question that the D-Wave affair has inspired—a question, of course, that doesn’t involve D-Wave itself. Finding such questions is more challenging than your comment suggests!

    Certainly none of Peter Shor #149’s questions qualify. For as Bill Kaminsky #186 documents, people were kicking around Peter’s question (a) for years before D-Wave became the subject of widespread attention in 2007, while Peter’s questions (b) and (c) are about D-Wave.

    The best I can come up with is that D-Wave has inspired more attention than there would’ve been otherwise on interesting questions like the following:

    – How exactly should one verify the “quantumness” of an alleged QC? If a “QC” can be simulated efficiently by a classical computer in every known situation, then what properties besides speedup make it a QC in the first place? (Certainly the presence of quantum effects can’t suffice, for then we’d have to say that every computer with transistors in it is a “quantum computer.”)

    – What’s a fair performance comparison, in practice, between a QC and a classical computer?

    – Can finite-temperature quantum annealing provide any speedup over classical simulated annealing?

    – How should we think about error-correction and fault-tolerance for adiabatic QC?

    On the other hand, it’s important to understand that all of these questions would’ve gotten some attention even without D-Wave—and furthermore, it’s possible that other interesting questions would’ve gotten more attention without D-Wave taking up people’s time.

  220. Bram Cohen Says:

    Peter Shor #215: How are we supposed to make a comparison except by wall-clock time? Drawing graphs of how fast something is running and claiming that the slower thing should eventually win if the hardware is fast enough is an easy route to self-delusion. I know of no case, lacking a firm asymptotic analysis underneath it, where such a phenomenon didn’t simply evaporate once greater computing power was thrown at it.

    Also, the capital outlay into DWave’s machine was quite large, large enough that classical chips with that kind of budget would easily outperform general-purpose chips on specialized problems. And computers are quite fast these days: even a very slight improvement in the asymptotics at that scale should make the thing with the better asymptotics win by a large margin, even if it’s on nominally slower hardware.

    If we’re going to be honest about progress towards ‘real’ quantum computers here, experiments other than DWave’s have far more demonstrable results, but they don’t make the outrageous marketing claims that DWave does, even though they’d have far more right to.

  221. Michael Bacon Says:

    John@218,

    “Isn’t it plausible that — like many scientific advances — the chief merit of D-Wave’s achievements will turn out to be, not the new answers to existing questions that these achievements provide, or the new and greater computational capabilities, … but rather new and better questions?”

    Yes.

    “Please let me say that Peter’s #149 was (to my mind) the single most productive comment that has been posted on this long (and too-often rancorous) D-Wave thread.”

    Really? It’s certainly one of the most open-ended and philosophical posts (nothing against either)—new and better questions are, in a sense, what it’s all about. However, given the focus of the original post, and the many, very thoughtful (not so rancorous) responses discussing the substance of the claims and underlying science, I’d be hard pressed to agree that it even comes close to being the single most productive comment. And, if it is the single most productive comment, then it will be the single most productive comment on almost any similar post. 😉

  222. Rahul Says:

    Michael Bacon #221

    It’s certainly one of the most open-ended and philosophical posts

    Maybe that explains why John Sidles likes it so much. 🙂

  223. Greg Kuperberg Says:

    Peter – I haven’t studied Troyer’s code, but I suspect that the only thing that’s special-purpose about it is its specific annealing strategy. If so, it is actually far less special-purpose than a D-Wave device, which is committed by hardware to a specific graph connectivity for its “qubits”. Also, although I don’t know how long it took him to develop his code, I would think that it was days or weeks, not months.

    So I conjecture that both the marginal and capital costs of what Troyer is doing are much lower than what D-Wave is doing. I conjecture that a more polished version of his pseudo-simulator could perpetually be cheaper and faster than anything that D-Wave can make without some massive redesign. He also hasn’t yet taken advantage of FPGA methods, although I don’t know whether he needs them in order to be D-Wave’s doppelgänger.

  224. John Sidles Says:

    Scott suggests [as a question inspired by D-Wave’s achievements] “Can finite-temperature quantum annealing provide any speedup over classical simulated annealing?”

    Scott, that is a pretty good question (as it seems to me) … provided that the term “speedup” is appropriately defined. Arguments of the form “<algorithm/hardware A> provides merely a polynomial speedup over <algorithm/hardware B>” are superficially plausible … until we substitute

    “<algorithm/hardware A> ⇒ Cooley–Tukey FFT”

    “<algorithm/hardware B> ⇒ FT (by matrix multiplication)”

    The point is that “mere” polynomial speedups have transformed entire disciplines in mathematics, science, and engineering. We need all such speedups we can get!
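
    (To make the substitution concrete, here is a toy check, with all names mine, that the two computations return the same transform, so the entire difference is the “merely polynomial” gap between the O(n^2) matrix multiply and the O(n log n) FFT:)

        import numpy as np

        def dft_by_matrix(x):
            # Fourier transform as an explicit n-by-n matrix multiply: O(n^2) operations.
            n = len(x)
            k = np.arange(n)
            F = np.exp(-2j * np.pi * np.outer(k, k) / n)
            return F @ x

        x = np.random.default_rng(0).standard_normal(1024)
        slow = dft_by_matrix(x)  # matrix-multiplication FT
        fast = np.fft.fft(x)     # Cooley-Tukey FFT
        assert np.allclose(slow, fast)  # same answer; only the operation count differs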

    Should it come about that understanding D-Wave’s quantum hardware performance points us toward (unexpected!) next-generation classical algorithms that offer “only” polynomial speedups for broad classes of practical problems … wouldn’t that be a terrific outcome for mathematicians, scientists, engineers, and entrepreneurs — the entire STEM community?

    What specific and well-posed algorithm-centric questions does D-Wave’s research inspire us to ask? A guy named Scott Aaronson suggested that the best place to post algorithm-related questions that are specific and well-posed is not Shtetl Optimized, but rather forums like TCS StackExchange and Math Overflow.

    Do I personally have any specific questions in mind? Yah, sure, you betcha! However, it may take a week or two to code-up the test cases, citations, etc. (it’s considerable work to construct good questions for these forums). Inspired by the Shtetl Optimized question “What’s up with those painting elephants?”, watch for a D-Wave-inspired question “What’s up with those Cunningham numbers?”

    Conclusion  Past decades have witnessed many transformational advances in algorithms that run on classical computers — Cooley-Tukey FFT, Shannon codes, Kohn-Sham theory, multipole expansions, Buchberger’s algorithm (and dozens more)— and so there is every reason to anticipate that coming decades will witness dozens more. To the extent that reflecting upon D-Wave’s remarkable quantum achievements helps us to discover these next-generation transformational advances, every STEM researcher already has substantial reason to proclaim “Thank you, D-Wave!”

  225. Scott Says:

    John Sidles #224: I never demanded, anywhere, that D-Wave provide super-polynomial speedups over the best known classical algorithms! I (and most others, I imagine) would be thrilled to see any quantum speedup, for any problem whatsoever—just as long as the evidence for such a speedup can withstand scrutiny like Troyer’s.

  226. John Sidles Says:

    Scott (#225), what about classes of algorithms (which are relatively plausible IMHO) that generically require n^p operations on n D-Wave qubits, but require (kn)^p operations on n classical bits?

    Here k might be (for example) the effective rank of the dynamical state-space that supports D-Wave’s qubit trajectories. In effect, each qubit does the work of k classical bits.

    By some complexity-theoretic measures, this is no speedup at all! And yet for p-exponents p≳4, and k-rank k≳10, aren’t the practical implications (again) transformational?
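
    (Concretely: in that scenario the classical-to-quantum operation-count ratio would be $(kn)^p / n^p = k^p$, which for $k = 10$ and $p = 4$ is a constant-factor speedup of $10^4$.)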

    Conclusion  Some common measures of computational complexity are not well-adapted to assessing the implications of D-Wave’s achievements.

  227. Greg Kuperberg Says:

    Now that I think about it further, I’ve changed my mind about my comment that Troyer has fully made his point.

    There has now been a lot of hype and press about big bad Lockheed and big bad Google buying the D-Wave device, and about an illusory 3600x speedup. So I think that it would be useful and even fun to make another web site called “S-Wave” that does the same annealing optimizations that D-Wave does, except without anything quantum. You could have a chart of D-Wave vs S-Wave benchmarks, downloadable software, things like that. You could even use S-Wave to compute trivial Ramsey numbers. I already happen to know that R(2,8) = 8 and that the empty graph on 7 vertices is the unique optimum, but why not confirm that with simulated annealing?

    It seems that tech journalists these days are impressed by the trappings of Silicon Valley, including snazzy web sites. If D-Wave is so eager to pursue quantum experiment without quantum theory, and equally eager to hype the results, then they deserve this treatment. I was nonplussed by Daniel Lidar’s concept of hype “for” and “against” D-Wave; now I think it might mean something.

  228. Rahul Says:

    @Greg #227

    I like your design. If I may suggest modifications, please house it in a big, heavy black box. 🙂

  229. Scott Says:

    Greg #227 and Rahul #228: LOL, yes!

    The attitudes of many commenters, on this thread and other D-Wave threads, remind me of the story about the kid who thought there were little men inside his TV. Hearing this, the kid’s engineer mother took a few hours to explain to him all about digital signal transmission, liquid crystal displays, etc. The kid listened with interest, asked intelligent questions, and seemed to understand. But finally he said: “but mom, there must be at least a few little men inside the TV, right??”

    No matter how many times you try to explain that there’s not yet evidence for any genuine speedup whatsoever, people keep coming back and saying stuff that presupposes that there must be such a speedup, so that it makes sense to talk about all sorts of other issues like whether the speedup is constant-factor, polynomial, or exponential, or the economic value of the machine, or its applications, or the wonderful new scientific questions raised, or D-Wave’s place in history. It’s as if the possibility of no benefit whatsoever from this approach goes so far against what people want to believe, that it can’t even be seriously entertained. No matter what you say, there are always at least a few little men inside the TV.

  230. Rahul Says:

    Scott #229

    What’s funny about this discussion is also the degree of divergence in opinions: you are in the realm of “no speedup whatsoever,” whereas I see other blogs talk in the tone of “more computing power than the whole universe in another 3 years.”

    Now that to me was amazing. We aren’t arguing about small differences here.

  231. Henning Dekant Says:

    Scott, tracking the equivalence value of a D-Wave machine translates the open speed-up question into a language any CIO will understand. I.e., if Matthias Troyer can fully replicate the D-Wave performance on a laptop, then the price comparison between the laptop and the D-Wave One speaks a very clear language.

    If, on the other hand, and this is pure speculation on my part, Matthias would need a small cluster to beat the next D-Wave chip generation, then this could hint at the possibility that D-Wave might scale up fast enough to become more interesting in the intermediate future. Can they possibly achieve this without some sort of quantum speed-up or magic little men in the box? Probably not, but the broader audience won’t ask that question; they will compare dollars to dollars, so that’s what I will keep tracking.

    If the dollar equivalence value were to flatline, that would indicate that they cannot achieve a speed-up over conventional hardware, and this, again, speaks a very simple language any IT manager can understand.

    In the latter case, Greg’s and your nightmare may very well come to pass, and a QC winter may ensue.

  232. John Sidles Says:

    Scott, perhaps a more à propos fable is the celebrated Pony Joke (in either the Ronald Reagan version or the Calvin & Hobbes version). Because STEM history shows us plainly that optimistic shovelers have uncovered plenty of ponies in surprising locations!

    So perhaps von Neumann’s celebrated description of mathematics in general applies also to QIT?

    ———-
    The Mathematician
    The only remedy [for mathematical staleness] seems to me to be the rejuvenating return to the source: the reinjection of more or less directly empirical ideas. I am convinced that this is a necessary condition to preserve the freshness and vitality of the subject, and that this will remain equally true in the future.
    ———-

    Everyone appreciates that D-Wave’s achievements point toward mathematical insights that have a substantial empirical component. Is this empirical element good or not? That is entirely a matter of personal taste, and strict unanimity of opinion in this regard surely would be bad for the STEM community.

  233. Henning Dekant Says:

    Rahul #230, very good point. What’s irritating to me is that the hardcore sceptics seem to insist that there is no point in ever paying attention to D-Wave, even just for the sake of pragmatically trying to figure out whether they may eventually be able to deliver a convincing speed-up.

  234. Bill Kaminsky Says:

    Amplifying Scott’s point (#229) about the need for “evidence of a genuine speedup” and how there’s none presently, I want to add something new to the discussion, namely:

    Evidence for a genuine speedup on what we’ve dubbed here the “D-Wave problem” (that is, finding a ground state of an Ising model with $\pm J$ random couplings on a chimera graph) will require significantly more than solid proof that the D-Wave device scales better than classical annealing does.

    Classical annealing is only pertinent to the discussion thus far because:

    (1) it’s a well-known heuristic for hard combinatorial minimization problems generally

    (2) it’s been a constant worry that the D-Wave machine is, when all is said and done, just a classical annealer due to decoherence.

    However, classical annealing is not the fastest currently-known heuristic for solving random combinatorial minimization problems like the “D-Wave Problem” of a random Ising model on a chimera graph. Instead, something called survey propagation* is. Survey propagation can be astoundingly faster than classical annealing, and we even have some rigorous mathematical proofs why**. (NB: I’m 99.99% sure that Troyer’s “optimized” classical annealing code is not survey propagation. The optimizations are strictly in terms of compactly remembering where your random path has taken you thus far.)

  235. Bill Kaminsky Says:

    Footnotes to my last comment (#234):

    * The Intuition behind Survey Propagation and the requisite Buzzwords to read deeper:

    Naive Intuition: Random constraint satisfaction problems are hard because not only are there lots more local minima than global minima (indeed, exponentially more in the # of bits), but also all the minima are far apart from each other in Hamming distance (i.e., the # of bit flips needed to move among ’em is $\Omega(N)$ for an N-bit system), with no obvious reason to believe there’s any rhyme or reason to how they’re spread out in Hamming space.

    More-refined intuition (and this is what survey propagation depends upon): For certain densities of clauses-to-bits, there actually is still some rhyme or reason to how things are spread in Hamming distance. To be specific, the minima come in “clusters” with relatively small Hamming distances among them and—skipping a whole lot of very nontrivial non-rigorous stat mech and/or mathematically-rigorous probability—one can calculate “marginal distributions” over these “clusters” regarding their “cavity fields” (that is, the net field the neighbors of a bit impose on it due to their couplings to it). As such, just because the landscape has started to look like lots of random minima strewn across all of Hamming space with lots of significant Hamming distances among ’em, doesn’t in fact mean there ain’t structure to exploit still… and this very structure is what survey propagation exploits.

    Even more-refined intuition: But all good things must come to an end, it alas always seems. Add yet more constraints per bit and the “clusters” of minima mentioned above “shatter” into truly “isolated” minima with respect to Hamming distance, and here — as best as anyone can tell thus far — there is no rhyme or reason to where these minima lie. Certainly there are no longer big enough “clusters” on which you can glean statistics about useful quantities. As best as anyone can tell, you well and truly seem to have the worst case scenario of exponentially rare needles (i.e., global minima) strewn utterly randomly in a haystack filled with exponentially more numerous look-like-but-ain’t-needles (i.e., local minima). Though, kinda amazingly, one can rigorously pin down where this horrible situation starts, which brings us to…

  236. Bill Kaminsky Says:

    […And since wordpress seems to hate my long comments, here’s the final chunk…]

    ** References to read:

    Probably best to just start with the introduction to the current big daddy of the rigorous work and look there for all the references to the prior, non-rigorous art (which honestly can be even harder to understand IMHO than the rigorous stuff)

    Dimitris Achlioptas, Amin Coja-Oghlan, and Federico Ricci-Tersenghi, “On the Solution-Space Geometry of Random Constraint Satisfaction Problems.” Random Structures & Algorithms, Volume 38, Issue 3, pages 251–268, May 2011; available for free from Achlioptas’s website (Google it, WordPress seems to hate my links)

  237. Henning Dekant Says:

    Rahul #214: I don’t see that anybody answered your questions, so let me take a stab at them.

    (1) Yes, you could build an analog computer based on classical annealing, but I am not aware that this has been done; it would be kind of pointless, since you’d expect simulated classical annealing to capture pretty much all the aspects of such a device. Due to the non-local nature of entanglement and quantum tunnelling I expect quantum annealing to perform better at finding a global minimum than classical annealing, but I don’t think that this has been conclusively demonstrated.

    (2) Any problem that you can translate into finding the minimum of a global “energy function”. E.g., training of artificial neural networks maps trivially onto this. Things it cannot do are all the quantum gate based algorithms, like Shor’s most famous one for factoring primes.

  238. Rahul Says:

    Henning #236:

    Thank you.

  239. Artificial Intelligence: How will this change AI/AGI? - Quora Says:

    […] the article and the D-Wave “quantum computer” is too high for this to have any impact. :-D http://www.scottaaronson.com/blo… […]

  240. Scott Says:

    Henning #231:

      Greg’s and your nightmare may very well come to pass, and a QC winter may ensue.

    What I keep trying to say is that there’s absolutely no reason for a “D-Wave winter” to turn into a broader “QC winter”—just as long as people are able to keep the two things separate in their heads!

    Henning #237:

      the quantum gate based algorithms, like Shor’s most famous one for factoring primes.

    Err, factoring composites. 🙂

  241. Henning Dekant Says:

    Scott #240, and here I was trying to factor this prime for the last ten years, darn it Scott!

  242. Rahul Says:

    Henning #237:

    “You could build an analog computer based on classical annealing, but I am not aware that this has been done”

    Could D-Wave’s machines be just this, i.e. a practical classical annealing computer? Why not? Rather, how would one test for that?

    Could noise give a false impression of non-locality?

  243. John Sidles Says:

    Henning Dekant remarks “Due to the non-local nature of entanglement and quantum tunnelling I expect quantum annealing to perform better at finding a global minimum than classical annealing, but I don’t think that this has been conclusively demonstrated.”

    Plausibly supposing that D-Wave’s quantum dynamical trajectories are pulled-back (by Lindbladian flow) onto tensor-product state-spaces of rank-k, and further plausibly supposing each such polynomially entangled qubit is computationally equivalent (for annealing purposes) to k classical bits, then the analysis of comment #226 applies … in which event the practical computational consequences, for many problem classes, indeed would be transformational.

    Is this mechanism (or any similar mechanism) at work in D-Wave’s devices?

    As you (correctly) point out, no one knows.

    Conclusion  D-Wave’s pile of demonstrations is amply large to contain many “ponies” (per comment #232) … or none! 🙂

    ——————

    PS  The earliest computational annealing device (known to me) is Francis Galton’s Quincunx Board of 1894, which computed the normal distribution function. Nowadays these devices are commonly called “bean machines” … most science museums have one on display!

  244. DIY Says:

    A question for the experts here: In your opinion, what does it take for a device to deserve the title “quantum computer”? And has D-Wave ever attempted/claimed to build something that would satisfy your criteria?

  245. Rahul Says:

    In your opinion, what does it take for a device to deserve the title “quantum computer”?

    From a strictly empirical perspective, a good PR department, apparently. 🙂

  246. Bram Cohen Says:

    Scott #229: It may be that the ridiculousness of using an exhaustive search solver is lost on most people. It’s a fun exercise to play with something like n-queens, which tends to go completely splat when an exhaustive solver is used on it but finishes extraordinarily quickly when any stochastic algorithm is thrown at it.
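
    (For anyone who wants to try the exercise, here is a minimal sketch of the stochastic side, using the classic min-conflicts heuristic; the names and parameters are just illustrative. An exhaustive solver at the same n is hopeless, while this typically finishes almost instantly.)

        import random

        def min_conflicts_queens(n, max_steps=100000, seed=0):
            # One queen per column; repeatedly move a conflicted queen to the row
            # in its column that minimizes the number of attacks on it.
            rng = random.Random(seed)
            rows = [rng.randrange(n) for _ in range(n)]

            def conflicts(col, row):
                return sum(1 for c in range(n)
                           if c != col and (rows[c] == row
                                            or abs(rows[c] - row) == abs(c - col)))

            for _ in range(max_steps):
                conflicted = [c for c in range(n) if conflicts(c, rows[c]) > 0]
                if not conflicted:
                    return rows  # a valid placement: rows[c] is the queen's row in column c
                col = rng.choice(conflicted)
                rows[col] = min(range(n), key=lambda r: conflicts(col, r))
            return None

        print(min_conflicts_queens(64))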

  247. Greg Kuperberg Says:

    So, I was notified by e-mail by Daniel Lidar about this thread; he is understandably chagrined by some of the comments. On certain specific points he offers correct emendations (or correct as far as I know) to what I have said:

    (0) He goes by Daniel and not Danny. (Scott agreed to fix this, in which case this remark can be removed.)

    (1) It is an oversimplification to say that the D-Wave chip is only capable of classical error correction. A more careful statement, according to what was said in Lidar’s talk, is that correction of only one type of Pauli error is possible. This can be called “pseudoclassical” error correction, and it is, according to the talk, a severe restriction, but it is loose to just call it classical.

    (2) Although I did say “if” because I didn’t know, I implied that USC’s two D-Wave devices are separate purchases (by, I suppose, Lockheed) with a total cost of $25 million. It’s probably lower than that. However, D-Wave as a whole has surely spent more than $25 million, and that was my real concern.

    (3) I also said that D-Wave (I did not mean specifically D-Wave-related work at USC) is the most expensive quantum computing project that I have ever heard of. Lidar tells me that the Kane proposal in Australia — which I hadn’t heard of before — is even more expensive. That could be true. Either way, these are gargantuan projects by the standards of most academic research.

    (4) I said that Lidar “believes” that his hosting of a D-Wave device is a massive quantum leap into the future for his university. He is quoted according to his university as follows: “We have been strong in quantum computing for years but this development really is a ‘quantum leap’ for us.” The title of the press release — although I do not know exactly who stands by the title — is “A Quantum Leap in Computing”.

    http://dornsife.usc.edu/news/stories/1079/a-quantum-leap-in-computing/

    Lidar insists that I should not speculate about people’s beliefs, which is fair enough. It’s better to quote people than to speculate about what they believe.

  248. blazespinnaker Says:

    It’d be terrific if the source could be released so that we could hook it up to Amazon GPU instances.

    Get your very own $10M D-Wave for 10 bux an hour (or whatever).

  249. DIY Says:

    DIY #244:

      A question for the experts here: In your opinion, what does it take for a device to deserve the title “quantum computer”? And has D-Wave ever attempted/claimed to build something that would satisfy your criteria?

    So here’s my non-expert take on this. A “quantum computer” should mean a “scalable universal quantum computer”.

    Is my understanding correct that even if D-Wave has built what it claims, no-one could honestly claim it is even a candidate for being a “quantum computer”?

  250. Joshua Zelinsky Says:

    DIY #249,

    That seems overly narrow. Consider, for example, the hypothetical of a machine able only to implement Shor’s algorithm. This doesn’t seem extremely likely, but it is conceivable that such a machine could take advantage of the restricted nature of what it needs to do (primarily just a lot of quantum Fourier transforms) to make it easier to build (in a way similar to how some early computers were specialized machines). I suspect that most would call that a “quantum computer” with little reservation.

  251. James Says:

    DIY #249:

    A “spaceship” should mean “a faster than light vehicle that can zip through wormholes and instantly take us to distant galaxies whenever we want to visit our alien friends there”.

    Is my understanding correct that even if NASA has built rockets and shuttles, no-one could honestly claim they are even candidates for being “spaceships”?

  252. Scott Says:

    DIY #244 and #249: Just like with almost all human concepts, the definitional boundaries of “quantum computer” and “not quantum computer” are a bit fuzzy. A machine that directly implemented Shor’s algorithm to factor 10,000-digit numbers would definitely be a quantum computer. And at the other extreme, the computer on your desk is definitely not a quantum computer (despite, e.g., the “quantumness” of its transistors), if the term is to retain its meaning.

    But between those extremes, there are all sorts of intermediate possibilities that might become increasingly relevant in the next few decades (and not only because of D-Wave). For example, there’s BosonSampling, and other proposals that would almost certainly let you do something that’s hard to simulate classically, but without giving you full quantum universality (e.g., the ability to implement Shor’s algorithm).

    And then there are the “QCs”—based on liquid NMR, ion traps, photonics, etc.—that have successfully factored 15 into 3×5 and even 21 into 3×7 (!) using Shor’s algorithm, but that we don’t yet know how to scale up. But if we could scale them up, then we’re pretty sure we’d be getting an exponential speedup over any possible classical algorithm.

    And then there are things like Clifford-group QCs or noninteracting-fermion QCs — subsets of quantum computing that would indeed produce extremely-entangled quantum states on many thousands of particles, but that are known to admit efficient classical simulations for nontrivial reasons (i.e., the simulations don’t keep track of the whole superposition directly; they do something cleverer). How should we classify those?

    For me personally, the central question is whether or not I can at least see a “straight path forward” to getting an asymptotic speedup over any possible classical algorithm, under mathematical conjectures that I believe, and assuming quantum mechanics continues to be valid. For universal quantum computing, and for intermediate proposals like BosonSampling, there is such a path. For D-Wave’s annealing approach, by contrast, I’d say that no one has ruled out the possibility of such a path, but no one has made a compelling case for one either.

  253. Jay Says:

    Greg #247

    http://www.nature.com/nature/journal/v496/n7445/full/nature12011.html

  254. Michael Bacon Says:

    Yeah, James@251, the D-Wave device is to a computer using, say, entanglement to do calculations unable to be matched classically, as rockets and space shuttles are to faster-than-light ships. So why don’t we stop the quibbling and just go ahead and call the D-Wave device a quantum computer 😉

  255. Michael Bacon Says:

    Pace Jay@253, does anyone have any idea whether this:

    http://arxiv.org/abs/1305.4499 (Control of decoherence with no control),

    holds any promise as a way to better deal with decoherence?

    I know that I like the counterintuitive approach from an aesthetic standpoint. 🙂

  256. Nobody Special Says:

    Scott, I’m wondering if you can clarify something that’s been bothering me.

    What DWave claims to have built is a single-purpose device for solving QUBO. Assume that DWave actually manages to get an asymptotic speedup. This is still an adiabatic device: it solves problems by finding a Hamiltonian whose ground state matches the problem. Is there any reason to believe that, even if an asymptotic speedup could be found for QUBO, the same could be said for problems in other complexity classes, i.e. BQP?

  257. Scott Says:

    Nobody Special #256: First, congratulations on leaving the 2^8th comment on my 2^5th birthday! 🙂

    Second, when suitably generalized to an arbitrary number of spins, QUBO is NP-complete. That means that, in principle, you could map any other NP search or optimization problem onto QUBO, with “only” a polynomial blowup in the instance size (a toy example of such a mapping appears after the two points below). The two issues with that are:

    (1) As I keep saying, there’s no evidence yet of any speedup whatsoever compared to simulated annealing, even for D-Wave’s “native” QUBO problem.

    (2) Even if (per your question) there were a small speedup for QUBO, that speedup could easily be wiped out by the polynomial overhead in mapping other, more useful NP problems onto QUBO.
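
    (Here’s the toy example promised above; it’s purely illustrative and has nothing to do with D-Wave’s actual software: Max-Cut, an NP-hard problem, mapped onto QUBO, with the tiny instance then solved by brute force.)

        import itertools

        def maxcut_to_qubo(edges):
            # Max-Cut -> QUBO: an edge (u, v) is cut iff x_u XOR x_v = 1, and for
            # x in {0, 1} the expression x_u + x_v - 2*x_u*x_v equals that XOR.
            # So maximizing the cut = minimizing -sum over edges of that expression.
            Q = {}
            for u, v in edges:
                Q[(u, u)] = Q.get((u, u), 0) - 1
                Q[(v, v)] = Q.get((v, v), 0) - 1
                Q[(u, v)] = Q.get((u, v), 0) + 2
            return Q

        def qubo_value(Q, x):
            return sum(c * x[i] * x[j] for (i, j), c in Q.items())

        def brute_force_min(Q, n):
            # Exact minimum by enumeration; only sensible for toy instances.
            return min((qubo_value(Q, x), x)
                       for x in itertools.product((0, 1), repeat=n))

        # A 4-cycle has maximum cut 4, so the QUBO minimum is -4.
        Q = maxcut_to_qubo([(0, 1), (1, 2), (2, 3), (3, 0)])
        print(brute_force_min(Q, 4))  # (-4, (0, 1, 0, 1))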

  258. John Sidles Says:

    Scott stipulates  “For me personally, the central question is whether or not I can at least see a “straight path forward” to getting an asymptotic [what does this word mean?] speedup over any possible classical algorithm, under mathematical conjectures that I believe, and assuming quantum mechanics continues to be valid.”

    Because different STEM cultures assign very different technical meanings to the term “asymptotic”, this “central question” remains meaningless until further definitions are given.

    A Test Question  Today’s VLSI processors are 10^8++ times faster than the mechanical processors of Babbage and Turing, and they access 10^12++ times more memory. Shall we say that their net computational capacity is asymptotically the same?

    Conclusion  Definitions of asymptotic computational capacity that conflate 19th century mechanical computers with 20th century electronic computers are well-posed in a strictly mathematical sense (of course), and yet these same definitions have only marginal practical utility in scientific and engineering discussions … and in public discussions, insistence upon strict mathematical definitions can be grossly misleading.

    For similar reasons, mathematical definitions that serve to strictly conflate VLSI computational performance with D-Wave computational performance have only marginal practical utility and in public discussions can be grossly misleading.

  259. Anonymous Says:

    Here is a basic question that confuses me: why can’t we reduce factoring to the D-Wave problem? Is it because there is no known efficient reduction from factoring to it? I.e., is the decision version of their problem a possible NPI problem? If yes, it seems from the interest from Google and others that the class of problems efficiently reducible to it should be quite interesting theoretically. How does it compare to other known classes? Is my understanding correct that it seems to be inside BQP intersect NP, that it contains P, and that D-Wave’s opinion is that it is not contained in BPP?

  260. Anonymous Says:

    Happy Birthday. Hope you have a good one.

  261. Nobody Special Says:

    Thanks Scott #257 (and happy birthday #32!). Yes, I am completely with you: D-Wave’s device as it stands seems to be inferior not just to classical machines but to classical machines which are several orders of magnitude cheaper (I have computing power comparable to that used in that paper in my garage). I think what I was getting at, though, was that other problems commonly associated with quantum computing are not necessarily solved *EVEN* if DWave is somehow successful. I.e., integer factorization is not known to be NP-complete, so DWave’s current machine couldn’t be adapted to factor integers with the same hypothetical speedup. So I would assume then that some other Hamiltonian would have to be used? In which case, is there any guarantee that a Hamiltonian for solving integer factorization would be equal to or better than running Shor’s on a gate-model machine? (Aside: I sometimes find it useful, when attempting to get through the layers of misconceptions that people have about QC, to point out that there are already known cases – such as factoring small integers – where classical machines outperform QC.)

  262. Nobody Special Says:

    Anonymous #259: I believe that if integer factorization were as hard as the D-Wave problem (i.e., NP-hard), it would mean that NP = co-NP, which, while not as mind-curdling as P = NP, goes against expectations.

  263. Scott Says:

    John Sidles #258: If there were a “quantum” device that got an enormous constant-factor speedup over all existing “classical” devices, but there wasn’t any plausible path to getting a super-constant speedup over existing computers, then I personally would be inclined to call the device an exciting new form of classical computing, rather than quantum computing. My argument boils down, once again, to the fact that transistors already heavily depend on quantum mechanics, but we don’t call our existing computers “quantum computers” as a result. So the hypothetical device we’re talking about could be seen as “just(!) a super-transistor.” I prefer the term “quantum computing” to retain its currently-understood meaning; if we want to discuss possibilities like that “quantum super-transistor” then we should invent different terms for them.

  264. Scott Says:

    Anonymous #259:

      Here is a basic question that confuses me: why can’t we reduce factoring to D-Wave problem?

    We certainly can. But if we do, then there’s no reason whatsoever to expect any speedup (and even D-Wave might agree about that).

    Factoring is in NP, and is believed to be “NP-intermediate.” In particular, NP-complete problems can’t be reduced to factoring unless NP=coNP, but factoring can certainly be reduced to any NP-complete problem (including D-Wave’s QUBO problem).

    The issue is that no one has ever found reducing factoring to a general NP-complete problem, and then trying to solve the latter, to be a competitive way to factor numbers. When you do so, you “throw away” all the number-theoretic structure that makes factoring easier than many other search problems! For example, using the Number Field Sieve, it’s possible to factor an n-bit integer classically in ~2^{O(n^{1/3})} steps. But how is your SAT-solver supposed to know about that? Likewise, even if the D-Wave machine were capable of running Shor’s algorithm (which it isn’t), if you first reduce factoring to QUBO and then run the adiabatic algorithm on the latter, the algorithm isn’t going to “magically recognize” that the QUBO instance came from factoring and that it can therefore do something much faster. Instead, it will be stuck doing an exponential brute-force search, and I would be astonished if it were anywhere near competitive with clever classical algorithms like the Number Field Sieve.
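
    (A toy illustration of the point, purely illustrative and involving no annealer: encode factoring as minimizing the cost function (N - pq)^2 over the bits of p and q, and the “solver” is reduced to a structureless search, blind to all the number theory.)

        import itertools

        def factor_by_search(N, bits=4):
            # Factoring as optimization: minimize (N - p*q)^2 over all assignments
            # of the bits of p and q; the cost is 0 iff p*q == N. The search uses
            # no number-theoretic structure at all, just brute force over
            # 2^(2*bits) candidates, which is what an annealer would be stuck doing.
            best = None
            for pb in itertools.product((0, 1), repeat=bits):
                for qb in itertools.product((0, 1), repeat=bits):
                    p = sum(b << i for i, b in enumerate(pb))
                    q = sum(b << i for i, b in enumerate(qb))
                    if p <= 1 or q <= 1:
                        continue  # skip the trivial factorizations
                    cost = (N - p * q) ** 2
                    if best is None or cost < best[0]:
                        best = (cost, p, q)
            return best

        print(factor_by_search(15))  # (0, 5, 3): it works, but it learned nothing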

    On your last question, I’d say that we still don’t understand what the D-Wave machine does, well enough to be able to talk about what complexity class might correspond to it. The D-Wave machine implements stoquastic Hamiltonians only, and those are believed to not give us all of BQP. If the temperature is too high, you might not even get anything outside of BPP. But the devices are a moving target: as soon as you figure out the limitations of one machine, D-Wave announces that maybe nothing you say is relevant to their next machine. So for the time being, the relevant question is just whether you can get any empirical speedup over classical computers in a fair comparison—not what complexity class their machine corresponds to.

  265. Greg Kuperberg Says:

    Factoring is morally in the intersection of NP and coNP. I say “morally” because it is not a decision problem. However, it is in TFNP with a unique solution, so it can easily be decomposed into decision problems in NP intersect coNP. Therefore there is no way for factoring to be NP-hard unless NP = coNP.
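
    To spell out that decomposition with a sketch (mine, not Greg’s; sympy’s primality test stands in for any polynomial-time test such as AKS): the i-th bit of, say, the smallest prime factor of N is a decision problem in NP ∩ coNP, because the unique prime factorization certifies the answer whichever way it comes out.

      from math import prod
      from sympy import isprime  # polynomial-time primality testing exists (AKS)

      def certified_bit(N, factorization, i):
          # Verifier: given a claimed prime factorization of N, either reject
          # the certificate or output the (unique) value of bit i of the
          # smallest prime factor. Uniqueness of factorization means the same
          # certificate settles the question both ways, placing the decision
          # problem in NP intersect coNP.
          if prod(factorization) != N or not all(isprime(p) for p in factorization):
              return None  # invalid certificate
          return (min(factorization) >> i) & 1

      print(certified_bit(21, [3, 7], 0))  # -> 1 (smallest factor 3 has bit 0 set)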

  266. Nobody Special Says:

    Further to the point about factorizing integers adiabatically. This paper here: http://arxiv.org/pdf/0808.1935.pdf shows an algorithm that appears to scale quadratically (at a 1/8 probability of success, instead of the 2/3 that is conventional for BQP), which might suggest polynomial scaling. Something is off in the comparison, though: either Shor’s O(log(n)^3) simply scales better than O(n^2), or the adiabatic algorithm’s success probability is so low that, boosted to 2/3, its scaling would look much worse. Perhaps we can’t assume that problems solved with an adiabatic device can be solved at the same complexity as on a gate-model device?

  267. Scott Says:

    Nobody Special #266: Well, the core of the matter is simply that I don’t believe one can extrapolate the numerical simulations from that paper! All of my experience with similar situations, and everything I know about the workings of the adiabatic algorithm (and its total “ignorance” of the factoring problem), leads me to predict that the curve will bend and become exponential when you get up to larger instances. I’d put $50,000 on that prediction, except that Dana has forbidden me from making any more such bets. 😀

  268. Anonymous Says:

    Thanks Scott.

    Can’t one make the same argument against any non-special purpose machine model?

    You are a lucky man Scott, cheers to Dana. 🙂

  269. Michael Bacon Says:

    “I’d put $50,000 on that prediction, except that Dana has forbidden me from making any more such bets”

    A wise woman. You were lucky to find her. Happy birthday!

  270. Alex Selby Says:

    Without commenting on D-Wave’s “quantumness”, I doubt it is currently useful as a computational device. A proof that it is dominated would be to find a simple program that outperforms D-Wave Two on its native QUBO problem, which might be doable because the Chimera graph may be a bit too “thin” to be useful, at least at this size.

    If you fix the 256 horizontally-connected bits you can trivially optimise over the 256 vertically-connected bits, and vice-versa, and you can also efficiently optimise over one or two columns or rows of K4,4s. These are very large neighbourhoods and if you started with a random arrangement and did optimisations like this, then repeated the process a few times, I wouldn’t be surprised if you hit the global minimum at least as often as D-Wave Two manages and in a similar running time.
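
    To illustrate how cheap those moves are (a rough sketch of my own, not Selby’s actual code): once the horizontally-connected spins are frozen they act as effective local fields on the vertical ones, and each column reduces to an independent one-dimensional Ising chain that dynamic programming minimises exactly.

      def min_ising_chain(h, J):
          # Exactly minimise sum_i h[i]*s_i + sum_i J[i]*s_i*s_{i+1}
          # over s_i in {-1,+1} by dynamic programming along the chain.
          best = {-1: -h[0], +1: +h[0]}  # best[s] = min energy with spin 0 = s
          back_pointers = []
          for i in range(1, len(h)):
              new_best, back = {}, {}
              for s in (-1, +1):
                  cands = {p: best[p] + J[i - 1] * p * s for p in (-1, +1)}
                  p_opt = min(cands, key=cands.get)
                  new_best[s] = cands[p_opt] + h[i] * s
                  back[s] = p_opt
              best = new_best
              back_pointers.append(back)
          s = min(best, key=best.get)
          spins = [s]
          for back in reversed(back_pointers):
              spins.append(back[spins[-1]])
          spins.reverse()
          return min(best.values()), spins

      # A 4-spin antiferromagnetic chain with a field on the first spin:
      print(min_ising_chain([1.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0]))
      # -> (-4.0, [-1, 1, -1, 1])

    An outer loop would then alternate roles (freeze the vertical spins, absorb them into effective fields on the horizontal ones, re-solve), restarting from random configurations: the large-neighbourhood local search described above.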

  271. SW Says:

    I read Smolin & Smith’s paper and I think it’s interesting. I agree with their conclusion that just showing the bimodal distribution does not conclusively prove the existence of entanglement and thus quantum speedup of D-wave’s device. What I’d like to see is a much more thorough study of the classical model, perhaps using it to produce much of the other plots in Boixo & Troyer’s paper, including for example its performance with respect to hardness (problem gap size).

    Another, unrelated thought: I’m reminded of http://arxiv.org/abs/1204.5789, about a system consisting of ~300 ion-qubits, capable of both implementing the Ising spin model and high-fidelity quantum control. It would be very interesting to try to run the D-wave problem on that system. Of course, the system layouts are very different and I’m not sure if the ion system right now allows as much individual control as the D-wave system does, but wouldn’t it be useful if it did …

  272. Henning Dekant Says:

    Scott, my advice would be to simply bet things like maple syrup and such; that’s what I do to avoid spousal repercussions 🙂

    SW #271, indeed I was wondering about this ion-qubits model as well. (BTW the associated press releases also hyped the QCness of this system.)

  274. Greg Kuperberg Says:

    SW – No, it wouldn’t be very interesting to run the D-Wave problem on some other set of qubits. The D-Wave problem studied by McGeoch and Wang is a contrived optimization problem designed for D-Wave’s device to do as well as possible. It’s almost simply a challenge to simulate D-Wave’s device faster than it can simulate itself. Out in the real world, it’s way too early for computational performance benchmarks for (toy) quantum computers. It makes sense to have some kind of benchmarks, like qubit fidelity and so on, but only benchmarks that reflect that only primitive pieces of quantum computers are being built.

    No, the right answer to D-Wave’s optimizations is conventional software on conventional computers — but not so conventional as to be inept, like demanding that CPLEX achieve more than D-Wave’s device itself does. It would be ideal to beat D-Wave with a smartphone app. Since they don’t care about quantum theory or computer science theory, only performance, that’s the kind of benchmark that they deserve.

  275. Greg Kuperberg Says:

    Meanwhile the Wall Street Journal and NPR join the ranks of loose reporting on D-Wave. The WSJ article does divulge that D-Wave has raised more than $100 million so far. Probably with all of this good publicity, more money will roll in. It does seem to be the most expensive quantum computing project in the world.

  276. Alex Selby Says:

    Having looked at the McGeoch paper some more, there are various technical points I don’t understand, and some others where I think it would be helpful if they gave more information.

    In the native QUBO section (3.2), it would be ideal if they had provided the set of tests they had carried out so that their results can be replicated and tested. It is possible to regenerate random instances of a similar problem, but it is unclear which graphs were used (even the 439 qubit graph is unspecified), and in any case it is better to compare against the same instances since some are much harder than others. So it would be helpful to see the results broken down by problem instance, though this would require a data supplement to the paper. (Same remark applies to the other sections too.)

    In the Weighted MAX 2SAT section (3.3) I don’t understand why they don’t give the running times taken by D-Wave. The D-Wave timings are replaced here by estimates of proxies for timings, namely a comparison of objective function evaluations or iteration counts. Admittedly I don’t completely understand this, but I can’t see how these measures could both make sense, and it looks like they want the final comparison to be independent of the D-Wave hardware query time which seems to be a different measuring philosophy from the other sections. (I presume the phrase “we observe the median ratio of Blackbox iteration counts to TABU iteration counts is 5.2” is a typo for the reciprocal.)

    In the QAP test section (3.4), I don’t understand why they don’t give the other solvers the same version of the problem that they have given Blackbox/D-Wave. It would also be helpful for them to specify what encoding of the problem they have used in the D-Wave case (and in the other cases) so that their results can be replicated.

    In section 4, I don’t understand why they calculate the expected number of samples it takes to hit the best sample you are going to see out of 1000 samples. This quantity is minimised if your device always produces the same sample value (regardless of how bad this value is), so it doesn’t calculate the expected number of samples it takes to hit the optimal value. It looks like they are using their answer as if it were a running time, but that’s not reasonable since you don’t know you’ve hit the minimum value you are going to hit in 1000 samples until you’ve actually done the 1000 samples, so you can’t stop early without taking a hit in the quality of answer.

    This approach might work if it turned out that the best answer is returned by D-Wave more often than other answers. It is possible this is happening, but this is not mentioned in the paper. Figure 5 shows that non-best sample values can have higher multiplicity than the best one, but it is possible that the non-best sample values all arise from different final states, whereas the best sample value might be from the same state. In this case it would be possible to halt early, but there isn’t any indication that this is the method considered.
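
    A toy calculation (my own, with made-up devices) makes the objection concrete: a device that always returns the same mediocre value “hits its best sample” immediately, while a strictly better device that occasionally finds the true optimum scores far worse on this statistic.

      import random

      def samples_to_first_best(samples):
          # Position of the first occurrence of the best value seen in the
          # batch -- the quantity the paper appears to treat as a running time.
          return samples.index(min(samples)) + 1

      random.seed(0)
      runs = 2000
      # Device A always returns the same mediocre value 5.
      a = sum(samples_to_first_best([5] * 1000) for _ in range(runs)) / runs
      # Device B returns 5 too, but finds the true optimum 0 about 1% of the time.
      b = sum(samples_to_first_best([0 if random.random() < 0.01 else 5
                                     for _ in range(1000)]) for _ in range(runs)) / runs
      print(a, b)  # A scores ~1.0, B scores ~100 -- yet B is clearly the better solver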

  277. John Sidles Says:

    Scott’s preference “I prefer the term ‘quantum computing’ to retain its currently-understood meaning; if we want to discuss possibilities like that ‘quantum super-transistor’ then we should invent different terms for them.”

    In this regard please let me commend to Shtetl Optimized readers certain highly-rated discussions on Math.StackExchange and MathOverflow that have titles like:

    Q  What is the difference between a variety and a manifold? (recommended)

    This question recurs perennially; examples (that a search will find) include:

    Q  How do professional algebraic geometers think about varieties?

    Q  Is there a theory that generalizes both varieties and manifolds?

    Q  Can manifolds be uniformly approximated by varieties?

    Q  Is it possible to compute if an algebraic variety is a differential manifold?

    Q  Variety vs. Manifold?

    Q  Algebraic varieties which are topological manifolds

    Q  Algebraic Varieties which are also Manifolds

    Q  Origin of terms “flag”, “flag manifold”, “flag variety”?

    Q  Compact Kaehler manifolds that are isomorphic as symplectic manifolds but not as complex manifolds (and vice-versa)

    Q  What’s the difference between a real manifold and a smooth variety?

    These questions point to a significant obstruction in quantum computing: there is no presently envisioned technology for quantum dynamical computing that uses *all* of Hilbert space, yet neither is there a unified mathematical terminology for describing the restrictions that various computational approaches impose.

    In particular, the simple and natural question What is dynamical state-space of D-Wave’s computer? belongs to a class of mathematical questions that (experience shows) is exceedingly tough to answer! Because (as it seems to me) this dynamical state-space is neither a generic vector/Hilbert space (which is too big and boring) nor a generic manifold (whose singularities crush both trajectories and understanding), but rather the happy middle ground of generical varietal state-spaces (whose pulled-back monodromies are thermodynamically, informatically, algebraically, and topologically both fertile and benign).

    Observation  Quantum field theorists are embracing varietal mathematical frontiers; can quantum information theory lag behind?

    Conclusion  Finding in-depth answers to the various thought-provoking questions related to D-Wave’s achievements — in particular the questions that have been asked by Peter Shor (comment circa #149) and by Scott Aaronson (comment circa #219) — almost certainly will require mathematical apparatus that is not part of the standard quantum information theory curriculum.

    Prediction  We may reasonably (and optimistically) anticipate that in coming years, stronger QIT mathematics will convey more light and less heat to discussion of D-Wave’s achievements. 🙂

  278. Greg Kuperberg Says:

    Alex – It would of course be ideal to straighten out these matters. I would ask whether McGeoch and Wang are available to explain the matter in person.

  279. Rahul Says:

    @Greg Kuperberg #275 says:

    “The WSJ article does divulge that D-Wave has raised more than $100 million so far. Probably with all of this good publicity, more money will roll in. It does seem to be the most expensive quantum computing project in the world.”

    Maybe that’s strong evidence that people value “fundamentally shaky but possibly useful” over “theoretically sound but likely useless”.

  280. Greg Kuperberg Says:

    Rahul – It means that people value “fundamentally shaky and therefore likely useless, but theatrically hyped” over “theoretically sound and therefore possibly useful, and explained properly”.

    I was just thinking about the whole issue this morning. It seems that we live in a new age of patent medicine. We’ve got D-Wave and Tesla and Virgin Galactic and Bitcoin. Each one has a lot of razzle-dazzle and has managed to pull in real money. Each one is dangerous to bet against, because you’re not only betting against the idea itself, you’re also betting that everyone will agree with you that the ideas are bad.

  281. foobar Says:

    @Greg#280

    Why is Tesla a bad idea?

  282. James Says:

    Greg #280:

    You are too extreme in your views of how science should always trump engineering. This is a classic clash between differently set minds, but in today’s dynamic economy nobody can wait for you to fully figure out your precious theory before you can give them the green light to start building.

    People are willing to take risks to just build something and bet that it might work. When the potential payoff is so huge, the risk might be justified. If it ends up failing, oh well, maybe the big payoff will come on the next bet.

    Ironically, you and Scott might actually cause the QC winter that you are so afraid of. What if Dwave is currently the only chance for humanity to get started with at least some kind of rudimentary practical QC? What if your extreme views end up prevailing and killing the honest effort of these guys? What if the next such effort doesn’t come for another couple of centuries after this? Think about how history will view you then.

  283. Greg Kuperberg Says:

    foobar – It depends. If the idea is to sell electric cars at a profit to rich liberals, then it could be a good idea. If the idea is to have any significant effect on the auto industry or on greenhouse gases from automobiles, then Tesla has no idea for how to do that. Electric cars don’t even have a better carbon footprint per mile, at the average location, than a Toyota Prius. The Prius alone has done 100 times more than all electric car models put together. Order of 100 times as many have been sold, and the ones that are sold are used a lot more.

  284. John Sidles Says:

    Greg Kuperberg says (circa #280): “We live in a new age of patent medicine. We’ve got D-Wave and Tesla and Virgin Galactic and Bitcoin …”

    LOL … But Greg, isn’t it objectively the case that all four of those pioneering enterprises already have demonstrated their technical milestones far more fully than quantum computing’s QIST Program?

    Why is this, the world (STEM students especially) wonders?

    This is not a facetious question … the (slow) emergence of good answers to it will (as it seems to me) substantially shape the course of the 21st century STEM enterprise … that is why it is a very good question to ask.

  285. Rahul Says:

    Greg Kuperberg #280:

    Do you really classify QC as “possibly useful”?

    My reading of Scott and other writers is that an honest assessment of QC is along the lines of “fundamentally exhilarating, perhaps revealing some deep insights into the workings of our world etc.” An argument for QC research seems more genuine when analogous to proving Fermat’s Last Theorem or some other significant conceptual problem. No doubt we might discover exciting things along the way, but the expectation of a useful applied quantum computer doesn’t seem realistic.

    From an applied perspective, I got the feeling that it was safe to regard QC as effectively useless. At least in the short and medium term and, of course, no one can predict the long term.

    From what we know right now, is there any indication that a working Quantum Computer (one which offers significant super-classical speedups) is in our future any time soon? And by “soon” I’ll be pretty generous, say the next two or three decades? Saying “we don’t know” is OK, but let’s not pretend that we have any evidence or reasonable hope that this will happen.

    AFAIK, all those dreams of using QC (a journalistic favorite) to model proteins, design materials, optimize supply chains etc. are just idle speculation. From where we are right now (factoring 15 at immense expense, perpetually harassed by decoherence and non-scaling), it’d probably be just as reasonable, probabilistically, to expect to meet friendly, super-advanced aliens in those decades who will teach us to solve those same applied problems.

  286. John Sidles Says:

    ———————–
    John Smolin and Graeme Smith definitionally proclaim (arXiv:1305.4904v1)

    Quantum annealing involves no randomness or temperature, at least in the ideal. Rather, it is a particular type of adiabatic evolution. … Quantum annealing and simulated annealing are very different procedures.

    The D-Wave machine is made of superconducting “flux” qubits. Because of the high decoherence rates associated with these flux qubits, it has been unclear whether the machine is fundamentally quantum or merely performing a calculation equivalent to that of a classical stochastic computer.
    ———————–

    Wal, thar’s yer problem, ma’am! Yer research engine’s definitions are so dang restricted, it’s amazin` it runs at all! We’ll jes` pour some Grothendieck oil into the sump between th` Newton base-pan and the Hilbert manifold, `an soon she’ll be purrin` like a kitten! 🙂

  287. David Poulin Says:

    Smolin and Smith (http://arxiv.org/abs/1305.4904) very convincingly argue that the bimodal distribution observed on the D-Wave device and QMC simulations—but not in the simulated annealing—should not be taken as evidence of quantumness of the D-Wave device. Rather, it is evidence of the quasi-deterministic nature of the algorithm (obviously). They conceived a classical algorithm by relaxing spins into rotors, ‘adiabatically’ switching off a transverse field, and solving the classical equations of motion. This algorithm (with and without noise) produces a bimodal distribution just like the one observed in the D-wave device.

    In my opinion, this takes away most of the credibility of the conclusion that the D-wave device behaves quantumly. All bets are off.

  288. Scott Says:

    Rahul #285:

      My reading of Scott and other writers is that, a honest assessment of QC is along the lines of “fundamentally exhilarating, perhaps revealing some deep insights into the workings of our world etc.” … but expectation of an useful applied quantum computer doesn’t seem realistic.

    No, not quite. I think that a scalable QC would absolutely have practical applications—foremost among them (besides codebreaking, of course) being the simulation of quantum physics and chemistry. It’s also likely that the adiabatic algorithm and Grover-like algorithms can give us modest speedups over the best classical algorithms for combinatorial optimization problems—but that’s partly an empirical question, and I’d say that the evidence is not yet fully in. I also think it’s quite possible that I’ll see practical QCs within my lifetime.

    On the other hand, given the widespread interest in the topic (both among scientists and laypeople), I think it’s important to stress that
    (a) the applications of a scalable QC wouldn’t be nearly as widespread as many popular articles claim, and
    (b) I see no compelling evidence that we’re on the “verge” of building practical QCs—it might be many decades or even longer.

    I also like to stress that I, personally, am much more interested in the scientific and conceptual questions raised by QCs than in their applications! But I wouldn’t really begrudge someone else if they cared more about the applications. 😉

  289. Michael Bacon Says:

    John@286,

    That’s your takeaway from the Smolin and Smith paper? Seriously?

  290. foobar Says:

    @greg#283

    Presumably Tesla’s objective, like all corps, is profit. But it is not clear to me that there will be no effect on the environment or the auto industry.

    The problem with the Prius is that the target demographics are limited; most people would never consider that class of car. And while it is true that the Model S suffers from the same problem, Tesla will be introducing models that have much broader appeal, namely the SUV and family sedan. This larger market presence would presumably have a greater effect on the environment, notwithstanding the fact that Tesla vehicles are less environmentally friendly than the Prius.

  291. Rahul Says:

    Scott says:

      No, not quite. I think that a scalable QC would absolutely have practical applications—foremost among them (besides codebreaking, of course) being the simulation of quantum physics and chemistry.

    Well, I agree. But my point was whether we’ll ever make one!

    It’s like discussing whether Time Travel or Teleportation would have awesome practical applications. Or if the Yeti might make a good mountain soldier. Probably, but will we ever realistically have any of those?

    One shouldn’t get too enamored of a hypothetical device when we have no idea whether the technology is remotely practical, and are not close, in any sense, to making one.

  292. Rahul Says:

    foobar #290:

      The problem with the Prius is that the target demographics are limited; most people would never consider that class of car.

    I find that statement puzzling when the current statistics show that the Prius is sold in 80 countries, has had a production history of more than a decade and sold close to 3 million units so far.

    No, the Model S does not suffer from the same problems as the Prius. It suffers from a MUCH BIGGER problem. It’s a model that has barely sold 10,000 units so far, never mind that it sells for upwards of $65,000 (versus ~$25,000 for the Prius).

    And sure Tesla motors might make a winning super-car in the future. So might Toyota!

  293. Michael Bacon Says:

    Rahul@291,

    “But my point was whether we’ll ever make one!”

    Unless there is some limitation imposed by physics which we’re not aware of (Scott and Gil and others have discussed this at length), then my take (for what it’s worth), is that it’s “simply” an engineering problem. 🙂 Of course, even if it turns out that a useful QC can’t be built, then I agree with you that the scientific questions addressed are still well worth the effort.

  294. Joe Fitzsimons Says:

    Rahul: Will we have large scale quantum computers in a year? No, probably not. But I am typing this comment on a device that makes the communicators from Star Trek look hilariously antiquated. Frankly, an IBM quantum computer doesn’t look particularly implausible to me. Hell, if Boston Dynamics can basically build the terminator …

  295. Henning Dekant Says:

    Greg, #280, the idea that D-Wave is about as terrible as Tesla and Virgin Galactic is certainly novel and puts your views into perspective (I leave out Bitcoin as this is not an effort by a single company).

  296. Greg Kuperberg Says:

    Henning – Actually Tesla is a lot more credible than D-Wave. D-Wave and Virgin Galactic could be about the same.

  297. Peter W. Shor Says:

    I don’t see why Tesla is so terrible. They’re selling to the upper end of the market, but this is something that Rolls Royce, Jaguar, and Ferrari have also done, and I don’t think any of these companies is terrible, either. I’ve only heard one first-hand report about Tesla, and if I am to believe this, they make excellent cars.

  298. Michael Bacon Says:

    Peter@297,

    Yeah, they appear to be making very good cars — from both performance and style perspectives. And, as battery life improves and prices (hopefully) come down, I could see them grabbing an increasing share of the market.

  299. Greg Kuperberg Says:

    Peter – Just like Rolls Royce, Tesla makes an excellent boutique product. By that standard, yes, they could succeed. But they insist that isn’t the master plan. Their stated goal is that they will be the new automotive revolution to replace gasoline by electricity in general. After that their goal is to also make the electricity carbon-free with a twin investment in solar PV power generation:

    http://www.teslamotors.com/blog/secret-tesla-motors-master-plan-just-between-you-and-me

    In other words, they are selling a dream. They have to, because the target customer is a rich environmentalist who wants that dream, and not a rich materialist who wants a trophy car.

    As judged by their stated goal, the Toyota Prius has accomplished about 100 times more.

  300. Douglas Knight Says:

    Scott 288, in what sense are Grover-like algorithms an empirical question? Have people written down quantum algorithms that are inspired by Grover’s algorithm, but which take advantage of the structure in the problem, that they can’t analyze?

  301. Scott Says:

    Douglas #300: The quadratic speedup that Grover’s algorithm gets over classical brute-force search is, of course, provable (as well as provably optimal). All the complications arise from the fact that in real life, pure brute-force search is essentially never the best one can do! You can do backtrack search, simulated annealing, etc. etc. So then the question arises of whether Grover still does better than the most clever classical algorithm one can think of. And, in the likely event that it doesn’t, can one combine Grover’s algorithm with a classical backtrack or heuristic algorithm in order to get the advantages of both? In some cases, a clever classical algorithm has “already squeezed much of the juice” out of the problem, so that combining with Grover yields diminishing returns. As a simple example, by using the classical birthday attack, you can find a collision in a hash function with a range of size N, by examining ~√N hashes on average. Quantumly, if you combine Grover with the birthday attack, then the complexity improves to ~N^(1/3) but no better than that — so compared to classical, you get a 2/3-power speedup, rather than the square-root speedup of “pure Grover.” The same happens for the general problem of finding a local optimum. On the other hand, it’s conceivable that the adiabatic algorithm could get a polynomial speedup over simulated annealing that’s even better than quadratic in some practically-relevant cases—but at present, that remains a speculation awaiting evidence.
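
    To make those exponents concrete, a quick back-of-the-envelope comparison (my own; constants ignored, and “BHT” is the Brassard–Høyer–Tapp collision algorithm that combines Grover with the birthday attack):

      # Query counts for collision finding in a hash with range size
      # N = 2^bits, ignoring constant factors.
      for bits in (64, 128, 256):
          print(f"{bits}-bit range: classical birthday ~2^{bits / 2:.0f}, "
                f"quantum BHT ~2^{bits / 3:.1f}")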

  302. Henning Dekant Says:

    Greg #299, given that you seem to have general issues with entrepreneurs who sell visions, I think you must have really *despised* Steven Jobs in the early days.

  303. Michael Bacon Says:

    Greg@299,

    “They have to, because the target customer is a rich environmentalist who wants that dream, and not a rich materialist who wants a trophy car.”

    You’ve got to take a step back and look at this from just a slightly different perspective. I think they’re shooting for rich materialists as well. In fact, I think that, just as with asteroid mining, the market they see isn’t fundamentally about rich kids with hobbies. This isn’t what Elon is thinking about at all, nor, I dare say, are his investors.

  304. James Says:

    Henning #302:

    I was thinking the same thing. Greg seems to be the nemesis of innovation and technological development. Thank god he wasn’t alive 200 years ago or the industrial revolution would have probably never happened.

  305. Vadim Says:

    I think that Tesla is building up their expertise and technology while also charging an early-adopter fee to ensure their company can stay afloat through the early, uncertain times that every new company goes through. The rich environmentalists are beta-testing their product and paying quite a bit for the privilege, and there being fewer of them enables Tesla to better work the kinks out – imagine having it in true mass production and finding out there’s a flaw requiring an expensive recall of every vehicle. When the time comes to target it at the masses, assuming they get to that point, they’ll be in the game, while others who might not have taken such a pragmatic approach might be on the sidelines.

    Still though, I imagine everyone would agree that the Tesla story is infinitely less controversial than D-Wave’s; at least everyone seems to agree that Tesla is actually making electric cars.

  306. Michael Bacon Says:

    Vadim@305,

    “Still though, I imagine everyone would agree that the Tesla story is infinitely less controversial than D-Wave’s; at least everyone seems to agree that Tesla is actually making electric cars.”

    LOL. Now, that’s surely a huge difference. 😉

  307. Henning Dekant Says:

    “Still though, I imagine everyone would agree that the Tesla story is infinitely less controversial than D-Wave’s; at least everyone seems to agree that Tesla is actually making electric cars.”

    Are we sure about this agreement?

    Maybe one should make the case that it’s a bit premature to call it a car if it doesn’t have an engine? Maybe we should rather coin a different term, after all the attribute “electric” is also a bit generic. I suggest “battery operated vehicle” i.e. BOV. 😉

  308. Douglas Knight Says:

    Scott, thanks for the example of a Grover+ algorithm, but you explained that it is well-understood, theoretically, not a question, let alone an empirical one. Yes, the adiabatic algorithm is hard to analyze, so it seems likely to be answered empirically, but you said that there is also an empirical question about the quality of Grover-like algorithms for combinatorial optimization.

    I suppose that until we have quantum computers, there isn’t a market in hard-to-analyze quantum algorithms (except for ones like quantum annealing, which suggest easier architectures). Maybe it’s easy to write down algorithms whose value is best evaluated empirically, but no one does it because it is, for now, a dead end. Is this what you mean?

  309. Scott Says:

    Douglas #308: I’m sorry if my response was unclear. The empirical question about Grover’s algorithm concerns how much additional speedup it will give you in practice—over and above backtracking, simulated annealing, and all the other sensible things one would try classically—when it’s layered on top of those other things. Admittedly, this is not as hard a question as understanding the performance of the adiabatic algorithm, and it’s probably one that we could get a pretty good handle on today through a combination of (new and old) theory and simulations on classical computers. A good PhD thesis for someone!

    And no, it’s certainly not true that it’s a “dead end” to write down algorithms whose value is best evaluated empirically, or that no one does it (!!). Simulated annealing, the adiabatic algorithm, genetic algorithms, tabu search, Quantum Monte Carlo, survey propagation, and DPLL are all famous and justly-celebrated examples of such algorithms.

  310. Scott Says:

    James #282/#304: I find your criticisms of Greg completely unfair. He’s not saying that engineering must always take a backseat to theory; rather, he’s saying that the best, most useful, most practical engineering is the kind that’s informed by theory rather than contemptuous of it. Maybe you disagree, but that’s a very different claim than the one you’re ridiculing.

  311. Scott Says:

    Rahul #291:

      [Discussing the applications of QC is] like discussing whether Time Travel, or Teleportation would have awesome practical applications. Or if the Yeti might make a good mountain soldier. Probably, but will we ever realistically have any of those?

    OK, let’s consider your examples one by one.

    Time travel (into the past, which I assume is what you meant) would almost certainly violate the laws of physics, although we don’t know for sure. For QCs, by contrast, it’s their impossibility that would require a serious revision to known physics.

    Nevertheless, I did once coauthor a paper about the awesome practical applications of time travel—and more interestingly, about the problems you still couldn’t solve, even with a time-travel-enhanced quantum computer! The mathematical tools we developed for that ended up having other applications.

    As for teleportation: well, we do have fax machines, and even quantum teleportation (at least of individual atoms). If you meant instantaneous teleportation, then again that violates known physics (in fact, precisely as much as closed timelike curves would). If you meant the practical ability to take a physical object like a human body, encode it as pure information, and then reconstruct it on command given the information, then I’d say that’s a wonderful thing to think about the consequences of, and in fact I (like many philosophers, sci-fi fans, and singularity/LessWrong types) do think about the consequences of such things. But compared to QC, I’d say two differences are that (1) we don’t even have a well-developed theory with which to pose the questions, and (2) it seems even further from practicality.

    As for the Yeti, within the next century we’ll probably be able to genetically engineer something quite “Yeti”-like, if you ignore all the ethical impediments. (And there might even be a few existing humans who could qualify… 😀 ) If, on the other hand, you meant finding an “organic” Yeti rather than building your own, then of course that’s not a technological question at all, but only a question about the particular history of evolution on earth—one to which the answer seems to be negative.

  313. AV Says:

    Scott #311, I have never seen a rigorous proof of that idea:
    “For QCs, by contrast, it’s their impossibility that would require a serious revision to known physics.” Some subtle combination of known physical laws may prevent QC (in principle).

  314. Greg Kuperberg Says:

    Scott #310 – Actually James’ comments struck me as so over the top and credulous that it was hard to see them as so super unfair. (Not to mention that he is anonymous.) E.g. this gem:

    What if your extreme views end up prevailing and killing the honest effort of these guys?

    “Honest effort”, good grief, LOL. I can’t resist a recent quote from Vern Brownell of D-Wave:

    When in the history of computer science has there been something that’s 11,000 times faster than the previous technology?

    11,000 times faster than the previous technology. How’s that for an honest effort.

  315. Greg Kuperberg Says:

    AV #313 – In the middle of all of the scorn against D-Wave, it is easy to forget that most of us in this discussion are in fact convinced that QC is a good idea. Yes, in principle some subtle, unforeseen combination of the laws of physics could prevent quantum computing. You can never rule out the unforeseeable.

    But the thing is, most of the skeptics of quantum computing begin with certitude that this unforeseen 4th law of thermodynamics exists, and that it only needs some elaboration, or in some cases that it doesn’t even need any more elaboration. It’s only when they try to explain their intuition with publishable research that they have a hard time of it. At the serious level, quantum error correction is a formidable opponent of the skeptics.

    (And it is uncanny that D-Wave is running so many victory laps without yet any quantum error correction.)

  316. John Sidles Says:

    ———————–
    Greg Kuperberg expresses a consensus view  “Some subtle, unforeseen combination of the laws of physics could prevent quantum computing.”
    ———————–

    Or the obstruction(s) to quantum computing might arise from physics that is subtle albeit completely familiar!

    Works like Howard Carmichael’s three-volume An Open Systems Approach to Quantum Optics (1993) — which introduced the then-radical notion of “unravelling” (Section 7.4, p. 122) — Statistical Methods in Quantum Optics I: Master Equations and Fokker-Planck Equations (1999), and Statistical Methods in Quantum Optics II: Non-Classical Fields (2007), remind us that the low-energy quantum dynamics of Nature is quantum electrodynamics (QED).

    In sharp contrast, introductory textbooks — for example, the Feynman Lectures on Physics or Nielsen and Chuang’s Quantum Computation and Quantum Information — assume that the low-energy quantum dynamics of Nature is a strictly unitary flow on a strictly finite-dimensional and strictly spatially localized (and thus error-correctable) Hilbert space.

    How confident should we be that Feynman/Nielsen/Chuang models of quantum dynamical flows describe the low-energy physics of nature to the extraordinary degree of precision that is required even to implement scalable BosonSampling, much less scalable quantum computing?

    Nothing in Howard Carmichael’s works — or any other advanced text on quantum dynamics (known to me) — provides substantial mathematical, theoretical, or experimental grounds for confidence that QED as Nature’s fundamental low-energy physics is compatible with scalable quantum computing. As a litmus test, questions like “What physical mechanisms obstruct scalable BosonSampling experiments?” at present lack satisfactory answers, both in principle and in practice.

    Conclusion  Moderate optimism regarding the feasibility of “traditional” scalable quantum computing is entirely reasonable; outright confidence is less reasonable; near-certainty is unreasonable. And this sustained uncertainty is good news for D-Wave and the entire STEM community (young QIT researchers especially).

  317. Scott Says:

    John Sidles #316: I agree, Richard Feynman’s approach to quantum mechanics is way too naïve and simplistic for 21st-century STEM/QIT engineers. And the main reason is that it ignores QED, an exciting recent development about which Feynman appears to have known nothing.

  318. Rahul Says:

    Scott #317:

    Did that come with a sarcasm warning?

    I’m asking sincerely (in my naivete) but my sarcasm detectors were set off by “QED, an exciting recent development about which Feynman appears to have known nothing”

  319. Michael Bacon Says:

    John@316,

    “[S]ubstantial mathematical, theoretical, or experimental grounds for confidence that QED as Nature’s fundamental low-energy physics is compatible with scalable quantum computing”

    If you don’t get hung up on the word scalable, then perhaps: http://arxiv.org/abs/quant-ph/0004107

    Rahul@318,

    No warning label, but sarcasm nevertheless 😉

  320. Scott Says:

    Rahul #318: <sarcasm>Sarcasm? About Feynman’s ignorance of the theory he co-discovered, and his consequent inability to appreciate John Sidles’s profound insight, elaborated over dozens of blog comments, that QED violates the strict unitarity of QM? I don’t know what on earth you’re talking about.</sarcasm>

  321. John Sidles Says:

    ———————–
    Scott Aaronson proclaims (circa #317)  “Richard Feynman’s approach to quantum mechanics is way too naïve and simplistic for 21st-century STEM/QIT engineers.”
    ———————–
    LOL … 20th century STEM history provides plenty of scientific, technological, and mathematical reasons to accept Scott’s assessment at face value!

    One scientific (and comedic) reason  Feynman’s vehement (and utterly mistaken) public rejection of the Hanbury-Brown-Twiss effect (as vividly recounted by the first-person witness Venkatraman Radhakrishnan) reminds us that the pullback from field theory to finite-dimensional Hilbert space can be sufficiently subtle as to confuse even luminaries like Feynman.

    One technological (and optimistic) reason  Outstanding 20th century polymaths that included Norbert Wiener, John von Neumann and Richard Feynman all wrote extensively on the future development and fundamental physical limits of microscopy … and yet they failed utterly to appreciate the severity of the restrictions that (decoherent) radiation damage imposes, and they failed too to appreciate the feasibility of high-resolution imaging via the exchange of (coherent long-wave) magnetic resonance quanta. It is natural to wonder whether today’s polymaths are similarly overlooking transformational technological opportunities … and we can all hope so!

    One mathematical (and challenging) reason  In quantum field theory, a reasonably satisfactory mathematical appreciation of Feynman’s (empirical) “ghost rules” awaited the insights of Faddeev and Popov, which in turn drew upon insights in (classical) geometric dynamics that can be traced back to pioneers like Cartan, Kolmogorov, Arnol’d, and Mac Lane. It is natural (and encouraging, and even comforting) to reflect that unraveling the dynamical trajectories of D-Wave’s devices plausibly will require comparable algebraic, geometric, and informatic advances.

    Conclusion  The history of the 20th century STEM enterprise vividly shows us that (in the words of Gandalf the Grey) “Even the wise cannot foresee all ends.”

    These lessons from 20th century STEM history convey some mighty useful insights to young STEM researchers … and to 21st century STEM enterprises like D-Wave. 🙂

  322. James Says:

    Scott #310:

    What theory is Tesla contemptuous of? I think many people on here would agree that Greg has revealed himself to be an extremist wacko. His statements about dwave so far were looking semi-rational although definitely biased and inexplicably bitter, but I think the overall sense of normalcy was still preserved mainly because there is so much fodder on dwave.

    However, as Henning #295 delicately put it to him: “The idea that D-Wave is about as terrible as Tesla and Virgin Galactic is certainly novel and puts your views into perspective.”

    And your own credibility, Scott, is certainly not getting enhanced when you go out of your way to defend even the most ridiculous statements of this guy.

  323. Rahul Says:

    James #322:

    “I think many people on here would agree that Greg has revealed himself to be an extremist wacko.”

    I may not agree with all Greg writes, but his posts seem eminently reasonable to me.

  324. Vadim Says:

    I agree with Rahul. I find Greg’s posts, almost without exception, to be very interesting and thought provoking, doubly so when he says something I disagree with. Calling him an extremist wacko is just out of line, IMO.

  325. Bram Cohen Says:

    A general comment on D-Wave’s scoffing at theory: Lots of really big engineering advancements were the result of scoffing at the conventional wisdom and plowing through difficult challenges until eventually succeeding, but in all cases the conventional wisdom being scoffed at was engineering practice, and there was a clear underlying theory as to why the new approach was a good one.

    In a strict apples-to-apples comparison, the Boson Sampling experiments demonstrate much, much more progress towards practical quantum computation than D-Wave’s do, and arguably still would even if the results of D-Wave’s experiments were performing a lot better.

  326. John Sidles Says:

    ———————–
    Bram Cohen offers: “A general comment on D-Wave’s scoffing at theory …”
    ———————–
    Bram, can you provide some examples of “D-Wave’s scoffing at theory” (articles, preprints, or talks)?

    Isn’t academic “scoffing” at D-Wave’s technological demonstrations far more prevalent than the reverse? Whence this marked asymmetry?

    The practice of “scoffing” generically contributes little to academic discourse (as it seems to me) … and so one hopes that D-Wave largely or wholly refrains from this practice.

  327. Rahul Says:

    Bram Cohen #325:

    What is the range of problems that can be solved via Boson Sampling?

  328. James Gallagher Says:

    John Sidles #316

    It’s important to understand that QED could, theoretically, be formulated in a Schrodinger picture as evolution of a (finite) state vector via some Hamiltonian. The fact that we don’t know how to construct the Hamiltonian to get the correct time evolution doesn’t mean it doesn’t exist.

    QED doesn’t disobey the fundamental postulates of QM

  329. Scott Says:

    Bram #325:

      Lots of really big engineering advancements were the result of scoffing at the conventional wisdom and plowing through difficult challenges until eventually succeeding, but in all cases the conventional wisdom being scoffed at was engineering practice, and there was a clear underlying theory as to why the new approach was a good one.

    Couldn’t have said it better! Of course, if I said it, I’d be “scoffed at” myself as a self-serving, out-of-touch, ivory-tower fantasist, but I hope the creator of BitTorrent will have slightly more credibility on this question.

  330. Greg Kuperberg Says:

    Whereas D-Wave is the opposite. They notoriously scoff at theory, but their engineering is more conventional than they care to admit. (At least their engineering so far; we always have to heed the parable of the stone soup in this discussion.)

  331. John Sidles Says:

    ———————–
    James Gallagher observes (correctly): “QED [Feynman-style] doesn’t disobey the fundamental postulates of QM [Nielsen/Chuang-style]”
    ———————–
    Yes, but of course the converse implication does not follow logically (or even plausibly!).

    Nature strictly requires that (low energy) QM/QIT computational devices be constructed out of the electron, nucleon, and photon fields of QED field theory. This restriction ensures that the QED dynamics of electrons, nucleons, and photons can be computationally simulated (to any required precision) using the (scalably error-corrected) logic gates of QM/QIT. But this does not imply — by any simple logical or physical argument — that (scalably error-corrected) QM/QIT logic gates can be experimentally simulated using the QED dynamics of electron, nucleon, and photon fields.

    Common-sense conclusion  (QED ⊂ QM/QIT) ≢ (QM/QIT ⊂ QED)  🙂

  332. James Gallagher Says:

    John Sidles #331

    Feynman’s Path Integral formulation of Quantum Mechanics is mathematically identical to the Schrödinger (and Heisenberg) pictures of Quantum Mechanics. And QED is pretty clearly understood with path integrals – it’s how Feynman worked out how to do the calculations.

    So I assume you’re just talking about some engineering restriction (at low energy) rather than a fundamental problem – otherwise I don’t understand your argument.

  333. Henning Dekant Says:

    Scott #317, it really is a shame that Feynman never gets any respect. 😉

  334. Henning Dekant Says:

    James #322, unless Greg develops a habit of sloshing Tesla BOVs I really don’t think he should be called an “extremist wacko”.

  336. John Sidles Says:

    James Gallagher opines (circa #331):

    (A)  “Feynman’s path integral formulation of quantum mechanics is mathematically identical to the Schrödinger (and Heisenberg) pictures of quantum mechanics.”

    (B)  “And QED is pretty clearly understood with path integrals – it’s how Feynman worked out how to do the calculations.”

    (C)  “So I assume you’re just talking about some engineering restriction (at low energy) rather than a fundamental problem – otherwise I don’t understand your argument.”

    ——————

    James, assertions (A-C) each raise plenty of interesting questions, that are associated to a fascinating literature.

    Regarding (A)   Back in the 1950s and 1960s, there were plenty of prominent physicists who scathingly criticized path integral formalism on grounds that path integrals were not (in James’ phrase) “mathematically identical to the Schrödinger (and Heisenberg) pictures of Quantum Mechanics”:

    ——-
    “Path integrals suffer most grievously from a serious defect. They do not permit a discussion of spin operators in a simple and lucid way. … It is a serious limitation that the half-integral spin of the electron does not find a simple and ready representation.”
        — <some random 60s theorist>
    ——-

    Fortunately, articles like Lawrence Schulman’s much-cited “A Path Integral with Spin” (Phys Rev 176, 1968) subsequently set the record straight, and a concise recent survey of qubit-friendly path integral methods is Naoum Karchev’s “Path Integral Representation for Spin Systems” (arXiv:1211.4509v1, 2012).

    Regarding (B)   In regard to the strengths and limitations of path integral frameworks in general, an exceptionally engaging personal account is Gerard ’t Hooft’s “The Glorious Days of Physics: Renormalization of Gauge Theories” (arXiv:hep-th/9812203v2, 1998), which vividly shows the extraordinary theoretical difficulties that must be surmounted in extending QM/QIT formalisms to encompass field-theoretic dynamics (whether one regards these theoretical difficulties as “fundamental” or not is a matter of personal taste!).

    Regarding (C)   In regard to the “engineering restrictions” that field theory imposes on QM/QIT formalisms, whether one regards these restrictions as “fundamental” versus “engineering” is again a matter of personal taste! To mention just one dynamical restriction (among dozens) that QED imposes on QM/QIT, the infrared dynamics of QED is intimately bound up in issues associated to spatial localization in general, and the thermodynamical Third Law in particular. Are these thermodynamical/localization issues mathematical? … theoretical … informatic … practical? Hmmm … why not all four?

    Conclusion  These issues aren’t easy, and appreciating D-Wave’s achievements will require a substantially better grasp of them than the QM/QIT community possesses at present.

  337. Scott Says:

    Henning #333: Fun history, but it leaves out Bernstein-Vazirani and Simon—the CS people who paved the way for Shor!

  338. Henning Dekant Says:

    Scott #337: Fair criticism, in my deification attempt I kinda sacrificed giants with lesser name recognition 🙂

  339. James Gallagher Says:

    Hi John #336

    Yes, I understand that the entire Standard Model and its associated gauge theories aren’t “clearly understood” w.r.t. path integrals, but I don’t think Quantum Computing depends on isotopic spin symmetries, higgs mechanism, QCD or anything remotely close to those energy scales. On the other hand, QED is important because it describes photon interactions with matter ALMOST EXACTLY – and at least to good enough accuracy to concern QC theorists.

  340. John Sidles Says:

    James Gallagher opines (circa #339):

    “Yes, I understand that the entire Standard Model and its associated gauge theories aren’t ‘clearly understood’ w.r.t. path integrals, but I don’t think Quantum Computing depends on isotopic spin symmetries, Higgs mechanism, QCD or anything remotely close to those energy scales.”
    ——————
    That assertion is correct, and yet QM/QC/QIT progress in recent decades shows plainly that the low energy subtlety of the Standard Model — which as ’t Hooft remarks (arXiv:hep-th/9812203v2) “should be called the Standard Theory” — is eminently worthy of our scientific, theoretical, experimental, and mathematical respect.

    To appreciate this, we reflect that Nielsen and Chuang’s QM/QC/QIT textbook (for example) postulates a dynamical world whose quantum trajectories are integral curves of unitary flows on linear Hilbert spaces of fixed dimensionality.

    Yet by mechanisms that ’t Hooft’s essay vividly describes, Nature’s truly fundamental ‘Standard Theory’ seeks to evade these QM/QC/QIT restrictions with the same inanimate ingenuity (and exasperating vigor) with which thermonuclear plasmas seek to escape their confining magnetic fields.

    Mechanisms by which QED fields seek to escape QM/QC/QIT restrictions include:

    •  The dynamical state-space of QED (which includes the vacuum state) formally has infinite dimension (for both low-energy and high-energy excitations).

    •  The fermionic particle fields of QED generically radiate photons into the vacuum, such that the QED dynamical state-space practically has infinite dimension.

    •  Dimensional restriction by “tracing over” photons radiated to infinity induces a stochastic dynamical flow whose mean component is non-unitary.

    It is appealing to imagine that these objections can be addressed by introducing the elements of ideal cavity QED: perfectly reflecting walls and perfectly absorbing detectors. Yet the relativistic invariance and informatic causality of QED impose restrictions on reflective and absorptive processes (via Kramers-Kronig relations, for example) that are sufficiently severe that QED-compatible designs for scalable BosonSampling experiments exhibiting ideal performance have not yet been exhibited even in principle.

    Both the modern-day textbooks of quantum optics (for example, Howard Carmichael’s textbooks per comment circa #316) and the modern-day experiments in quantum optics (which struggle to create correlated optical-wavelength photons by any process that realistically scales to large numbers of photons) provide ample grounds for humility in the face of these challenges.

    Conclusion (with further reading)  The principles articulated by Gerard ’t Hooft in his Physics StackExchange question Why do people categorically dismiss some simple quantum models? (and comments/answers there-to by many persons, including Peter Shor), that are rebutted by Scott Aaronson in his essay “Multilinear Formulas and Skepticism of Quantum Computing” (arXiv:quant-ph/0311039v4, and references therein), have created an intellectual milieu in which (in recent years) expertise in the Standard Theory of particle physics is becoming moderately anti-correlated with confidence in the feasibility of scalable Quantum Computing.

  341. Scott Says:

    John Sidles #340:

      …expertise in the Standard Theory of particle physics is becoming moderately anti-correlated with confidence in the feasibility of scalable Quantum Computing.

    I call bullshit on that alleged anti-correlation! The only “particle physics expert” I can think of who doubts the feasibility of scalable QC is ‘t Hooft (and Wolfram, if you want to count him). And on the other side, you have to count John Preskill, Ed Farhi, Ray Laflamme, etc., as well as the countless string theorists and other HEP theorists who consider the feasibility of QC so obvious that the question doesn’t particularly interest them (it’s “mere engineering”). And then, of course, you have to count the large number of people—in CS and other fields, but also just on the blogosphere—who doubt the feasibility of QC, but are also ignorant of particle physics! 🙂

  342. John Sidles Says:

    —————-
    Scott avers “You have to count John Preskill, Ed Farhi, Ray Laflamme, etc., as well as the countless string theorists and other HEP theorists who consider the feasibility of QC so obvious that the question doesn’t particularly interest them (it’s ‘mere engineering’).”
    —————-
    Citations to this effect would be most illuminating! Because a full-text search of the arxiv server for the combined terms “quantum computing” AND “obvious” suggests that there aren’t many (and apparently, none by the authors you mention).

    Which is good, eh? Wouldn’t too-wide acceptance of the unproved notion that “the feasibility of QC is so obvious as to be uninteresting” constitute a dereliction of the academic duty that (according to Scott Aaronson’s essay “D-Wave: Truth finally starts to emerge”) is plain:

    Scott Aaronson’s academic guidance  “[What] academics are supposed to do […] is to be skeptical and not leave obvious questions unasked.”

    Moreover, it is plausible that ’t Hooft’s opinions are influenced by the history that he personally experienced (and largely shaped), which he describes so vividly in “The Glorious Days of Physics” (arXiv:hep-th/9812203v2, page 6):
    —————-
    “Nowadays we can easily observe that the Yang-Mills theory, the theorems of Higgs, Englert and Brout, and the sigma model were among the most important achievements of the ’50s and the ’60’s. However, most physicists in those days would not have agreed. Numerous other findings were thought to be much more important.”

    “As at prehistoric times, it may have seemed that the dinosaurs were much more powerful and promising creatures than a few species of tiny, inconspicuous little animals with fur rather than scales, whose only accomplishment was that they had developed a new way to reproduce and nurture their young. Yet, it would be these early mammals that were decisive for later epochs in evolution.”

    “In a quite similar fashion, Yang-Mills theories, quantum gravity research and the sigma model were insignificant little animals with fur compared to the many dinosaurs that were around: we had numerous strong interaction models, current algebras,† axiomatic approaches, duality and analyticity, which were attracting far more attention. Most of these activities have now disappeared.”
    —————-
    Conclusion  One lesson of field theory is that today’s “small mammals” of quantum computing skepticism may perhaps exhibit greater evolutionary potential than is generally appreciated! 🙂

  343. asdf Says:

    Scott, are any fancy theorists (physics or CS) seriously open to the idea that 1) QC is physically realizable, and 2) that’s less of a big deal than it sounds, because it’s equivalent to classical computation? For example, factoring is in P and there is some mechanical process (not yet known) that transforms QC algorithms (such as Shor’s) into classical algorithms.

  344. DIY Says:

    My question arises from my limited understanding of some conversation above.

    I once tried to understand Quantum Field Theory, but really couldn’t. I noticed they use a Lagrangian, not a Hamiltonian.

    Can Quantum Field Theory (or more specifically the Standard Model) be understood in terms of a Hamiltonian, a Hilbert space and a Schroedinger equation? That would be easier for me to visualize. Could it be made to look (approximately, or in some limit) like some kind of quantum cellular automaton? (Assume Minkowski space, no curvature, and choose a global time t.) Or are there a bunch of problems with infinities?

  345. Scott Says:

    John #342:

      Citations to this effect would be most illuminating! Because a full-text search of the arxiv server for the combined terms “quantum computing” AND “obvious” suggests that there aren’t many…

    Well, if someone thought something was too obvious to be worth their time, then—with the single, notable exception of Lubos Motl—they wouldn’t write long disquisitions about why it wasn’t worth their time, would they? 🙂 And besides, how come I need to provide citations, whereas you can simply assert, with no evidence, the absurd proposition that expertise in particle physics is “moderately anti-correlated” with belief in the scalability of quantum computing? I’m feeling less and less patience for your habit of using your genial, avuncular manner to smuggle in bullshit under the radar…

  346. John Sidles Says:

    ———————
    DIY asks: “I once tried to understand Quantum Field Theory, but really couldn’t. I noticed they use a Lagrangian, not a Hamiltonian.

    Can Quantum Field Theory (or more specifically the Standard Model) be understood in terms of a Hamiltonian, a Hilbert space and a Schroedinger equation?”
    ———————
    DIY, a very useful start toward answering your question is to ask the converse question, which has the advantage of being mathematically well-posed:

    “The dynamical trajectory of an isolated qubit (either classical or quantum) is the integral curve of Hamiltonian flow on a state-space (the classical state-space being S^2, and the quantum state-space being S^3). What (singularity-free, real-valued) Lagrangian action function yields that same dynamical flow via a variational principle?”

    The answer is none at all … which is why the random 60s theorist (of comment circa #336) criticized path integral formalisms so harshly! 🙂

    The obstruction to (singularity-free, real-valued) Lagrangian descriptions of qubit dynamics is topological, and appreciating that this obstruction applies to classical (Bloch) qubit dynamics is a necessary prelude to appreciating that it applies to quantum (Heisenberg) qubit dynamics.

    There aren’t many textbooks that discuss these topics; one starting-point is Vladimir Arnol’d’s Mathematical Methods of Classical Mechanics, which can enjoyably be read side-by-side with Michael Spivak’s Physics for Mathematicians: Mechanics I. These are two terrific books!

    Then with reference to the starting assumptions of Spivak’s Chapter 13 “Variational Mechanics”, the obstruction is that neither S^2 (the classical qubit state-space) nor S^3 (the quantum qubit state-space) has the topology of a tangent bundle manifold TM (tangent bundles being the only manifolds for which the derivation of Lagrange’s action goes through).
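    To make this concrete: up to sign conventions (which vary among texts), the Hamiltonian flow in question is

    $$\dot{\mathbf{s}} \;=\; \mathbf{s} \times \mathbf{h}, \qquad H(\mathbf{s}) \;=\; -\,\mathbf{h}\cdot\mathbf{s}, \qquad \omega \;=\; \text{the area two-form on } S^2,$$

    i.e., Bloch precession is generated by the linear Hamiltonian $H$ via the sphere’s area form; yet no globally smooth, real-valued Lagrangian action reproduces this flow, precisely because $S^2$ is not a tangent bundle $TM$.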

    Conclusion  Hamiltonian mechanics is more general than Lagrangian mechanics — both in principle and in practice — in consequence of obstructions that are associated to state-space topology (both classical and quantum). But to mathematicians, these topological obstructions are so obvious that (in Scott’s useful phrase) “the question doesn’t particularly interest them”! 🙂

  347. Anon-agree! Says:

    Scott #345: Agreed!! It is not necessarily rancorous if it is true! 🙂

  348. Rahul Says:

    Scott #345:

    I’m feeling less and less patience for your habit of using your genial, avuncular manner to smuggle in bullshit under the radar

    Well said. He does sound very patronizing / condescending.

  349. Alexander Vlasov Says:

    It is hard even to imagine a paper in a refereed physics journal claiming that some particular design of scalable quantum computer must work because it is obvious. The only thing obvious nowadays, to some experts, is the absence of obvious obstacles.

  350. Scott Says:

    asdf #343:

      Scott, are any fancy theorists (physics or CS) seriously open to the idea that 1) QC is physically realizable, and 2) that’s less of a big deal than it sounds, because it’s equivalent to classical computation?

    Out of hundreds of “fancy theorists,” so far in my life I’ve met exactly one who claimed to seriously believe that P=BQP: namely, Ed Fredkin. (And I couldn’t tell whether he just enjoys being contrarian.)

    If you believe P=BQP, then not only do you need to believe that there are fast classical algorithms for factoring, discrete log, etc.—thereby raising the obvious questions of what those algorithms look like, what your evidence is for their existence, and why you’re not either sharing that evidence with the NSA, GCHQ, etc. or trying to exploit it for your own nefarious ends. Rather, if your belief extends to FBPP=FBQP (i.e., that classical computers can efficiently solve any search and sampling problems that quantum computers can, not just decision problems), then my work with Alex Arkhipov shows that you probably also need to believe in a collapse of the polynomial hierarchy—something most of us consider almost as bad as P=NP!
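    (For reference, the relevant containments, all of them proven theorems, are

    $$\mathsf{P} \subseteq \mathsf{BPP} \subseteq \mathsf{BQP} \subseteq \mathsf{PP} \subseteq \mathsf{PSPACE},$$

    so P=BQP would collapse the first three into one class, while contradicting no theorem anyone has managed to prove.)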

    And for such a steep price in improbability, it’s not even clear how much explanatory benefit we get. After all, even if a polynomial-time classical algorithm existed to simulate QCs, it would arguably require a further leap of faith to imagine that the universe “knew” about that algorithm—which might, after all, be radically different from and more complicated than what we take the “real” laws of physics to be. And quite possibly the polynomial speedups of Grover’s algorithm, etc. would survive unscathed.

    Having said all that, I do think we should keep an open mind about the possibility that P=BQP, unlikely though it seems. That’s why, in my talks, popular writings, etc., I’m constantly reminding people that P≠BQP is “just” a conjecture, not a proven fact—and I’m also constantly correcting journalists when they write copy that implies otherwise! 🙂 And I chose to spend my career working in complexity theory because I hope one day we will have enough understanding to turn such conjectures into theorems.

  351. DIY Says:

    Thanks, John Sidles. Can anyone else also offer a perspective on my question #344?

  352. Scott Says:

    DIY #351: You ask very big questions, but here are some short answers. Yes, any quantum field theory can be formulated in terms of a Hamiltonian, not only in terms of a Lagrangian. And yes, physicists’ overwhelming preference for Lagrangians creates a huge barrier to understanding—not only to you, but also to other non-physicists, like (to take one example) me.

    This is a classic insider/outsider divide—similar to how (for example) differential geometry experts never want to write down any explicit coordinates, but for people just learning the subject coordinates are easier. Those who have scaled Mount Lagrangian loudly proclaim that everything looks incomparably simpler from up there, while those of us down in the Hamiltonian Lowlands just have to take their word for it.

    One of my ambitions in life is to
    (1) understand quantum field theory, and then
    (2) having understood it, write a book explaining it to other CS theory / discrete math types.
    And now that I have tenure, maybe it’s finally time to start thinking about doing it… 😉

  353. Greg Kuperberg Says:

    Just in general, there are a lot of computer scientists — and some mathematicians and physicists and engineers and so on — who pretty well understand polynomial time computation, but who aren’t all that expert in quantum computation and who have faith in the polynomial Church-Turing thesis. So, faced with the results of quantum computation, there are only two ways out: Either BQP is not realistic, or if it is realistic, then maybe BQP=P. (Or a more careful statement might be BQP=BPP, although it is a likely conjecture that BPP=P.)

    It’s a bit hard to say “seriously”, since the entire tradition of casting doubt on QC without learning it very well has never been quite completely serious. That said, I agree with Scott that most of the skepticism of BQP is that, usually for a reason having to do with noise, BQP is not actually a realistic complexity class. This is a safe position in a cheap sort of way since BQP obviously isn’t operatively a realistic class at the present time.

    However, since the main driver of QC skepticism is faith in the polynomial Church-Turing thesis rather than expertise in quantum error correction, there are actually some skeptics who hold out some belief that BQP might just be BPP. For example, in this thread Lipton wondered whether factoring really is in BQP (which it wouldn’t be if you believe that factoring is hard and that BQP=BPP):

    http://rjlipton.wordpress.com/2011/01/23/is-factoring-really-in-bqp-really/

    Or, last year Peter Sarnak told me that it might not mean so much that factoring is in BQP, on the argument that there might well be a polynomial-time classical algorithm for factoring. I pointed out to him that Simon-Shor-Kitaev can actually find the isomorphism type of any finite abelian group; that seemed to sway him some.

    Now, to be fair, all of the first-rate researchers I know who speculate that BQP=BPP are neither dogmatic nor sarcastic. They would be the first to admit that they aren’t expert in QC, and they are simply curious about the field as skeptics. I think that this type of skepticism is okay. In fact, back in 1997, I was briefly such a skeptic too — at a time when I hardly knew what Shor’s algorithm was and I also knew nothing about quantum error correction. If I hadn’t been so surprised by QC (just as earlier I was surprised by quantum probability itself), I might not have liked it so much.

    What is much more frustrating is incurious or terminally stubborn skeptics. That tends to select for the belief that noise censors QC. Both of the holdout defenses of the polynomial Church-Turing thesis look pretty weak by now, but noise censorship is the stronger of the two.

  354. Bill Kaminsky Says:

    To DIY at #344 and #351:

    I fear John’s answer is going to distract you horribly from your quest to learn quantum field theory. Many practitioners of quantum field theory learn to calculate quantities that agree fabulously with experiment without ever knowing what the heck a “topological obstruction” is in general, let alone what one is in the context of trying to make mathematically rigorous path integrals over continuous variables for spins.

    So, here’s a practical answer to your question:

    Can Quantum Field Theory (or more specifically the Standard Model) be understood in terms of a Hamiltonian, a Hilbert space and a Schroedinger equation?

    Short Answer: Most certainly, yes! Any truly comprehensive textbook of QFT formalism, e.g., Weinberg’s The Quantum Theory of Fields, Volume 1 or, perhaps more user-friendly, Hatfield’s Quantum Field Theory of Particles and Strings, will discuss Hamiltonians prominently and make quite clear that the whole edifice of perturbative QFT can be seen to be about calculating matrix elements of a time-translation operator $\exp(-i H t)$ for some Hamiltonian $H$.

    (Sidenote about the particular virtues of these 2 references: Hatfield kinda uniquely even has 2 chapters doing QFT out explicitly in the Schrodinger representation—i.e., with an explicitly time-dependent wavefunction. I say “kinda uniquely” because it’s far, far more common in QFT to use the Heisenberg and interaction representations, and these representations move all time dependence, or at least all nontrivial time dependence, out of the wavefunction and into your operators for observables. Explicit use of the Schrodinger representation is rare in QFT because time-independent wavefunctions just seem more appropriate in most people’s minds in any Lorentz-invariant setting. With that praise for Hatfield’s comprehensiveness, I must now honor Weinberg. Weinberg, being as magisterial as he is wont to be, explicitly references Hilbert spaces frequently, whereas even Hatfield doesn’t, let alone the standard grad-textbook QFT authors like Peskin & Schroeder or Zee. And with that said, I must say Zee nevertheless is a truly fine bit of pedagogy, IMHO. Peskin & Schroeder never really floated my proverbial physics boat… but your mileage may vary.)

    A Bit of Background: The main reason why you see Lagrangians heavily emphasized in QFT books is that most QFT books concern themselves with particle physics and thus are deeply concerned about relativistic QFT. The Lagrangian formalism is thus very nice since it makes Lorentz invariance manifest (i.e., if the Lagrangian density is Lorentz invariant, the action will obviously be Lorentz invariant, and so will everything else down the line in the Lagrangian formalism). In contrast, the Hamiltonian formalism, especially in the Schrodinger representation of time-independent operators and time-dependent wavefunctions, introduces a preferred time. As such, it isn’t obviously Lorentz invariant… even though, of course, Hamiltonians associated with Lorentz invariant Lagrangians end up making Lorentz invariant answers.
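    (To make the Hamiltonian picture concrete, take the textbook free scalar field:

    $$\mathcal{L} = \tfrac{1}{2}(\partial_\mu \phi)(\partial^\mu \phi) - \tfrac{1}{2} m^2 \phi^2 \quad\Longleftrightarrow\quad H = \int d^3x \left[ \tfrac{1}{2}\pi^2 + \tfrac{1}{2}(\nabla\phi)^2 + \tfrac{1}{2} m^2 \phi^2 \right], \qquad \pi = \dot{\phi},$$

    and the state then obeys the ordinary Schrodinger equation $i \, \partial_t |\Psi\rangle = H |\Psi\rangle$: exactly the structure DIY asked about, only with infinitely many degrees of freedom.)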

    A secondary reason why many QFT books emphasize Lagrangians is because they emphasize the path integral formalism, which is naturally expressed entirely in terms of Lagrangians (as opposed to the “canonical formalism” which still involves Hamiltonians prominently). The emphasis on path integral methods is mostly due to the post-1970’s realizations that (1) path integrals are really nice for talking about gauge theories and (2) path integrals are really nice for talking about *non*-perturbative effects in QFT.

  355. James Gallagher Says:

    Just to expand on John Sidles’ confusion as to why Gerard ‘t Hooft, a true giant of modern physics, might not believe in Quantum Computation.

    It’s not because he thinks the physics underlying the Standard Model might contrive to somehow prevent QC in some subtle manner, but rather it’s because he believes the laws of physics are deterministic.

    And being a gentleman, and a genius, he doesn’t entertain ridiculous “get out” arguments a la Bohmians.

  356. Greg Kuperberg Says:

    The issue of Lagrangians vs Hamiltonians in QFT is a lot like the issue of path summation vs state evolution in quantum algorithms. In that context, is path summation so bad? I tend to think of quantum algorithms in terms of state evolution, but if anything path summation is a more nuts-and-bolts viewpoint.
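    Here is a toy illustration, in a few lines of Python of my own devising (a sketch, nothing more), of the two viewpoints computing the same amplitude for a one-qubit circuit:

        import numpy as np
        from itertools import product

        # A toy one-qubit circuit: Hadamard, T gate, Hadamard.
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        T = np.diag([1, np.exp(1j * np.pi / 4)])
        gates = [H, T, H]

        # State-evolution view: push the state vector through each gate.
        state = np.array([1, 0], dtype=complex)
        for g in gates:
            state = g @ state

        # Path-summation view: sum amplitudes over all sequences of
        # intermediate computational-basis states (the "paths").
        amp = 0
        for path in product([0, 1], repeat=len(gates) - 1):
            basis_states = [0, *path, 1]   # start in |0>, end in |1>
            a = 1
            for g, out, inp in zip(gates, basis_states[1:], basis_states[:-1]):
                a *= g[out, inp]           # one gate's transition amplitude
            amp += a

        assert np.isclose(amp, state[1])   # the two viewpoints agree

    The path sum has exponentially many terms in general, which is exactly why it reads as more nuts-and-bolts: it makes the classical cost of brute-force simulation visible.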

  357. David Poulin Says:

    More on the Smolin & Smith (SS) paper.

    Shortly after I posted my last comment on this page, I received an email from Troyer where he showed me the results of some numerical simulations they did some time ago, very similar to the SS paper (O(3) model instead of O(2)). Although these mean-field models do reproduce the bimodal distribution reported in arXiv:1304.4595 (as they are deterministic), they do not match the D-wave device on a case-by-case analysis for different instances: the problem instances that are easy for the D-wave device can sometimes be hard for the SS model. This is interesting new evidence supporting the quantum nature of the D-wave device.

  358. Bill Kaminsky Says:

    Regarding David Poulin @ #357:

    Thanks for the not-even-yet-off-the-presses update on Troyer’s work!

    Let me see if I understand correctly how Smolin and Smith have changed the debate on the D-Wave device data and how Troyer’s newest work leaves matters presently.

    1) Before Smolin and Smith’s paper, everyone that had published anything about the D-Wave chip data was essentially only asking this question: Is the chip’s dynamics predominantly driven by thermal activation or by incoherent quantum tunneling?

    2) Smolin and Smith, in essence, demonstrated that this question might be premised on a faulty assumption. Namely, one shouldn’t assume that there always is an energy barrier between the D-Wave chip’s initial and final states. Not only is it possible that the D-Wave device’s final state simply is the adiabatic continuation of the original state, but crucially this still remains possible even in the completely incoherent regime where the chip’s Josephson junction devices act just like classical vectors rather than spin-1/2’s. To wit, no matter where you are on the spectrum from full coherence to full incoherence, the energy well around the energy minimum that’s the chip’s initial transverse-field-polarized state might just get “dragged” (Smolin and Smith’s word) to become the energy well around the minimum that’s the chip’s hopefully-solution-encoding final state. If so, no energy barrier ever needed to be crossed in order to produce a solution.

    3) To show (2) is indeed a pertinent scenario, Smolin and Smith ran some simulations on the “D-Wave Problem” of random $\pm J$ couplings on a chimera graph
    with up to 108 O(2) classical vectors — i.e., the completely incoherent regime — and found that indeed at this size scale around 50% of the time such adiabatic “dragging” of the energy well happens and thus no energy barrier ever need be crossed to get from the initial state to the final state in these instances. As such, the bimodal distribution that had been associated with incoherent quantum tunneling might have been occurring with no tunneling whatsoever! The two modes of the distribution might simply reflect the following 2 dire scenarios:

    Scenario A: The well around the minimum *didn’t* get dragged, and thus you *do* need to surmount a barrier. Alas, there’s so much noise that tunneling is completely suppressed, and you thus must record a failure.

    Scenario B: The well around the minimum *did* get dragged, and thus you *don’t* need to surmount a barrier. Alas, there’s still so much noise that tunneling is completely suppressed, but now tunneling is irrelevant and you thus get to record a success.

    4) But now, given your comment #357 David, it appears Troyer has run some simulations and found that things aren’t as dire as scenarios A and B above. This is because Troyer has found that at least some randomly generated instances of the $\pm J$ spin glass do *not* exhibit such adiabatic “dragging” when you run the classical vector simulation — i.e., an energy barrier *does* have to be crossed to get from the initial to the final state according to the classical vector simulation — and, despite this, the D-Wave device was empirically observed to work well on those instances. So it would appear noise is not yet at the critical level where all tunneling is suppressed, and the D-Wave device can exhibit incoherent quantum tunneling.
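    (For concreteness, here's a minimal sketch, in Python of my own hypothetical devising (emphatically not Smolin and Smith's actual code), of the kind of zero-temperature classical O(2) spin-vector anneal in question; I've used a complete graph rather than the chimera graph purely to keep it short:

        import numpy as np

        rng = np.random.default_rng(0)

        n = 16                              # toy size; SS simulated up to 108 spins
        J = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)
        J = J + J.T                         # symmetric +/-J couplings

        theta = np.zeros(n)                 # angle from +x; all spins start along +x
        steps = 1000
        for t in range(steps):
            s = (t + 1) / steps             # annealing parameter, 0 -> 1
            A, B = 1.0 - s, s               # transverse-field vs Ising weights
            for i in rng.permutation(n):
                # Effective local field on spin i (x from the transverse field,
                # z from the Ising couplings); greedily align with it (T = 0).
                hx = A
                hz = -B * (J[i] @ np.sin(theta))
                theta[i] = np.arctan2(hz, hx)

        spins = np.sign(np.sin(theta))      # project final angles onto Ising spins
        print("final Ising energy:", 0.5 * spins @ J @ spins)

    If the well gets “dragged,” this greedy zero-temperature dynamics glides straight into a low-energy state; if it doesn't, the dynamics gets stuck, with no tunneling available to rescue it.)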

    Am I understanding the situation correctly?

  359. Doubts over whether D-Wave’s quantum computer is genuine | 爱板网 Says:

    […] Earlier, Google and NASA announced a partnership to establish a quantum lab, purchasing D-Wave’s quantum computer to study machine learning. However, whether D-Wave’s quantum computer is a genuine quantum computer has long been controversial. The noted MIT computer scientist Scott Aaronson is a veteran critic of D-Wave’s quantum computer, and on his blog he has recorded the doubts and discussions from all sides about it. […]

  360. Ray Says:

    @Scott Why do you respond to John Sidles? Haven’t you figured out yet that this guy is a joke?

  361. So, it is a QC (at least this week!) | Ajit Jadhav's Weblog Says:

    […] it was Prof. David Poulin commenting at Prof. Scott Aaronson’s blog once again [^], alerting some new work from Prof. Troyer. Unlike in his last comment (on the same post, when he […]

  362. John Sidles Says:

    —————–
    James Gallagher opines (circa #355): “Gerard ‘t Hooft, a true giant of modern Physics, might not believe in Quantum Computation … not because he thinks the physics underlying the Standard Model might contrive to somehow prevent QC in some subtle manner, but rather because he believes the laws of physics are deterministic.”
    —————–
    Your theory is plausible, James Gallagher, and to the degree that it is accurate, Gerard ’t Hooft’s views are more conservative than the radical views of Tony Zee, whose Quantum Field Theory in a Nutshell opines:

    [p. 60] “Some physicists are looking for a formulation of quantum electrodynamics without using A_μ, but have so far failed to turn up an alternative to what we have. It is conceivable that a truly deep advance in theoretical physics would involve writing down electrodynamics without writing A_μ.” [p. 521] “In all previous revolutions in physics, a formerly cherished concept has to be jettisoned. If we are poised before another conceptual shift, something else might have to go. Lorentz invariance perhaps? More likely, we will have to abandon strict locality.”

    A third class of skeptical opinion, which extends the radical-to-conservative progression <radical skepticism> ⇒ ’t Hooft ⇒ Zee ⇒ <conservative skepticism>, regards Lorentz invariance, strict dynamical locality, and informatic determinism (via Lindbladian unravelling) as foundational physical principles to be respected scrupulously. These conservative-minded researchers reject the radical speculations of ’t Hooft and Zee, and instead embrace the well-verified dynamical physics of modern quantum optics (for example, Howard Carmichael’s works cited in comment circa #316).

    Each variety of skeptic has a different explanation for the unexpectedly slow pace of progress toward the QC milestones of the QIST Roadmaps (of 2002 and 2004), and for the exceeding difficulty of the practical and theoretical challenges that are encountered even in simpler QC roadmaps (BosonSampling, for example). Those challenges press hardest on this third (conservatively optimistic) class of field theorist, yet conservatively skeptical QC researchers (very reasonably) regard the difficulties as entirely natural:

    QC Challenge I  The high-finesse/low-loss cavities of quantum optics experiments act to break the foundational Lorentz invariance of QED.

    QC Challenge II  The high-efficiency dynamical coupling of photon source currents to spatially separated photon sink/detector currents acts to break the foundational dynamical locality of QED (because high efficiency requires that the correlation of the source currents with the sink currents be near-perfect).

    QC Challenge III  The informatic coupling of photon source currents to spatially separated photon sink/detector currents (via Lindbladian fluctuations) acts to break the foundational informatic locality of QED (because physical observation of Alice’s source/emitter currents is required to carry zero causality-violating classical information regarding Bob’s absorber/detector currents).

    What research roadmaps are natural for conservatively skeptical QC researchers? An ultra-conservative roadmap is for researchers to keep doing what they are already (successfully!) doing: pulling back the dynamics of the Standard Theory onto tractable-dimension algebraic state-spaces (typically tensor-product state-spaces) via mathematical techniques of ever-increasing sophistication.

    For students especially, the mathematical toolset of Mikio Nakahara’s well-rated text (available in paperback) Geometry, Topology and Physics (2003) nicely complements the admirable reading list of field-theory texts that Bill Kaminsky suggested (in comment circa #354). Also recommended is J. M. Landsberg’s recent text Tensors: Geometry and Applications (2012), which summarizes and integrates Landsberg’s (many) fine articles, including classics like “On the geometry of tensor network states” (arXiv:1105.4449, 2012) and “Geometry and the complexity of matrix multiplication” (Bull. AMS, 2008).

    Conclusion I  Please let me join with Scott (and everyone else) in lamenting the scarcity of textbooks that pull back these various QM/QIT/QED considerations onto a QIT-centric framework that is physically unifying and mathematically natural.

    Conclusion II  Scalable QC/QIT experiments seek to saturate and/or effectively break (simultaneously) three foundational principles of quantum field theory: Lorentz invariance, dynamical locality, and informatic locality; thus it is unsurprising that the standard texts of the Standard Theory have (up to the present time) not pointed toward a technically feasible path to scalable QC/QIT technologies.

    A regrettable circumstance  Scott’s assertion (circa #341) that “Countless string theorists and other HEP theorists consider the feasibility of QC so obvious that the question doesn’t particularly interest them (it’s ‘mere engineering’)” — if we assume that this claim is accurate — surely is a regrettable circumstance. Because these QM/QIT/field-theoretic issues manifestly are sufficiently tough that it is neither necessary nor feasible nor desirable that everyone think alike in regard to them! 🙂

  363. Rahul Says:

    John Sidles #362:

    if we assume that this claim is accurate — surely is a regrettable circumstance. Because these QM/QIT/field-theoretic issues manifestly are sufficiently tough that it is neither necessary nor feasible nor desirable that everyone think alike in regard to them!

    Not really so regrettable so long as we have able minds like your own working very hard 24×7 to do these issues justice. Obviously, not everyone thinks alike.

    So long as we have you we’ll avoid the dreaded consensus.

  364. David Poulin Says:

    Bill Kaminsky @#358

    I am not sure what ‘incoherent quantum tunnelling’ means. All the discussions in terms of energy barriers are a bit misleading, because a system with a time-dependent Hamiltonian does not conserve energy. Quantum tunnelling is a vaguely used term, which I can only interpret rigorously as a second-order perturbative transition: it is a transition between two states whose Hamiltonian matrix element is 0, but that have a non-zero matrix element in H^2 (the square of the Hamiltonian), being connected through a common, higher-energy state.
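    In symbols, the standard second-order expression I have in mind is

    $$\langle f | H | i \rangle = 0, \qquad \langle f | H^2 | i \rangle = \sum_m \langle f | H | m \rangle \langle m | H | i \rangle \neq 0, \qquad t_{\mathrm{eff}} \approx \sum_m \frac{\langle f | H | m \rangle \langle m | H | i \rangle}{E_i - E_m},$$

    where the sum runs over the common, higher-energy intermediate states $m$.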

    In any case, the paper is out now, so you can read it: arXiv:1305.5837

  365. John Sidles Says:

    David Poulin, thank you for that fine reference to Wang et al. “Comment on: ‘Classical signature of quantum annealing'” (arXiv:1305.5837, 2013).

    It is interesting that Wang et al. base their analysis substantially upon the Landau-Lifshitz-Gilbert (LLG) equation (per Wang et al. eqs. 1-4), citing Gilbert’s recent survey article “A phenomenological theory of damping in ferromagnetic materials” (IEEE, 2004).

    The LLG equation is a starting-point for much research in quantum spin imaging (and many other STEM disciplines), and in this regard student readers of Shtetl Optimized may be interested in the annotation that our QSE Group’s shared BibTeX database provides for Gilbert (2004):

    ———
    Annotation  Gilbert (2004) makes passing reference to a much-cited 1955 conference proceedings: “T.L. Gilbert, ‘A Lagrangian formulation of the gyromagnetic equation of the magnetization field,’ Phys. Rev. 100 (1955) 1243” (227 citations). In turn, Gilbert’s 1955 article is associated to Gilbert’s never-published 1955 thesis, which per Gilbert’s own (2004) assessment contains “a number of errors and misconceptions.”

    From the viewpoint of geometric mechanics, these various errors and misconceptions (which according to Gilbert have plagued the LLG literature for more than 50 years!) arise for the obvious-to-geometers reason that the (classical) state-space manifold S^2 has the wrong topology to be a tangent bundle manifold, per Spivak chs. 12-13 [see Shtetl Optimized comment circa #346 above]. For this geometric reason, Gilbert’s 2004 Lagrangian derivation of the LLG equation (when worked out in detail) is topologically obstructed, and formally fails.
    ———

    Conclusion  The LLG equation has proven itself (in dozens of experiments) to be phenomenologically correct. Yet contrary to frequent citations in the contemporary theoretical literature (including this week’s Wang et al. arXiv:1305.5837), Lagrangian methods — when associated to action functions that are smooth and real-valued — do not suffice to derive the LLG equation. This is one practical reason why — for better or worse — our QSE Group has for several years regarded LLG dynamical systems as exemplary cases for motivating engineering students to learn the basic elements of geometric Hamiltonian mechanics. In our view, teaching STEM students to derive correct results via theoretical methods that are both mathematically unsound and historically outdated is pedagogically dubious! 🙂

  366. Bill Kaminsky Says:

    David Poulin @ #364

    Thanks for your quick response. My intuition about tunneling comes from situations where you use path integrals to treat the motion of a quantum particle moving with friction in a potential.

    Note that such an approach still applies to superconducting devices. Instead of considering an actual massive particle, you can define a fictitious particle whose location corresponds to the phase-difference variable of a Josephson junction and whose effective mass is determined by the capacitances of the junctions. You’ll find such a particle moves in a cosinusoidal potential defined explicitly in terms of the Josephson energies of the devices. Alternatively, albeit much less rigorously, you can imagine the qubit to be a spin-1/2 particle, parametrize it continuously in terms of spin coherent states, and consider movement on the “classical potential” that obtains when you replace the spin-1/2’s in your Hamiltonian by classical O(3) vectors. It’s this latter (and admittedly tenuous in terms of mathematical rigor) picture that I had in mind given Smolin and Smith’s classical O(2) model of the D-Wave qubits and Troyer et al.’s similar O(3) model. The friction the particle/coherent-state-spin feels is due generally to linear coupling to a bath of oscillators (e.g., photons or phonons, aka the “spin-boson” model).
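    (In formulas, the standard mapping for a current-biased junction is

    $$m_{\mathrm{eff}} = C \left( \frac{\hbar}{2e} \right)^{2}, \qquad U(\varphi) = -E_J \cos\varphi - \frac{\hbar I_b}{2e}\,\varphi,$$

    so the junction phase $\varphi$ behaves like a particle of mass $m_{\mathrm{eff}}$ rolling in a tilted washboard potential, where $C$ is the junction capacitance, $E_J$ the Josephson energy, and $I_b$ the bias current.)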

    With that preface, here’s what I meant by “incoherent quantum tunneling.” Consider a non-degenerate double well system (I pray my ASCII art renders right),

    V |
      | \          __            /
      |  \   1    /  \          /
      |   \__    /    \   2    /
      |      \__/      \      /
      |                 \    /
      |                  \__/
      |________________________ x

    and imagine the depths and widths of the energy wells are such that, if you were to consider the ideal, zero-coupling-to-the-environment case, the ground state of the particle-in-a-double-well system would have most of its amplitude in the lower well, #2, and the first excited state would have most of its amplitude in the higher well, #1.

    Furthermore, imagine that an appropriate linear combination of these lowest two energy eigenstates can nicely represent the particle essentially localized in either of the wells.

    Finally, initialize the system with the particle localized in the higher well, #1.

    If you work out the path integral math*, you’ll find 3 regimes for motion out of well #1 and into #2:

    1) Ideal, Zero-Coupling-to-Environment Regime: You get Rabi oscillations of the particle moving back and forth between wells #1 and #2 with characteristic period having to do with the gap between these lowest two energy levels.

    2) Dissipative, but Still “Coherent,” Quantum Tunneling Regime: As you turn on coupling to the environment, you’ll find the particle still moves back and forth between wells 1 and 2, but at a slower rate. Meanwhile, population builds more and more in just the ground state (i.e., the one essentially localized in the lower well, #2). One speaks of the energy gap being reduced or “renormalized” by coupling to the environment. Eventually this suppression of the rate (equivalently, suppression of the renormalized tunneling gap) is exponential in the coupling strength. We still call this tunneling “coherent” since it’s still oscillatory. It’s just damped.

    3) Dissipative, “Incoherent” Quantum Tunneling Regime: The particle can tunnel from the higher well, #1, to the lower well, #2… but once it does, it never comes back. So you just see steadily declining (increasing) population in the higher (lower) well #1 (#2). Note that this is fundamentally a dissipative phenomenon. If you remove the potential bias and make the two well bottoms equal in potential, you’ll entirely suppress this effect. We call such tunneling “incoherent” in that the Rabi oscillations are beyond their critical damping, and you never get a full oscillation.

    Note that all 3 regimes persist all the way down to zero temperature for the environment. This motion from the higher well to the lower well need have nothing to do with thermal activation.
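    Schematically, suppressing the prefactors that the spin-boson literature computes in great detail:

    $$\text{coherent: } P_1(t) \sim e^{-\gamma t} \cos^2\!\left( \tfrac{1}{2} \Delta_{\mathrm{eff}}\, t \right), \qquad \text{incoherent: } P_1(t) \sim e^{-\Gamma t}, \quad \Gamma \propto \Delta_{\mathrm{eff}}^2,$$

    where $P_1$ is the population of the higher well, $\Delta_{\mathrm{eff}}$ is the environment-renormalized tunneling splitting, and $\Gamma$ is a golden-rule decay rate.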

    * The classic references are:

    Leggett, et al. “Dynamics of the Dissipative Two-State System” Rev Mod Phys 59 1 (1987)

    Hänggi, Talkner, and Borkovec “Reaction-Rate Theory: 50 Years after Kramers” Rev Mod Phys 62 251 (1990)

    and all the work from the 1980’s cited within. From a historical standpoint, it’s worth noting that the experiments from the 80’s and 90’s that established these phenomena were the whole reason people ever considered superconducting qubits at all in the late 90’s and 00’s.

  367. Bill Kaminsky Says:

    Gosh, I have a problem with collecting my thoughts into something brief. The brief version of my last comment is:

    *****
    Quantum tunneling corresponds to Rabi oscillations. We speak of “incoherent” quantum tunneling when the Rabi oscillations pass critical damping, due to environmental noise, such that you never see a full oscillation, just decay out of a metastable energy minimum into a stable one.
    ******

    Aaaah, that’s much better. 🙂

  368. Pablo Says:

    Hello Mr. Aaronson,

    The best “classical” algorithm can sort an array with a time complexity of O(n log n) on classical computers. What is the complexity of the best sorting algorithm for quantum computers like D-Wave?

    Regards,

    Pablo.

  369. Bill Kaminsky Says:

    Pablo @ #368:

    Scott can elaborate, but the basic message is that quantum computers *cannot* outperform classical sorting by more than a constant factor. That is, it still takes $\Omega(N \log N)$ binary comparisons to sort a list of $N$ items.

    See, for example:

    Peter Hoyer, Jan Neerbek, Yaoyun Shi. “Quantum complexities of ordered searching, sorting, and element distinctness” http://arxiv.org/abs/quant-ph/0102078 and references cited therein

    ( Sidenote: Probably the most notable reference contained therein is Edward Farhi, Jeffrey Goldstone, Sam Gutmann, and Michael Sipser. “A limit on the speed of quantum computation for insertion into an ordered list,” http://arxiv.org/abs/quant-ph/9812057 )
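    (The classical side of the bound is just the decision-tree counting argument: a comparison-based sorter must distinguish all $N!$ orderings, so it needs at least $\lceil \log_2 N! \rceil = \Theta(N \log N)$ comparisons. The content of the Hoyer-Neerbek-Shi result is that the same $\Omega(N \log N)$ scaling survives even when the comparisons can be made in superposition.)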

  370. Bill Kaminsky Says:

    By the way, Pablo @ #368, the above-cited lower bound (#369) for quantum sorting of a list holds for *any* quantum computer, not just ones attempting adiabatic quantum optimization on stoquastic Hamiltonians like D-Wave’s.

    Indeed, posing sorting problems on D-Wave’s current architecture would be quite inefficient; so, for that matter, would posing any problem that’s not immediately expressible in terms of finding the ground state of an Ising model.

  371. Scott Says:

    Bill #369, #370: Thanks! I have nothing to add to your excellent answer.

  372. Jon Lennox Says:

    Scott @ #350:

    (i.e., that classical computers can efficiently solve any search and sampling problems that classical computers can, not just decision problems)

    Personally, I’m quite confident that classical computers can solve any problems that classical computers can.

  373. john Says:

    Shor @99: “This is exactly like arguing that if you look at the Wright Brothers’ first flight at Kitty Hawk, they could have gone farther, faster, and much more cheaply if they had just used an automobile.” <- well said

    Also…Scott @317: "…Richard Feynman’s approach to quantum mechanics is way too naïve and simplistic for 21st-century STEM/QIT engineers. And the main reason is that it ignores QED, an exciting recent development about which Feynman appears to have known nothing." Just to be clear, as sarcasm doesn't always come across in text…you know that Feynman received the Nobel prize for QED (along with Schwinger and Tomonaga)…right?

    I would note in passing that the worst-case time complexities often cited regarding classical vs. quantum algorithms can be somewhat misleading. For example: random instances of NP-complete problems (random 3-SAT, say) have regimes of typically-polynomial complexity and regimes of typically-exponential complexity, separated by something akin to a phase transition. Though both quantum and classical computers exhibit exponential complexity in the worst-case scenario, quantum computers could solve in polynomial time many problem instances which, classically, fall into the exponential regime. As such, a quantum computer would redraw the boundaries of what is ‘easy’ vs. ‘hard’ to solve in particular problems; hence, if your real-world instance of an NP-complete problem is in the affected region, then huge practical benefits could be reaped with a quantum implementation, though this isn’t apparent from the worst-case complexity.
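    To illustrate the classical side of that picture, here's a quick toy sketch of my own (a demonstration, not a benchmark): it shows the satisfiability phase transition in random 3-SAT near the conjectured threshold clause density α ≈ 4.27, which is where the empirical easy-hard-easy difficulty pattern peaks.

        import random
        from itertools import product

        def random_3sat(n, m, rng):
            # m random 3-clauses over variables 1..n (negative literal = negation).
            return [tuple(rng.choice([v + 1, -(v + 1)])
                          for v in rng.sample(range(n), 3)) for _ in range(m)]

        def satisfiable(clauses, n):
            # Brute force over all 2^n assignments -- fine at toy sizes, and
            # exactly where the exponential blow-up makes itself felt.
            for bits in product([False, True], repeat=n):
                if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
                    return True
            return False

        rng = random.Random(0)
        n, trials = 12, 10
        for alpha in [2.0, 3.0, 4.27, 5.0, 6.0]:
            m = int(alpha * n)
            frac = sum(satisfiable(random_3sat(n, m, rng), n)
                       for _ in range(trials)) / trials
            print(f"alpha = {alpha:.2f}   fraction satisfiable = {frac:.2f}")

    Below the threshold almost every instance is satisfiable and easy; far above it almost every instance is unsatisfiable; the hard instances cluster near the transition.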

  374. Vadim Says:

    Jon Lennox,

    Now that you mention it, we never hear much about the P vs. P problem.

  375. Scott Says:

    Jon #372 and Vadim #374: Duhhh, fixed!

  376. Scott Says:

    john #373:

      Just to be clear, as sarcasm doesn’t always come across in text…you know that Feynman received the Nobel prize for QED (along with Schwinger and Tomonaga)…right?

    I use sarcasm when I feel the need to establish a floor of obviousness for the discussion. You, I’m afraid, are somewhere down in the floorboards. 🙂

  377. john Says:

    Thanks for making the sarcasm explicit, Scott. Given your apparent understanding of physics, I couldn’t be sure…

  378. Scott Says:

    john #373:

      Though both quantum and classical computers exhibit exponential complexity in the worst-case scenario, quantum computers could solve in polynomial time many problem instances which, classically, fall into the exponential regime.

    Well, yes, that’s precisely the hope for the quantum adiabatic algorithm. But whether there’s any practically-relevant set of instances for which that hope is actually realized remains a huge open question. You can construct instances where the adiabatic algorithm exponentially outperforms classical simulated annealing, but as it turns out, you can also construct instances where simulated annealing exponentially outperforms the adiabatic algorithm! And no one yet knows whether either of those types of instances will be particularly relevant in practice. My MIT colleagues Ed Farhi and Jeffrey Goldstone (among others) have been working on that question analytically and with numerical simulations for more than a decade, but ultimately it might have to be settled empirically.

      Given your apparent understanding of physics, I couldn’t be sure…

    I don’t deny that I have plenty to learn, but what specifically did I say about physics in this thread that was wrong?

  379. John Sidles Says:

    Scott says: “I use sarcasm when I feel the need to establish a floor of obviousness for the discussion.”

    In respect to the STEM community’s still-emerging quantum disciplines, in the present decade it is neither necessary, nor feasible, nor desirable that everyone share the same “floor of obviousness.” And neither is it desirable that sarcasm be the dominant mode of discourse in establishing the 21st century’s new quantum floors! Because in the [lightly paraphrased] and notably unsarcastic vision of Martin Luther King:

    ————————
    I’ve Been to the Mountaintop (QIT-adapted)
    “We’ve got some difficult days ahead. Like anybody, I would like to embrace sarcasm. Sarcasm has its place. And yet the Creator encourages us to go hopefully up the mountain [Pisgah], and from its summit we behold a promised land of quantum STEM enterprises. Then we are happy; we are confident; we aren’t worried about anything; we foresee that the glories of 21st century STEM enterprises will shine as brightly as those of every previous century.”
    ————————

    The commenters here on Shtetl Optimized discern at least three “Pisgah-Sights” (as King’s generation called them) for 21st-century quantum computing.

    The Orthodox Quantum Pisgah-Sight  Texts like the QIST Roadmaps of 2002–4 and Nielsen/Chuang Quantum Computation and Quantum Information describe for us a quantum-scientific Promised Land, and provide an explicit scientific roadmap for getting there. And that scientific path is quantum computing via error-corrected qubit gates and/or Hamiltonian adiabatic flow.

    The D-Wave Quantum Pisgah-Sight  Devices like D-Wave’s quantum computer now show us a quantum-technological Promised Land, and provide an explicit technological roadmap for getting there. And that technological path is quantum computing via thermally annealed qubits.

    The Mathematical Quantum Pisgah-Sight  Texts like Landsberg’s Tensors: Geometry and Applications (2012) — which includes a terrific prologue essay “Clash of Cultures” — are beginning to unveil (accessibly to 21st-century STEM students!) a quantum-mathematical Promised Land, including an explicit mathematical roadmap for getting there. And that mathematical path is efficient algorithmic computations and accurate dynamical simulations via trajectories that unravel on varietal state-spaces.

    Remark  The enterprises associated to these three Pisgah-Sights are scientifically, technologically, and mathematically compatible. Sarcastic deprecations of these Pisgah-Sights are common mainly because there exist few (if any) objective grounds for deprecating any one quantum Pisgah-Sight relative to the other two. They are all three of them terrific visions that (best of all!) are entirely compatible with one another.

    Conclusion  The wisest sages of every age — Mark Twain writing in his ultimate story Extract from Captain Stormfield’s Visit to Heaven (1907) is canonically exemplary — remind us that journeys toward Promised Lands generically take a long time, encounter unexpected difficulties, take unforeseen paths, and generate plenty of snark and sarcasm along the way.

    Those of us who are so fortunate as to actually arrive at our Promised Lands generally find that they are wonderful places … wonderful in consequence of their surprising and even comedic diversity!

    That is why the comedic diversity that Shtetl Optimized so abundantly provides to the 21st-century quantum computing community is exceedingly valuable and much-appreciated. 🙂

  380. Mike Says:

    Scott#378,

    “Given your apparent understanding of physics, I couldn’t be sure…”

    Yeah, I wondered about this as well. Maybe he was just trying to “get even” for your comment about him being somewhere “down in the floorboards,” by making an ad hominem attack based on some undescribed lack of knowledge on your part regarding physics.

    The difference, of course, is that by his own admission, he didn’t get that your comment was meant to be sarcastic, which, given how over the top it was, says quite a bit about his own skills of analysis. 🙂

  381. Michael Bacon Says:

    John@379,

    Not sure what your point was, but I’m pretty sure you understood that Scott was being sarcastic. 😉

  382. John Sidles Says:

    Readers of Shtetl Optimized should be aware that the comment numbering system is indeterminate and changing, and so it can happen that posts ascribed to “John” are confused with posts by “John Sidles”.

    We are two different people; in particular, “John Sidles” posts may be read as stupid/ignorant/boring, but they are never (by deliberate intent) snarky/sarcastic.

  383. Michael Bacon Says:

    John Sidles@382,

    Since we’re all about setting the record straight, for the record, my comment “John@379” was meant for you!! And, of course, I wasn’t accusing you of being sarcastic; rather, I was crediting you with being able to recognize it. 🙂

  384. Henning Dekant Says:

    So, the other day a guy walked into the bar and asked me what a quantum computer is …

    Well, it wasn’t quite exactly like that, but you get the picture. So, I had to think back to Scott’s #252 answer and thought it’s a bit too waffly for that particular situation. I decided to start with the qubit, explain superposition and entanglement. Then I stressed that only machines that feature entangled qubits are usually referred to as quantum computers. Before I completely lost the fellow, I also tried to make the point that there are very different architectures that differ in versatility as well as in the theoretic understanding of potential speed-ups. Finally, I pointed him to my attempt at a QC taxonomy.

    My point (besides trying to bring this thread back to its original focus) is that insisting that quantum computing always means universal QC, or at least proven quantum speed-up QC, is a lost cause. It’s too complex for a casual questioner. I think there needs to be a baseline definition of what makes QC, and then on top of that one can make the point that not all QC is created equal.

    (BTW Scott, you mentioned that you contemplated a textbook on QFT to learn more about it yourself. I think this is a great idea; some excellent literature was conceived that way. Steven Weinberg produced a great text when teaching himself GR, and you’d bring a unique perspective to QFT.)

  385. Scott Says:

    Henning #384:

      insisting that quantum computing always means universal QC, or at least proven quantum speed-up QC, is a lost cause. It’s too complex for a casual questioner.

    With all due respect, I think that’s bullshit! 🙂 How complex is it to say: “Look, if everything this fancy quantum thingamajig did could be done just as well by the Dell laptop on your desk, then there wouldn’t be much point in spending millions of dollars on the fancy thingamajig, would there be? No siree! That’s why our goal is to build, not just any quantum thingamajig, but one that does something better than classical thingamajigs. Sure, it doesn’t have to do everything better—and believe me, it won’t—but it should at least do something better. Don’t let anyone trick you into buying a quantum doodad that doesn’t do anything better, just because the word ‘quantum’ sounds impressive. That’s just common sense, right?”

  386. John Sidles Says:

    ————–
    Scott says: (#385) “Our goal is to build, not just any quantum thingamajig, but one that does something better than classical thingamajigs.”
    ————–
    This goal raises an unanswered question (per comment #224): what exactly does the phrase ‘better than classical’ mean? Does ‘better than classical’ have any meaning that is scientifically plausible, objectively verifiable, mathematically rigorous, practically useful, socially inspiring, and generally embraced?

    The 2004 QIST roadmap was at least specific in its quantum computing goals (Section 3.0):

    •  By the year 2007, to encode a single qubit into the state of a logical qubit formed from several physical qubits, and

    •  perform repetitive error correction of the logical qubit, and

    •  transfer the state of the logical qubit into the state of another set of physical qubits with high fidelity, and

    •  by the year 2012, to implement a concatenated quantum error-correcting code.

    In retrospect, the QIST goals overestimated the technical feasibility of error-corrected quantum computation, and (more importantly, as it seems to me) underestimated the sustained pace of transformational advances in algorithms and simulation (both classical and quantum).

    Conclusion  Now may be a good time to refine our understanding of the phrase “better than classical.”

    Perhaps sarcasm can provocatively specify “a floor of obviousness” to assist this refinement?

  387. Henning Dekant Says:

    Scott #385, any quantum thingamajig built so far could be outperformed by your Dell laptop, or in some instances my abacus 🙂

    It’s not that I don’t want to get across the point that you are making, but the Googles of the world won’t stop calling their new toys quantum computers. From a terminology standpoint I think that is justifiable, but it is only tangentially related to the question of whether it is particularly good QC.

    Seems to me you’d be packing more punch by focusing on the latter.

  388. Gil Kalai Says:

    Scott: “For example, there’s BosonSampling, and other proposals that would almost certainly let you do something that’s hard to simulate classically, but without giving you full quantum universality…

    For me personally, the central question is whether or not I can at least see a “straight path forward” to getting an asymptotic speedup over any possible classical algorithm, under mathematical conjectures that I believe, and assuming quantum mechanics continues to be valid. For universal quantum computing, and for intermediate proposals like BosonSampling, there is such a path. For D-Wave’s annealing approach, by contrast, I’d say that no one has ruled out the possibility of such a path, but no one has made a compelling case for one either.”

    Scott’s comparison between D-Wave’s annealing approach and BosonSampling is quite interesting. Let me take off for a little while my quantum-computer-skeptic hat and say a few words regarding it. I truly like BosonSampling, and I even devoted a post on my blog to discussing it. I agree with Scott’s own opinion, which he has expressed several times, that BosonSampling will most likely not manifest anything that is hard to simulate classically without quantum fault-tolerance, and that with quantum fault-tolerance it may well be easier to go ahead and build a universal quantum computer than to implement quantum fault-tolerance on BosonSampling. But I see nothing wrong with Scott’s hope (which he has also expressed a few times) that BosonSampling will somehow exhibit quantum supremacy even without full-fledged quantum fault-tolerance. I think that the experimental effort around BosonSampling is very nice.

    My thoughts about D-Wave are similar. Overall, I am inclined to like the D-Wave endeavor! I share the common opinion that BQP probably does not contain NP, but nevertheless the idea that quantum computers will give some advantage for intractable problems is very reasonable. (And I don’t see anything wrong with people hoping that QC will give a big speedup for general NP-complete problems, or even that BQP contains NP.) I share Daniel’s opinion that in order for D-Wave systems to demonstrate real advantage quantum fault-tolerance needs to be implemented, but I don’t see anything wrong with hoping that quantum advantage can be demonstrated without implementing quantum fault-tolerance. It will be interesting to examine this for the next generation of D-Wave machines. Of course, it will also be interesting to examine what D-Wave computers are actually doing in terms of entanglement. For a commercial initiative, the D-Wave agenda is more appealing than the BosonSampling agenda.

    Let me note that, in spite of various counter-hopes, one emerging insight from both BosonSampling and quantum annealing of various types is that for quantum speedup the real barrier that needs to be crossed is quantum fault-tolerance. This reinforces the need to draw the quantum fault-tolerance barrier formally, in mathematical terms, as clearly and as generally as possible. This is what my research effort is about. Putting my skeptical hat back on, it is not a secret that I expect that the quantum fault-tolerance barrier cannot be crossed at all.

  389. Joe Fitzsimons Says:

    Henning, I think you are taking a narrow view of what it means for a device to do something your desktop can’t. You seem to be concerned only with time complexity, where the line is blurred in the sense that a fixed-size quantum computer can always be simulated by a classical computer given sufficiently long. However, quantum computers have a far more clear-cut advantage in other areas, such as communication and query complexity. It’s not clear to me that the D-Wave device can offer any advantage over a classical computer in terms of these measures, whereas such advantages have already been clearly demonstrated in quantum optics, for example.

  390. Scott Says:

    Henning #387:

      It’s not that I don’t want to get the point across that you are making, but the Googles of the world won’t stop calling their new toys a quantum computer, and from a terminology standpoint I think that is justifiable, but only tangentially related to the question if it is particularly good QC.

      Seems to me you’d be packing more punch by focusing on the latter.

    Well, I don’t care that much about terminology. If your suggestion were adopted, then I’d simply redefine my field as Good Quantum Computing (GQC), and tell people sitting next to me on planes that I’m a GQC researcher who studies the theoretical limits of GQCs and hopes GQCs will eventually be built. When they inevitably asked, I’d acknowledge that there was a parallel field of Bad Quantum Computing (BQC), often known just as Quantum Computing (QC), but simply say that that wasn’t my field.

    All this raises a question, though: why should particular managers at Google and Lockheed get to control what the term “quantum computing” means? Why should they get to define QC=BQC, overriding the quantum computing research community’s previous understanding that QC=GQC? From the research community’s perspective, why should we cede such control without a fight? OK, I guess I do care about terminology. 😉

  391. Michael Bacon Says:

    John Sidles@386,

    “Perhaps sarcasm can provocatively specify “a floor of obviousness” to assist this refinement?”

    There, I knew you could be sarcastic as well! Good work!! 🙂

  392. John Sidles Says:

    —————–
    Scott asks (with rhetoric removed)  “Why should [Group B] get to define [quantum computing], overriding the quantum computing research community’s [Group A] previous understanding?”
    —————–

    The Technology Experts Panel (TEP) of the 2002 and 2004 Quantum Information Science and Technology Roadmaps — regarded as a consensus summary of the “[Group A] understanding” in Scott’s question — strongly advocated flexible evolution in regard to technical classifications and definitional distinctions relating to quantum computation (QC):

    •  “It was the unanimous opinion of the TEP [QIST Technology Experts Panel] that it is too soon to attempt to identify a smaller number of potential ‘winners’ ”

    •  “Considerable evolution of and hybridization between the various approaches has already taken place and should be expected to continue in the future, with some existing approaches being superseded by even more promising ones.”

    •  “The quantum computation (QC) roadmap was released in Version 1.0 form in December 2002 as a living document.  … They [the TEP] intend to revisit these important issues in future updates.”

    •  “When taking on a basic scientific challenge of the complexity and magnitude of QC, diversity of approaches, persistence, and patience are essential.”

    •  “As one looks to the future development of QC one should anticipate the need for an increasing industrial involvement as the first steps into the realm of scalability are made. ”

    •  “The quantum computer-science test-bed destination that we envision in this roadmap will open up fascinating, powerful new computational capabilities.”

    Four objective observations

    ◆  Contrary to promise, neither the QIST TEP nor any other organized body of QM/QC/QIT experts has systematically “revisited these important issues.”

    ◆  D-Wave’s technological achievements reasonably fulfill the QIST TEP’s envisioned “considerable evolution of and hybridization between various approaches” in association with “increasing industrial involvement,” requiring essentially a “diversity of approaches, persistence, and patience.”

    ◆  At the present time, no mathematical, theoretical, or experimental grounds exist that are adequate to exclude D-Wave’s technology from the class that the QIST TEP called “a smaller number of potential [QC] winners.”

    ◆  In recent years, the QIST TEP’s envisioned “fascinating, powerful new computational capabilities” have turned out to encompass transformational advances in algorithmic capabilities: algorithms whose origins are crucially grounded in concepts that arise naturally in QM/QC/QIT, and whose strategic impact is immediate because they can be efficiently implemented on classical computing hardware.

    Conclusion  The guidance that is summarized in the QIST TEP Roadmaps of 2002 and 2004, and in particular the technical classifications and definitional distinctions associated with that guidance, have become sufficiently outdated as to be grossly inadequate to the evolving enterprise of science, technology, engineering, and mathematics that is modern-day “quantum computing.”

  393. Light Blue Touchpaper » Blog Archive » A further observation on quantum computing Says:

    […] been poured into developing quantum computers, but even advocates of quantum computing admit they don’t really work. As our February paper argued, a hydrodynamic interpretation of quantum mechanics may suggest […]

  394. Nobody Special Says:

    About terminology (potentially Scott #390 but also everyone). Normally I’m one to champion the descriptive nature of language, i.e., that it is based on usage (descriptivism) rather than on formal rules (prescriptivism). But in this case I see at least some justification for prescriptivism, as the alternative is significant ambiguity.

    Trivial example: the Next Big Future guy made a bet on longbets.org that a quantum computer would be sold by a certain date. Did he win, or did the term simply become more inclusive?

  395. Scott Says:

    John Sidles #392: I’ve been wanting to say this for a while, but your latest comment finally pushed me over the edge.

    You can take your QIST Roadmaps and shove them.

    I don’t need any “Roadmap”; I use GPS!

  396. John Sidles Says:

    —————–
    Scott suggests  “You can take your QIST Roadmaps and shove them.”
    —————–

    To the degree that the QIST Roadmaps belong to anyone, they belong to the members of the Technology Experts Panel (TEP), whose consensus (as of 2004) the Roadmaps reflect. These TEP members all are still active in QM/QC/QIT/QSE research:

    ▮  Chair: Dr. Richard Hughes – Los Alamos National Laboratory
    ▮  Deputy Chair: Dr. Gary Doolen – Los Alamos National Laboratory
    ●  Prof. David Awschalom – University of California: Santa Barbara
    ●  Prof. Carlton Caves – University of New Mexico
    ●  Prof. Michael Chapman – Georgia Tech
    ●  Prof. Robert Clark – University of New South Wales
    ●  Prof. David Cory – Massachusetts Institute of Technology
    ●  Dr. David DiVincenzo – IBM: Thomas J. Watson Research Center
    ●  Prof. Artur Ekert – Cambridge University
    ●  Prof. P. Chris Hammel – Ohio State University
    ●  Prof. Paul Kwiat – University of Illinois: Urbana-Champaign
    ●  Prof. Seth Lloyd – Massachusetts Institute of Technology
    ●  Prof. Gerard Milburn – University of Queensland
    ●  Prof. Terry Orlando – Massachusetts Institute of Technology
    ●  Prof. Duncan Steel – University of Michigan
    ●  Prof. Umesh Vazirani – University of California: Berkeley
    ●  Prof. K. Birgitta Whaley – University of California: Berkeley
    ●  Dr. David Wineland – National Institute of Standards and Technology: Boulder

    With regard to the widely circulated view (per comment circa #341) that “the feasibility of QC is so obvious that the question doesn’t particularly interest [many theorists] (it’s ‘mere engineering’)”, it would be instructive to learn whether the QIST TEP members themselves, having the benefit of a decade of research and experience, would endorse such a broad claim.

    Conclusion  An after-action report by the QIST TEP members, one that thoughtfully considered the issues being so vigorously discussed here on Shtetl Optimized, as guidance (informed by experience) for further QM/QC/QIT/QSE research, would constitute (as it seems to me) a resource of terrific value (for young STEM researchers in particular)! 🙂

  397. Scott Says:

    John Sidles #396: You can gather together the most brilliant, creative people in the world, but if you charge them with producing a boring bureaucratic document for funding agencies, then a boring bureaucratic document for funding agencies is exactly what you’ll get. I’ve seen that dynamic play out over and over and over in my own life.

    One possible reason is that the different creative people will be creative in different ways, and any document they produce will reflect only the boring common denominator that doesn’t offend any of them. But whatever the reason, if you were trying to predict the future of science, you’d be better off looking at the actual research papers that those creative people were writing at the same time, than at their boring bureaucratic document.

  398. Greg Kuperberg Says:

    Gil – If all that D-Wave did was hope for some acceleration of unstructured NP-complete problems, then I’d be inclined to like them too. But they don’t just hope it, they hype it. They also have high 8-figure funding, maybe soon 9 figures. They also say that everyone else in the field besides them is doing it wrong. They say that the so-called “gate model” is a diseased choice and that the majority of the field is a pack of sheep for sticking to it.

    I can see that behind the scenes there are real engineers at D-Wave (many of whom actually are physics PhDs) who are trying to do something good. More power to them. But how can I like a sales pitch that involves so much contempt for any theory and experiment that doesn’t come from D-Wave itself?

  399. John Sidles Says:

    ————
    Scott asserts “You can gather together the most brilliant, creative people in the world, but if you charge them with producing a boring bureaucratic document for funding agencies, then a boring bureaucratic document for funding agencies is exactly what you’ll get.”
    ————
    The premise “The brilliant, creative QIST TEP members were charged with producing a boring bureaucratic document for funding agencies” is trebly dubious:

    Q1  Were the QIST TEP members really charged with “producing a boring bureaucratic document”?

    Q2  Was the resulting QIST Roadmap’s audience really confined to “funding agencies”?

    Q3  Were the QIST TEP members really so deficient in “brilliance and creativity” as to deliberately subordinate their (admitted) brilliance and creativity to the bureaucratic audience of Q1 and Q2?

    The plain answers (as it seems to me) are “no”, “no”, and “no.”

    For an in-depth account of the workings of high-level technology assessment panels, please allow me to commend to Shtetl Optimized readers the story of John von Neumann’s “Teapot Committee” of 1953, as detailed in Neil Sheehan’s magisterial A Fiery Peace in a Cold War (2009), in particular Book IV’s chapters 29–36, which begin with “Seeking Scientific Validation” and end with “OK Bennie, It’s a Deal.”

    Caveat  Discussions that I had as a graduate student with John von Neumann’s brother Michael establish that the popular characterization of von Neumann’s hawkish political views (as recounted by Sheehan in chapter 30, “When Hungary Was Mars”) inadequately reflects John von Neumann’s historical and strategic appreciation of post-WWII geopolitics. Von Neumann’s internal political appreciation was as sophisticated and nuanced as his mathematical and scientific appreciation, and his public political persona amounted to a readily grasped yet all-too-readily caricatured summary of it.

    Remark  The QIST–Teapot parallel is of course not exact, for the common-sense reason that the aggregate strategic influence of the 21st-century enterprises broadly associated with QIST’s charter will plausibly be more transformational than the influence of the Teapot Committee’s missile-and-space roadmap of the 20th century … and so the responsibilities and opportunities presently embraced by the QIST TEP (and its evolutionary successors) are manifestly greater than those embraced by von Neumann’s Teapot Committee.

    Conclusion  The QIST TEP members deserve a second chance to conceive a better roadmap for QM/QC/QIT/QSE … with the attempt itself having comparable value (for young STEM professionals especially) to the product.

    The four-word summary “In dreams begin responsibilities” (Delmore Schwartz) 🙂

  400. Rahul Says:

    Scott #397:

    I think what you are describing is analogous to the well-known disaster that’s called “Design by Committee.”

  401. Rahul Says:

    @Scott #395:

    You can take your QIST Roadmaps and shove them.

    I’ve been itching to type something similar to John Sidles for a while, so glad Scott finally did.

    If I may, John Sidles, some friendly suggestions: Why don’t you try commenting like a normal person? Ease off those verbose, bloated commentaries oozing with citations and references. Often tangential citations too (though I’ll admit I’ve found some interesting links there). But moderation, please.

    You are writing blog comments, for crying out loud, not an academic publication or an editorial. Your style’s as out of place as a priest preaching his staid Sunday sermon at a frat party. Other things to ease off on might be that condescending, holier-than-thou attitude and the endless appeals to authority.

    Just a friendly heads-up dude. We realize you are wise but with a lil’ change in format & attitude your message may get across better!

  402. James Gallagher Says:

    The Computer Laboratory at Cambridge (UK) are much more amusing than D-Wave, going by the latest paper referenced in comment #393. http://arxiv.org/abs/1305.6822

    Imagine if these two groups team up.

  403. Scott Says:

    Gil #388:

      I agree with Scott’s own opinion, which he expressed several times, that BosonSampling will most likely not manifest anything that is hard to simulate classically without quantum fault-tolerance, and that with quantum fault-tolerance it could well be easier to go ahead and build a universal quantum computer than to implement quantum fault-tolerance on BosonSampling. But I see nothing wrong with Scott’s hope (which he also expressed a few times) that BosonSampling will somehow exhibit quantum supremacy even without full-fledged quantum fault-tolerance.

    You misstated my views about BosonSampling and error correction. If technologies like optical multiplexing can be made to work, then I see no good reason why BosonSampling couldn’t be scaled to (say) 30 photons with no explicit error-correction, though probably not to 1000 photons. But crucially, with experimental BosonSampling, getting to ~30 photons is the entire point! You don’t want 1000 photons, for instance, because then you couldn’t even feasibly verify with a classical computer that your BosonSampling device was working. And 30 photons is already enough to verify directly that your transition amplitudes are given by the permanents of 30×30 complex matrices. Since complex permanents can’t even be approximated in the polynomial hierarchy (unless PH collapses), it’s very hard for me to imagine how that could be the case while Nature was still simulable in BPP.
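
    For concreteness, here is a minimal sketch (illustrative Python of mine, not code from the Aaronson–Arkhipov paper; the function name is my own choice) of the permanent evaluation such a verification rests on, via Ryser’s formula. The 2^n-term sum is what makes n ≈ 30 the natural ceiling for direct classical checking:

        import itertools
        import numpy as np

        def permanent(A):
            """Permanent of an n-by-n complex matrix via Ryser's formula.

            This naive version costs O(2^n * n^2); a Gray-code ordering
            improves that to O(2^n * n).  Either way, n = 30 sits near
            the edge of classical feasibility.
            """
            n = A.shape[0]
            total = 0j
            for r in range(1, n + 1):
                sign = (-1) ** r
                for cols in itertools.combinations(range(n), r):
                    # product over rows of the sum of the chosen columns
                    total += sign * np.prod(A[:, list(cols)].sum(axis=1))
            return (-1) ** n * total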

    Another relevant point: with BosonSampling, even if you lose (say) half of the photons on the way through, the resulting distribution is still very likely to be intractable to simulate classically. This hasn’t been proven yet, but I’d guess that it can and will be proved (under similar assumptions as for the lossless case) without having to invent radically new techniques.

    More broadly, I’d say there are two fundamental differences between BosonSampling and the D-Wave approach.

    Difference #1: With BosonSampling, there are strong complexity-theoretic arguments that if we could scale to an interesting size, then we’d be doing something hard for a classical computer. (Indeed, the arguments are arguably even stronger than they are for Shor’s algorithm.) For quantum annealing, by contrast, there are currently no such arguments, and not for lack of looking for them! Maybe one can get an asymptotic speedup with quantum annealing, but at present, I think we have to consider that a wide-open empirical question, with no convincing theoretical or numerical evidence either way.

    Difference #2: With BosonSampling, in some sense your task is vastly easier. You just need to build some device that’s faster to run than to classically simulate, for a reason traceable to the exponentiality of Hilbert space, and under clearly-stated and plausible complexity assumptions. You don’t need to solve a practical optimization problem, or perform any other “useful” task. Why not? Because usefulness was never any part of what we claimed or promised! Obviously that makes it much harder to raise money for BosonSampling than for D-Wave—but it also seems to me to make the goal much closer to honest realization.

    It’s probably a direct consequence of Difference #2 that with BosonSampling, the gap between reality and what was claimed in the press was noticeably smaller than for D-Wave. Having said that, the gap was still way too large for my taste! And that’s why, while BosonSampling has gotten maybe 0.1% as much press coverage as D-Wave, I’ve put maybe 15% as much effort into fighting BosonSampling hype as I’ve put into fighting D-Wave hype.

  404. Jay Says:

    Scott #403

    Could you explain why a device computing the permanent is not useful? The permanent is #P-complete, so why can’t we just reduce any instance of interest to this problem?

  405. Scott Says:

    James Gallagher #402: Oh, god. I’m probably through arguing with Anderson and Brady for this lifetime—the previous thread should’ve sufficed to convince any reasonable person that they don’t have a clue and don’t have any interest in acquiring one.

    (Note also that their latest arXiv submission was already redirected from quant-ph to “General Physics,” the holding pen for crackpots.)

    Briefly, though, their latest paper seems to follow the tired tactic of giving a “local hidden-variable model” that violates the Bell inequalities, through the simple device of redefining the word “local,” to encompass influences that any ordinary person would call “nonlocal.”

  406. Scott Says:

    Jay #404:

      Could you explain why a device computing the permanent is not useful? The permanent is #P-complete, so why can’t we just reduce any instance of interest to this problem?

    The fundamental problem is that a BosonSampling device doesn’t compute the permanent. (Indeed, precisely because the permanent is #P-complete, no quantum device can compute it efficiently unless BQP = P^#P.)

    Instead, a BosonSampling device does something much weirder: it samples from a certain probability distribution in which the probabilities are defined in terms of permanents. Now, the whole thing Alex Arkhipov and I did was to give a detailed argument for why that weird task is still probably hard for classical computers, as an indirect consequence of the #P-completeness of the permanent! Alas, our argument doesn’t imply that the weird task is itself useful for solving #P-complete problems, and indeed I doubt that’s the case.

    For more info, see our paper or this blog post.
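
    To make the sampling-versus-computing distinction concrete, here is a toy brute-force construction of the distribution an idealized, collision-free BosonSampling device draws from (a sketch under my own simplifying assumptions, deliberately exponential-time; function names are mine, and this is not an efficient simulator or code from the paper):

        import itertools
        import numpy as np

        def permanent(A):
            # Ryser's formula, as in the sketch in comment #403 above
            n = A.shape[0]
            return (-1) ** n * sum(
                (-1) ** r * np.prod(A[:, list(cols)].sum(axis=1))
                for r in range(1, n + 1)
                for cols in itertools.combinations(range(n), r))

        def output_distribution(U, input_modes):
            """Distribution over collision-free outcomes when n single
            photons enter `input_modes` of an m-mode interferometer U.
            Each probability is |Perm(n-by-n submatrix of U)|^2, here
            renormalized over the no-collision events (a good
            approximation in the dilute regime m >> n^2)."""
            m = U.shape[0]
            n = len(input_modes)
            outcomes = list(itertools.combinations(range(m), n))
            weights = np.array([
                abs(permanent(U[np.ix_(list(out), list(input_modes))])) ** 2
                for out in outcomes])
            return outcomes, weights / weights.sum()

    Note that the device only hands you samples from this distribution, never the permanents themselves, which is why the hardness argument is indirect and the hoped-for #P shortcut of comment #404 is unavailable.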

  407. Douglas Knight Says:

    Saying that BosonSampling isn’t in P is weaker than saying you can’t verify a sampler. In particular, you can verify a permanent oracle. Is there any hope that BosonSampling with N photons decomposes into BosonSampling with N−1 photons in a similar way that allows an interactive proof?

    Of course, most of the scientific interest is already covered by N=30, so verifying N=1000 doesn’t have much of a clear point.

  408. Douglas Knight Says:

    Indeed, the ABMS paper says that BosonSampling is probably verifiable, doesn’t it?

  409. Jay Says:

    Thank you Scott. What kind of quantum machine would you prefer for your 2^6th birthday, quantum annealing or boson sampling?

  410. Scott Says:

    Douglas #407, #408: It’s an excellent question. Yes, as we wrote in the paper, it might be that there’s some scalable way for a BosonSampling device to convince a skeptic that it’s doing something not efficiently classically simulable—possibly via an interactive protocol. We don’t know how to do it, but we also don’t have any argument that rules that out. (All we know for sure is that there can’t be a non-interactive witness that particular complex permanents are large, without causing P^#P = MA and hence a collapse of the polynomial hierarchy.)

    Furthermore, as you correctly pointed out, ABMS gave an interactive protocol for a different but related problem. Specifically, they show that you can test, in randomized polynomial time, whether a given oracle solves the Gaussian Permanent Estimation problem or is far from doing so.

    From their result, one can deduce the following consequence. Suppose someone claimed to have a “magic classical box” M for approximate BosonSampling, which produced a deterministic output M(r) given a uniform random string r as input. Then you could verify that person’s claim in the complexity class BPP^(NP^M).

    Now, notice the two big caveats: to perform the verification,

    (a) you need to be a BPP^NP machine, with oracle access to the alleged BosonSampling box, and

    (b) the box needs to produce a deterministic output, once you feed it some random bits as input. (A condition not satisfied by quantum mechanics itself.)

    But yes, if you ignore those caveats, then ABMS gives exactly what we were asking for! 🙂 It’s certainly a step forward.

  411. Scott Says:

    Jay #409:

      What kind of quantum machine would you prefer for your 2^6th birthday, quantum annealing or boson sampling?

    LOL, good question, but a nearly-impossible one to answer without knowing the specs. How big of an instance can the BosonSampler solve? How close is the quantum annealer to zero temperature (i.e., to implementing the “true” adiabatic algorithm, as Farhi et al. envisioned it)? As my 2^6th birthday nears, it’s also extremely possible that we’ll discover new theoretical results—for example, that BosonSamplers can or can’t solve some useful (or at least verifiable) problem, or that the adiabatic algorithm can or can’t outperform Quantum Monte Carlo in realistic situations—that make one or the other the clear favorite.

    For now, though, I suppose I shouldn’t “measure a gift QC in the mouth.” I’ll take either. 🙂

  412. Henning Dekant Says:

    Greg #398, admittedly you have probably been following this much longer than I have, but in recent years I cannot remember having seen any such strong derision of other QC approaches coming out of D-Wave.

    Do you have some links that’ll shed some light on this bad behaviour?

    As for my part, while I am happy to cheer on D-Wave, I still want to see universal QC – the sooner the better.

  413. Henning Dekant Says:

    Scott told J.S. to shove the QC roadmap – scientifically speaking 🙂

    Yet I find this somewhat dated document still rather useful. I didn’t know about it; it would have been good input when I was trying to come up with my QC taxonomy.

  414. Henning Dekant Says:

    Just in case somebody shares my passion for boring, dated, bureaucratic roadmaps, here’s the link 🙂

  415. Scott Says:

    Henning #413, #414: Thanks, I just had another look at the roadmap. I actually vaguely recall Umesh asking me to supply some material back when he was coauthoring it.

    Despite my earlier abuse, the “roadmap” is actually quite good as bureaucratic documents go. On further reflection, I’d say it only becomes bad in the hands of Sidles—who somehow finds support in its pages for whatever his weird hobbyhorse-of-the-day might be, and invokes it in comment after comment instead of directly addressing the question at hand. In a similar way, one could argue that the works of Plato and Aristotle were actually pretty good, but they became bad over 2000 years of monks discussing them nonstop instead of just letting them rest and doing productive science.

  416. Rahul Says:

    From the QC Roadmap in 2004:

    “The panel’s members decided that a desired future objective for QC should be to develop by 2012 a suite of viable emerging-QC technologies of sufficient complexity to function as quantum computer-science test-beds in which architectural and algorithmic issues can be explored. The ten-year (2012) goal would extend QC into the ‘architectural/algorithmic’ regime, involving a quantum system of such complexity that it is beyond the capability of classical computers to simulate.”

    In hindsight how are we doing on those ambitious goals?

  417. Gil Kalai Says:

    Hi Scott, thanks for the enlightening answer. I see: you do believe that without full-fledged fault-tolerance, a system with 30 photons can exhibit a clear proof of “quantum computational supremacy” (namely, exhibit a distribution unreachable by classical computers). Excellent! I completely agree that this possibility, while still depending on unproven mathematical conjectures and requiring much experimental progress, is exciting, and it certainly should be pursued.

  418. Scott Says:

    Rahul: I’d say we’re doing pretty well, if you remove the one clause “beyond the capability of classical computers to simulate”! 😉

    But if that was a “10-year goal” of the 2004 Roadmap, then shouldn’t we have until next year?

    (Clarification: I don’t actually care about the answer! I myself would never, ever issue “10-year plans” for quantum computing research, or put any stock into anyone else’s. Accelerator physics might be able to operate with 10-year planning horizons, but quantum computing certainly can’t at this stage, and I think it’s counterproductive to try. So I’m completely unfazed if Sidles, Dyakonov, or anyone else waves such a plan in my face and talks about a real or imagined failure to live up to it. “Don’t blame me, I never promised beyond-simulable QC by 2014!” In fact I spent a decade trying to fight inflated promises as hard as I could.)

  419. Scott Says:

    OK, here’s an analogy that hopefully explains my view about “Roadmaps.” Imagine that in the year 1900, someone forced a panel of mathematicians to guess when Fermat’s Last Theorem was going to be proved. And imagine that the estimates were all over the map, but the median was 50 years.

    Now imagine that the same person, in 1950, observed that, while math had admittedly made some progress in the previous 50 years, FLT was still unproved—and therefore railed against the mathematicians’ failure to live up to their own hype.

    I’d hope the mistake would be obvious here. Yes, mathematicians are the world’s experts on how to prove FLT: what the obstacles are, which easier problems should be tackled first, which approaches seem promising and which don’t, etc. To whatever extent mathematicians don’t know the answers to such questions, no one does. But it doesn’t follow, at all, that mathematicians have any special insight into how long it will take! If you really need a statistical estimate of how long it will take (why?), then you don’t want an algebraic number theorist: you want a bookie, or better yet Nate Silver.

    (It’s probably relevant that—as Daniel Kahneman explains in his recent book—studies have proved the common wisdom that people working on a software project are absolutely terrible forecasters of how long the project will take to finish! Rather than asking them, you’d do much better simply to look up the statistics of how long similar projects have taken in the past. And when it comes to quantum computing and other basic research, the obvious problem is that no one knows what should count as a “similar project.”)

  420. Rahul Says:

    Scott #419:

    As an outsider, the confusion seems to be about whether to treat QC as an endeavor akin to proving Fermat’s Last Theorem, or akin to the Manhattan Project, the Man on the Moon, or some such project.

    I agree with you: QC is more akin to FLT but a lot of people (especially the Roadmappers) seem to try to nudge QC in the other direction.

    It takes a lot more guts to be humble and say sincerely “Look, this is an important Question for humanity to know. Please fund me but I don’t know if and when I’ll have an answer nor if the answer will be practically useful. “

    Far more convincing to funding sources if you promise them some timelines and benchmarks and gate dates. It has the effect of making a project look far more mature than it is.

  421. Rahul Says:

    Scott says:

    But if that was a “10-year goal” of the 2004 Roadmap, then shouldn’t we have until next year?

    You have a point. Dunno.

    I suspect the 2004 edition was a revised version. Maybe the original Roadmap went out in 2002.

  422. Alexander Vlasov Says:

    Henning Dekant #412, I saw some reasoning about usual QC, e.g., here
    http://dwave.wordpress.com/2011/06/22/do-not-disturb-my-quantum-circles/

  423. Scott Says:

    Rahul #420:

      It takes a lot more guts to be humble and say sincerely “Look, this is an important Question for humanity to know. Please fund me but I don’t know if and when I’ll have an answer nor if the answer will be practically useful. “

      Far more convincing to funding sources if you promise them some timelines and benchmarks…

    Yes, precisely! It’s not that, as basic researchers, we can’t promise the politicians any deliverables over a 10-year timeframe. But the type of deliverable we can promise is “something cool and unexpected that propels human knowledge forward while adding to our nation’s glory—we’re not entirely sure what yet, or which of us are going to produce it.” If ever a decade elapsed when we couldn’t deliver on that promise—and theoretical computer science hasn’t yet had such a decade, between the 1960s and today—then I’d say the politicians would probably be justified to cut off our funding.

  424. John Sidles Says:

    —————————
    Scott Aaronson reminds us “In fact I [have] spent a decade trying to fight inflated promises as hard as I could!”
    —————————
    Unstinting efforts like Scott’s “to fight inflated promises” are commendable, and unquestionably Shtetl Optimized has been a major battleground in this vital-yet-unending struggle! 🙂

    In regard to quantum-technological expectations (both realistic and inflated), both the patent literature (per comment circa #178) and the QIST Roadmap (per comment circa #379) are outstanding resources on the fascinating quantum-technological topic of single-photon sources (SPS), a capability that of course is crucial to the 30-photon BosonSampling experiments that Scott foresees (per comment circa #403).

    For STEM students especially, the 2005 review article Single-Photon Sources by Brahim Lounis and Michel Orrit provides a good introduction, as do Howard Carmichael’s quantum optics texts (per comment circa #316). Single-photon sources also play crucial roles in the following patent applications (among dozens):

    • Charles H. Bennett, “Interferometric quantum cryptographic key distribution system”, US 5307410 A

    • Daniel Gottesman, “Quantum error-correcting codes and devices”, US 6128764 A

    • Isaac Liu Chuang, “Nuclear magnetic resonance quantum computing method with improved solvents”, US 6218832 B1

    • Atac Imamoglu and Stephen Mark Sherwin, “Quantum computation with quantum dots and terahertz cavity quantum electrodynamics” WO 2000036561 A2

    • Charles Santori, “Quantum-dot photon turnstile device”, US 6728281 B1

    • Jeremy Hilton, Sergey Rashkeev, Anatoly Smirnov, Alexandre Zagoskin (of D-Wave) “Optical transformer device” US 20030021518 A1

    More broadly, quantum SPS capability is crucial to a long-cherished objective of von Neumann, Wiener, and Pauling, namely the objective of single-atom microscopy, as surveyed (with further references) in Spin Microscopy’s Heritage, Achievements, and Prospects (2009).

    As is commonplace in quantum research, the crucial ideas of entangled-photon BosonSampling experiments and entangled-photon quantum spin microscope experiments are theoretically entangled in multiple crucial respects:

    The quantum dynamics of BosonSampling and quantum spin microscopy is identical … in that single-photon fields serve (in both BosonSampling and quantum spin microscopy) to maximally entangle source/emitter currents with sink/detector currents.

    The temporal scales of BosonSampling and quantum spin microscopy differ appreciably yet not essentially … in that BosonSampling’s optical photons have a frequency of order 100 THz, whereas spin microscopy’s photons are of order 1 GHz (at most); thus BosonSampling measurement dynamics is about 10^5 times faster than the measurement dynamics of quantum spin microscopy (and when you think about it, a rate-factor of “only” 10^5 is not such a very big difference!) 🙂

    Remark for students  Quantum students (in particular) tend to conceive optical-frequency photon detection as an instantaneous process, in contrast to radio-frequency photon detection as a continuous stochastic process; a crucial lesson of QIT/QC/QIST research is that this distinction is artifactual and even illusory (as Hanbury Brown famously taught Feynman, per comment circa #321). It is mathematically and physically legitimate — and exceedingly instructive too! — to keep in mind the following

    Isomorphism  high-efficiency BosonSampling experiments are dynamically equivalent to high-efficiency optical-frequency quantum microscopes in which the photon-detectors continuously observe the photon-emitters.

    Thus arises another intimate link between BosonSampling and quantum microscopy:

    The naive causal intuitions of BosonSampling and quantum spin microscopy are reversed, and in high-efficiency experiments these causal intuitions are wrong in both directions … in that in BosonSampling it is natural to regard the photon emitters as “causally controlling” the photon detectors, whereas in quantum microscopy the natural intuition is reversed: the photon detectors (of the microscope) are regarded as controlling the photon emitters (of the sample). Needless to say, orthodox cavity-QED/quantum measurement theory informs us that both causal intuitions fail utterly in the limit of high-efficiency photon emission/detection (per comment #340).

    Conclusion  The threefold mathematical, physical, and technological isomorphism between high-efficiency BosonSampling experiments and high-resolution quantum spin microscope technologies suffices to ensure that we quantum researchers all belong to (in the words of Melville’s Ishmael to Captain Peleg) “the great and everlasting First [Quantum] Congregation of this whole worshipping world; we all belong to that; only some of us cherish some crotchets no ways touching the grand belief.”

    Open questions touching the grand [quantum] belief  Q1  In light of QIST’s (excellent) survey of single-photon source physics, and the decade of subsequent research, are visions of 30-photon BosonSampling technologies more reasonably assessed as hope, or as hype?   Q2   The per-spin Shannon channel capacity of magnetic resonance imaging and spectroscopy has doubled every two years since 1945; for how many more decades can this doubling cadence be theoretically sustained, and if so, by what practical engineering path(s)?   Q3   Can scientifically reasonable roadmaps, having technologically reasonable milestones, guided by reasonable mathematical physics, in service of objectives that are reasonable both economically and strategically, be set forth that reasonably encompass both the modern vision of 30-photon BosonSampling and the older (yet enduringly viable) von Neumann/Feynman/Wiener/Pauling objective of comprehensive microscopy?

    An appreciation of Shtetl Optimized  And of course, there is the perennial overarching question (especially in regard to Q1–3):   Q4  What “crotchets touching the grand [quantum] belief” do the readers of Shtetl Optimized cherish most dearly? As everyone appreciates, these (wildly differing!) “crotchets” make reading Shtetl Optimized immensely instructive and enjoyable for us all. And for this wonderful instruction and enjoyment, appreciation and thanks are hereby extended to all contributors to this Shtetl Optimized discussion, and especially to Scott, for his patient labors in service of the quantum polity. In this shared appreciation (to borrow again from Melville) “we all splice hands!” 🙂

  425. Henning Dekant Says:

    Alexander #422, I am looking for an actual aggressive take-down of other QC approaches. I mean, look at the profile of D-Wave’s physicsandcake blogger.

    She’s hardly the girl to engage in vitriolic take downs of other research approaches. Her write-up just explains in layman terms why less controlled qubits in adiabatic QC are easier to realize. Hardly the kind of stuff that’ll insult other QC researchers.

  426. Henning Dekant Says:

    Scott #423, I think this harks back to a bigger question: the funding of basic, non-applied science. A mature society should realize that this is not a short-term investment but needs to be an ongoing effort.

    Applied science, on the other hand, can be handled much more like an investment decision. Unfortunately, most politicians have at best an understanding of the latter, resulting in absurd twisting and contortion of basic research to come up with some near-term pay-off.

  427. Alexander Vlasov Says:

    Henning #425, I cited that just because D-Wave quite politely and accurately expressed their point.

  428. Henning Dekant Says:

    Alex #427, ah OK 🙂

    Surely, there must be some examples of the inflammatory remarks that D-Wave critics here often reference?

    Or was D-Wave deviously tricky in not leaving any paper/web trail?

  429. John Sidles Says:

    Henning Dekant asks (circa #428): “Was D-Wave deviously tricky in not leaving any paper/web trail [of] the inflammatory remarks that D-Wave critics here often reference?”
    ———————————–
    You ask a good question, Henning Dekant, one that (taken at face value) has an answer that will be illuminating to those Shtetl Optimized readers who do not appreciate that Dr. Suzanne Gildert (who works for D-Wave) writes and sustains an outstanding weblog, Physics and Cake, that is notably deficient in sarcasm, mockery, sardonicism, cynicism, and abuse (even self-admittedly mistaken abuse, per Scott’s #415, for example).

    Were these traits to disappear entirely from Shtetl Optimized we would miss them. As Mark Twain’s story Journalism In Tennessee (circa 1871) expresses it:
    ——————
    The Spirit of the Tennessee Press

    “The inveterate liars of the Semi-Weekly Earthquake are evidently endeavoring to palm off upon a noble and chivalrous people another of their vile and brutal falsehoods with regard to that most glorious conception of the nineteenth century, the Ballyhack railroad. … The crawling insect, Buckner, who edits the Hurrah, is braying about his business with his customary imbecility, and imagining that he is talking sense.”

    [Twain’s editor instructs him] “Now that is the way to write–peppery and to the point. Mush-and-milk journalism gives me the fan-tods.”
    ——————
    We all hope that Scott’s writing in Shtetl Optimized remains “peppery and to the point” … specifically in the vital and humane sense that Twain’s essay celebrates. Yet obviously too, the best elements of Shtetl Optimized do not depend essentially upon sarcasm, mockery, sardonicism, cynicism, or abuse; and neither are these corrupting ingredients the best part of any forum that contributes substantially to the STEM polity that sustains us all.

    Conclusion  Good on ya, Suzanne Gildert of PhysicsAndCake/D-Wave, for the many thoughtful (and joyous!) STEM-related essays that are hosted by your exemplary weblog.

  430. Scott Says:

    Henning #428: You must not have been searching too hard! 🙂

    Here’s Geordie Rose agreeing with QC skeptics, insofar as their skepticism applies to “traditional” QC:

      BTW I side with Gil in the debate going on here, but for different (but related) reasons. The ideas underlying the circuit model of quantum computation are not good ideas, and I don’t think useful quantum computers based on circuit model ideas will ever be built.

    (Incidentally, Geordie’s comment helps to substantiate a point I’ve been making over and over: that, much like Communists and Fascists, the QC-is-impossible camp and the QC-is-practical-now camp are less diametrical opponents than two sides of the same aggressively-wrong coin! Among other similarities, both camps fail equally to appreciate just how enormous the gaps can be between where technology is today and what the laws of physics ultimately allow.)

    Oh, and if a blog discussion doesn’t do it for you, check out this gem from an article in Wired:

      “Over the years,” says Rose, “I’ve come to strongly believe that [the gate model] is just simply a really rotten idea.”

    LOL!

  431. Bill Kaminsky Says:

    Henning @ #428

    Surely, there must be some examples of the inflammatory remarks that D-Wave critics here often reference?

    Or was D-Wave deviously tricky in not leaving any paper/web trail?

    There’s been no devious trickery. Geordie Rose, CTO and co-founder of D-Wave, is and has always been a zealous advocate* of all of his company’s endeavors. It’s not hard to find things on the Internet where he’s promoting these endeavors in ways that make many in QC academia cringe.

    For example, here’s an excerpt of an interview he did with Forbes in May 2011 when the D-Wave One chip got rolled out.

    What is the benefit of using quantum annealing as a computation method over quantum search or quantum factoring?

    Quantum annealing can be applied to a much broader range of problems than the more specialized algorithms such as factoring or unstructured search. Quantum annealing is a method of solving optimization problems, and once you start looking, you find these problems in almost every discipline and walk of life – genetics, finance, machine translation, bioinformatics, medical diagnosis, to name just a few.

    Quantum annealing is also a much more natural way of running a quantum algorithm. In quantum annealing, the qubits always remain in what is known as the ‘ground state’. This is the configuration that the system naturally wants to be in, (in the same way that water will run downstream and find its own level). Many other algorithms – such as factoring – require the qubits to be maintained in highly unstable excited states, which make it extremely difficult to control them precisely enough to perform even a small factoring computation.

    Another question that has come up in discussing D-Wave’s systems is that your papers don’t seem to demonstrate any entanglement, which a lot of people in the Quantum Computing field think is necessary to have significantly fast computing. (I’m thinking specifically of Schoelkophf Lab’s experiments at Yale.) Do the D-Wave systems demonstrate entanglement? If not, how does that impact the speed of the optimization calculations?

    It is certainly possible that entanglement is an important resource for running many quantum algorithms. However what role it specifically plays – if any – in quantum annealing is not well understood…

    [ Full interview at: http://www.forbes.com/sites/alexknapp/2011/05/26/q-and-a-with-d-waves-dr-geordie-rose-on-quantum-computing/ ]

    Footnotes:

    * Please note that I mean “zealous advocate” in the sense of what is the proper, and indeed even laudable**, role for an attorney in an adversarial legal system. I don’t mean it at all in the original sense of “zealot” as a religious militant.

    ** Please also note something that’s hopefully needless to say: I don’t believe “zealous advocate” is a proper, let alone laudable, role for a scientist. Personally, I’d say “cautious skeptic” is the proper role for a scientist. As for what’s the proper balance of “zealous advocate” and “cautious skeptic” in the public persona of a CTO of a tech company, I don’t exactly know. Certainly the run-of-the-mill capitalists*** who plunk down O($10 million) on your endeavors seem to want zealous advocacy in public.

    *** Of course, I’d like to think that if I were a rich capitalist, I wouldn’t be a run-of-the-mill member of my class. If I plunked down beaucoup de $$$, I’d prefer enough of a “cautious skeptic” in the CTO’s public persona to ensure my smartest fellow capitalists have no doubt that I put my $$$ down thinking the whole thing a big 10:1 or 100:1**** bet and not at all thinking it’s a sure thing. (In short, I’d want my smartest fellow capitalists to know I am not some wide-eyed, sci-fi fan who doesn’t really know the nuts-and-bolts of what he’s investing in. Even shorter, I’d like my smartest fellow capitalists to worship my acumen and fear my wrath! 😉 )

    **** Actually, I regard D-Wave as a case of Knightian uncertainty. I give odds of that order of magnitude just to make the Dutch Book Bayesians reading the blog happy.

    Finally, full disclosure once again: I’ve not only met Geordie in person, I’ve been employed by him (i.e., took $$$) in 2008-2009 as an off-site consultant. Also, for what it’s worth, I personally like the guy even if as I explained in Footnote (**), I’d want him to adopt a different public persona as CTO if it were my millions keeping his company going.

  432. Henning Dekant Says:

    Scott #430, yes, that’s more the kind of language I was expecting. I have to agree with Bill #431: given Geordie’s high profile he has to strike a delicate balance, and these statements certainly are way heavy on the zealotry side of the scale, although with these articles you never know the exact context in which a statement was made.

    To give him the benefit of the doubt, he may have stated just previously that “we are focusing on trying to get a QC device to the market. We looked at the gate model, but for that purpose it’s not cutting it.” This would certainly shine a different light on the statement. After all, I presume that Geordie knows the QC landscape. I.e., if we were to identify a suitable candidate for topological QC, the gates for the gate-based model would be wide open (stupid pun intended).

  433. Nobody Special Says:

    Actually I find physicsandcake kind of depressing. In the last ten postings, dating back to 2011, there are maybe two articles of any depth on QC. The rest is just cheerleading.

  434. Henning Dekant Says:

    N.S. #433: Physicsandcake is blogging for a general audience that doesn’t necessarily have a physics background; she is addressing nobody special, but not necessarily you 🙂

  435. Bram Cohen Says:

    The problem with showing potential benefits for adiabatic quantum computing might be our poor understanding of classical stochastic algorithms, combined with their tremendous practical success. Any theoretical result showing adiabatic benefits would have to produce a deeper understanding than what we already have, which tends to be of the ‘hey look, it works’ variety, and any benefits would have to be over and above what can already be done, which is to find almost all solutions for reasonably hard problems up to an impressively large scale.

  436. John Sidles Says:

    ———————-
    Scott avers  “Much like Communists and Fascists, the QC-is-impossible camp and the QC-is-practical-now camp are less diametrical opponents than two sides of the same aggressively-wrong coin!”
    ———————-
    We can moderate the provocative rhetoric of this statement, and retain its scientific and mathematical common sense, by inquiring whether the following two hope-to-hype ratios are both reasonably assessed as Ο(1):

       <BosonSampling Hope>/<BosonSampling Hype>
             ≈?≈
       <D-Wave Annealing Hope>/<D-Wave Annealing Hype>

    In regard to D-Wave Annealing, plenty has been said already, and summaries that are reasonable, compatible, and polite include both Scott’s remark (per comment circa #403) “Maybe one can get an asymptotic speedup with quantum annealing […] we have to consider that a wide-open empirical question, with no convincing theoretical or numerical evidence either way” and Geordie’s remark (per Bill Kaminsky’s outstandingly thoughtful comment circa #431) “It is certainly possible that entanglement is an important resource for running many quantum algorithms. However what role it specifically plays — if any — in quantum annealing is not well understood.”

    In regard to BosonSampling, the case for hope has been made by Scott (in the same comment circa #403 as above): “I see no good reason why BosonSampling couldn’t be scaled to (say) 30 photons with no explicit error-correction.” Yet the 2002 and 2004 QIST Roadmaps soberingly remind us:

    ————————-
    Section 6.5  QIST Optical Quantum Computing Summary

    Initialization of [optical] qubits requires fast, reliable, periodic (on-demand) SPSs [single-photon sources]. Each pulse must contain one and only one photon. It must be possible to demonstrate nonclassical interference (e.g., Hong-Ou-Mandel [HOM] interferometry) between two single-photon pulses.

    5.2.1 Weaknesses

    5.2.2.1  A reliable, periodic SPS [single-photon source] has not been demonstrated.

    5.2.2.2  Discriminating SPDs with demonstrated efficiency >99% have not been demonstrated.

    5.2.2.3  It is difficult to mode-match and stabilize very many, multiply nested, interferometers.

    5.2.2.4  The scheme requires photon detection […] on a time scale of nanoseconds.
    ————————-

    It is sobering that during the post-QIST decade, no scalable technologies for achieving the QIST objectives relating to SPS (and BosonSampling) have been seriously proposed, much less experimentally demonstrated. The empirical scaling instead has been that generating correlated n-photon output states requires Ο(1000^n) input photons, and (to my knowledge) no feasible or reasonable technological means of remediating this exponential inefficiency is presently envisioned.

    Exercise  From the data in Hanbury Brown and Twiss’ seminal study Test of new type of stellar interferometer on Sirius (1956), deduce that the star Sirius emitted approximately 10^{37} photons for each correlated pair detected by Hanbury Brown and Twiss. Then argue from principles of quantum optics for either the proposition “Cavity QED methods enable scaling to 30-photon correlations without exponential degradation of correlated detection rates” or alternatively the proposition “Scaling to 30-photon correlations will necessarily entail exponential degradation of correlated detection rates.”

    Conclusion  The QM/QC/QIT/QIT communities can reasonably join with both Geordie and Scott in optimistically hoping that scalable advances in D-Wave Annealing and BosonSampling technologies can both be achieved — and (equally important) that collegiality and humor can both be sustained during progress toward these objectives — in which eventuality future generations may regard “Geordie & Scott” as comparable in deserved acclaim to celebrated pairings like Hardy & Littlewood, Mizar & Alcor, Orthanc & Minas Morgul, orthoclase & plagioclase, Land’s End & John o’Groats, Strom Thurmond and Essie Butler, Rocky & Bullwinkle, and Damon & Pythias. That would be good! 🙂

  437. Alex Selby Says:

    I tested the hunch (#270) that you could solve D-Wave’s native problem in software faster than D-Wave, and found (assuming no mistakes) that you can beat D-Wave by quite some margin using a simple C program. As far as I can see, this proves that D-Wave is not currently a useful computational device since it means you can make a universal D-Wave simulator in software.

    I also speculate that the D-Wave device isn’t doing anything very amazing, though admittedly this is a bit of an extrapolation. One reason for this guess is that a cut-down version of the program that only allows itself to look locally appears to have better behaviour than D-Wave, not just in absolute speed terms, but also in scaling from easy to hard instances.

    The program is here: https://github.com/alex1770/QUBO-Chimera, and there is a writeup here: http://www.archduke.org/stuff/d-wave-comment-on-comparison-with-classical-computers/.
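
    For readers curious what the simplest classical competition looks like, here is a toy simulated-annealing solver for Ising-form instances such as D-Wave’s native problem (a minimal sketch of the generic technique only, with function and parameter names of my own choosing; Selby’s actual program, at the links above, is considerably more sophisticated):

        import math
        import random

        def simulated_annealing(J, h, n, sweeps=1000, T_hot=3.0, T_cold=0.05):
            """Heuristically minimize E(s) = sum_{i<j} J[(i,j)] s_i s_j + sum_i h[i] s_i
            over spins s_i in {-1, +1}.  J is a dict keyed by pairs (i, j) with i < j."""
            s = [random.choice((-1, 1)) for _ in range(n)]

            def local_field(i):
                # effective field on spin i from its couplers and bias
                f = h[i]
                for j in range(n):
                    if j != i:
                        f += J.get((min(i, j), max(i, j)), 0.0) * s[j]
                return f

            for t in range(sweeps):
                # geometric cooling schedule from T_hot down to T_cold
                T = T_hot * (T_cold / T_hot) ** (t / max(sweeps - 1, 1))
                for i in range(n):
                    dE = -2.0 * s[i] * local_field(i)  # energy change if s_i flips
                    if dE <= 0 or random.random() < math.exp(-dE / T):
                        s[i] = -s[i]
            return s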

  438. Bill Kaminsky Says:

    Alex @ #437

    Very neat! As for your extrapolation, namely…

    I also speculate that the D-Wave device isn’t doing anything very amazing, though admittedly this is a bit of an extrapolation. One reason for this guess is that a cut-down version of the program that only allows itself to look locally appears to have better behaviour than D-Wave [my emphasis], not just in absolute speed terms, but also in scaling from easy to hard instances.

    …I think it always should be borne in mind that adiabatic quantum computing (AQC) / quantum annealing (QA) based on transverse Ising models is the most local of possible AQC/QA approaches. The Pauli-X operators encoding the transverse field just drive single bit flips in the problem Pauli-Z basis. Thus, simple perturbation theory, treating the transverse field strength as your small parameter, gives you the intuition that n-bit-flip events will be suppressed exponentially in n, as they appear only at nth order in the perturbation series.

    In contrast, AQC/QA based on Heisenberg couplings is not only capable of universal quantum computation (i.e., simulating any gate model computation), but also — albeit the following gets quite “handwaving” — is capable of nonlocal transitions via driving rather arbitrary permutations among problem Pauli-Z basis states. (The handwaving comes in the form of saying “Well, recall Heisenberg aka “exchange” couplings drive SWAPs, not simple bit flips!” and then pointing to the existence of Heisenberg-model-magnets in nature that are “spin liquids” where the ground state manifold is always “resonating” between many different Pauli-Z states… something that’s much different than normal magnets — “spin solids” as it were — where spins get locked into specific Pauli-Z diagonal states with rare transitions among ’em.)
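
    As a concrete companion to the perturbative picture above, here is a small exact-diagonalization sketch (my own toy illustration, workable only for a handful of qubits, with names of my own choosing) of the transverse-field annealing Hamiltonian whose minimum gap these locality arguments are ultimately about:

        from functools import reduce
        import numpy as np

        X = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli-X: drives single bit flips
        Z = np.diag([1.0, -1.0])                 # Pauli-Z: the problem basis
        I2 = np.eye(2)

        def op_on(op, i, n):
            """Embed a single-qubit operator on qubit i of an n-qubit register."""
            return reduce(np.kron, [op if k == i else I2 for k in range(n)])

        def annealing_gap(Jz, s, hx=1.0):
            """Spectral gap of H(s) = -(1-s)*hx*sum_i X_i + s*sum_{i<j} Jz[i][j]*Z_i*Z_j,
            the transverse-field Ising form of AQC/QA, by brute-force
            diagonalization (feasible only for small n)."""
            n = len(Jz)
            H = -(1 - s) * hx * sum(op_on(X, i, n) for i in range(n))
            for i in range(n):
                for j in range(i + 1, n):
                    if Jz[i][j]:
                        H = H + s * Jz[i][j] * (op_on(Z, i, n) @ op_on(Z, j, n))
            evals = np.linalg.eigvalsh(H)
            return evals[1] - evals[0]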

  439. Rahul Says:

    Scott, it strikes me as very odd that you are not examining at all the quality of Alex Selby’s work when you promote his (probably) premature claims. You hold D-Wave to extremely high scientific standards, but you don’t bother even questioning whether this guy’s “hello world”-type toy programs are really doing anything interesting of any relevance to what D-Wave is really solving. Confirmation bias maybe?

    And after all the whining about McGeoch’s paper you just walked out of the room without posing even one of your questions about it? When the optimization expert behind that paper is right in front of you and you have the chance to show to the whole world what a great defender of science you are, you just walk out? I have a bunch of names that I want to call you right now but I will first give you a chance to explain better what really happened.

  440. blaze Says:

    Yeah, I have to agree with Rahul on this one. We all admire you and look up to you as a leading light on all of this.

    But, come on, when it came time to walk the walk, you ran away.

  441. Jay Says:

    Rahul and blaze,

    I myself find Scott’s present attitude perfectly reasonable. When what you hear is not informative enough, you stand up and leave, period. It may be more romantic, and a temptation, to stand up and fight, but the truth is there are far more interesting things on which an academic can spend his time.

    PS Rahul: I also disagree that Scott holds D-Wave to extremely high standards. He has, for example, been willing to grant D-Wave some quantum beef on what now appear to be quite slim grounds. Of course, that’s easy to say in retrospect.

  442. John Sidles Says:

    ———–
    A Reader says “I have a bunch of names that I want to call you [Scott] right now but I will first give you a chance to explain better what really happened.”
    ———–
    I’ve received about as much “peppery” criticism from Scott as anyone, and so please let me say that Scott’s many contributions to our community, in research and in teaching, are greatly deserving of our admiration, and our respect, and our thanks … NOT our name-calling. So thank you, Scott Aaronson, and “please pass the pepper!” 🙂

    Bill Kaminsky has posted numerous good comments, and his most recent (circa #438), concerning the difficulty of finding ground states of frustrated spin systems, was (as it seemed to me) especially good. It is perhaps not widely appreciated (among Shtetl Optimized readers) that three-qudit systems are the simplest frustrated QIT systems, that their dynamics geometrically unravels on the varietal manifolds that Landsberg calls “triple Segre products” in Section 5.5 of his Tensors: Geometry and Applications (2012, per comment circa #379), and that the study of these varietal manifolds was pioneered by Volker Strassen (of efficient matrix-multiplication fame).

    Landsberg’s survey helps us to appreciate that the dynamical phenomena that Bill Kaminsky’s comments discuss manifest themselves nontrivially in contexts that span many disciplines in science, technology, engineering, and mathematics. In the face of these challenges, and these opportunities, and these responsibilities, who has time for name-calling?

    Conclusion  Rather than calling each other “a bunch of names” and demanding explanations of “what really happened”, we are all of us far better off reading the modern literature of condensed matter physics, algebraic geometry, and field theory, and seeking a better understanding of D-Wave annealing and BosonSampling in the dazzling (sometimes even too dazzling!) illumination that this literature shines for us. We are all of us lucky to have this illumination, and we should celebrate and share our good luck.

  443. Scott Says:

    Rahul #439, blaze #440: No, of course I’m not 100% confident in Alex Selby’s results (just as I’m not 100% confident in the USC results). But they were clearly worth linking to, for the following reasons:

    (1) Selby reports that his code outperformed the D-Wave machine on exactly the same benchmarks tested by McGeoch and Wang, and he makes his code available so you can check for yourself.

    (2) When you read Selby’s descriptions of what he did, you see a mind—a spark of intellect—trying to figure out the truth about the D-Wave device, something you conspicuously don’t see when you read McGeoch and Wang, or when you hear McGeoch speak. (I’m sorry to be so blunt, but I’m trying to answer your questions.)

    Now, as for why I didn’t challenge McGeoch in person: well, has it occurred to you that it’s just damn uncomfortable to humiliate someone face-to-face, particularly if she seems genial and friendly? And that I saw no possible way to get to the crux of the issue without humiliating her?

    Furthermore, suppose I had challenged her in person. Then people would call me mean-spirited, a broken record, etc. More generally, I face this dilemma with every egregiously-wrong claim that comes my way, whether it’s a solution to P vs. NP or a claim to have classically violated the Bell inequality. I can either respond and get called an asshole, or not respond and get called a coward. Or I can respond when and how it’s comfortable for me, and get called both an asshole and a coward.

    So, in summary, I no longer feel like I know how to handle the burden of being right. Would it make people happier if I just quit blogging?

  444. Anon-agree! Says:

    Absolutely not

  445. Greg Kuperberg Says:

    Scott – Don’t hold it all back, tell us how you really feel. 🙂

    I think that it would be very helpful for McGeoch to explain how her claims of a 3,600x D-Wave speed-up square with Troyer’s code, which is faster than the D-Wave machine according to Figure 21(b) of arXiv:1304.4595 by Boixo et al. But: (1) anyone in the room could have asked that question; it didn’t have to be Scott. (2) Actually, she should have thought of the question herself. (3) If she doesn’t, that’s okay; other people can discuss the question instead. Quantum computing is not the Spanish Inquisition, and it is not about her.

    At least as far as I know! I have been warned that no one expects the Spanish Inquisition.

  446. John Sidles Says:

    Scott, the “Mizar & Alcor” pairing (of comment circa #436) referred to John Wallace and Edmond Halley’s celebrated astronomical aphorism, that persons who “see small things but overlook much greater” are said to Vidit Alcor, at non Lunam Plenam (persons who see Alcor [a faint star in Ursa Major] yet overlook the Full Moon).

    It seems (to me) that a guy named Scott Aaronson spoke plainly and well (in comment circa #403): “Maybe one can get an asymptotic speedup with quantum annealing […] we have to consider that a wide-open empirical question, with no convincing theoretical or numerical evidence either way.” And that statement could have been stronger: is our understanding of classical annealing algorithms fundamentally stronger than our understanding of quantum annealing? At the fundamental level, do we really understand Alex Selby’s C code all that much better than D-Wave’s hardware?

    Is there really any individual person now alive — on our entire planet Earth — whom we could confidently trust to answer this class of questions?

    Conclusion  The salient feature of quantum information theory is not the “Alcorian” fragments of knowledge that are possessed by individual persons and/or institutions, but rather the “Lunam Plenam” of our near-complete ignorance.

  447. Jordan Says:

    Your honesty is refreshing, Scott. Anyone judging you is being an obnoxious backseat driver. It’s perfectly human to face dilemmas between conformity and honesty (see, for example, the famous Asch conformity experiment). We can’t preclude that there were many “cowards” in the room who had misgivings about McGeoch’s talk and would have spoken if only someone else had been the first to break the ice (or not… maybe you really would have been the cheese that stands alone. *crickets chirping*, *tomatoes being thrown*). It’s not surprising that the experience left you doubting your own sanity, given the evidence that social conformity situations can actually alter our perception.
    (http://www.darkcoding.net/research/Berns%20Conformity%20final%20printed.pdf)
    Don’t feel bad Scott! You’re anything but a coward and you’re completely normal. There are formidable evolutionary and neurophysiological reasons why it is so difficult to speak out against a group consensus and being a nice guy doesn’t make it any easier!

    The truth is important and the truth will come out (on whatever terms you are comfortable with). This blog is extremely illuminating, and I for one am very grateful for the work you’re doing to shine a bright light on the claims being made surrounding D-Wave. Nobody should second-guess the judgment to go with tact and congeniality in a situation which might have genuinely been awkward and could have unduly humiliated the speaker. Instead of the asshole/coward dichotomy, perhaps it’s better to reframe the choice as compassionate mensch/fearless champion of truth. Everyone needs to cut Scott some slack (especially Scott) 🙂

  448. Scott Says:

    Jordan #447: Thanks so much!

  449. Henning Dekant Says:

    Rahul #439, blaze #440, if Scott thought that things could get unreasonably heated and ugly, then it probably wasn’t the best venue to bring this up. As Jordan pointed out, these questions will be asked regardless, and as far as I am concerned there’s no need to be confrontational about it.

    For what it’s worth, I am genuinely sympathetic to D-Wave, and even have a small bet going that they’ll eventually outperform classical machines, though this is just a means to keep my focus on the question. I wish them all the best and want them to become a sustainable success, but in the end that of course requires ironclad, plainly visible performance gains.

    Also, let’s not forget Scott is a first-time father with an infant at home; that alone qualifies for a lot of slack in my book 🙂

  450. Henning Dekant Says:

    Bill Kaminsky #438, what in your opinion are the leading contenders for AQC/QA based on Heisenberg couplings?

  451. Charles Says:

    A belated response to Nobody Special #266: I’m extremely skeptical of Figure 3 in Peng et al. They’re looking at random (odd) numbers with up to 16 bits and waiting until they have alpha = 1/8 (!) confidence. But even over the largest range, 16 bits, trial division would take less than 5.8 attempts on a random odd number of that size (to get alpha = 1). I would be much happier with the results if they had (1) used only ‘hard’ semiprimes, getting less benefit from being Hamming-close to 0 and from having fewer numbers which are factors, and (2) used a higher confidence. Of course adding more qubits would be terrific, but presumably quite hard.
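
    To make the trial-division baseline concrete, here is a quick back-of-the-envelope simulation (my own rough sketch; the exact average depends on conventions, e.g. whether divisions by composite candidates like 9 and 15 are counted, so treat the printed number with caution):

      import random

      def attempts_to_factor(n):
          """Trial-divide n by odd candidates 3, 5, 7, ... up to sqrt(n).
          Return how many divisions were tried before a factor appeared,
          or None if n turns out to be prime."""
          count, d = 0, 3
          while d * d <= n:
              count += 1
              if n % d == 0:
                  return count
              d += 2
          return None  # prime

      random.seed(1)
      counts = []
      while len(counts) < 10000:
          n = random.randrange(2**15 + 1, 2**16, 2)  # random odd 16-bit number
          c = attempts_to_factor(n)
          if c is not None:  # keep composites only; primes have no factor to find
              counts.append(c)
      print("mean attempts over odd 16-bit composites: %.2f" % (sum(counts) / len(counts)))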

  452. Cathy McGeoch Says:

    Scott #443 I wish you had asked your question.

    In fact the Boixo et al. paper was mentioned on one of my slides, but as you know I ran out of time and rushed at the end. I didn’t realize the question loomed so large in your mind or I would have made a point of mentioning it.

    I don’t think it would have been as humiliating as you imagined.

  453. Rahul Says:

    To just clarify, the poster in #439 (who seems quite obnoxious) wasn’t me. While I don’t have a monopoly on my name, it’s annoying (especially if he was trying to be malicious; I don’t know).

    As it stands, I’m quite happy about Alex Selby’s effort as well as Scott’s stand on all this.

  454. Rahul Says:

    With the precedent of “Fake Rahul #439”, my suspicious mind now wonders whether Cathy McGeoch #452 is the real deal or not.

    Just a (possibly redundant) note of caution that people on blogs may not be who they seem to be, and one could play a lot of mischief that way.

  455. Gil Kalai Says:

    Scott: (Comparing the “QC-is-impossible camp and the QC-is-practical-now camp”) “Among other similarities, both camps fail equally to appreciate just how enormous the gaps can be between where technology is today and what the laws of physics ultimately allow.”

    I think that this enormous gap is something everybody seriously dealing with quantum computation is aware of, and indeed this gap is what makes the scientific issue so exciting. There are, of course, different opinions and interpretations. The major question is how and to what extent we shall be able to push technology forward, and to what extent we shall gain more understanding on what the laws of physics ultimately allow.

    Let me mention where I think Scott’s approach is faulty. Scott underestimates the enormously different physical reality that operating quantum computers would represent, he fails to appreciate the large uncharted theoretical territories that can be relevant to the feasibility of quantum computers, and he is not fully aware of the major revolution in the field of quantum physics already represented by quantum computing and quantum error-correction.

  456. Cathy McGeoch Says:

    Greg #445 here is my take on how the two papers compare.

    First, the point of our study was to compare publicly-available software solvers to those provided by D-Wave. The broad specifications were set by a client (unnamed at the time but it turned out to be a consortium of search engine and space rocket builders). Our tests were never meant to be used to compare platform speeds, and it is wrong to use our data to support arguments either way.

    Just as our experiments are not relevant to the question studied by Boixo et al., it’s hard to find results in their paper that can be compared to ours. Here are some reasons.

    1. They do not report any runtimes for the problem sizes we tested. They make some predictions about how things will scale, but that’s a very hazardous undertaking when hardly anything is known about the underlying computation models. I would rather wait and see.

    2. They measure time differently, they set program parameters to reflect “ideal laboratory conditions” in ways that would not be available to a practitioner (i.e. the client), and they use a slightly different instance class. All perfectly reasonable, but I don’t know a defensible way to do the cross-platform comparisons.

    3. Two ways to examine the tradeoff between time and solution quality are: (1) give both solvers the same amount of time and compare the solutions they find; and (2) let them run until they find solutions of equal quality and compare how long each took. Our paper contains both. Boixo et al. use (2), but they look at times to solve 10 percent, 50 percent, etc. of the entire test set. I have no idea how to translate those numbers to be relevant to our instance-by-instance measurements.
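
    For concreteness, here is a minimal sketch of the two protocols, written around a placeholder random-search “solver” (a toy illustration of mine; it is not the code from either paper):

      import random, time

      class RandomSearchSolver:
          """Stand-in stochastic solver: samples random 0/1 assignments to a
          QUBO. Purely illustrative; a real solver exposes an equivalent call."""
          def __init__(self, Q, n):
              self.Q, self.n = Q, n

          def sample(self):
              x = [random.randint(0, 1) for _ in range(self.n)]
              return sum(v * x[i] * x[j] for (i, j), v in self.Q.items())

      def quality_at_fixed_time(solver, budget_s):
          """Protocol (1): fix a time budget, report the best value found."""
          best, start = float("inf"), time.perf_counter()
          while time.perf_counter() - start < budget_s:
              best = min(best, solver.sample())
          return best

      def time_to_target(solver, target):
          """Protocol (2): run until a target quality is matched, report the time."""
          start = time.perf_counter()
          while solver.sample() > target:
              pass
          return time.perf_counter() - start

      random.seed(0)
      n = 12
      Q = {(i, j): random.uniform(-1, 1) for i in range(n) for j in range(i, n)}
      solver = RandomSearchSolver(Q, n)
      target = quality_at_fixed_time(solver, 0.05)   # protocol (1)
      print(target, time_to_target(solver, target))  # protocol (2)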

    That being said, had I known about their solver I certainly would have asked to borrow a copy for our tests.

  457. Scott Says:

    Rahul #453:

      To just clarify, the poster in #439 (who seems quite obnoxious) wasn’t me. While I don’t have a monopoly on my name, it’s annoying (especially if he was trying to be malicious; I don’t know).

      As it stands, I’m quite happy about Alex Selby’s effort as well as Scott’s stand on all this.

    Thanks for clarifying! I really did think you two were “the same Rahul,” and it did confuse me.

  458. Rahul Says:

    Cathy McGeoch #456:

    Since you use the term “client”, I feel compelled to ask, Was this a paid assignment?

    Were you commissioned to do this study by D-Wave or the then unnamed consortium or someone else?

    In fields I come from, it is common to have a conflict of interest statement. Is this the convention in Comp. Sci. too? If I missed it in your paper, my apologies.

  459. Rahul Says:

    The one clear statement I loved from Cathy’s post (#456) was this:

    Our tests were never meant to be used to compare platform speeds

    Paging Geordie Rose. He seems to have been using (misusing?) the McGeoch and Wang study for exactly this purpose everywhere.

  460. Scott Says:

    Cathy McGeoch #452 and #456: I appreciate your coming here to the lions’ den; I really do!

    On reflection, I think that indeed I should’ve asked my question. I’m sorry for being too weak-willed.

    But as for your not knowing how large the question loomed in my mind: well, since you gave me a dirty look when I said my name, I assumed you’d seen this blog post! 🙂 And if you’d seen the post, then you would’ve known that the comparison to Boixo et al. was the central question on my and many other people’s minds. So I hope you can forgive my surprise at not seeing it raised.

    The reason that question looms so large for me and like-minded folks here is this. While we’re sure you’re carefully and accurately reporting the results of the experiments you did, ultimately we care less about any particular experiment than about the broader scientific question: is the D-Wave device getting any quantum advantage over classical computation, or not?

    To address that question, it’s necessary to compare the device, not to this or that particular software package, but to the best classical software for solving D-Wave’s QUBO problem. Now, obviously we can’t know what’s the best possible classical software for this specific problem, and probably it hasn’t even been written. But having said that, it seems only fair to spend a tiny fraction of the effort that D-Wave spent on optimizing their hardware and software, on optimizing the classical software! That’s exactly what Boixo et al. did, and when they did it, they found that the purported advantage for the D-Wave device evaporated. Claiming that such a striking finding isn’t relevant to you, because they were measuring runtime and solution quality a little differently than you did, feels weak to me.

    Now, I can understand if you focused on the question you did, simply because your job was to meet the “broad specifications” of “a consortium of search engine and space rocket builders.” But if so, then I can only reply that the consortium of search engine and space rocket builders was asking the wrong questions. And they could’ve come here at any time to learn, free of charge, which questions they should’ve been asking! 😀

  461. Scott Says:

    Gil Kalai #455:

      I think that this enormous gap is something everybody seriously dealing with quantum computation is aware of, and indeed this gap is what makes the scientific issue so exciting.

    What I meant was simply that, from my own personal perspective, the QC-is-impossible camp mistakes a limitation of current technology for a limitation of the laws of physics, while the QC-is-practical-now camp mistakes a capability of the laws of physics for a capability of current technology. But these two mistakes are just the contrapositives of each other, and are logically equivalent!

  462. John Sidles Says:

    ————————-
    Scott assertion (1) “When you read Selby’s descriptions of what he did, you see a mind — a spark of intellect — trying to figure out the truth about the D-Wave device, something you conspicuously don’t see when you read McGeoch and Wang, or when you hear McGeoch speak.”

    Scott assertion (2) “Ultimately we care less about any particular experiment than about the broader scientific question: is the D-Wave device getting any quantum advantage over classical computation, or not? To address that question, it’s necessary to compare the device, not to this or that particular software package, but to the best classical software for solving D-Wave’s QUBO problem.”
    ————————-
    It is regrettable (for everyone) that the personal rhetoric-based negativity of assertion (1) has dominated the impersonal science-based positivity of assertion (2).

    Gigantic quantum elephants  The QIT community dances around many “gigantic elephants” (in Scott’s useful phrase), and two of the biggest are (1) Despite promises to the contrary, and with no explanation or excuses, the QIST Technical Experts Panel (TEP) has remained silent for one whole decade, which is bad. (2) Because we have no clear notion of the practical or fundamental limits to “the best classical software”, and because the performance of algorithms (both classical and quantum) has been improving at an astounding pace, comparisons of “the best quantum software” to the “best classical software” too-easily degenerate into rhetorical squabbling, personal criticisms, cherry-picking, boosterism, and turf-defending; these tendencies too are bad.

    Conclusion  Negative rhetoric harmfully distracts the QIT community from attending to its elephants.

  463. Gil Kalai Says:

    Ha, what you just wrote, Scott (#461), could have made sense if we were talking about a small gap. But given that it is an enormous gap, the exciting scientific question is where the barrier lies between the ultimate future technologies and the ultimate limitations described by the laws of physics. My expectation is that the barrier ultimately will be the fault-tolerance barrier, which will not allow quantum speedup. There is quite a way to go, both on the side of technology and on the side of theory, to reach this barrier.

  464. Cathy McGeoch Says:

    To clarify: as Matthias points out, they do report runtimes for their Simulated Annealing code up to n=512. (I was referring to published hardware times; those are still a work in progress.)

    Bottom line: I think their SA solver would have performed very well in the first of our three tests. Exactly how well I do not know.

    How much of that success should be attributed to constant factors, how much to the algorithm (i.e., how it scales with n), and how much to platform speeds, I don’t know.

    How much of that success should be attributed to the choice of instance class (random model) to test on, I don’t know. Given that SA does so well, I’m starting to wonder if this random model is really challenging enough to be interesting.

    How well it would perform on the other problems we looked at, I don’t know.

    And don’t forget Moore’s law. How well their solver will perform a year from now (when D-Wave and conventional hardware both get faster and bigger), I don’t know.

    Fundamentally, I think the experiments in our paper and (imho) in the Boixo et al paper are far too small in scope to support any conclusions about the larger picture.

  465. Cathy McGeoch Says:

    Rahul #458 D-Wave retained me as a consultant in September 2012 to help them with a project. A potential client (of theirs — I wasn’t told who it was at the time) had specified a set of 5 benchmark tests for their system (hardware + software) to pass as a condition for purchase.

    The broad outlines of the benchmarks had been set (what solvers to use, generally what kinds of input classes, etc.), but there were a lot of details left out. I was basically asked to “vet” the specs and to carry out some pre-benchmark benchmarks, to make sure that the tests would be replicable, reliable, and efficient.

    I did the work, wrote my report for D-Wave, and yes, collected a consultant’s fee. (Presumably they ran their benchmarks (on a different chip) to their client’s satisfaction.) One of my conditions was that I would get to keep the data and publish papers on it (this is academia, after all).

    I am not aware of any convention about disclaimers. I think the results are not nearly as exciting as the press makes them out to be. For a dedicated hardware chip to be only 3600x faster than commercial software — on exactly one problem class — is not that impressive. And on the other tests it was just tied for best.

    And while we’re on the topic, I very much disagree that our experiments were “rigged” to make D-Wave look good. The general framework for the tests was specified by the client (arguably an adversary in this context), and the details of the experimental designs, if anything, favored the software.

  466. Scott Says:

    Cathy McGeoch writes:

      Our tests were never meant to be used to compare platform speeds, and it is wrong to use our data to support arguments either way.

    And again:

      Fundamentally, I think the experiments in our paper … are far too small in scope to support any conclusions about the larger picture.

    And:

      I think the results are not nearly as exciting as the press makes them out to be.

    Cathy, thanks so much for saying these things. Because of your saying them, I hereby retract, and apologize for, the one negative thing that I said about you personally on this thread.

    I only hope that the next time Geordie, or a popular writer, repeats the “D-Wave is 3600x faster” claim, they’ll note that the consultant on the experiments behind that claim made the three statements above.

  467. Cathy McGeoch Says:

    Scott #460

    I had seen your “front page” blog post (“rigged”), and sent you an email trying to clarify what I saw as some basic misconceptions about the whole point of our work, and pointing out that I didn’t see how our data was relevant to your question. Did you not receive it?

    I was not aware of this ongoing discussion page — it was mentioned Friday after you left, and I just took a look at it for the first time yesterday. I don’t think I gave you a dirty look. Seriously?

    We have to disagree, I guess, on whether NASAgoogleUSRA asked the “wrong questions” about performance of a product they were planning to use in the field, compared to viable alternatives (such as buying 10,000 desktops and running CPLEX on them all, which would cost just about the same). I think they are perfectly reasonable questions, and it seems likely (assuming they bought the upgrade package 😉) that next year’s model will be even better.

    There is room in this world for both “can it compete on my problems” and “what is the fundamental nature of the thing” type experiments. For reasons listed in #464 I don’t think the Boixo et al. paper will be the last word on the latter question.

  468. Cathy McGeoch Says:

    Scott #466 apology accepted. Did you really not get the email that mentioned all this?

  469. Cathy McGeoch Says:

    Scott #466 Reporters gotta report. It would be nice if the scientists capable of reading these papers could get things right!

    As far as I can see, neither D-Wave nor Geordie has been trumpeting this 3600x number in a misleading way. If you look at the press announcements on the D-Wave (and google etc.) sites, they’re pretty sober.

  470. Scott Says:

    Cathy: I didn’t recall getting any email from you, and indeed was vaguely wondering why you never contacted me to offer your side of the story. So I just searched and—guess what? I found two emails from you in my Gmail spam folder. Oof!

    So, not only do I blame Google for choosing the wrong comparisons for your experiment, I also blame them for losing your email, and thereby setting the stage for some very avoidable unpleasantness.

  471. Bill Kaminsky Says:

    To answer Henning’s question @ #450…

      Bill Kaminsky #438, what in your opinion are the leading contenders for AQC/QA based on Heisenberg couplings?

    …I’d say the leading contenders are still superconducting flux qubits like those used by D-Wave. If D-Wave wanted to put Heisenberg-style couplings (i.e., at least Pauli-XX as well as Pauli-ZZ) onto their chips, they’d have to do some combination of

    (1) having capacitive as well as inductive couplings between pairs of qubits

    AND/OR

    (2) refining their inductive couplings so that the individual SQUIDs comprising each qubit can be differently coupled to each of the SQUIDs on another qubit, something which may also require replacing some individual Josephson junctions on their qubits with SQUIDs (recall a SQUID is a loop of wire punctuated by 2 Josephson junctions).

    Requisite Disclaimer: Heisenberg couplings may or may not help adiabatic quantum minimization / quantum annealing. One can handwave like I did in Comment #438 about “spin liquids” and say that kind of magnetic order looks a lot more amenable to driving transitions between different problem basis states than the usual “spin solid” magnetic ordering, but it’s up in the air whether this is a road that leads anywhere. What undeniably is true is that once you have Heisenberg couplings instead of just a transverse field, you can:

    A) run quantum algorithms that definitely pose a “sign” problem were one to try to classically simulate them with Quantum Monte Carlo, which ain’t really a surprise since you can in fact…

    B) pose QMA-complete problems, and thus be a universal adiabatic quantum computer.

    Note also, I imagine the folks at D-Wave are fully aware of this. Certainly people D-Wave has collaborated with and/or employed (other than yours truly) are fully aware of this. For example, a good reference on Quantum Monte Carlo and the sign problem is this set of slides from Matthias Troyer:

    http://wiki.phys.ethz.ch/quantumsimulations/_media/sign.pdf

    and a good reference on the minimal set of pairwise couplings necessary to pose QMA-complete problems is

    Jacob Biamonte and Peter Love, “Realizable Hamiltonians for Universal Adiabatic Quantum Computers” http://arxiv.org/abs/0704.1287 [published as Phys. Rev. A 78, 012352 (2008) ]

    The reason, I imagine, why D-Wave hasn’t tried yet to have Heisenberg couplings on their chips is that it’s just harder engineering-wise. I’d wager, however, if their transverse Ising model based chips seem not to be outperforming classical algorithms, they’ll try to add Heisenberg couplings to their chips. While some might denigrate that evolution as a “stone soup” scenario, I think it’s only fair to try out easier designs first.
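
    To spell out the Hamiltonians (standard textbook forms, with sums restricted to the hardware coupling graph; my summary, not equations from anywhere in this thread): D-Wave’s current chips anneal a transverse-field Ising model, and the “Heisenberg-style” upgrade amounts to adding pairwise XX couplers, which per the Biamonte–Love paper above already suffice for universal adiabatic QC:

      % Transverse-field Ising anneal (what D-Wave's chips implement)
      H(s) = A(s) \sum_i \sigma^x_i
           + B(s) \Big( \sum_i h_i \, \sigma^z_i
           + \sum_{\langle i,j \rangle} J_{ij} \, \sigma^z_i \sigma^z_j \Big)

      % With pairwise XX couplers added (the "ZZXX" Hamiltonian)
      H'(s) = H(s) + \sum_{\langle i,j \rangle} K_{ij} \, \sigma^x_i \sigma^x_j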

  472. Greg Kuperberg Says:

    Cathy – You’re obviously available for an honest conversation, and it goes without saying that that’s a good thing.

    Still, I think that you’re waffling on two essential points that need to be made crisp. First, the world’s newspapers clearly took you to mean not just that D-Wave is 3,600 times faster than off-the-shelf software, but in fact that it’s 3,600 times faster than a desktop-class computer in general, at the specific benchmark that you tested. For instance in USA Today you said, “On one of the tests we have run, it is faster, much faster. Roughly 3,600 times faster than its nearest competitor.”

    Isn’t it now clear that you did not actually test the nearest competitor? That is, the actual nearest competitor at the time you were interviewed by USA Today was Troyer’s simulated annealing code, not CPLEX. I might agree that you can be excused for not knowing that this competitor existed, but knowing what you now know, isn’t it fair to say that this article in USA Today is wrong? (I’m asking about this article specifically; I think that it is too nebulous to say that the press in general exaggerates.)

    Or at the very least, if you want to hedge that you haven’t yet tested Troyer’s code, that the article is probably wrong. On that note, how long would it take you — now that you have been mentioned by name in NPR, the New York Times, USA Today, Nature, the Economist, etc. — to go ahead and benchmark Troyer’s computer program to your standards? But still, even if you haven’t yet tested Troyer’s program, wouldn’t you agree that it casts major doubt on what the press thought you meant?

    Second, here is a direct quote from your client Geordie Rose:

      When D-Wave was founded in 1999, our objective was to build the world’s first useful quantum computer. The way I thought about it was that we’d have succeeded if: (a) someone bought one for more than $10M; (b) it was clearly using quantum mechanics to do its thing; and (c) it was better at something than any other option available. Now all of these have been accomplished, and the original objectives that we’d set for ourselves have all been met.

    It sounds like you do not endorse the claim that D-Wave has yet accomplished goal (c), that any device that they have yet built is necessarily better at anything than any other option available. Is that correct?

  473. Scott Says:

    Cathy (con’t): Like Greg, I do strongly disagree with the claim that D-Wave and Google aren’t misleading people about your results. For me, the crux of the matter is this: when you combine the performance of Matthias’s simulated annealing code (not to mention Alex Selby’s), with your own disclaimers about what can’t be concluded from the comparisons that you did, the entire case for the existence of any practical benefit from the current D-Wave devices collapses.

    In practice, of course, you’re never trying to solve QUBO with the D-Wave constraint graph, but only some other optimization problem that has to be reduced to D-Wave’s problem, incurring considerable loss. But we now know that, even in the silly case that you did want to solve D-Wave’s QUBO problem and nothing else, you’d be better off to ask Matthias for his code (or download Alex’s code) and run it on your laptop, rather than spending $15 million for D-Wave’s giant black box.
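
    Just to underline how low the bar is here: a bare-bones single-spin-flip simulated annealer for an Ising instance fits in a couple dozen lines. (This is a toy sketch, emphatically not Troyer’s or Selby’s optimized code.)

      import math, random

      def simulated_annealing(h, J, n, sweeps=1000, T0=3.0, T1=0.05):
          """Toy annealer for E(s) = sum_i h[i]*s[i] + sum_{(i,j)} J[i,j]*s[i]*s[j],
          with spins s[i] in {-1, +1} and a geometric cooling schedule."""
          s = [random.choice((-1, 1)) for _ in range(n)]
          nbrs = {i: [] for i in range(n)}
          for (i, j), Jij in J.items():
              nbrs[i].append((j, Jij))
              nbrs[j].append((i, Jij))
          for sweep in range(sweeps):
              T = T0 * (T1 / T0) ** (sweep / (sweeps - 1.0))
              for i in range(n):
                  # energy change if spin i were flipped
                  dE = -2 * s[i] * (h[i] + sum(Jij * s[j] for j, Jij in nbrs[i]))
                  if dE <= 0 or random.random() < math.exp(-dE / T):
                      s[i] = -s[i]
          energy = (sum(h[i] * s[i] for i in range(n))
                    + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))
          return s, energy

      random.seed(0)
      n = 16
      h = [random.uniform(-1, 1) for _ in range(n)]
      J = {(i, i + 1): random.choice((-1.0, 1.0)) for i in range(n - 1)}  # demo chain
      print(simulated_annealing(h, J, n)[1])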

    And therefore, I claim that any reference to your results, in the context of why it could make sense for Google and NASA to buy a D-Wave machine, is ipso facto a misleading reference.

  474. Nobody Special Says:

    Cathy #466, I’m a little confused. If we can’t use the results to compare platform speeds, then why is 10,000 desktops a correct cost comparison? It seemed like, for a minute, you put Selby’s evidence and your own tests on equal footing. In which case, doesn’t it seem reasonable that Google could have just bought a single machine and invested a trivial (cost-wise) amount of time optimizing software to solve the problem?

  475. Scott Says:

    Cathy (con’t): Regarding the word “rigged,” please see comment #170 and comment #173, where Greg and I both clarified that by “rigged,” we didn’t mean to imply any sort of dishonesty on your part, but only that your comparisons seemed extremely uninformative compared to Troyer’s about the substantive questions at hand. Knowing what I know now, I would tend to say: the comparisons you did came “pre-rigged” by your clients; you weren’t the ones who rigged them! 🙂

  476. John Sidles Says:

    ——————–
    Scott Aaronson proclaims (IN BOLD LETTERS): “The entire case for the existence of any practical benefit from the current D-Wave devices collapses.”
    ——————–

    Observation I  Now it is necessary only for reporters to quote Scott verbatim — without providing substantial further scientific, technological, engineering, or mathematical context (of which there is plenty) — for Shtetl Optimized itself to be plainly guilty of the same excesses of which it stridently accuses D-Wave.

    Observation II  Cathy McGeoch, please let me say that your posts here on Shtetl Optimized (circa #465 and #467-9) have been models of collegial good manners, reasoned exposition, adult restraint, and common sense.

    Thank you, and please keep it up.

  477. Alexander Vlasov Says:

    A couple of technical questions: 1) Maybe I missed something, but which particular software for quantum annealing is used by Troyer et al.? 2) Can Alex Selby’s code be compared with annealing at a zero temperature parameter?

  478. Scott Says:

    John Sidles #476: I stand 100% by what I wrote in bold letters. The world can quote me on it, either with or without the surrounding explanations for why it’s a perfectly-accurate summary of the situation. (Note that I didn’t say there will never be any practical benefit from any future D-Wave device, nor that it’s impossible that some practical benefit even from the current device might be discovered, but rather that no serious case has been made for any practical benefit from the current device.)

    Meanwhile, I’m sorry to say that you, John Sidles, are hereby banned from this blog for 3 months. I’ve finally had it up to here with your unique combination of condescension, weird fixations, obsession with tone over substance, refusal to address the actual points at hand, and refusal to learn anything from what anyone else explains to you, all wrapped up with a bow of avuncular kindliness.

  479. Cathy McGeoch Says:

    Greg #472 One thing I’ve learned from this experience is don’t believe everything you read in the papers. I have seen quotes around things I never said, and wouldn’t have said. From reporters I never spoke to or exchanged email with.

    Re timeline: This started when the public affairs director at Amherst College wrote a little profile on this work for the college website, after I gave a local talk in my department. He interviewed me in late April, and his story appeared on the website sometime the first week of May.

    Matthias Troyer got in touch and we spoke on 5/2; this was around the time I first became aware of their paper. After that conversation I tried to mention their paper to any reporter I spoke with. With mixed results, obviously.

    Re the USA Today quote: I don’t remember exactly what I said on the phone, but I was referring to the nearest competitor of the three we looked at (CPLEX, akmaxsat, Tabu) in that test. That was the context of the conversation. So I would call it a misquote or a truncated quote, that ended up being misinterpreted.

    Re testing their code: It would probably be difficult to run tests using the exact same test environment and conditions (all that code and equipment is still at D-Wave, and Cong (who developed it and knows how to run it) has gone back to grad school). But I could probably manage some similar tests on my local systems. Matthias and I have had some conversations about giving me access to their code, but understandably they’d like to complete their own experiments first. We’ll see.

    Re goal (c): well I suppose it depends on what you mean by “available.” And on what you mean by “better.” I don’t think there’s a short answer to that, and would rather stay out of it.

  480. Cathy McGeoch Says:

    Scott #473 My understanding is, there are lots of applications in machine learning where solving this native problem is important, basically as the inner-loop operation in a larger complicated algorithm. The constraints of the hardware graph are not restrictive because it is easy to choose “sample points” that match. It’s easy to see why Google and NASA might be interested.

    And, of course, they probably want to use it to solve a big variety of combinatorial optimization problems.

    I don’t think this one paper “collapses the entire case” for the usefulness of D-Wave. The way they define “faster,” while perfectly reasonable in context, leaves a lot of room for the D-Wave system to be useful on various subsets of problems.

    Probably GoogleNASA’s best strategy at this point is to hire a grad student (like the one whose PhD work is described in the Boixo et al. paper) to work his magic for both conventional and quantum platforms.

  481. Scott Says:

    Cathy #480:

      The way they define “faster,” while perfectly reasonable in context, leaves a lot of room for D-Wave system to be useful on various subsets of problems.

    That’s exactly the part that I disagree with. What’s a single example of a situation where—if you actually cared about the answer, as opposed to being able to say you used a quantum “trophy computer”—you would want to issue your function call to the D-Wave machine, rather than to Troyer’s code?

  482. Cathy McGeoch Says:

    One more post for now; I need to get back to my real life.

    Alex Selby and I have had several email conversations about the results posted in his blog. He has made a few wrong assumptions about our experiments (not necessarily his fault, since many details were omitted from the conference paper due to the 10 page limit). Because of those errors, I don’t think his data supports his summary conclusion 1 or summary conclusion 2. We are working on how to reconcile our test results.

    John #476 Thanks for the kind words; I had some hesitation about jumping in. Sorry about the banishment.

  483. Peter w. Shor Says:

    Hi Scott,

    I think it is much more mean-spirited to walk out of Cathy McGeoch’s talk early, not even staying to hear the end, and then later accuse her of incompetence when she isn’t there to defend herself, than to wait until the end of the talk when you’ve heard everything she had to say, and ask (moderately politely phrased) penetrating questions. I wish you had stayed to ask her questions, so that we could have heard her responses to them.

  484. Peter w. Shor Says:

    Scott:

    I think the questions about “Can you solve this specific problem on D-Wave’s machine faster with a conventional computer?” are completely misguided at this point. I want to go back to my comparison to the Wright brothers’ airplane. Here are the comparisons:

    a1) Does it fly?
    a2) Is it actually doing quantum annealing? That is, does their chip use quantum processes for computing on a fundamental level?

    b1) Is flying useful for anything?
    b2) Can quantum annealing eventually be used to solve any practical problems better than classical computers?

    c1) Will it get you from New York to Boston faster than a train?
    c2) Does it currently solve some problem faster than classical computers?

    You are concentrating on question c. The Wright Brothers’ plane would have failed miserably when faced with this question. I think we should be focusing on questions a) and b). For question a), I think there is some evidence that it actually flies (although not very well yet—but there are more generations of D-Wave chips coming). For question b), I am not yet convinced that quantum annealing can do anything useful.

  485. Greg Kuperberg Says:

    Cathy – I am happy to doubt a direct quote in the newspapers, when I know to doubt it. I have been quoted in the papers as well and I have learned how to get quoted properly. But yes, it can happen. I’m happy with the statement that USA Today did not quote you properly.

    But as for this issue of testing Troyer’s code, I like your comment that you could just test it on your own computers at Amherst. Surely that is a good thing to do.

    I don’t think that it’s the right time to make excuses for what Google and D-Wave could be thinking. Frankly D-Wave has made a mess of things, and I don’t think that you should further mix that mess into your answers about questions about your own work.

  486. Scott Says:

    Peter #483: To be honest I walked out, less because of anything Cathy said or didn’t say, than because of my astonishment that neither you nor anyone else was asking those penetrating questions. I figured that I had already said my piece on my blog, and everyone knew that, and the questions were obvious anyway—so the polite thing to do would simply be to sit back and wait for others to ask the questions. When a full hour passed and no one did, I started to fear that the whole exercise might have had more to do with your friendship with Prof. McGeoch, or something like that, than with a desire to reach the truth about the underlying issues—and that my asking the questions might have been seen as an impertinence in that context.

  487. ObnoxiousRahul Says:

    Shor #483, I completely agree! That was exactly my point in #439, but I got called “obnoxious” by Scott’s groupies.

    To me it still looks like Scott deliberately avoided the proper scientific discourse and preferred going back to being an internet troll, because he has lots of supporters here, many of whom probably are not experts on the topic and barely have any understanding of what’s going on.

  488. Rahul Says:

    Assuming Alex Selby is right, classical codes then seem about 1000x faster than D-Wave.

    The difference in hardware costs would be ~10^4 (~$10M for D-Wave vs. ~$1,000 for Alex’s 3.2 GHz i7-965).

    Hence, on a performance-per-dollar metric, D-Wave seems about 10^7 times worse. Not a good showing.

    OTOH, one statistic might be interesting to know: for D-Wave, what’s the production cost of their SQUID core chip versus the cooling cascades, etc.?

    If scaling the chip count / qubit count could be done at close to fixed cost, that’d paint a rosier picture.
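
    For anyone who wants to redo the arithmetic with their own assumptions, it’s just:

      speedup = 1000                   # Selby's code vs. D-Wave (assumed above)
      cost_dwave, cost_pc = 10e6, 1e3  # rough dollar figures from above
      print("perf-per-dollar gap: %.0e" % (speedup * cost_dwave / cost_pc))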

  489. Don Crutchfield Says:

    Scott, would you slag off esteemed scholars (as you have been in this post) if you didn’t have tenure?

  490. Greg Kuperberg Says:

    Peter – Look, I sort-of understand your impulse to defend the other side in this discussion. Sort of. However, I was just reading about progress in superconducting qubits last night due to Martinis, Schoelkopf, et al. I really think that you are doing a disservice to the field with comparisons to the Wright Brothers and with statements like this: “I think there is some evidence that it actually flies (although not very well yet—but there are more generations of D-Wave chips coming).”

    Don’t you understand that D-Wave has taken the easy way out, while academic labs push the technology forward? It is explicitly stated in a recent review article that however you learn how to make a superconducting qubit, you can then easily make hundreds of them. The hard part is not making many qubits, it’s making good qubits. The people who are making the best Josephson qubits don’t yet want to multiply their qubits, which are getting better and better but still aren’t ready for prime time.

    Whereas D-Wave has rushed forward to replicate qubits that aren’t even close to the state of the art, on the idea that annealing on them will be good anyway. And you know as well as I do that the theoretical basis for annealing with incoherent qubits is weak.

    Now you could say, let them try it anyway, it just might work even though the theory is weak and the physical technique doesn’t look competitive. Which is logically defensible, but don’t make comparisons to the Wright Brothers.

    The only reason that the Wright Brothers had any victory is that, until 1908, their work was better grounded in theory. If they had been like D-Wave, they would have bragged that they didn’t need any wind tunnel tests, nor three-axis flight control, on the argument that their “planes” were faster than land motorcycles. They could have tried to beat motorcycles with human cannonballs. Yes, human cannonballs fly, but they don’t work.

    You also shouldn’t blame Scott for the red herring of “New York to Boston” or “is it currently faster”. Because it’s not his red herring, it’s D-Wave’s. The words are right there, Rose claims that his device is currently faster.

  491. Scott Says:

    ObnoxiousRahul #487: On the contrary, I think this episode perfectly illustrates both the disadvantages and the advantages of blogs compared to face-to-face conversation. Yes, on blogs, people misinterpret signals, act rude, and level accusations at each other that they never would face-to-face. But in the process, at least absolutely everything gets out into the open. Notice how I managed to learn orders of magnitude more from Prof. McGeoch in a few blog comments than I did from having her in the same room with me for an hour—a situation where she wanted to stick to her largely-irrelevant slides, I didn’t want to be seen as impertinent, and so we were both constrained by social protocol from getting to the heart of the issue.

    On another note, I suppose I now know the answer to the question, “are the arguments against the latest D-Wave hype so compelling that I would still be persuaded by them, even if Peter Shor himself were to come out in D-Wave’s defense?” So, thanks Peter! 🙂

  492. Scott Says:

    Don Crutchfield #489: LOL!

  493. Rahul Says:

    Peter w. Shor #484 says:

      I think the questions about “Can you solve this specific problem on D-Wave’s machine faster with a conventional computer?” are completely misguided at this point. I want to go back to my comparison to the Wright brothers’ airplane.

    The problem with D-Wave is this: if everyone were satisfied that D-Wave had made a truly quantum machine, then having that many qubits would in itself have been commendable. There would be no need to prove it’s faster than a classical competitor or anything.

    What we are stuck with is a black box which seems somewhat classical, somewhat quantum, with nobody really sure how quantum it is, and whether the quantum-ness is what gives it its computing power.

    In this scenario the only thing one is left with as an evaluation metric is to see how fast it is versus a conventional machine.

    Note that D-Wave itself is touting it as a practical machine, not some research curio. In which case it seems perfectly valid to ask “Is it faster?” Let’s be glad not many are yet asking “Is it faster per dollar spent on it?”

  494. Henning Dekant Says:

    Rahul #454, people posting under fake handles on blogs! Oh my god, what has the world come to! Next time I may not even be able to trust these nice people from Nigeria who ask me to help them transfer money 😉

    Actually, on my fringe blog that I host for people who delight in debunking pseudo-science, this problem was quite common. To mitigate this without compromising anonymity or requiring registration, I now report a hash tag based on the IP address and email.

    Of course this is a pretty weak counter-measure, but at least it helps with identifying drive-by identity spoofing.
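
    For the curious, the scheme is just a salted hash; something like the sketch below (my illustration here, not the exact code my blog runs):

      import hashlib

      def commenter_tag(ip, email, salt="per-site secret"):
          """Derive a short, stable tag from IP + email, so repeat commenters
          are linkable to each other without exposing either value."""
          digest = hashlib.sha256(("%s|%s|%s" % (salt, ip, email)).encode()).hexdigest()
          return digest[:8]

      print(commenter_tag("203.0.113.7", "alice@example.com"))  # an 8-hex-char tag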

  495. Rahul Says:

    ObnoxiousRahul Comment #487 :

    It’s perfectly fine, and perhaps true, to accuse us here of not being experts, or even of being idiots.

    But it’s a bit funny being accused of being “Scott’s groupies”: personally I’m in the QC skeptics camp (to the extent of thinking we are over-allocating resources to it) which would make it somewhat hard to get accepted into Scott’s inner circle / fan-club, I think. 🙂

  496. Alexander Vlasov Says:

    Greg #490, which review are you talking about? I had supposed D-Wave hardware was state of the art.

  497. Henning Dekant Says:

    Greg #490, “taken the easy way out” in the context of a business start-up makes no sense. Clearly Microsoft took the easy way out when they bought DOS.

    Whatever gets you to market and allows you to stay competitive goes. It’s not a beauty contest.

  498. Henning Dekant Says:

    Scott #478, your earlier statement …

    “Note that I didn’t say there will never be any practical benefit from any future D-Wave device, nor that it’s impossible that some practical benefit even from the current device might be discovered”

    … illustrates some unfamiliarity with how these investment decisions are made. The investment decision rests on an estimated NPV (net present value).

    Clearly you put very bad odds on D-Wave succeeding (although you do have your stone-soup scenario). But I think it is quite apparent that it is not irrational to assign a non-vanishing probability to D-Wave outperforming classical Moore’s-law scaling. You plug this into the cash sums that you could gain in case this (conservatively low-estimated) scenario holds.

    I can guarantee you that in the case of giants like Google you will be able to put the chances quite low and still get a $10M NPV. It is a perfectly rational business decision.

  499. Douglas Knight Says:

    Peter Shor, you and many other people in this thread complain that Scott asks the wrong questions. But in his original post, he asked every question that anyone has suggested he didn’t ask.

    Is it “misguided” also to ask about the value of this particular machine? As he said in the comments, he was directly asked about a purchase decision. McGeoch and Wang’s paper was commissioned as part of a purchase decision. People are asking these questions. Is it wrong to answer them?

  500. Douglas Knight Says:

    Henning, you seem to be talking about buying equity in D-Wave. As far as I know, Google and Lockheed simply purchased machines, not equity. Do you have a source that says otherwise?

  501. Scott Says:

    Henning #497:

      “taken the easy way out” in the context of a business start-up makes no sense. Clearly Microsoft took the easy way out when they bought DOS.

      Whatever get’s you to the market and allows you to stay competitive goes. It’s not a beauty contest.

    That’s an ironic choice of analogy! I bet IBM wishes, in retrospect, that there were some academic CS bloggers around back in 1981 to warn it not to empower the Microsoft monster, by agreeing to buy an OS from it that Microsoft didn’t even yet have. 🙂

    But more broadly, I’ve heard some variant of your argument from lots of people. “Look Scott, all’s fair in love, war, and business! D-Wave is just doing whatever it needs to do to stay competitive, and telling its investors and customers whatever will benefit it the most as a company. Why can’t you understand that?”

    Yet, even as they praise D-Wave for its Nietzschean rule-breaking, its naked will to qubits, many of the very same people profess shock and outrage at me, for supposedly violating the rules of academic decorum. I’m expected to be unfailingly polite, cautious, and restrained in my criticisms, even as D-Wave is actually admired for waging a total PR war. It’s that hypocrisy that annoys me more than anything else in this entire debate.

    Now, it’s to your great credit that, unlike most D-Wave supporters, you’ve never once suggested that I should be doing anything other than what I’m doing now (i.e., asking questions, countering hype, trying to provide a clearinghouse for informed skepticism). So I guess I’ll just continue doing those things!

  502. Rahul Says:

    Cathy McGeoch #465:

      I am not aware of any convention about disclaimers [conflict of interest statements].

    Thanks for the clarifications. I’m quite surprised Comp. Sci. does not have a convention about declaring potential conflicts of interest.

    If there isn’t, I strongly think there ought to be one. What do others think? There’s a good reason to mentally adjust our priors about a paid consultant versus a disinterested third party.

    PS. I’m not at all doubting Cathy’s integrity, but in general, declaring potential conflicts is sound policy. It seems the norm in most academic disciplines.

  503. Henning Dekant Says:

    Bill Kaminsky #471, thank you for your very insightful and in-depth answer. Hope you won’t mind if I quote it at length next time I blog about D-Wave 🙂

  504. Douglas Knight Says:

    Rahul, I don’t think conflict of interest statements are common in academic papers, outside of medicine. I think universities require annual conflict of interest statements, but that’s secret. Many papers state funding sources, but that’s because the funders want it for advertising or record-keeping, not as a statement of interest.

  505. Scott Says:

    Henning #498:

      I can guarantee you that in the case of giants like Google you will be able to put the chances quite low and still get a $10M NPV. It is a perfectly rational business decision.

    I actually agree with you, as I said earlier in the thread, that D-Wave has a positive net present value. Unfortunately, as far as I know, almost all of its NPV derives from the possibility that D-Wave will do one or more of the following three things:

    (1) Continue selling “trophy quantum computers” that don’t actually outperform classical computers into the indefinite future—using paid consultants to convince its vanity customers that they’re getting something for their money, even if the consultants themselves protest that their experiments are being misinterpreted. Give QC as a whole a bad name.

    (2) Use its patent holdings to harass other groups who try to do superconducting Josephson-junction QC in a more rigorous way.

    (3) Appropriate the very ideas from academic QC that it’s been badmouthing for its entire history—and thereby get eternal glory for building the first useful QCs, while the academic QC community receives eternal blame for “trying to hold D-Wave back,” with only a few historians realizing that the truth was precisely the opposite. (I’m not an expert, but I’ve read that the Sabin/Salk controversy played out almost exactly like that.)

    In all three cases, I agree that D-Wave could very well become profitable as a company—but that doesn’t mean I have to like it! Indeed, in each scenario, the only practical implication for me seems to be that I should do even more than I currently am to warn my colleagues about the eventuality and try to prevent it from happening.

  506. Henning Dekant Says:

    Rahul #465, I only see a need for disclosure if the party has a vested interest in getting a certain result.

    As the potential customers’ interest was to get objective answers to the questions that they posed, I don’t see a requirement for this under the given circumstances.

  507. Henning Dekant Says:

    Scott #505, you misunderstood what I was referring to.

    Maybe I wasn’t clear enough on this: I wasn’t talking about the NPV of equity in D-Wave’s business, but about how a company like Google approaches the question of whether they should invest in something like a D-Wave machine. Big difference.

    What they certainly did was calculate an NPV, or internal rate of return (essentially the same thing), on the $10M investment that they made (of course the real number will be larger once staffing is taken into account).

    To simplify: you consider the scenario in which D-Wave can deliver faster performance growth than classical machines; in this scenario there will be significant contributions to Google’s bottom line. Now you discount these future amounts back to a present value, weighted by the probability that your D-Wave machine can actually deliver the goods.
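
    In code, that back-of-envelope rule looks like this (all numbers below are made-up placeholders, certainly not Google’s actual figures):

      def expected_npv(payoff_per_year, years, prob_success, discount_rate, price):
          """Probability-weighted net present value of the purchase: discount the
          success-scenario cash flow to today, weight it by the (low) probability
          that the scenario holds, and subtract the purchase price."""
          pv = sum(payoff_per_year / (1 + discount_rate) ** t
                   for t in range(1, years + 1))
          return prob_success * pv - price

      # E.g. a 2% chance of a $200M/year edge over 5 years, at a 10% discount
      # rate, already clears a $10M price tag.
      print(expected_npv(200e6, 5, 0.02, 0.10, 10e6))  # positive (~$5M)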

    Since you stated that there is a non-vanishing probability that D-Wave may deliver some better results, taking this risk is almost a foregone conclusion once you take into account the size of Google and their data-processing needs.

    My point is, Google’s business decision is rational, based on a non-vanishing probability that D-Wave may be able to eventually deliver a significant speed-up. The questions they were asking of Cathy McGeoch to this end were perfectly adequate.

    If your “scam” scenario came to pass, Google would eventually write off the investment and D-Wave would be toast. D-Wave doesn’t want this, so they will try everything to improve their chips to prevent this from happening. You’d call it stone soup; I’d call it market forces.

  508. Scott Says:

    Henning #506:

      As the potential customers’ interest was to get objective answers to the questions that they posed…

    But I think that’s a completely absurd assumption! Everything we know suggests that Google and NASA—or at least the relevant parts of them—are as heavily invested in hype and misrepresentation at this point as D-Wave itself is. They win by being seen as buying a cool product that puts them at the forefront of technology. D-Wave wins by selling them that product. Journalists win by breathlessly covering the story. D-Wave’s consultants win by confining themselves to the narrow, not-really-relevant questions they were asked, and thereby enabling this whole circus. D-Wave’s investors win. Academics who play along win. In such a win-win-win-win-win-win environment, you might ask, why would anyone choose to be an insufferable, killjoy party-crasher ranting about what’s actually true, and whether D-Wave’s claims of a genuine quantum speedup from its current devices are justified or not?

  509. Michael Bacon Says:

    Henning Dekant@507,

    You’re right that even a non-vanishing chance of a huge hit is probably well worth the investment for a company of Google’s size and persuasion 😉 .

    My only quibble is that I would say that it’s well worth the investment “regardless” of the questions asked, not that the questions were “perfectly adequate”.

    Even if you totally discount the money, I think that Google and the folks involved are at least to some degree actually interested in the science and, dare I say, the “truth” of the various claims. I give them more credit than I would most companies in this regard. They seem to make decisions based on a number of factors, and IRR is only one, albeit an important one. So, in that sense, to the extent that they’ve reviewed the back and forth following the announcements, I doubt whether even they would argue, as you apparently do, that the questions were “perfectly adequate.” 😉

  510. John Preskill Says:

    I sometimes scroll past posts by John Sidles which are too long or too far off the point to interest me.

    I prefer that option to a ban.

  511. Silas Barta Says:

      I left without asking questions, not wanting to be the one to instigate an unpleasant confrontation, and—I’ll admit—questioning my own sanity as a result of no one else asking about the gigantic elephant in the room.

    But … but … that’s why no one says anything! Because they’re *all* thinking that!

  512. Bram Cohen Says:

    A technical question for everybody familiar with state of the art optimizers: Isn’t there an interchange format which will let you mix and match optimization problems against arbitrary solvers? Shouldn’t that make running such comparisons very straightforward?

    In that vein, doesn’t it seem likely, given the specificity with which the benchmarks for McGeoch’s paper were given, that the people funding it knew *exactly* what the results would be beforehand, and carefully selected them to make the results look good? This situation of a CS person getting paid to run benchmarks as part of due diligence for a hardware sale is something I’ve never heard of before, and would seem to imply that the people doing the buying simply didn’t have the expertise to figure out what was going on, and got duped.

    And yes, I do mean ‘duped’. Running an exhaustive solver against a stochastic solver is a demonstration that stochastic beats exhaustive, not that there’s anything notable about the specific solvers involved.
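
    (Back to my interchange question: the native problem here is just a QUBO, which serializes trivially; a hypothetical minimal format and evaluator could be as simple as the sketch below, so the obstacle would seem to be convention, not technology.)

      def parse_qubo(text):
          """Parse lines of the form 'i j value' into {(i, j): value}; diagonal
          entries (i == j) act as the linear terms. A made-up minimal format,
          purely to illustrate how easy mix-and-match testing could be."""
          Q = {}
          for line in text.strip().splitlines():
              i, j, v = line.split()
              Q[(int(i), int(j))] = float(v)
          return Q

      def qubo_energy(Q, x):
          """Objective value of a 0/1 assignment x under QUBO matrix Q."""
          return sum(v * x[i] * x[j] for (i, j), v in Q.items())

      Q = parse_qubo("0 0 -1\n1 1 -1\n0 1 2")
      print(min((qubo_energy(Q, x), x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]))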

  513. Reader Says:

    I think it was established here that Google & NASA did not invest in D-Wave but bought a specific version of a D-Wave product. Either the D-Wave machine showed private results that impressed the buyers, or it is as Scott #508 says.

  514. Scott Says:

    John Preskill #510: OK. As a result of your intervention, and the esteem in which I hold you, I hereby commute John Sidles’s blog ban from 3 months to a mere 2-week “cooling-off period.”

  515. Henning Dekant Says:

    Michael B. #509, actually I completely agree; “perfectly adequate” was simply meant to indicate that it was sufficient due diligence to decide whether to invest. (Time’s money, so I really can’t stand customers who turn evaluations into multi-year dissertation research.) It was enough to base a rational decision on. As such, good enough. More would have been wasting all parties’ time.

    I certainly assume that Google has some genuine interest in the science, since IP drives their business.

  516. Henning Dekant Says:

    Reader #513, if you buy something like a D-Wave you buy into the future of the platform. So yes, while they just bought one machine for now, they certainly bought into the upgrade path as well. That is, of course, unless Scott’s cynical view in #508 is true. While there is of course a marketing benefit to be had for the $10M (and that certainly factors in), there’d also be a future backlash if the machines turn out to be a dead end.

  517. Henning Dekant Says:

    Scott, I think the main problem I have with the view you express in #508 is that you assume that the individuals who make these decisions live in an accountability-free world.

    Admittedly, I get where one may acquire this stance. It seems to be the way the world works when observing American politics, and how some CEOs pump and dump their stocks (while bleeding an enterprise dry, only to be rewarded with the next CEO gig).

    But fortunately, this is not the reality for most companies, and never for those people not at the very top of the heap. Stuck in the middle, results matter.

    Hartmut Neven’s career wouldn’t be helped if he bought a dud, and trying to hide this would just compound the damage.

    His name is now very much associated with D-Wave, and there is no doubt in my mind that he will work hard to deliver results.

  518. Henning Dekant Says:

    Scott #501, my apologies for dominating the airwaves right now, but you kinda caught onto my epic “MS buying DOS” example, and that prompts some elaboration.

    For the longest time I hated MS because they forced DOS and Windows 3.1 on me (geez, I feel old). But what I didn’t quite realize at the time is that this was the catalyst to open up the market and break up IBM’s monopoly. Ironically in the end it was MS that delivered on the promise that Apple made in their famous 1984 ad.

    If an intrepid academic back then had succeeded in preventing IBM from selling DOS, there wouldn’t have been dirt-cheap PC-compatible machines that eventually enabled Linus to do his thing 🙂

    The IBMs of the world finance interesting QC research, but they aren’t in a hurry. Why cannibalize their supercomputing cash cows?

    You need a hungry start-up like D-Wave to force their hands. If D-Wave starts out with something not quite awesome it doesn’t really matter to me, as long as they get the ball moving, and it eventually moves in the right direction.

  519. James Gallagher Says:

    Henning Dekant #518

    The comparison between Microsoft in the early 1980s and D-Wave is very weak. In the early 1980s hardly anybody had any ideas/predictions about the potential of operating-system software on mass-produced desktop PCs, whereas today many people have such ideas and predictions about the potential of quantum computing.

    D-Wave is just attempting to get in quick with a piece of rubbish that hardly meets any of the technological requirements that need to be attained before attempting basic QC demonstrations, and in the end this will dismay the watching public to such an extent that QC research may well be affected worldwide.

  520. Greg Kuperberg Says:

    Scott – Of course I agree with your outcomes (1), (2), and (3); they are exactly my thinking too. Optimistically, (1) could be the most likely outcome: D-Wave will succeed by selling what I would call trophy models, until that is no longer fashionable.

    A couple of points about (2) and (3). Remarkably, from 1909 onward the Wright Brothers regressed to a position a bit similar to where D-Wave seems to me to be now: they were riding on a wave of popularity, but their product was not competitive. The Wrights responded with (2), patent lawsuits against the competition. These patent lawsuits, ironically filed by the people who created the first convincing airplane, significantly retarded the development of actually useful airplanes in the US.

    Now, I have no tangible feeling or evidence that D-Wave will behave this way. I can only say that there have been a lot of patent disputes in the computer industry.

    Regarding (3), the Salk/Sabin controversy indeed followed a trajectory much like what you suggest, but not exactly. Salk rushed forward with a polio vaccine that was not ready for prime time. Except for that shortcoming, other people could have “invented” the same type of vaccine. But he never appropriated the ideas of people like Sabin. Instead, he took credit for a cyclical downturn in polio cases. And his national vaccination program interfered with other people’s funding and ability to run clinical trials. (As a result, Sabin ran his clinical trials in the Soviet Union, of all places.)

    I think that it would be much harder for any one company to lobby against other people’s quantum devices as Salk did with Sabin’s vaccine. For various reasons, the United States as a whole is not going to put all QC eggs in one basket as it basically had to with polio.

  521. Greg Kuperberg Says:

    By the way, although some sources connected Google’s purchase of a D-Wave device to the McGeoch-Wang paper, Hartmut Neven actually started co-authoring D-Wave papers in 2008. So the Google sale was five years in the making.

  522. Peter w. Shor Says:

    Scott:

    You ask

    “What’s a single example of a situation where—if you actually cared about the answer, as opposed to being able to say you used a quantum “trophy computer”—you would want to issue your function call to the D-Wave machine, rather than to Troyer’s code?”

    Troyer’s simulated annealing code is highly optimized for a single probability distribution of instances. When I was at AT&T Labs working with David Johnson, I saw the following happen much too often: you take optimization code that is highly optimized for one probability distribution, and run it on a different probability distribution, and it fails miserably.

    In fact, there was indeed an example of this happening in Cathy’s talk. The algorithm akmaxsat was designed to run on “randomly generated instances with a high clauses-to-variables ratio.” When Cathy ran it on random weighted MAX-2SAT instances, it worked extremely well. When she ran it on the Chimera-graph structured QUBO instances, it performed very poorly.

    It is too bad that Troyer didn’t try out his simulated annealing package on different probability distributions of the native problem for D-Wave’s chip. Until he does, I don’t believe there is any evidence that his code will beat D-Wave on any problems except those it was specifically tailored to solve. You may object that I am saying that all he has proved is that “the sheep are black on one side”, but in the experimental analysis of algorithms, you come across sheep that are black on one side quite often.

  523. Greg Kuperberg Says:

    Peter – Is Troyer’s code any more optimized for a specific probability distribution of instances than the D-Wave device? The histograms in arXiv:1304.4595 suggest that the quantum annealing (both simulated and physical) has a sharp segregation into “hard” and “easy” cases. The classical simulated annealing code has a more uniform spread.

    I agree that different distributions should be checked, although the thing is, changing the input distribution is a perpetual question: You could always try to reshuffle the cards again in D-Wave’s favor. For that matter, in one major, major respect, the main benchmark is clearly tailored to favor D-Wave, since it is on the same graph as the connectivity of D-Wave’s “qubits”. In another paper, a D-Wave device was only really good at (non-rigorously) computing the trivial Ramsey numbers R(2,n), and already had trouble computing R(3,3).

    Besides, according to figure 21(c) in the paper, it looks like Troyer’s quantum annealing code wasn’t all *that* much slower than the D-Wave device. And that was just with an 8-core CPU.

  524. Peter w. Shor Says:

    Bram Cohen: you say “Running an exhaustive solver against a stochastic solver is a demonstration that stochastic beats exhaustive, not that there’s anything notable about the specific solvers involved.”

    This demonstrates that you have not read McGeoch’s paper. She compared D-Wave’s chip to one stochastic solver and two exhaustive solvers. For the class of problems where the “3600x speedup” was observed, CPLEX, one of the exhaustive solvers, beat the stochastic solver hands down.

    Of course, you can say that she picked the wrong stochastic solver (Matthias Troyer, in fact, showed that a highly tuned stochastic solver works very well). However, that is a case of 20-20 hindsight on your part.

  525. Henning Dekant Says:

    James Gallagher #519, you are missing the point: in the early 1980s there were certainly predictions about the impact of personal computing (I have some old such “futuristic” books in my attic). In the end it was MS that brought on the change, in an initially decidedly sub-par manner.

  526. Noon Says:

    Bram #512: There is a Python library, PuLP, which lets you configure different solver backends (say CPLEX or Gurobi) – https://pypi.python.org/pypi/PuLP/.
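
    A minimal sketch of the mix-and-match idea, with an invented toy model (caveat: PuLP handles linear programs, so a QUBO’s quadratic objective would need linearizing before PuLP could express it):

      from pulp import LpMinimize, LpProblem, LpVariable, lpSum, PULP_CBC_CMD

      # One model, interchangeable solver backends.
      prob = LpProblem("toy_model", LpMinimize)
      x = [LpVariable(f"x{i}", lowBound=0) for i in range(3)]
      prob += lpSum(x)              # objective: minimize x0 + x1 + x2
      prob += x[0] + 2 * x[1] >= 4  # two toy constraints
      prob += x[1] + x[2] >= 1

      # Swap in CPLEX_CMD() or GUROBI_CMD() here if those backends are installed.
      prob.solve(PULP_CBC_CMD(msg=False))
      print([v.value() for v in x])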

  527. Scott Says:

    OK, just arrived at my hotel at Purdue University (where I’m speaking tomorrow), and can’t wait to resume talking about D-Wave! (While riding the shuttle bus from Indianapolis, through endless farms and prairies, I could read comments as they arrived, but it was too painful to type replies.)

    But … I’m exhausted. So let me just respond to one or two comments, then deal with the rest in the morning.

  528. Scott Says:

    Silas Barta #511:

      But … but … that’s why no one says anything! Because they’re *all* thinking that!

    Yes, of course! I’ve seen that dynamic play out over and over again in seminars. In this particular case, however, I had the excuse that I’d already said my piece on my blog, and figured I should let others speak so as not to sound like a broken record (but then they didn’t).

  529. Peter w. Shor Says:

    Greg:

    The only thing I’m truly convinced of by the Boixo, Troyer, et al. paper is that Troyer is an amazing programmer.

    After having seen enough heuristic code that’s tuned to one probability distribution fail to work well on others, I would be surprised if Troyer’s tuned simulated annealing code worked as well as D-Wave’s chip for all probability distributions on instances, but I’ll admit it’s possible. I wouldn’t be surprised if the simulated quantum annealing code did, since it’s not just heuristic, but has good theoretical reasons to reflect the chip’s performance.

    If D-Wave wants a better chance of getting a convincing speed-up, they might want to try building their chip so it can also handle non-stoquastic evolutions.

  530. Greg Jones Says:

    Scott,

    I read this blog on occasion, in particular the D-Wave discussions, because I live in Vancouver and want to see this local company succeed. Something about your activity as “Chief Skeptic” has always seemed off to me, and this afternoon it finally occurred to me what it is! You (and to an even greater extreme, Greg Kuperberg) devote a lot of time to criticizing D-Wave, apparently “in the name of science” and “in the quest for truth.” But the thing is, you are always battling the media and PR, and this is not the realm of science or truth! Why don’t you ever read one of D-Wave’s ~60 peer-reviewed papers (available here: http://www.dwavesys.com/en/publications.html) and offer some real criticisms of the real science? (And here I’ll point out that the McGeoch paper and the USC papers are not D-Wave papers.) That would be far more interesting.

    Now to quote Greg # 490: “Don’t you understand that D-Wave has taken the easy way out, while academic labs push the technology forward? It is explicitly stated in a recent review article that however you learn how to make a superconducting qubit, you can then easily make hundreds of them. The hard part is not making many qubits, it’s making good qubits.”

    Sometimes, Greg, you make good points and sometimes you say things that are so off-base I wonder if you have any idea what a real physical computer system (quantum or otherwise) actually is. I don’t know what “recent review” you are referring to, but that is a fundamentally bogus statement! If you build one fully optimized qubit, you can’t just multiply that qubit with no consequences. Add a second qubit and a way to couple them together, and your first qubit will change (in particular, it will get noisier). Add hundreds of qubits, and it’ll get way worse. Plus, all of the qubits will not be identical to one another in a real fab, so now you need mechanisms for correcting discrepancies between qubits. These mechanisms require control signals, which require control signal lines. Now you have to supply multiple control lines per qubit within a cryogenic refrigeration system, so you end up with on-chip devices for programming purposes (because there are only so many lines you can fit into a fridge) and these change the qubit environment as well. All in all, a handful of optimized qubits are absolutely not representative of what they’ll be like when you scale them to hundreds and beyond.

    BTW – all of these details are easily available if you actually read D-Wave’s state-of-the-art papers, rather than just criticizing the media and PR.

  531. Scott Says:

    OK, one more response before I crash.

    Peter #522:

      Troyer’s simulated annealing code is highly optimized for a single probability distribution of instances. When I was at AT&T Labs working with David Johnson, I saw the following happen much too often: you take optimization code that is highly optimized for one probability distribution, and run it on a different probability distribution, and it fails miserably.

    Thank you for offering a clear conjecture, of the sort that would have to be true for the current device to be useful! I was waiting for that, while all I got from many commenters here was yet more variations on the “Argument from Google and NASA Can’t Possibly Have Not Done Their Homework.”

    I’d be thrilled for the debate to be conducted on the empirical ground you suggest. That is, let the D-Wave folks search for a distribution over instances where Vesuvius clearly outperforms Troyer’s code. If they find one, then give Troyer the chance to re-optimize his code for the new distribution. (Why is that legitimate? Because “distribution over instances” roughly corresponds to “the needs of a particular customer.” And whatever a given customer’s needs were, it would surely be cheaper to hire Troyer for a month than to buy a D-Wave Two!) If Troyer (or someone else) succeeds, then let D-Wave go search for a new distribution over instances, and so on.

    If this game ever terminates, with D-Wave making the last move—at that point I’ll admit that D-Wave’s current device has indeed achieved something interestingly indicative of a quantum speedup. What I previously acknowledged only as a not-yet-ruled-out logical possibility, will have ended up being the case. D-Wave’s success still won’t “apply retroactively”—that is, it will remain true that the company and its hypesters egregiously misled people about what had been demonstrated back in May of 2013—but at least D-Wave will have finally cashed in the check.

  532. Henning Dekant Says:

    Scott, #531 well duh 😉

    Of course that’s the dance that now has to go forward, how else can I go on betting that D-Wave will come through …

  533. Greg Kuperberg Says:

    Boy, Matthias Troyer has become the great arbiter of truth in this discussion, and he hasn’t posted even one comment!

    I think that a more basic point needs to be made here, which is whether — even absent any pre-existing code by anyone like Troyer — it makes much sense to compare a D-Wave device to off-the-shelf software, in a contrived benchmark that is basically a clone of D-Wave’s SPD connectivity. Comparing that to CPLEX is like a race between a fixed-target unguided test missile and a door-to-door express limo service. You’d certainly hope that the missile would win!

    In fact, Cathy McGeoch said repeatedly in the press that her test only makes limited sense. The problem is that journalists and D-Wave both carried on as if it were a full victory.

    For that matter, the context should tip off any expert that simulated annealing would probably be thousands of times faster than CPLEX. I’m sure that Troyer is a fabulous coder, but that’s hardly the point. Maybe someone should post it as a Project Euler problem and see what happens.

  534. Bram Cohen Says:

    Shor #524: WHAT????

    Sorry for assuming that the stochastic solver did the best, but this is really, really basic stuff here, and for it to underperform exhaustive search is a massive red flag which should have made the paper not pass peer review.

    Stochastic solvers have an apples-to-apples advantage over exhaustive search algorithms: both work by trying out solutions until they find the best one, but stochastic solvers are optimized to find a solution quickly, whereas exhaustive search solvers are forced to make the vast majority of their choices based on the requirement that they eventually complete. For exhaustive search to do better than stochastic search, the ‘heuristic’ of the stochastic search must somehow be doing worse than the exhaustive one, which closely resembles the stochastic search’s heuristic when it can, and slavishly follows the needs of being exhaustive the rest of the time. In other words, for this to happen the heuristic must be downright harmful.

    With the stochastic algorithm tossed aside as being obviously broken, that leaves two exhaustive algorithms competing against the dedicated hardware’s stochastic one, which as I said previously is not even vaguely an apples-to-apples comparison.

    So now I’ve looked at the actual paper, to see if I could get a sense of what’s going wrong.

    First off, aside from the stochastic solver underperforming the exhaustive solver (at one size severely!) there’s this odd data point, where problems of what appears to be around size 20 aren’t solved after a full 491 milliseconds by the stochastic solver around 5% of the time. This is an extremely long time on an extremely small problem, one where simply trying random assignments without any heuristic at all should have found a solution.

    One possibly telling hint of what’s going on is that the TABU length is set to min(20, n/4). This is a fairly large TABU list. It isn’t clear how good the best solutions found are, but if they have fewer than 10 bits set wrong, then you’d expect the solver to set all but the 20 hardest bits right, then flip the hardest bits to some set of values, then be forced to start flipping the easier bits back, just because the TABU list requires that be done, even though the heuristic correctly indicates that those are massive steps backwards.
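
    To make that dynamic concrete, here is a toy single-bit-flip tabu search for a QUBO (an illustrative sketch only, not the solver from the paper; Q is assumed upper-triangular, with E = sum over i<=j of Q[i][j]*s_i*s_j on 0/1 variables):

      import random

      def flip_delta(Q, s, i):
          # Energy change from flipping bit i (uses s_i^2 = s_i for 0/1 variables).
          n = len(s)
          change = (1 - s[i]) - s[i]  # +1 if flipping 0->1, -1 if 1->0
          coupling = sum(Q[min(i, j)][max(i, j)] * s[j] for j in range(n) if j != i)
          return change * (Q[i][i] + coupling)

      def tabu_search(Q, iters=5000):
          n = len(Q)
          tabu_len = min(20, n // 4)  # the setting Bram is questioning
          s = [random.randint(0, 1) for _ in range(n)]
          cur_e = sum(Q[i][j] * s[i] * s[j] for i in range(n) for j in range(i, n))
          best_s, best_e = s[:], cur_e
          tabu = []  # recently flipped bits, temporarily frozen
          for _ in range(iters):
              # Take the best non-tabu single-bit flip, even if it's uphill --
              # with a long tabu list, good bits can be forced to flip back.
              moves = [(flip_delta(Q, s, i), i) for i in range(n) if i not in tabu]
              if not moves:
                  break
              delta, i = min(moves)
              s[i] = 1 - s[i]
              cur_e += delta
              tabu.append(i)
              if len(tabu) > tabu_len:
                  tabu.pop(0)  # the oldest move becomes legal again
              if cur_e < best_e:
                  best_s, best_e = s[:], cur_e
          return best_s, best_e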

    Given how specific the instructions from the grant-givers for this paper were, it’s entirely possible that this is not merely an ‘unoptimized’ solver, but a ‘deoptimized’ solver, whose parameters were set to something just plain bad.

    And that’s even ignoring the fact that only trying a single stochastic heuristic is fundamentally unsound. There’s a reasonably long menu of discrete optimization heuristics to choose from, each of which has problems it vastly over- and under-performs on, and each has parameter tunings with the same property. To simply throw a dart at a board and pick one isn’t ‘fair’, it’s random and statistically insignificant. This is especially true of problem instances where runs take 500 milliseconds. You should at least spend an hour (literally an hour!) messing around with the knobs built into any decent stochastic search package to see if you can do better.

    In case anyone’s wondering where I come from being such a know-it-all about stochastic search, I was the one who came up with Walksat, which is the basis for TABU.

  535. Liveblogging WWII and Quantum Discussion | Pink Iguana Says:

    […] Shtetl-Optimized, D-Wave: Truth finally starts to emerge, here. Comments sort of read like Liveblogging WW2 above. Don’t miss when Cathy McGeoch shows up in […]

  536. Cathy McGeoch Says:

    #531 It won’t terminate — that game is exactly how experimental work moves forward in the study of heuristics for NP-Hard problems (filling in the holes where theory is missing). Maybe you missed this from the first page of our paper:

    These results can be regarded as “snapshots” of performance for specific implementations on specific instance sets, nothing more. Experimental studies of heuristics are notoriously difficult to generalize. Furthermore — unlike experiments in the natural sciences — no experimental assessment of heuristic performance can be considered final, since performance is a moving target that improves over time as new ideas are incorporated into algorithms and code. Future research will likely turn up better solvers and strategies.

  537. Nobody Special Says:

    Henning Dekant #507. You seem to be arguing that any non-zero probability of any performance gain above their current platform is worth $10M to a company the size of Google. This seems rather clearly erroneous.

  538. Cathy McGeoch Says:

    Bram #534: Cong and I spent 2 weeks and hundreds of CPU-hours before the main test runs, looking for the best combination of runtime parameters for the three software solvers to run on the QUBO problems. This pilot study is described in Section 3 of the paper.

    The hardware and Blackbox were not similarly tuned beforehand — they were run with their usual default settings.

    It is just not true that stochastic solvers always outperform exhaustive solvers (including CPLEX). It is common for one approach to do very well on instance class A and another to do very well on instance class B.

    It is possible that we missed the golden combination of parameter settings for these problems. But we had no dog in this fight and were not rooting for D-Wave to win.

    The paper was read by three reviewers before acceptance, and it won Best Paper award. I stand by the integrity of the work.

  539. Scott Says:

    Cathy McGeoch #536: In flatly asserting that the game won’t terminate, I think you’re selling D-Wave too short! 🙂

    Logically, it’s entirely possible for the game to end—with D-Wave finding a distribution over instances for which its machine outperforms any classical algorithm that anyone can come up with, even if (to make things fair) the classical algorithm designers know the distribution in advance. For example, we think that would be exactly the situation if you used Shor’s algorithm to factor products of random n-digit primes—as far as anyone knows, factoring is still classically hard even if you know the numbers were drawn from that distribution. With quantum adiabatic optimization, it remains a huge open research question whether any such distribution over instances exists—but that’s my point! As far as I’m concerned, this is what a genuine D-Wave victory would look like.

  540. Cathy McGeoch Says:

    Scott #539 Ok I revise my assertion to “don’t hold your breath.”

    I remain skeptical that an experimental result (of the form: D-Wave outperforms all known classical algorithms on this set of inputs) could be reliably turned into a proof of a polynomial-time asymptotic bound. It’s harder than it looks.

  541. Cathy McGeoch Says:

    Bram #534: for example, leave out the D-Wave part and look at the three solvers in our paper: akmax (branch & bound, exact), tabu (heuristic search), and CPLEX (LP/quadratic programming, exact).

    In the first test, CPLEX clearly outperforms the other two. In the second test, akmax and tabu clearly outperform CPLEX and tabu is slightly faster than akmax. In the third test (very roughly summarizing) tabu is generally best, CPLEX and akmax are not bad on about half the instances, and akmax is terrible on about 1/3 of them.

    That kind of mixed outcome is typical in this research area. I would consider it a red flag if something like that didn’t happen, because then the question arises whether the instances are broad enough to be interesting.

    I agree that it would have been nice to get a bigger collection of heuristic strategies to test. We looked around and couldn’t find anything suitable in the public domain. It is also an obvious next step to collect more instance classes for testing (both Chimera-structured and not).

    We tried as many combinations as we could find, and ran as many (well-considered) tests as we could within the given time frame. There is lots more that couldn’t be fit into the conference paper, but will appear in the full paper.

  542. Scott Says:

    Cathy #540:

      I remain skeptical that an experimental result (of the form: D-Wave outperforms all known classical algorithms on this set of inputs) could be reliably turned into a proof of a polynomial-time asymptotic bound.

    To clarify, I’m not asking for a mathematical proof that no efficient classical algorithm exists for the distribution (which would require proving P≠NP, as a first step)! All I’m asking for is the experimental result itself (of the form, “D-Wave outperforms all known classical algorithms on this set of inputs”). Except that, before talking about “all known classical algorithms,” one should at least invest a Troyer/Isakov level of effort in algorithm design (i.e., ~0.1% of the effort that went into the D-Wave machine itself).

  543. Cathy McGeoch Says:

    Scott #542

    Suppose (1) someone announces tomorrow that algorithm DW beats all known classical algorithms on test set A. Then (2) next week someone will announce a new classical algorithm X that does well on test set A. Then (3) someone discovers a test set on which X does poorly in comparison to others. And repeat. This is the normal course of things, as mentioned in our paper.

    We didn’t claim to have looked at all known classical algorithms, by any means. But we did look at three strong candidates known to us at the time. A lot of effort has been invested in making those solvers fast (and in two cases, fast + general).

    And look what happened: step 2 has sort of come to pass (although there is still the question of mapping their results to ours, and it’s just one of the three we looked at, and a slightly different input class). Step 3 awaits.

    I think it will have to wait until Troyer’s moratorium lifts. Given the simplicity of their solver (which accounts for its speed), it is not hard, as a thought experiment, to come up with input subclasses that might make it look bad.

    If no such class exists, we have empirical evidence that the simple approach taken by SA can efficiently solve everything in this NP-hard problem class. Which is why I’m not holding my breath.

  544. Cathy McGeoch Says:

    (Sorry, posted before finished)

    Or it could turn out that SA is exponential time and so is quantum annealing, for the problem. They could each be polynomial-time on different subclasses of problems.

    The next question for the practitioner/client is: which subclasses match my application? Which approach is better for the instances I need to solve? That is what motivates our work in the first place.

  545. Scott Says:

    Cathy #543: No, I absolutely wouldn’t conjecture that simulated annealing can solve “everything in the NP-hard problem class.” But the relevant question is just whether it (or related approaches) can do everything that the D-Wave machine can do. If you think the D-Wave machine is doing something fundamentally “classical” (or “classical enough,” e.g. simulable by QMC with no sign problem), then you’d presumably conjecture that the answer is yes.

  546. Greg Kuperberg Says:

    Cathy #543 – You’re admitting here that the entire research plan of your paper was to compare an exotic $10 million device to off-the-shelf software on off-the-shelf personal computers. Now, I don’t have any accusation against your paper as a paper; if the referees liked it and it won a conference award, then more power to you. But in the real context of the question, this is pretty weak tea. The fact that reporters came running to you should have set off alarm bells.

  547. Michael Bacon Says:

    Excuse an off-topic comment, but I just wanted to tell Scott that his new paper:

    http://arxiv.org/pdf/1306.0159.pdf

    looks absolutely fascinating. When he had time to write it, I don’t know, but I do know he must have finished it before getting embroiled in this D-Wave brouhaha 😉

  548. John Smolin Says:

    Forget the “Troyer moratorium.” Send me the cases that need to be run on classical simulated annealing and I’ll do it. The code for classical SA is trivial, and I got it running competitively on the 108-qubit D-Wave cases without even trying to find a good annealing schedule.
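
    For anyone who wants to check that claim, a bare-bones sketch of classical SA for an Ising instance (minimal and untuned; a linear inverse-temperature ramp is assumed, not Smolin’s or anyone else’s schedule):

      import math, random

      def simulated_annealing(h, J, sweeps=1000, beta0=0.1, beta1=3.0):
          # h: list of local fields; J: dict {(i, j): J_ij} with i < j; spins are +/-1.
          n = len(h)
          nbrs = [[] for _ in range(n)]
          for (i, j), Jij in J.items():
              nbrs[i].append((j, Jij))
              nbrs[j].append((i, Jij))
          s = [random.choice([-1, 1]) for _ in range(n)]
          for t in range(sweeps):
              beta = beta0 + (beta1 - beta0) * t / max(sweeps - 1, 1)  # linear ramp
              for i in range(n):
                  field = h[i] + sum(Jij * s[j] for j, Jij in nbrs[i])
                  dE = -2 * s[i] * field  # energy change if spin i flips
                  if dE <= 0 or random.random() < math.exp(-beta * dE):
                      s[i] = -s[i]
          return s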

  549. Peter W. Shor Says:

    Bram Cohen says: “for [a stochastic solver] to underperform exhaustive search is a massive red flag which should have made the paper not pass peer review.”

    This may be true of SAT, but it isn’t true for the travelling salesman problem. If you want to come within 1% of the optimum or so, there are some simple heuristics that usually work well. If you want to come within 0.1% of optimal, there are a number of more complicated heuristics that take longer but will work. But if you want to come within 0.01% of optimal, you need to use highly sophisticated techniques using cutting planes and branch and bound. This is faster than heuristic search because it develops lower bounds that it uses to avoid searching large portions of the space. If you just want to come within 0.01% of optimal, you stop the program before it finds the exact solution, and I believe it will find a good tour much faster than any known stochastic solvers.

    CPLEX does something like this. Whether or not it is quicker than a stochastic solver depends on the structure of the problem (it was quicker in one of Cathy’s test cases, and much slower in the other).

    The benchmark for the D-Wave testing was how often an algorithm could find the optimum (or rather, how often it was the best of the tested algorithms), a requirement which is more favorable to exact solvers than some other benchmarks.

  550. Peter W. Shor Says:

    Greg Kuperberg Says:

    Cathy #543 – You’re admitting here that the entire research plan of your paper was to compare an exotic $10 million device to off-the-shelf software on off-the-shelf personal computers. Now, I don’t have any accusation against your paper as a paper; if the referees liked it and it won a conference award, then more power to you. But in the real context of the question, this is pretty weak tea. The fact that reporters came running to you should have set off alarm bells.

    Greg: This is exactly the kind of reasoning that I was comparing to denigrating the Wright brothers by saying that an automobile was faster, cheaper, and more comfortable than their first test flights. If the D-Wave machine is the first quantum annealer, it makes no sense to compare the development cost of the machine to the marginal cost of a laptop (which, if you consider the whole history of computing, had a development cost at least 10,000 times that of the D-Wave machine; that factor may go down to 10 if you just include the cost of the factories the laptops were made in and the cost of designing the two chips).

    And if you think that somebody should stay away from an area of research just because reporters are interested in it, you are totally out of your mind. When reporters come to you, the best you can do is try to explain your viewpoint as clearly as you are able to, and hope for the best. I am sure that this is what Scott and Cathy have both done.

  551. Random Guest Says:

    TL;DR for the saga from the perspective of an outsider:
    Step 1: Post an outlandish claim to provoke a response.
    Step 2: Get people from prestigious universities to do the R&D for you for free.
    Step 3: Profit 🙂

  552. Outraged Says:

    Greg Kuperberg #546,

    All due respect for your academic credentials but honestly you have no business commenting on cost comparisons and PR issues related to quantum vs classical optimization. Several people have already attempted to point out and explain to you your blatant fallacies and yet you seem to either be incapable of understanding or are simply pretending to not get it for the sake of maintaining your outlandish statements. You sound like you have absolutely no understanding of what kind of enterprise could ever possibly make quantum computing practical and what it would take to do so in realistic economic/societal conditions. So please stop.

  553. Greg Kuperberg Says:

    Peter – You say, “This is exactly the kind of reasoning that I was comparing to denigrating the Wright brothers by saying that an automobile was faster, cheaper, and more comfortable than their first test flights.”

    We’ve been through this several times. As I said, if you want to call it a straw man, you have a point. But it’s not my straw man; it comes from D-Wave itself. I quoted Geordie Rose right here in this thread. If the Wright brothers had bragged that their flyer was faster, cheaper, and more comfortable than an automobile, then pointing out that the claim was wrong would have been fair denigration.

    And I did not mean to imply that Cathy should have stayed away from her research just because reporters would be interested. I said that her research in and of itself was fine. My statement was that she should have been more careful about what was announced.

  554. Peter W. Shor Says:

    What is the quote from Geordie? He did say that he believed he had shown that a D-Wave machine “was better at something than any other option available.”

    How you go from there to comparing the marginal cost of a laptop to the capital cost of a D-Wave chip and believing that this is a sensible comparison is beyond me.

    For D-Wave to be a success, they will have to expand their customer base, and I suspect that to do that, they need to reduce these costs by around two orders of magnitude. That’s only going to happen when their chips have been convincingly demonstrated to be useful.

    I do agree that D-Wave has not yet convincingly demonstrated that their chips are better at anything.

  555. Greg Kuperberg Says:

    Peter – “I do agree that D-Wave has not yet convincingly demonstrated that their chips are better at anything.”

    Then you agree on the real point. Rose has said the opposite, and he has in mind the McGeoch-Wang announcement.

    Everything else is coloratura.

  556. Henning Dekant Says:

    Nobody Special #537, don’t you know that Google has googol profit? So for all practical purposes the probability can get pretty low.

    Anyhow, I always do my probabilities in integers, good enough for an ROI estimate 😉

  557. Henning Dekant Says:

    Greg #555, you are still missing the point: a D-Wave machine can at this time compete with classical hardware. They do this with a radically different approach to computing, after just ten years of development.

    That’s why it makes sense for big businesses to start paying attention: in case D-Wave keeps that momentum going, you want to be the first to benefit from it. If not, it’s a fairly small write-off.

    You clearly don’t get this business logic, but unlike Outraged #552 I really think you should keep flogging that horse 🙂

  558. Cathy McGeoch Says:

    I’m done.

    Scott #491, re your remarks on the merits of blogs versus questions-after-talks, I find I prefer the latter venue. At least there is some chance that the people with the questions have made a good-faith effort to understand what the work is about; and a certain level of civility prevails.

    The downside of a blog is that the signal-to-noise ratio is too low, and the discourtesy too high, to encourage wide participation.

    Goodbye, boys. Have fun storming the castle!

  559. Henning Dekant Says:

    Cathy, thank you for coming here and standing up to this rowdy crowd!

  560. Peter W. Shor Says:

    Greg:

    “Everything else” is quite relevant to D-Wave’s business model. If they believe they can continue selling machines for $15 million and make money that way, they are dreaming. And by comparing the cost of a laptop with $15 million, you are implying that these comparisons are on an equal basis: that D-Wave would not be able to make money selling more of them more cheaply. I suspect this is totally wrong.

  561. Nobody Special Says:

    Henning Dekant #557 – You seem to keep switching your argument. Purchasing a D-Wave Two yields a negative NPV unless you can make money with that piece of hardware during its lifespan (or that of the project). Even so, I’d argue that Google is unlikely to profit directly from it. You’re not likely to have some percentage of search hits running on the D-Wave, and if you use the D-Wave to do some optimization of Google’s existing machines, you immediately get slapped with all the limitations of classical machines. I’d be willing to bet that you wouldn’t see anything more than a constant improvement above their current platform. And at that point, although you have a positive NPV, you are now in competition with just about every other piece of technology, which starts to erode the claim that D-Wave was a rational choice (except in the broadest sense of the term).

    Now you could argue things that have to do with the marketing of the Google brand, or attracting research talent. However, those are pretty difficult to quantify reliably in a non-trivial sense. In fact, given that Google in its primary LOB has little in the way of competition, it’s even arguable that those are marketing dollars they don’t need to spend.

    So investing dollars in pure research is, IMHO, the most credible answer to “Why buy this?”, and while I wholeheartedly believe that pure research needs to be funded, it’s really not something you would talk about in terms of NPV, because the *expectation* of pure research is that it will not end with a product.

  562. Greg Kuperberg Says:

    Peter – On the contrary, you’re supposing a hypothetical business model for the future. You have a labor-dominated company that has in total spent roughly $100 million, according to one of the news clips. They have sold exactly two machines, both of which were trophy purchases that were immediately lent to academia. But those two machines are $30 million in sales, which is a significant fraction of what they have spent in total.

    I can’t say what the business model will look like later. So far, the business model has been to spend venture capital, and survive on a small number of very expensive sales. From their point of view, they’re already making money. Of course the venture capitalists are not making money, but they are a rather different group. As Henning Dekant has explained, they could be happy with extreme long shots.

  563. Peter W. Shor Says:

    Greg:

    Neither of us knows what D-Wave’s business model really is. I am operating from the assumption that they actually have one, that it is not totally insane, and that it is based on the assumption that they will be able to prove that quantum annealing is good for something.

    You seem to be operating on the assumption that they know their machine will never be good for anything, and thus that they are purely scam artists.

  564. Greg Kuperberg Says:

    Peter – No, I do not think that they know that their machine will never be good for anything. Nor would I say that they are in the legal sense “scam artists”. What is true is that they are in a “heads we win big, tails we still win some” line of business. Even the most honest startup company, maybe especially the most honest one, would admit that venture capital often works that way.

    Besides, by business standards, a trophy product isn’t a scam at all, it’s what the customer wants. You said that they are “dreaming” if that is how they plan to succeed. But how do you know that? After all, it works for Lamborghini.

    Since business acumen has been mentioned so often in this thread: wild claims from D-Wave might well serve their business interests, but they do not serve the interests of the QC community as a whole. That’s why I think that someone should respond.

  565. Alexander Vlasov Says:

    Greg #564, maybe it is good that they do not serve that? Maybe it would be even better if there were a dozen different paths instead of one.

  566. Greg Kuperberg Says:

    Alexander – Look, academics are not above wild exaggerations. It is generally accepted within academia that a path to success based on exaggeration is parasitical. Not necessarily a scam or a fraud in the actionable sense, but nonetheless not constructive. The business world is a little different, because what is sometimes said there is “there is no such thing as bad publicity”.

    Anyway, you asked me whether D-Wave’s hardware is state of the art. The standard benchmark of a qubit is how long it keeps its memory, i.e., how long it takes to dephase (the T2 time) or, after that, to reach its thermal state (the T1 time). According to slides by Lidar, the T1 time of D-Wave’s qubits is between 10 and 100 nanoseconds. State-of-the-art superconducting qubits have a T2 time of more than 10 microseconds, which is 100 times better. And the T2 time is by mathematics always at most twice the T1 time.
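
    (For reference, the textbook relation behind that bound, which is standard physics rather than anything from Lidar’s slides: 1/T_2 = 1/(2 T_1) + 1/T_\phi, where T_\phi is the pure-dephasing time, hence T_2 \le 2 T_1. So a T_1 of 10–100 nanoseconds caps T_2 at 200 nanoseconds, far below the 10-microsecond T_2 of state-of-the-art qubits.)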

    D-Wave’s answer to this is that the standard figure of merit doesn’t matter. In other words, they argue that it doesn’t matter that Schrodinger’s cat quickly wanders in and out of the box. They claim that the cat spends enough time in the box that it works anyway. This claim has not been mathematically disproven, but it has little theoretical support. Either way, they sidestep what everyone else thinks is the benchmark.

  567. Mateus Araújo Says:

    @Scott: Nevermind D-Wave, don’t you intend to blog about arXiv:1306.0159? At this point in time, that would interest me much more!

  568. Henning Dekant Says:

    NS #561, admittedly I am getting to my wits’ end trying to explain something that seems patently obvious to me. Let me, for the last time (as others here must be getting rather tired of the redundancy), try to break it down:

    a) D-Wave represents a completely new computing platform, and they are investing in the platform, not just one machine.
    b) It may outpace classical computing.
    c) Google gets all its gigantic revenue streams from data crunching; even a minute improvement will have huge pay-offs.

    => You invest to acquire IP and know-how about the new platform, just in case.

    Google, you suggest, somehow should already attempt to run production searches on this first machine to justify the investment. Frankly, the idea is so absurd, I had to read it twice.

    I really don’t know how many other ways I can explain forward-looking investment decisions that discount back potential cash flows. C’mon, it’s not rocket science.

    Maybe, if you think of investment risk estimates as an exercise in Bayesian reasoning? If this still doesn’t compute for you I *now* officially resign from trying to explain it again.
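
    If it helps, here is that framing reduced to arithmetic (every number below is invented for illustration, not a claim about Google’s actual books):

      def npv(cashflows, rate):
          # Discount a list of yearly cash flows back to the present.
          return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

      p = 0.02                           # assumed chance the platform pans out
      payoff = [0, 0, 0, 1_000_000_000]  # hypothetical cash flows if it does
      cost = 10_000_000                  # the purchase price
      expected = p * npv(payoff, 0.10) - cost
      print(f"expected NPV: ${expected:,.0f}")  # ~ +$5M despite the long odds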

  569. Henning Dekant Says:

    Greg #562, “trophy purchases that were immediately lent to academia.”

    Who do you think will hold the resulting patents?

  570. Gil Kalai Says:

    Greg: “It is generally accepted within academia that a path to success based on exaggeration is parasitical.”

    Dear Greg, isn’t it a bit of an exaggeration to state your own academic attitude as the “generally accepted” view, without any evidence to support it?

    Individual scientists and academics are very different, and there are also systematic differences between disciplines. Some scientists are overly optimistic while others are overly pessimistic; some are more opinionated while others don’t have very strong opinions. Some sell their achievements more aggressively while others do not. Some tend to exaggerate and some do not. Part of our job as scientists is also to make the best choices on these matters.

    It is also not a big deal. If Scott on this blog says a zillion when he refers to 50, we can correct for that by simply multiplying by fifty and dividing by a zillion :).

    One thing I am proud of about my academic debate with Aram Harrow, which Scott mentioned above, is that over eight long posts and many comments the word ‘hype’ was not mentioned even once.

  571. Greg Kuperberg Says:

    Gil – Fair enough! There are several schools of thought in academic research, and one of them is that wild exaggeration of results is a parasitical route to success.

    There is also the question of playing a double game, where people’s technical results in papers are all carefully stated and objective, but on blogs and in press interviews they crank up the hype. I don’t like that either, but there is a school of thought that it is reasonable behavior.

  572. Bram Cohen Says:

    McGeoch #541: If you optimized the solvers, why are you using the exact same parameters for completely unrelated problems? For that matter, why are you using the exact same amount of time between restarts for different problem sizes, instead of making it proportional to the problem size? Restarting after 3500 could easily cause restarts before a solution was reached on problems of size 400.

  573. Gil Kalai Says:

    Dear Greg, you are a perfect mathematician, but one thing I would change about you is to have you hype your ass a little. Since I am not sure this English expression makes full sense, let me explain: you tend to belittle and underestimate your own stuff, and a little hype can do you only good.

  574. Henning Dekant Says:

    Greg #571, exaggerations in university press releases are pretty common these days, a regrettable side effect of exposing academia to market pressure and running universities like businesses, which IMHO they shouldn’t be.

    We’ve been over this ground many times; my POV is that businesses should be allowed to paint a vision.

    Now, if you could point to some D-Wave marketing collateral that clearly runs afoul of false-advertising laws, that would substantiate all the gnashing of teeth; but short of this we are just rehashing differences in opinion.

    I’m with Mateus Araújo #567: without Cathy this is getting pretty redundant and boring now. About time to put this thread to rest.

  575. Nobody Special Says:

    Henning Dekant #568 – “D-Wave represents a completely new computing platform, and they are investing in the platform, not just one machine.”

    Probably not the way you think. There are, from my viewpoint, two things developers are doing: either i) attempting to adapt an existing Google problem into something where the minimum of E(s) = \sum_i h_i s_i + \sum_{i<j} J_{i,j} s_i s_j is optimal for something they want (a toy mapping is sketched after ii) below), or ii) running said problems.

    That’s it. Really. I’m sure there are some details involved with the API but that’s the real work.

    i) Can be done without a D-Wave One, Two, Three, or Six-Million. It can be done with paper and pencil – which I hear often come for considerably less than $10M. Your suppliers may vary.

    ii) Can ALSO be done without any D-Wave hardware at all. As we are seeing, you can get the same effect, and perhaps better, running SA. The only thing you gain by owning a D-Anything is the particular probability distribution that whatever’s in the black box happens to optimize for.
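
    To make i) concrete, here is a hypothetical toy mapping of max-cut into the Ising form above: with h_i = 0 and J_ij = +1 on each edge, minimizing E maximizes the cut (brute force stands in for whatever solves the Ising problem):

      from itertools import product

      edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a toy 4-node graph
      h = [0.0] * 4
      J = {(i, j): 1.0 for (i, j) in edges}

      def energy(s):
          return sum(h[i] * s[i] for i in range(len(s))) + \
                 sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

      # Brute force over spin assignments, just to check the mapping.
      best = min(product([-1, 1], repeat=4), key=energy)
      cut = sum(1 for (i, j) in edges if best[i] != best[j])
      print(best, energy(best), "cut size:", cut)  # expect a cut of size 4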

    Investing in this particular platform in this particular way to gain an advantage in Google’s main LOB – which is advertisements – doesn’t make sense.

    “Google, you suggest, somehow should already attempt to run production searches on this first machine to justify the investment. Frankly, the idea is so absurd, I had to read it twice.”

    Apparently you should have read it at least four times, because you somehow read the opposite of what I wrote. I was suggesting the crazy notion that to have a positive NPV you need to make money. I then asserted that the D-Wave machine will IMHO make Google little or nothing directly. The two ways you could make money are a) to run something ON the machine or b) to use it to OPTIMIZE an existing platform. Since a), as I said, is pretty clearly not likely, you would have to be making money from b); otherwise the D-Wave project has a negative NPV. But b) means you are looking at the same classical barriers. So again the NPV for D-Wave looks bad.

    “Google gets all its gigantic revenue streams from data crunching; even a minute improvement will have huge pay-offs.”

    This is probably where you are making an error. NPV as a decision tool is used to compare projects, not just as a stand-alone number. So if your conclusion is still “Google’s decision is rational,” then this statement does not force that conclusion, since the purchase would have to be equal to or better than a number of other sources of minute benefit. In fact, the more you want to think any possible advantage will be worth $10M, the weaker your argument gets, since the number of probable alternative solutions increases.

    Again, if this is pure research then I applaud Google for investing in it, and hopefully it’s not misdirected; but because Scott makes a good argument, I’m guessing it is.

  576. Henning Dekant Says:

    NS #575, yes, mapping something meaningful to the Ising equation is the first step, and of course you don’t need a D-Wave machine for it.

    But as the much-ridiculed Ramsey number paper illustrated, actually mapping something optimally to D-Wave’s architecture is yet another issue.

    D-Wave may now be getting to the point where you can do something useful with it; after everything’s said and done, that’s what Google is betting its investment on.

  577. Nobody Special Says:

    Henning Dekant #576 – If by “mapping something optimally to D-Wave’s architecture” you mean something that performs well, then again I think you’re missing something rather important. You are no longer simply looking at a specific problem where some input values perform well on the D-Wave hardware. You are looking at something that might perform well on some input values and that you know performs badly on others. You also know that SA, on hardware several orders of magnitude less expensive, is better on some input values. This was at least partially clear in McGeoch’s paper.

    So if we’re betting on a near-term productive outcome and not just funding pure research, why is D-Wave the better bet than SA?

    In order for the NPV of D-Wave to be equivalent to SA’s, the benefit must be higher. That effectively kills your assertion that any minuscule amount of performance is sufficient to justify the decision, and with it your treating Google’s decision as rational as a foregone conclusion.

    To be clear, I’m not disputing that Google might be betting; I’m arguing with your rather strident assertion that Google’s bet is necessarily rational… and your condescending attempts to use NPV and a non-vanishing probability to justify it.

  578. johnstricker Says:

    Hey Scott, you made the news in Germany, in the respectable newspaper “Die Zeit” no less! Along with some nice personal information (tenure, parenthood), and, on the science side, no “crazy stuff” about QC such as “tries all solutions in parallel, and picks out the correct one” ;-), just a note on the “special properties” of “certain algorithms”. It was nice to read.
    Thank you for your work, and take care!

  579. johnstricker Says:

    D’oh, forgot the link:
    http://www.zeit.de/wissen/2013-06/quantencomputer-test/komplettansicht

  580. Scott Says:

    johnstricker: Thanks so much for that link! I don’t read German, but I used Google Translate to get the gist of it. I think the translation could be published, with all the Google Translate errors left intact, and it would still be a thousand times better than most D-Wave articles published in the English-language press. 🙂

  581. Scott Says:

    Everyone else: sorry for leaving lots of stuff unresponded-to! They kept me busy all day at Purdue. I’ll respond tomorrow morning (well, as soon as I can), then finally close off this thread, since I agree with Henning #574 that it’s reached the point of diminishing returns.

  582. Will it quantum? « QuantumBlah Says:

    […] about D-Wave in the aftermath of this preprint (and a related study). I highly recommend reading Scott Aaronson’s post about the topic, from which it seems that the jury is still out on whether the D-Wave architecture will provide […]

  583. Kenneth W. Regan Says:

    If your Mac like mine is wonky loading the whole Die Zeit page, you can take the final “komplettansicht” out of the URL in comment #579. The article is well thought out, and puts into perspective that this discussion is an important moment in science—how important may depend on the answer to “is-it-quantum-computing-or-not?” A corollary to the importance is that while one needs to separate out a lot of “color commentary” in the thread to get the (roast-)beef like the article does, the “color” will (IMHO) also be valuable for the human story and the scientific context.

  584. Misunderstanding? Says:

    I thought I was following this whole thread pretty well, but I was surprised no one took issue with this comment from Cathy (#467):

    “We have to disagree, I guess, on whether NASAgoogleUSRA asked the “wrong questions” about performance of a product they were planning to use in the field, compared to viable alternatives (such as buying 10,000 desktops and running CPLEX on them all, which would just about cost the same)”

    She seems to be arguing that the single D-Wave unit’s performance is equivalent to that of 10,000 desktops, and thus that the prices of those quantities should be compared. However, my understanding is that the single D-Wave unit could be beaten by a single laptop computer (running appropriate code, not CPLEX) on the same test instances, thus making the latter comparison the relevant one in terms of cost?

  585. Alexander Vlasov Says:

    Greg #566, thank you, that is interesting, because at the moment of their first sale I saw some discussion that their hardware was quite cool. It would be interesting to hear more from some “superconducting professionals”. As for exaggeration, theirs is less than for traditional gate QC (x/0 = \infty for x \ne 0), because here we do not have QC at all (at least officially).

  586. Bram Cohen Says:

    Misunderstanding #584: She was referring to running CPLEX specifically, which is extremely slow on these particular problems. I don’t think CPLEX can actually parallelize like that, though.

  587. Rubbernecker Says:

    Serious question. When you were a kid, did you go around making accusations that Dumbo couldn’t possibly fly because his theory of flight is questionable, a 707 is much faster, and besides everyone can see his ears are TOO SMALL?

  588. John Smolin Says:

    Rubbernecker #587:

    I know you’re just trolling for comedy but…..

    I don’t know about Scott, but I certainly would have complained if Disney went around saying that though Dumbo was small, pretty soon a new line of Dumbos (TM) would be better than jet aviation at least for some long-haul routes and that anyone who didn’t think this likely was some kind of fool who DOESN’T BELIEVE IN PROGRESS.

    If, on the other hand, your point is that D-Wave should be taken merely as entertainment, I could live with that.

  589. Scott Says:

    John Smolin #588: LOL! Yes, I was just thinking that, if I HAD expressed skepticism as a kid about the plausibility of flying elephants, I would have been on even firmer ground than with D-Wave skepticism. But I assumed rubbernecker’s point was that, even if elephants don’t fly, if everyone else wants to believe they do, then saying they don’t makes you a miserable, grumpy person deserving of ridicule. Which, indeed, has been the prevailing attitude almost everywhere throughout human history — the scientific attitude cuts strongly against human nature in that way.

    PS. When you feed the Die Zeit article into Google Translate, D-Wave’s devices get rendered in English as “elephants.” Seriously — try it!

  590. Greg Kuperberg Says:

    I’d like to request a faithful translation to English of the article in Die Zeit. Google Translate runs roughshod over nuance. It also has particular problems with German because of changes in word order and because of word concatenations. I’m quite interested to read this article properly.

  591. Peter W. Shor Says:

    Google Translate also messes up the title: “This chip expects better than a roast beef sandwich”—“rechnen” means both to expect and to compute.

  592. Chris Lott Says:

    Hartmut Neven’s research blog explains the D-Wave purchase as enabling them to move algorithms they have already developed for the machine “from theory to practice,” to see if they actually work. From this we can conclude that 1) they don’t know what, if anything, will actually work, and 2) they are writing algorithm patents, just in case it does. Note that major patents cost plenty to develop; pursuing a large number of them already amounts to a sizable fraction of this machine’s cost. So could these patents end up being valuable, even if the D-Wave approach does not pan out? Could they view the D-Wave platform as just a way to write such patents (i.e., “A quantum approach to . . .” for just about any problem)? Won’t many approaches be effectively like stochastic hill climbing in the end?

  593. Peter W. Shor Says:

    Major patents cost plenty to develop? Not compared with the D-Wave machine. The cost is essentially the patent lawyer’s salary. Google will have their in-house patent lawyers, who make reasonable (but not partner-in-major-law-firm) salaries. You could pay for decades of in-house patent-lawyer hours with the price of the D-Wave machine, and that would pay for many more patents than I can conceive of Google filing on algorithms related to quantum annealing.

  594. Kenneth W. Regan Says:

    I’ll kick off a crowdsource translation effort if others will kick in a paragraph each…or until someone finds a “Die Zeit in English” page :-).

    —————————-
    This Chip Computes Better Than a Roast Beef Sandwich

    Has the D-Wave company built the computer of the future? NASA and Google have [already] purchased the purported quantum computer. Even so, researchers have their doubts about the machine.

    Scott Aaronson can really afford to lean back a little. At 32, he has just landed a lifelong [tenured] position as Professor [sic] at the renowned Massachusetts Institute of Technology (MIT). Colleagues refer to him as the “Wunderkind”, and he became a father just a short time ago. If only it weren’t for this business, which keeps calling the computer scientist back to make heated entries in his blog: a Canadian [startup] company by the name of D-Wave Systems claims it has built the “first commercial quantum computer” in the world.
    ————————

    Words in […] are extra but IMHO hinted by the German construction, while “business” is my punning innovation since the German “Sache” meaning “thing” has overtones of “affair”.

  595. Chris Lott Says:

    The cost of a patent includes development, writing, application, and long-term maintenance. The people involved include the engineering team, the patent attorney and assistants, and all the accompanying overhead. There are often many iterations in the application process. For important patents, you need to apply separately in many different countries. You would be surprised how much it actually costs to acquire and maintain (and perhaps defend) a major international patent; a substantial patent effort is not insignificant compared to the D-Wave machine’s cost.

  596. Translator Says:

    a few more paragraphs (corrected Google Translate output – not very polished; some physics explanations do not seem clear/correct to me, but I didn’t try to improve – just translate 😉
    (I hope Scott is ok with having his blog (ab)used for this purpose…)

    […]

    To Aaronson and many of his colleagues this is a very bold claim. For
    more than 15 years dozens of research groups have been trying to
    develop a machine that could, thanks to the principles of quantum
    mechanics, solve specific computational tasks faster than conventional
    computers. Current efforts focus on arrangements of atoms or ions that
    are trapped with lasers and electric fields in a high vacuum and
    deliberately manipulated. Researchers also experiment with photons,
    superconductors and so-called quantum dots in solids.

    Because elementary particles and atoms can occupy multiple states at the same time, a quantum computer consisting of many of these quantum bits (qubits) can, for some special algorithms, theoretically handle far more values than a conventional computer with the same number of bits.

    However, all approaches towards a quantum computer are still far from
    a machine that can perform more than the most simple arithmetic operations outside of a painstakingly constructed laboratory environment. Previous highlights of basic research include entangling 14 calcium ions for a short time, and using a laboratory quantum computer of a few qubits to decompose the number 21 into its factors 7 and 3.

    Thus, the machine of D-Wave seems to come from the future. In the latest version, an incredible 512 qubits supposedly compute, imprinted in the form of superconducting wire loops on a three-millimeter microchip. A big black box shields the cold chip (at −273 degrees Celsius) from environmental influences. The direction in which the current circulates in one of the loops corresponds to the binary value 0 or 1.

  597. Scott Says:

    Cathy McGeoch #558: If you’re still reading this, I’m genuinely sorry that you found my blog to be an uncomfortable environment. The fact that I found the group meeting to be an uncomfortable environment—less because of you than because of my own colleagues—is no excuse, and I’ve decided that I would communicate more effectively if I learned to moderate my tone better. I’m going to work at it.

    You might be right that science blogs will never have “wide participation.” On the other hand, the quest for truth has never had, and probably will never have, particularly wide participation either.

  598. Scott Says:

    Translator #596:

      I hope Scott is ok with having his blog (ab)used for this purpose…

    Yes, of course, and thank you!

  599. Translator Says:

    the next subsection (the rest – the real meat about speed-up, McGeoch, Troyer etc. – I’ll leave for others or some other day; the parts about “Hamiltonian function” and “qubit loops” sound weird in the German original, too, and are not how I would describe the D-Wave machine)

    […]

    Cooler chip for computing

    D-Wave was recently able to sell one of the elephant-sized computational boxes for 10 million U.S. dollars to Google and NASA. The companies want to test it at the Ames Research Center in California. Two years ago, the defense contractor Lockheed Martin had already invested in D-Wave Systems, as had Amazon founder Jeff Bezos and the technology investor In-Q-Tel, which also works for the CIA.

    Might the D-Wave chip even bring the breakthrough? Or is the quantum computer a case of false labeling? Scott Aaronson, who explores the possibilities and limits of quantum computing at MIT, was suspicious of the alleged miracle machine from the start, as were many other basic researchers. Aaronson quickly declared himself, on his blog, the “chief skeptic” of D-Wave. Already in 2007, he remarked scathingly that the machine was about as helpful for optimization problems in industry “as a roast beef sandwich”.

    At the time, D-Wave had just introduced its prototype with 16 supposed qubits and announced that this machine could solve all NP-complete problems. This is a class of problems that can be solved on classical computers only at very great cost in time. One example is the “traveling salesman problem,” in which a number of cities must be visited so that the total distance traveled is as short as possible.

    In 2011, the industry researchers of D-Wave finally published a paper in Nature in which they outlined the workings of their computer. According to this, the D-Wave computer can find the smallest value in an optimization problem, which mathematically corresponds to minimizing the so-called Hamiltonian of a quantum system. The D-Wave machine transforms this computational problem into a physical obstacle course: the individual loops on the chip are placed in a configuration that corresponds to a binary formulation of the Hamiltonian. The temperature of the chip is decreased to 20 mK, and within a short time the electric currents in the ultra-cold loops shift such that the entire system occupies the state of minimum energy. Similar behavior is displayed by the atoms in molten glass when the melt is cooled slowly.

    In the D-Wave computer, the optimal arrangement of the qubit loops is interpreted and read out electronically. The machine then spits out the optimal solution of the Hamiltonian. Physicists call this “adiabatic” computing. Compared to other approaches to realizing a quantum computer, it has a great advantage: the latter must be controlled by a series of custom commands, which are realized, e.g., in the case of ions, via very precise microwave pulses. These inform the system of coupled qubits which logical operation it should perform.

    The D-Wave computer works differently: “In the cryostat, no microwaves need to be injected,” says the quantum physicist Frank Wilhelm-Mauch of the University of the Saarland, who saw the D-Wave computer during a research stay in Canada. The D-Wave chip controls the interaction between the individual qubits by means of electronics between the superconducting loops. Therefore, such a system could be scaled up to contain more qubits much more easily, since it does not have the problem of difficult external control, says Wilhelm-Mauch. At the same time, however, the hundreds of qubits in the latest D-Wave chip do not mean that the Canadians are more advanced than other groups of researchers [says Wilhelm-Mauch]. For besides optimization tasks, the D-Wave machine cannot solve anything. “Thus, the device is a single-purpose machine,” says Wilhelm-Mauch.

  600. Scott Says:

    Henning Dekant: Your comments about the wonderful accountability in the corporate world left me questioning my own sanity—as, I admit, perspectives different from mine often do! So, in an attempt to regain balance, I decided to read Is Enron Overpriced?—the now-famous Fortune article by Bethany McLean that’s credited with helping to trigger the fall of Enron. I’d encourage everyone else who’s interested in the D-Wave situation to read McLean’s piece as well. It captures beautifully the vertigo and confusion you feel when you realize that
    (a) a huge number of people—including very smart ones who you respect—are completely gung-ho about some commercial venture,
    (b) the fundamentals just don’t make any sense, and
    (c) you’re the one who’s going to be vilified if you point it out.

    On the other hand—and in the interest of honesty—another thing that jumped out at me about McLean’s piece was its cautious, understated tone. That’s something that I’ll try hard to learn from going forward, as I realize that I lose my temper too easily.

  601. wolfgang Says:

    Scott #600
    >> the fundamentals just don’t make any sense

    So here is my prediction:
    i) In 1–2 years we will see the D-Wave IPO, supported by a wave of companies (Yahoo, Amazon, etc.) who do not want to be left behind by Google – buying their latest quantum annealer in large numbers.
    ii) Scott (who is known to make large bets), Greg K. and others will short the stock, because the fundamentals make no sense.
    iii) Unfortunately, the naysayers will be wiped out when the stock doubles after Jim Cramer recommends it on Mad Money.
    iv) The stock will subsequently drop after a bearish article in an obscure German newspaper explains that a cheap open-source optimizer easily outperforms the D-Wave device.
    v) A group of Dem. and Rep. senators will use this opportunity to bring legislation to finally outlaw such communist software and strengthen software patents.

  602. Michael Bacon Says:

    Scott@600,

    “On the other hand—and in the interest of honesty—another thing that jumped out at me about McLean’s piece was its cautious, understated tone. That’s something that I’ll try hard to learn from going forward, as I realize that I lose my temper too easily.”

    No! Don’t do that . . . wasn’t that cautious tone the most incorrect aspect of the article?

  603. Nobody Special Says:

    Scott #600 – I’ve worked in industry (although now I work on the administrative side of a university), and Henning Dekant is right that most companies do hold people accountable for their actions, but the margin of error is huge. Oxford University surveyed companies and collected data on 5,400 IT projects. They found evidence suggesting that, on average, large projects (> $15M) went 45% over budget and 7% over schedule, and delivered 56% less value than expected.

    17% fail so badly that the people surveyed believed that it threatened the stability of the company.

  604. Jared Says:

    I’ll continue where Translator left off. My German is very rusty, but using Google Translate and manually fixing issues seems good enough:

    Hardly any speed advantages

    Optimization tasks, however, are genuinely important: Google wants to develop algorithms that more quickly fish files with a certain mark out of a large database. At NASA it is supposed to help in the search for exoplanets, though exactly how is unclear. Lockheed Martin, meanwhile, is said to be interested in using it to optimize the 24 million lines of code of a new fighter jet, the journalist and defense expert Sharon Weinberger speculated in an article for the BBC.

    [second photo caption: Two D-Wave computers, which were tested in a laboratory of Amherst College in the city of the same name in the U.S. state of Massachusetts.]

    However, it is not yet clear that D-Wave will be faster on these problems than a well-optimized classical computer. So far, at any rate, the Canadians’ chip is not [faster]: this is confirmed by a study by the theoretical physicist and computer expert Matthias Troyer of ETH Zurich, soon to be published in a scientific journal. Troyer was able to test a 128-qubit chip by D-Wave in the spring. He then wrote an algorithm that computes “adiabatically” on a classical computer – with the result that it was 15 times faster than D-Wave’s alleged quantum computer. In the future, Troyer wants to compare the 512-qubit chip with his classical algorithm; the results are expected in the coming months.

    “There have not been enough studies conducted to make a final statement,” they say at D-Wave. And they refer to another study, which has already circulated prominently in English-language media: the American computer scientist Catherine McGeoch solved one of three tested optimization tasks 3,600 times faster with D-Wave than with conventional optimization software on a PC. But that does not prove the superiority of D-Wave, says McGeoch, who was paid by D-Wave for her study: “Our test really does not say a lot about how fast the machine actually is.” For McGeoch compared the supposed miracle machine with computers running commercially available software – which, unlike D-Wave, had not been tailored to the optimization problem.

    In particular, her study made no statement as to whether quantum effects play a role in D-Wave, says McGeoch. And that, according to experts, is the crucial question in the end: are the superconducting loops on the chip actually entangled with one another during the computation, as D-Wave has claimed for years?

    Only then could a D-Wave chip one day be faster than conventionally optimized supercomputers. In fact, the physicist Troyer has examined this question in his study, as the first external researcher: his team compared the behavior of the supposed miracle machine with a simulation of a quantum computer. The results were in good agreement.

    However, this is contradicted by an online essay by two well-known computer scientists from the D-Wave competitor IBM. They argue that D-Wave’s results so far can be reproduced even if quantum effects play no role in the machine. Troyer’s team has challenged this interpretation in an online comment.

    From the perspective of renowned basic researchers, however, something still speaks against the claim that the D-Wave qubits compute while coupled: the Canadian machine is said to be isolated from its environment for only about a nanosecond. At least that is what the Innsbruck quantum physicist Rainer Blatt said after visiting the D-Wave machine last year. Such a short “coherence time,” in conjunction with the rather slow clock rate of the D-Wave machine, makes it hard to imagine, in Blatt’s opinion, that the loops are entangled during the computation. Immanuel Bloch of the Max Planck Institute for Quantum Optics likewise expresses “a certain skepticism, given the very short coherence times.”

    More useful than a laptop?

    The only quantum properties that D-Wave can really demonstrate, and that could possibly cause a speedup, are tunneling processes, says Blatt. These can help the currents in the superconducting loops settle slightly faster into the state of lowest energy. In the future, the company wants to adopt error-correction algorithms from laboratory researchers, with which the coherence times of entangled ions and atomic ensembles have already been improved.

    By now, Scott Aaronson too has departed from his skepticism a bit. “It could be that D-Wave succeeds in the end,” he says. However, he finds it revealing that basic researchers had said from the beginning that D-Wave would need these error-correction algorithms; the Canadian company just wanted to hear none of it. The main reason for his resignation from the post of chief skeptic, however, was an invitation from D-Wave in February 2012. He was withdrawing his earlier comments, he wrote on his blog after visiting the Canadian company’s laboratories: D-Wave had built a machine that could definitely compute better than a roast beef sandwich. “The only question is whether it is more useful than a laptop.”

    Finished!

  605. Noon Says:

    New relevant result by Sanjeeb Dash at IBM: http://arxiv.org/abs/1306.1202 – A note on QUBO instances defined on Chimera graphs

    ” … We observe that after a standard reformulation of QUBO problems defined on 512 node Chimera graphs as mixed-integer linear programs (MILP), they can be solved to optimality with the CPLEX MILP solver in time comparable to the time reported by McGeogh and Wang for the D-wave quantum computer.”
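
    For concreteness, here is a minimal sketch of what the textbook QUBO-to-MILP linearization looks like (my own toy version in Python with PuLP, on a random ±1 instance; Dash’s note may well use a different or tighter formulation): each quadratic term x_i·x_j is replaced by a continuous variable y_ij pinned down by three linear constraints.

    import numpy as np
    import pulp  # pip install pulp

    rng = np.random.default_rng(0)
    n = 8
    Q = np.triu(rng.choice([-1, 1], size=(n, n)), k=1)  # random +/-1 edge weights, i < j
    c = rng.choice([-1, 1], size=n)                     # random +/-1 node weights

    prob = pulp.LpProblem("qubo_as_milp", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]
    y = {}
    for i in range(n):
        for j in range(i + 1, n):
            if Q[i, j] != 0:
                # y_ij stands in for the product x_i * x_j, expressed linearly:
                y[i, j] = pulp.LpVariable(f"y{i}_{j}", lowBound=0, upBound=1)
                prob += y[i, j] <= x[i]
                prob += y[i, j] <= x[j]
                prob += y[i, j] >= x[i] + x[j] - 1

    # Objective: min sum_ij Q_ij x_i x_j + sum_i c_i x_i, now entirely linear.
    prob += (pulp.lpSum(Q[i, j] * y[i, j] for (i, j) in y)
             + pulp.lpSum(c[i] * x[i] for i in range(n)))
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("optimum:", pulp.value(prob.objective))

    Any MILP solver can then attack the linearized model with branch-and-cut; whether it does so quickly on Chimera-structured instances is exactly the empirical question Dash’s note addresses.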

  606. A Merry Clown Says:

    Scientific discourse is the art of juggling decorum, truth and humor. A high-wire feat, attempted under imposing shadows cast by giants and above the distraction of merry dancing clowns.

    The “appropriate” tone for scientific discourse seems to be:
    (a) Cordial. Always credit others for their hard work and good intentions (allow or at least pretend that others are basically well-intentioned, except in rare situations where there is proof of egregious misconduct).
    (b) Biting, merciless and hard-nosed on the substantive issues. The truth deserves no less.

    Perhaps the harsher (b) is, the gentler and more thorough (a) should be. After all, human beings are what they are.

    Certainly, provided one adequately treads through the niceties in (a), there’s no reason to worry about hurting anyone’s feelings in (b). Anyone who makes scientific claims in a professional or public arena should be prepared to put on their big-boy pants or their big-girl pants and have their claims face the brutal gauntlet of scientific scrutiny. All attempts should be made to avoid even the appearance that any part of (b) contains personal barbs or insults (unless those barbs happen to be hilarious).

    Outside of science the rule is: whoever flings the horseshit the hardest wins.

  607. Scott Says:

    Merry Clown #606: That’s incredibly well-said. Essentially, what Shtetl-Optimized readers got to see this past week was me falling off the high wire (with tenure the safety net below? 🙂 ). I failed at the human level—though admittedly, while attempting an extremely difficult balance, and while distracted by clowns and giants. At the scientific level—i.e., your level (b)—I still stand by what I said.

  608. Rahul Says:

    Cathy McGeoch says:

    Scott #491, re your remarks on the merits of blog versus questions-after-talks, I find I prefer the latter venue. At least there is some chance that the people with the questions have made a good-faith effort to understand what the work is about; and a certain level of civility prevails.

    The downside of a blog is that the signal-to-noise ratio is too low, and the discourtesy too high, to encourage wide participation.

    I didn’t see much uncivil commenting in this thread, at least not against Cathy. In any case, I think academics have gotten too used to cushy academic discourse, under whose conventions even blatant incongruities are often allowed to pass unquestioned, just to avoid offending the person concerned.

    I’ve suffered through many a talk of the type Scott mentioned, where one wonders why none of the stalwarts in the room is questioning the speaker’s obviously flawed assertions.

    Fine, Cathy, return to your ivory tower, where in deference to your seniority and tenure no one will ask you those pesky questions, and people file meekly in and out of your talks. Of course, I realize that you have no obligation to engage the masses on here; but when you attribute your leaving to a lack of civility etc., that’s a bit irritating.

    PS. If you think this blog discourages wide participation you haven’t perused the list of commentators who frequent it closely enough.

  609. Rahul Says:

    Scott #607:

    I failed at the human level—though admittedly, while attempting an extremely difficult balance, and while distracted by clowns and giants

    Personally, I think you did an admirably good job and I can’t really see your failure on this thread. At any level.

  610. Scott Says:

    Greg Jones #530:

      Something about your activity as “Chief Skeptic” has always seemed off to me and this afternoon it finally occurred to me what it is! You (and to an even greater extreme, Greg Kuperberg) devote a lot of time to criticizing D-Wave … But the thing is, you are always battling the media and PR, and this is not the realm of science or truth! Why don’t you ever read one of D-Wave’s ~60 peer-reviewed papers … and offer some real criticisms of the real science? … That would be far more interesting.

    The short answer is that I have looked at some of the papers out of D-Wave, and the ones I saw seemed perfectly reasonable. But they also didn’t really bear on the question that interests me (and most people): namely, whether there’s any evidence for a genuine quantum speedup over classical computation. Instead, they dealt with all sorts of subsidiary issues, like how to encode other optimization problems as QUBO’s, etc.

    As for my and Greg’s obsession with “media and PR”—well, it reminds me of a common criticism of Richard Dawkins, which goes as follows.

      “Why are you so obsessed with the crude, vulgar, ‘PR’ aspects of religion—like a guy in a tall hat going around telling people that they’ll burn in hell if they use condoms? Why don’t you focus instead on the sophisticated arguments of theologians, or the beautiful religiously-inspired art and literature, or the selfless works of charity?”

    To this, I think Dawkins can justly say (over and over, till he’s blue in the face): look, the philosophical arguments might be interesting, the art might be breathtaking, and the charity might be praiseworthy—but none of that has any bearing on the truth or falsehood of the doctrines promoted by the guy in the tall hat. And hundreds of millions of people literally believe those doctrines, and they act on their beliefs, and that’s what affects the future of the world much more than any of the things you mentioned.

    I refuse to accept the notion that there’s a separate realm called “PR and media,” where the rules of intellectual integrity don’t apply, those rules being reserved for scientists talking to other scientists. Even basic research is far from an autonomous enterprise: it depends on the public’s goodwill and support for its continued existence. I firmly believe that, if QC hype grows to unsustainable levels, without being checked by skepticism, that will ultimately come back to bite QC as a research endeavor also.

  611. Tim Converse Says:

    In solidarity with John Sidles, I have decided to ban myself from (lurking on) this blog for a period of three months. (I chose this alternative over a three-month hunger strike, for obvious reasons). My self-imposed ban also applies to reading the second half of ‘Quantum Computing Since Democritus’ – too bad, I was enjoying it.

    Scott, there’s an arbitrary and petulant off-with-their-heads quality to banning people for some amount of time that you just decided on just now. At least publish some sentencing guidelines!

  612. Scott Says:

    Tim Converse #611: Err, did you see that John Sidles’s ban has been commuted to a 2-week “cooling-off period,” as a result of John Preskill’s intervention? So maybe you only want to ban yourself from this blog for 2 weeks, and only not read (say) the last chapter of QCSD?

  613. Graeme Says:

    Hey Scott,

    One part of (*) is not quite right. It was when you wrote it, but this paper by Sanjeeb Dash has come out since: http://arxiv.org/abs/1306.1202

    It turns out that, properly used, off-the-shelf software (actually CPLEX) can outperform D-Wave on 512 spins, solving the problem in 0.2 seconds by mapping it to a mixed-integer linear program. It scales pretty well too, solving 20,000 spins in 45 seconds.

  614. Scott Says:

    Graeme #613: Well then, I stand corrected! 🙂

    (Though I did refer carefully to D-Wave outperforming “certain” off-the-shelf solvers, which one could interpret to exclude the parts of CPLEX that Sanjeeb Dash used.)

  615. Jay Says:

    In commiseration with Tim, who unfortunately was forced to break his own self-ban in order to publicize it, I have decided on a two-week part-time hunger strike.

  616. Greg Kuperberg Says:

    I was wondering why CPLEX did not do better in these benchmarks. I had always thought of its methods as a great implementation of great algorithms.

  617. Scott Says:

    Jay #615: LOL! I’ve also frequently attempted “part-time hunger strikes,” but they typically fail after just an hour or two, when I find myself hitting the fridge at the slightest pang of pre-hunger.

  618. Greg Kuperberg Says:

    I refuse to accept the notion that there’s a separate realm called “PR and media,” where the rules of intellectual integrity don’t apply, those rules being reserved for scientists talking to other scientists.

    Yeah, well said. I’m equally against these double games. For one thing, where do you draw the line? Someone could say “The title? Oh, that’s just public relations. The objective science begins with the abstract.” Or someone could say, “The abstract? That’s just public relations. Read the text of the paper.” Or then, “The paper? That’s just public relations. You can download the objective science as a binary data file from my FTP directory.”

    For the record, I’ve also looked at the papers by D-Wave. (Most of them are conveniently posted in the arXiv.) Some of these papers have reasonable science. But some of them have some of the same wishful thinking as what D-Wave says outside of its papers.

    For example, there is the paper “Does Adiabatic Quantum Optimization Fail for NP-complete Problems?” [arXiv:1010.0669]. The authors hold out hope that BQP contains NP by means of adiabatic quantum algorithms. They seem to miss two fundamental points. One is that their glimmers of hope also generally apply to classical simulated annealing, leading one to “hope” that BPP contains NP. (Indeed, the universal-solvent nature of simulated annealing has led some people to think things like that in the past.)

    The other point is that the real purpose of trying any heuristic algorithm on an NP-hard problem is to discover new sectors of that problem that are softer, and thus might not be NP-hard. The purpose is not to try to capture NP-hardness with a polynomial-time algorithm, because that’s ridiculous.

  619. Sanity Checker Says:

    Greg Kuperberg #618: I think you might be misinterpreting the statements in arXiv:1010.0669. They are just saying that there is a very long way to go before statements like, “The quantum adiabatic algorithm fails for random instances of NP-hard problems” can be deemed justified.

    First, you have to show failure for all possible problem Hamiltonians that you can use for encoding your problem.

    Second, you have to show failure for all possible adiabatic evolution paths.

    Third, you have to show that none of the excited states that you might be getting upon “failure” can be used to exactly solve the problem at hand in polynomial time.

  620. Greg Kuperberg Says:

    No, there is only a short way to go, depending on what you mean by “justified”. It’s a conjecture that BQP does not contain NP. Yes, this is only a conjecture, it has not been mathematically proven. But either you believe the conjecture or you don’t. It’s as simple as that.

  621. Sanity Checker Says:

    Greg #620: I believe in the conjecture that all of quantum mechanics is just a stupid joke pulled on humans by the Flying Spaghetti Monster. But I had never even dreamt of inserting my belief as the foundation of a rigorous-looking scientific argument. I should try it sometime.

  622. Greg Kuperberg Says:

    One minute you believe that P ≠ NP, the next you believe that BQP ⊉ NP, and after that you’ve got miracle healing on television, Hare Krishnas, and pastafarianism. I agree, blind faith is a dangerous thing.

  623. Sanity Checker Says:

    Still, Greg, you seem to be unwilling to get it through your head that arXiv:1010.0669 is only pointing out that arXiv:0908.2782 offers nothing more than proof by example.

  624. aram harrow Says:

    Sanity Checker #619, #623, etc.:
    1010.0669 is a good example of why a lot of researchers don’t like reading D-Wave papers. Its central claim (that there exist adiabatic paths with a large gap) is vacuously true! The authors also introduce an interesting heuristic, which may be useful in some cases, but the result is substantially oversold.

    Why “vacuously true”? Here is the adiabatic path
    1) start with all sigma_x terms
    2) linearly transition to sum_i (-1)^{s_i} sigma_z^i,
    where s_1, …, s_n is the solution.
    3) linearly transition to the final Hamiltonian.

    Trust me, there’s a big gap (it’s too boring to prove it here).

    This is obviously of no help in solving NP-complete problems because you need the witness to come up with the path. And this is the fundamental problem with the claim that “there exist” good adiabatic paths. It’s vacuously true, and by itself, not interesting for algorithms.

    Now, buried in the paper is more than this. Their family of alternate paths is less trivial, and may turn out to be a useful heuristic, or maybe even could have provably good performance in some cases. But this potential nugget of usefulness is packaged in a really unfortunate way.
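
    To make the construction above concrete, here is a quick numerical sketch (my own, in numpy, for a random 4-qubit Ising instance; I take the standard transverse-field start Hamiltonian, and flip the sign of the middle Hamiltonian relative to the shorthand above so that its ground state is exactly the solution string s). It computes the minimum spectral gap along both segments of the path.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    I2 = np.eye(2)
    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])

    def op_on(k, op):
        # single-qubit operator `op` acting on qubit k of an n-qubit register
        mats = [I2] * n
        mats[k] = op
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    X = [op_on(i, sx) for i in range(n)]
    Z = [op_on(i, sz) for i in range(n)]

    # Random Ising problem Hamiltonian (the "final Hamiltonian" of step 3).
    H_final = sum(rng.choice([-1., 1.]) * Z[i] @ Z[j]
                  for i in range(n) for j in range(i + 1, n))
    H_final += sum(rng.choice([-1., 1.]) * Z[i] for i in range(n))

    # Brute-force the witness s = argmin (H_final is diagonal in the z basis).
    s = [int(b) for b in format(int(np.argmin(np.diag(H_final))), f"0{n}b")]

    H_start = -sum(X)                                    # step 1: transverse field
    H_mid = -sum((-1) ** s[i] * Z[i] for i in range(n))  # step 2: ground state = s

    def min_gap(HA, HB, steps=201):
        # smallest gap between the two lowest eigenvalues along (1-t)*HA + t*HB
        return min(np.diff(np.linalg.eigvalsh((1 - t) * HA + t * HB)[:2])[0]
                   for t in np.linspace(0, 1, steps))

    print("min gap on segment 1:", min_gap(H_start, H_mid))  # ~sqrt(2): qubits decouple
    print("min gap on segment 2:", min_gap(H_mid, H_final))

    On segment 1 the qubits don’t interact at all, so the gap provably never dips below sqrt(2); on segment 2 both endpoint Hamiltonians are diagonal with the same ground state, so the gap can close only at the very endpoint t = 1, and only if the final Hamiltonian has degenerate optima. That is the whole point: with the witness in hand, the big gap is trivial.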

  625. Sanjeeb Says:

    Graeme #613

    One needs to be cautious in interpreting my statements in http://arxiv.org/abs/1306.1202.
    I would prefer not to talk about the Ising spin model, as that is not what I tested, but about specific QUBO instances where both node and edge weights are randomly generated (McGeogh and Wang say of their tests: “Weights are drawn uniformly from the set {-1,+1}”). Various people have pointed out that if small changes are made to the setting I use, then the problems may become hard to solve with CPLEX (which is what you would expect of NP-complete problems).

    So it may very well be that D-Wave is faster than the best current methods on classical computers for specific distributions of weights for the Ising model, and the corresponding weights for QUBO. I do not know enough to comment on that.

  626. Sanjeeb Says:

    I should have said McGeoch and Wang in my previous email.

  627. Alex Selby Says:

    Sanjeeb, you should know that the description of the problem instance in the McGeoch-Wang paper is not correct. Try to calculate the average minimum value and you will see a big discrepancy with their reported values, such as -815.2. (I learnt this after trying six different models in an attempt to eliminate the discrepancy, and after extensive email correspondence with Prof McGeoch.)

    The described problems are indeed pretty trivial, which led me up the garden path somewhat. But I think the problems actually used are considerably harder, or at least have considerably harder instances.

    I’ve written something about it here:
    http://www.archduke.org/stuff/d-wave-comment-on-comparison-with-classical-computers/

  628. Peter W. Shor Says:

    Sanity Checker says:

    Greg Kuperberg #618: I think you might be misinterpreting the statements in arXiv:1010.0669. They are just saying that there is a very long way to go before statements like, “The quantum adiabatic algorithm fails for random instances of NP-hard problems” can be deemed justified.

    Greg Kuperberg #620 says

    No, there is only a short way to go, depending on what you mean by “justified”. It’s a conjecture that BQP does not contain NP. Yes, this is only a conjecture, it has not been mathematically proven. But either you believe the conjecture or you don’t. It’s as simple as that.

    I feel obliged to point out that Greg here is spouting complete nonsense, and that Scott Aaronson, at least, should know enough to recognize this. There is no scientific basis for believing that random instances of NP-complete problems are NP-hard. In fact, for many probability distributions, random instances of NP-complete problems are easy. The only distribution for which they are proven to be NP-hard is from Levin’s paper, and those random instances are not constructible.

  629. Peter W. Shor Says:

    Greg: let me apologize for being harsh in my last comment. But let me also say that, contrary to what you seem to be implying, the BQP ⊉ NP conjecture says nothing about how well quantum algorithms will do on random instances of NP-hard problems, because no constructible probability distribution over NP-hard problems is known to be NP-hard on average.

    Personally, I think that some constructible probability distributions over NP-hard problems do not lie in P, but that none of them are provably NP-hard on average. And I think it is quite possible that some probability distributions over NP-hard problems are easy for quantum algorithms but hard for classical algorithms.

  630. Greg Kuperberg Says:

    Peter – You don’t have to apologize for being harsh. However, I am wondering if you might want to apologize for being wrong. 🙂

    I don’t see what the content of Dickson and Amin [arXiv:1010.0669] has to do with random instances of anything. They don’t discuss random instances except once in passing. They’re looking at the max clique problem. They justify it by citing a Karp reduction from exact cover to max clique. It’s true that Altschuler et al [arXiv:0908.2782] look at random instances, but of exact cover rather than max clique. It would be awkward to make random instances of max clique that match random instances of exact cover; Dickson and Amin don’t even try. (Indeed, this sort of mismatch is a main reason that average-case hardness within NP is a difficult topic.)

    I just read over arXiv:1010.0669 again and it still looks like a wish that BQP contains NP. If I’m missing some caveat that makes this paper less off the wall, you’ll have to show me where it is in the paper.

  631. Greg Kuperberg Says:

    Peter – As to your other question, of course there are probability distributions over NP-hard problems that are in BQP on average but probably not in BPP. It’s your own result! You could take a distribution over systems of binary equations that can be anything in principle, but usually gives you factoring expressed in base 2.

  632. Greg Kuperberg Says:

    Oh I see my mistake now. Anonymous respondent “Sanity Checker” slipped the phrase “random instances” into his response to me, and I didn’t notice because I was thinking about arXiv:1010.0669 itself. Harrumph.

  633. Greg Kuperberg Says:

    Sanity Checker #623 – Unwilling to get it through my head? For someone exercising the right to anonymity, you sure are vehement! Pretty quick on the draw too, because you responded in 25 minutes with a detailed rebuttal to the first mention of arXiv:1010.0669. Hmm…

    I agree that arXiv:0912.0746 has a somewhat odd emphasis. They cast their paper as a response to the proposal that BQP ⊇ NP because of adiabatic quantum computation. Instead of saying that this proposal is unlikely and that people believe that BQP ⊉ NP, they say that “unfortunately” they find evidence against it. Now, adversarial instances are really enough to make the point, but they mention in the introduction that that has already been done. So they up the ante to a distributional version of an NP-hard problem, which is therefore no longer provably NP-hard.

    Nonetheless, arXiv:1010.0669 goes back to NP-hardness in general, and it is not just a rebuttal to arXiv:0912.0746. It attempts to take positive steps towards solving an NP-hard problem (with no specific distribution stated) in a model that happens to look a lot like D-Wave’s actual hardware.

  634. Peter w. Shor Says:

    Greg Kuperberg:

    I believe that, despite the way it was worded (it was written by physicists and not computer scientists), the actual point of the research in arXiv:0912.0746 was to show that the canonical path does not work for solving random instances of NP-complete problems. This was in response to the idea that the adiabatic algorithm might be able to do just that.

  635. Greg Kuperberg Says:

    Peter – Obviously, as the example of mixing 3-SAT with factoring shows, “random instances of NP-complete problems” is not really a robust concept. (At least, it needs an extra idea to identify important problems in NP and input distributions.) Nonetheless I agree that arXiv:0912.0746 does something reasonable: it analyzes uniformly random EC3.

    I also agree that the true complexity of uniformly random EC3 is anyone’s guess. Relatively recently I learned Lipton’s result that the permanent of a matrix is random self-reducible, and I was stunned. It’s too bad that it’s hard to get results like that within NP.

    Anyway, the more relevant point that I wanted to make is about arXiv:1010.0669, which has some of the same D-Wavism as D-Wave’s public relations department.

  636. Peter w. Shor Says:

    Greg: There are equally stunning results on randomly self-reducible lattice problems within NP. See M. Ajtai, “Generating Hard Instances of the Short Basis Problem,” and all the cryptography and complexity papers that follow it up. There are no known randomly self-reducible NP-complete problems.

  637. Greg Kuperberg Says:

    On the contrary, Feigenbaum and Fortnow showed that if an NP-complete problem is random-self-reducible in a straightforward way, then the polynomial hierarchy collapses. I was trying to remember that paper yesterday.

    http://cs.yale.edu/homes/jf/FF.pdf

  638. David Poulin Says:

    I totally agree with Scott’s statement (*).

    I invite all experts in quantum information science to express their view on this statement by leaving a comment here. Many other experts in the field with whom I have discussed this issue agree with me, and I hope that the press can become aware of this apparent consensus.

    Statement:

    (*) D-Wave founder Geordie Rose claims that D-Wave has now accomplished its goal of building a quantum computer that, in his words, is “better at something than any other option available.” This claim has been widely and uncritically repeated in the press, so that much of the nerd world now accepts it as fact. However, the claim is not supported by the evidence currently available. It appears that, while the D-Wave machine does outperform certain off-the-shelf solvers, simulated annealing codes have been written that outperform the D-Wave machine on its own native problem when run on a standard laptop. More research is needed to clarify the issue, but in the meantime, it seems worth knowing that this is where things currently stand.

  639. Thomas Spieker Says:

    Hey Scott-

    I don’t know if this topic has been brought up yet, as I’ve only been able to make it through about three hundred of these comments, lol. So, at the risk of being redundant, I’d like to ask you a question regarding quantum speedup/general speedup in the D-Wave devices, as well as make a few other points:

    Based on all the evidence that has accumulated, we can reasonably make the assumption that something involving quantum bizarreness is happening in the D-Wave devices. You yourself admit this.

    Echoing this as well is Seth Lloyd, whose opinion is one of the most unbiased views on this situation I’ve come across so far during this extreme hype/skeptic 5h%tstorm.

    BBC May 22nd:
    But even some former D-Wave critics have been won over – at least in part. Seth Lloyd, a professor of mechanical engineering at the Massachusetts Institute of Technology who has long been involved in quantum computing, says that when Lockheed Martin first got interested in D-Wave, he tried to dissuade them from buying it. Lloyd himself had been involved in developing the principles behind the adiabatic quantum computer, but says his group didn’t patent the idea because they didn’t think a practical machine could really be built. “I was probably wrong, and [Lockheed and D-Wave] were probably right,” he now says.

    http://www.bbc.com/future/story/20130516-big-bets-on-quantum-computers/3

    I think it’s radically apparent that we are seeing an exponential speedup if we compare each generation of the D-Wave devices.

    The 16-qubit generation: you said yourself it was no more computationally useful than a “roast beef sandwich,” as it could barely solve even the most elementary Sudoku problems…

    The 128-qubit generation
    (84–108 qubits):

    According to the Lidar and Troyer results, this generation actually competed well (1/15th as fast) against the optimized simulated-annealing code that Sergei Isakov recently wrote.

    Doesn’t going from barely being able to solve a Sudoku puzzle on the 16-qubit machine to finding Ramsey numbers, doing binary classification in image mapping, and 3D protein folding on the 108-qubit machine seem like exponential evolution/speedup in processing growth?

    The 512-qubit generation:

    http://graphics8.nytimes.com/packages/pdf/business/quantum-study.pdf

    “As a case in point, our second project compares the V5 hardware chip used in our first study to a V6 chip that became operational after the study was completed.” – McGeoch and Wang

    V5 (439 Q-bits) versus V6 (502 Q-bits) speedup comparison:

    “Thus V6 is 5.6 times faster than V5 at k = 1 and 3 times faster
    at k = 1000” -McGeoch and Wang

    http://graphics8.nytimes.com/packages/pdf/business/quantum-study.pdf

    Here we see even more precise results, which also point to exponential speedup: a 16 percent increase in qubit count resulting in a 300 percent and a 560 percent processing speedup, respectively.

    Why are you overlooking the fact that there seems to be exponential growth in speed/processing capabilities between the actual D-Wave chips/hardware generations?

    The validity of the above, combined with the educated opinion that there is something quantum happening in the devices, would then imply that in all probability we are seeing a phenomenon akin to quantum exponential speedup.

    Also, I must agree with Cathy that the announcements from D-Wave and Google have been sober for the most part. Geordie commented once, 4 weeks ago, before he may even have been aware of the Troyer and Selby results, that D-Wave’s device led the market in speed. It’s certainly logical to say that this hype is purely a result of CERTAIN media outlets writing what they want to believe/what they think is interesting (I’ve also read quite a few articles from major media sources that are very even-handed and make reference to many of your points).

    Based on the above evidence, pointing to the facts that (1) there is probably something quantum happening in D-Wave’s devices, (2) there exist exponential speedups between D-Wave’s chip generations, and (3) this hype really hasn’t been fanned in the slightest by D-Wave or Google, why then would you want to resume your post as D-Wave’s chief skeptic?

  640. Peter w. Shor Says:

    David Poulin:

    I would like some clarification as to whether Cathy McGeoch and Troyer et al. were using the same probability distribution over instances. Once I get this clarification, I will agree with Scott’s statement.

  641. Greg Kuperberg Says:

    Peter – It may or may not be the same down to the last detail. But I bet it can be fixed to satisfy you. And I hope that it can be quickly arranged, with the help of McGeoch, Isakov, and Troyer. It would be silly if you held out just because of fixable discrepancies.

  642. Nobody Special Says:

    Thomas Spieker #639 – Taking your numbers for granted, and ignoring a bunch of problems with your understanding of complexity: are you saying that if I showed you a processor that sped up 1000% in one hardware generation, your conclusion would be that “we are seeing a phenomenon akin to quantum exponential speedup”?

  643. Peter w. Shor Says:

    Greg:
    I am sure it can be arranged, although I’m not sure it can be done before the embargo on Troyer’s paper is lifted.

  644. Sanjeeb Dash Says:

    Alex Selby #627:

    I have heard second-hand that data sets different from those mentioned in the McGeoch-Wang paper were actually used (instead of random QUBO as stated in the paper, supposedly random Ising, subsequently mapped to QUBO). These do seem hard for most exact methods (those that produce provably optimal solutions). I hope to see the publication of the exact instances, or an erratum pointing out the precise data distribution. It would then seem that the problems should more properly be called random Ising, and comparisons should be made with best-in-class solvers. It would not seem appropriate to take a TSP solver such as Concorde, map Euclidean TSP instances to max-cut, and then state that the TSP solver is better at max-cut than the max-cut solver. This is not to say that CPLEX cannot be trivially beaten; here is a 3-variable example in CPLEX LP file format where dynamic programming will destroy CPLEX (from Aardal and Hurkens, “Hard equality constrained integer knapsacks”):

    Maximize
    x1 + x2 + 3 x3
    Subject To
    12223 x1 + 12224 x2 + 36671 x3 = 149389505
    Bounds
    Integer
    x1 x2 x3
    End
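
    Incidentally, unless I’ve slipped in the arithmetic, the right-hand side above is exactly the Frobenius number of 12223 and 12224 (12223·12224 − 12223 − 12224 = 149389505), and 36671 = 12223 + 2·12224, so the equality has no non-negative integer solution at all; the hard part for branch-and-bound is proving that. Here is a rough numpy sketch (my own, not from the Aardal-Hurkens paper) of the kind of dynamic program that settles it directly; it needs a few hundred MB of memory.

    import numpy as np

    coeffs = [12223, 12224, 36671]
    target = 149389505

    # reachable[v] == True iff v is a non-negative integer combination of coeffs
    reachable = np.zeros(target + 1, dtype=bool)
    reachable[0] = True
    for c in coeffs:
        # Once v is reachable, so is v + k*c for all k >= 0. Padding the array to
        # a multiple of c and accumulating down each residue class does this in
        # one vectorized pass per coefficient.
        pad = (-reachable.size) % c
        grid = np.concatenate([reachable, np.zeros(pad, dtype=bool)]).reshape(-1, c)
        reachable = np.maximum.accumulate(grid, axis=0).ravel()[:target + 1]

    print("equality constraint satisfiable:", bool(reachable[target]))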

  645. Science dress code manifesto | Are You Shura? Says:

    […] The sale of the second D-Wave superconducting optimizer raised an additional, but rather old, problem. Consider a scientific work based on computer code. What should be the guideline for publication of the results? There are different ideas; e.g., a science code manifesto was suggested more than a year ago, yet without visible success. It may indeed sometimes be awkward to make all the code used in preparing a work freely accessible and open source, but even a less restrictive version (aka a “dress code”) could prevent many of the problems actively discussed recently. […]

  646. Frank Verstraete Says:

    I agree with Scott’s statement. In fact it is very mild.

  647. D-Wave – tõde kvantarvutist? | Ivari sahver Says:

    […] D-Wave: Truth finally starts to emerge. […]

  648. Het quantum aan de macht? – deel 2 Says:

    […] only recently that their device could be examined more closely by scientists. The most important critic of their work, however, tears the quantum behavior of their system to shreds. The future […]

  649. Reading List for 28 May 2013 » Cafe Turing Says:

    […] ◊ Scott Aaronson :: D-Wave: Truth finally starts to emerge […]

  650. JF Puget Says:

    I have run the McGeoch & Wang instances with CPLEX. Bottom line: when using the latest CPLEX release, some multithreading, and a CPLEX-friendly model, CPLEX performance is much better than what is reported in the paper. Yet it is still not competitive with the D-Wave machine.

    Results are described here: https://www.ibm.com/developerworks/community/blogs/jfp/entry/d_wave_vs_cplex_comparison_part_2_qubo?lang=en

  651. Future Tech Updates Says:

    […] machine AND more companies are becoming interested in using the device. For the alternative view, here is an article pointing out some of the short-comings of the D-Wave computer. So, there is not yet a 100% ringing endorsement […]

  652. Will you or will you not? | Wavewatching Says:

    […] an exhausting rekindling of the D-Wave melee on his blog,  Scott Aaronson's latest paper, "The Ghost in the Quantum Turing Machine", is a welcome change […]

  653. GOOGLE’S QUANTUM COMPUTER PROVEN TO REAL THING (ALMOST) | TECH in AMERICA (TiA) Says:

    […] D-Wave: Truth finally starts to emerge (scottaaronson.com) […]

  654. Scott Says:

    The article trackbacked above (in comment #653) ends with something that I’ve seen over and over, and now think of as the Fundamental Error of D-Wave Journalism:

      For what it’s worth, Google is matter-of-fact in calling the D-Wave a quantum computer … At Google, the semantics aren’t nearly as important as the task at hand.

    Journalists see D-Wave, Google, and NASA making bombastic claims, and they see many academics being skeptical. They don’t understand the details, so they assume the only possible explanation is that the machine “works”—i.e., that it’s useful or even necessary for “the task at hand”—and the only problem is that D-Wave hasn’t yet dotted all the i’s and crossed all the t’s to the ivory-tower eggheads’ satisfaction.

    The reality—that Google and NASA just spent $15 million for a machine that can be outperformed on its own native problem by a laptop, and that’s not the slightest bit useful for “the task at hand,” unless that task is studying the machine itself—now looks firmly established. It’s not even denied by D-Wave’s knowledgeable supporters, if you read their defenses carefully. But it seems like this reality goes so far against people’s preconceptions, that the general public (or at least the media) is still unable to process it.

  655. Appreciative Reader Says:

    Thanks for weighing in on the recent press, Professor Aaronson. I was hoping you might make a comment or two, but was unwilling to pull you back in myself! 🙂

    Just to be completely clear (although you were pretty darn clear in your post above), my take from Alex Selby’s most recent update is precisely what you wrote — that the D-Wave device used in McGeoch and Wang can be beaten by a single-core laptop when run on the problem types tested. Would you say that is a fair assessment?

  656. Scott Says:

    AR #655: Well, Troyer et al.’s results about the 512-qubit machine have not yet been released (I’m told they will be in a couple months). But yes, it currently looks from Selby and from Garg like the better performance of classical heuristics persists as far out as one looks.

  657. Harold Says:

    Has anyone tried to model the D-Wave system as a purely classical device? That is, forgetting quantum mechanics completely: if each qubit starts in a random classical state and the system evolves classically, will such a system produce a solution to the D-Wave problem? How long will it take? Could we estimate the flip time for a qubit and the number of flips needed to find a solution, thus isolating the classical/quantum question from all other factors? Comparing such a calculation to the results obtained by McGeoch et al. should shed light on whether or not D-Wave’s QA technology is really all that amazing. (See the sketch below for one way to set this up.)
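
    Something along those lines is easy to prototype. Here is a minimal sketch (my own toy version: single-spin-flip Metropolis dynamics with a falling temperature on a small random ±1 Ising instance, rather than D-Wave’s actual Chimera graph) that counts how many flips such purely classical dynamics needs:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 24
    J = np.triu(rng.choice([-1, 1], size=(n, n)), k=1)  # random +/-1 couplings
    J = J + J.T                                         # symmetric, zero diagonal

    def energy(s):
        return 0.5 * s @ J @ s

    s = rng.choice([-1, 1], size=n)  # random classical start state
    best, flips = energy(s), 0
    for T in np.geomspace(3.0, 0.05, 20000):  # slowly falling temperature
        i = rng.integers(n)
        dE = -2 * s[i] * (J[i] @ s)  # energy change from flipping spin i
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]
            flips += 1
            best = min(best, energy(s))
    print(f"{flips} accepted flips; best energy found: {best}")

    Counting accepted flips, and multiplying by a realistic per-flip time for the hardware in question, would give exactly the kind of back-of-the-envelope comparison Harold suggests.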

  658. Daniel Freeman Says:

    It’s like some sort of Dadaist performance advertising…

  659. Aaronson and Arkhipov’s Result on Hierarchy Collapse | Combinatorics and more Says:

    […] see a reason for why it could be avoided for the bosonic device. Indeed I was surprised when Scott offered the belief that 30-boson machines can be implemented without fault-tolerance. (In an ideal world, Scott and […]

  660. Tim Marsalis Says:

    A very interesting post.

  661. BosonSampling and (BKS) Noise Sensitivity | Combinatorics and more Says:

    […] Aaronson also expressed guarded optimism that even without quantum fault-tolerance BosonSampling can be demonstrated by boson machines for […]

  662. Komplexitetsklasser | Arkeoblog Says:

    […] The company D-Wave has sold what they claim to be a working quantum computer to, among others, Google. Despite enormous hype, however, nobody knows […]

  664. Out of the AI Winter and into the Cold | Wavewatching Says:

    […] is that the AI winter is the ultimate parable of warning to make his case (as was pointed out by an anonymous poster to his blog).  I.e. he thinks the D-Wave marketing hype can be equated to the over-promises of AI research in […]

  665. Moe Aboulkheir Says:

    With regard to Google/Lockheed: I’m sure there were plenty of competent physicists and chemists working at Toyota/IMRA when Pons & Fleischmann were spirited away to Japan.

  666. The Fall of Contemporary Crypto: Quantum Computing's Challenge to Cyber Security ← Security Mutant Says:

    […] computers seems to be becoming more nuanced, but there are still prominent critics such as Scott Aaronson of MIT who argues that the exhibited quantum behavior won’t actually allow the machine to outperform […]

  667. Blue Note Tech Blog » Is That Quantum Computer for Real? There May Finally Be a Test Says:

    […] created by D-Wave Systems, in Burnaby, British Columbia, really offers the claimed speedups, whether it works the way the company thinks it does, and even whether it is really harnessing the counterintuitive weirdness of quantum physics, which […]

  668. NFTF » Is That Quantum Computer for Real? There May Finally Be a Test Says:

    […] created by D-Wave Systems, in Burnaby, British Columbia, really offers the claimed speedups, whether it works the way the company thinks it does, and even whether it is really harnessing the counterintuitive weirdness of quantum physics, which […]

  669. Jonathon Says:

    As biting and invective as your comments are unrighteously construed to dominate over the demure tunes of relaxed ease and the stupor of unreflecting acquiescence and dulled aptitude, I am grateful to have stumbled across your entertaining and informative blog, and do hope you tirelessly put up the good fight while doing the groves of academe right with a sound trimming, or rather axing.

    What I take away from all of this is that D-Wave is playing the Schroedinger line much too far – at least in the layman’s understanding – as it seems hitherto undetermined whether their box contains a bona fide QC even *after* opening! 🙂

  670. Scott Aaronson and “Quantum Computing since Democritus” « Farkas' Dilemma Says:

    […] He’s skeptical about D-Wave […]

  671. David Gonzales Says:

    Perhaps Google wants quantum computing for legal reasons: Incorporating quantum computation would make it legally plausible to claim they have no physical capability of knowing what their computers are doing. Whether true or not, this would add another few ridiculous months to any proceeding. “Entangling” D-wave and quantum physics into the court case.

  672. Emilio Koguchi Says:

    Finding one’s own path in life is exceedingly important. This is from C. G. Jung.

  673. Jenner Says:

    Simulated annealing is a heuristic search method. D-Wave’s machine isn’t doing heuristic search. This is the reason why (*) isn’t relevant. Can you provide a non-heuristic optimization algorithm that outperforms D-Wave’s machine? If so, that would be highly relevant.

  674. Anne Says:

    Although most people have accepted that D-Wave hasn’t worked out, I am hesitant to call it a failure or a scam. I enjoyed reading your post – you have quite a sense of humour!

  675. MIT's 'Chief D-Wave Skeptic' Weighs In - Advanced Computer Learning Company Says:

    […] Since 2007, MIT computer science professor and “Chief D-Wave skeptic” Scott Aaronson has been at the center of the debate about D-Wave. Below, a few of Aaronson’s greatest moments in snark, taken from his blog. […]

  676. Nando de Freitas Says:

    The following papers shed light on the method of Alex Selby.

    From Fields to Trees. Firas Hamze and Nando de Freitas. In Uncertainty in Artificial Intelligence (UAI), pages 243–250, Arlington, Virginia, 2004. AUAI Press.
    http://arxiv.org/ftp/arxiv/papers/1207/1207.4149.pdf

    Information Theory Tools to Rank MCMC Algorithms on Probabilistic Graphical Models. Firas Hamze, Jean-Noel Rivasseau and Nando de Freitas. In Information Theory and Applications Workshop (ITA), 2006.

  677. Philosophy can be done better! Also, a critical review of Tegmark’s MUH | The Daily Pochemuchka Says:

    […] claimed P vs. NP proof is wrong” with respect to Vinay Deolalikar’s claimed proof, and here for a discussion of Google’s D-Wave “quantum computer” acquisition – also […]

  678. Artificial Intelligence, The Brain as Quantum Computer – Talk about Disruptive | CloudRamblings Says:

    […]  and Is the D-Wave Real? […]

  679. D-Wave Systems Quantum Computer | All Star Activist Says:

    […] Scott Aaronson (16 May 2013). “D-Wave: Truth finally starts to emerge”. […]

  680. Steven Watanabe Says:

    Scott #597

    I’m glad to see that you have a very open mind about the various discussions on your blog. Many blog owners only allow posts that fit their agenda. I think the “signal-to-noise” is just fine: it gives readers something to object to, prompts questions, and inspires additional thought.

  681. Monash Says:

    What a silly comparison! A heuristic-based method was compared to an enumeration-based exact method!
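
    (By way of contrast with the annealing sketch above, here is a minimal exact, enumeration-based sketch for the same kind of toy Ising instance: it provably finds the optimum, but only because all 2^16 assignments can be checked by brute force. The random instance is again an illustrative assumption, not D-Wave’s benchmark.)

    # Exact enumeration on a toy Ising instance: check all 2^n assignments.
    import random
    from itertools import product

    n = 16
    J = {(i, j): random.choice([-1, 1]) for i in range(n) for j in range(i + 1, n)}

    def energy(s):
        return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

    best_E = min(energy(s) for s in product([-1, 1], repeat=n))
    print("provably optimal energy:", best_E)   # 65536 candidates examined

    (That is what “exact” buys you, and why exhaustive methods stop being an option long before instance sizes of practical interest.)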

  682. TRANSCEND MEDIA SERVICE » Confused About the NSA’s Quantum Computing Project? This MIT Computer Scientist Can Explain. Says:

    […] found briefly is that there is pretty good circumstantial evidence that there’s some kind of quantum annealing behavior, which means that there’s a little bit of quantumness […]

  683. Steve Jurvetson Says:

    Time to pull out the Karl Popper text on “truth” =)

    http://googleresearch.blogspot.ca/2015/12/when-can-quantum-annealing-win.html

    “Our aim as scientists is objective truth; more truth, more interesting truth, more intelligible truth. We cannot reasonably aim at certainty. Once we realize that human knowledge is fallible, we realize also that we can never be completely certain that we have not made a mistake.”
    ― Karl R. Popper

  684. Shtetl-Optimized » Blog Archive » Google, D-Wave, and the case of the factor-10^8 speedup for WHAT? Says:

    […] the comment sections of one of my previous posts, D-Wave investor Steve Jurvetson even tried to erect a victory stele, by quoting Karl Popper about […]

  685. Scott Aaronson on Google’s new quantum-computing paper | fabTechnoid Says:

    […] yet another classical algorithm on the stage, which is Selby’s algorithm, which I think was first announced on my blog. It’s a local-search algorithm, but it’s one that is able to figure out that the […]

  687. Trae Says:

    You mean to tell me that I shouldn’t consider bullshit media hype about coconut oil and snake oil to be regular, run-of-the-mill bullshit?

    So you mean to tell me that when I turn on the TV and it tells me that Hydroxycut or coconut oil or snake oil or whatever will make me lose 100 lbs in a week, I shouldn’t consider the biology or chemistry, or research the actual science behind the claims?

    Come on, man. D-Wave doesn’t control what gets put out in the media. The media talks about quantum computing like it’s going to give humans a direct phone line to God himself, so we can finally stop going to church and just Google all the questions we have for our priest/preacher/imam on whatever day it is we attend service.

  688. Thomas Murphy Says:

    You are so completely wrong. Stopped reading after the first paragraph. If you have an issue, grab a tissue. D-Wave has already proven you wrong.

  689. Thomas Murphy Says:

    I read your post. Never mind, I rescind my previous comment.
    You’re half right.