Geordie Rose at MIT

While there are many, many things in this world that I’m bent on destroying, D-Wave Systems has never been one of them. Ideally, I’d simply let the D-Wave folks do their thing (namely, try to build an adiabatic quantum computer) while I do my thing (namely, study the fundamental limits of quantum computers). It was only when, in connection with D-Wave, cringe-worthy claims about quantum computing started appearing all over the press that I felt a professional obligation to say something.

Now that I’m “involved,” though, I also need to keep you ablog of any notable further developments. And presumably, D-Wave founder Geordie Rose coming to MIT to meet with our quantum information group counts as notable.

Two months ago, you’ll recall, we were graced by a visit from D-Wave’s Mohammad Amin and Andrew Berkley, but I’d never before had the pleasure of meeting Geordie. At least formally, the reason for his visit was not to defend D-Wave, but to present “four hard problems” for us to solve. These problems were as follows:

  1. Find a practical adiabatic factoring algorithm. Because of the equivalence of adiabatic and standard quantum computing, we know that such an algorithm exists, but the running time you get from applying the reduction is something like O(n^11). Geordie asks for an O(n^3) factoring algorithm in the adiabatic model. It was generally agreed (with one dissent, from Geordie) that reducing factoring to a 3SAT instance, and then throwing a generic adiabatic optimization algorithm at the result, would be a really, really bad approach to this problem.
  2. Find a fault-tolerance threshold for adiabatic quantum computing, similar to the known threshold in the circuit model. Geordie asserted that such a threshold has to exist, because of the equivalence of adiabatic and standard quantum computing. However, others immediately pointed out that this is not so: the equivalence theorem is not known to be “fault-tolerance-preserving.” This is a major open problem that many people have worked on without success.
  3. Prove upper and lower bounds on the adiabatic algorithm’s performance in finding exact solutions to hard optimization problems.
  4. Prove upper and lower bounds on its performance in finding approximate solutions to such problems. (Ed Farhi described 3 and 4 as “so much harder than anything else we’ve failed to solve.”)

While none of these problems are new to the quantum computing community, they’re all extremely good ones, and all (indeed) extremely hard.

Of course, we did also discuss some controversial, red-meat, “did-D-Wave-build-a-quantum-computer” sorts of questions, so I owe it to you to provide a few highlights from that discussion.

Seth Lloyd, who’s been more sympathetic to D-Wave than most of us, correctly pointed out that D-Wave has a “credibility problem in the scientific community.” He discussed in great detail the experiments D-Wave ought to be doing to convince scientists that they’re really seeing quantum effects. I strongly agreed with Seth, adding that I’d rather see two coherent qubits than thousands of incoherent ones. Of course, even if D-Wave could demonstrate two-qubit entanglement (and Geordie says it’s the “next thing on the list”), there would still remain the enormous issues of scalability and of the limitations of the adiabatic algorithm in solving hard optimization problems. But at least we could be more comfortable in saying that what they currently have is a tiny quantum computer.

Geordie conceded that, so far, D-Wave has no direct evidence for entanglement among two or more qubits. He nevertheless argued that they have indirect evidence (basically, that their data are better fit by a simple quantum model than a simple classical one), and that the lack of direct evidence is solely due to the difficulty of performing the requisite measurements. Seth replied that, despite the difficulty, D-Wave would “do itself a big favor” by performing the measurements.

Seth also mentioned D-Wave’s claims to the popular press — for example, about the ability of quantum computers to solve NP-complete problems — as a major factor in its scientific credibility problem. Geordie admitted that some of D-Wave’s publicity was (here he paused for a few seconds) “not inaccurate, but verging on inaccurate.”

Note: Geordie now says that he was only talking about the reporting on D-Wave; in his words, “I stand by 100% anything I’ve ever said to anyone about these machines.” At the time, I understood him quite clearly to be talking about D-Wave’s own publicity; it’s strange that he would have hesitated to admit that reporters have misunderstood things. But I freely admit that I might have misheard or misinterpreted him.

I asked Geordie about the result of Bansal, Bravyi, and Terhal that the planar Ising spin graph problem admits an efficient classical approximation algorithm — thus calling into question D-Wave’s whole strategy of approximately solving other NP-hard optimization problems by mapping them onto Ising spin graph instances. Geordie replied, first, that their machine can handle many non-planar links, and second, that Bansal et al.’s algorithm merely trades an exponential dependence on n for an exponential dependence on 1/ε. I agreed that their algorithm isn’t practical, but argued that its mere existence would have to be dealt with in any attempt to convert approximate solutions of the Ising spin graph problem into approximate solutions of the original optimization problems.

So, where do we stand? Here’s my attempt at a fair summary:

  • The people at D-Wave are not conscious frauds; they genuinely believe in what they’re doing.
  • On the other hand, much of the publicity surrounding D-Wave can be safely rejected. To some academics, even one or two public misrepresentations are enough to destroy a company’s credibility. Others, however, prefer to ignore press releases — seeing hype, exaggeration, and even outright falsehoods as just a necessary part of raising money — and to concentrate solely on a company’s communications with experts. Where you fall between these extremes probably depends on your personality more than anything else.
  • In the past, I criticized D-Wave (rightly, I think) for failing to share information with the scientific community in a good-faith manner. To their credit, they’re now making more of an effort to communicate.
  • Thus far, by Geordie’s own account, there’s no direct evidence that D-Wave’s machine actually produces entanglement at any stage of its operation (which all agree is a non-negotiable requirement for quantum computing). Geordie says that producing such evidence will be the “next thing on the list.” The Sudoku stunt was worthless from a scientific perspective; it did not answer any of the questions that physicists need answered.
  • Even if D-Wave managed to build (say) a coherent 1,024-qubit machine satisfying all of its design specs, it’s not obvious it would outperform a classical computer on any problem of practical interest. This is true both because of the inherent limitations of the adiabatic algorithm, and because of specific concerns about the Ising spin graph problem. On the other hand, it’s also not obvious that such a machine wouldn’t outperform a classical computer on some practical problems. The experiment would be an interesting one! Of course, this uncertainty — combined with the more immediate uncertainties about whether D-Wave can build such a machine at all, and indeed, about whether they can even produce two-qubit entanglement — also means that any talk of “lining up customers” is comically premature.

105 Responses to “Geordie Rose at MIT”

  1. nextquant Blog Says:

    Scott,

    thank you very much for this post. It summarizes a lot of good points about the D-Wave situation. I’d also say that any talk of “lining up customers” contradicts the current state of knowledge about their superconducting machines.

    One question, though… inherent limitations of the adiabatic algorithm – could you please explain these limitations in a little more detail?

  2. Scott Says:

    could you please explain these limitations in a little more detail?

    Sure. This is a huge sprawling field by now, but to summarize in a few paragraphs:

    The adiabatic algorithm is basically a quantum version of simulated annealing. As such, it has certain advantages over simulated annealing (including an ability to “tunnel” through thin enough potential barriers), but it also shares many of the disadvantages. In particular, suppose that a fitness landscape has a local optimum with an extremely wide basin of attraction, as well as a global optimum with an extremely narrow basin. Then known results indicate that the adiabatic algorithm will generically get stuck at the local optimum, just as simulated annealing will.

    (All of this is assuming you use a simple, linear path between the initial and final Hamiltonians, which you can move along at an arbitrary time-varying rate. If the path itself can be arbitrary, then people know much less. Note that there’s always some path between the initial and final Hamiltonians that lets you solve the problem instantly — the question is how you can find such a path without already knowing the answer!)

    Now, assuming the fitness function is an arbitrary black box, everything I said above has been proven rigorously. In the case where the fitness function corresponds to (say) the number of satisfied clauses of a 3SAT formula, it’s harder to prove things. Reichardt constructed explicit SAT instances with the narrow basin / wide basin behavior, for which he was able to prove that the adiabatic algorithm takes exponential time (as usual, assuming a linear path). But when it comes to (say) random 3SAT, or even random 2SAT, no one has been able to say anything about the adiabatic algorithm’s performance at a rigorous level. And of course, random k-SAT is still a huge idealization of what people want to solve in practice.

    So a lot of the work on the adiabatic algorithm has been empirical. Using simulations on classical computers (which, as you can imagine, require massive amounts of computer time), Farhi’s group has identified cases where the spectral gap (which is inversely related to the running time) seems to decrease polynomially with the instance size n, and other cases where it seems to decrease exponentially. But since n is never more than about 20 in these simulations, it’s incredibly hard to draw asymptotic conclusions from them.
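    To give a feel for what those simulations involve, here’s a minimal sketch in Python (a toy stand-in, not Farhi’s group’s actual code): it builds the standard linear interpolation H(s) = (1-s)·H_B + s·H_P between a transverse-field starting Hamiltonian and a diagonal “problem” Hamiltonian, then tracks the gap between the two lowest eigenvalues along the path. The random diagonal costs are just a placeholder for a real 3SAT clause-violation count.

    import numpy as np

    # Toy sketch (not actual research code): minimum spectral gap along
    # the linear adiabatic path H(s) = (1-s)*H_B + s*H_P. The gap's
    # scaling with n is what governs the adiabatic running time.

    n = 6                                      # qubits; matrices are 2^n x 2^n
    rng = np.random.default_rng(0)

    # H_P: diagonal "problem" Hamiltonian. A random permutation of costs
    # stands in for a 3SAT clause-violation count (unique ground state).
    H_P = np.diag(rng.permutation(2**n).astype(float))

    # H_B: transverse-field "beginning" Hamiltonian, -sum_i X_i.
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    I2 = np.eye(2)
    H_B = np.zeros((2**n, 2**n))
    for i in range(n):
        term = np.array([[1.0]])
        for j in range(n):
            term = np.kron(term, X if j == i else I2)
        H_B -= term

    # Sweep s from 0 to 1 and record the gap between the two lowest levels.
    gaps = []
    for s in np.linspace(0.0, 1.0, 101):
        evals = np.linalg.eigvalsh((1 - s) * H_B + s * H_P)
        gaps.append(evals[1] - evals[0])

    print(f"minimum gap along the path: {min(gaps):.4f}")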

    In summary, to really understand how well the adiabatic algorithm performs on problems of practical interest, we need a quantum computer! Barring that, all we can say is that it seems to provide significant speedups for certain types of instances, but for other types of instances, seems not to be much better than classical simulated annealing. (One can at least hope for Grover speedups compared to simulated annealing, but even those seem to require clever choices of “annealing schedule.”) In particular, it’s fantastically unlikely that the adiabatic algorithm would be a “magic bullet” for solving NP-complete problems in quantum polynomial time, which I think was the impression created by a lot of the D-Wave publicity.

    Anyway, that’s the short answer. 🙂

  3. sean barrett Says:

    The issue of fault tolerance in an adiabatic QC is a fascinating one. My understanding of existing approaches for FTQC (those based on error correcting codes, and those based on topological QC) is that the elementary logical operations must be restricted to a discrete set. (If the set of allowed operations is continuous, how does the FTQC scheme “distinguish” an intended gate from one with a small additional error?) It’s not obvious to me how one can do this within an adiabatic QC, but perhaps that’s just a failure of my imagination.

  4. Not Even Right Says:

    Scott,

    I hope you can solve at least one of the 4 problems suggested by D-Wave’s representative.

  5. wolfgang Says:

    > there are many, many things in this world that I’m bent on destroying

    like what?

  6. Geordie Says:

    Scott, Scott, Scott… Always looking on the bright side 🙂

    I wanted to clarify two issues raised here.

    1. The comment about “verging on inaccurate” was specifically about the reporting on our descriptions of what these machines could do (solving NP-hard problems), and not about anything I’ve ever actually said. You are quoting me out of context. I stand by 100% anything I’ve ever said to anyone about these machines.

    2. It is comically incorrect to claim that it is too early to start looking for customers, for two reasons.

    The first is that there are already buyers and sellers of quantum computers for research (Bruker NMR machines) and our systems are already much more useful and interesting than these.

    The second is that we expect that even for fairly small systems (~1,000 qubits, which we plan to do this year), this type of special-purpose hardware can beat the best known classical approaches for instance classes that embed directly onto the hardware graph, even if the “spins” are treated entirely classically, which we assume is a worst-case bound. Often forgotten in this type of conversation is the fact that there is a long history of simple special-purpose analog hardware outperforming general-purpose machines. If you want an example, look at Condon and Ogielski’s 1985 Rev. Sci. Instrum. article: their Ising model simulator beat the fastest Cray of the time in Monte Carlo steps/second. You can’t draw conclusions about the general utility of this type of approach without looking at details.

  7. Scott Says:

    > there are many, many things in this world that I’m bent on destroying
    like what?

    I dunno … mandatory gym classes, malaria, homophobia, a large percentage of CO2 emissions, the belief that if science doesn’t know everything then it therefore doesn’t know anything, the Republican Party’s power base, really bad physics popularizations… (mind you, I haven’t yet figured out how I’m going to destroy any of them).

  8. Joe Says:

    Geordie asserted that such a threshold has to exist, because of the equivalence of adiabatic and standard quantum computing.

    There is a simple counterexample to any claim that a universal quantum computer can necessarily be made fault-tolerant.

    We can build universal quantum computers using globally controlled spin chains of a single species, where control over the ends of the chain is also present. Such a system cannot be made fault-tolerant, however (at least for spin-1/2), since you can only reset up to two qubits at a time (the ends of the chain), independent of how long the chain is. It then becomes impossible to remove entropy from the chain as quickly as it enters.

    It’s an interesting fact that universality does not imply fault-tolerance, and there are quite a variety of counterexamples.

  9. rrtucci Says:

    I’ve heard in the press that D-Wave is working in collaboration with (and, therefore, has a tacit seal of approval from) NASA and GOOGLE and HARVARD BUSINESS SCHOOL. I’m looking forward to the next D-Wave press release, wherein we are alerted to the fact that MIT has joined this club.

  10. Jonathan Vos Post Says:

    Is it possible that: (1) mandatory gym classes, (2) malaria, (3) homophobia, (4) a large percentage of CO2 emissions, (5) the belief that if science doesn’t know everything then it therefore doesn’t know anything, (6) the Republican Party’s power base, (7) really bad physics popularizations…
    are equivalent in any sense OTHER than that (a) you are bent on destroying them, and (b) that you haven’t yet figured out how to destroy any of them?

    To begin with, George W. Bush visibly and publicly puts more energy into running, bicycling, treadmilling, brush-cutting, and otherwise working out than into moral or intellectual effort, so that his very fit body carries a very unfit mind with no capacity for change, introspection, curiosity, or compassion.

    George W. Bush has done less for malaria than the Bill and Melinda Gates Foundation.

    George W. Bush encourages a deeply homophobic administration, regardless of Cheney’s daughter.

    George W. Bush, mainlining petroleum into America’s veins (here, have a hit of this primo Alaskan Wilderness oil, with a delightful soupçon of ecological destruction and extinction), somehow admitted grudgingly in last night’s State of the Union address that there might maybe perhaps be some sort of global climate change, but otherwise has driven Earth toward a Venus-like greenhouse-gas catastrophe.

    George W. Bush supports Intelligent Design, going a step beyond Ronald Reagan (who played an evolutionary biologist in “Bedtime for Bonzo” and said that “evolution is only a theory”).

    George W. Bush is married to the Republican Party’s power base (I think the ceremony was performed by Rove, down in Cheney’s undisclosed location).

    Now, if I can only transform this to really bad physics popularizations, I will have proven that all 7 reduce to each other, and the hierarchy of things that you admit to being bent on destroying has collapsed to a single equivalence class.

    So close to formal proof… should I invoke the Hillary Lemma, or the Barack-Banach Space Theorem?

  11. roland Says:

    As I understand it, D-Wave doesn’t know for sure that their qubits are entangled. Nevertheless, Geordie claims that their quantum computers are already more useful than some other models. Scott, does that make any sense?

  12. Greg Kuperberg Says:

    [Uncertainties about the science mean] that any talk of “lining up customers” is comically premature.

    But it’s not a comedy, Scott. For all of the confusion about what D-Wave has achieved, Rose is completely clear that he wants to line up customers. He may well achieve that much. In a sense he already has customers; he has lined up venture capital.

    If sober members of the QC community agree that it is outrageously premature for him to line up customers, then the community has an ethical duty to warn those customers away. Many potential customers are completely unprepared to judge D-Wave’s credibility; they have no one but us to advise them. Even if you think that distortions in PR are irrelevant to the science, they certainly are relevant to this issue. So it makes no sense to brush away the Sudoku stunt as “not answering” physics questions. It may not be interesting as science, but it does shed a harsh light on other questions.

    After all, what would you think if D-Wave promised special quantum NMR methods to map cancer? How much better is it if it’s only money at stake?

  13. Stas Says:

    In a sense he already has customers; he has lined up venture capital.
    The art of rainmaking… This company has existed for almost 10 years as a “start-up”; why not BS more investors into the venture…

  14. Scott Says:

    Roland, I’m not sure which exact claim of Geordie you’re referring to, and your question can be interpreted in several ways.

    It’s widely conjectured that, if a quantum computer has no entanglement (i.e. is in a separable mixed state at every time step), then it can solve no more problems in polynomial time than a classical computer can. Certainly we don’t know of any problem for which a completely unentangled quantum computer would provide any asymptotic advantage (though it’s been an open problem for a decade to rule this out).

    On the other hand, in Geordie’s comment here, he seems to say that even with no entanglement whatsoever, he still expects a speed improvement over conventional computers. If that were so, then the Orion (is it still called that?) would really be a fast special-purpose classical computer. I can’t evaluate that possibility, and can only note that it’s completely different from what they said they were trying to do. From my perspective, it’s as if they gave up on computer-building altogether, started a profitable business selling menswear, and then confidently exclaimed: “You see?! The doubters were wrong!”

  15. nextquant Blog Says:

    Hi Scott,

    thanks for your explanations on quantum adiabatic computation!

    it’s as if they gave up on computer-building altogether, started a profitable business selling menswear

    Besides ‘menswear’ aka superconducting computing there also seems to be an option for ‘rubber boots’ aka leveraging the patent portfolio.

  16. nextquant Blog Says:

    Hi Geordie,

    buyers and sellers of quantum computers for research (Bruker NMR machines) and our systems are already much more useful and interesting than these

    To me, this comparison would only make sense if D-Wave targeted the same market as Bruker does with its NMR systems, i.e. academic research. Of course, in comparison to versatile NMR spectrometers, you’d be offering a very specific hardware (superconducting devices) and software (QAC) platform.

    Is this a new element in D-Wave’s go-to-market strategy?

  17. darren Says:

    Hi Scott,

    I don’t think the analogy of giving up computer manufacturing in favor of menswear is really correct. Whatever their ultimate product is, it will come as a direct result of what at least began as an effort to build a real quantum computer. If their work has led them towards something slightly different, so be it. That’s a pretty common pattern in research.

    As for whatever public misunderstanding or misuse of the term “quantum computer” may result, we have to admit that many branches of science have been down this very path before.

  18. Scott Says:

    Darren: Maybe a better analogy would be if Virgin Galactic, or some other company working on private spaceflight, decided instead to put its technology into amusement park rides. Yes, in some sense amusement parks solve the same problem as low-Earth orbit (namely, turning disposable cash into weightlessness) — in fact you get much more weightlessness for your money. But had people known from the beginning that that was going to be the outcome, they wouldn’t have paid nearly as much attention.

  19. John Sidles Says:

    Here’s a serious question: What other quantum technology companies have given MIT/xQIT talks like DWave’s?

  20. milkshake Says:

    The comment “…there are already buyers and sellers of quantum computers for research (Bruker NMR machines) and our systems are already much more useful and interesting than these”… is not just verging on inaccurate. What Geordie Rose is saying here is actually propaganda of Scientologist caliber:

    People use these NMRs mostly to figure out the exact structure of small molecules or to study proteins. NMRs are irreplaceable for even the most basic organic chemistry work; hence every single academic chemistry department and every single pharma company is obliged to spend hundreds of thousands – if not millions – on buying these instruments. So right there you have a pretty sizable global market, one that can support 3 major brands (Varian, Bruker, JEOL) plus the companies that build the superconducting magnets and console components for them.

    I encourage D-Wave investors to call Bruker and ask how many instruments Bruker sold over the last 2 years, and what percentage of those newly installed instruments are fully dedicated to quantum computing. The fact that a very few academic groups also use their NMRs to manipulate multispin systems in organic molecules and observe entanglement effects has absolutely no bearing on the commercial prospects of D-Wave.

    If I understand this correctly: Is it true that D-Wave never even demonstrated the proof of concept? Isn’t it true that the demo of their prototype was carefully staged for the benefit of investors to appear as though it were providing such proof – with full internal knowledge that it was not?

    You know, I used to work for a start-up company in the early 90s – it was a combichem biotech, and our “proprietary core technology” did not work so great – hence lots of smoke and mirrors.

  21. nextquant Blog Says:

    It seems that D-Wave recently got another round of funding, worth $17M.

    Mind says: Are you aware of any other quantum computers around the world that are ready for commercialization (or at least near it), whether in private industry or through government efforts?

    Geordie Rose says: Unfortunately no-one but us is really trying, at least in the public domain. So no.

    So, what about Bruker? 😉

  22. John Sidles Says:

    Perhaps there is not so much more to be said about D-Wave, than has already been said, but there is IMHO a great deal more to be said about the commercial potential of QIT.

    How many people noticed in the 4 December Nature that this year’s CCDC molecular structure prediction contest was won with a perfect score (for the first time ever)?

    And that this score was achieved with a quantum simulation program (for the first time ever)? And that these next-generation quantum simulation technologies are being rapidly commercialized?

    This points out an empirical fact that IMHO is very important for young QIT researchers. Namely, humanity’s capability to accurately simulate quantum-level systems has been increasing exponentially for the last forty years.

    This is about as long as computer power has been increasing exponentially, and there is no sign that either trend of exponential improvement will end anytime soon. Good.

    Yeah, but do these new techniques embody all the latest sexy ideas of QIT? Ya, sure, you betcha, and definitely! Although admittedly, not every mathematician, quantum chemist, and QIT researcher appreciates that this cross-fertilization is taking place.

  23. John Sidles Says:

    Hmmm, I see this week’s Nature has a followup article on molecular structure prediction.

    What’s that? You say you don’t see the word “quantum” anywhere in that article? Well, it’s there … if you think carefully about the mechanisms by which molecular polarizability really works.

    Scott, these Nature articles raise issues that are very exciting for QIT students: (1) these new enterprises depend critically on the quantum insights and algorithmic techniques of QIT, (2) these enterprises have plenty of room to grow, and (3) they are now hiring! 🙂

    IMHO, the above considerations have more than sufficient weight to counterbalance the prevailing dyspeptic skepticism on this thread about the D-Wave enterprise.

    Hopefully, students will find this thread to be improved by Nature’s citing specific grounds for everyone to be immensely optimistic about both the present and the future of QIT.

    For students, optimism is a vitally important resource, and that is why we professors are always on the lookout for technically sound reasons to embrace it. 🙂

  24. rrtucci Says:

    John Sidles,
    Sadly, in Quantum Computation, academics in the US are mostly uninvolved in anything that might have commercial prospects. They seem to consider anything commercial as being dirty, below their elevated station. They are oblivious to the fact that if they collaborated with commercial ventures (or even created their own), they would be producing new jobs for their students.

    I think D-Wave is a phony company that will go bankrupt eventually. But Geordie Rose is right when he says that he has no competition. That is part of the reason why this bizarre company has survived so long. It’s simple biology: if there are no wolves, the deer population will grow to an unhealthy size.

  25. Hans Says:

    John Sidles,
    what do the molecular structure calculations have to do with QIT? To me it looks like all they did was play around with quantum mechanics, but the actual codes they wrote were normal codes, i.e. basically matrix inversions etc… I am not an expert in CS (I am actually a physicist involved in structure calculations), so forgive me if I am missing the point here.

  26. John Sidles Says:

    Hans, that’s a great question! Since you work in the field of structure calculations, maybe you could comment upon the widespread perception (which I share) that we don’t really know why quantum chemistry codes (and also condensed matter codes) work as well as they do?

    Just to set the stage, the contest-winning AMS/GRACE code presently computes chemical binding energies with an accuracy of (in physics-friendly units) about 0.001 eV per atom in the molecule. That is pretty incredible accuracy for a large, strongly correlated quantum system, containing hundreds of electrons, whose energy scale is of order a Rydberg unit (13.6 eV) per atom.

    Recognizing that the achievement of this level of accuracy is a genuinely well-posed mathematical and scientific mystery is (IMHO anyway) a necessary first step toward believing that QIT has a lot to say about solving it.

    My own preferred idiom for thinking about these issues has evolved to be mainly quantum informatic (and geometric), and I think it is pretty fair to say that the scientific community is only at the very beginning of applying these tools to practical quantum simulation problems.

    From this point of view, large protein molecules are big quantum computers that are in contact with low-temperature reservoirs; these reservoirs are equivalent to covert measurement and control processes that slowly drive the “computer” into its lowest-energy state.

    IMHO, an exceedingly thought-provoking discussion upon this topic is a recent preprint by Beylkin, Mohlenkamp, and Pérez. But if you ask “Yeah, but how precisely does this work connect to QIT” … well … that’s why they call it “research!”

    The overall idea being, that if ab initio quantum chemistry codes were designed by QIT theorists, they might (1) look very different and (2) work very well indeed. 🙂

  27. joe Says:

    John: You might be interested to know that your wish may come true – Alan Aspuru-Guzik at Harvard (http://aspuru.chem.harvard.edu/About/) did his Ph.D. on the development of a quantum Monte Carlo code for chemistry applications, and is now working on quantum algorithms for chemistry. I wouldn’t be surprised if, sooner-or-later, some of his later quantum computing work (and exposure to QIT) inspires changes in his quantum chemistry codes. Time will tell.

  28. Peter Shor Says:

    Hi Scott,

    I keep seeing people making the claim that if they don’t have any entanglement, quantum machines are no better than classical machines. This has been conjectured, but it is not known whether it is true, and I’m not even sure that it is widely believed. It certainly is not widely believed in the sense that the Riemann hypothesis, P != NP, or the impossibility of factoring in polynomial time are widely believed.

  29. Scott Says:

    Hi Peter,

    Yes, I mentioned this as an open problem earlier in the thread (see comment #14). It’s actually one of my old flames — I worked on this problem for some time at Berkeley without success, and would love to revisit it someday.

    My own conjecture is that quantum computers without entanglement give no more power than BPP. But even if one doesn’t buy into that, it’s certainly true that no problem is known for which quantum computers without entanglement give any asymptotic improvement over known classical algorithms. That seems to me like a key point in assessing D-Wave.

    Incidentally, notice that the power of quantum computing without entanglement, unlike the power of quantum computing with entanglement, might in principle depend strongly on the details of the model — even things as seemingly trivial as qubits vs. qutrits and real vs. complex amplitudes.

  30. Clemens Adolphs Says:

    Scott: (Not to be taken too seriously) You want to “destroy” mandatory gym classes? Although I mostly agree with what you write, I must reject that!
    In a country where approximately two thirds of the population are overweight? Heavens!

    When I was in school, I did not like gym classes either. Today, however, I’m thankful for a gym teacher who got my butt up for at least two hours a week 😉 Here in Germany, parents and politicians are complaining that there are too few gym classes 🙂

  31. Scott Says:

    Clemens, to whatever pathetic and inadequate extent I do exercise today, it’s in spite of, rather than because of, my prison-camp experiences in high school. Maybe gym classes are better in Germany, but even if so, I still wouldn’t make them mandatory. (Or at the very least, I’d give students a wide choice of activities, and allow running, ping-pong, etc. in place of team sports.) I attach the highest suspicion to mandatory activities of any kind, including math and science classes.

  32. Job Says:

    What are the features of quantum computers without entanglement that might give them an advantage over classical machines?
    I only have a basic understanding of quantum computers, but I don’t see what quantum computers without entanglement would bring to the table. Can someone explain?

  33. Douglas Knight Says:

    Germany doesn’t have mandatory gym. Vocational school is an alternative.

  34. Scott Says:

    Job, a quantum computer in an unentangled pure state certainly can’t do anything more than a classical computer. But suppose the computer is in an unentangled mixed state — that is, a probability distribution over pure states. In that case, it takes exponentially many classical bits to write down the state of the machine at any given time step — just as in the entangled case. Furthermore, the transition from one unentangled mixed state to another could in general be very complicated — that is, the final mixed state might still be unentangled, but with a completely different decomposition into pure states than the initial mixed state had. For this reason, we don’t in general know how to simulate such a machine efficiently on a classical computer.

    Now, to some of us, it’s “intuitively obvious” that this is just a technical issue, and that separable mixed states can’t actually be exploited to get a significant speed advantage (or possibly, any speed advantage) compared to classical computers. But try proving it!
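    To make both halves of this concrete, here’s a toy check in Python (my illustration, nothing D-Wave-specific): a CNOT entangles the product state |+>|0>, yet it maps the maximally mixed state I/4 (a separable mixed state) to itself. For two qubits the Peres-Horodecki partial-transpose test is an exact entanglement criterion, so both cases can be verified directly.

    import numpy as np

    # Toy check: the same two-qubit gate (CNOT) entangles some product
    # states while mapping one particular separable mixed state (I/4) to
    # another separable state. The partial-transpose (PPT) criterion is
    # exact for two qubits.

    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    def partial_transpose(rho):
        """Transpose the second qubit of a 4x4 two-qubit density matrix."""
        r = rho.reshape(2, 2, 2, 2)             # indices (a, b, a', b')
        return r.transpose(0, 3, 2, 1).reshape(4, 4)

    def is_entangled(rho):
        """Peres-Horodecki: for 2 qubits, entangled iff the partial
        transpose has a negative eigenvalue."""
        return np.linalg.eigvalsh(partial_transpose(rho)).min() < -1e-12

    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    zero = np.array([1, 0], dtype=complex)
    psi = np.kron(plus, zero)                    # pure product state |+>|0>
    rho_pure = np.outer(psi, psi.conj())
    print(is_entangled(CNOT @ rho_pure @ CNOT.conj().T))   # True: a Bell state

    rho_mixed = np.eye(4, dtype=complex) / 4     # separable mixed state I/4
    print(is_entangled(CNOT @ rho_mixed @ CNOT.conj().T))  # False: still I/4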

  35. milkshake Says:

    I grew up under communism, and our elementary school gym teacher was a lunatic bully and certifiable sadist. He aspired to be a professional athletic trainer, but somehow he ended up in our school and he hated every bit of his job, especially the clumsy kids. Apart from giving failing final gym grades to kiddies, he liked to call the parents to complain and throw his weight around. One of his threats was mandatory assignment into an additional physical-therapy class intended for people born with a handicap or with a serious weight problem – a very embarrassing thing for a kid. This threat of the ultimate shame of being forced to attend the “special needs class” was always hanging over me.

    Just when I got to junior high (and for the first time enjoyed a respite from a psycho gym teacher), all of a sudden in the middle of math class a teacher asked all the boys to step to the front by the blackboard, and there was some very gym-trainer-looking lady. This lady ordered us to do stretching, pushups, and then a few rather difficult and painful exercises. I got very frightened – that special-needs physical therapy thing again! I had not been paying attention to what was said when she was introduced – but it was clear to me that it had to be that damned mandatory class for fatsos and misshapen ones – and I must do everything in my power not to end up there! So I exerted myself, tears coming out from the effort – and to my horror the instructor lady came and stood right next to me, and she was watching me very closely – and she stopped everybody else and asked me to do more terrible exercises. I was petrified, because I knew she was a professional instructor for defectives and I couldn’t fool her!

    Then she said: “Young man, I rarely see someone of your build who has such amazing joint flexibility – would you like to join our full-time ballet school?”

  36. milkshake Says:

    Douglas Knight: You are talking about the “Gymnasium” = academically-oriented high school, in German and Czech.

    (In the US the meaning of “gym” is different: 1. sports class; 2. the exercise room with the fitness machines.)

  37. matt Says:

    Scott,

    In the problem you mention (unentangled mixed state), you mean that at every step the density matrix is an incoherent mixture of product states? Then, to complete the description of the problem, what set of gates are you allowed to use when building this machine? (You can’t allow arbitrary unitary operations on two qubits, or you’d create entanglement.) Do you mean I can apply any operation on a fixed number of qubits that happens to preserve the property of having an unentangled mixed state? Or am I allowed only operations on a fixed number of qubits that send every unentangled mixed state into another unentangled mixed state? Or what?

    Matt

  38. Scott Says:

    Matt, you can apply any operation on a fixed number of qubits (including an arbitrary unitary one), so long as in this instance it happens to preserve the property of having an unentangled mixed state.

  39. John Sidles Says:

    I had the same question as Matt … I tentatively decided that maybe there is only *one* kind of POVM-based (what other kind is there?) quantum computer whose internal state remains unentangled for *all* unentangled inputs.

    Namely, a noisy quantum computer. That noise being POVM-equivalent to a covertly observed quantum computer. That observation process being such that each observed qubit swiftly collapsed to one of only two stationary binary states. That collapse rate being faster than the computer clock rate.

    The net result being, of course, QIT-equivalent to a classical digital computer … which seems like kind of a boring answer.

    But on the other hand … heck … maybe a prosperous industry could be founded upon unentangled QIT technology! That might even create jobs for information theorists! 🙂

    Seriously Scott (or anyone), if we restrict ourselves to spin-1/2 qubits, is there any “exotic” quantum computer architecture (meaning a QIT architecture other than that of a classical binary computer) whose internal state remains unentangled for all input unentangled states?

    Alternatively, if we seek to eliminate quantum entanglement, are we thereby forced to embrace not only a restricted computational architecture, but also a restricted set of input states?

  40. Scott Says:

    John: No, I don’t think you can do anything quantum with operations that map every separable state to a separable state. (Proof: Exercise for the reader…) The interesting question is what you can do with operations that map some separable states to entangled ones, but map the particular separable states you happen to have to other separable states.

  41. mp Says:

    Scott, about the “quantumness” of operations that map every separable state into a separable state, you may be interested in having a look at

    C. Bennett et al., “Quantum Nonlocality without Entanglement”, Phys. Rev. A 59, 1070 – 1091 (1999)

    Abstract:

    “We exhibit an orthogonal set of product states of two three-state particles that nevertheless cannot be reliably distinguished by a pair of separated observers ignorant of which of the states has been presented to them, even if the observers are allowed any sequence of local operations and classical communication between the separate observers. It is proved that there is a finite gap between the mutual information obtainable by a joint measurement on these states and a measurement in which only local actions are permitted. This result implies the existence of separable superoperators that cannot be implemented locally. A set of states are found involving three two-state particles that also appear to be nonmeasurable locally. These and other multipartite states are classified according to the entropy and entanglement costs of preparing and measuring them by local operations.”

    I.e., “separable” operations (if I remember correctly, they are the larger class of operations that map sep into sep, and preserve that property under tensoring) are more powerful than LOCC.

  42. John Sidles Says:

    As an aside that may be of interest to some, my colleague Joe Garbini tells me that the UW/ME Department is receiving surprisingly few applicants for its advertised faculty position in quantum system engineering. If you have any questions about this position (and especially if you are willing to get an application in within the next couple of weeks), Prof. Garbini would welcome inquiries.

    Also, Scott, thank you for answering my question so clearly.

  43. matt Says:

    To fill in Scott’s exercise for the reader, suppose we start in a separable _pure_ state. And suppose we apply a sequence of operations, each one acting on a fixed number of qubits, such that each of these operations has the property of mapping _every_ incoherent sum of product states to an incoherent sum of product states. Then, we can simulate it classically as follows: after each operation, decompose the resulting state as an incoherent mixture of product states, pick one of the pure product states at random according to the given probability distribution, and then apply the next operation. The decomposition of the resulting state as pure states can be done in constant time because each of the operations acts on a fixed number of qubits.

    Now, if, as Scott says, you only know that the operations happen to send the given state into a separable state, then this doesn’t work, because after decomposition, the resulting state might not be sent into a separable state.
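    In schematic Python, the simulation just described looks like this. (A sketch only: decompose_separable is a hypothetical helper left as a stub, since writing out an explicit separable decomposition is the fiddly part; for operations on O(1) qubits it is at least a constant-size problem, which is the whole reason the simulation is efficient.)

    import numpy as np
    from functools import reduce

    def decompose_separable(rho):
        """Hypothetical helper (stub): given a separable density matrix on
        O(1) qubits, return a list of (probability, tuple of single-qubit
        vectors) pairs. Constant-size, hence 'constant time' in principle;
        writing it out is the fiddly part and is omitted here."""
        raise NotImplementedError

    def simulate(product_state, operations):
        """Sample one classical trajectory of a circuit whose operations
        (quantum channels given as Kraus-operator lists, plus the indices
        of the qubits they act on) each map every incoherent mixture of
        product states to another such mixture."""
        state = list(product_state)              # one vector per qubit
        for kraus_ops, qubits in operations:
            # Apply the channel to the O(1) qubits it touches.
            local = reduce(np.kron, (state[q] for q in qubits))
            rho_in = np.outer(local, local.conj())
            rho_out = sum(K @ rho_in @ K.conj().T for K in kraus_ops)
            # Decompose the (by assumption separable) output and sample
            # one pure product state from the mixture.
            probs, branches = zip(*decompose_separable(rho_out))
            pick = branches[np.random.choice(len(branches), p=list(probs))]
            for q, vec in zip(qubits, pick):
                state[q] = vec
        return state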

  44. Scott Says:

    Thanks, Matt!

  45. John Sidles Says:

    … after each operation, decompose the resulting state as an incoherent mixture of product states …

    Not to be picky, but ain’t this step NP-hard?

  46. Scott Says:

    Yes, for N-by-N systems. For 2-by-2 systems you can do it in constant time. 🙂

  47. matt Says:

    What Scott just said: as long as the gates act on only 2 (or any fixed number of) qubits at a time, it should be fine, because all the other qubits are still in a product state.

    So, here’s a problem for Scott, which I am vaguely reminded of by this discussion. Take a matrix product state with fixed bond dimension k, and consider a local Hamiltonian. The problem of optimizing the energy by varying only over 1 of the matrices in the state, keeping all the others fixed, can be solved as an eigenvalue problem, and hence in time polynomial in k. The problem of optimizing over a large number of matrices, with certain other matrices held fixed, was shown to be NP-hard by Eisert (though this doesn’t necessarily show that finding the global minimum is actually NP-hard, because he did hold some of the matrices fixed; but it is good evidence to that effect). How about the problem of varying over 2 or 3 of the matrices at a time? The only algorithm I know is a Gröbner basis algorithm, which (in the worst case) can take time double-exponential in k. Maybe in this particular case it is faster. Maybe there is a faster way to do it. Anyone know? There can potentially be some practical interest in this question.

  48. John Sidles Says:

    Doh! So in quantum computing, you’re allowed to turn off the qubit interactions! That’s easy! 🙂

  49. Scott Says:

    Regarding my comment #2, on the limits of the adiabatic algorithm: I just had a long chat with Ed Farhi, Sam Gutmann, and Jeffrey Goldstone, and they stressed to me the following three points.

    1. For physicists, the word “generically” generically means something different than I meant. For computer scientists, pessimists that we are, the worst case is the “generic” case! So if we can construct an example where (say) a local search algorithm takes exponential time, we feel entitled to say that the algorithm “fails generically” on a whole class of examples of that general kind. For physicists, on the other hand, “generic” seems to mean something closer to “average-case,” which isn’t what I was talking about at all.

    2. In the case of the result of Reichardt that I mentioned, more-or-less the same result was proved earlier by Daniel Fisher, but in a physics context rather than a CS one.

    3. In both Fisher’s and Reichardt’s examples, the running time actually increases like c^√n and not like c^n. Of course, if the adiabatic algorithm could solve any 3SAT instance in c^√n time, then while that wouldn’t improve exponential to polynomial time, it would still be a huge breakthrough and a dramatic improvement over what we know how to do classically. However, there are good reasons to think that the c^√n behavior is specific to the Fisher-Reichardt example and will not carry over to other 3SAT instances.

  50. Alan Aspuru-Guzik Says:

    Dear Joe, Dear John Sidles,

    I am a “silent” lurker on this blog from time to time. This is my first post (I was mentioned). I want to talk about what John mentioned and what Joe talks about:

    John Sidles says:

    >The overall idea being, that if ab initio quantum chemistry
    > codes were designed by QIT theorists, they might (1) look
    > very different and (2) work very well indeed.

    And then Joe says, talking about our research group:
    > I wouldn’t be surprised if, sooner-or-later, some of his
    > later quantum computing work (and exposure to QIT)
    > inspires changes in his quantum chemistry codes. Time will tell.

    Although my group and I are not working in that specific direction right now, I do believe that the recent work by Steven White, Frank Verstraete, Guifre Vidal, Garnet Chan and many others is an example of what John Sidles is talking about.

    Take a look at this recent arxiv post by Schuch and Verstraete,
    http://arxiv.org/abs/0712.0483

    Cheers,
    Alan

  51. John Sidles Says:

    Dear Alan

    Thank you very much for that link to the Schuch and Verstraete preprint, which is indeed interesting … and it will take me a considerable time to assimilate.

    Rather than try to say something sensible several months or years from now, I’ll say what I think at present! 🙂

    Namely, it seems to me that our present understanding of the complexity of quantum simulation has some resemblance to our understanding during the 1940s to 1980s of the run-time of Dantzig’s simplex algorithm.

    Namely, the simplex algorithm was known to require exponential run-time in the worst (generic?) case, but in practice, such cases were (almost) never observed.

    It took quite a long time to understand this … I recall that Stephen Smale made some major contributions … and the resulting explosion of work on convex sets continues even to the present day.

    As is distressingly often the case, I wish that someone who knows more about the mathematics of convex sets than I do would comment upon this!

  52. Zelah Says:

    Hi Everyone,

    Thanks in advance for any answer to my questions.

    1. Is it possible for QIP(1) = QIP(2)?
    2. Is it possible for QIP(2) = QIP(3)?

    Zelah

  53. Scott Says:

    Zelah, QIP(1) equals QMA, and sits somewhere between MA and PP. QIP(3)=QIP(k) for all k≥3, and sits somewhere between PSPACE and EXP. We still don’t know much about QIP(2).

    So sure, it’s conceivable that QIP(1)=QIP(2) — though that would imply AM⊆QMA, a really big result. It’s also conceivable that QIP(2)=QIP(3). But it would be absolutely astonishing if both were true, since then we’d have QMA=PP=PSPACE.

    My own belief is that neither is true — i.e., that QIP(1)≠QIP(2)≠QIP(3).
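    For reference, the known inclusions being used here (as of early 2008; a summary of standard results, nothing new) line up as:

    \begin{align*}
      \mathrm{MA} &\subseteq \mathrm{QMA} = \mathrm{QIP}(1) \subseteq \mathrm{PP} \subseteq \mathrm{PSPACE} \subseteq \mathrm{QIP}(3) \subseteq \mathrm{EXP},\\
      \mathrm{QIP}(1) &\subseteq \mathrm{QIP}(2) \subseteq \mathrm{QIP}(3) = \mathrm{QIP}(k) \quad \text{for all } k \ge 3.
    \end{align*}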

  54. daniel Says:

    Google apparently wants to use the D-Wave machine to solve pattern recognition problems. In the video link above, Google claims that brain functions are describable by quantum annealing processes.

    The guy in the video uses the experiences of people on a hallucinogen to support his claim (it’s towards the end).

    Hmmm, this seems kind of silly. Maybe these dudes are on the hallucinogen (cheap shot)?

    Is it at all possible that quantum information can be preserved in the “alternative bases” that the video describes inside of our hot wet brains? Seems unlikely.

  55. Job Says:

    I’m confused by that video. If quantum computers aren’t thought likely to be able to solve NP-complete problems, then why would the human brain’s success in dealing with some NP-complete problems (image recognition, problem solving) suggest that the brain is using some elements of quantum computation?

    In addition, I don’t find that human creativity/intuition indicates that humans are making use of a process that can even somewhat efficiently solve NP-complete problems. I think creativity/intuition is most similar to a randomized brute-force search – the creative process being “let random feedback flow through your mind until something useful comes up, or not” – and it works because problems in NP can be efficiently verified. Can human creativity/intuition be used to solve problems outside of NP? Meaning, can the creative process find a solution to a problem whose solutions can’t even be efficiently verified (no random guessing)? I know a negative answer to that question doesn’t say much – on the other hand, a yes would rule out creativity as a random plug-and-play process.

  56. Joe Says:

    daniel,

    In case you missed the disclaimer at the start, the Google Tech Talks don’t necessarily reflect Google’s position. They just seem to get in whatever interesting speakers they can.

  57. rrtucci Says:

    Too bad the conversation about D-Wave degenerated into a conversation about complexity classes. Complexity classes are the opium of computer scientists :)

    D-Wave has already consumed $40M, and now it’s getting $17M.
    The wonders that $57M could have accomplished in more capable hands!

  58. John Sidles Says:

    rrtucci asks (implicitly): What could $57M have accomplished in more capable hands?

    There’s no need to wonder. Let’s look at some data!

    Hmmm, if invested in the simulation-and-design company ANSYS in 2000, $57M would now be worth $1.14B.

    Or possibly more, if ANSYS’s research managers had invested the funds in upgrading the ANSYS Multiphysics Package to encompass quantum state-spaces.

    Evidently there’s money in algorithms. 🙂

  59. EvilEmperor Says:

    rrtucci: It’s amazing the insights you can get living in your mom’s basement. Where did you get your phd again?

  60. John Sidles Says:

    IMHO, posts like the above are purely rude.

    Not wishing a rude poster to have the final say (and yet with nothing particularly brainy coming to mind), here’s a link to some Ole and Lena jokes that are certified by Lutheran ministers to be clean!

    Lena: “Ole, stant in front of my car and tell me if da turn signals are vorking.”
    Ole: “Yes, No, Yes, No, Yes, No, Yes, No….”

    That ought to exorcise the evil spirits of rudeness! 🙂

  61. John Sidles Says:

    Now I know who rrtucci really is. Or might be. 🙂

  62. Peter Shor Says:

    Hi Scott. I’d just like to dispute the following claim of yours.

    1. For physicists, the word “generically” generically means something different than I meant. For computer scientists, pessimists that we are, the worst case is the “generic” case! So if we can construct an example where (say) a local search algorithm takes exponential time, we feel entitled to say that the algorithm “fails generically” on a whole class of examples of that general kind. For physicists, on the other hand, “generic” seems to mean something closer to “average-case,” which isn’t what I was talking about at all.

    I think computer scientists and physicists are using essentially the same meaning. No computer scientist (I hope!) would ever say that the simplex algorithm generically takes exponential time, even though that is indeed the worst case.

  63. Scott Says:

    Peter, let me amend what I said: the worst case is the generic case unless the worst case is “obviously pathological,” as evidenced by its being far from generic. 😉

  64. Gil Kalai Says:

    What is known on FT for adiabatic QC?

  65. John Sidles Says:

    I would like to point out to students (and to myself) that Scott and Peter are discussing a substantive issue that has considerable mathematical depth to it.

    Feynman in his 1982 article Simulating physics with computers argued that generic physics problems are exponentially hard to simulate on a classical computer. But Feynman’s article is quite vague (deliberately?) about what constitutes generic physics.

    Twenty-six years later, the question of what constitutes a generic quantum system still has no very good answer (as far as I know). Nonetheless, it is widely accepted, as Mike and Ike’s textbook expresses it:

    “[Because] the number of complex numbers needed to describe a quantum system grows exponentially with the size of the system … quantum simulation is likely to be an important application of quantum computers.”

    Similarly, D-Wave Systems confidently asserts that:

    Quantum computers are the only known solution to this problem. They are able to directly solve the fundamental equations of quantum mechanics for any physical system.

    These assertions have always rested upon undefined mathematical foundations. In particular, it is increasingly hard to understand why ab initio quantum chemistry codes seem to work better every year. If these codes keep improving at the present rate for another generation or two, there will be mighty few real-world systems that they can’t simulate.

    Is there a QIT reason why real-world quantum simulation algorithms (empirically) don’t seem to be running out of steam? Are there foreseeable end-points at which they will run out of steam? It is slowly dawning upon several research communities, simultaneously, that these are genuine QIT mysteries that have considerable mathematical depth and substantial real-world applications.

    Like a Brooklyn-born friend used to say: Hey, dat’s poifect! 🙂

  66. matt Says:

    Comment got cut off again! I’m guessing the use of the “less than” symbol confuses the html. Let me try one more time without that.

    Regarding the success of matrix product state codes in 1d for finding ground states, this is because of the entanglement entropy not being too large. For systems described by conformal field theories, the entanglement entropy (including the Rényi entropies with alpha less than 1, which are what you need to show the ability to approximate the state by an MPS) grows only logarithmically with system size, as discussed by Verstraete and Cirac, 2006. For general 1d systems with local interactions and a spectral gap, the entropy can be bounded above by a function only of the gap, interaction strength, and local Hilbert space dimension, as shown by Hastings, 2007. This last result also applies to the Rényi entropy and the ability to approximate by an MPS.

    For time dynamics, it depends. For a local perturbation (say, adding a particle to a ground state), the entropy grows only logarithmically, as discussed by Vidal, Cardy and Calabrese, Schollwoeck, and others. For a “bulk perturbation” (changing the Hamiltonian), the entropy grows linearly; see recent work by Schuch, Wolf, Vollbrecht, and Cirac.

    For DFT, I don’t think we know why it works. Hohenberg-Kohn only shows that some functional exists, it doesn’t tell us about it being a tractable functional or not. Personally, I never liked the approach of teaching DFT in grad school based on the H-K theorem…it really doesn’t justify DFT, but it is given as a justification.

  67. Scott Says:

    What is known on FT for adiabatic QC?

    I think the short answer is “not much”! I’m told people don’t even know what’s the right theorem to try to prove. However, for some preliminary work, see this paper by Childs, Farhi, and Preskill.

  68. cody Says:

    Scott, “6.080/6.089 Great Ideas In Theoretical Computer Science (MIT, Spring 2008)” links to your Democritus lectures. Unless of course you’re still in the process of updating that and instead I’m being annoying… sorry if that is the case.

  69. Scott Says:

    Cody: Thanks! Fixed.

  70. John Sidles Says:

    Matt says: Regarding the success of matrix product state codes in 1d for finding ground states …

    It seems to me that a “matrix product state” (MPS) is a good example of an important class of mathematical objects that has been introduced by physicists via the time-honored definition “we know them when we see them” and the time-honored construction method “apply more duct tape.” 🙂

    As far as I know, there has not even been a serious attempt to define what MPS states are … isn’t this an essential step toward understanding why MPS states work well in “generic” calculations?

  71. matt Says:

    I’d disagree, I think we understand very well what MPS states are in 1d. They are precisely the states with bounded Schmidt rank across every cut. That is, take a 1d chain. Consider cutting it into two pieces, left and right, across some bond. If, for every such cut, the state has bounded Schmidt rank k, then it can be written exactly as a 1d mps with bond dimension k.

    What about states which don’t have good bounds on the Schmidt rank? (Every state has Schmidt rank at most 2^{N/2} for a chain of N qubits, but that isn’t very useful.) If the Schmidt coefficients decay rapidly, then we can approximate the state by truncating to a state with a small Schmidt rank without too much error. You can use this idea to show that if, for some alpha less than 1, we have a bound on the corresponding Renyi entropy across every cut, then the state can be approximated by a rank-k matrix product state with error epsilon, where k scales polynomially in 1/epsilon and N. (A toy version of this truncation step is sketched at the end of this comment.)

    So, it all turns into knowing how big the Renyi entropy is. For 1d gapped systems, there’s a theorem about that. For systems described by CFT, we know the answer. For time dynamics it depends, sometimes the entropies grow linearly in time and sometimes they don’t, and we can do calculations in model systems to see both cases, and we have some physical intuition about when it does what.

    Plus, there is a long-standing literature, going back to Fannes, Nachtergaele, and Werner, on MPS from the math side.

    I’m not sure what else one could ask for in 1d, honestly (perhaps tightening the bounds on entanglement entropy as a function of gap would be nice). Now, in 2d, we don’t know much, but in 1d, can you give me a system for which MPS states work well in practice but we don’t understand why? (a little-mentioned fact here is that for 1d systems described by CFT, since you have a bound on all Renyi entropies alpha between 0 and 1, you can actually get much better than polynomial bounds on Schmidt coefficient decay, bounds which match what is observed in practice on these systems).

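    (To make the truncation step above concrete, here is a minimal NumPy sketch: the Schmidt decomposition across the middle cut is just an SVD, the Renyi and von Neumann entropies come from the squared Schmidt coefficients, and the truncation error is exactly the discarded Schmidt weight. The random state used here is a deliberately bad case, with a nearly flat Schmidt spectrum; rapidly decaying spectra, as in gapped 1d ground states, are what make the truncation cheap.)

        import numpy as np

        # A random state on an N-qubit chain, reshaped as a matrix across the
        # middle cut; its SVD is the Schmidt decomposition across that cut.
        N = 10
        rng = np.random.default_rng(0)
        psi = rng.normal(size=2**N) + 1j * rng.normal(size=2**N)
        psi /= np.linalg.norm(psi)
        M = psi.reshape(2**(N // 2), 2**(N // 2))

        U, s, Vh = np.linalg.svd(M, full_matrices=False)
        p = s**2  # squared Schmidt coefficients: a probability distribution

        # Renyi entropy with alpha less than 1, and the von Neumann entropy.
        alpha = 0.5
        S_renyi = np.log(np.sum(p**alpha)) / (1 - alpha)
        S_vn = -np.sum(p * np.log(p))

        # Truncate to the k largest Schmidt coefficients; the 2-norm error is
        # exactly the square root of the discarded weight (Eckart-Young).
        k = 8
        M_k = (U[:, :k] * s[:k]) @ Vh[:k, :]
        print(S_renyi, S_vn)
        print(np.linalg.norm(M - M_k), np.sqrt(np.sum(p[k:])))
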
  72. John Sidles Says:

    Matt says: In 2d, we don’t know much, but in 1d, can you give me a system for which MPS states work well in practice but we don’t understand why?

    That’s easy, Matt! 🙂

    We restrict our attention to Abelian matrices (so the ordering is immaterial) … our MPS state now has no spatial structure whatsoever … we recognize that it is a Hartree product … we antisymmetrize to get a Hartree-Fock state … then we take sums of H-F states to build even more general variational states.

    Now we’re off to the races (computationally speaking), because formally the above are all MPS states; in fact they are a restricted set of MPS states (not a generalization).

    So I guess the answer to your question is: “Yes, MPS states work well in practice, specifically in ab initio chemical calculations. But only in the special case of one-dimensional (nonabelian) MPS systems do we have any understanding of why.”

    And this leads to a technical question: when you said “time dynamics”, did that refer to (Hamiltonian/unitary) dynamics, or open (Kraus/POVM) dynamics? I would be *very* grateful for suggested articles in this exceedingly interesting area.

    Please accept my sincere thanks (in advance) for any suggestions, and my thanks too for the post above. 🙂

    Anyway, the above is the specific case that I had in mind when I wrote earlier on this thread that “if ab initio quantum chemistry codes were designed by QIT theorists, they might (1) look very different and (2) work very well indeed.”

    That is why, IMHO, a Golden Era of QIT is just beginning! 🙂

  73. matt Says:

    I agree that we don’t really know why Hartree-Fock states work so well, except in cases of very weak interaction. A case that I find even more interesting is the case of Gutzwiller projected states: for example, a Hartree-Fock state of spin-up and spin-down fermions, projected onto single occupancy on a site. These work well in certain cases which are very strongly interacting, and I don’t really know why.

    There are a couple of technical points in writing H-F states in the usual non-abelian MPS form; that is, it’s a little tricky to handle the anti-symmetrization and write the many-body state this way. As an example of the difficulty, free fermions hopping on a line in 1d have a H-F wavefunction. But, at half filling, where they have a gapless spectrum, the entanglement scales logarithmically in the system size, and so one needs a bond dimension k scaling polynomially in the system size. Still, this is not bad, and you are right that these are an example of MPS.

    For time dynamics, I meant unitary dynamics, following the time evolving block decimation algorithm of Vidal. Versions of this algorithm have been applied to studying open dynamics (see Zwolak and Vidal, PRL 93, 207205 and Verstraete, Garcia-Ripoll, and Cirac, PRL 93, 207204). However, in the open case I don’t think there has been the same amount of study of different initial conditions to get an idea of when the state can be represented efficiently by a matrix product operator. Perhaps this is partly because the overhead in simulating matrix product operators is higher than in simulating matrix product states, so this case has not been considered as much. Probably there is a lot of interesting stuff to be seen there. I can imagine cases in which entanglement is being generated by the unitary dynamics and destroyed by the environment, leading to competing effects.

  74. matt Says:

    Scott, why does it sometimes say “Your comment is awaiting moderation” and sometimes just post it right away?

  75. Scott Says:

    Matt: WordPress applies an automatic filter, looking for spam and for certain words that have been associated with trolling in the past. Often, as in your case, it will catch a completely innocuous comment, in which case please just be patient: I’ll moderate the comment as soon as I get a chance.

  76. anonymous Says:

    It was generally agreed (with one dissent, from Geordie) that reducing factoring to a 3SAT instance, and then throwing a generic adiabatic optimization algorithm at the result, would be a really, really bad approach to this problem.

    I must be missing something, but how could it even conceivably be a reasonable approach, unless factoring is NP-complete?

  77. Scott Says:

    anonymous: It could conceivably be reasonable if 3SAT instances that were derived from factoring had a special structure that let the adiabatic algorithm solve them efficiently. (I.e., if the adiabatic algorithm could in effect say, “hey wait, this 3SAT instance just encodes factoring! I’m a quantum computer, and I know how to factor!”) To me, this is every bit as unlikely as it sounds.

  78. John Sidles Says:

    Matt says: I can imagine cases in which entanglement is being generated by the unitary dynamics and destroyed by the environment, leading to competing effects.

    That is precisely the case we have been studying, both numerically and analytically. Yes, MPS works amazingly well. And yes, we wish we knew why!

    That is the shared experience of pretty much everyone who works with MPS states. It is as though we lived in a world where people coded simplex algorithms all the time, marveling at the speed and robustness of each new software package, without, however, knowing much about convex sets. 🙂

  79. Peter Shor Says:

    Actually, the real reason the simplex algorithm worked so well was not discovered until fairly recently, when Spielman and Teng invented smoothed analysis. So for 50 years, we really did live in a world where people coded simplex algorithms all the time without knowing why they worked. After a while, though, they stopped marveling at this miracle.

  80. John Sidles Says:

    Peter says: … For 50 years, we really did live in a world where people coded simplex algorithms all the time without knowing why they worked. After a while, though, they stopped marveling at this miracle.

    Peter, I agree completely except in one minor respect: there is pretty good evidence that the mathematical community never stopped marvelling at the simplex algorithm … an example of “sustained marveling” is this 1986 quote from Dantzig himself (citation given at end):

    In brief, one’s intuition in higher dimensional space is not worth a damn! Only now, almost forty years after the time the simplex method was first proposed, are people beginning to get some insight into why it works as well as it does.

    In retrospect, two major obstructions to understanding the simplex algorithm seem to have been (1) the problem is just plain tough and (2) the required information-theoretic tools took considerable time to develop. The parallels with modern QIT and MPS seem pretty strong to me.

    My engineering colleagues (and students) often ask “When is the payoff from QIT research going to arrive?” The answer I give is “Important payoffs from QIT research are already arriving, in the form of quantum simulation algorithms that become more efficient every year, whose mathematical (and physical) basis QIT is beginning to explain.”

    Then I review the parallels with the (well known) simplex algorithm case. This works pretty well, because there are whole engineering departments (called Operations Research Departments) that are founded largely upon the simplex algorithm and its mathematical descendants.

    It seems plausible to me that there are good prospects for QIT to repeat this success story.

    ——-
    Donald J. Albers and Constance Reid, “An interview with George B. Dantzig: the father of linear programming,” College Mathematics Journal 17 (1986), no. 4, pp. 293-314.

  81. Peter Shor Says:

    Hi John,

    Certainly, a small number of people never stopped marveling, presumably including Spielman and Teng (kudos to them!). I can guarantee you that most of us computer scientists just used the simplex algorithm, and pondered its amazing efficiency rarely, if ever.

  82. John Sidles Says:

    Peter, I am mighty impressed with Spielman and Teng’s article! It is exemplary … so thank you very much for pointing me (and others) towards it.

  83. Gil Kalai Says:

    The story of why the simplex algorithm is so good is a little more complicated:

    (1) For a long time people thought it was so good because they thought it was always good. Then Klee and Minty found an example with exponential worst-case behavior, and many examples of exponential worst-case behavior followed.

    (2) Then came what is, in my mind, the best explanation of why the simplex algorithm is so good, namely Khachiyan’s result that LP is in P. His proof was based on the Nemirovskii-Shor ellipsoid method and not on the simplex algorithm.

    (3) Then there was a short period when people thought that the simplex algorithm is practically good but theoretically bad, while other methods are theoretically good but practically bad. Karmarkar’s algorithms and claims changed the picture: interior point methods are both theoretically and practically good.

    (4) Returning to the simplex algorithm, the second major explanation of why it runs quickly was the average-case analysis of Borgwardt, Smale, Haimovich, Adler, Megiddo, Karp, Shamir, and Todd (and maybe others). Actually, some versions are already a mix of average case and worst case: worst case over arrangements and average case over signs.

    (5) The average-case analysis is known only for a single pivot rule (which is not practical), the shadow-boundary rule. It would be very nice if other pivot rules were understood as well.

    (6) Then came Spielman and Teng, who proved the smoothed analysis bound (again for the same pivot rule): the average-case behavior over small Gaussian perturbations of a worst-case problem is polynomial!!! (Some extensions and improvements were subsequently found.) Indeed, in some sense, this is the explanation “closest” to the actual phenomenon. (A toy version of this perturbation experiment is sketched below.)

    (7) It is not known whether there are pivot rules that are polynomial in the worst case, and another major open problem is whether there are strongly polynomial LP algorithms, namely (roughly) algorithms whose number of arithmetic operations is polynomial in the number of inequalities and variables. There are some very nice randomized algorithms for LP in fixed dimension, and there are randomized LP algorithms with slightly subexponential behavior for general problems.

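    (To make (1) and (6) above concrete, here is a toy sketch, assuming SciPy’s linprog is available. It builds the Klee-Minty cube and then a small Gaussian perturbation of it, in the spirit of a smoothed instance. The HiGHS solver uses its own pivot rules rather than the shadow-boundary rule of the theorems, so the iteration counts are only suggestive.)

        import numpy as np
        from scipy.optimize import linprog

        def klee_minty(n):
            # Klee-Minty cube: maximize sum_j 2^(n-j) x_j subject to
            # sum_{j<i} 2^(i-j+1) x_j + x_i <= 5^i and x >= 0; the classic
            # instance on which Dantzig's pivot rule visits all 2^n vertices.
            c = -np.array([2.0**(n - j) for j in range(1, n + 1)])  # linprog minimizes
            A = np.zeros((n, n))
            for i in range(n):
                A[i, i] = 1.0
                for j in range(i):
                    A[i, j] = 2.0**(i - j + 1)
            b = np.array([5.0**(i + 1) for i in range(n)])
            return c, A, b

        n, sigma = 10, 1e-4
        c, A, b = klee_minty(n)

        # Smoothed-analysis-style instance: perturb the constraint matrix.
        rng = np.random.default_rng(1)
        A_pert = A + sigma * np.abs(A).max() * rng.normal(size=A.shape)

        for label, A_ub in (("original ", A), ("perturbed", A_pert)):
            res = linprog(c, A_ub=A_ub, b_ub=b, bounds=(0, None), method="highs")
            print(label, "optimum:", -res.fun, "iterations:", res.nit)
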
  84. asdf Says:

    I thought that Smale along with some combination of Lemke, Rumelhart(?) etc. in the 1960’s or 70’s or so proved that the “bad” cases for the simplex method fell on a surface of measure 0 or something almost as good, i.e. the mystery of why it worked so well was cleared up a long time ago. Maybe that was just folklore.

  85. Gil Kalai Says:

    asdf: Smale’s results were from the 80s and relied on a certain variant of Lemke’s algorithm (which is related to the shadow-boundary pivot rule used by Borgwardt and all subsequent works). The bad cases have positive measure but are rare. Spielman and Teng showed that bad cases are rare in the neighborhood of every given problem.
    I am not sure we can say that the mysteries regarding why the simplex algorithm works so well are completely “cleared up”. But it turned out to be a very fruitful area of research, related to nice mathematics and computer science, where the gap between theory and application is especially thin.

  86. John Sidles Says:

    Asdf says: … I thought the mystery of why it worked so well was cleared up a long time ago. Maybe that was just folklore.

    That had been my impression too. Spielman and Teng address that point as follows …

    While these average-case analyses are significant accomplishments, it is not clear whether they actually provide intuition for what happens on typical inputs. Edelman [Ede92] writes on this point: “What is a mistake is to psychologically link a random matrix with the intuitive notion of a ‘typical’ matrix or the vague concept of ‘any old matrix.’”

    Regardless of whether you think this is right or wrong, there is no doubt that Spielman and Teng’s smoothed bound is mathematically stronger than the average-case bound, and so this point of view was productive for them.

    Also, I want to thank Gil Kalai for his fine post, especially for the way it points out that there is still much work to do (and I see in Spielman and Teng that Gil had a 1992 STOC publication on randomized simplex algorithms 🙂 ).

    This whole story seems to me to be exemplary of math, science, and engineering at their best. The fundamental mathematics beautifully unites algebra, geometry, and information theory, the history of the dovetailing of these ideas is rich and colorful, the work found plenty of practical applications, and the story is not over yet.

    I hope too that the interesting posts are not over yet. For example, there is no mention of exploratory numerical work in Spielman and Teng’s article. But that would have been the first thing I would have tried! It would daunt me pretty considerably to learn that linear programming analysts routinely skip this step. And this goes double for QIT analysis.

  87. asdf Says:

    Thanks, yes, I agree that Gil Kalai’s post was great; I didn’t know what the situation was. I had thought that what Smale proved was something like what it turns out Spielman and Teng proved. That is, I thought the simplex method was sort of like the Jordan canonical form of a matrix: the 1’s above the main diagonal appear in pure math, but perturbing the matrix even slightly (e.g., when it comes from physical measurements with the slightest imprecision) makes it diagonalizable, so real-world measurements of systems modelled by differential equations don’t have the t^n·exp(t) terms in their solutions that come from non-diagonalizability. In other words, I thought the cases where the simplex method worked badly just didn’t come up in practice.

  88. Brian Wang Says:

    Things that are pre-sold years in advance, even before prototypes are working:
    Cars – Tesla electric, the Aptera 300 mpg car, the Toyota Prius, etc.
    Planes – the Airbus A380, Boeing, etc.
    Rockets – the SpaceX Falcon 9, Virgin Galactic suborbital
    Software – subscriptions for anticipated upgrades, which can be many years late.

  89. Brian Wang Says:

    Lining up customers is standard practice. Consider taking a business course: MIT has lots of good classes on sales and marketing. Also, drop by the MIT entrepreneurs club; they can help you out with what is or is not comically premature.

  90. James Says:

    Things that are pre-sold years in advance. Even before prototypes are working…

    The examples you gave all had established technical foundations, even if details remained to be worked out.

    Lining up customers is standard practice. Consider taking a business course.

    Nobody said it wasn’t ‘standard practice’, only that it is premature when you’re not reasonably sure your product works even theoretically. I imagine a business course would tell me that that is an irrelevant detail: marketing can hide the nasty warts of reality; perception is all that really matters, after all. I guess that’s why I am not in business.

  91. Brian Wang Says:

    Oftentimes what is being bought is the promise of theoretically greater efficiency (for the planes and cars). The planes and cars could work, in the sense of being able to fly or drive, but would be a failure if the benefits of greater performance did not materialize. Virgin Galactic and SpaceX are promising superior safety and lower cost; it is very unclear whether they can deliver. The 200+ ticket buyers for Virgin Galactic could get seriously disappointed if they start dying in significant numbers once flights begin.

    Similarly, it is clear that the D-Wave systems will calculate, in the sense of a hardwired classical Ising model on superconducting chips; it is the question of superior performance that is open, although hardcoded, optimized systems can deliver 100-1000 times more performance than general software on general hardware.

    Lining up customers is a business issue. If nobody is saying that it isn’t “standard practice,” then what makes standard practice comical? I guess it is comical that they sold what people with money were willing to pay for.

  92. milkshake Says:

    There is a short parody written by Karel Capek. A business guy is expounding on his plan for a time-travel agency – a company that will allow people unhappy with the current political and economic situation to emigrate into historical times according to their preferences.
    The businessman goes into some detail explaining what kinds of customers he will have and which historical periods he is going to offer transport into – and how he is going to finance the project, and what fee structure he would charge for the services… He has this small problem that the time-machine transport still has some kinks to work out – but except for that, all the pieces of the business are in place.

  93. John Sidles Says:

    Milkshake Says: … a short parody written by Karel Capek …

    Our QSE Group maintains a library of books and articles in which mathematicians, engineers, and scientists try to foresee (and often parody) the future. On this shelf, for example, is Norbert Wiener’s novel The Tempter, which centers upon a topic similar to Capek’s.

    How good is Wiener’s novel? Let’s just say that William Faulkner didn’t have to worry about the competition.

    If we restrict our attention to visionary fictional studies that appeared in the peer-reviewed literature *and* that accurately foresaw the scientific and technological future, then we find that such studies do exist. Indeed, such articles are precursors to the emergence of almost all new fields of academic endeavor.

    A good example is Henry Stommel’s visionary forecast “The Slocum Mission,” which appeared in Oceanography, April 1989, pp. 22-25. This article foresaw fleets of ocean-going gliders that have indeed become a reality.

    Like perfect 300 games in bowling, successful technological forecasts are not common, but neither are they rare.

    James’ comment #90 above is IMHO absolutely correct that almost all successful forecasts have solid foundations in mathematics, science, and engineering. An emerging role of QIT is to provide similarly solid foundations for the quantum technologies of the future.

  94. John Sidles Says:

    As a followup to the above, here is a link to some impressive real-time data from a new-generation thermally-powered ocean glider. Hey, the little robots are out there swimming!

    It is especially cool that these autonomous robots are powered by ambient thermal disequilibrium … which makes them more closely resemble life-forms like us.

    This project is IMHO a very nice blend of fundamental math, physics, engineering, and observational science.

    What’s that? You say there’s no “QIT” in this ocean-glider project? The point is that the coming generation of observational surveys – particularly in biology and condensed matter physics, and also in planetary and stellar astronomy and cosmology – will be conducted with quantum-limited instruments and interpreted in terms of ab initio quantum dynamical models.

    And it is for these coming enterprises that QIT is beginning to provide the necessary foundations. Which is one reason (there are plenty of other reasons too) that QIT is such a cool field. That’s my 2¢ anyway.

  95. Kurt Says:

    You know, what quantum computing really needs is a cheap, simple device (i.e., a “quantum transistor”) that hobbyists can play around with in their basements and garages à la Jobs and Wozniak. Of course, if such a thing currently existed then you wouldn’t be having these discussions about D-Wave. But are there any conceivable ways such a device might eventually be constructed? Maybe something optically based? Because quantum computing will be a heck of a lot more fun once this is possible.

  96. Jon Tyson Says:

    A cursory glance at the claims on D-Wave’s website, combined with Geordie Rose’s unimpressive appearance at MIT, makes me feel very sorry for the technologically naive investors who have been duped into putting up the $17M for the latest round of D-Wave funding.

    D-Wave’s claim of having “demonstrated” “the world’s first commercial quantum computer” seems blatantly fraudulent to me.

  97. John Sidles Says:

    Kurt Says: You know, what quantum computing really needs is a cheap, simple device (i.e., a “quantum transistor”) that hobbyists can play around with in their basements and garages à la Jobs and Wozniak.

    Kurt, the following post will argue that you are absolutely right. And furthermore, the post will respectfully (and with deliberate optimism) submit that what you ask for already exists, in the form of Chapters 2 and 8 of “Mike and Ike”. The only other ingredient you need is a laptop-scale computer.

    Our QSE Group came to this realization via an epiphany that occurred when two summer interns from the Electrical Engineering Department walked into our control room. The two interns were greeted with “Welcome aboard! Let’s get you started. Would you like to help us measure the noise level of the interferometer?” … and we handed them the business-end of a BNC signal cable.

    To our amazement, these junior-year students flinched as though the signal cable was a tree-snake unexpectedly descending from the canopy! It emerged that they had never before measured a real-world signal with professional intent … all of their prior experience had been with in silico simulations.

    Yet these two students proved to be very capable engineers. It took them only a week or so to adapt to the “quaint” notion of working with physical signals. 🙂 So evidently there was nothing wrong with the way these students had learned electrical engineering … instead there was something very good about it.

    This experience motivated our QSE Group to write out an explicit set of recipes that link the quantum formalism and fundamental results of Chapters 2 and 8 of “Mike and Ike” to the real world of voltages and power spectral densities, via a set of explicit design rules for in silico numerical simulation. The idea was to teach quantum system engineering along lines similar to the way that subjects like modern electrical engineering really are being taught.

    Writing out these recipes has been fun. In order to motivate, optimize, and justify these recipes, we had to link the Mike and Ike formalism to (for example) the engineering discipline of model order reduction, the chemical discipline of ab initio quantum calculations, and the mathematical discipline of algebraic Kahler geometry. We will be presenting our QSE Group’s recipes at a Gordon Conference later this month (at which time we’ll post them to the arxiv server). And of course, many other groups are working along similar lines.

    The overall point is that modern electrical and mechanical engineers (and chemists and condensed matter physicists … and even biologists) do a substantial proportion of their most productive work in silico. Why should quantum system engineering students and researchers be any different?

    To us, the possibilities of these avenues of research seem pretty much unbounded, whether one’s interests are classical or quantum, practical or fundamental, or theoretical versus experimental versus observational. It is an exciting time for everyone … and this excitement is all based upon the informatic, algebraic, and geometric foundations that QIT provides.

    There. How’s that for academic optimism! 🙂

  98. rrtucci Says:

    Kurt said: “You know, what quantum computing really needs is a cheap, simple device (i.e., a “quantum transistor”) that hobbyists can play around …”

    As far as software is concerned, there is still a tremendous amount that the hobbyist can do, stuff that is necessary and hasn’t been done yet. There might even be room for a small quantum computing software company. You should start your own.

    I also think new, clever software, just like new, clever experiments in physics, often raises very interesting theoretical questions that nobody had thought to work on before.

  99. John Sidles Says:

    That is a fine post, rrtucci. Also, a strong case can be made that what you say is not a hope for the future, but a reality of the present. Which is good for everyone.

    The above post is mainly to keep a slow thread “ticking over”, until Scott starts a new one! 🙂

  100. Kurt Says:

    Thanks for the replies, John and R.R. John, I didn’t fully understand what you said, but I’ll be watching for your group’s “recipes”, as you call them, when they become available online.

  101. Stas Says:

    Regarding the simplex method being efficient “generically”: would you also say that graph isomorphism is polynomial “generically”? You have to be really deliberate to create two graphs that are not obviously isomorphic or non-isomorphic. And I also believe that, similarly to the simplex method, any minimal change in a hard instance will result in an easy one.

  102. michael vassar Says:

    While we’re on classic subjects, any chance of another “Quantum Computing Since Democritus” blog post soon?

  103. Raoul Ohio Says:

    In addition to its intuitive meaning, the term “generic” has a precise meaning in differential equations, and perhaps in all of applied mathematics, as illustrated by the following example.

    The state x = x(t) of many systems can be modeled by equations of the form

    x’ = f(x,p)

    where p is a set of parameters in some space P. The evolution of x(t) might have one or more properties, such as being oscillatory, chaotic, or whatever. This behavior is “generic” if it occurs for parameter values on an open subset S of P. The reason for this definition is presumably that open sets have a nonempty interior and positive measure. Also, for p in S, there is a small “ball” containing p that is also contained in S. This implies that slight perturbations of the problem maintain the property.

    For a related (but extreme) example, n-by-n real matrices are “generically nonsingular.” This is because the set Z in R^(n^2) where det(A) = 0 is (n^2 – 1)-dimensional, closed, and of measure zero, so the complement of Z, where det(A) != 0, is open and, in fact, pretty big.

    It is not clear if this definition is appropriate for algorithm analysis in general, but it can probably be applied to linear programming.

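    (A quick numerical illustration of the last example, assuming NumPy: matrices with i.i.d. Gaussian entries essentially never land on the measure-zero set where det(A) = 0, while an exactly singular matrix loses its singularity under an arbitrarily small perturbation. The rank test hides a numerical tolerance, so the perturbation size below is a toy choice.)

        import numpy as np

        rng = np.random.default_rng(0)
        n, trials = 5, 10_000

        # The singular set {det(A) = 0} has measure zero in R^(n^2), so random
        # Gaussian matrices should essentially never land on it.
        hits = sum(np.linalg.matrix_rank(rng.normal(size=(n, n))) < n
                   for _ in range(trials))
        print("singular draws:", hits, "out of", trials)  # expect 0

        # Conversely, an exactly singular matrix is non-generic: a tiny
        # perturbation almost surely restores full rank.
        A = np.ones((n, n))  # rank 1, hence singular
        print(np.linalg.matrix_rank(A),
              np.linalg.matrix_rank(A + 1e-10 * rng.normal(size=(n, n))))  # 1 5
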
  104. Greg Kuperberg Says:

    Regarding simplex method being efficient “generically”, would you also say that graph isomorphism is polynomial “generically”? You have to be really deliberate to create two graphs that are not obviously isomorphic or non-isomorphic. And I also believe that, similarly to the simplex method, any minimal change in a hard instance will result in an easy one.

    That has been known for a long time if generic means random input. You can color a graph by its valence, then recolor it by its colored valence, then continue for a long time until the coloring stabilizes. But, unlike the simplex method, this generic result doesn’t help you at all with hard cases. If you perturb a polytope, then not only is the simplex method usually fast, but also its output solves the original problem. No one knows a way to do that with graph isomorphism.

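    (For concreteness, here is a minimal Python sketch of that recoloring procedure, often called color refinement; the graph encoding is an illustrative choice. It also shows the standard caveat: on regular graphs the refinement gets stuck immediately, which is exactly the weakness that constructions like CFI exploit.)

        from collections import Counter

        def refine(adj):
            # Color refinement: start from valences, then repeatedly recolor
            # each vertex by (its color, sorted multiset of neighbor colors)
            # until the partition into color classes stops getting finer.
            color = {v: len(nbrs) for v, nbrs in adj.items()}
            while True:
                sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
                       for v in adj}
                names = {s: i for i, s in enumerate(sorted(set(sig.values())))}
                new = {v: names[sig[v]] for v in adj}
                if len(set(new.values())) == len(set(color.values())):
                    return new
                color = new

        def provably_distinct(adj1, adj2):
            # Refine the disjoint union so colors are comparable across the two
            # graphs. Different stable color histograms prove non-isomorphism;
            # identical histograms prove nothing.
            union = {("a", v): [("a", u) for u in nbrs] for v, nbrs in adj1.items()}
            union.update({("b", v): [("b", u) for u in nbrs] for v, nbrs in adj2.items()})
            color = refine(union)
            return (Counter(color["a", v] for v in adj1)
                    != Counter(color["b", v] for v in adj2))

        # A 6-cycle vs. two disjoint triangles: not isomorphic, but both are
        # 2-regular, so refinement colors every vertex alike and proves nothing.
        c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
        tt = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
        print(provably_distinct(c6, tt))  # False: the heuristic is stuck
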
    One of the important classes of graphs to think about for graph isomorphism is the ultra-homogeneous graphs of Cai, Furer, and Immerman. None of the basic recoloring algorithms can possibly work for these graphs. The widely used program nauty, which is a more sophisticated graph recoloring algorithm, also performs poorly on these graphs, as best I understand it. On the other hand, if you know that the graphs come from the CFI construction, then they are not hard to canonically color with a special-case algorithm.

    As I understand it, graph isomorphism is at a stalemate. By which I mean: If you show the community any known algorithm to make instances, then it can make an algorithm that usually solves them quickly. If you show the community any known algorithm to solve graph isomorphism, then it can algorithmically generate hard instances. I do not know for certain, but that is my impression.

  105. Stas Says:

    Greg, thanks for a thorough explanation! It fully answers my question. I’d never thought of the graph isomorphism problem as a “stalemate,” but it’s indeed quite a precise description…