Waiting for BQP Fever

Update (April 5): By now, three or four people have written in asking for my reaction to the preprint “Computational solution to quantum foundational problems” by Arkady Bolotin.  (See here for the inevitable Slashdot discussion, entitled “P vs. NP Problem Linked to the Quantum Nature of the Universe.”)  It gives me no pleasure to respond to this sort of thing—it would be far better to let papers this gobsmackingly uninformed about the relevant issues fade away in quiet obscurity—but since that no longer seems to be possible in the age of social media, my brief response is here.


(note: sorry, no April Fools post, just a post that happens to have gone up on April Fools)

This weekend, Dana and I celebrated our third anniversary by going out to your typical sappy romantic movie: Particle Fever, a documentary about the Large Hadron Collider.  As it turns out, the movie was spectacularly good; anyone who reads this blog should go see it.  Or, to offer even higher praise:

If watching Particle Fever doesn’t cause you to feel in your bones the value of fundamental science—the thrill of discovery, unmotivated by any application—then you are not truly human.  You are a barnyard animal who happens to walk on its hind legs.

Indeed, I regard Particle Fever as one of the finest advertisements for science itself ever created.  It’s effective precisely because it doesn’t try to tell you why science is important (except for one scene, where an economist asks a physicist after a public talk about the “return on investment” of the LHC, and is given the standard correct answer, about “what was the return on investment of radio waves when they were first discovered?”).  Instead, the movie simply shows you the lives of particle physicists, of people who take for granted the urgency of knowing the truth about the basic constituents of reality.  And in showing you the scientists’ quest, it makes you feel as they feel.  Incidentally, the movie also shows footage of Congressmen ridiculing the uselessness of the Superconducting Supercollider, during the debates that led to the SSC’s cancellation.  So, gently, implicitly, you’re invited to choose: whose side are you on?

I do have a few, not quite criticisms of the movie, but points that any viewer should bear in mind while watching it.

First, it’s important not to come away with the impression that Particle Fever shows “what science is usually like.”  Sure, there are plenty of scenes that any scientist would find familiar: sleep-deprived postdocs; boisterous theorists correcting each other’s statements over Chinese food; a harried lab manager walking to the office oblivious to traffic.  On the other hand, the decades-long quest to find the Higgs boson, the agonizing drought of new data before the one big money shot, the need for an entire field to coalesce around a single machine, the whole careers hitched to specific speculative scenarios that this one machine could favor or disfavor—all of that is a profoundly abnormal situation in the history of science.  Particle physics didn’t use to be that way, and other parts of science are not that way today.  Of course, the fact that particle physics became that way makes it unusually suited for a suspenseful movie—a fact that the creators of Particle Fever understood perfectly and exploited to the hilt.

Second, the movie frames the importance of the Higgs search as follows: if the Higgs boson turned out to be relatively light, like 115 GeV, then that would favor supersymmetry, and hence an “elegant, orderly universe.”  If, on the other hand, the Higgs turned out to be relatively heavy, like 140 GeV, then that would favor anthropic multiverse scenarios (and hence a “messy, random universe”).  So the fact that the Higgs ended up being 125 GeV means the universe is coyly refusing to tell us whether it’s orderly or random, and more research is needed.

In my view, it’s entirely appropriate for a movie like this one to relate its subject matter to big, metaphysical questions, to the kinds of questions anyone can get curious about (in contrast to, say, “what is the mechanism of electroweak symmetry breaking?”) and that the scientists themselves talk about anyway.  But caution is needed here.  My lay understanding, which might be wrong, is as follows: while it’s true that a lighter Higgs would tend to favor supersymmetric models, the only way to argue that a heavier Higgs would “favor the multiverse” is if you believe that a multiverse is automatically favored by a lack of better explanations.  More broadly, I wish the film had made clearer that the explanation for (some) apparent “fine-tunings” in the Standard Model might be neither supersymmetry, nor the multiverse, nor “it’s just an inexplicable accident,” but simply some other explanation that no one has thought of yet, but that would emerge from a better understanding of quantum field theory.  As one example, on reading up on the subject after watching the film, I was surprised to learn that a very conservative-sounding idea—that of “asymptotically safe gravity”—was used in 2009 to predict the Higgs mass right on the nose, at 126.3 ± 2.2 GeV.  Of course, it’s possible that this was just a lucky guess (there were, after all, lots of Higgs mass predictions).  But as an outsider, I’d love to understand why possibilities like this don’t seem to get discussed more (there might, of course, be perfectly good reasons that I don’t know).

Third, for understandable dramatic reasons, the movie focuses almost entirely on the “younger generation,” from postdocs working on ATLAS and CMS detectors, to theorists like Nima Arkani-Hamed who are excited about the LHC because of its ability to test scenarios like supersymmetry.  From the movie’s perspective, the creation of the Standard Model itself, in the 60s and 70s, might as well be ancient history.  Indeed, when Peter Higgs finally appears near the end of the film, it’s as if Isaac Newton has walked onstage.  At several points, I found myself wishing that some of the original architects of the Standard Model, like Steven Weinberg or Sheldon Glashow, had been interviewed to provide their perspectives.  After all, their model is really the one that’s been vindicated at the LHC, not (so far) any of the newer ideas like supersymmetry or large extra dimensions.

OK, but let me come to the main point of this post.  I confess that my overwhelming emotion on watching Particle Fever was one of regret—regret that my own field, quantum computing, has never managed to make the case for itself the way particle physics and cosmology have, in terms of the human urge to explore the unknown.

See, from my perspective, there’s a lot to envy about the high-energy physicists.  Most importantly, they don’t perceive any need to justify what they do in terms of practical applications.  Sure, they happily point to “spinoffs,” like the fact that the Web was invented at CERN.  But any time they try to justify what they do, the unstated message is that if you don’t see the inherent value of understanding the universe, then the problem lies with you.

Now, no marketing consultant would ever in a trillion years endorse such an out-of-touch, elitist sales pitch.  But the remarkable fact is that the message has more-or-less worked.  While the cancellation of the SSC was a setback, the high-energy physicists did succeed in persuading the world to pony up the $11 billion needed to build the LHC, and to gain the information that the mass of the Higgs boson is about 125 GeV.

Now contrast that with quantum computing.  To hear the media tell it, a quantum computer would be a powerful new gizmo, sort of like existing computers except faster.  (Why would it be faster?  Something to do with trying both 0 and 1 at the same time.)  The reasons to build quantum computers are things that could make any buzzword-spouting dullard nod in recognition: cracking uncrackable encryption, finding bugs in aviation software, sifting through massive data sets, maybe even curing cancer, predicting the weather, or finding aliens.  And all of this could be yours in a few short years—or some say it’s even commercially available today.  So, if you check back in a few years and it’s still not on store shelves, probably it went the way of flying cars or moving sidewalks: another technological marvel that just failed to materialize for some reason.

Foolishly, shortsightedly, many academics in quantum computing have played along with this stunted vision of their field—because saying this sort of thing is the easiest way to get funding, because everyone else says the same stuff, and because after you’ve repeated something on enough grant applications you start to believe it yourself.  All in all, then, it’s just easier to go along with the “gizmo vision” of quantum computing than to ask pointed questions like:

What happens when it turns out that some of the most-hyped applications of quantum computers (e.g., optimization, machine learning, and Big Data) were based on wildly inflated hopes—that there simply isn’t much quantum speedup to be had for typical problems of that kind, that yes, quantum algorithms exist, but they aren’t much faster than the best classical randomized algorithms?  What happens when it turns out that the real applications of quantum computing—like breaking RSA and simulating quantum systems—are nice, but not important enough by themselves to justify the cost?  (E.g., when the imminent risk of a quantum computer simply causes people to switch from RSA to other cryptographic codes?  Or when the large polynomial overheads of quantum simulation algorithms limit their usefulness?)  Finally, what happens when it turns out that the promises of useful quantum computers in 5-10 years were wildly unrealistic?

I’ll tell you: when this happens, the spigots of funding that once flowed freely will dry up, and the techno-journalists and pointy-haired bosses who once sang our praises will turn to the next craze.  And they’re unlikely to be impressed when we protest, “no, look, the reasons we told you before for why you should support quantum computing were never the real reasons!  and the real reasons remain as valid as ever!”

In my view, we as a community have failed to make the honest case for quantum computing—the case based on basic science—because we’ve underestimated the public.  We’ve falsely believed that people would never support us if we told them the truth: that while the potential applications are wonderful cherries on the sundae, they’re not and have never been the main reason to build a quantum computer.  The main reason is that we want to make absolutely manifest what quantum mechanics says about the nature of reality.  We want to lift the enormity of Hilbert space out of the textbooks, and rub its full, linear, unmodified truth in the face of anyone who denies it.  Or if it isn’t the truth, then we want to discover what is the truth.

Many people would say it’s impossible to make the latter pitch, that funders and laypeople would never understand it or buy it.  But there’s an $11-billion, 17-mile ring under Geneva that speaks against their cynicism.

Anyway, let me end this “movie review” with an anecdote.  The other day a respected colleague of mine—someone who doesn’t normally follow such matters—asked me what I thought about D-Wave.  After I’d given my usual spiel, he smiled and said:

“See Scott, but you could imagine scientists of the 1400s saying the same things about Columbus!  He had no plan that could survive academic scrutiny.  He raised money under the false belief that he could reach India by sailing due west.  And he didn’t understand what he’d found even after he’d found it.  Yet for all that, it was Columbus, and not some academic critic on the sidelines, who discovered the new world.”

With this one analogy, my colleague had eloquently summarized the case for D-Wave, a case often leveled against me much more verbosely.  But I had an answer.

“I accept your analogy!” I replied.  “But to me, Columbus and the other conquerors of the Americas weren’t heroes to be admired or emulated.  Motivated by gold and spices rather than knowledge, they spread disease, killed and enslaved millions in one of history’s greatest holocausts, and burned the priceless records of the Maya and Inca civilizations so that the world would never even understand what was lost.  I submit that, had it been undertaken by curious and careful scientists—or at least people with a scientific mindset—rather than by swashbucklers funded by greedy kings, the European exploration and colonization of the Americas could have been incalculably less tragic.”

The trouble is, when I say things like that, people just laugh at me knowingly.  There he goes again, the pie-in-the-sky complexity theorist, who has no idea what it takes to get anything done in the real world.  What an amusingly contrary perspective he has.

And that, in the end, is why I think Particle Fever is such an important movie.  Through the stories of the people who built the LHC, you’ll see how it really is possible to reach a new continent without the promise of gold or the allure of lies.

250 Responses to “Waiting for BQP Fever”

  1. adbge Says:

    I’m not a particle physicist, but I sort of wonder if they don’t have a larger PR problem than it seems from the outside. One of the first articles returned from a Google search for “why cern” talks about the World Wide Web as one of the triumphs of CERN in the second paragraph — and, indeed, this example looks (from a brief glance) to be as cliched and unconvincing as a pure mathematician’s “Well, uh, number theory found applications in cryptography!”

    I’m reminded of Timothy Gowers’s “The Importance of Mathematics”, where — as my fallible memory recalls — the man grasps for even the most tenuous connection to physics to justify pure mathematics. I don’t envy him his plight. Probably it’d be nice to be able to say “mathematics is about figuring out the true nature of reality” instead of something like “mathematics is about the implications of these axioms.” (Although, if Tegmark gets his way, mathematicians will be able to borrow the same pitch from the physicists!)

    Applications of the space program (going to the moon, etc) fit the same pattern. As a general rule, people seem to be much more comfortable pointing to extrinsic reasons as justifications than intrinsic ones — for instance, exercise because it’s good for you, rather than exercise because it’s fun.

    But, of course, we are in violent agreement when it comes to informing the public as to what these fields are really about and introducing them to the pleasure of finding things out.

  2. Scott Says:

    adbge #1: Math is not about figuring out the implications of axioms; it’s about understanding mathematical reality and the very nature of truth itself. See, didn’t that sound better? 🙂

    (This is why I call myself an “anti-anti-Platonist”: I’m not even sure what it means to literally believe or disbelieve in a Platonic realm of mathematical objects, but I do know that the people who call themselves “anti-Platonists” often have an agenda of denigrating the fundamental role of math in human knowledge, and I know they’re wrong about that.)

  3. quax Says:

    “But there’s an $11-billion, 17-mile ring under Geneva that speaks against their cynicism.”

    Wish I could share your idealism. To me it’s a historic stroke of luck that the unprecedented destructive power of nuclear weapons impressed enough brass to wonder what else the eggheads could come up with if you gave them enough expensive toys to play with.

    High energy physics has been riding this wave ever since.

    Shor’s algorithm is no H-bomb, but it certainly captured the spooks’ imagination. I think the field will have a good long run with it.

    In an ideal world there’d be plenty of public money spent on basic research because that’s the right thing to do, but I don’t expect to see this kind of world within my lifetime.

  4. arden.arboles Says:

    I feel the whole analogy is wrong. Particle Physics and Cosmology are not applied physics. Therefore, any transfer of knowledge to applications is in principle secondary. Indeed, building a machine like the LHC is a remarkable feat of engineering that already brings to the fore many different technical problems whose solutions may be important in various local applications. But you do not need any reason to build the LHC other than to explore how Nature is at the high energy/short distance scale.

    On the other hand, Quantum Computing is in the arena of applied Quantum Mechanics. The principles of the subject are quite well known. For sure, building a quantum computer would be an incredible engineering feat, and learning how to avoid averaging, losing information, fully tracking the correlations, etc., such that a mesoscopic machine with hundreds or more degrees of freedom still works in a completely controlled quantum manner, could potentially be much more important than the particular use of the machine running a particular algorithm. But the fact of the matter is that the reason for building such a machine should be founded on the potential use of the machine. Otherwise just call the field nanophysics and construct any other nanodevice.

    By the way, I feel you are also mistaken with respect to your reply on the discovery of America. You could be a naïve monk with the best of intentions instead of a greedy man searching for gold or territory, careless of the natives (and you would not be the first one believing himself much better than his countrymen or contemporaries), and you would spread the germs nonetheless, contributing to the holocaust. The moral of the story here is that the outcome of the enterprise (LHC, D-Wave, colonization of America) is not going to be dictated by how you justify it to yourself or others, but rather by what you are able to do once you get the funding or the wind is blowing.

  5. fred Says:

    “[…]“return on investment” of the LHC, and is given the standard correct answer, about “what was the return on investment of radio waves when they were first discovered?”

    But radio wave research didn’t cost $6B.
    When things are getting that massive and expensive, it’s reasonable to expect some debate.
    Even when science is more practical, like space exploration, it’s hard to convince people of the long term benefits.
    Also, all resources are finite and we can’t pursue everything.
    In a way, it’s a blessing for you that computer science research mainly involves pen/paper/a good laptop – you’ll never have to deal with managing giant budgets and crazy deadlines on the scale of the LHC.

  6. fred Says:

    “See, from my perspective, there’s a lot to envy about the high-energy physicists. Most importantly, they don’t perceive any need to justify what they do in terms of practical applications. Sure, they happily point to “spinoffs,” like the fact that the Web was invented at CERN. ”

    Hmm…
    Could just be that the general public understands that nuclear energy comes from quantum physics/E=mc^2 pre-WW2 physics – the atomic bomb is one of the most spectacular applications of theoretical physics of all time.

    I think that complexity theorists/computer scientists are just too shy and should take credit for … computers! What could be more practical than this?!

  7. Scott Says:

    arden #4:

    (1) One could also argue that the existence of the Higgs boson was already “quite well known” by anyone who understood the Standard Model (e.g., you already needed to incorporate virtual Higgses in your Feynman diagrams for precision electroweak measurements), and therefore saying that we need to build the LHC to find the Higgs was a weak justification. Sure, we did learn a few bits of new information—e.g., that the Higgs mass is 125 GeV, and that there are apparently no superpartners at LHC energies—but we’ll also learn some new bits of information by building a quantum computer, like which types of qubits and which error-correction methods turn out to be the best.

    Personally, my approach is simply to reject all arguments of the form “every informed person already assumes X, therefore there can’t be any scientific reason to build an expensive machine to make X manifest.” I’ve met very smart people who told me their assumption had been that modern high-energy physicists had gone off the rails and had no idea what they were talking about, and that the discovery of the Higgs boson (with exactly the properties predicted by theory) actually caused them to change their minds. Likewise, you need look no further than my comments section if you want to find intelligent people whose attitude is, “who knows if quantum mechanics can actually be used to do any computation faster than you could do it classically? sure, the theory suggests so, but theory is theory, and makes all kinds of unrealistic assumptions. My guess is that there’s some undiscovered new principle that prevents QC from working, and upholds the classical Extended Church-Turing Thesis.” In my opinion, it’s a perfectly legitimate scientific goal to try and find out whether there is such a principle, and if there isn’t, then to rub the skeptics’ faces in the lack of one.

    (2) Aha, one of the many advantages of having careful, curious scientists instead of conquistadors is that, when the natives started dying of disease, they would notice something was wrong. Instead of just forging ahead, looting, enslaving, and infecting, they would stop and think about what they ought to do differently to prevent such a terrible outcome. And once they did that, I don’t think it would be too long before they hit on the idea of quarantining.

  8. arden.arboles Says:

    Scott #7:

    (1) There are many easier and less expensive experiments to put Quantum Mechanics to the test than to build a quantum computer, but I don’t know of any other experiment to characterize the Higgs or to find/reject low energy supersymmetry.

    (2) I don’t doubt that you and your resourceful crew of careful, curious scientists would have avoided the holocaust in the Americas; I just doubt that you would have gathered such a crew in the XV century. To enforce such noble principles in the discovery of America, you would have needed to ban crossing the ocean until the XX century or, considering some events in the XX century, even until more recently.

  9. wolfgang Says:

    “asymptotically safe gravity … I’d love to understand why possibilities like this don’t seem to get discussed more”

    I assume you are joking?
    If not, here is a hint: asymptotically safe gravity is non-string quantum gravity.

  10. Gus Says:

    If this comic doesn’t make you laugh out loud then the problem lies with you.

    The choice of strategy to use in the promotion of quantum computing is an interesting game-theoretic puzzle. The basic-science strategy might be best in the long run, but aspiring academics need jobs right now; many would be willing to short sell the future if that’s what it takes to put food on the table.

  11. Scott Says:

    arden #8:

    (1) If the question about quantum mechanics that interests you is whether it provides greater-than-classical computational power—i.e., whether it violates the Extended Church-Turing Thesis—then the only way to test that question directly is to try to build a scalable quantum computer. For any other kind of test, a skeptic could always maintain that there’s a polynomial-time simulation going on behind the scenes, so that at least in a computational sense, Nature is still “really” classical.

    (2) I don’t agree with you, because from my reading, even at the time there were people in Europe who realized the tragedy of looting and destroying Native American cultures and who denounced it. But those people were not in positions of power.

  12. gwern Says:

    Cowen discusses the economics scene: http://marginalrevolution.com/marginalrevolution/2014/04/particle-fever.html

    Also, are you sure you want to use the radio-wave example? Take a look at https://en.wikipedia.org/wiki/Marconi#Radio_work – the work was so cheap Marconi could self-fund, and his original goal was a wireless telegraph, which was obviously colossally valuable. Hertz may have thought his early radio work was useless, but Marconi knew better, and so he became the inventor and Hertz died – he was also in a distinct minority, as evidenced by how wireless telegraphy had been attacked repeatedly for years by all sorts of people because the value was so obvious.

    I’m sure you could find at least one person who argues that complexity theory is of no practical value, but they’d be wrong.

  13. arden.arboles Says:

    Scott #11 (Thanks for your replies)

    (1) Not being a quantum information scientist or a computer scientist myself, I understand now why it is harder to sell the need for a quantum computer without resorting to the greedy, down-to-earth benefits of the machine itself ;-) But I will try to learn from you!

    (2) I agree that there were and are such people, but history tends to show that they are rarely in positions of power (Americas, Africa, Australia, …). And more often than not, the narrative of the events at the time of the events (looting vs., say, spreading of “modern values”) is just another weapon between conflicting people in power, with interests other than the preservation of cultures.

  14. lylebot Says:

    How much funding does/did CERN/LHC get from the US, though? Wikipedia seems to suggest none, though of course funding arrangements can be complicated.

    I ask because the funding climate in the US right now seems to be heavily “application-oriented”, regardless of field. And this is due to various acts of Congress, so arguably it’s what the “public” wants.

  15. Koray Says:

    The proposed analogy is weird. Am I reading it right? “You could get totally lucky like Columbus, so that’s all the more reason to pursue high-risk ventures.”

    This is not the kind of justification for businesses that pursue risky projects (such as D-Wave). When you pursue something with 90% chance of failure, you do it for the 10% chance of success, not for the combined “10% chance of success and an infinitesimal chance of success at something unknown, not directly related, but spectacular”.

    Most CS research is extremely difficult to sell to the general public. Quantum computing, at least to me, is the easiest to sell.

  16. fred Says:

    Scott #11
    “(2) I don’t agree with you, because from my reading, even at the time there were people in Europe who realized the tragedy of looting and destroying Native American cultures and who denounced it.”

    true:
    http://en.wikipedia.org/wiki/Valladolid_debate

  17. domenico Says:

    I think that technology cannot exist without theory.
    Each great technological revolution happens after a great theoretical revolution that has opened the way to unexplored territories; theories need good experimentalists to be verified, and the country where the discovery is made is not important (collaborations between countries are better, because Science and Discovery belong to everyone).
    The first working Quantum Computer could be built by everyone, in any place (like the LHC), with anyone who contributed putting their signature under the discovery; with many collaborating groups there is less total cost and greater speed (there is no redundancy).
    I am thinking that the discovery of America was a Columbus discovery, a Portuguese discovery, but it was poorly paid commercially (like the first Zuse computer, the first Olivetti computer, or the first Whittle turbojet); now there is a search for patents to make money on a project that will not pay commercially, because it is too expensive and too long, but that has great technological implications, like the web.

  18. Mitchell Porter Says:

    Asymptotic safety of gravity (in N dimensions, let’s say) is generally regarded as inconsistent with the holographic principle, because the density of states at high energies will still be like an N-dimensional field theory.

    Also, a critical value for the Higgs boson mass (at the edge of metastability) is associated with the vanishing of the quartic coupling at high energies, and there are a number of ways that this might come about.

    So in the long run, perhaps Shaposhnikov and Wetterich got there first because they were employing a nonstandard hypothesis which, although not right, happened to be correct in its nonstandard implications in this particular way.

  19. Douglas Knight Says:

    “…quantum computing, has never managed to make the case for itself the way particle physics and cosmology have, in terms of the human urge to explore the unknown…they don’t perceive any need to justify what they do in terms of practical applications”

    These sentences seem to me to be in contrast. QC has never managed to make this case, but particle physics doesn’t even try, does it? Maybe the difference is that QC undermines this case by invoking applications, while particle physics just bangs its chest?

  20. Darth Imperius Says:

    Your last point really does seem problematic. A great philosopher once said “man needs what is most evil for what is best in him”, and this is true even for science. Do you really think this vast industrial scientific establishment, and the rapid scientific progress we saw in the 20th century, would exist if not for World War and Cold War? Physics has been able to take advantage of its importance to the war machine, but the LHC may be the last great Manhattan or Apollo-style project. I doubt simple curiosity is enough to justify such expenditures any more, though it does seem possible that computer science can become the new physics, with fields like AI, quantum computing, cyberwar and robotics becoming the new holy grails of technological power.

    I get so tired of listening to do-gooder scientists glorify 20th century science without acknowledging the vital importance of warfare to your enterprise. Great conflicts have a way of focusing resources and heightening resolve that times of peace just can’t match. Without a Hitler or a Stalin, it’s not clear how science can maintain its funding going forward, or motivate new Von Neumanns and Tellers to pursue their Promethean dreams of power.

  21. fred Says:

    #17
    “Each great technological revolution happen after a great theoretical revolution, that has open the way to unexplored territories;”

    Hmm… what’s the theoretical revolution behind Jacquard Weaving or Virtual Reality?

  22. NKV Says:

    They had a lecture in my town following the movie, by the producer David Kaplan. I think that that was a great way to close the movie. He explained, for example, how the Higgs particle can come out of two protons colliding. He said it’s a kind of reflection of the vacuum. You get to see what’s in the vacuum. Someone should make such brief videos complementing the movie.

  23. Lukasz Grabowski Says:

    “If watching Particle Fever doesn’t cause you to feel in your bones the value of fundamental science—the thrill of discovery, unmotivated by any application—then you are not truly human. You are a barnyard animal who happens to walk on its hind legs.”

    I find this passage completely unfunny. Cheap thrills seem to me more animalistic than the lack of them. The thrill you feel, I assume from what you wrote on this blog, you’ve felt almost all your life, like hunger or lust. Seems to be just another animalistic desire some people, including you, follow. Some do bdsm instead, or bungee-jumping.

  24. Vadim Says:

    To me, the difference between the popularity of physics and TCS is that “everyone’s an armchair physicist” while the only non-experts interested in TCS are those who have an interest in computing. Not only is physics relevant to peoples’ lives, but the relevance is obvious. Most everyone has wondered at some point where the universe came from or what we’re made of at a fundamental level. But most people have no idea what an algorithm even is and can’t begin to imagine the difference between an efficient vs. inefficient one.

    On the bright side, you also don’t see people co-opting TCS for pseudo-scientific, new-age mumbo jumbo as happens with physics, especially QM. If you want to shudder, imagine what a TCS version of “What the Bleep Do We Know!?” would look like.

  25. Scott Says:

    Douglas #19: No, the two sentences seem totally in harmony to me! Particle physics correctly justifies itself in terms of the human urge to explore the unknown, whereas quantum computing incorrectly tries to justify itself almost entirely in terms of applications.

  26. JimV Says:

    Thanks for the review. I was wondering if it was worth making an effort to see it. (An effort would be required here in the suburbs.) Sounds good. Also, as the pessimists say, it may be the last “moon shot” we get. (I hope not.)

    Columbus to me was the brave and lucky ant who wandered a long way from the ant hill and found a dead dog to scavenge. That kind of exploration works in a limited way in human affairs as well as ant affairs but to find the really hard stuff you have to use science. That is, let’s see a Columbus get to Mars – that’s how I would have answered your friend (/armchair quarterback mode).

  27. Miquel Says:

    […]To me, the difference between the popularity of physics and TCS is that “everyone’s an armchair physicist” while the only non-experts interested in TCS are those who have an interest in computing. Not only is physics relevant to peoples’ lives, but the relevance is obvious. Most everyone has wondered at some point where the universe came from or what we’re made of at a fundamental level. But most people have no idea what an algorithm even is and can’t begin to imagine the difference between an efficient vs. inefficient one.[…]

    I’d like to argue that computing is as *immediately* relevant to people’s lives as Physics can be, if not more. The Hall Effect binds together this message getting stored on the hard disk of Scott’s blog server and the formation of a star billions of light years away. That’s quite an awesome connection in itself, in my opinion.

    And it shouldn’t be hard to argue that TCS and an understanding of algorithms can bring more ‘immediate’ social benefits than knowing the exact mass of the Higgs. Just a few examples:

    [*] NSA snooping scandal – this thing of tapping into Internet traffic in such an indiscriminate way – with the excuse of ‘enabling’ the detection of potential terrorists, for instance – is not only a huge breach of privacy, but also an APPALLING waste of money. What ‘magic’ algorithm is the one that filters the humongous amount of dross being captured to produce some signal?

    [*] Tamper-proof voting systems – The Australian Senate electoral system, based on preferences, was “hacked” last year, allowing “parties” such as the Australian Motor Enthusiasts Party to get one senator elected (who had several videos of himself tossing kangaroo poo at his friends on Youtube).

    [*] Candy Crush Saga – What about a game which, by design, is built for you to lose sooner or later? A game which has enabled its creators to raise several billion imaginary dollars on the Stock Exchange.

    Those are just the three most immediate examples that come to mind.

  28. Scott Says:

    Mitchell #18: Thanks, that’s exactly the insight I was looking for! So, the way I’ll think about it, from this point forward until someone else corrects me, is that what we learned from the LHC is that the Higgs is on the edge of metastability. Thus, any idea (including but not limited to asymptotic safety) that made that prediction would’ve gotten it right, and going forward, people should think about further reasons why the Higgs would be on the edge of metastability.

  29. Vadim Says:

    Miquel, I agree that the applications of computing are quite relevant to people – almost everyone uses a computer nowadays – but the methodologies of developing those applications are completely alien. It takes some background in CS to understand what a program is “made of”. To someone without that background, coding seems like an arcane art. Maybe it goes back to the fact that we all had to take physics in secondary school, but computer science was, at best, an elective for those interested in it.

  30. Scott Says:

    Vadim #29:

      Maybe it goes back to the fact that we all had to take physics in secondary school, but computer science was, at best, an elective for those interested in it.

    If that’s the case, then an obvious solution suggests itself (or two, the second being to make physics an elective)… 😉

  31. Stephen Jordan Says:

    The details of how the LHC is funded are highly unusual and very interesting. See:

    http://www.quora.com/History-of-Science/What-Is-the-backstory-regarding-how-the-Large-Hadron-Collider-came-into-being

  32. quax Says:

    “quantum computing incorrectly tries to justify itself almost entirely in terms of applications.”

    Seems to me, this is a symptom that hints at a wider perception problem of theoretical computer science in the US/Canada. Obviously, these days the pitch to funding agencies always goes for the short-term gain, but at least to me, it seems self-evident that the fundamental questions of QIS are the most intriguing (e.g., probing for answers with boson sampling is brilliant stuff).

    I begin to wonder if there may be a general difference in how CS is perceived in Europe (Motl notwithstanding). Ever since I learned about Turing’s fundamental work, I have regarded theoretical CS as an extremely rigorous field that sits between pure mathematics and physics. At least in my academic days, back in Germany, this was an uncontroversial view.

    That’s why, despite my cheering for anything that may accelerate the advent of practical quantum computing, it always seemed absurd to me to tie the theoretical advances to the latter.

  33. rrtucci Says:

    BUY LOW, SELL HIGH.
    If quantum computing (QC) and particle physics (PP) were 2 stocks, and if the stock market were not rigged by high frequency traders, hedge fund managers and investment bankers, I would advise right now:

    QC: buy, buy!
    PP: sell, sell!

  34. fred Says:

    Not sure if the movie mentions it, but this person
    http://en.wikipedia.org/wiki/Fran%C3%A7ois_Englert
    also got the Nobel prize for the discovery of the Higgs mechanism.

  35. Scott Says:

    quax #3, fred #6, Darth Imperius #20: Yes, many people seem to take the attitude that the LHC is a sort of 70-year-old remnant of the Manhattan Project—that without the one, you would never have seen governments willing to support the other. I think there’s a good deal of historical truth to that, but also four counterpoints worth bearing in mind.

    First, the further in time we get from WWII and the height of the Cold War, the less compelling the connection becomes. I.e., war may have been how we got started on the current trajectory, but it can’t possibly be the full explanation for why governments have been willing to continue on it. It’s notable, in particular, that the physicists themselves long ago stopped justifying HEP research in terms of any conceivable military application. (“It has nothing to do directly with defending our country except to make it worth defending.”)

    Second, economic competition between countries can work pretty well as a substitute motivation for war. (In the US, basic research was often justified in the 1980s by the need to “stay ahead of Japan”; today it’s often justified by the need to stay ahead of China.)

    Third, if the Manhattan Project still justifies funding the LHC, then I’d say it certainly also justifies funding the entire range of basic science! There’s not really more of a connection between nuclear weapons and the Higgs boson, than there is between (say) nuclear weapons and quantum computing, or even nuclear weapons and molecular biology.

    Fourth, if the Manhattan Project created a sort of perpetual endowment for physicists, then shouldn’t Bletchley Park have likewise created a perpetual endowment for mathematicians and computer scientists? 😀 (OK, I guess the difference is that Bletchley Park wasn’t even publicly known until the 1970s. And it did sort of create a “perpetual endowment” for math and CS; the trouble is that that endowment—the NSA and GCHQ—didn’t belong to the open scientific world.)

    Now, many people would tell me that the above points are all well and good “between us,” but they don’t have enough realpolitik in them; they don’t accurately capture the thinking of the public or of grizzled military planners. OK then, but what is my blog for, if not for saying what I think? 🙂

  36. fred Says:

    Scott #35
    I think also the media plays an important part.
    I know that in France, there have been two physicists who won the Nobel physics prize and were very good at going on TV and communicating their work and passion for physics in very simple terms.
    De Gennes in 1991, for studying order in materials.
    Charpak in 1992, for creating the successor to the bubble chamber back in 1959 at CERN.
    Those two and the French media did an amazing job in the 90s at getting the average citizen excited about serious physics.
    In the US, Greene did a great job getting people curious about string theory. And now there are plenty of interesting TV programs about physics (esp. when Scott is on them!).

  37. A.Vlasov Says:

    Good words, but I am not quite certain about the particular “basic science” you are talking about, because this post is marked with the “CS/Physics Deathmatch” flag. Is it some new theory, CS or physics, or some combination?

  38. Scott Says:

    fred #36: Yes, that’s a good point. I also forgot to mention the considerable differences between Europe and the US, with respect to their motivations for science funding — for example, projects like the LHC can be justified in Europe on the grounds of “harmony between nations,” whereas the closest equivalent argument available in the US is “USA! USA! USA!” (Maybe harmony between different US states? 🙂 But then you’d need a collider that straddled the border between several states, preferably all with influential congresspeople, rather than being concentrated in only one, which was part of the SSC’s problem.) Anyway, the Quora answer that Stephen Jordan linked to goes into much more detail about this aspect.

  39. Rahul Says:

    Scott:

    I applaud your honesty at once again pointing out the hype & false advertising in a lot of QC proposals.

    A minor quibble. Your portrayal of physicists by having them say

    ….if you don’t see the inherent value of understanding the universe, then the problem lies with you.

    makes them sound a tad mean and arrogant.

    Most physicists I’ve encountered aren’t at all like that (in fact, I’ve encountered more arrogant CS guys than physicists! 🙂 ). Physicists often go out of their way in trying to explain carefully & patiently why what they are trying to find out is important & often they make a very convincing case.

    Yes, you can envy them because they succeed at convincing the public, without engaging in too much hype / false advertising.

    OTOH, I wouldn’t count it as a strategic / PR success alone. The QC guys versus the LHC guys are trying to sell two entirely different projects. I think you too easily (implicitly) assume that, to an ideal, enlightened, rational observer, the importance of the two underlying objectives is equal. If they were, then yes, it would be a failure of approach alone. But I don’t think they are.

    An honest case for QC does not have to be as compelling as an honest case for the LHC. Of course, none of this excuses the hype & false advertising of QC.

    But just don’t judge yourself too harshly. 🙂 Perhaps your battle is fundamentally harder than theirs because your goal is less compelling than theirs. The case for the $11-billion ring under Geneva isn’t identical to the case for QC.

  40. quax Says:

    Scott #38, you are entirely correct that Europeans are much more fond of financing large international cooperative efforts for the sake of being large international cooperative efforts 🙂

    In Germany specifically it helps that Merkel is a physicist (quantum chemistry).

    But the longer this austerity mess drags on the harder it will be to continue to justify these efforts. Although CERN has a leg up in that they really produce pretty good marketing, and know how to milk the ultimate spin-off i.e. the WWW.

    While it was kick-started by the Manhattan Project, it’ll take these kinds of concerted marketing efforts to keep selling the public on the merits of HEP. The comic in Gus #10 was really spot on in this regard.

    So the HEP community was really good at running with the opportunity (although less so in the US), and it seems to me CS may indeed benefit from emulating some of this. But it’ll be an uphill struggle. HEP has big machines and the Faustian quest for understanding ‘the fabric of the cosmos’. CS is by its very nature closer to abstract math, a field that as far as I can tell completely gave up on trying to explain to the world what it is actually about.

  41. quax Says:

    Rahul #39, you are of course entitled to your opinion when you write “Perhaps [Scott’s] battle is fundamentally harder than theirs because your goal is less compelling than theirs.”

    … and maybe I am misreading this and you wrote this tongue in cheek.

    But IMHO the nature of BQP is no less compelling or fundamental; after all, it goes to the heart of QM.

  42. GDS Says:

    Scott, I read the first three paragraphs of this post yesterday and immediately went online to find if it was playing in my town (it was, but ~20 miles from me) and whether there was a showing that night (there was, at 6:05).
    I set up an impromptu date night with my wife and we went to see it without any further information about it. We both loved it, and thought it captured the drive for discovery very well.

    Another thing that you haven’t mentioned (perhaps because it was so masterfully handled) is that it portrays women physicists as if they were (gasp!) *regular* physicists! There were no “overcoming adversity” backstories, there was no specific calling out of any woman qua woman, no ham-handed shoehorning of a woman into the story. Everyone was there; everyone poured their lives into the project, everyone celebrated. Just like they were real people! So refreshing.

  43. Scott Says:

    GDS #42: Thanks, your comment made my day!

    And I completely agree with you that the movie showcased how far women have come in physics in the most eloquent way it could—by not making a big deal about it at all.

  44. Juan Miguel Arrazola Says:

    Scott,

    How confident are you that most researchers in quantum information and computation will agree with you in saying that

    “While the potential applications are wonderful cherries on the sundae, they’re not and have never been the main reason to build a quantum computer. The main reason is that we want to make absolutely manifest what quantum mechanics says about the nature of reality.”

    Personally, applications are the main reason why I am interested in conducting research in this wonderful field. In all honesty, potential applications are what truly motivate me the most to spend hours of hard work on these problems. I want to see quantum cryptography making the world more secure, I want to see quantum computers solve intractable problems, I want to see sensors operating at the highest accuracies allowed by the laws of nature.

    Am I a member of the minority? How am I being dishonest when I write this in scholarship/grant applications?

  45. rrtucci Says:

    Gee, thanks Scott, for convincing GDS that high energy physics is more fun and more discovery than quantum computing.

  46. Scott Says:

    Rahul #39:

      Perhaps your battle is fundamentally harder than theirs because your goal is less compelling than theirs.

    Alrighty then, let me explicitly argue that, to your “ideal, enlightened, rational observer,” the scientific case for building a scalable QC would be at least as strong as the case for finding the Higgs boson.

    Finding the Higgs was a final, crowning confirmation for the Standard Model, which dates back to the 1960s. Even though almost all experts were confident that the Higgs would be there, there could’ve been something totally unexpected (so the fact that there wasn’t was itself useful information), and the mass of the Higgs (125 GeV) was also interesting new information.

    Now, finding a quantum computational speedup would be a final, crowning confirmation for quantum theory itself, which dates back to the 1920s, and which (as I like to say) is the overarching operating system that the Standard Model runs on as “one particular application program.” Even though almost all experts are confident that a speedup will be there, there could be something totally unexpected (so the lack of such a thing would itself be useful information), and the particular kinds of qubits and error-correction methods that turn out to be needed to get below the fault-tolerance threshold will also provide interesting new information. Finally, a scalable quantum computer would be able to advance scientific knowledge by simulating other kinds of quantum systems (like quark-gluon plasmas and complex molecules).

    The only counterargument I can see is that, in contrast to producing Higgs bosons, building a scalable quantum computer is so “obviously, self-evidently” possible that we don’t even need experimental confirmation of it: if you believe in quantum mechanics, then you already believe in QC. Many HEP physicists would argue something like that. But whatever its other merits, that counterargument is not one that’s available to you, Rahul, since you’ve repeatedly doubted on this very blog that scalable QC would be possible!

    So, in summary: “less compelling”? You can kiss my tuchus. 🙂

  47. Scott Says:

    Juan #44: I’m pretty confident that the majority of quantum computing researchers would not agree with this post—or would agree but only with reservations. In fact, that’s exactly why I had to express my opinion so forcefully—because I, not you, am the one in the minority here! 😀

    To answer your question, if the applications are really what excites you about QC, then of course it’s not dishonest to say so—in fact you should say so. While I personally think the main reason to build a scalable QC is that it would be f-cking awesome, I also think our field is plenty big enough for people who got into it for other reasons.

    In my opinion, dishonesty only creeps in when people dramatically oversell the practical arguments—usually not by outright lying, but by leaving out crucial caveats and unknowns, or failing to take full account of what already can be done classically.

    The three examples you mentioned raise different issues, so let’s consider them one by one.

    Quantum sensing, from what little I know about it, is a pretty open-and-shut case, with clear and undisputed practical benefits.

    Quantum key distribution is already practical (at least at short distances). The trouble is, it only solves one of the many problems in computer security (point-to-point encryption), you can’t store the quantum encrypted messages, and the problem solved by QKD is already solved extremely well by classical crypto. Oh, and QKD assumes an authenticated classical channel to rule out man-in-the-middle attacks. I do think QKD could find applications in the future, especially if the bit-rate goes up and it becomes practical to send qubits to and from a satellite. But the only scenario I can imagine that would make everyone rush to adopt it is a complete break of all public-key crypto (including lattice-based systems, which for all we know withstand even quantum attacks). I like to say that QKD would’ve been a killer app for quantum information, in a hypothetical world where public-key crypto had never existed.

    As for QCs solving intractable problems: well, I’m excited about that too! But the clearest cases of exponential speedup remain factoring, related number-theoretic and cryptographic problems (i.e., creating a market for QKD 🙂 ), and of course quantum simulation. For combinatorial optimization, machine learning, and “Big Data” problems, there are probably polynomial speedups to be had, but claims of exponential speedups have often been forehead-bangingly overblown and need to be taken with huge grains of salt.
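
    To put very rough numbers on that contrast, here’s a back-of-the-envelope sketch only: the formulas below are just the standard asymptotic estimates, with every constant, overhead, and error-correction cost ignored, and with n^3 used as a crude stand-in for Shor’s gate count, so treat the outputs as purely illustrative.

      import math

      def gnfs_ops(bits):
          """Heuristic asymptotic cost of the general number field sieve for a bits-bit integer
          (L-notation exponent only; the o(1) term and all constants are dropped)."""
          ln_n = bits * math.log(2)
          return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

      def shor_ops(bits):
          """Crude ~n^3 proxy for Shor's algorithm on an n-bit number (no error correction)."""
          return bits ** 3

      def grover_queries(n_items):
          """~(pi/4)*sqrt(N) oracle queries for unstructured search over N items."""
          return (math.pi / 4) * math.sqrt(n_items)

      for bits in (1024, 2048):
          print(f"{bits}-bit factoring: classical ~{gnfs_ops(bits):.1e} ops, "
                f"quantum ~{shor_ops(bits):.1e} (exponential gap)")
      for n in (10**12, 10**18):
          print(f"search over {n:.0e} items: classical ~{n / 2:.1e} queries, "
                f"Grover ~{grover_queries(n):.1e} (quadratic gap)")

    The first gap grows exponentially with the input size; the second is “only” quadratic, which is exactly why the factoring/simulation applications and the optimization/Big-Data applications deserve such different billing.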

  48. GDS Says:

    rrtucci #45, I would have gone to a movie about quantum computation if there was one, and in fact, if those were the only two movies playing, I would have picked the one about QC instead of particle physics. So rest assured, Scott, you have not led me astray by celebrating a movie about the LHC.
    Personally, I think that “It From Bit” would be an excellent title for such a project, and if the narrative could impress upon the layman that things happening are really exactly things being computed; and then ask the question about what sort of things are allowed to happen in nature and what sort of things are allowed to be computed; contrast the different historical approaches to computation vs. physics; and situate QC as a great convergence and “coming home” of the information revolution to its proper place in the natural sciences, it would appropriately convey a good deal of the excitement in this field as well.

  49. LK Says:

    I’ve been a particle physicist for more than a decade, and for me “movies” like Particle Fever are just stuff for Americans, always eager to “have fun”. A kind of modern sub-culture where everything is either hype or nothing. I do not recognize myself in this Hollywood physics, I’m sorry.

  50. fred Says:

    Scott #46

    “the scientific case for building a scalable QC would be at least as strong as the case for finding the Higgs boson.

    Finding the Higgs was a final, crowning confirmation for the Standard Model, which dates back to the 1960s. […]
    Now, finding a quantum computational speedup would be a final, crowning confirmation for quantum theory itself,”

    Noob question – isn’t the Higgs boson (Standard Model) itself indirectly a confirmation of quantum theory? Or just trivially because so are all the alternative theories to the Standard Model?

  51. GDS Says:

    #49 LK: Wow, somebody out-Luboš-ed Luboš!

    Personally, I would be downright proud to walk into any physics department in the world with a business card bearing the title “popularizer of physics,” assuming such a position would still pay my mortgage and feed my family.

  52. Scott Says:

    LK #49: Yeah, sorry, “having fun” is really an American concept. You non-Americans wouldn’t understand it. So instead of Particle Fever, for you I’d recommend a less fun, more intellectually-serious film, like Zombeavers.

  53. Scott Says:

    fred #50: Yes, in the kind of physics done at the LHC, I think the basic principles of quantum field theory (and hence, both QM and special relativity) are “baked into” any of the alternative scenarios being seriously tested. That’s not to say that they couldn’t notice if a violation of QM or SR happened to turn up, but it’s certainly not what they’re looking for.

    More broadly, one could say that not just the LHC, but almost all of chemistry and physics has been “testing and confirming” the principles of QM over and over for nearly a century! But the key point here is that a scalable QC would test QM in a different regime than any of those past experiments did: namely, the “regime of computational complexity.” (Yes, we think many past experiments probably grazed that regime in passing—but only a scalable QC would take direct aim at it.)

  54. fred Says:

    Scott #53
    Ah, I see.
    About that “regime” of QM being different from regular physics/chemistry, where does “protein folding” fall?
    I’ve read reports suggesting close ties to complexity and QC, but maybe I misunderstood.

    http://www.technologyreview.com/view/423087/physicists-discover-quantum-law-of-protein-folding/

    “To put this in perspective, a relatively small protein of only 100 amino acids can take some 10^100 different configurations. If it tried these shapes at the rate of 100 billion a second, it would take longer than the age of the universe to find the correct one. Just how these molecules do the job in nanoseconds, nobody knows.”

  55. Juan Miguel Arrazola Says:

    Scott #47

    Thanks for your answer! One of the things that makes this blog so great is how much you interact with its readers 🙂

    With respect to your reply, I completely agree with you. The problem is not that we mention the potential applications of research in quantum information. The real problem comes when we oversell these applications and ignore the drawbacks.

    In fact, from my own experience, an understanding of the caveats and limits of a particular research direction usually leads to even better directions! Pretending that quantum computing or quantum cryptography can provide an advantage over their classical counterparts when they cannot, only makes it harder to understand those cases in which a real advantage can be had.

  56. Scott Says:

    fred #54: That’s a somewhat confused description of protein folding, because we know that even a quantum protein couldn’t just “magically” find the lowest-energy configuration by trying every possible configuration in parallel. For like the 10 trillionth time, that’s not how quantum mechanics works! Thus, even assuming the relevance of quantum effects to protein folding, a large part of the explanation for protein folding’s efficiency must come down to the structure of the search space. I.e., the search space must be “simple” enough that the protein’s quantum-mechanical evolution can reliably reach the ground state (which wouldn’t be so surprising, since proteins were subject to strong evolutionary pressure to satisfy exactly that property!). If so, though, one is led to ask whether classical algorithms like simulated annealing or Quantum Monte Carlo could also work just as efficiently. If they can’t, then the difference between quantum and classical that led to the performance gap would need to be something subtle about the search space—since again, not even the quantum process would be able to explore an arbitrary search space efficiently.

    Anyway, all of that was simply assuming that whatever’s in the linked preprint (which I haven’t read yet) is true! I note that the preprint is from 2011, is formatted for someplace like Science or Nature, but doesn’t seem to have appeared anywhere, which probably indicates that it was rejected (despite the exciting nature of its claims). So strong skepticism is advised.
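    To make the “classical baseline” concrete, here is a minimal sketch of simulated annealing on a toy one-dimensional energy landscape; the landscape, the neighbor move, and all the parameters below are invented purely for illustration and have nothing to do with real protein models:

      import math
      import random

      def simulated_annealing(energy, neighbor, x0, steps=20000, t0=1.0, cooling=0.999):
          """Generic simulated annealing: accept uphill moves with probability
          exp(-dE/T) while the temperature T is slowly lowered."""
          x, e = x0, energy(x0)
          best_x, best_e = x, e
          t = t0
          for _ in range(steps):
              y = neighbor(x)
              de = energy(y) - e
              if de <= 0 or random.random() < math.exp(-de / t):
                  x, e = y, energy(y)
                  if e < best_e:
                      best_x, best_e = x, e
              t *= cooling  # geometric cooling schedule
          return best_x, best_e

      # Toy "energy landscape": one global funnel decorated with many ripples
      # (local minima); nothing like a real protein's energy function.
      energy = lambda x: 0.1 * x * x + math.sin(5 * x)
      neighbor = lambda x: x + random.uniform(-0.5, 0.5)

      print(simulated_annealing(energy, neighbor, x0=10.0))

    Whether anything like this (or Quantum Monte Carlo) matches the real dynamics of folding proteins is exactly the open question raised above; the sketch only shows what the classical competitor looks like.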

  57. anon Says:

    I completely agree with Scott that a demonstration of scalable Quantum Computing is AT LEAST as big a deal as experimental confirmation of the Higgs boson (or inflation, etc.)

    Come on people – we need to be REALLY REALLY sure that Quantum Mechanics, as understood in 2014, is an EXACT description of Nature.

    Maybe the reason we are struggling with QG unification and primordial universe models is that we have TOO MUCH CONFIDENCE in the EXACTNESS of the postulates of QM.

    A demonstration of Shor’s algorithm to a few thousand bits would pretty much seal it for QM postulates needing NO MODIFICATIONS – and would be far more useful than getting a human on Mars (for example).

  58. James Cross Says:

    Don’t you quantum computing guys need to get a Bat Cave and spend a few billion dollars on something before we can get enthusiastic about what you are doing?

    You could also promise us the Matrix or at least a Holodeck?

  59. fred Says:

    anon #57

    “A demonstration of Shor’s algorithm to a few thousand bits would be far more useful than getting a human on Mars (for example).”

    One human, okay, sure…
    But two humans, like, say, me and Angelina Jolie, and then the earth would just happen to be hit by a giant asteroid… it’s not so clear-cut.

  60. Scott Says:

    James #58:

      Don’t you quantum computing guys need to get a Bat Cave and spend a few billion dollars on something before we can get enthusiastic about what you are doing?

    If you provide the few billion dollars, I’m willing to move in to the Bat Cave.

  61. anon Says:

    And the guy who posted above that physics progress is due to war is spectacularly wrong. Newton, Faraday, Maxwell, Einstein, Heisenberg, Schrödinger, Dirac, Pauli, etc. etc. made their discoveries in peacetime.

    (Einstein did finalise GR during WW1 – but I hardly think he and Hilbert were motivated by military considerations)

  62. fred Says:

    James #58

    Good news: it’s only a matter of time before Scott joins Oculus VR.

  63. LK Says:

    Scott, nothing wrong with having fun. But, as I said, as a particle physicist I do not feel correctly represented by this kind of stuff: sorry.
    I’m really waiting for a movie about complexity theorists 😉.

  64. quax Says:

    anon #61, IMHO you spectacularly failed at grasping the premise of the discussion.

    It’s not about whether military spending is a prerequisite for physics progress, but rather whether the amount we’ve seen spent on HEP over the last couple of decades is still a consequence of its military applicability.

  65. Rahul Says:

    Scott #46:

    finding a quantum computational speedup would be a final, crowning confirmation for quantum theory itself,

    How badly does Quantum Theory need such a confirmation? I’m not sure. Haven’t the last 100 years of results & experiments made our belief in the correctness of QM very very strong? Is there a significant cohort that’s still unsure about the core of QM?

    Wasn’t the situation with the Higgs a lot different? Pre-LHC, was there as much consensus & certainty about the theories as there is about QM? I’m sincerely asking. I genuinely don’t know.

    Ultimately though, this is a futile discussion to some extent: how compelling one finds one goal over another is bound to stay subjective. The only real test is to actually try convincing public audiences (funding bodies?) how compelling your reasons are. If Scott can do it (without hype) for QC as well as the Physicists did for the LHC, it’ll be an impressive achievement for sure.

  66. Sid K Says:

    Rahul #39 (also, Scott #46):

    Here are some reasons why I think the quest for building a full blown quantum computer and also for funding QC/QI theory is at least as compelling as the search for the Higgs boson and high-energy theory:

    1. If it were indeed true that BQP was the most computational power we can extract from the Universe, imagine what that would mean: we would have a computational characterization of the fundamental structure of the Universe. How awesome is it that the Universe fundamentally restricts, via computational hardness, what any agent that might possibly come into existence in the Universe can do? Anybody who says they’re not excited about this needs also to claim that the 2nd law of thermodynamics is not exciting.

    2. Indeed, we’re already very aware of the power of quantum mechanics: lasers, transistors, nuclear bombs, high-density hard drives, atomic clocks (which feed into GPS), to name a few. If we have working large-scale QCs, how awesome would it be to use this theory to redefine the very way we think about what information and computation are, and to have that carry real-world consequences! In other words, the fundamental mathematical structure of the Universe will have been tamed by humans! Not merely verified, but tamed.

    3. IMHO, QC/QI has been the freshest breath of air in quantum foundations since John Bell and Kochen–Specker. Quantum mechanics sets the fundamental ontology of our Universe. Insistent denials notwithstanding, I think it is safe to say we’re still confused about what picture of the Universe QM provides us. Building a QC makes this issue more pressing. When we achieve macroscopically coherent, controllable quantum phenomena, it forces all of us to confront these philosophical confusions: much like modern high-energy theory and cosmology have forced us to confront the possibility of the multiverse and think more deeply about the problem of why there is something rather than nothing.

    4. Building on the previous point, no one would deny that attempts to think about and build AIs have forced us to clarify concepts related to the problem of induction and the philosophy of mind. IMHO, thinking clearly about QM/QC/QI and attempting to build QCs will force us to clarify concepts about ontology, epistemology, locality, reality, and causality.

  67. jonas Says:

    fred #54: Oh, let’s make exaggerated claims!

    “Even a short journal article with 100 words can have 10^100 configurations. If the journalist tried these texts at the rate of 100 billion a second, it would take longer than the age of the universe to find the correct one. Just how the brains of these journalists do the job in hours, nobody knows.”

    Must be because the brain is a quantum computer, eh?

  68. A.Vlasov Says:

    IMHO, Scott’s test is relevant to a formal problem: can the axioms of QM (at least, in the version used by the QC community) peacefully coexist with the possibility of efficient classical simulation (i.e., the ECT)? A real quantum computer would provide an obvious NO answer to that question.

  69. domenico Says:

    fred #21

    I am thinking that Jacquard weaving is the automatic production of pattern weaving, where the workers are replaced by machines, so that it is an automation that can be seen like a centrifugal governor (machines that control machines, or cybernetics).
    I am thinking that if the analog mechanism is reduced to atomic dimensions (like a biological control), then quantum effects appear, so that miniaturization (to biological dimensions) gives quantum computation (a mechanism like an analog computation for senses like smell or vision); if there is low temperature, then there is wave-function overlap, and an increase of the quantum analog effect in cybernetics.
    I think that virtual reality can be seen as an evolution of the theory of perspective.
    I think that a technological revolution cannot be made without a theoretical revolution: one can say that the Iron Age happened without math, without theory, but the smithing technique was a verbal knowledge, a craft knowledge that could be translated into a theoretical metallurgy book; a random evolution, a random result (like penicillin) is possible, but an understanding of the result is always necessary (always a theory).

  70. Scott Says:

    Rahul #65:

      Haven’t the last 100 years of results & experiments made our belief in the correctness of QM very very strong? Is there a significant cohort that’s still unsure about the core of QM?

    Well, it’s not just the people who flat-out deny QM. It’s also the people like Gil Kalai, Michel Dyakonov, Robert Alicki, and possibly even yourself (in previous threads), who say they accept QM, but then hypothesize some other principle on top of QM that would “censor” quantum computing, or make the effort of building a QC grow exponentially with the number of qubits, or something like that, and thereby uphold the classical Extended Church-Turing Thesis. As I’ve said before, I don’t think they’re right, but I think the possibility that they’re right is sufficiently sane to make it worth doing the experiment.

    (Much like the people who said, sure, there has to be something new at the LHC scale to keep the laws of physics consistent, but who knows if it’s a Higgs boson? It could be just about anything. The fact that it turned out to be the Higgs, exactly the one predicted in the 1960s, doesn’t mean that it wasn’t worth doing the experiment.)

    Lastly, I’ll make the obvious remark that the QC skeptics can’t have it both ways: they can’t oppose efforts to build a QC both because it will never work, and because it will so obviously work that the question isn’t even interesting! 🙂

  71. fred Says:

    jonas #67
    Haha, it’s not my claim, it’s in that MIT Technology Review article!
    Also, a journalist’s brain is a computer (not sure how many teraflops), so it can in principle solve some classes of combinatorial problems efficiently (e.g. bipartite matching).
    I think Scott’s response to this is that the solution space for protein folding might not be as interesting as it seems. E.g., when a stone rolls down a hill, there’s a gigantic number of paths it could take to reach the bottom (taking into account the fact that it’s a chaotic system), but with enough gravity and slope it won’t get stuck halfway down and will reach the bottom no matter what, and the “optimization” problem becomes trivial (not sure if the analogy is right).

  72. Rahul Says:

    It’s also the people like Gil Kalai, Michel Dyakonov, Robert Alicki, and possibly even yourself (in previous threads), who say they accept QM, but then hypothesize some other principle on top of QM that would “censor” quantum computing

    Of course I totally accept QM. I don’t have a strong opinion either way on if QC is fundamentally possible or violates some as yet unknown constraint. Perhaps that reflects my relative ignorance of the fields.

    My argument is purely pragmatic: not every device that’s consistent with known physical laws has been built, nor is there any need to build every such device. Some devices may be entirely consistent with, and allowed by, physical laws but just too damn difficult or expensive to build.

    Most estimates I’ve seen for when a practical, scalable, even marginally “useful” QC might be built run into time-scales so long as to dampen my enthusiasm. Mind, these weren’t skeptics making the estimates. In an ideal world with infinite resources I’d have absolutely no problems with a massive scale, mega QC development project. But sadly that’s not the world we live in.

    Even in the annoying, resource-constrained world we live in, I’ve no beef with supporting a niche group of motivated, talented researchers working on trying to build a QC (or on figuring out why it cannot be built). There are many unsolved problems out there, not all as glamorous as QC perhaps. There’s some (subjective) optimum level of effort, funding, and manpower one ought to pour into cracking each such problem, if one were some all-powerful funding-allotment czar. Personally, I think we are already at that optimal point for QC (and spare me the comparison with how much money Facebook wastes on writing stupid games).

    Am I as enthusiastic about QC as I was about the LHC? No. Maybe that’s just my mistake. Time will tell.

  73. Greg Kuperberg Says:

    Scott –

    [The people who] hypothesize some other principle on top of QM that would “censor” quantum computing, or make the effort of building a QC grow exponentially with the number of qubits, or something like that, and thereby uphold the classical Extended Church-Turing Thesis. As I’ve said before, I don’t think they’re right, but I think the possibility that they’re right is sufficiently sane to make it worth doing the experiment.

    I agree. However, (1) if they are right, their explanations for how they could be right are grossly inadequate. (This is a pattern that has also developed among alternatives to inflationary cosmology and to string theory.) (2) Although I agree that it’s an important reason to do the experiment, it is not by any means the main reason. I am confident, albeit not 100% certain, that Peter Shor et al are right.

  74. Observer Says:

    The Extended Church-Turing Thesis has about 80 years of empirical confirmation. How many anti-QC principles do you want?

    Scott, do you similarly claim that the SUSY skeptics must prove that some accepted principle censors SUSY? At some point it becomes wasteful to spend billions of dollars looking for SUSY particles when all attempts have failed.

  75. Scott Says:

    Observer #74:

      The Extended Church-Turing Thesis has about 80 years of empirical confirmation. How many anti-QC principles do you want?

    LOL, quantum mechanics also has about 80 years of empirical confirmation! (Or more like 90, actually. And for the ECT, it’s more like 50 years.) And no one has convincingly explained how to reconcile quantum mechanics with the ECT, so as to kill quantum computation. Given that the revision quantum mechanics suggests to the ECT is so subtle and interesting, that (as Greg says above) no one has any good ideas for how to modify or add to quantum mechanics to uphold the ECT, and that the people who formulated the ECT simply weren’t thinking about quantum mechanics, it seems reasonable to guess that QM will simply win this bout, and the ECT will need to be revised (or “upgraded”) to the Quantum ECT.

      Scott, do you similarly claim that the SUSY skeptics must prove that some accepted principle censors SUSY?

    That’s a terrible analogy, because SUSY could simply not be there—or be there but only at the Planck scale—without creating any true crisis for currently-accepted physics. “All” it would mean is that various apparent fine-tunings in the known laws of physics would remain unexplained, or would be even worse than they would be with SUSY. But that’s a bullet that one could always simply bite and move on.

    A better analogy is the Higgs boson. My understanding is that, without the Higgs or something else new at LHC energies, the amplitudes for certain electroweak processes at LHC energies would exceed 1—an obvious mathematical absurdity. So we knew, even before the LHC was built, that either the LHC would find the Higgs, or else it would find something else even more interesting than the Higgs.

    In the same way, the possibility of using Shor’s algorithm to factor large numbers in polynomial time is a prediction of 1920s quantum mechanics, plus some extremely minimal assumptions about the ability to prepare, control, and measure quantum states as desired. So we know, even before building a QC, that either it has to work, or else there has to be some new physical phenomenon even more interesting than QC that prevents it from working.

    And yes, of course it’s possible that human beings will simply give up, or run out of money, before they either build a scalable QC or discover the reason why it couldn’t have worked. The same could have happened with the Higgs boson: physicists could have given up before finding either the Higgs or whatever else it was that corrected the electroweak amplitudes at LHC energies. But fortunately, they didn’t give up: with the support of their civilization, they persevered, so that now we know the truth.

  76. fred Says:

    Scott #75
    But, if we manage to build a QC that successfully implements Shor’s, a perfect invalidation of the ECT would still require formally proving that factoring can’t be done classically, right?

  77. no_one Says:

    Rome was not built in a day, and the same is true of the LHC, the Manhattan Project, the Apollo program, and the Hubble Space Telescope. The LHC was the culmination of a long line of particle accelerators. The proof of concept that fission is feasible was demonstrated by Fermi’s working fission reactor. The proof that fusion is feasible was already out there for everyone to see. Rockets and telescopes have been around for a very long time.

    I do not see why QC should be given a free pass in this regard, and I am certain Scott is not arguing for that (Scott has quoted Babbage more than once to make this precise point). If QC is to become successful as a computational paradigm, there will need to be a very long line of tiny steps building up to the eventual machine. Thinking *only* about implementing Shor’s algorithm is not going to serve any purpose towards realizing a QC. Perhaps what is needed is for the QC experts to lay out a program of incremental steps for the next 50-100 years. Of course this is already obvious to those in the QC business, and I am sure such a program is under way in many research labs.

    If someone had thrown $100 billion at IBM in 1980, would that have sped up the emergence of Google? I am therefore not quite sure whether the present is the right time to give QC a ton of attention and resources. People tend to take a wait-and-see approach, and that is not entirely a bad thing. Besides, what’s the hurry? If we get there 20 years late, isn’t that way better than never getting there (there := scalable QC)?

  78. Peter Nelson Says:

    fred #76

    Yes, it would. But instead of using Shor’s algorithm, you could implement BosonSampling at a large scale. This would be both easier than implementing Shor’s algorithm (it doesn’t require a general-purpose QC) and a more persuasive invalidation of the ECT (because if BosonSampling can be efficiently solved on a classical Turing machine, then the polynomial hierarchy collapses).

  79. Scott Says:

    fred #76: Yes, what Peter Nelson #78 said!

    no_one #77: Yes, of course you want to work toward universal QC in a sequence of incremental steps, rather than going straight for factoring a 2000-bit number. So that’s exactly what labs all over the world have been doing for the past 15-20 years: first demonstrating 1-qubit gates, then demonstrating 2-qubit gates, and now (like Martinis’s group) starting to demonstrate the requirements of quantum error-correction in a potentially-scalable architecture.

    And I completely agree with you that the time has not yet arrived for a Manhattan Project style push for scalable QC: there’s still a lot of basic research that remains to be done. (In fact, that’s another thing I dislike about the “gizmo vision” of QC! It keeps pushing people toward saying that scalable QC will be practical in the next few years—who wants a gizmo that will take 50 years to arrive?—even when they know it’s wildly overoptimistic.)

  80. quax Says:

    Scott #79, so are you arguing that someone like John Martinis is simply pulling his timeline out of his posterior?

  81. A.Vlasov Says:

    Scott #75, I feel some circularity in your reply to the second question. E.g.: why must QC skeptics prove that some accepted principle censors factoring/QC? Because factoring/QC is a prediction of QM (and so only some new physical phenomenon could prevent it from working).

    But then what is the difference between the propositions “accepted [QM] principles cannot censor QC” and “QC is a prediction of QM”?

    Maybe it would be simpler to accept QC as a new theory (instead of as a complete equivalent of QM), with all the possible advantages and risks. Then the SUSY/Higgs etc. comparisons would be more justified.

  82. Scott Says:

    quax #80: In all the talks by John Martinis that I attended, he was quite careful in making any projections. E.g., he would say that in 5 years he thinks they could demonstrate more of the basic requirements for fault-tolerance and scalability, but not that in 5 years they’d already have a practical device. If I missed something, could you please point me to a talk or paper where he says anything like the latter?

  83. Scott Says:

    A.Vlasov #81: I don’t see the circularity you allege. I’m not defining QM to be “that theory which predicts that scalable QC is possible.” Rather, I’m asserting that, if you look at any QM textbook since Dirac’s in 1930, then it’s very hard to understand why what’s written in those books doesn’t predict the possibility of QC, as soon as the question is asked. At the very least, I think the burden is on the skeptics to explain what vitiates the prediction.

  84. A.Vlasov Says:

    Scott #83, I partially agree with Greg #73 – the skeptics have failed to explain that, and so I think the argument of Observer #74 may be valid.

  85. fred Says:

    Peter #78
    Scott #79

    Ok, I seriously need to do my homework about BS… 🙂

    Does this have any merit?
    “Will boson-sampling ever disprove the Extended Church-Turing thesis?”
    http://arxiv.org/abs/1401.2199

  86. Mike Says:

    “I think the burden is on the skeptics to explain what vitiates the prediction”

    While I agree, I do think the proponents should announce progress when they see it:

    A record quantum entanglement: 103 dimensions —
    More quantum dimensions easier to achieve than more qubits, researchers find.

    http://www.kurzweilai.net/a-record-quantum-entanglement-103-dimensions

  87. fred Says:

    Oh, regarding that paper I linked in #85, maybe it wasn’t taking the latest “Scattershot BS” ideas into account.

  88. pete Says:

    Vaguely apropos of the discussion here, I’m finding it impossible to resist asking: could anyone who knows this stuff comment on http://arxiv.org/abs/1403.7686 which, as far as I can make out, seeks to get a fresh perspective on the measurement problem from the POV of computational complexity, on the basis of a claimed proof that solving the Schroedinger equation is NP-hard?

  89. Scott Says:

    pete #88: The abstract of that thing looked so nonsensical that I didn’t make it through to the actual paper. If anyone has and wants to explain it here, that’s fine.

  90. Scott Says:

    fred #87: That’s correct. Besides omitting Scattershot, the other two problems with that paper were that

    (1) the difficulty it pointed out with scalability was something that Alex and I understood perfectly well and discussed in our original paper.

    (2) saying “hmm, scaling sure seems hard” is not in any way a proof of impossibility of scaling (something that Scattershot just makes a bit more dramatic…)

  91. Arrowson Says:

    Scott, Columbus had a venture into (through) unknown regions, which is not at all the case with D-Wave. The analogy is altogether wrong.

  92. Rahul Says:

    And no one has convincingly explained how to reconcile quantum mechanics with the ECT

    Naive question: Is the ECT “provable” in any sense of the word? It’s only a hypothesis, right? Something that has been empirically borne out again and again, yes. Or is there really a proof?

    Another question: say we have to drop the ECT from our basket of hypotheses (axioms?), what are the biggest consequences? Are there any consequences that reach outside the domain of complexity theory? Even within complexity theory, what are the major results that would fall, i.e., what are the big theorems we take for granted that are contingent upon the ECT?

    In the sense that any modification needed to the core of QM might cascade into quite far ranging consequences.

  93. Rahul Says:

    The abstract of that thing looked so nonsensical that I didn’t make it through to the actual paper.

    Why does arxiv make it so damn hard to find out author affiliation?

    Anyone know who Arkady Bolotin is or what his credentials are?

  94. Sandro Says:

    In the same way, the possibility of using Shor’s algorithm to factor large numbers in polynomial time is a prediction of 1920s quantum mechanics, plus some extremely minimal assumptions about the ability to prepare, control, and measure quantum states as desired. So we know, even before building a QC, that either it has to work, or else there has to be some new physical phenomenon even more interesting than QC that prevents it from working.

    Given what we currently know, could you reasonably speculate where such an impediment might turn up?

    For instance, could the factors involved in preparing, error-correcting, controlling, and measuring all add up sufficiently to bring a QC back into the classical realm, or are we sufficiently confident that we know the cost of these operations that it would have to be something even more speculative?

  95. Scott Says:

    Arrowson #91: I don’t know what you’re talking about. The performance of quantum annealing and the adiabatic algorithm on real-life NP-hard problem instances is a “vast unknown region” if anything in quantum computing is. It’s certainly nothing like Shor’s factoring algorithm, where the potential speedup you could get (at least, compared to the best known classical algorithms) is already well mapped out.

  96. Scott Says:

    OK everyone: At several people’s request, I’ve now taken a look at arXiv:1403.7686, and I can confirm that it’s complete garbage. The author is simply mistaken that solving the Schrödinger equation is “NP-complete” in any interesting sense: his argument for that seems to rely on a rediscovery of the adiabatic algorithm, but he doesn’t mention that the spectral gap could be exponentially small (and hence the annealing time could be exponentially large)—the central problem that’s been the bane of Farhi and his collaborators (and, of course, of D-Wave) for the past 15 years.
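    For reference, the rough textbook form of the adiabatic run-time bound (the constants and the exact power of the gap vary between rigorous formulations) is

      \[
      T \;\gtrsim\; \frac{\max_{s}\,\lVert \partial_s H(s) \rVert}{\Delta_{\min}^{2}},
      \qquad
      \Delta_{\min} \;=\; \min_{s \in [0,1]} \big( E_1(s) - E_0(s) \big),
      \]

    so an exponentially small minimum spectral gap \(\Delta_{\min}\) translates directly into an exponentially large annealing time \(T\).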

    Also, even if you thought (for totally mistaken reasons) that quantum mechanics let you solve NP-complete problems in polynomial time, that might (or might not) suggest to you that quantum mechanics should be replaced by something else. But until you’d actually found a replacement, and given some sort of evidence for its truth, I don’t see how you could claim to have thereby “solved the measurement problem”!!

    As additional problems, the author appears to conflate the P vs. NP problem with the question of whether NP-complete problems can be efficiently solved in the physical world, a common novice mistake. And also, he seems comically unaware of everything that’s been discovered in quantum computing theory over the past 20 years relevant to the issues he’s writing about—as if he just emerged from a cave.

  97. quax Says:

    Scott #82, I didn’t state that he’d have a practical device, but that he has a timeline. It was an attempt to get a bit more concrete, i.e. trying to pin down your general lament about dishonest QC research proposals.

    To me your complaint just seems very unspecific and nebulous.

    Unless you propose they’ve all been hired by D-Wave, who and where are all these QC researchers with their irresponsible promises?

    Just would like to see some specific examples.

  98. Scott Says:

    quax #97: This is yet another case where I can’t win. If I don’t name names, then I’m “unspecific and nebulous,” while if I do, then I’m petty and mean-spirited and attacking people who are doing good research—and of course, in each particular case, there will turn out to be a perfectly-good reason why they made whatever overhyped claims they made and why they weren’t overhyped at all. In a way, D-Wave makes things easy for me, by being so over-the-top in its public pronouncements as to remove all doubt about whether I should call them out by name.

  99. jonas Says:

    Anyway, although I don’t have anything to add really, I have read this and the previous writeup and have found them interesting. Thank you for finding the time for blogging even with your daughter, Scott.

  100. Scott Says:

    Sandro #94:

      Given what we currently know, could you reasonably speculate where such an impediment might turn up?

    Sorry, not really! If you truly want speculations along these lines, then see the numerous past threads on this blog where Gil Kalai, Michel Dyakonov, and others offered their views about why scalable QC will be impossible (even though QM itself is just fine). Personally, I’ve never been able to follow their arguments, but maybe you’ll be able to.

    My own view is close to that of Greg Kuperberg in comment #73: yes, it’s conceivable that the skeptics will turn out to be right, but if so, their current explanations for how they could be right are grossly inadequate.

  101. Scott Says:

    Rahul #92: No, I don’t think the ECT is “provable,” because it’s ultimately an empirical statement about the laws of physics. If we had a final theory of physics, then conceivably we could prove that that theory satisfied the ECT, and thereby reduce our uncertainty to whether the theory was truly the “final theory” at all. But we don’t have such a final theory. And moreover, almost all of the best theories that we currently have are quantum theories, and of course QM strongly suggests that the ECT (as originally formulated) is false.

    For that reason, rejecting the ECT has at least one extremely important consequence for physics: namely, the consequence that quantum mechanics could possibly be true! 🙂 (Which is a strange way of saying that quantum mechanics should cause us to severely question the ECT.)

  102. Scott Says:

    jonas #99: You’re welcome! Thanks for the thanks! 🙂

  103. quax Says:

    Scott, before getting back on topic let me 2nd jonas #99: hope you’ll always be able to make some time for blogging.

    As to #98:

    The no-win scenario hinges on your ability to turn lemons into lemonade. For instance, you could go back and pick past examples of QC research that didn’t live up to the original promises and then argue for a ‘postmortem’ case study as to why. From there one could frame any concern about over-the-top current promises as an attempt to avoid past disappointments.

    My point being, people and especially scientists are often open to constructive criticism.

    You seem to be very serious about this. In my opinion, if you really want to effect change, then you need to take it to the next level.

  104. Sandro Says:

    And moreover, almost all of the best theories that we currently have are quantum theories, and of course QM strongly suggests that the ECT (as originally formulated) is false.

    Although ‘t Hooft demonstrated that supersymmetric string theory is equivalent to a deterministic, discrete-time cellular automaton operating only on integers.

  105. Scott Says:

    Sandro #104: ‘t Hooft can’t even explain Bell inequality violation, let alone more complicated quantum phenomena. Or rather, the type of “explanation” he offers—involving a cosmic conspiracy between your brain and the subatomic particles you’re measuring—is not a sane one.

  106. fred Says:

    QC had better work, because simply recognizing that it’s failing would be extremely long/expensive/wasteful. It’s obviously working with a handful of qubits, and it would take many teams, using entirely different approaches (all with different noise/error profiles) and all failing at scalability, to suggest that there is maybe some inherent practical “electric fence” that always forces us to use exponentially more resources as the number of qubits increases linearly.

    What I’m not clear about, though: what is the inherent source of errors in QC? Is it just because of engineering limitations, or is it because of the uncertainty principle?

  107. Sandro Says:

    Scott #105: I don’t see the problem. The alleged “cosmic conspiracy” seems to amount to postselection. A similar criterion resolves the CTC paradoxes.

    You’re right that ‘t Hooft’s CA model is still in early days and very incomplete, but I don’t think most charges levelled against it and superdeterminism itself are convincing.

  108. Scott Says:

    Sandro #107: What I wouldn’t give to lock Geordie Rose and Gerard ‘t Hooft in a room together, for a super-quantum-speedup versus superdeterminism cagematch…

    If CTCs existed, then at least there would be a compelling physical reason to introduce the metaphysical lunacy of postselection! ‘t Hooft introduces it simply because he doesn’t like quantum mechanics, and in so doing, offers a “cure” that’s maybe a trillion times worse than the disease.

    And as a general rule, if you’re trying to invent a classical model for QM, and you see the Bell inequality as “a detail to be dealt with later,” then your model is not “incomplete”: it’s incompletable.

  109. srp Says:

    1. I love all macroscopic demonstrations of quantum phenomena–Bose-Einstein condensates, non-linear materials, plasmons, quantum computing, superconductivity, etc.–and support demoing them. They’re supercool (some literally), and it’s not hard to imagine that developing better technological control over these phenomena will pay off later. Most of the work doesn’t involve giant single-instrument commitments, which maintains a healthy research ecology.

    2. Scott’s suggestion that QC could help do calculations of QCD stuff suggests a piggyback funding appeal with particle physics that could be given a nice popular twist. The sad truth appears to be that even though we keep hearing about the perfection of the Standard Model and moaning about how the poor particle physicists can’t find any anomalies, that claim appears to be a bit creative. There are huge problems with understanding protons, especially their spin properties. Google transversely polarized colliding proton beams, for one example. The theory appears to be not just off, but way off, qualitatively and quantitatively. The apparent defense is that it’s so hard to calculate the predictions of QCD that maybe the current predictions are really just math mistakes, not theoretical flaws. Great! QC to the rescue, if you can wait the necessary decades to get the machines working.

    3. Your ideas about computation being fundamental to understanding of the universe require more hype, I’m afraid, if they are to reach the public in the way that thinking about the Big Bang or fundamental particles does. I’m not sure I buy them, but in any case they do not have the currency needed to generate support for pure research.

    4. I was waiting for you to refer to Hardy’s A Mathematician’s Apology. You seem to possess a similar sensibility, right down to the politics.

  110. Sandro Says:

    Scott #108: Possibly incompletable as it stands perhaps, but point taken! One final note:

    If CTCs existed, then at least there would be a compelling physical reason to introduce the metaphysical lunacy of postselection!

    Perhaps postselection is a huge pill to swallow, but it seems worthwhile to have at least one viable superdeterministic scientific theory to give a completely different perspective on quantum phenomena. Certainly having de Broglie-Bohm as a viable deterministic QM has provided interesting contrasts to claims assuming indeterministic Copenhagen.

    Further, recent attempts at deriving QM from information theory have provided some unique insights. It seems reasonable to believe that a viable superdeterministic theory, if such is possible, would yield even more insights of the same kind. For instance, given superdeterminism something like postselection is needed to ensure Bell inequality violations occur, but why must Bell inequality violations occur? There must be some logical reason for this global invariant, and theories that give up realism can’t really answer such questions.

    Some of these types of questions can now be explored in information-theoretic derivations of QM, but doesn’t it seem plausible that a superdeterministic variant might raise some interesting questions of its own, and provide some interesting answers to other questions?

  111. quax Says:

    Scott #108, I think Geordie and ‘t Hooft would have a blast, because where you seem to assume a dogmatic dislike of QM, I see a pragmatic playing with a CA toy model to see how much can be coaxed out of it. It may very well find some applications in computational quantum chemistry.

  112. Rahul Says:

    Scott #108:

    Can we add John Sidles as referee?

  113. A.Vlasov Says:

    Scott #83

    Rather, I’m asserting that, if you look at any QM textbook since Dirac’s in 1930, then it’s very hard to understand why what’s written in those books doesn’t predict the possibility of QC, as soon as the question is asked.

    I would say that in the quantum circuit model factoring becomes inevitable, but the usual axioms of QM are still formally compatible with the ECT.

    A simple proof principle exists for such things, though sometimes it may be almost completely useless from a practical point of view: it is enough to exhibit some mathematical model compatible both with the axioms and with efficient classical simulation.

    I think we may use the 5 axioms from the second page of Fuchs’s work 1401.7254, and matchcircuits as a model compatible with efficient simulation, e.g. see 0908.1467.

    Certainly, factoring shows in the same way that the same axioms are also compatible with the conjecture that classical simulation is impossible.

  114. Scott Says:

    A.Vlasov #113: No, I don’t think so! Matchcircuits correspond to a universe consisting entirely of identical, noninteracting fermions. But if you read any standard QM textbook, you’ll find not only fermions mentioned but also bosons, and (more importantly) the possibility that two or more particles can interact, for example via the electromagnetic force. And that’s enough to get you up to BQP-universality.

  115. A.Vlasov Says:

    Scott #114: Matchcircuits implement entanglement between qubits via two-qubit gates, so if you are not fixed on a particular physical implementation, the argument about the absence of interaction is not necessarily valid. But in any case, matchgates are a valid mathematical way to prove compatibility.

    It may be compared with the well-known historical example of the parallel postulate in Euclidean geometry. To show that it does not follow from the other axioms, it was enough to exhibit a model of non-Euclidean geometry. In that case, your sincere disagreement could be compared to an argument against the Lobachevsky plane: in our world straight lines are straight and infinite, but you show a model with finite segments of circles.

  116. Thomas Says:

    I don’t understand why, if the physical world can solve NP-complete problems efficiently, that doesn’t imply we could take advantage of such systems to solve NP problems in general efficiently. I get that that wouldn’t mean P=NP, but perhaps that BQP=NP?

  117. Scott Says:

    Thomas #116: If the world was classical and deterministic, then NP-complete problems being efficiently solvable in the physical world would mean P=NP. If the world was classical and randomized, it would mean NP⊆BPP. If the world was quantum, it would mean NP⊆BQP. And if the world was described by some other complexity class—call it PhysP—it would mean NP⊆PhysP. In general, what it would mean for complexity classes depends on which complexity class you take to describe the physical world! And I would hope someone writing a “revolutionary paper” on this subject would understand that elementary point.
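    Summarizing the same point in one display: if NP-complete problems were efficiently solvable in the physical world, then

      \[
      \mathrm{NP} \;\subseteq\;
      \begin{cases}
      \mathrm{P}\ (\text{i.e., } \mathrm{P}=\mathrm{NP}) & \text{classical, deterministic world}\\
      \mathrm{BPP} & \text{classical, randomized world}\\
      \mathrm{BQP} & \text{quantum world}\\
      \mathrm{PhysP} & \text{world described by some other class } \mathrm{PhysP}
      \end{cases}
      \]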

  118. Sandro Says:

    Scott #117: That sounds like it would make for an interesting article. I don’t follow how classical worlds would imply P=NP or NP⊆BPP, so that would be an interesting read!

    Also, the world being quantum implying NP⊆BQP doesn’t seem to jibe with what I’ve read that it’s believed that BQP and NP don’t have a strict subset relationship.

  119. Scott Says:

    Sandro #118: Sorry, by “it” I meant “NP-complete problems being efficiently solvable in the physical world”—a hypothetical situation that I don’t, indeed, believe actually holds. I’ve edited the comment to clarify that.

  120. Jerry Says:

    If quantum computation is to become a reality, quantum entanglement will need to become the law of the land.

    Quantum decoherence is the ADHD of quantum mechanics that all the Ritalin in the universe won’t cure. If the only places where quantum computing might be practical are at a black hole’s event horizon or on the surface of Titan (where it’s colder than a witch’s qubit), what good is it in the kitchen?

    ⟨ Jon Stewart|Me ⟩

  121. Scott Says:

    Jerry #120: A few corrections to what you say.

    1. Entanglement IS the law of the land — in fact, too much so! Decoherence is nothing other than unwanted entanglement.

    2. A black hole’s event horizon is an unbelievably HOT environment, at least to an observer who’s stationary there.

    3. The best environment for QC is almost certainly a lab. Interstellar space, at 2.7 degrees Kelvin, is actually too hot for many QC proposals.

    4. If, hypothetically, QC were practical but only on the surface on Titan, then I’d count that as a practical SUCCESS! The world’s QC center could simply be installed on Titan by robotic spacecraft, and the world’s researchers could divvy up time to dial in to it, much like with the Hubble telescope.

  122. Rahul Says:

    I’ve a factual question. What is the actual amount of Govt. money (say NSF funding or some EU body etc.) that goes into QC? Does anyone know or have a list of funding for various areas of CS?

    I’d be curious to know.

    Also, what fraction of a typical top-20 CS department’s faculty works on QC? (I assume there’s some from EE / ECE etc. too)

  123. Jerry Says:

    Comment to Scott:

    #1. Kind of my point. Quantum Decoherence thrives as the ADHD of QM. How do we ever hope to tame it?

    #2. I get the B.H. analogy from Prof. Susskind’s O/L lectures, where he toys with entangled qubits and qutrits. If QM is so fragile as to mandate the No-Cloning rule, why is it so robust as to also enforce the No-Deleting rule?

    #3. If 2.7 K is too “hot”, I have made my point. What would it take to be able to do quantum computing on the kitchen table? Shipping my overheated quantum processor back to Intel will only benefit the shipping giants (FedEx + UPS = FedUp). B.T.W., “Kelvin” has no “degree” in front of it.

    #4. The Hubble might let you see your quantum computer, but we’ll need the Russians to get us there. Good luck with that!

  124. quax Says:

    Scott #119, so if a complex protein folds itself, that doesn’t count as an efficient solution to exactly that NP problem?

    Don’t think that’s what you are trying to say, but that’s how it reads to me.

  125. Scott Says:

    quax #124: There are two different problems that you need to distinguish here—boundless confusion has been generated by people’s failure to do so.

    Problem #1 is finding the absolute minimum-energy folding configuration of a given protein (let’s say, in the H-P model). That problem has been shown to be NP-hard, if the number of amino acids is the input size.

    Problem #2 is finding the folding configuration that Nature itself finds (not necessarily the absolute minimum one). That problem is in BQP (and therefore, almost certainly not NP-hard), since in principle, one could always solve it by simulating the whole quantum dynamics of the protein on a quantum computer.

    If you want to unconfuse yourself about the computational complexity of protein folding, then the key realization is that these two problems are not the same, despite being constantly conflated! So for example, suppose you created a string of several trillion amino acids whose minimum-energy folding configuration encoded a proof of the Riemann hypothesis (something that should be possible, in principle, by the NP-hardness of Problem #1). In that case, there’s absolutely no reason to think Nature will solve Problem #1: it will instead solve Problem #2, the non-NP-hard one, by finding some nice, convenient local minimum and staying there.

    As a real-life example of this, prions (the agents of mad cow disease) are apparently proteins that folded into suboptimal configurations. One reason why we don’t see this more often is pretty simple: there’s been enormous selection pressure for proteins to evolve to be easy to fold! If a protein didn’t have a nice, smooth energy landscape to roll down to the global optimum, but rather a jagged, NP-hard one, that protein would tend to get weeded out of the gene pool.
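    If it helps to see the local-minimum point in the simplest possible setting, here is a toy (and deliberately non-biological) illustration: greedy descent on a rippled one-dimensional landscape stops at whatever local minimum it happens to roll into, the analogue of Problem #2, not Problem #1. The landscape and starting points are made up for the example.

      import math

      def greedy_descent(energy, x, step=0.01, iters=100000):
          """Follow the locally downhill direction until stuck: this finds *a*
          local minimum (the analogue of Problem #2), not necessarily the
          global minimum (the analogue of Problem #1)."""
          for _ in range(iters):
              here = energy(x)
              left, right = energy(x - step), energy(x + step)
              if left < here and left <= right:
                  x -= step
              elif right < here:
                  x += step
              else:
                  break  # no downhill neighbor: stuck in some minimum
          return x, energy(x)

      # Rippled toy landscape: global minimum near x = -0.31, local minima elsewhere.
      energy = lambda x: 0.1 * x * x + math.sin(5 * x)

      print(greedy_descent(energy, x=4.0))   # settles in a nearby local minimum
      print(greedy_descent(energy, x=-0.5))  # this basin happens to contain the global minimum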

  126. Scott Says:

    Jerry #123:

      Kind of my point. Quantum Decoherence thrives as the ADHD of QM. How do we ever hope to tame it?

    Well, decoherence has been tamed well enough to create quantum superpositions of buckyballs and even more complex molecules—as well as countless more complicated entangled states. My view is that, if you think decoherence can be tamed well enough to do all those things but not to do quantum computing, then the burden lies on you to explain where the dividing line is.

      I get the B.H. analogy from Prof. Susskind’s O/L lectures, where he toys with entangled qubits and qutrits. If QM is so fragile as to mandate the No-Cloning rule, why is it so robust as to also enforce the No-Deleting rule?

    No-cloning and no-deleting both follow from a much more fundamental principle, namely the unitarity of quantum mechanics. If you want to understand what unitarity means, and you like Lenny Susskind, then try his wonderful new book Quantum Mechanics: The Theoretical Minimum. Or you could try my own Quantum Computing Since Democritus.
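    (If you want the two-line version of where no-cloning comes from: suppose some unitary \(U\) cloned arbitrary states, \(U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle\) for every \(|\psi\rangle\). Applying that rule directly to \(|+\rangle = (|0\rangle + |1\rangle)/\sqrt{2}\) gives

      \[
      U\big(|+\rangle \otimes |0\rangle\big) \;=\; |+\rangle \otimes |+\rangle \;=\; \tfrac{1}{2}\big(|00\rangle + |01\rangle + |10\rangle + |11\rangle\big),
      \]

    while applying it to \(|0\rangle\) and \(|1\rangle\) separately and invoking linearity gives \(\tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)\). The two disagree, so no such \(U\) exists; a similar linearity argument rules out deleting. Nothing beyond textbook linearity is used here.)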

      If 2.7 K is too “hot”, I have made my point. What would it take to be able to quantum computing on the kitchen table?

    No, if you imagine quantum computing as something that has to get onto “kitchen tables” to succeed, then I’ve made my point. The media has failed you: you’ve bought fully into what I called the “gizmo vision” of quantum computing in the post, imagining a quantum computer as if it were the next iPhone. Scalable, fault-tolerant QC is best thought of as a scientific challenge (with engineering aspects). The right question to ask, at this point in time, is not whether it can succeed in this or that imagined consumer marketplace, but simply whether it’s possible at all.

      The Hubble might let you see your quantum computer, but we’ll need the Russians to get us there. good luck with that!

    The US discontinued only its manned space program, not its unmanned one (though planetary exploration faces pretty grim political prospects). What I had in mind (in the silly hypothetical scenario) was a robotic mission to Titan. There’s absolutely no scientific reason to send humans to space, and there’s never been any.

  127. pete Says:

    Scott #96, just wanted to say a belated “thanks” for delving into and evaluating the paper I linked, which evidently was not huge fun for you. As is apparent, I’m a quantum / complexity doofus (though at least mine was a bleeding edge doofosity in this instance, it seems) so apologies for that! On the plus side, I did find your npconplete.pdf on this site, which tops out google results for ‘efficient physical solution np-complete’.

    I admit to a nagging feeling that results on computation could make a significant contribution to physics fundamentals, and the exact relation between physical reality and computation remains mysterious for me, but I’ll try harder in future to resist soliciting opinions here on “revolutionary” findings such as the one I linked!

  128. Darrell Burgan Says:

    Scott #96: while the paper you reference is clearly off, I wonder what your thoughts are about whether there is a deep connection between computational complexity theory and theoretical physics?

  129. Scott Says:

    Darrell #128: Well, I’ve devoted my career to such a connection, so I’m going to go with “yes”! For more, see (for example) here or here, or check out my Quantum Computing Since Democritus book.

  130. Scott Says:

    Rahul #122:

      What is the actual amount of Govt. money (say NSF funding or some EU body etc.) that goes into QC?

    I don’t know, but I can tell you that it would be hard to quantify, because there’s a lot of experimental physics research these days that’s partly motivated by QC, but also motivated by other goals, like sensing, atomic clocks, etc. etc. So different people might choose which projects to count in extremely different ways.

      Also, what fraction of a typical top-20 CS department’s faculty works on QC?

    My guess is somewhere around 1%. At MIT CSAIL, for example, it’s just me and Peter Shor, out of 110 faculty total. And at Berkeley, it’s just Umesh out of 61 faculty. And at Stanford, Princeton, Cornell, and Carnegie Mellon, there’s no one in CS who does quantum computing. (On the other hand, the University of Waterloo and Caltech might singlehandedly raise the percentage a bit…)

  131. Scott Says:

    A.Vlasov #115: OK, but I was making a further point. Namely that, if you look in any quantum mechanics textbook since the 1930s, the particular examples of quantum systems discussed there (e.g., interacting spins, interacting bosons and fermions) are already enough to get you over the threshold of BQP-universality. So this is really not analogous at all to Euclidean vs. hyperbolic geometry, which are two equally reasonable and self-consistent universes that are compatible with Euclid’s first four axioms. There’s a fundamental asymmetry between computational universality and the lack of it: if you want non-universality, then you have to “hobble” the laws of physics in very strange ways to get it. And I’d say it’s been pretty clear since the early days of QM that the laws of physics are not, in fact, hobbled in the relevant ways.

  132. Rahul Says:

    Scott:

    My guess is somewhere around 1%. At MIT CSAIL, for example, it’s just me and Peter Shor, out of 110 faculty total. And at Berkeley, it’s just Umesh out of 61 faculty. And at Stanford, Princeton, Cornell, and Carnegie Mellon, there’s no one in CS who does quantum computing.

    Oh ok. I wasn’t aware it was so low. My impression was much higher.

    If it is indeed so low, then thanks for correcting my mis-perception. Maybe you guys aren’t consuming that much resources after all. 🙂 My bad.

  133. Greg Kuperberg Says:

    Scott –

    The US discontinued only its manned space program, not its unmanned one

    Actually, Bush began a new multi-billion-dollar “manned” space program, which has been continued under Obama. It’s manned except for one wrinkle: after 10 years, nothing manned has actually been launched, only unmanned rehearsals, and not much of that either. It’s not clear that anything manned will ever be launched in this “manned” program. It has also splintered into rival subprograms, with Obama and Congressional Republicans taking opposite sides in some cases. The target missions of these manned launches-to-be are, variously: the space station, the moon, a captured asteroid orbiting the moon, and Mars.

    With any luck, this effort will be cancelled before it makes it out of its coma.

  134. quax Says:

    Scott #125, thanks for clarifying this. Makes sense that nature may indeed just settle for good enough.

  135. Darrell Burgan Says:

    Scott #129: book is on its way. For what it’s worth, it seems intuitive and obvious to this layperson that there would be a relationship between fundamental physics and computing. Quantum information theory, for example, barely even hides this relationship. Nature is a computer! But I guess an old programmer like me would think so. 🙂

  136. A.Vlasov Says:

    Scott #115, I only talked about some rigor in the formulations, i.e., whether it is valid to claim a derivation of quantum computers directly from the basic principles of QM.

    If you are talking about the whole set of ideas that may be found in quantum mechanics books written after 1930, I do not know how to address that in any reasonable amount of time.

    BTW, your arguments again may echo early discussions about the Lobachevsky plane, i.e., that it is a “hobbled” way to describe our world, so it is not relevant to our world, and so our world is obviously Euclidean.

  137. A.Vlasov Says:

    Sorry, should be “Scott #131”

  138. Scott Says:

    Rahul #132:

      If it is indeed so low, then thanks for correcting my mis-perception. Maybe you guys aren’t consuming that much resources after all.

    You’re incredibly welcome—correcting misperceptions is my goal here! 🙂

    I should tell you that the percentage is probably higher in physics than in CS—partly because, as I said, QC blends smoothly into so many other topics (condensed matter, quantum optics, etc.) that the physicists are interested in for other reasons. And also, when considering a potential hire, the physicists (unlike the computer scientists) don’t need to be sold on the reality of quantum mechanics—and they might remember that, compared to other things they’ve already invested in, like string theory and quantum cosmology, QC is extremely short-term and applied. 🙂

  139. rrtucci Says:

    Rahul said:
    “Oh ok. I wasn’t aware it was so low. My impression was much higher.”

    That’s because “Scott” is really the name of an army of blogging androids being tested by Boston Dynamics (the company that makes the creepy army robots).

    “We’ve finally convinced our biggest skeptic, Rahul. We are ready to ship”

  140. Rahul Says:

    And also, when considering a potential hire, the physicists (unlike the computer scientists) don’t need to be sold on the reality of quantum mechanics—and they might remember that, compared to other things they’ve already invested in, like string theory and quantum cosmology, QC is extremely short-term and applied. 🙂

    Indeed! As you might imagine I’m no fan of string theory & quantum cosmology either.

    OTOH, experimental physicists working on QC have one advantage: say QC turns out to be a dud, they might be very easily reassigned (I think) to other solid-state / condensed-matter projects & programs.

    For a hardcore, undiversified, fully committed, theoretical QC guy reassignment might be a tad harder?

  141. Jerry Says:

    re: Scott’s comment #126:

    Scott: Thanks for your helpful insights. Your blog-followers can always count on them.

    I understand we are a long way from quantum toasters. If it can be proven mathematically that quantum computing is possible, and it can be demonstrated in the laboratory (even if it is amongst two playful electrons in a buckyball system), does that prove that quantum computing is possible at the room-temperature quantum-toaster level in your kitchen? I wonder whether, if quantum computing is possible, nature will put on the brakes with something like a limit of, say, two qubits and/or a quantum computing “Curie temperature”, above which no Q. C. is allowed.

    There is an argument that humans (and all other living entities) do not actually make observations or measurements, but rather become quantum entangled with the system we believe to be observing.

    There is a post on R. J. Lipton’s blog, “Gödel’s Lost Letter and P=NP”, entitled “Can Plants Do Arithmetic?”
    http://rjlipton.wordpress.com/2014/03/03/can-plants-do-arithmetic/
    I would very much appreciate your thoughts (or entanglements).

    I have completed your O/L course, “Quantum Computing Since Democritus”. Does your book of the same name contain more detail than the O/L version?

    One thing that I hope humans will always be able to do better than computers is language. Will a computer (classical or quantum) ever be able to speak, e.g. English, and know just the right time to insert Jewish humor or have the ability to interact with its audience such that it can perceive their reaction and change its delivery? I hope not.

  142. Rahul Says:

    Will a computer (classical or quantum) ever be able to speak, e.g. English, and know just the right time to insert Jewish humor or have the ability to interact with its audience such that it can perceive their reaction and change its delivery?

    Yeah I’m dying to hear a computer say “You can kiss my tuchus.” 🙂

  143. Jr Says:

    “I submit that, had it been undertaken by curious and careful scientists—or at least people with a scientific mindset—rather than by swashbucklers funded by greedy kings, the European exploration and colonization of the Americas could have been incalculably less tragic.”

    Or they would have been that much more effective in exterminating the native population, due to their scientific mindset.

  144. Scott Says:

    Rahul #142: Whenever such a computer is built, Bender from Futurama will have already beaten it with “bite my shiny metal ass.”

  145. Scott Says:

    Jr #143: It would’ve been hard to have been more effective—Pizarro destroyed and enslaved the entire Incan empire with fewer than 200 men, by first lulling the Inca into thinking he had peaceful intentions, then kidnapping the emperor, then demanding all the Inca wealth as ransom, then reneging on the deal and killing the emperor.

    In any case, I thought I’d made this clear, but I wasn’t talking about people who use technology and strategic cunning to be more effective thugs. I was talking about people who are motivated by intellectual curiosity, discovery, and the preservation of knowledge rather than lust for riches. Essentially by definition, such people would’ve been incapable of doing what Pizarro did.

  146. Scott Says:

    Rahul #140: University faculty are not “reassigned” from one research area to another. They themselves decide what they want to work on. The choices the university has in the matter are to hire or not to hire, and to tenure or not to tenure.

    Having said that, I know people who once did quantum computing but have “reassigned” themselves to other things, even finance or the software industry (and a good number of former string theorists have done the same). There are also QC people, like Umesh Vazirani, Ronald de Wolf, Harry Buhrman, and myself, who go through phases of working again on classical complexity theory. So yes, such “reassignment” is certainly possible.

  147. Rahul Says:

    University faculty are not “reassigned” from one research area to another. They themselves decide what they want to work on. The choices the university has in the matter are to hire or not to hire, and to tenure or not to tenure.

    Yes, I realize that. My point is that it is easier for a Dept. if people are more easily “re-assignable”. You don’t want massive obsolescence, or deadweight tenured faculty with expertise in areas that no one wants to fund or that grad students won’t want to work in.

  148. Rahul Says:

    I was talking about people who are motivated by intellectual curiosity, discovery, and the preservation of knowledge rather than lust for riches. Essentially by definition, such people would’ve been incapable of doing what Pizarro did.

    No true Scotsman? If we identified scientists that did nasty things, maybe eugenicists that killed disabled men or castrated the mentally disabled or some such, you’d say that “by definition” they are not scientists?

  149. Scott Says:

    Rahul #148: Whether the criminals who did those things were “scientists” is a semantic question that doesn’t particularly interest me. In any case, I specifically referred in my post to “curious and careful scientists,” and they certainly didn’t meet the “careful” requirement! And my understanding is that they weren’t good scientists either: they were mediocrities. If this means I was expressing the wish that the Americas could’ve been explored by true Scotsmen rather than false Scotsmen, then so be it.

  150. Scott Says:

    Jerry #141:

    (1) There’s no reason whatsoever why quantum computing would need to work at room temperature in order to be practical. Some quantum computing proposals (like NMR) do work at room temperature, but it’s a convenience rather than a necessity. Even today, supercomputers are often cooled with liquid nitrogen to let them run faster without melting. And the technology to cool things to (say) 10 millikelvin is more-or-less routine in physics labs. Once again, you (and others) need to stop thinking about QC as a consumer product, and think about it as a scientific experiment—indeed, that was the entire point of this post.

    (2) Experiments with as many as 10-20 qubits in ion traps have already been done. So, you don’t get to discuss whether Nature imposes a “limit of two qubits,” as if it’s a speculative question awaiting an answer. The question has been settled.

    (3) Likewise, the buckyball experiments done by the Zeilinger group didn’t involve quantum interference with “two playful electrons.” The entire buckyball—with hundreds of electrons (and protons and neutrons)—was placed in a superposition of going through one slit and going through another slit. And that’s no longer even a state-of-the-art experiment.

    (4) Fine, you say, then why not assume that Nature imposes a limit where you can only do the kinds of experiments that have already been done, and nothing more impressive than them? Of course, the same could’ve been asked countless times in history: why not simply assume, Dr. Fermi, that Nature will let you create a little chain reaction underneath a football stadium, but not a big nuclear explosion? Why not assume, Wilbur and Orville, that you can fly around in a plane for 12 seconds, but if you want to fly for hours, Nature will put on the brakes? In each of these cases, the response is obvious: explain the fundamental physical principle that you think imposes your hypothesized limit, and where the limit goes into effect, and why at that place and not somewhere else. Otherwise you’re just blowing hot air.

    (5) I’d say the same thing about human-level artificial intelligence: while it’s not obvious that it’s possible, the burden is squarely on the people who think it’s not possible, to explain the fundamental physical principle that allows a clump of neurons to behave intelligently but not a clump of transistors. For more on this theme, see Section 4 of my essay Why Philosophers Should Care About Computational Complexity.

    (6) My book is based off the lecture notes, but it has plenty of new content—corrections, clarifications, and new things that were discovered between 2006 and 2013.

  151. Sniffnoy Says:

    The entire buckyball—with hundreds of electrons (and protons and neutrons)—was placed in a superposition of going through one slit and going through another slit. And that’s no longer even a state-of-the-art experiment.

    And others have done considerably better than just a buckyball! 🙂

  152. srp Says:

    It should be mentioned that a modern human-rights-oriented liberal would have found the Aztec and Inca empires utterly abhorrent and would have had to argue for armed intervention to overthrow them along the same lines as UN intervention against The Lord’s Army or Hutu genocidaires in modern times. In fact, Cortes’s victory over the Aztecs depended on the eagerness of the first natives he ran into to help destroy them. Attempts to turn these oppressive polities into cute endangered panda bears are anachronistic exercises that make moderns feel superior to their predecessors but distort the alternatives at the time. (The disease catastrophe is a different matter.)

  153. Scott Says:

    srp #152: The notion that Cortes and Pizarro destroyed the Aztec and Inca empires for human rights reasons is laughable. Sure, you could imagine a “modern human-rights-oriented liberal” arguing for armed intervention against those empires to stop their human sacrifices and other abhorrent practices (but they wouldn’t have succeeded: even today, almost no country is ever invaded solely because of what it does to its own people, the two examples you mentioned being cases in point). But what about what Cortes and Pizarro did next: burning the priceless artifacts of the conquered empires, enslaving the populations, and just generally achieving the incredible feat of making the oppressive regimes they overthrew look like cute little panda bears by comparison?

  154. anon Says:

    If QC is scalable, then the Mayas and Incas are still doing fine in some part of the wave function of the universe.

  155. Greg Kuperberg Says:

    Scott –

    But what about what Cortes and Pizarro did next: burning the priceless artifacts of the conquered empires, enslaving the populations, and just generally achieving the incredible feat of making the oppressive regimes they overthrew look like cute little panda bears by comparison?

    I don’t think that that’s really true. It is true that Cortes and Pizarro were war criminals by modern ethical standards. It is true that their actions were shameful and arguably criminal even by the sketchy ethical standards of 16th-century Europe. However, a major reason that their fearsome effort succeeded is that they were about the same as the Inca imperialists that they overthrew. (I could guess that much, and what follows is lifted from Wikipedia.) The Inca empire was only about a century old; it had grown quickly out of an older but much smaller Kingdom of Cuzco. One factor in the Spaniards’ favor was that they walked into a just-concluded civil war between the emperor Atahualpa and his (recently executed) brother Huáscar. The Spanish also allied with various other Indian groups who, for various defensible reasons, hated the Inca regime.

    The issue of slavery is a case in point. The Incas themselves had a fairly heavy system of taxation — which had to be extracted in the form of labor, since there was no widely used currency. This forced labor system can be called “slavery”. The reason that Spanish slavery lasted for the next century or so was that it was conceptually similar.

    The notorious gold room of Atahualpa (which I had heard about before) is another case in point. Atahualpa could have guessed, based on the outcomes of his own conquests, that he was unlikely to live long. He might not have known what the Spanish would do with his gold artifacts, but he could have guessed that they wouldn’t give them back. He ripped his empire’s gold treasures out of his empire’s palaces and altars in a long shot bid to save himself.

    After all, how likely is it that Europeans have been the world’s uniquely bad imperialists? I’m not in favor of imperialism, European or otherwise. However, there is a lot of evidence that imperialism follows roughly the same course in many civilizations.

    The one uniquely bad piece of luck for the Incas was the smallpox epidemic. But Europeans at that time did not understand smallpox themselves, did not introduce it deliberately to the Inca region, and did not know that Indians lacked immunity to it.

    If ethically modern scientists had gone to South America at that time instead of the Spanish conquistadors, they would have faced an ethical quandary. They could have said “Shh…, let’s not disturb these Incas while they conquer another 10,000 square miles.” But it’s hard to say when that’s the right thing to do. For instance, you could view the pervasive warlordism in Afghanistan right now as a natural experiment in cultural anthropology. Or you could turn to a universal theory of human rights and try to get them to stop.

  156. Mike Says:

    “If QC is scalable, then the Mayas and Incas are still doing fine in some part of the wave function of the universe.”

    Funny, even though very probably true. 😉

  157. John Sidlesbot Says:

    In photosynthesis, where they are anatomically distinct from educational ecology of a disorganized mass of notable engineers and (more general) does act to borrow an algebraic, geometric, algebraic, and quantum solvers for regenerative medicine (for example) required is becoming for sure. IMHO a holiday-season attempt to modern neurology has implemented rigorously.

    After all, a better error E that is noisy dynamical systems are seeking describe quantum trajectories in consequence of every human love to ask this disconnect between “live” and numerically and created. Since Howard is not an informatic algebraic geometry that our QSE Group would be imprudent to making tremendous progress has worked on this point is, obviously, is central theme of the past.

  158. Rahul Says:

    The notion that Cortes and Pizarro destroyed the Aztec and Inca empires for human rights reasons is laughable.

    I think the whole notion that one can take a conflict that happened 500 years ago, and then try to pass some sort of moral judgement about it using your contemporary yardstick of propriety is a bit iffy.

    How noble & ethical were the scientists back then? Who are practical examples of Scott’s “curious & careful” scientists back in 1500? How strong was the sense that killing was evil? Who was the torchbearer of ethics? The counterfactuals are extremely hard to judge. Perhaps if the new world had never been colonized the native Americans would have been better off too!

    I feel the whole exercise is a bit pointless.

  159. fred Says:

    Scott #146
    “There are also QC people, like Umesh Vazirani, Ronald de Wolf, Harry Buhrman, and myself, who go through phases of working again on classical complexity theory.”

    Scott, what are your main areas of focus lately?
    I would imagine Boson Sampling is one of your top priorities?

  160. Scott Says:

    fred #159: My main areas of focus lately are changing Lily’s diapers, and picking up the morsels of food she throws off her high chair onto the floor. Other than that, trying to finish writing a bunch of old papers. Thinking about forrelation, computational complexity and firewalls, private-key quantum money, and yes, BosonSampling.

  161. Scott Says:

    Rahul #158: There are also people who say that we can’t pass moral judgment on things done during WWII, because the standards of propriety are different now than they were 70 years ago. Or even that we can’t pass moral judgment on things done today in different cultures, or by people with different upbringings than our own. Which raises an obvious question: what can we pass moral judgment on? And isn’t saying that we shouldn’t pass moral judgment itself a moral judgment? 🙂

    As Greg #155 concedes, the conquistadors were really bad even by the standards of their time. Conversely, Democritus, Galileo, Franklin, Faraday, Darwin, and Einstein (to pick a few examples) all seem to me to have been enlightened by the standards of their time, not just scientifically but in other respects too. Now personally, I subscribe to the view that there’s a connection between science and humanistic values: they both have a common enemy in arbitrary authority and evidence-free claims to knowledge, and crucially, it’s usually impossible to trample people’s rights without also lying to them about purely factual questions. But that debate really deserves a post of its own sometime.

  162. Jerry Says:

    Re: Scott’s comment #150

    I think we would both be happy if quantum computing were accomplished in the laboratory on a micro-scale at 10^-3 K. I have done CPMAS-NMR using a superconducting magnet surrounded by liquid helium. This was not quantum computing.

    There are companies and individuals claiming to have the world’s first quantum computer. There are research groups that have published the results of very elegant Q.M. experiments, like the entanglement within buckyballs.

    My question is, “How does one verify and receive a mazel tov that they actually have a quantum computer; a computer that utilizes quantum algorithms, has a “Bloch Sphere” hidden inside a black box, and spits out lengthy factorization data so fast it makes Speedy Gonzales seem like Ordinary Gonzales (another Bender @ Futurama quotation)?”

    P.S. Are you selling qubit coins yet?

  163. fred Says:

    Scott #160
    Haha, would you agree that diaper changing is definitely NP-Hard? (let’s not even ask what’s the problem input size).

  164. Rahul Says:

    And isn’t saying that we shouldn’t pass moral judgment itself a moral judgment?

    No it isn’t. It’s more of a pragmatic judgement. I’m not saying it is immoral to do so, just that it is usually a futile exercise when conducted so far back in time.

    Surely, you acknowledge that events 70 years ago need not constrain us in the same exact way as those 500 years ago?

  165. Greg Kuperberg Says:

    Scott –

    As Greg #155 concedes, the conquistadors were really bad even by the standards of their time.

    Well, they were disreputable by European standards of the time, but I would not say really bad. They were a little earlier than Henry VIII, and a lot earlier than Oliver Cromwell, and both of those guys were no Mother Teresas.

    While I agree that scientists were much more enlightened than conquistadors, I don’t think that the ethical issues are quite so one-sided — how well would scientists have remembered their enlightenment if they had been on Pizarro’s expedition? In particular, the easy way out is to fall for the myth of the noble savage. The truth is that Atahualpa was just another conquistador. And many South American Indians at the time did not particularly see the Spaniards as the greater evil. (Smallpox, on the other hand, was uniquely terrible.)

    After all, consider the dilemmas of a new group of European settlers: the Ashkenazi Jews who recreated Israel. The indigenous people they face are called Arabs, and any fair person should agree that the ethical questions are not that simple. There is a subtle error that comes from simplistically demonizing the European colonization of the Americas. Namely, if those people were all so utterly monstrous, then it depersonalizes the point; of course we today are not like that.

  166. quax Says:

    Scott #161, by all means I agree that this debate on ethics really deserves its own thread, there have been some really thought provoking posts on the matter (especially Greg’s cogent writing).

    To me the question of to what extent scientists may be inoculated against the worst ethical depravity is a very important one, and to my mind not clear-cut at all, given the various examples we can pick from recent German history.

    A cynic will argue that for every Albert Einstein we had a Fritz Haber who eagerly developed poison gas for his emperor. For every thoughtful Erwin Schrödinger we had a Pascual Jordan who happily joined the Nazi party.

    All of them were first rate scientists in their field.

    So what went wrong? And why did most scientists like Heisenberg and Otto Hahn basically just carry on as if completely blind to all the evil around them?

  167. Rahul Says:

    A cynic will argue that for every Albert Einstein we had a Fritz Haber who eagerly developed poison gas for his emperor.

    Somewhat unusual to see Haber mentioned for poison gas & not for the Haber-Bosch process.

  168. Silas Barta Says:

    one scene, where an economist asks a physicist after a public talk about the “return on investment” of the LHC, and is given the standard correct answer, about “what was the return on investment of radio waves when they were first discovered?”

    Actually, this is the part that bothers me. For other scientific breakthroughs, I can identify some sort of “NP-complete”-style asymmetry to it, in which the (usefulness of the) discovery is easy to verify but hard to find. I can’t do that for the LHC/Higgs Boson mass, and when I ask about such a property I just get lectured about the awesomeness of science.

    For example:

    – Once you know Newtonian gravitation, you can easily verify that planetary motions (and cannonball ballistics) obey it. Compiling a working look-up/epicycle table would have been prohibitively expensive.

    – Once you know how radio waves work, you can narrowly concentrate your radio designs to those which maximize transmission clarity. Guessing a bunch of setups would have been prohibitively expensive.

    – Once you know how general relativity works, you can predict the time delays in the satellite signals. Having to work out the Newton-relative error tables from experiment would have been prohibitively expensive.

    – Once you know the Higgs Boson exists and has mass X, ____ ?

    I don’t see how any technology would play out in which it would have been cheaper to *first* find the HB mass, and then build toys that depend on that value as a key design parameter. If there are any technologies that could exploit HB effects and are mass producible at all, it would have been cheaper to just build them all and see which ones work. (Unless the error bars needed to exploit it were exponentially small.)

    Can anyone cast the search for the HB in these terms?

  169. Mike Says:

    “And why did most scientist like Heisenberg and Otto Hahn basically just carry on as if completely blind to all the evil around them?”

    ” . . . if those people were all so utterly monstrous, then it depersonalizes the point; of course we today are not like that.”

    When all is said and done, what more can any “modern” person do (and everyone was, after all, in a sense modern in their day) than wrestle with the subtleties and nuances that complicated facts and circumstances place in our way, trying as best we can to be guided by moral principles, both foundational (i.e., as timeless as anything can in the end be) and learned from human experience, so as to be able to say: knowing what I now know, that thing which was done in the past was morally wrong, and I trust that I would have the courage here and now to do the moral thing.

  170. quax Says:

    Rahul #167, by all accounts Fritz Haber was a staunch Prussian patriot. Many German Ashkenazi Jews shared that attitude. For instance a great-uncle of mine was named Wilhelm after the emperor.

  171. Greg Kuperberg Says:

    quax –

    And why did most scientists like Heisenberg and Otto Hahn basically just carry on as if completely blind to all the evil around them?

    It wasn’t most of them, at least not most of the best ones. Scott is correct insofar as, when scientists are given a choice between good and evil, most of them choose good. That’s completely different from depressing conflicts without a good side — you can’t expect scientists to create enlightenment out of nothing. Some scientists think that scientists can do that, but they’re wrong.

    You’re correct that there was a full range of reactions to Nazi Germany among Germany’s non-Jewish scientists. But it wasn’t symmetrical. Most of them behaved honorably in one way or another: They left the country, or they stayed out of politics, or they helped Jews or anti-Hitlerism in small ways. Few of them resisted the Nazis enough to risk their lives, but that’s a lot to ask for. It was not the case that for every honorable Schrödinger, there was an evil Jordan. For every three honorable Schrödingers, there was one evil Jordan.

  172. Mike Says:

    Rahul #167,

    I too thought that the characterization of Fritz Haber was a bit “unusual”. In any event, his Wikipedia entry seems to offer a much more balanced description for those who care about such things:

    http://en.wikipedia.org/wiki/Fritz_Haber

  173. Scott Says:

    Greg #165: You know the history better than I do, but here are a few of the salient differences between conquistadors and Zionists.

    1. The conquistadors despoiled two continents, whereas the Zionists had designs only on a sliver of the Middle East about the size of New Jersey.

    2. The conquistadors came with dreams of conquering and enslaving the native population, and looting their gold. The Zionists came with dreams of buying land and farming it, and developing the region not only to their own benefit but also the Arabs’. (I’m speaking, in both cases, about the stated dream, rather than what actually happened…)

    3. Even given 1 and 2, many Jews still strongly opposed Zionism, until it looked more and more like it was the European Jews’ only alternative to annihilation (which it plausibly could have been, except that it didn’t become available in time for most of them).

    In all of these respects, it strikes me that the Zionists were much more similar to (some of) the later American pilgrims than they were to conquistadors. The pilgrims, too, were fleeing religiously-motivated persecution, and from what I read, some of them had surprisingly modern-sounding ideas about just buying land from the local tribes and coexisting with them. Later, of course, relations turned violent and horrible, but the original idea seems basically well-intentioned. Which seems to me like an important contrast with the conquistadors, whose original idea was already terrible! 🙂

  174. quax Says:

    Greg #171 Few of them resisted the Nazis enough to risk their lives, but that’s a lot to ask for.

    Planck’s son did, and paid the ultimate price, although he wasn’t a scientist.

    When you write that “when scientists are given a choice between good and evil, most of them choose good”, then this begets a follow-up question: more so than less educated individuals?

    I.e., to what extent does a scientific education truly inoculate against evil?

    This seems obvious when concerning superstitions (witch burning etc.) but it seems to me less clear when an ideology is dressed up in pseudo-scientific garb.

    Ultimately, I think the bare-bones ethics of science (truthfulness, rationality, and willingness to follow the evidence) may not be enough to establish sufficient ethical guardrails.

    Problem is, every individual always thinks they are choosing good. I.e., Fritz Haber was convinced he was doing the right thing by helping his country wage war. Bad choices never lack convictions, or at least rationalizations.

  175. quax Says:

    Mike #172, click-through fail. I have that link embedded in my original comment, precisely because he, of course, cannot be reduced to his WW1 efforts.

    Haber was a deeply tragic figure, yet there is little doubt in my mind that his poison gas involvement deserves to be considered a terrible lapse of moral judgement.

  176. Greg Kuperberg Says:

    Scott –

    You make a fair point about the difference between Spanish conquistadors and later North American (and to be fair, also Latin American) settlers. Nonetheless, the conventional story of the brutal Europeans vs the noble aborigines extends fully into the 19th century. By then, as you notice too, you definitely can make certain comparisons with the Arab-Israeli conflict or with the American war on terrorism.

    Again, vilification of European treatment of Amerinds has reached an ironic limit: It is portrayed as so terrible that there isn’t all that much to learn from it. For instance, I learned from the Laura Ingalls books (and from some further reading) that the Minnesota Massacre really was the 9/11 of the 19th century American West. But that’s not the conventional description now. So if we have another massive terrorist incident, nothing will have been learned from the brutal, misdirected reaction to the previous one.

  177. Greg Kuperberg Says:

    quax –

    To what extent does a scientific education truly inoculate against evil?

    First, scientists have more at stake in supporting peace and international cooperation, since science itself is an international enterprise. Second, while scientists may not really have a greater sympathy instinct than other people, scientific knowledge makes them less likely to misplace their sympathy…at least in the opinion of other scientists. 🙂

  178. Vadim Says:

    Scott,

    Wanted to let you know that I just got back from watching Particle Fever at the Kendall Cinema and thought it was AWESOME. Along with being a very worthwhile topic for a documentary, it was very well made. Nature and wildlife are always great topics for a documentary, but there are nature documentaries and then there are BBC-made, David Attenborough-narrated nature documentaries; in the same way, the creators hit the bullseye with this one.

  179. srp Says:

    I’m glad to see that Greg and others don’t suffer from the noble savage–or worse, noble civilized primitive–idea that Scott somewhat carelessly propagated. The Incas and Aztecs were bad guys, so were Pizarro and Cortes (the former somewhat worse). Scott’s strawman defense–that I thought Pizarro and Cortes were humanitarians–is so obviously silly that it bespeaks trying to change the subject instead of engage the issue seriously.

    More importantly, from the standpoint of 2014 it doesn’t make sense for a Euro-descended person to identify with either side of this conflict with respect to blame or virtue. Nobody today is responsible for any of this and the cultural distance from both is vast. On the other hand, we have lots of mass murder and vicious warfare occurring right now and the dominant thinking among both the goo-goos and the realists is “let ’em kill each other.” That’s not crazy, but it isn’t obviously something that future generations will honor us for, either.

  180. Rahul Says:

    The Zionists came with dreams of buying land and farming it, and developing the region not only to their own benefit but also the Arabs’. (I’m speaking, in both cases, about the stated dream, rather than what actually happened…)

    Which brings up the old question: Do intentions matter more or consequences?

    Also, how often in the history of diplomacy do stated intentions align with actual intentions?

  181. Rahul Says:

    Also, couldn’t someone ask in the same spirit:

    “Had the Jewish migration been undertaken exclusively by curious, contemplative and careful Rabbis & philosophers rather than by these quite worldly, hardy & practical Zionists the Jewish occupation of this sliver of the Middle East could have been incalculably less tragic.”

  182. Rahul Says:

    by all accounts Fritz Haber was a staunch Prussian patriot. Many German Ashkenazi Jews shared that attitude. For instance a great-uncle of mine was named Wilhelm after the emperor.

    Maybe. All I was pointing out was that Fritz Haber probably saved several orders of magnitude more lives indirectly via fertilizers than he ever managed to kill via poison gas.

    OTOH, in a moral calculus, if & where this ought to count, I’m not sure.

  183. fred Says:

    Scott:
    “Second, the movie frames the importance of the Higgs search as follows: if the Higgs boson turned out to be relatively light, like 115 GeV, […] on the other hand, the Higgs turned out to be relatively heavy, like 140 GeV, […]. So the fact that the Higgs ended up being 125 GeV means the universe is […]”

    Sigh… Gigantic SPOILARZ, man!

  184. Scott Says:

    Rahul #181: To an amazing extent, the early Zionists were philosophers—more so than almost any group of settlers in human history. Theodor Herzl and his followers had lofty ideals about not merely “restoring an ancient homeland,” but turning it into a modern secular liberal democracy that would emphasize science. Indeed, one of the main ideas was to create new universities (e.g., the Hebrew University of Jerusalem), where Jewish scientists and intellectuals could escape the antisemitic quotas imposed on them in Europe.

    Of course, things didn’t go exactly according to plan. While some of the local Arabs were happy to have Jews around providing jobs (as the Zionists had hoped), others staged violent pogroms. Maybe most importantly, the leader of the Palestinians from 1921 to 1948, the Grand Mufti, was a staunch ally of the Nazis who (with full knowledge of the then-ongoing Holocaust) said things like:

      It is the duty of Muhammadans in general and Arabs in particular to … drive all Jews from Arab and Muhammadan countries … Germany is also struggling against the common foe … It has very clearly recognized the Jews for what they are and resolved to find a definitive solution for the Jewish danger that will eliminate the scourge that Jews represent in the world …

    Anyway, it became increasingly obvious that the only choices for the Jews were to fight back or die. While millions of Jews in Europe had gone to their deaths with only scattered resistance, the ones in Israel (who, of course, had the advantage of better information, including about the just-concluded Holocaust) chose to fight back. And in a case of anthropic postselection, the latter are the ones who still exist (or whose descendants do).

    I’ve thought about it often, but given the options available at the time (with immigration to the US sealed off, etc.), it’s hard for me to see what other path the Jews of that time could have chosen that would’ve been more moral. And the results were not by any means an unmitigated tragedy, just as the colonization of America by the Pilgrims wasn’t. Have you been to Israel? Despite what you see on the news, it’s pretty nice there, including the Israeli Arab parts. (As is often pointed out, Arabs have more rights and economic opportunities in Israel than they do in most Arab countries.)

    Yes, the situation in the West Bank and Gaza is a tragedy, with plenty of blame for it to go around. And yes, when it comes to many of the West Bank settlers, I’d say that comparisons to Spanish conquistadors aren’t entirely unjustified. The settlers certainly aren’t people with the “careful scientific mindset” that I was talking about, and Theodor Herzl wouldn’t recognize any of his values in them.

  185. Scott Says:

    fred #183: Sorry dude. 🙂 I guess I figured that if you read the news, you already know how the story turns out.

  186. Jerry Says:

    The book “Einstein’s Jewish Science”, by Steven Gimbel http://www.amazon.com/Einsteins-Jewish-Science-Intersection-Politics/dp/1421405547/ref=sr_1_1?s=books&ie=UTF8&qid=1396964651&sr=1-1&keywords=einstein%27s+jewish+science is an excellent historical account of the journey Jewish scientists, along with their supporters, antagonists, and frenemies have taken before, during, and after WWII.

    Scott: I am still struggling to understand what would distinguish a real classical computer from a real quantum computer. If an alleged snake oil salesman tells you he has a quantum computer in his laboratory at 0.001 K, what would he have to do to prove to you that, yes, you have the world’s first quantum computer? Mazel tov!

  187. Mike Says:

    “What would he have to do to prove to you that, yes, you have the world’s first quantum computer?”

    Solve any problem that classical computers can’t solve because the amounts of time and memory needed are practically infeasible. Of course, one could always say that a more clever classical algorithm and/or better hardware could alter the performance of the classical computer, but FAPP, based on our best understanding at the time, that would do it for me. 😉

  188. Scott Says:

    Jerry #186: Well, suppose you picked random 2000-bit prime numbers and multiplied them using your classical computer, you gave the salesman the 4000-bit product, and he told you after a matter of minutes what your original factors were. Then that would prove that either he had a quantum computer, or else he had a fast classical factoring algorithm (which, in many ways, would be even more incredible!). So that would be extremely strong evidence for QC.
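
    (Just to make the asymmetry concrete, here’s a minimal sketch, in Python, of the verifier’s side of that challenge. The prime generation leans on sympy, and ask_salesman is a hypothetical stub standing in for whatever device claims to factor quickly; the verification itself is a single multiplication, which takes negligible time even for 4000-bit numbers.)

      # Sketch of the verifier's side of the factoring challenge (assumes sympy).
      from sympy import randprime

      def make_challenge(bits=2000):
          # two random primes of the requested size, and their product
          p = randprime(2 ** (bits - 1), 2 ** bits)
          q = randprime(2 ** (bits - 1), 2 ** bits)
          return p, q, p * q

      def verify(n, claimed):
          # checking a claimed answer is just one multiplication
          a, b = claimed
          return 1 < a < n and 1 < b < n and a * b == n

      p, q, n = make_challenge(bits=256)  # smaller size so the sketch runs quickly
      # answer = ask_salesman(n)          # hypothetical: the alleged QC returns (a, b)
      answer = (p, q)                     # stand-in so the sketch runs end to end
      print("convinced" if verify(n, answer) else "no sale")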

    Now, if you really insist on ruling out the second possibility—the possibility that there’s “merely” an earth-shattering, breakthrough classical algorithm under the hood, rather than a quantum algorithm—then there are even some beautiful recent protocols (due to Aharonov-BenOr-Eban, Broadbent-Fitzsimons-Kashefi, and Reichardt-Unger-Vazirani) that let you do that. In particular, these protocols let you rigorously verify the presence of quantum computation if you can send single qubits (prepared in secret states known only to you) to the alleged QC, or if there are two QCs that share entanglement but are unable to communicate with each other, and that you can interrogate like police suspects in different cells.

    Or, of course, you could just open up the alleged QC, examine it, measure it at intermediate points in its operation, etc., and convince yourself that it’s doing something like Shor’s algorithm, which is totally unlike any classical factoring algorithm (so that you would never mistake one for the other).

    In summary, the only reason why “verifying quantumness” is such a subtle issue today is that we don’t yet have devices that achieve unambiguous quantum speedups! That’s why, in cases like D-Wave, people really have to squint to figure out what kinds of quantum behavior are present in the device, and whether the quantum behavior is playing any “causal role” in the computation (even though it’s not leading to the problem being solved faster). And they even get into philosophical arguments about what counts as a “causal role.”

    If, on the other hand, you had a scalable, fault-tolerant, universal QC capable of running Shor’s factoring algorithm—i.e., the kind of QC that research labs all over the world are hoping to eventually build, and that D-Wave is not trying to build—then verifying its quantumness would be almost as straightforward as you could possibly hope.

  189. Rahul Says:

    Then that would prove that either he had a quantum computer, or else he had a fast classical factoring algorithm

    Are there any problems for which a lower bound is known for a classical algorithm? i.e. if such a problem were indeed solved fast we could be sure it is due to quantumness alone?

  190. Scott Says:

    Rahul #189: Only in the black-box setting. In the non-black-box world (i.e., with no oracles), we can’t possibly prove such a lower bound unconditionally without proving something like BPP≠BQP, and therefore also P≠PSPACE.
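
    (For a concrete flavor of what can be proven in the black-box setting: in the Bernstein-Vazirani problem, the oracle hides a string s and answers each query x with s·x mod 2; any classical algorithm needs n queries to learn s, while the quantum algorithm needs exactly one. Here’s a toy state-vector sketch, with an arbitrary hidden string chosen just for the demo.)

      import numpy as np

      n = 6
      s = 0b101101                 # hidden string (arbitrary choice for the demo)
      N = 2 ** n
      popcount = lambda z: bin(z).count("1")

      # State after H^n and a single call to the phase oracle (-1)^(s.x):
      amps = np.array([(-1) ** popcount(s & x) for x in range(N)]) / np.sqrt(N)

      # Apply H^n again (Walsh-Hadamard transform) and read out:
      H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
      U = np.array([[1.0]])
      for _ in range(n):
          U = np.kron(U, H)
      out = U @ amps
      print(int(np.argmax(np.abs(out))), "== hidden string", s)  # all amplitude lands on |s>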

    On the other hand, if you’ll allow just a tiny amount of interaction with the QC (or better yet, two entangled QCs), then the interactive protocols that I mentioned in comment #188 let you come much closer to “unconditional proofs of QC” than many of us thought was possible before the protocols were discovered.

    (And again, if Shor’s algorithm were used to quickly factor 4000-bit numbers, I don’t know if any sane person would actually remain unconvinced that QC had been achieved! Certainly the case would then be rock-solid by any ordinary standards that scientists apply to anything else. So this entire discussion, while interesting for theorists like me, is basically academic.)

  191. Rahul Says:

    Scott #190:

    Thanks. Agree about this being academic, I was merely curious.

    Those protocols you explained in #188 are interesting, yet, to me as a layman, they lack the simplicity & elegance of the “factor a 4000-bit integer” test.

    Too bad we cannot prove a lower bound for any such “natural” problem. ( I realize “natural” is arbitrary. )

    Those protocols you describe in #188 must, of course, work, but to someone not used to this they sound awfully contrived & artificial as test problems (cf. factorization).

    I was just wishfully hoping for a simple, elegant, confirmatory test. Apparently can’t be done I guess.

  192. Jerry Says:

    Scott #188

    Thanks, Scott. That was the “Mehr Licht”* moment I was looking for!

    *Goethe’s last words (more light).

  193. fred Says:

    Scott #188
    ” In particular, these protocols let you rigorously verify the presence of quantum computation if you can send single qubits (prepared in secret states known only to you) to the alleged QC, or if there are two QCs that share entanglement but are unable to communicate with each other, and that you can interrogate like police suspects in different cells.”

    Isn’t that basically doing a kind of Bell test experiment?

  194. Douglas Knight Says:

    I’ve thought about it often, but given the options available at the time (with immigration to the US sealed off, etc.)

    I’m not sure which time you’re talking about, but Argentina allowed Jewish immigration for 14 years longer than the US, through 1938. I’m unclear on how easy it was to immigrate to British Mandate Palestine.

  195. fred Says:

    Philosophically, I’ve been wondering why the physical world would be such that the QM world (at the bottom) is degenerating into a classical world (on top of it) that has less “power” (even if only slightly so).
    I guess that sort of thing isn’t uncommon; it already seems to be the case with entropy (statistical physics), heat loss, and irreversibility… would there be a sort of equivalent theorem that shows a loss of power when going from QM to classical?
    Also interesting that in spite of all that the most interesting processes we know (our minds?) live in the classical world (apparently)… maybe until we create AIs that are based on QC?

  196. Scott Says:

    fred #193:

      Isn’t that basically doing a kind of Bell test experiment?

    Yes! You can think of the Reichardt-Unger-Vazirani protocol as basically a “Bell test experiment on steroids”—one where you use lots of Bell tests to verify, not merely that there’s quantum entanglement present, but that the two entangled computers are doing the specific quantum computation you wanted.
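
    (If you want to see the numbers behind the plain, non-steroids version: assuming the ideal singlet-state correlation E(a,b) = -cos(a-b), the standard CHSH combination of four measurement settings reaches 2*sqrt(2) ~ 2.83, while any local-hidden-variable strategy is capped at 2. A minimal sketch:)

      import numpy as np

      def E(a, b):
          # ideal singlet-state correlation at analyzer angles a, b
          return -np.cos(a - b)

      a1, a2 = 0.0, np.pi / 2             # Alice's two measurement settings
      b1, b2 = np.pi / 4, 3 * np.pi / 4   # Bob's two measurement settings

      S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
      print(abs(S))   # ~2.828, i.e. 2*sqrt(2), above the classical bound of 2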

  197. Scott Says:

    Douglas #194:

      I’m not sure which time you’re talking about, but Argentina allowed Jewish immigration for 14 years longer than the US, through 1938. I’m unclear on how easy it was to immigrate to British Mandate Palestine.

    The most relevant period for this discussion is probably 1938-1945.

    Regarding your question, much like what happened in Argentina, the British White Paper of 1939 sealed off almost all Jewish immigration into Palestine exactly when it could’ve made the most difference. In fact, even after WWII, the British still wouldn’t allow the now-homeless Holocaust survivors to immigrate to Palestine, but kept them (including my wife’s grandparents) for several years in internment camps in Cyprus.

  198. Silas Barta Says:

    Scott #188:

    That suggests a quantum computing variant of an old joke:

    A dejected D-Wave engineer is hawking a big mainframe at a flea market with the sign: “Can quickly factor large semiprimes, only $200”.

    A skeptical compsci professor passes by and says, “What a crock! There’s no way that thing works.”

    The engineer says, “Hey, don’t take my word for it! Just do the Arthur-Merlin protocol!”

    So the professor generates a large semiprime, inputs it, and gets back the factors after a few seconds. Surprised, he repeats the test a polynomial number of times. Convinced, he offers to buy it. But…

    “If this thing has a quick way to factor large semiprimes, why are you selling it so cheap?”

    The engineer says, “It’s junk — doesn’t even use quantum entanglement!”

  199. Mike Says:

    ““If this thing has a quick way to factor large semiprimes, why are you selling it so cheap?””

    No debate. If it can do this, so be it! People will pay good money for it. But if it can’t, then please, spare me the analogies. 🙂

  200. Monteczuma's ghost Says:

    Scott,

    There’s decent evidence that the immune systems of New World inhabitants were not sufficient to withstand encounters with Europeans, due to there being few animals available to domesticate in the New World. As populations grew and navigation technology improved, there was no avoiding a holocaust. Worlds would meet before medical tech caught up. How do you think ‘thoughtful scientists’ would have avoided this when germ theory wasn’t even established?

  201. quax Says:

    srp #179, this surprisingly timely article seems rather pertinent to this strand of the discussion. Despite all we know now, it is still happening today: Uncontacted Tribes Die Instantly After We Meet Them

    Time for a Prime Directive?

  202. Rahul Says:

    In fact, even after WWII, the British still wouldn’t allow the now-homeless Holocaust survivors to immigrate to Palestine, but kept them (including my wife’s grandparents) for several years in internment camps in Cyprus.

    Related question: I’m wondering what’s your view on contemporary USA’s policy on refugees / asylees? Could a similar allegation be made that the Americans wouldn’t even allow the now-homeless Syrians or Somalis or Sudanese in?

    Not an absolute prohibition, true, but effectively? I guess my question is: how strong is the claim of homeless genocide survivors to emigrate to better lands rather than being stuffed into camps, and if it is a strong claim, are we today much better at addressing it than the British were 70 years ago?

  203. g Says:

    “There’s absolutely no scientific reason to send humans to space, and there’s never been any.”

    Steve Squyres, principal investigator of two of the most successful robotic missions ever, disagrees. “I believe that the most successful exploration is going to be carried out by humans, not by robots. What Spirit and Opportunity have done in 5 1/2 years on Mars, you and I could have done in a good week.”

  204. Rahul Says:

    While millions of Jews in Europe had gone to their deaths with only scattered resistance, the ones in Israel (who, of course, had the advantage of better information, including about the just-concluded Holocaust) chose to fight back. And in a case of anthropic postselection, the latter are the ones who still exist (or whose descendants do).

    Scott #184: Maybe I wasn’t clear earlier. I’m not at all trying to blame the Jews here.

    My point simply was that the Jews succeeded in saving themselves & establishing Israel only because the pioneers were who they were & they did what they did. And not just kindly philosophers like Theodor Herzl but also the rougher men of the later era like Ben-Gurion, Menachem Begin, Yitzhak Rabin etc. An Israel built by just philosophers would’ve been a far shakier proposition. Ben-Gurion was as essential as was Herzl to the success story.

    And now it’d be absolutely silly of me to ponder in hindsight, “Oh! But why didn’t the pioneer Zionists abhor force entirely & try only peaceful, non-violent routes a la Gandhi?”

    In the same spirit, I feel, it’s silly to consider the hypothetical of how much better the world would be if only careful & curious scientists had explored the new world, instead of Columbus or Cortes.

  205. domenico Says:

    I have a crazy idea.
    If there is an electric analog computer, built with the usual electric components, and the construction is scale-invariant, then it should be possible to reduce the dimensions of the analog computer and still obtain the same results; but if the dimensions become too small, quantum effects (tunneling, wave overlap, etc.) kick in and the analog computer starts to give different results (quantum results).
    I don’t understand whether the computer would then contain a quantum superposition of the solutions, or whether the quantum noise would just make the computer unusable.

  206. Scott Says:

    Monteczuma’s ghost #200:

      How do you think ‘thoughtful scientists’ would avoid this when germ theory wasn’t even established!

    Already addressed in comment #7, part (2).

  207. fred Says:

    domenico #205

    Right, with analog electronic circuits (like a radio, an amp, etc), you don’t have scale invariance.
    The main issue really is that as you miniaturize the elements, currents and voltages become less and less “continuous” as you start to see the effect of every single electron instead.
    So the “analog” computation breaks down (you’re no longer dealing with smooth variables).
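
    (A rough back-of-the-envelope illustration, with numbers I just picked: count how many electrons actually pass in a 1 ns window, and how large the relative shot-noise fluctuation, roughly 1/sqrt(N), becomes as the current shrinks.)

      e = 1.602e-19                       # electron charge, in coulombs
      window = 1e-9                       # 1 ns observation window, in seconds
      for current in (1e-3, 1e-6, 1e-9):  # 1 mA, 1 uA, 1 nA
          n_electrons = current * window / e
          rel_fluct = 1 / n_electrons ** 0.5
          print(f"{current:.0e} A: ~{n_electrons:.0f} electrons per ns, "
                f"~{100 * rel_fluct:.2f}% relative fluctuation")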

  208. Scott Says:

    Rahul #202: I think that, alas, the world still hasn’t solved the problem of what to do with refugees from wars, genocides, and totalitarian regimes. Yes, keeping them in internment camps is certainly better than returning them to the deadly situations they escaped from (as happened to some of the Jews fleeing the Nazis, e.g. on the MS St. Louis). But in any case where you have hundreds of thousands or millions of displaced people, you need a more permanent solution, and that will inevitably become a question of politics and international diplomacy. Maybe the best general statement one can make is that, if there is an “obvious” solution—i.e., a place where the refugees overwhelmingly want to go, and where there’s an existing community ready to absorb them and make them productive citizens—then the rest of the world shouldn’t obstruct that solution. That’s exactly what happened with the Holocaust refugees: the embryonic Israeli state desperately wanted them, so much so that Israelis risked their lives in dramatic smuggling operations, but the British still wouldn’t allow it, for fear of damaging their relations with the Arab countries. I guess the other general comment to make is that, if countries really had their own economic best interests at heart, then in many cases they’d be fighting over who gets the refugees. Immigration, especially skilled immigration, can be an economic boon.

  209. Scott Says:

    Rahul #204: It’s a fair point. My counter is simply that, between Hernán Cortés and a meek armchair philosopher, there’s a huge middle ground—the people who act decisively when needed, but who also take time to reason carefully about the broader picture, who constantly ask themselves “do we really need to go this far? is there a sensible alternative? what will posterity think about us?” In a kind-of sort-of analogous way, between D-Wave and a useless complexity theorist like me, there’s the vast middle ground of John Martinis, Robert Schoelkopf, David Wineland, Rainer Blatt, and others all over the world who roll up their sleeves and make real progress toward building scalable QCs without ever thumbing their noses at theory. 😉

  210. Steve Says:

    Alas, sounds like we’re headed for a Quantum Computing Winter that should be familiar to anybody who worked with AI 20-30 years ago.

    The good news is that following that model, by 2040-2050, Quantum Computing should be making respectable strides, getting funding, and appearing in everyday applications.

  211. domenico Says:

    fred #207

    Yes, you are right about the quantization of charge, but I am thinking that the scale invariance of the analog computer could still hold: if the doping of the transistors (and diodes) is reduced, then the depletion width is reduced (scale invariance), though the current and the voltage of the analog computer must scale as well.
    Every other linear component can be varied continuously, until quantum mechanics becomes dominant.

  212. Rahul Says:

    a place where the refugees overwhelmingly want to go, and where there’s an existing community ready to absorb them and make them productive citizens—then the rest of the world shouldn’t obstruct that solution….That’s exactly what happened with the Holocaust refugees: the embryonic Israeli state desperately wanted them,

    I think you are being overly simplistic. You make it sound like a third party imposing a blockade. It was hardly that way. So far as Palestine in 1945 was concerned, Britain was hardly “rest of the world”. They still were administering the Palestinian Mandate, right? So they were, at worst, refusing entry into territory administered by them.

    If in your phrase “existing community” you include the Arab settlers, clearly the sentiment toward incoming Jews was quite hostile, including a recent civil war. The embryonic Israeli state you mention was itself in the midst of an extended existential battle & by no means agreeable to all Palestinians. The British had their hands tied by their earlier treaties & declarations, and I sense some legitimate fear of worsening the Arab-Jewish conflict in Palestine by suddenly allowing in an influx of Jews, or of any appearance of trying to unduly favor one side in the conflict.

    In hindsight, the situation seems rather complex. If you are going to blame the Brits for being cautious you’ve probably got to blame the US a hundred times over for similar hurdles to Jewish immigration all through WW-2 with no similar risks like those that the British faced in Palestine.

    I admit I’m no expert on these issues, so apologies if I’m messing something up!

  213. Sam Hopkins Says:

    Scott: I don’t understand your objection to the “gizmo” conception of quantum computing. Surely it would be more objectionable if all applications of the gizmo were speculative at best (see D-Wave); but Shor’s algorithm gives us an application heavily backed by theory! Do you think someone writing a grant proposal that focuses on (or at least mentions for motivation) the fact that a quantum computer would break the predominant form of online cryptography is being disingenuous?

    To go along with this, I also tend not to support the general pessimism towards the possibility of new and useful quantum exponential speed-ups. For example, how outlandish really is the claim that in the next 20 years we will have an efficient quantum graph isomorphism algorithm?

  214. Scott Says:

    domenico #211: Our universe is not scale-invariant. Even on everyday scales, an ant the size of an elephant couldn’t walk. And on microscopic scales, matter is made up of elementary particles obeying the laws of quantum mechanics, so those are the laws that are ultimately relevant if we want to talk about the physical limits of computation.

  215. Scott Says:

    Sam #213: I’m not making the absurd claim that there are no applications of QC, or that anyone who talks about such applications is dishonest! I’m interested and excited about many of the applications myself (especially quantum simulation). My claim is only that the scope and importance of the applications has been exaggerated, relative to the scientific importance of proving that scalable QC is possible at all.

    In particular, I wish I could say that the typical person reading articles about “the coming quantum revolution in computing” came away understanding the following points:

    (1) For most of what we do with our computers—word processing, email, web-browsing, Skyping, videogames—a QC doesn’t help. To whatever extent those things are broken, they’re not broken in ways that switching to QC will fix.

    (2) There are much easier ways to compromise computer security than by efficiently factoring integers (e.g., the Heartbleed attack announced just yesterday), and those are probably the ones you should worry about in practice, to whatever extent you worry about computer security at all.

    (3) There are other cryptosystems out there that we don’t know how to break even with a QC (e.g., lattice-based systems, multivariate polynomial systems, almost any private-key system), albeit they’re currently less convenient than the ones we use (the public-key ones tend to require large key sizes and message sizes, though that will probably be improved with further research).

    (4) The hope of a big quantum advantage for NP-complete optimization problems is, at present, a speculative one.

    (5) Even for quantum simulation—in my opinion, the most compelling practical application of QC that we know about—it’s subtle to figure out exactly where QC would give you a practical advantage. That’s true both because of the large polynomial overheads in current quantum simulation algorithms, and because of the 90 years of brilliance that have gone into the design of classical simulation heuristics (perturbation methods, Quantum Monte Carlo, density functional theory, …) that very often (though not always) tell you what you want to know about a quantum system in practice.
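
    (To make point (3) concrete, here is a minimal sketch, assuming the third-party Python “cryptography” package and its Fernet recipe, which is built on AES; the message below is made up for illustration. Against symmetric ciphers like this, the best known quantum attack is Grover’s square-root speedup, which doubling the key length absorbs.)

      # Minimal sketch of a private-key ("symmetric") cryptosystem in practice.
      # Assumes the third-party package: pip install cryptography
      from cryptography.fernet import Fernet

      key = Fernet.generate_key()      # the shared secret key, used by both parties
      f = Fernet(key)
      token = f.encrypt(b"meet me at the usual place")   # AES-based encryption plus a MAC
      print(f.decrypt(token))          # b'meet me at the usual place'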

    As for graph isomorphism: yes, I do think it’s entirely possible that it’s in BQP! I also think it’s entirely possible that it’s in P. 🙂 But in any case, (a) we already have classical codes that solve GI extremely fast in almost all cases in practice, and (b) the practical applications of GI are, as far as I know, limited.
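
    (To give a feel for point (a), here is a rough sketch using the Python networkx library; it is purely illustrative, with a made-up random graph, and real GI codes such as nauty are far more sophisticated. The VF2-style matcher networkx uses can still blow up on adversarial instances.)

      # Rough sketch: graph isomorphism is usually easy in practice.
      # Assumes the third-party package: pip install networkx
      import networkx as nx

      # A random graph and an isomorphic copy obtained by relabeling the vertices.
      G = nx.gnp_random_graph(200, 0.1, seed=1)
      relabeling = {v: (7 * v + 3) % 200 for v in G.nodes()}   # a fixed permutation of 0..199
      H = nx.relabel_nodes(G, relabeling)

      # On typical instances like these, this returns almost instantly, even though
      # no polynomial-time worst-case algorithm for GI is known.
      print(nx.is_isomorphic(G, H))    # True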

  216. Scott Says:

    Rahul #212: I do blame the US for its immigration policies after 1924—indeed, those were incomparably worse in their effects than the British keeping the survivors in internment camps after the Holocaust was over. Sorry for not making that clear.

  217. fred Says:

    Pretty interesting interview with the D-Wave CEO:

    http://spectrum.ieee.org/podcast/computing/hardware/dwave-aims-to-bring-quantum-computing-to-the-cloud?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+IeeeSpectrum+%28IEEE+Spectrum%29

  218. Scott Says:

    fred #217: Thanks for the link! That’s one of the least cringe-inducing D-Wave interviews I’ve seen. Vern Brownell seems much more guarded in his statements than some others at D-Wave have been, and I’m grateful for that.

    I do, of course, have some fundamental disagreements with what he said.

    Firstly, I don’t know which problems he’s talking about where the D-Wave machine does “maybe slightly better” than classical computers—is he referring to the “Defining and detecting quantum speedup” paper? If so, then I’d say that it doesn’t count to compare D-Wave against a classical annealing code on a bunch of randomly-generated instances, postselect the 10% or so where D-Wave did slightly better (for certain ways of measuring speedup, not other ways), and then call those a “class of instances where you get a speedup”! At the very least, the class of instances should be something whose properties you can specify in advance.

    Secondly, he stresses how impressive it is that D-Wave has even achieved “parity” with classical hardware, given the 60 years and billions of dollars that have been invested in the latter. I’ve heard many D-Wave supporters make the same point recently. But they always forget to add that this “parity” is for the one specific problem of finding the low-energy states of the D-Wave Chimera graph! And while other optimization problems can be reduced to that problem, doing so incurs a very large blowup. So you could turn things around, and talk up how impressive it is that a general-purpose computer can achieve parity with a special-purpose device designed over a decade with $150 million solely for the one task on which we’re comparing the two… 🙂

    Thirdly, as I’ve said many times before on this blog, I don’t see the scientific case that simply scaling to more and more qubits, without fundamentally rethinking the hardware, should be expected to produce a clear speedup. It could, but we have no good reason to think so right now (and even Daniel Lidar agrees with me here). So I’d be curious to know on what basis Brownell made that prediction.

  219. Scott Says:

    g #203:

      Steve Squyres, principal investigator of two of the most successful robotic missions ever, disagrees. “I believe that the most successful exploration is going to be carried out by humans, not by robots. What Spirit and Opportunity have done in 5 1/2 years on Mars, you and I could have done in a good week.”

    I actually think Squyres might have a point that, in the future, there could conceivably be good scientific reasons to send humans to space, if (e.g.) the costs were to come down enough. (Of course, on the other side of the ledger, the robots will continue to get better and better…)

    My claim is merely that, if you look at any of the human space missions that have actually occurred so far, then we could’ve obtained the same scientific knowledge from unmanned missions at maybe 1% of the cost—except, of course, for knowledge about the effects of prolonged weightlessness on the human body, and stuff like that, which is obviously hard to obtain otherwise (but also of limited interest unless there’s a manned space program).

    Crucially, even if you want robots to do things that are complicated and ambitious, like scooping up rocks and returning them to earth, it’s still a lot cheaper than having humans do those things. (The USSR, for example, did sample-return missions to the Moon, at a small fraction of the cost of Apollo. And we could, and should, do the same thing with Mars.)

    And in the farcical case of using the Space Shuttle to service the Hubble telescope, it would actually have been much cheaper to just build and launch several new Hubble telescopes, using unmanned rockets!

    I happily concede that there are defensible non-scientific reasons for sending humans into space: for example, inspiring schoolkids, catalyzing the nation’s technology development, showing the Ruskies who’s boss. But all of those reasons were probably much more compelling in the 1960s than they are today.

  220. fred Says:

    Scott #219
    But it seems reasonable to assume that the long-term survival of the human race involves colonizing other worlds, no?
    That said, there’s probably too little time between the rise of the industrial revolution and the appearance of its adverse effects (irreversible environmental damage, nuclear war, biological war) to allow for a viable and serious “relocation” option (we don’t have the tech to terraform other worlds). And anyway, spreading our fundamentally idiotic nature to other planets isn’t a very good guarantee of survival; thinking that we have other worlds to waste would probably just encourage our bad habits.

  221. Scott Says:

    fred #220: Even an Earth ravaged by nuclear war and environmental destruction would still be much more hospitable to human life than any other astronomical body that we know about. Yes, someday we might have the technology to terraform other planets, or to travel to other planets that happen to be hospitable to us already (if there are any). But it’s far from obvious to me that, long before that, we wouldn’t also have the technology to (e.g.) upload ourselves onto a digital substrate and leave behind the physical world entirely! (At least until the electricity runs out.)

  222. srp Says:

    “At the very least, the class of instances should be something whose properties you can specify in advance.”

    Simple: the class that would be selected by a D-Wave employee after inspection of the results.

    Projected Hindsight is the ultimate weapon.

  223. Greg Kuperberg Says:

    g – “Steve Squyres, principal investigator of two of the most successful robotic missions ever, disagrees.”

    Squyres has his own opportunistic reasons for holding such opinions. A number of people connected to big-ticket space missions start to talk this way, even if those specific missions are robotic.

  224. Rahul Says:

    One naive question: What are we doing with the LHC now that the initial Higgs confirmation is done? Are there other big goals? Or incremental developments & polishing / verification of original results?

  225. Scott Says:

    Rahul #224: Experts could tell you much more, but the LHC is now offline for a couple of years while they upgrade it to its full design energy of 7 TeV per beam (14 TeV per collision). Once it’s back online at the higher energy, they’ll look again for anything new or unexpected (e.g., superpartners or dark matter particles); of course there’s no guarantee that any such thing will turn up. In the meantime, the LHC is also used to do other physics not quite at the “energy frontier”, particularly quantum chromodynamics.

  226. Rahul Says:

    Once it’s back online at the higher energy, they’ll look again for anything new or unexpected there

    Thanks Scott! That first bit sounds like a fishing expedition. 🙂 An expensive one too. Well, I hope it’s worthwhile, and that they find something to justify the upgrade costs.

    One problem with projects like these is knowing when to quit. I mean, if there are big questions you need the LHC (or an HL-LHC) for, then by all means keep using it. But there’s a temptation to flog a dead horse sometimes.

    Mega projects get bureaucratic / self-perpetuating & sometimes people will look for problems to fit a tool rather than tools to solve problems.

    Funnily enough, Wikipedia entries (which are typically quite good) exist for both the HL-LHC & the VLHC, but neither talks of any concrete objectives.

  227. Scott Says:

    Rahul #226: But then there’s the other way of looking at it. Namely, once you’ve spent so much money to build the LHC in the first place, it would be a terrible waste not to run it at its full design energy! 🙂 Besides, while finding anything new would (in most physicists’ minds) certainly justify the upgrade cost, not finding anything would, as I understand it, much more decisively kill the idea that there could be any “low-energy explanation” (like SUSY) for the stability of the Higgs mass—and that would also be of great interest to the theorists.

  228. domenico Says:

    Thank you Scott #214

    I am thinking that a simple oscillator obeying Hooke’s law has a scale invariance: it is possible to rescale the displacement by some factor, and the inverse of the spring constant by the same factor (to maintain the symmetry). The trajectories are the same, only rescaled.
    I thought the same could hold for every physical law, if the constants in the law are rescaled along with the variables (though this is not always the case), treating the constants as generalized coordinates of the theory (so that, by Noether’s theorem, there is a conservation law for each variable under rescaling, including the physical constants).
    I thought of applying this to check when a system becomes quantum, by examining its trajectory and dynamics, because if the system is quantum one can try to exploit the quantum behaviour. But I understand that it is more interesting to have a test for a quantum computer (is a black box a quantum computer, judging from its results?).

  229. fred Says:

    Scott #221

    Right, and in the medium term there are plenty of clues that our reliance on “physical stuff” may taper down to a strict minimum:

    * disappearance of physical medium for books, movies, etc.
    * success of virtual economies: lots of games make a ton of money selling virtual items (which can be rare but have zero additional cost to the environment), i.e. people are willing to accept virtual items as tokens of social status (a rare digital car vs a real-world Ferrari).
    * the advent of virtual reality: a strong sense of presence will make much physical travel obsolete (virtual tourism, telecommuting, long-distance communication, etc.).

    It seems we’ll pretty soon get to a point where we can divert significant funds from real-world transport infrastructure (highways, gas, vehicles) into better and faster internet.
    Then eventually the only real-world consumables left will be food and the maintenance/upgrade of computing clouds.
    The food problem can be optimized too, by growing efficient crops directly where you need them (rooftops of skyscrapers, local farm communities, etc.).

  230. Rahul Says:

    Scott #227:

    once you’ve spent so much money to build the LHC in the first place, it would be a terrible waste not to run it at its full design energy!

    Isn’t that the Sunk Cost Fallacy?

  231. fred Says:

    Rahul #230
    I’m pretty sure that the plan since the beginning was to eventually retrofit the LHC into a Wave-Motion Cannon:
    http://tinyurl.com/pexw9qz

  232. Jerry Says:

    re #228

    …(so that, by Noether’s theorem, there is a conservation law for each variable under rescaling, including the physical constants).

    I am a big Emmy Noether fan. She was a true unsung hero of mathematics & physics. Einstein was not her mentor, he was her mentee. Being a woman and a Jew during an era when both faced prejudice and persecution raises the question of what more she might have accomplished without these obstacles.

    See: http://www.nytimes.com/2012/03/27/science/emmy-noether-the-most-significant-mathematician-youve-never-heard-of.html?pagewanted=all&_r=0

    re: Scott’s comment #214:

    “Our universe is not scale-invariant.”

    If one needs to prepare a Bell state to firmly verify the existence of a quantum computer, it seems the “best” quantum computer would simply be two entangled electrons, perhaps on a graphene surface.

    As you “scale up” to more entangled electrons and eventually macroscopic (Newtonian) structures, errors and decoherence will give you a very bad day.

    Scott has stated that a proof that quantum computing is physically impossible would have more profound implications than if it were possible. From what I have seen and heard so far, I remain a set theorist: {quantum computers, bigfoot, Easter bunny}.

  233. Mike Says:

    “Isn’t that the Sunk Cost Fallacy?”

    Yeah, but contrary to conventional wisdom, in a broad range of situations, it is rational for people to condition behavior on sunk costs, because of informational content, reputational concerns, or financial and time constraints. Once all the elements of the decision-making environment are taken into account, reacting to sunk costs can often be understood as rational behavior.

    http://www.suemialon.com/research/RevisedSunkCostsMatterApril2007.pdf

  234. fred Says:

    Btw, didn’t they just find that weird 4-quark particle at the LHC?

  235. srp Says:

    The point on sunk costs is that the marginal cost of using the LHC after it’s been built isn’t that high relative to the potential value of the findings, even if those findings are less epochal than discovering the Higgs. BTW, I believe there is a ton of detailed measurement on the Higgs that they want to do to pin down all sorts of things about it.

    And God knows QCD can use all the experimental guidance it can get. Maybe someday we’ll be able to actually derive a proton from first principles, or even a nucleus with two particles.

  236. Gil Kalai Says:

    Here are thirteen statements raised here, and how my opinion compares to Scott’s regarding them.

    (1) A physical computing device for factoring can be developed with a speed-up of 50 orders of magnitude compared to classical computers.

    Scott: yes, my opinion: no.

    (2) Universal quantum computers will be built in several decades.

    My opinion: no.

    (3) The possibility of quantum computing devices to achieve exponential speed-up and the failure of the extended Church-Turing thesis can be convincingly demonstrated using BosonSampling (without quantum fault-tolerance).

    Scott: yes, me: no, even if universal quantum computers based on quantum fault-tolerance are possible.

    (4) The BosonSampling task can be demonstrated in the near future and this will require resources below 100 million dollars.

    I commend Scott’s enthusiasm for this project. (Merely the possibility of a Higgs-level discovery for 1/1000 the cost should make you  skeptical.) 

    (5) The scientific importance of (1) and even (3) can be compared to the discovery of the Higgs boson.

    Here, Scott and I agree.

    (6) Quantum computing devices may revolutionize our ability to design and create new medications.

    (Scott advocated this very important potential practical application in several places but not in this post.)

    (7) Quantum computers will enable breaking many of the cryptographic systems used today.

    We both agree that, if QCs can be built, this is correct.

    (8) Item (7) is not a big deal since people will simply move to different crypto-systems

    This is more or less what Scott says. I disagree with that. (The tiny investments in QC by relevant agencies may reflect their revealed beliefs regarding QC.)

    (9) Quantum computers will have major applications for optimization

    Scott strongly objects and refers to this view as “hype”. Given his belief in QC, I find his position unreasonable.

    (10) Quantum computers will have major impact for artificial intelligence.

    The same as (9).

    (11) The scientific importance of demonstrating QC exceeds by far any practical use they may have.

    I think that building QCs would have vastly important practical uses, and that even a convincing explanation for why QCs cannot be built might lead to practical fruits.

    (12) In contrast to producing Higgs bosons, building a scalable quantum computer is so “obviously, self-evidently” possible that we don’t even need experimental confirmation of it.

    I think that this position is unreasonable.

    (13) Quantum-computing-driven experimental physics/engineering deserves major funding, but the time is not yet ripe for an LHC/Manhattan-scale multi-billion-dollar investment.

    We agree on this point. 

  237. Jerry Says:

    “Re: Gil Kalai Says:

    Comment #235 April 11th, 2014 at 3:14 am

    Here are thirteen statements raised here and how my opinion compared to Scott’s regarding them.”

    These were essentially the points I tried to make yesterday with an analogy from set theory: {Bigfoot, Easter Bunny, Quantum Computers}.

    Aarogantly (sic), Scott deleted it. It is ironic that Scott encourages disagreement in the classes he teaches, but not on his blog.

  238. Scott Says:

    Gil #235:

      Merely the possibility of a Higgs-level discovery for 1/1000 the cost should make you skeptical.

    The discovery of the B-modes in the CMB a month ago seems to have been precisely a Higgs-level discovery (or even more than that) for 1/1000 the cost.

    (On the other hand, we really don’t know yet whether Scattershot BosonSampling can be done with 20-30 photons, for 1/1000 the cost of the LHC or any other realistic cost. It does strike me as a fruitful thing to investigate, though.)

      Scott strongly object [to the view that quantum computers will have major applications for optimization] and refer to this view as “hype”. Given his belief in QC, I find his position unreasonable.

    Gil, if you think anyone who “believes in QC” should also believe that QC will have major applications for optimization (i.e., believe it as a fact rather than as a speculative possibility), then you must have strong arguments that quantum optimization algorithms like the adiabatic algorithm really can produce practically-important speedups over classical algorithms like simulated annealing. So, what are those arguments?
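
    (For readers who haven’t seen it, here is a bare-bones sketch of the classical baseline being discussed, simulated annealing, run on a toy Ising-style instance; the instance, cooling schedule, and parameters are invented for this illustration and have nothing to do with any actual benchmark.)

      # Bare-bones simulated annealing on a toy Ising-style instance (illustrative only).
      import math, random

      def simulated_annealing(energy, neighbor, x0, steps=10000, t0=2.0):
          # Always accept downhill moves; accept uphill moves with probability exp(-dE/T);
          # cool the temperature T linearly toward zero.
          x, e = x0, energy(x0)
          best_x, best_e = x, e
          for k in range(steps):
              t = t0 * (1 - k / steps) + 1e-9
              y = neighbor(x)
              ey = energy(y)
              if ey <= e or random.random() < math.exp((e - ey) / t):
                  x, e = y, ey
                  if e < best_e:
                      best_x, best_e = x, e
          return best_x, best_e

      # Toy instance: minimize the sum of J_ij * s_i * s_j over spins s_i in {-1, +1}.
      n = 40
      random.seed(0)
      J = {(i, j): random.choice([-1, 1]) for i in range(n) for j in range(i + 1, n)}
      energy = lambda s: sum(J[i, j] * s[i] * s[j] for (i, j) in J)

      def flip_one_spin(s):
          t = list(s)
          t[random.randrange(n)] *= -1
          return tuple(t)

      s0 = tuple(random.choice([-1, 1]) for _ in range(n))
      print(simulated_annealing(energy, flip_one_spin, s0)[1])   # a (locally) low energy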

  239. Douglas Knight Says:

    srp: Actually, quantum computers would be much more useful for QCD than experimental results from the LHC. They can’t derive the proton from first principles because it is a quantum computation. Scott keeps warning us that molecular modeling might be difficult with a near-future quantum computer, but atomic modeling ought to be a pretty immediate application.

  240. Douglas Knight Says:

    Gil, is your point 3 a reversal? Didn’t you say before that 10 or 20 bosons would be enough to disprove you? Or was that just to disprove your arguments, but not to convince you of other people’s arguments? I don’t see much room for a middle ground. How can you object to scaling without knowing at what scale it breaks down?

    The BS paper argues that scalable BS with classical computers ought to collapse the polynomial hierarchy. So scalable BS is a better argument against ECT than factoring. No one actually expects scalable BS before scalable QC, but shouldn’t some finite amount be very convincing? Again, how can you argue that scaling is not possible without knowing where it breaks down?
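
    (For context: the amplitudes in BosonSampling are permanents of n-by-n submatrices of the interferometer’s unitary, and the best known classical algorithms for the permanent, such as Ryser’s formula sketched below, take exponential time. The sketch is illustrative only, with a made-up random matrix standing in for the actual submatrices, and it already slows to a crawl well before n = 30.)

      # Ryser's formula for the matrix permanent: O(2^n * n^2) time, vs. O(n^3) for the determinant.
      from itertools import combinations
      import numpy as np

      def permanent(A):
          n = len(A)
          total = 0.0
          for r in range(1, n + 1):
              for cols in combinations(range(n), r):
                  row_sums = A[:, list(cols)].sum(axis=1)
                  total += (-1) ** r * np.prod(row_sums)
          return (-1) ** n * total

      # A made-up complex matrix standing in for a submatrix of a Haar-random unitary.
      A = (np.random.randn(10, 10) + 1j * np.random.randn(10, 10)) / np.sqrt(20)
      print(permanent(A))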

  241. Greg Kuperberg Says:

    Scott:

    Vern Brownell seems much more guarded in his statements than some others at D-Wave have been.

    Actually, Vern Brownell seems much more guarded in that interview than Vern Brownell has been elsewhere. He’s not actually being guarded; he’s playing a double game.

    E.g., do you know why quantum computing is so amazing, Scott? Because it will leave Moore’s Law in the dust. Moore’s Law will be replaced by Rose’s Law. “You will see many orders of magnitude improvement on each generation, rather than the 2X or 5X that we typically see in classical computing.”

    http://gigaom.com/2014/03/19/quantum-computers-will-leave-moores-law-far-far-behind/

  242. Greg Kuperberg Says:

    Gil:

    (9) “Quantum computers will have major applications for optimization”. Scott strongly objects and refers to this view as “hype”. Given his belief in QC, I find his position unreasonable.

    What Scott actually says is that a major impact of QC on optimization is speculative and based on hype. There is a big difference between saying that a claim is based on hype, and saying that it is hype. We can speculate that quantum algorithms will have a big effect on optimization, but no one knows that. Scott is exactly correct that existing claims are based on hype. Your point of debate here is spurious.

  243. aviti Says:

    Scott #79: so this is what was cooking in the kitchen… http://www.nature.com/nature/journal/v508/n7497/full/nature13171.html

  244. jonas Says:

    Scott, have you met the kind of belief David describes in the blog entry “http://www.madore.org/~david/weblog/2014-05.html#d.2014-05-08.2200”, where people believe the concepts of mathematics (or of TCS) aren’t real, but the concepts of physics and other sciences are? If so, do these beliefs have practical consequences? For example, are people trying to use them as arguments for decreasing government funding of mathematical research (compared to research in physics and other sciences)?

  245. Scott Says:

    jonas #244: Insofar as I understand that French blog post (via Google Translate), I strongly agree with it. Speaking generally, the problem with people who adopt anti-realist positions—whether they’re the mathematical or the scientific kind of anti-realist—is that they never do so consistently. “OK great, so the positive integers are an arbitrary human construct? Then why not electrons? Why not the room that you’re sitting in? All three didn’t have names until some human thought to name them; on the other hand, all three seem to have properties that I can’t change just by wishing it…” Labels like “arbitrary human construct” never get applied to everything, only to particular things that the speaker wants to denigrate.

  246. jonas Says:

    Scott: thanks for the reply.

  247. Serge Says:

    jonas #244: Thank you for introducing us to David Madore. I agree with what he says about Platonism. For example, for me the set of all Turing machines actually exists. It’s just that its predictive power about what happens in this world has, in my opinion, been overrated.

    I think the language in David’s blog is simple enough to allow even readers with only faint memories of school French to understand most of what he says. Google Translate is quite good, but every now and then it makes an awful mistake that spoils everything…

  248. Collin237 Says:

    Scott #245: Some things, such as race and religion, really are “arbitrary human constructs”. Perhaps these people want to sow doubt about this, without making their evil intention obvious. So they instead claim that various other things are AHC, so as to make it seem like it’s up for debate.

  249. Collin237 Says:

    Scott #0: “but simply some other explanation that no one has thought of yet”

    Such as points of space (in the non-pointillistic sense of irreducibly small volumes, e.g., Planck scale) decaying into either more or fewer points, and thus evolving by natural selection toward some kind of optimum that corresponds to physics as we know it?

  250. Stephen Jordan Says:

    Particle Fever is now available for free on Netflix streaming, and for rent via Amazon streaming, which is much cheaper than the download from iTunes.