{"id":4447,"date":"2019-12-28T10:08:12","date_gmt":"2019-12-28T16:08:12","guid":{"rendered":"https:\/\/scottaaronson.blog\/?p=4447"},"modified":"2019-12-28T18:44:28","modified_gmt":"2019-12-29T00:44:28","slug":"quantum-computing-motte-and-baileys","status":"publish","type":"post","link":"https:\/\/scottaaronson.blog\/?p=4447","title":{"rendered":"Quantum computing motte-and-baileys"},"content":{"rendered":"\n<p>In the wake of two culture-war posts&#8212;the <a href=\"https:\/\/scottaaronson.blog\/?p=4450\">first<\/a> on the term &#8220;quantum supremacy,&#8221; the <a href=\"https:\/\/scottaaronson.blog\/?p=4476\">second<\/a> on the acronym &#8220;NIPS&#8221;&#8212;it&#8217;s clear that we all need to cool off with something anodyne and uncontroversial.  Fortunately, this holiday season, I know just the thing to bring everyone together: groaning about quantum computing hype!<\/p>\n\n\n\n<p>When I was at the <a href=\"https:\/\/q2b.qcware.com\/\">Q2B conference<\/a> in San Jose, I learned about lots of cool stuff that&#8217;s happening in the wake of Google&#8217;s quantum supremacy announcement.  I heard about the 57-qubit superconducting chip that the Google group is now building, following up on its 53-qubit one; and also about their first small-scale experimental demonstration of my certified randomness protocol.  I learned about recent progress on costing out the numbers of qubits and gates needed to do fault-tolerant quantum simulations of useful chemical reactions (IIRC, maybe a hundred thousand qubits and a few hours&#8217; worth of gates&#8212;scary, but not Shor&#8217;s algorithm scary).<\/p>\n\n\n\n<p>I also learned about two claims about quantum algorithms that startups have made, and which are being wrongly interpreted.  The basic pattern is one that I&#8217;ve come to know well over the years, and which you could call a science version of the <a href=\"https:\/\/philpapers.org\/archive\/SHATVO-2.pdf\">motte-and-bailey<\/a>.  
(For those not up on nerd blogosphere terminology: in medieval times, the motte was a dank castle to which you'd retreat while under attack; the bailey was the desirable land that you'd farm once the attackers left.)

To wit:

1. Startup makes claims that have both a true boring interpretation (e.g., you can do X with a quantum computer), as well as a false exciting interpretation (e.g., you can do X with a quantum computer, *and it would actually make sense to do this, because you'll get an asymptotic speedup over the best known classical algorithm*).
2. Lots of business and government people get all excited, because they assume the false exciting interpretation must be true (or why else would everyone be talking about this?). Some of those people ask me for comment.
3. I look into it, perhaps by asking the folks at the startup. The startup folks clarify that they meant only the true boring interpretation. To be sure, they're actively *exploring* the false exciting interpretation (whether some parts of it might be true after all), but they're certainly not making any claims about it that would merit, say, a harsh post on *Shtetl-Optimized*.
4. I'm satisfied to have gotten to the bottom of things, and I tell the startup folks to go their merry way.
5. Yet many people continue to seem as excited as if the false exciting interpretation had been shown to be true.
They continue asking me questions that presuppose its truth.

Our first instance of this pattern is the [recent claim](https://www.newscientist.com/article/2227387-quantum-computer-sets-new-record-for-finding-prime-number-factors/), by [Zapata Computing](https://www.zapatacomputing.com/), to have set a world record for integer factoring (1,099,551,473,989 = 1,048,589 × 1,048,601) with a quantum computer, by running a QAOA/variational algorithm on IBM's superconducting device. Gosh! That sure sounds a lot better than the 21 that's been factored with Shor's algorithm, doesn't it?

I read the [Zapata paper](https://arxiv.org/abs/1808.08927) that this is based on, entitled "Variational Quantum Factoring," and I don't believe that a single word in it is false. My issue is something the paper *omits*: namely, that once you've reduced factoring to a generic optimization problem, you've thrown away all the mathematical structure that [Shor's algorithm](https://en.wikipedia.org/wiki/Shor%27s_algorithm) cleverly exploits, and that makes factoring asymptotically easy for a quantum computer. And hence there's no reason to expect your quantum algorithm to scale any better than brute-force trial division (or, in the most optimistic scenario, trial division enhanced with Grover search). On large numbers, your algorithm will be roundly outperformed even by *classical* algorithms that do exploit structure, like the [Number Field Sieve](https://en.wikipedia.org/wiki/General_number_field_sieve). Indeed, the quantum computer's success at factoring the number will have had little or nothing to do with its being *quantum* at all: a classical optimization algorithm would've served as well.
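To make the brute-force point concrete, here's a sketch of my own (not code from the Zapata paper): naive trial division, the most primitive factoring method there is, dispatches the 41-bit "record" number in a fraction of a second on a classical laptop.

```python
def trial_division(n):
    """Factor n into primes by naive trial division -- no quantum hardware required."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:  # whatever remains is prime
        factors.append(n)
    return factors

print(trial_division(1_099_551_473_989))  # [1048589, 1048601]
```

Since both prime factors are near 2^20, the loop runs only about a million iterations; any scheme that has to beat this on a 41-bit number hasn't demonstrated much.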
And thus the only reasons to factor a number on a quantum device in this way would seem to be things like calibrating the device.

Admittedly, to people who work in quantum algorithms, everything above is so obvious that it doesn't need to be said. But I learned at Q2B that there are interested people for whom this is *not* obvious, and for whom it even comes as a revelation. So that's why I'm saying it.

Again and again over the past twenty years, I've seen people reinvent the notion of a "simpler alternative" to Shor's algorithm: one that cuts out all the difficulty of building a fault-tolerant quantum computer. In every case, the trouble, typically left unstated, has been that these alternatives *also* cut out the exponential speedup that's Shor's algorithm's raison d'être.

Our second example today of a quantum computing motte-and-bailey is the claim, by the Toronto-based quantum computing startup [Xanadu](https://www.xanadu.ai/), that [Gaussian BosonSampling](https://arxiv.org/abs/1612.01199) can be used to solve all sorts of graph problems, like graph isomorphism, graph similarity, and densest subgraph. As the co-inventor of [BosonSampling](https://en.wikipedia.org/wiki/Boson_sampling), few things would warm my heart more than finding an actual application for that model (besides quantum supremacy experiments and, perhaps, certified random number generation).
But I still regard this as an open problem, if by "application" we mean outperforming what you could've done classically.

In papers (see for example [here](https://arxiv.org/abs/1810.10644), [here](https://arxiv.org/abs/1905.12646), and [here](https://arxiv.org/abs/1803.10730)), members of the Xanadu team have given all sorts of ways to take a graph and encode it into an instance of Gaussian BosonSampling, in such a way that the output distribution will then reveal features of the graph, like its isomorphism type or its dense subgraphs. The trouble is that so far, I've seen no indication that this will actually lead to quantum algorithms that outperform the best classical algorithms, for any graph problem of practical interest.

In the case of Densest Subgraph, the Xanadu folks use the output of a Gaussian BosonSampler to seed (that is, provide an initial guess for) a classical local search algorithm. They say they observe better results this way than if they seed that classical local search algorithm with completely random initial conditions. But of course, the real question is: could we get equally good results by seeding with the output of some *classical* heuristic? Or by solving Densest Subgraph with a different approach entirely? Given how hard it's turned out to be just to *verify* that the outputs of a BosonSampling device came from such a device at all, it would be astonishing if the answer to these questions weren't "yes."

In the case of Graph Isomorphism, the situation is even clearer. There, the central claim made by the Xanadu folks is that given a graph G, they can use a Gaussian BosonSampling device to sample a probability distribution that encodes G's isomorphism type. So, isn't this "promising" for solving GI with a quantum computer?
All you'd need to do now is invent some fast classical algorithm that could look at the samples coming from two graphs G and H, and tell you whether the probability distributions were the same.

Except, not really. While the Xanadu paper never says so, if all you want is to sample a distribution that encodes a graph's isomorphism type, that's easy to do classically! (I even put this on the final exam for my undergraduate Quantum Information Science course a couple weeks ago.) Here's how: given as input a graph G, just output G with its vertices randomly permuted. Indeed, this will even provide a further property, better than anything the BosonSampling approach has been shown to provide (or than it probably does provide): namely, if G and H are *not* isomorphic, then the two probability distributions will not only be different but will have disjoint supports. Alas, this still leaves us with the problem of distinguishing which distribution a given sample came from, which is as hard as Graph Isomorphism itself.
None of these approaches, classical or quantum, seems to lead to any algorithm that's subexponential-time, let alone one competitive with the ["Babai approach"](https://scottaaronson.blog/?p=2521) of thinking really hard about graphs.

All of this stuff falls victim to what I regard as the Fundamental Error of Quantum Algorithms Research: namely, to treat it as "promising" that a quantum algorithm works at all, or works better than some brute-force classical algorithm, without asking yourself whether there's any indication that your approach will *ever* be able to exploit interference of amplitudes to outperform the *best* classical algorithm.

Incidentally, I'm not sure exactly why, but in practice, a major red flag that the Fundamental Error is about to be committed is when someone starts talking about "hybrid quantum/classical algorithms." By this they seem to mean: "outside the domain of traditional quantum algorithms, so don't judge us by the standards of that domain." But I liked the way someone at Q2B put it to me: *every* quantum algorithm is a "hybrid quantum/classical algorithm," with classical processors used wherever they can be, and qubits used only where they must be.

The other thing people do, when challenged, is to say "well, admittedly we have no *rigorous proof* of an asymptotic quantum speedup," thereby brilliantly reframing the whole conversation, so that people like me look like churlish theoreticians insisting on an impossible and perhaps irrelevant standard of rigor, blind to some huge practical quantum speedup that's about to change the world.
The real issue, of course, is not that they haven't given a *proof* of a quantum speedup (in either the real world or the black-box world); rather, it's that they've typically given no reason whatsoever to think that there *might* be a quantum speedup, compared to the best classical algorithms available.

In the holiday spirit, let me end on a positive note. When I did the Q&A at Q2B (the same one where Sarah Kaiser asked me to comment on the term "quantum supremacy"), one of my answers touched on the most important theoretical open problems about sampling-based quantum supremacy experiments. At the top of the list, I said, was whether there's some interactive protocol by which a near-term quantum computer can not only exhibit quantum supremacy, but *prove* it to a polynomial-time-bounded classical skeptic. I mentioned that there was *one* proposal for how to do this, in the IQP model, due to [Bremner and Shepherd](https://arxiv.org/abs/0809.0847), from way back in 2008. I said that their proposal deserved much more attention than it had received, and that trying to break it would be one obvious thing to work on. Little did I know that, **literally while I was speaking**, a [paper was being posted to the arXiv](https://arxiv.org/abs/1912.05547), by Gregory Kahanamoku-Meyer, that claims to break Bremner and Shepherd's protocol. I haven't yet studied the paper, but assuming it's correct, it represents the first clear progress on this problem in years (even if of a negative kind). Cool!!