{"id":2854,"date":"2016-07-29T17:49:31","date_gmt":"2016-07-29T21:49:31","guid":{"rendered":"https:\/\/scottaaronson.blog\/?p=2854"},"modified":"2017-01-12T16:26:06","modified_gmt":"2017-01-12T21:26:06","slug":"more-wrong-things-i-said-in-papers","status":"publish","type":"post","link":"https:\/\/scottaaronson.blog\/?p=2854","title":{"rendered":"More Wrong Things I Said in Papers"},"content":{"rendered":"<p>Two years ago, I wrote a blog post entitled <a href=\"https:\/\/scottaaronson.blog\/?p=2072\">PostBQP Postscripts<\/a>, owning up to not one but <em>four<\/em> substantive mathematical errors that I&#8217;d made over the years in my published papers, and which my students and colleagues later brought to my sheepish attention.\u00a0 Fortunately, none of these errors affected the papers&#8217; main messages; they just added interesting new twists to the story.\u00a0 Even so, I remember feeling at the time like undergoing this public repentance\u00a0was soul-cleansing intellectual hygiene.\u00a0 I also felt like writing one big &#8220;post of shame&#8221; was <em>easier<\/em> than writing a bunch of separate errata and submitting them to journals, while also reaching a wider audience (and, therefore, doing an even better soul-cleansing job).<\/p>\n<p>So\u00a0I resolved that, anytime I&#8217;d saved up enough errata, I&#8217;d do another sackcloth-and-ashes post.\u00a0 Which brings us to today.\u00a0 Without further ado:<\/p>\n<hr \/>\n<p><strong>I. 
Quantum Money Falling Down<\/strong><\/p>\n<p>My and Paul Christiano&#8217;s <a href=\"http:\/\/theoryofcomputing.org\/articles\/v009a009\/v009a009.pdf\">explicit public-key quantum money scheme<\/a>&#8212;the one based on low-degree polynomials&#8212;has now been fully broken.\u00a0 To clarify, our abstract hidden-subspace scheme&#8212;the one that uses a classical black-box to test membership in the subspaces&#8212;remains totally fine.\u00a0 Indeed, we unconditionally proved the security of the black-box scheme, and our security proof stands.\u00a0 In the paper, though, we also stuck our necks out further, and conjectured that you could instantiate the black box, by publishing random low-degree polynomials that vanish on the subspaces you want to hide.\u00a0 While I considered this superfluous, at Paul&#8217;s insistence, we also recommended adding completely-random &#8220;noise polynomials&#8221; for extra security.<\/p>\n<p>Our scheme was broken in two stages.\u00a0 First, in 2014, <a href=\"https:\/\/hal.inria.fr\/hal-01098223\/document\">Pena et al.<\/a> broke the noiseless version of our scheme, using Gr\u00f6bner-basis methods, over fields of characteristic greater than 2.\u00a0 Over F<sub>2<\/sub>&#8212;the field we happened to use in our scheme&#8212;Pena et al. 
couldn&#8217;t quite prove that their attack worked, but they gave numerical evidence that at least it finds the subspaces in n<sup>O(log n)<\/sup> time.\u00a0 Note that nothing in Pena et al.&#8217;s attack is specific to quantum money: indeed, their attack consists of a purely classical algorithm, which efficiently solves the general classical problem of recovering large subspaces from polynomials that hide them.<\/p>\n<p>At that point, at least the <em>noisy<\/em> version of our scheme&#8212;the one Paul had insisted we include&#8212;was still standing!\u00a0 Indeed, the Gr\u00f6bner-basis attack seemed to break down entirely when some of the polynomials were random garbage.<\/p>\n<p>Later, though, Paul and Or Sattath realized that\u00a0a quantum trick&#8212;basically, the <a href=\"http:\/\/arxiv.org\/abs\/0912.3823\">single-copy tomography<\/a> of Farhi et al.&#8212;can identify which polynomials are the noisy ones, provided we&#8217;re given a legitimate quantum money state to start with.\u00a0 As a consequence, the problem of breaking the noisy scheme can be reduced to the problem of breaking the noiseless scheme&#8212;i.e., the problem that Pena et al. already essentially solved.<\/p>\n<p>As\u00a0bad as this sounds, it has an interesting positive consequence. \u00a0In our paper, Paul and I had actually given a security reduction\u00a0for our money scheme based on low-degree polynomials. \u00a0In particular, we showed that there&#8217;s no polynomial-time quantum algorithm to counterfeit our money states, <i>unless<\/i>\u00a0there&#8217;s a polynomial-time quantum algorithm that finds a basis for a subspace S\u2264F<sub>2<\/sub><sup>n<\/sup> of dimension n\/2 with \u03a9(2<sup>-n\/2<\/sup>) success probability, given a collection of low-degree polynomials p<sub>1<\/sub>,&#8230;,p<sub>m<\/sub> and q<sub>1<\/sub>,&#8230;,q<sub>m<\/sub> (m=O(n)) most of which vanish on S and its dual subspace respectively, but\u00a0that are otherwise random. 
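To make the underlying object concrete, here is a minimal Python sketch (my own toy, not the actual Aaronson-Christiano instantiation) of a hidden-subspace membership test built from polynomials that vanish on S. Recovering S from such published polynomials is exactly the classical problem that Pena et al.'s algorithm attacks. The tiny dimension, the deterministic polynomial choices, and the name `membership_test` are all illustrative; the real scheme uses random degree-d polynomials in n variables, which is where the hiding comes from (in this toy the subspace can be read off by inspection).

```python
import itertools

# Toy illustration over F_2^4 (the real scheme works in F_2^n with random
# degree-d polynomials): hide S = span{e1, e2} = {x : x3 = x4 = 0}.
n = 4
S = [x for x in itertools.product((0, 1), repeat=n)
     if x[2] == 0 and x[3] == 0]

# Published polynomials, each vanishing identically on S (arithmetic mod 2).
polys = [
    lambda x: x[2],
    lambda x: x[3],
    lambda x: (x[2] * x[3]) % 2,
    lambda x: (x[0] * x[2]) % 2,   # may mix in other coordinates too
]

def membership_test(x):
    """Accept x iff every published polynomial vanishes at x."""
    return all(p(x) == 0 for p in polys)

assert all(membership_test(x) for x in S)   # every point of S passes
assert not membership_test((0, 0, 1, 0))    # a point outside S is caught
```

A counterfeiter's classical task, in this picture, is to recover a basis for S given only the list `polys` (plus, in the noisy variant, extra random polynomials mixed in).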
\u00a0So, running our reduction backwards, the only possible conclusion\u00a0from the break is that there <em>is<\/em> such a quantum algorithm! \u00a0Yet we would&#8217;ve had no idea how to find that quantum algorithm without going through quantum money&#8212;nor do we know a classical algorithm for the problem, or even a quantum algorithm with \u03a9(1) success probability.<\/p>\n<p>In the meantime, the problem of designing a public-key quantum money scheme, with good cryptographic evidence for its security, remains open. \u00a0It&#8217;s plausible that there&#8217;s some other, more secure way to instantiate my and Paul&#8217;s hidden subspace scheme, for example using lattices. \u00a0And even before we&#8217;ve found such a way, we can use <a href=\"http:\/\/news.mit.edu\/2015\/secure-foundation-any-cryptographic-system-1028\">indistinguishability obfuscation<\/a> as a stopgap. \u00a0We could also seek cryptographic evidence for the security of other kinds of public-key quantum money, like <a href=\"https:\/\/arxiv.org\/abs\/1004.5127\">Farhi et al.&#8217;s<\/a> based on knot invariants.<\/p>\n<p>A paper about all this is on our to-do stack. In the meantime, for further details, see Lecture 9 in my <a href=\"http:\/\/www.scottaaronson.com\/barbados-2016.pdf\">Barbados lecture notes<\/a>.<\/p>\n<hr \/>\n<p><strong>II. 
A De-Merlinization Mistake<\/strong><\/p>\n<p>In my 2006 paper <a href=\"http:\/\/www.scottaaronson.com\/papers\/qmaqpoly.pdf\">QMA\/qpoly\u00a0\u2286 PSPACE\/poly: De-Merlinizing Quantum Protocols<\/a>, the technical core of the complexity result was a new quantum information lemma that I called the &#8220;Quantum OR Bound&#8221; (Lemma 14 in the paper).<\/p>\n<p>Basically, the Quantum OR Bound says that, if we have an unknown quantum state \u03c1, as well as\u00a0a collection of measurements M<sub>1<\/sub>,&#8230;,M<sub>n<\/sub> that we might want to make on \u03c1, then we can distinguish the case that (a) every M<sub>i<\/sub> rejects \u03c1\u00a0with overwhelming probability, from the case that (b) at least one M<sub>i<\/sub> accepts \u03c1\u00a0with high probability. \u00a0And we can do this <em>despite<\/em> having only one copy of \u03c1, and despite the fact that earlier measurements might corrupt \u03c1, thereby compromising the later measurements. \u00a0The intuition is simply that, if the earlier measurements corrupted\u00a0\u03c1 substantially, that could only be because some of them had a decent probability of accepting \u03c1, meaning that at any rate, we&#8217;re not in case (a).<\/p>\n<p>I&#8217;ve since reused the Quantum OR Bound for other problems&#8212;most notably, a proof that private-key quantum money requires either a computational assumption or a huge database maintained by the bank (see Theorem 8.3.1 in my <a href=\"http:\/\/www.scottaaronson.com\/barbados-2016.pdf\">Barbados lecture notes<\/a>).<\/p>\n<p>Alas, Aram Harrow and Ashley Montanaro <a href=\"https:\/\/arxiv.org\/abs\/1607.03236\">recently discovered<\/a> that my proof of the Quantum OR Bound is wrong. 
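The disturbance-implies-acceptance intuition behind the lemma can be checked numerically. Below is a small NumPy sketch (my own construction, not from the paper): a projective measurement that accepts a state with probability &epsilon; perturbs it, conditioned on rejection, by only O(&radic;&epsilon;) in trace distance. The specific state, projector, and the 2&radic;&epsilon; bound used in the assertion are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_pure_state(d=4):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def trace_distance(rho, sigma):
    # Half the sum of absolute eigenvalues of the (Hermitian) difference.
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

psi = random_pure_state()
rho = np.outer(psi, psi.conj())

# Build a unit vector u with |<psi|u>|^2 = 0.01 and measure {P, I-P} with
# P = |u><u|: the acceptance probability on rho is then eps = 0.01.
u = random_pure_state()
u -= (psi.conj() @ u) * psi                  # component orthogonal to psi
u = np.sqrt(0.99) * u / np.linalg.norm(u) + np.sqrt(0.01) * psi
P = np.outer(u, u.conj())

eps = np.real(np.trace(P @ rho))             # acceptance probability, ~0.01
Q = np.eye(4) - P
rho_post = Q @ rho @ Q / np.real(np.trace(Q @ rho))  # state given "reject"

# Small acceptance probability forces small disturbance.
assert trace_distance(rho, rho_post) <= 2 * np.sqrt(eps)
```

Run in reverse, this is the intuition above: if the state moved substantially, some measurement must have had a decent chance of accepting.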
\u00a0It&#8217;s wrong because I neglected the possibility of &#8220;<a href=\"https:\/\/en.wikipedia.org\/wiki\/Quantum_Zeno_effect\">Zeno-like<\/a> behavior,&#8221; in which repeated measurements on a quantum state would gradually shift the state far away from its starting point, without ever having a significant probability of rejecting the state. \u00a0For some reason, I assumed without any adequate argument that choosing the measurements at random, rather than in a predetermined order, would solve that problem.<\/p>\n<p>Now, I might actually\u00a0be\u00a0<em>right<\/em> that randomizing the measurements is enough\u00a0to solve the Zeno problem! \u00a0That remains a plausible\u00a0conjecture, which Harrow and Montanaro could neither confirm nor refute. \u00a0In the meantime, though, Harrow and Montanaro were able to recover my QMA\/qpoly\u2286PSPACE\/poly theorem, and all the other conclusions known to follow from the Quantum OR Bound (including some new ones that they discover), by designing a <em>new<\/em> measurement procedure whose soundness they can prove.<\/p>\n<p>Their new procedure is based on an elegant, obvious-in-retrospect idea that somehow never occurred to me. \u00a0Namely, instead of just applying M<sub>i<\/sub>&#8216;s to\u00a0\u03c1, one can first put a control qubit into an equal superposition of the |0\u3009 and |1\u3009 states, and then apply M<sub>i<\/sub>&#8216;s <em>conditioned<\/em> on the control qubit being in the |1\u3009 state. \u00a0While doing this, one can periodically measure the control qubit in the {|+\u3009,|-\u3009} basis, in order to check directly whether applying the M<sub>i<\/sub>&#8216;s has substantially corrupted \u03c1. \u00a0(If it hasn&#8217;t, one will always get the outcome |+\u3009; if it has, one might get |-\u3009.) \u00a0Substantial corruption, if detected, then tells us that some\u00a0M<sub>i<\/sub>&#8216;s must have had non-negligible probabilities of accepting \u03c1.<\/p>\n<hr \/>\n<p><strong>III. 
Almost As Good As True<\/strong><\/p>\n<p>One lemma that I&#8217;ve used even <em>more<\/em> than the Quantum OR Bound is what I&#8217;ve called the &#8220;Almost As Good As New Lemma,&#8221; and what others in the field have called the &#8220;Gentle Measurement Lemma.&#8221;<\/p>\n<p>I claimed a proof of the AAGANL in my 2004 paper <a href=\"http:\/\/theoryofcomputing.org\/articles\/v001a001\/v001a001.pdf\">Limitations of Quantum Advice and One-Way Communication<\/a>\u00a0(Lemma 2.2 there), and have used the lemma in like half a dozen later papers. \u00a0Alas, when I lectured at Barbados, Sasha Razborov and others discovered that my proof of the AAGANL was missing a crucial step!\u00a0 More concretely, the proof I gave there works for pure states but not for mixed states. \u00a0For mixed states, the trouble\u00a0is that I take a purification of the mixed state&#8212;something that always exists mathematically&#8212;but then illegally assume that the measurement I&#8217;m analyzing acts on the particular purification I&#8217;ve conjured up.<\/p>\n<p>Fortunately, one can easily fix this problem by decomposing the state\u00a0\u03c1 into a mixture of pure states, then applying my earlier argument to each pure state separately, and finally using Cauchy-Schwarz (or just the convexity of the square-root function) to recombine the results.\u00a0 Moreover, this is exactly what other people&#8217;s proofs of the Gentle Measurement Lemma <em>did <\/em>do, though I&#8217;d never noticed it before Barbados&#8212;I just idly wondered why those other proofs took twice as long as mine to do the same work! \u00a0For a correct proof, see Lemma 1.3.1 in the <a href=\"http:\/\/www.scottaaronson.com\/barbados-2016.pdf\">Barbados lecture notes<\/a>.<\/p>\n<hr \/>\n<p><strong>IV. 
Oracle Woes<\/strong><\/p>\n<p>In my 2010 paper\u00a0<a href=\"http:\/\/www.scottaaronson.com\/papers\/bqpph.pdf\">BQP and the Polynomial Hierarchy<\/a>, I claimed to construct oracles A relative to which BQP\u2284BPP<sub>path<\/sub> and BQP\u2284SZK, even while making only partial progress toward\u00a0the big prize, which would&#8217;ve been an oracle relative to which BQP\u2284PH. \u00a0Not only that: I claimed to show that <em>any<\/em> problem with a property called &#8220;almost k-wise independence&#8221;&#8212;one example being the Forrelation (or Fourier Checking) problem that I introduced in that paper&#8212;was neither in BPP<sub>path<\/sub>\u00a0nor in SZK. \u00a0But I showed that Forrelation <em>is<\/em> in BQP, thus yielding the separations.<\/p>\n<p>Alas, this past spring Lijie Chen, who was my superb visiting student from Tsinghua University, realized that my proofs of these particular separations were wrong. \u00a0Not only that, they were wrong <em>because I implicitly substituted a ratio of expectations for an expectation of ratios<\/em> (!). \u00a0Again, it might still be <em>true<\/em> that almost k-wise independent problems can be neither in BPP<sub>path<\/sub>\u00a0nor in SZK: that remains an interesting conjecture, which Lijie was unable to resolve one way or the other. \u00a0(On the other hand, I showed <a href=\"http:\/\/www.scottaaronson.com\/papers\/glnfalse.pdf\">here<\/a> that almost k-wise independent problems <em>can<\/em> be in PH.)<\/p>\n<p>But never fear!\u00a0 In a <a href=\"http:\/\/arxiv.org\/abs\/1605.00619\">recent arXiv preprint<\/a>, Lijie has supplied correct proofs for the\u00a0BQP\u2284BPP<sub>path<\/sub> and BQP\u2284SZK oracle separations&#8212;using the same Forrelation problem that I studied, but additional properties of Forrelation besides its almost k-wise independence. 
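For concreteness, here is a brute-force Python sketch (mine, not from either paper) of the Forrelation quantity itself, &Phi;(f,g) = 2^(-3n/2) &Sigma;_{x,y} f(x)(-1)^(x&middot;y)g(y), which measures how well g correlates with the Boolean Fourier transform of f; the decision problem is to distinguish |&Phi;| large from |&Phi;| tiny. The constant-function sanity check below is just to validate the formula, not an instance of the hard regime.

```python
import itertools

# Brute-force evaluation of the Forrelation quantity
#   Phi(f, g) = 2^{-3n/2} * sum_{x,y} f(x) * (-1)^{x.y} * g(y)
# for Boolean functions f, g : {0,1}^n -> {-1, +1}.
def forrelation(f, g, n):
    total = 0
    for x in itertools.product((0, 1), repeat=n):
        for y in itertools.product((0, 1), repeat=n):
            ip = sum(a * b for a, b in zip(x, y)) % 2
            total += f(x) * (-1) ** ip * g(y)
    return total / 2 ** (1.5 * n)

# Sanity check with f = g = constant +1: the inner sum over x vanishes
# unless y = 0, so Phi = 2^{-3n/2} * 2^n = 2^{-n/2}.
n = 3
assert abs(forrelation(lambda x: 1, lambda y: 1, n) - 2 ** (-n / 2)) < 1e-12
```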
\u00a0Lijie notes that my proofs, had they worked, would also have yielded an oracle relative to which BQP\u2284AM, which would&#8217;ve been a spectacular result, nontrivial progress toward BQP\u2284PH. \u00a0His proofs, by contrast, apply only to worst-case decision problems rather than problems of distinguishing two probability distributions, and therefore don&#8217;t imply anything about BQP vs. AM. \u00a0Anyway, there&#8217;s other cool stuff in his paper too.<\/p>\n<hr \/>\n<p><strong>V. We Needed More Coffee<\/strong><\/p>\n<p>This is one I&#8217;ve <a href=\"https:\/\/scottaaronson.blog\/?p=1818\">already written about<\/a> on this blog, but just in case anyone missed it &#8230; in my, Sean Carroll, and Lauren Ouellette&#8217;s original\u00a0<a href=\"http:\/\/www.scottaaronson.com\/papers\/coffee2.pdf\">draft paper on the coffee automaton<\/a>, the specific rule we discuss <em>doesn&#8217;t<\/em> generate any significant amount of complexity (in the sense of coarse-grained entropy). \u00a0We wrongly thought it did, because of a misinterpretation of our simulation data. \u00a0But as Brent Werness brought to our attention, not only does a corrected simulation not show any complexity bump, one can rigorously <em>prove<\/em> there&#8217;s no complexity bump. \u00a0And we could&#8217;ve realized all this from the beginning, by reflecting that pure random diffusion (e.g., what cream does in coffee when you don&#8217;t stir it with a spoon) <em>doesn&#8217;t<\/em>\u00a0actually produce interesting tendril patterns.<\/p>\n<p>On the other hand, Brent proposed\u00a0a different rule&#8212;one that involves &#8220;shearing&#8221; whole regions of cream and coffee across each other&#8212;that <em>does<\/em> generate significant complexity, basically because of all the long-range correlations it induces. \u00a0And not only do we clearly see this in simulations, but the growth of complexity can be rigorously proven! 
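The no-complexity-bump behavior of pure diffusion is easy to see even in a toy model. Here is a 1-D sketch (my simplification: the actual automaton is 2-D, and the paper's complexity measure is more refined than plain coarse-grained entropy) in which cream particles random-walk and the block-averaged entropy simply climbs from its minimum, with no interesting intermediate structure:

```python
import math
import random

random.seed(42)
N, BLOCK, STEPS = 256, 16, 5000

# All cream (1) on the left, all coffee (0) on the right.
cells = [1] * (N // 2) + [0] * (N // 2)

def coarse_entropy(cells):
    """Shannon entropy (bits) of the block-averaged cream densities."""
    h = 0.0
    for i in range(0, N, BLOCK):
        p = sum(cells[i:i + BLOCK]) / BLOCK
        for q in (p, 1 - p):
            if q > 0:
                h -= q * math.log2(q)
    return h

h0 = coarse_entropy(cells)              # 0.0: perfectly sorted start
for _ in range(STEPS):
    i = random.randrange(N - 1)         # one diffusion step:
    cells[i], cells[i + 1] = cells[i + 1], cells[i]   # swap a random pair
h1 = coarse_entropy(cells)

assert h0 == 0.0 and h1 >= h0           # entropy only climbs from the minimum
```

A shearing rule, by contrast, would move whole blocks of cells past each other at once, creating the long-range correlations that a measure of coarse-grained structure can pick up.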
\u00a0Anyway, we have a long-delayed revision of the paper that will explain all this in more detail, with Brent as well as MIT student Varun Mohan now added as coauthors.<\/p>\n<hr \/>\n<p>If any of my colleagues feel inspired to write up their own &#8220;litanies of mathematical error,&#8221; they&#8217;re welcome to do so in the comments!\u00a0 Just remember: you don&#8217;t earn any epistemic virtue points unless the errors you reveal <em>actually<\/em> embarrass you.\u00a0 No humblebragging about how you once left out a minus sign in your paper that won the Fields Medal.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Two years ago, I wrote a blog post entitled PostBQP Postscripts, owning up to not one but four substantive mathematical errors that I&#8217;d made over the years in my published papers, and which my students and colleagues later brought to my sheepish attention.\u00a0 Fortunately, none of these errors affected the papers&#8217; main messages; they just [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"advanced_seo_description":"","jetpack_seo_html_title":"","jetpack_seo_noindex":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2},"_wpas_customize_per_network":false},"categories":[5,18,4],"tags":[],"class_list":["post-2854","post","type-post","status-publish","format-standard","hentry","category-complexity","category-embarrassing-myself","category-quantum"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts\/2854","t
argetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2854"}],"version-history":[{"count":8,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts\/2854\/revisions"}],"predecessor-version":[{"id":3122,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts\/2854\/revisions\/3122"}],"wp:attachment":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2854"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2854"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2854"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}