{"id":3512,"date":"2017-10-22T15:20:03","date_gmt":"2017-10-22T20:20:03","guid":{"rendered":"https:\/\/scottaaronson.blog\/?p=3512"},"modified":"2017-10-23T20:27:32","modified_gmt":"2017-10-24T01:27:32","slug":"2n-is-exponential-but-250-is-finite","status":"publish","type":"post","link":"https:\/\/scottaaronson.blog\/?p=3512","title":{"rendered":"2^n is exponential, but 2^50 is finite"},"content":{"rendered":"<p><b><font color=\"red\">Unrelated Update (Oct. 23)<\/font><\/b> I still feel bad that there was no time for public questions at my &#8220;Theoretically Speaking&#8221; talk in Berkeley, and also that the lecture hall was too small to accomodate a large fraction of the people who showed up.  So, if you&#8217;re someone who came there wanting to ask me something, go ahead and ask in the comments of this post.<\/p>\n<hr>\n<p>During my whirlwind tour of the Bay Area, questions started pouring in about a preprint from a group mostly at IBM Yorktown Heights, entitled <a href=\"https:\/\/arxiv.org\/abs\/1710.05867\">Breaking the 49-Qubit Barrier in the Simulation of Quantum Circuits<\/a>.\u00a0 In particular, does this paper make a mockery of everything the upcoming quantum supremacy experiments will try to achieve, and all the theorems about them that we&#8217;ve proved?<\/p>\n<p>Following my usual practice, let me paste the abstract here, so that we have the authors&#8217; words in front of us, rather than what a friend of a friend said a popular article reported\u00a0<em>might<\/em>\u00a0have been in the paper.<\/p>\n<p style=\"padding-left: 30px;\">With the current rate of progress in quantum computing technologies, 50-qubit systems will soon become a reality.\u00a0 To assess, refine and advance the design and control of these devices, one needs a means to test and evaluate their fidelity. 
This in turn requires the capability of computing ideal quantum state amplitudes for devices of such sizes and larger.\u00a0 In this study, we present a new approach for this task that significantly extends the boundaries of what can be classically computed.\u00a0 We demonstrate our method by presenting results obtained from a calculation of the complete set of output amplitudes of a universal random circuit with depth 27 in a 2D lattice of 7 \u00d7 7 qubits.\u00a0 We further present results obtained by calculating an arbitrarily selected slice of 2<sup>37<\/sup> amplitudes of a universal random circuit with depth 23 in a 2D lattice of 8\u00d77 qubits.\u00a0 Such calculations were previously thought to be impossible due to impracticable memory requirements. Using the methods presented in this paper, the above simulations required 4.5 and 3.0 TB of memory, respectively, to store calculations, which is well within the limits of existing classical computers.<\/p>\n<p>This is an excellent paper, which sets a new record for the classical simulation of generic quantum circuits; I congratulate the authors for it.\u00a0 Now, though, I want you to take a deep breath and repeat after me:<\/p>\n<p><strong>This paper does not undercut the rationale for quantum supremacy experiments.<\/strong>\u00a0 The truth, ironically, is almost the opposite: it being possible to simulate 49-qubit circuits using a classical computer is a\u00a0<em>precondition<\/em>\u00a0for Google&#8217;s planned quantum supremacy experiment, because it&#8217;s the only way we know to check such an experiment&#8217;s results!\u00a0 The goal, with sampling-based quantum supremacy, was always to target the &#8220;sweet spot,&#8221; which we estimated at around 50 qubits, where classical simulation is still possible, but it&#8217;s clearly orders of magnitude more expensive than doing the experiment itself.\u00a0 If you like, the goal is to get as far as you can up the mountain of exponentiality, conditioned on 
people still being able to see you from the base.\u00a0 Why?\u00a0 Because you can.\u00a0 Because it&#8217;s there.\u00a0 Because it challenges those who think quantum computing will never scale: <i>explain this, punks!\u00a0\u00a0<\/i>But there&#8217;s no point unless you can verify the result.<\/p>\n<p>Related to that, <strong>the paper does not refute any prediction I made<\/strong>, by doing anything I claimed was impossible.\u00a0 On the contrary (if you must know), the paper confirms something that I predicted <em>would<\/em> be possible. \u00a0People said: &#8220;40 qubits is the practical limit of what you can simulate, so there&#8217;s no point in Google or anyone else doing a supremacy experiment with 49 qubits, since they can never verify the results.&#8221; \u00a0I would shrug and say something like: &#8220;eh, if you can do 40 qubits, then I&#8217;m sure you can do 50.\u00a0 It&#8217;s only a thousand times harder!&#8221;<\/p>\n<p>So, how does the paper get up to 50 qubits?\u00a0 A lot of computing power and a lot of clever tricks, one of which (the irony thickens&#8230;) came from a paper that I recently coauthored with Lijie Chen:\u00a0<a href=\"https:\/\/www.scottaaronson.com\/papers\/quantumsupre.pdf\">Complexity-Theoretic Foundations of Quantum Supremacy Experiments<\/a>.\u00a0 Lijie and I were interested in the question: what&#8217;s the best way to simulate a quantum circuit with n qubits and m gates?\u00a0 We noticed that there&#8217;s a time\/space tradeoff here: you could just store the entire amplitude vector in memory and update, which would take exp(n) memory but also &#8220;only&#8221; about exp(n) time.\u00a0 Or you could compute the amplitudes you cared about via Feynman sums (as in the proof of BQP\u2286PSPACE), which takes only linear memory, but exp(m) time.\u00a0 If you imagine, let&#8217;s say, n=50 and m=1000, then exp(n) might be practical if you&#8217;re IBM or Google, but exp(m) is certainly not.<\/p>\n<p>So then we raised the 
question: could one get the best of both worlds?\u00a0 That is, could one simulate such a quantum circuit using both linear memory and exp(n) time?\u00a0 And we showed that this is <em>almost<\/em> possible: we gave an algorithm that uses linear memory and d<sup>O(n)<\/sup> time, where d is the circuit depth.\u00a0 Furthermore, the more memory it has available, the faster our algorithm will run&#8212;until, in the limit of exponential memory, it just <em>becomes<\/em>\u00a0the &#8220;store the whole amplitude vector&#8221; algorithm mentioned above.\u00a0 I&#8217;m not sure why this algorithm wasn&#8217;t discovered earlier, especially since it basically just amounts to <a href=\"https:\/\/en.wikipedia.org\/wiki\/Savitch%27s_theorem\">Savitch&#8217;s Theorem<\/a>\u00a0from complexity theory.\u00a0 In any case, though, the IBM group used this idea among others to take full advantage of the RAM it had available.<\/p>\n<p>Let me make one final remark: this little episode\u00a0<strong>perfectly illustrates why theoretical computer scientists like to talk about polynomial vs. 
exponential rather than specific numbers.<\/strong>\u00a0 If you keep your eyes on the asymptotic fundamentals, rather than every factor of 10 or 1000, then you&#8217;re not constantly shocked by events, like a dog turning its head for every passing squirrel.\u00a0 Before you could simulate 40 qubits, now you can simulate 50.\u00a0 Maybe with more cleverness you could get to 60 or even 70.\u00a0 But &#8230; dude.\u00a0 The problem is still exponential time.<\/p>\n<p>We saw the same &#8220;SQUIRREL!\u00a0 SQUIRREL!&#8221; reaction with the people who claimed that the <a href=\"https:\/\/arxiv.org\/abs\/1706.01260\">wonderful paper by Clifford and Clifford<\/a> had undercut the rationale for BosonSampling experiments, by showing how to solve the problem in &#8220;merely&#8221; ~2<sup>n<\/sup> time rather than ~m<sup>n<\/sup>, where n is the number of photons and m is the number of modes.\u00a0 Of course, Arkhipov and I had never <em>claimed<\/em> more than\u00a0~2<sup>n<\/sup>\u00a0hardness for the problem, and Clifford and Clifford&#8217;s important result had <em>justified<\/em>\u00a0our conservatism on that point, but, y&#8217;know &#8230; SQUIRREL!<\/p>\n<p>More broadly, it seems to me that this dynamic constantly occurs in the applied cryptography world.\u00a0 OMIGOD a 128-bit hash function has been broken!\u00a0 Big news!\u00a0 OMIGOD a new, harder hash function has been designed!\u00a0 Bigger news!\u00a0 OMIGOD OMIGOD OMIGOD the new one was broken too!!\u00a0 All of it fully predictable once you realize that we&#8217;re on the shores of an exponentially hard problem, and for some reason, refusing to go far enough out into the sea (i.e., pick large enough security parameters) that none of this back-and-forth would happen.<\/p>\n<p>I apologize, sincerely, if I come off as too testy in this post.\u00a0 No doubt it&#8217;s entirely the fault of a cognitive defect on my end, wherein ten separate people asking me about something get treated by my brain like a single 
person who still doesn&#8217;t get it even after I&#8217;ve explained it ten times.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Unrelated Update (Oct. 23) I still feel bad that there was no time for public questions at my &#8220;Theoretically Speaking&#8221; talk in Berkeley, and also that the lecture hall was too small to accommodate a large fraction of the people who showed up. So, if you&#8217;re someone who came there wanting to ask me something, [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"advanced_seo_description":"","jetpack_seo_html_title":"","jetpack_seo_noindex":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2},"_wpas_customize_per_network":false},"categories":[31,5,4],"tags":[],"class_list":["post-3512","post","type-post","status-publish","format-standard","hentry","category-announcements","category-complexity","category-quantum"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts\/3512","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3512"}],"version-history":[{"count":5,"href":"https:\/\/scottaaronson.blog\/index.php?rest_ro
ute=\/wp\/v2\/posts\/3512\/revisions"}],"predecessor-version":[{"id":3520,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts\/3512\/revisions\/3520"}],"wp:attachment":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3512"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3512"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3512"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}