<h1>Google, D-Wave, and the case of the factor-10^8 speedup for WHAT?</h1>
<p><em>December 9, 2015 · <a href="https://scottaaronson.blog/?p=2555">https://scottaaronson.blog/?p=2555</a></em></p>
<p><b><span style="color: red;">Update (Dec. 16):</span></b> If you're still following this, please check out <a href="https://scottaaronson.blog/?p=2555#comment-974407">an important comment by Alex Selby</a>, the discoverer of Selby's algorithm, which I discussed in the post. Selby queries a few points in the Google paper: among other things, he disagrees with their explanation of why his classical algorithm works so well on D-Wave's Chimera graph (and with their prediction that it should stop working for larger graphs), and he explains that Karmarkar-Karp is not the best known classical algorithm for the Number Partitioning problem. He also questions whether simulated annealing is the benchmark against which everything should be compared (on the grounds that "everything else requires fine-tuning"), pointing out that SA itself typically requires lots of tuning to get it to work well.</p>
<p><b><span style="color: red;">Update (Dec. 11):</span></b> MIT News now has a <a href="http://news.mit.edu/2015/3q-scott-aaronson-google-quantum-computing-paper-1211">Q&amp;A with me</a> about the new Google paper. I'm really happy with how the Q&amp;A turned out; people who had trouble understanding this blog post might find the Q&amp;A easier. Thanks very much to Larry Hardesty for arranging it.</p>
<p>Meanwhile, I feel good that there seems to have been actual <i>progress</i> in the D-Wave debate!
In previous rounds, I had disagreed vehemently with some of my MIT colleagues (like Ed Farhi and Peter Shor) about the best way to respond to D-Wave's announcements. Today, though, at our weekly group meeting, there was almost no daylight between any of us. Partly, I'm sure, it's that I've learned to express myself better; partly it's that the "trigger" this time was a serious research paper by a group separate from D-Wave, rather than some trash-talking statement from Geordie Rose. But mostly it's that, thanks to the Google group's careful investigations, this time pretty much anyone who knows anything <i>agrees about all the basic facts</i>, as I laid them out in this blog post and in the Q&amp;A. All that remains are some small differences in emotional attitude: e.g., how much of your time do you want to spend on a speculative, "dirty" approach to quantum computing (which is far ahead of everyone else in terms of engineering and systems integration, but which still shows no signs of an asymptotic speedup over the best classical algorithms, which is pretty unsurprising given theoretical expectations), at a time when the "clean" approaches <i>might</i> finally be closing in on the long-sought asymptotic quantum speedup?</p>
<p><b><span style="color: red;">Another Update:</span></b> Daniel Lidar was nice enough to email me an important observation, and to give me permission to share it here. Namely, the D-Wave 2X has a <em>minimum</em> annealing time of 20 microseconds. Because of this, the observed running times for small instance sizes are artificially forced upward, making the <em>growth rate</em> in the machine's running time look milder than it really is. (Regular readers might remember that exactly the same issue plagued previous D-Wave vs. classical performance comparisons.)
Correcting this would certainly decrease the D-Wave 2X's predicted speedup over simulated annealing, in extrapolations to larger numbers of qubits than have been tested so far (although Daniel doesn't know by how much). Daniel stresses that he's not criticizing the Google paper, which explicitly mentions the minimum annealing time—just calling attention to something that deserves emphasis.</p>
<hr />
<p>In retrospect, I should've been suspicious when more than a year went by with no major D-Wave announcements that everyone wanted me to react to immediately. Could it really be that this debate was over—or not "over," but where it always should've been, in the hands of experts who might disagree vehemently but are always careful to qualify speedup claims—thereby freeing up the erstwhile Chief D-Wave Skeptic for more """rewarding""" projects, like charting a middle path through the Internet's endless social justice wars?</p>
<p>Nope.</p>
<p>As many of you will have seen by now, on Monday a team at Google <a href="http://arxiv.org/abs/1512.02206">put out a major paper</a> reporting new experiments on the D-Wave 2X machine. (See also <a href="http://googleresearch.blogspot.ca/2015/12/when-can-quantum-annealing-win.html">Hartmut Neven's blog post</a> about this.)
The predictable popularized version of the results—see for example <a href="http://www.techtimes.com/articles/114614/20151209/googles-d-wave-2x-quantum-computer-100-million-times-faster-than-regular-computer-chip.htm">here</a> and <a href="http://www.theregister.co.uk/2015/12/09/googles_quantum_computer/">here</a>—is that the D-Wave 2X has now demonstrated a factor-of-100-million speedup over standard classical chips, thereby conclusively putting to rest the question of whether the device is "truly a quantum computer." In the comment section of one of my previous posts, D-Wave investor Steve Jurvetson even tried to <a href="https://scottaaronson.blog/?p=1400#comment-962456">erect a victory stele</a>, by quoting Karl Popper about falsification.</p>
<p>In situations like this, the first thing I do is turn to Matthias Troyer, who's arguably the planet's most balanced, knowledgeable, trustworthy interpreter of quantum annealing experiments. Happily, in collaboration with Ilia Zintchenko and Ethan Brown, Matthias was generous enough to write a <a href="http://www.scottaaronson.com/troyer.pdf">clear 3-page document</a> putting the new results into context, and to give me permission to share it on this blog. From a purely scientific standpoint, my post could end right here, with a link to their document.</p>
<p>Then again, from a purely scientific standpoint, the post could've ended even earlier, with the link to the <a href="http://arxiv.org/abs/1512.02206">Google paper itself</a>! For this is not a case where the paper hides some crucial issue that the skeptics then need to ferret out. On the contrary, the paper's authors include some of the most careful people in the business, and the paper explains the caveats as clearly as one could ask.
In some sense, then, all that's left for me or Matthias to do is to <em>tell you what you'd learn if you read the paper!</em></p>
<p>So, OK, has the D-Wave 2X demonstrated a factor-10<sup>8</sup> speedup or not? Here's the shortest answer that I think is non-misleading:</p>
<p style="padding-left: 30px;"><strong>Yes, there's a factor-10<sup>8</sup> speedup that looks clearly asymptotic in nature, and there's <em>also</em> a factor-10<sup>8</sup> speedup over Quantum Monte Carlo. But the asymptotic speedup is only if you compare against simulated annealing, while the speedup over Quantum Monte Carlo is only <em>constant-factor</em>, not asymptotic. And in any case, both speedups disappear if you compare against other classical algorithms, like that of Alex Selby. Also, the constant-factor speedup probably has less to do with quantum mechanics than with the fact that D-Wave built extremely specialized hardware, which was then compared against a classical chip on the problem of simulating the specialized hardware itself (i.e., on Ising spin minimization instances with the topology of D-Wave's Chimera graph). Thus, while there's been genuine, interesting progress, it remains uncertain whether D-Wave's approach will lead to speedups over the best known classical algorithms, let alone to speedups over the best known classical algorithms that are also asymptotic or also of practical importance. Indeed, all of these points also remain uncertain for quantum annealing as a whole.</strong></p>
<p>To expand a bit, there are really three separate results in the Google paper:</p>
<ol>
<li>The authors create Chimera instances with tall, thin energy barriers blocking the way to the global minimum, by exploiting the 8-qubit "clusters" that play such a central role in the Chimera graph.
In line with a <a href="http://arxiv.org/abs/quant-ph/0201031">2002 theoretical prediction</a> by Farhi, Goldstone, and Gutmann (a prediction we've often discussed on this blog), they then find that on these special instances, quantum annealing reaches the global minimum exponentially faster than classical simulated annealing, and that the D-Wave machine realizes this advantage. As far as I'm concerned, this completely nails down the case for computationally-relevant collective quantum tunneling in the D-Wave machine, <em>at least within the 8-qubit clusters</em>. On the other hand, the authors point out that there are other classical algorithms, like that of <a href="http://arxiv.org/abs/1409.3934">Selby</a> (building on Hamze and de Freitas), which group together the 8-bit clusters into 256-valued mega-variables, and thereby get rid of the energy barrier that kills simulated annealing. These classical algorithms are found empirically to outperform the D-Wave machine. The authors also match the D-Wave machine's asymptotic performance (though not the leading constant) using Quantum Monte Carlo, which (despite its name) is a <em>classical</em> algorithm often used to find quantum-mechanical ground states.</li>
<li>The authors make a case that the ability to tunnel past tall, thin energy barriers—i.e., the central advantage that quantum annealing has been shown to have over classical annealing—might be relevant to at least some real-world optimization problems. They do this by studying a classic NP-hard problem called Number Partitioning, where you're given a list of N positive integers, and your goal is to partition the integers into two subsets whose sums differ from each other by as little as possible.
Through numerical studies on classical computers, they find that quantum annealing (in the ideal case) and Quantum Monte Carlo should <em>both</em> outperform simulated annealing, by roughly equal amounts, on random instances of Number Partitioning. Note that this part of the paper doesn't involve any experiments on the D-Wave machine itself, so we don't know whether calibration errors, encoding loss, etc. will kill the theoretical advantage over simulated annealing. But even if not, this still wouldn't yield a "true quantum speedup," since (again) Quantum Monte Carlo is a perfectly good <em>classical</em> algorithm, whose asymptotics match those of quantum annealing on these instances.</li>
<li>Finally, on the special Chimera instances with the tall, thin energy barriers, the authors find that the D-Wave 2X reaches the global optimum about 10<sup>8</sup> times faster than Quantum Monte Carlo running on a single-core classical computer. But, extremely interestingly, they also find that this speedup does <em>not</em> grow with problem size; instead it simply saturates at ~10<sup>8</sup>. In other words, this is a constant-factor speedup rather than an asymptotic one. Now, obviously, solving a problem "only" 100 million times faster (rather than asymptotically faster) can still have practical value! But it's crucial to remember that this constant-factor speedup is only observed for the Chimera instances—or in essence, for "the problem of simulating the D-Wave machine itself"! If you wanted to solve something of practical importance, you'd first need to embed it into the Chimera graph, and it remains unclear whether any of the constant-factor speedup would survive that embedding.
In any case, while the paper isn't explicit about this, I gather that the constant-factor speedup disappears when one compares against (e.g.) the Selby algorithm, rather than against QMC.</li>
</ol>
<p>So then, what do I say to Steve Jurvetson? I say—happily, not grudgingly!—that the new Google paper provides the clearest demonstration so far of a D-Wave device's capabilities. But then I remind him of all the worries the QC researchers had from the beginning about D-Wave's whole approach: the absence of error-correction; the restriction to finite-temperature quantum annealing (moreover, using "stoquastic Hamiltonians"), for which we lack clear evidence for a quantum speedup; the rush for more qubits rather than better qubits. And I say: not only do all these worries remain in force, <em>they've been thrown into sharper relief than ever</em>, now that many of the side issues have been dealt with. The D-Wave 2X is a remarkable piece of engineering. If it's <em>still</em> not showing an asymptotic speedup over the best known classical algorithms—as the new Google paper clearly explains that it isn't—then the reasons are not boring or trivial ones. Rather, they seem related to fundamental design choices that D-Wave made over a decade ago.</p>
<p>The obvious question now is: can D-Wave improve its design, in order to get a speedup that's asymptotic, <em>and</em> that holds against all classical algorithms (including QMC and Selby's algorithm), <em>and</em> that survives the encoding of a "real-world" problem into the Chimera graph? Well, maybe or maybe not.
The Google paper returns again and again to the subject of planned future improvements to the machine, and how they <em>might</em> clear the path to a "true" quantum speedup. Roughly speaking, if we rule out radical alterations to D-Wave's approach, there are four main things one would want to try, to see if they helped:</p>
<ol>
<li>Lower temperatures (and thus, longer qubit lifetimes, and smaller spectral gaps that can be safely gotten across without jumping up to an excited state).</li>
<li>Better calibration of the qubits and couplings (and thus, the ability to encode a problem of interest, like the Number Partitioning problem mentioned earlier, to greater precision).</li>
<li>The ability to apply "non-stoquastic" Hamiltonians. (D-Wave's existing machines are all limited to <em>stoquastic Hamiltonians</em>, defined as Hamiltonians all of whose off-diagonal entries are real and non-positive. While stoquastic Hamiltonians are easier from an engineering standpoint, they're also the easiest kind to simulate classically, using algorithms like QMC—so much so that there's no consensus on whether it's even theoretically <em>possible</em> to get a true quantum speedup using stoquastic quantum annealing. This is a subject of <a href="http://arxiv.org/abs/1302.5733">active research</a>.)</li>
<li>Better connectivity among the qubits (thereby reducing the huge loss that comes from taking problems of practical interest and encoding them in the Chimera graph).</li>
</ol>
<p>(Note that "more qubits" is <em>not</em> on this list: if a "true quantum speedup" is possible at all with D-Wave's approach, then the 1000+ qubits that they already have seem like more than enough to notice it.)</p>
<p>Anyway, these are all, of course, things D-Wave knows about and will be working on in the near future.
As well they should! But to repeat: even if D-Wave makes all four of these improvements, we still have no idea whether they'll see a true, asymptotic, Selby-resistant, encoding-resistant quantum speedup. We just can't say for sure that they <em>won't</em> see one.</p>
<p>In the meantime, while it's sometimes easy to forget during blog-discussions, the field of experimental quantum computing is a proper superset of D-Wave, and things have gotten tremendously more exciting on many fronts within the last year or two. In particular, the group of John Martinis at Google (Martinis is one of the coauthors of the Google paper) now has superconducting qubits with <em>orders of magnitude</em> better coherence times than D-Wave's qubits, and has demonstrated rudimentary quantum error-correction on 9 of them. They're now talking about scaling up to ~40 super-high-quality qubits with controllable couplings—not in the remote future, but in, like, <em>the next few years</em>. If and when they achieve that, I'm extremely optimistic that they'll be able to show a clear quantum advantage for <em>something</em> (e.g., some BosonSampling-like sampling task), if not necessarily something of practical importance. IBM Yorktown Heights, which I visited last week, is also working (with <a href="http://spectrum.ieee.org/nanoclast/computing/hardware/spy-agency-bets-on-ibm-for-universal-quantum-computing">IARPA funding</a>) on integrating superconducting qubits with many-microsecond coherence times. Meanwhile, some of the top ion-trap groups, like Chris Monroe's at the University of Maryland, are talking similarly big about what they expect to be able to do soon.
The "academic approach" to QC—which one could summarize as "understand the qubits, control them, keep them alive, and <em>only then</em> try to scale them up"—is finally bearing some juicy fruit.</p>
<p>(At last week's IBM conference, there was plenty of D-Wave discussion; how could there not be? But the physicists in attendance—I was almost the only computer scientist there—seemed much more interested in approaches that aim for longer-lasting qubits, fault-tolerance, and a clear asymptotic speedup.)</p>
<p>I still have no idea when and if we'll have a practical, universal, fault-tolerant QC, capable of factoring 10,000-digit numbers and so on. But it's now looking like only a matter of years until <em>Gil Kalai, and the other quantum computing skeptics, will be forced to admit they were wrong</em>—which was always the main application I cared about anyway!</p>
<p>So yeah, it's a heady time for QC, with many things coming together faster than I'd expected (then again, it was always my personal rule to err on the side of caution, and thereby avoid contributing to runaway spirals of hype). As we stagger ahead into this new world of computing—bravely, coherently, hopefully non-stoquastically, possibly fault-tolerantly—my goal on this blog will remain what it's been for a decade: not to prognosticate, not to pick winners, but merely to try to understand and explain what has and hasn't <em>already</em> been shown.</p>
<hr />
<p><b><span style="color: red;">Update (Dec.
10):</span></b> Some readers might be interested in an <a href="https://scottaaronson.blog/?p=2555&amp;cpage=1#comment-964125">economic analysis of the D-Wave speedup</a> by commenter Carl Shulman.</p>
<p><b><span style="color: red;">Another Update:</span></b> Since apparently some people didn't understand this post, here are some comments from a <a href="https://news.ycombinator.com/item?id=10707442">Y-Combinator thread about the post</a> that might be helpful:</p>
<blockquote><p>(1) [T]he conclusion of the Google paper is that we have probable evidence that with enough qubits and a big enough problem it will be faster for a very specific problem compared to a non-optimal classical algorithm (we have ones that are for sure better).</p>
<p>This probably sounds like a somewhat useless result (quantum computer beats B-team classical algorithm), but it is in fact interesting because D-Wave's computers are designed to perform quantum annealing and they are comparing it to simulated annealing (the somewhat analogous classical algorithm). However they only found evidence of a constant (i.e. one that 4000 qubits wouldn't help with) speed up (though a large one) compared to a somewhat better algorithm (Quantum Monte Carlo, which is ironically not a quantum algorithm), and they still can't beat an even better classical algorithm (Selby's) at all, even in a way that won't scale.</p>
<p>Scott's central thesis is that although it is possible there could be a turning point past 2000 qubits where the D-Wave will beat our best classical alternative, none of the data collected so far suggests that. So it's possible that a 4000 qubit D-Wave machine will exhibit this trend, but there is no evidence of it (yet) from examining a 2000 qubit machine.
Scott's central gripe with D-Wave's approach is that they don't have any even pie-in-the-sky theoretical reason to expect this to happen, and scaling up quantum computers without breaking the entire process is much harder than for classical computers, so making them <i>even bigger</i> doesn't seem like a solution.</p>
<p>(2) DWave machines are NOT gate quantum computers; they call their machine quantum annealing machines. It is not known what complexity class of problems can be solved efficiently by quantum annealing machines, or whether that class is equivalent to what classical machines can solve.</p>
<p>The result shows that the DWave machine is asymptotically faster than the Simulated Annealing algorithm (yay!), which suggests that it is executing the Quantum Annealing algorithm. However, the paper also explicitly states that this does not mean that the DWave machine is exhibiting a 'quantum speedup'. To do this, they would need to show it to outperform the best known classical algorithm, which, as the paper acknowledges, it does not.</p>
<p>What the paper <i>does</i> seem to be showing is that the machine in question is actually fundamentally quantum in nature; it's just not clear yet that the type of quantum computer it is is an improvement over classical ones.</p>
<p>(3) [I]t isn't called out in the linked blog since by now Scott probably considers it basic background information, but D-Wave only solves a very particular problem, and it is both not entirely clear that it has a superior solution to that problem than a classical algorithm can obtain <i>and</i> it is not clear that encoding real problems into that problem will not end up costing you all of the gains itself. Really pragmatic applications are still a ways into the future.
It's hard to imagine what they might be when we're still so early in the process, and still have no good idea what either the practical or theoretical limits are.</p>
<p>(4) The popular perception of quantum computers as "doing things in parallel" is very misleading. A quantum computer lets you perform computation on a superposed state while maintaining that superposition. But that only helps if the structure of the problem lets you somehow "cancel out" the incorrect results, leaving you with the single correct one. <strong>[There's hope for the world! –SA]</strong></p></blockquote>
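<p>For readers who want to play with the Number Partitioning problem discussed above, here is a small self-contained Python sketch (my own illustration, not code from the Google paper or from Selby): it compares an exact brute-force search against the Karmarkar-Karp differencing heuristic, the classical baseline mentioned in the Dec. 16 update, on a random instance. The instance size and seed are arbitrary choices for the demo.</p>

```python
import heapq
import random
from itertools import combinations

def best_partition_diff(nums):
    """Exact answer by brute force: minimize |sum(S) - sum(complement)|
    over all subsets S.  Only feasible for small N."""
    total = sum(nums)
    best = total
    # Subsets of size up to N//2 suffice, since the complement covers the rest.
    for r in range(len(nums) // 2 + 1):
        for subset in combinations(nums, r):
            best = min(best, abs(total - 2 * sum(subset)))
    return best

def karmarkar_karp_diff(nums):
    """Karmarkar-Karp differencing heuristic: repeatedly commit to putting
    the two largest numbers on opposite sides, replacing them by their
    difference.  Fast and often very good, but not guaranteed optimal."""
    heap = [-x for x in nums]  # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a, b = -heapq.heappop(heap), -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))
    return -heap[0] if heap else 0

random.seed(2015)
instance = [random.randint(1, 10**6) for _ in range(20)]
print("exact optimal difference: ", best_partition_diff(instance))
print("Karmarkar-Karp difference:", karmarkar_karp_diff(instance))
```

<p>One hedged way to connect this back to the post: simulated annealing, quantum annealing, and QMC all search the same space of 2<sup>N</sup> partitions; the question the paper's numerical studies address is not which method wins on one small instance, but how the time to reach the true optimum scales as N grows.</p>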