{"id":9183,"date":"2025-09-27T18:55:32","date_gmt":"2025-09-27T23:55:32","guid":{"rendered":"https:\/\/scottaaronson.blog\/?p=9183"},"modified":"2025-09-29T14:03:00","modified_gmt":"2025-09-29T19:03:00","slug":"the-qma-singularity","status":"publish","type":"post","link":"https:\/\/scottaaronson.blog\/?p=9183","title":{"rendered":"The QMA Singularity"},"content":{"rendered":"\n<p><strong><mark style=\"background-color:rgba(0, 0, 0, 0)\" class=\"has-inline-color has-vivid-red-color\">Update (Sep. 29):<\/mark><\/strong> Since this post has now gone semi-viral on X, Hacker News, etc., with people arguing about how trivial or nontrivial was GPT5&#8217;s &#8220;discovery,&#8221; it seems worthwhile to say something that was implicit in the post.<\/p>\n\n\n\n<p>Namely, GPT5-Thinking&#8217;s suggestion of a function to use &#8220;should have&#8221; been obvious to us.  It <em>would have<\/em> been obvious to us had we known more, or had we spent more time studying the literature or asking experts.<\/p>\n\n\n\n<p>The point is, anyone engaged in mathematical research knows that an AI that can &#8220;merely&#8221; fill in the insights that &#8220;should&#8217;ve been&#8221; obvious to you is a really huge freaking deal!  It speeds up the actual discovery process, as opposed to the process of writing LaTeX or preparing the bibliography or whatever.  This post gave one tiny example of what I&#8217;m sure will soon be thousands.<\/p>\n\n\n\n<p>I should also add that, since this post went up, a commenter named Phillip Harris <a href=\"https:\/\/scottaaronson.blog\/?p=9183#comment-2016680\">proposed a better function to use<\/a> than GPT-5&#8217;s: det(I-E) rather than Tr[(I-E)<sup>-1<\/sup>].  While we&#8217;re still checking details, not only do we think this works, we think it simplifies our argument and solves one of our open problems.  
So it seems human supremacy has been restored, at least for now!<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p>A couple days ago, Freek Witteveen of CWI and I posted a paper to the arXiv called <a href=\"https:\/\/arxiv.org\/abs\/2509.21131\">&#8220;Limits to black-box amplification in QMA.&#8221;<\/a>  Let me share the abstract:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>We study the limitations of black-box amplification in the quantum complexity class QMA. Amplification is known to boost any inverse-polynomial gap between completeness and soundness to exponentially small error, and a recent result (Jeffery and Witteveen, 2025) shows that completeness can in fact be amplified to be doubly exponentially close to 1. We prove that this is optimal for black-box procedures: we provide a quantum oracle relative to which no QMA verification procedure using polynomial resources can achieve completeness closer to 1 than doubly exponential, or a soundness which is super-exponentially small. This is proven by using techniques from complex approximation theory, to make the oracle separation from (Aaronson, 2008), between QMA and QMA with perfect completeness, quantitative.<\/p>\n<\/blockquote>\n\n\n\n<p>You can also <a href=\"https:\/\/www.scottaaronson.com\/talks\/two-oracles.pptx\">check out my PowerPoint slides here<\/a>.<\/p>\n\n\n\n<p>To explain the context: <a href=\"https:\/\/en.wikipedia.org\/wiki\/QMA\">QMA<\/a>, or Quantum Merlin Arthur, is the canonical quantum version of NP. It&#8217;s the class of all decision problems for which, if the answer is &#8220;yes,&#8221; then Merlin can send Arthur a quantum witness state that causes him to accept with probability at least 2\/3 (after a polynomial-time quantum computation), while if the answer is &#8220;no,&#8221; then regardless of what witness Merlin sends, Arthur accepts with probability at most 1\/3. 
Here, as usual in complexity theory, the constants 2\/3 and 1\/3 are just conventions, which can be replaced (for example) by 1-2<sup>-n<\/sup> and 2<sup>-n<\/sup> using amplification.<\/p>\n\n\n\n<p>A longstanding open problem about QMA&#8212;not the biggest problem, but arguably the most annoying&#8212;has been whether the 2\/3 can be replaced by 1, as it can be for classical MA for example. In other words, does QMA = QMA<sub>1<\/sub>, where QMA<sub>1<\/sub> is the subclass of QMA that admits protocols with &#8220;perfect completeness&#8221;? In 2008, I used real analysis to <a href=\"https:\/\/arxiv.org\/abs\/0806.0450\">show that there&#8217;s a quantum oracle<\/a> relative to which QMA \u2260 QMA<sub>1<\/sub>, which means that any proof of QMA = QMA<sub>1<\/sub> would need to use &#8220;quantumly nonrelativizing techniques&#8221; (not at all an insuperable barrier, but at least we learned <em>something<\/em> about why the problem is nontrivial).<\/p>\n\n\n\n<p>Then came a bombshell: in June, <a href=\"https:\/\/www.cwi.nl\/en\/people\/freek-witteveen\/\">Freek Witteveen<\/a> and longtime friend-of-the-blog <a href=\"https:\/\/homepages.cwi.nl\/~jeffery\/\">Stacey Jeffery<\/a> released a <a href=\"https:\/\/arxiv.org\/abs\/2506.15551\">paper<\/a> showing that any QMA protocol can be amplified, in a black-box manner, to have completeness error that&#8217;s <em>doubly<\/em> exponentially small, 1\/exp(exp(n)). They did this via a method I never would&#8217;ve thought of, wherein a probability of acceptance is encoded via the amplitudes of a quantum state that decrease in a geometric series. QMA, it turned out, was an old friend that still had surprises up its sleeve after a quarter-century.<\/p>\n\n\n\n<p>In August, we had Freek speak about this breakthrough by Zoom in our quantum group meeting at UT Austin.  
Later that day, I asked Freek whether their new protocol was the <em>best<\/em> you could hope to do with black-box techniques, or whether for example one could amplify the completeness error to be <em>triply<\/em> exponentially small, 1\/exp(exp(exp(n))).  About a week later, Freek and I had a full proof written down that, using black-box techniques, doubly-exponentially small completeness error is the best you can do.  In other words: we showed that, when one makes my 2008 QMA \u2260 QMA<sub>1<\/sub> quantum oracle separation quantitative, one gets a lower bound that <em>precisely<\/em> matches Freek and Stacey&#8217;s protocol.<\/p>\n\n\n\n<p>All this will, I hope, interest and excite aficionados of quantum complexity classes, while others might have very little reason to care.<\/p>\n\n\n\n<p>But here&#8217;s a reason why other people might care. This is the first paper I&#8217;ve ever put out for which a key technical step in the proof of the main result came from AI&#8212;specifically, from GPT5-Thinking. Here was the situation: we had an N\u00d7N Hermitian matrix E(\u03b8) (where, say, N=2<sup>n<\/sup>), each of whose entries was a poly(n)-degree trigonometric polynomial in a real parameter \u03b8. We needed to study the largest eigenvalue of E(\u03b8), as \u03b8 varied from 0 to 1, to show that this \u03bb<sub>max<\/sub>(E(\u03b8)) couldn&#8217;t start out close to 0 but then spend a long time &#8220;hanging out&#8221; ridiculously close to 1, like 1\/exp(exp(exp(n))) close for example.<\/p>\n\n\n\n<p>Given a week or two to try out ideas and search the literature, I&#8217;m pretty sure that Freek and I could&#8217;ve solved this problem ourselves. Instead, though, I simply asked GPT5-Thinking. After five minutes, it gave me something confident, plausible-looking, and (I could tell) wrong. But rather than laughing at the silly AI like a skeptic might do, I <em>told<\/em> GPT5 how I knew it was wrong. 
It thought some more, apologized, and tried again, and gave me something better. So it went for a few iterations, much like interacting with a grad student or colleague. Within a half hour, it had suggested looking at the function<\/p>\n\n\n\n<p>$$ Tr[(I-E(\\theta))^{-1}] = \\sum_{i=1}^N \\frac{1}{1-\\lambda_i(\\theta)}. $$<\/p>\n\n\n\n<p>It pointed out, correctly, that this was a rational function in \u03b8 of controllable degree, which happened to encode the relevant information about how close the largest eigenvalue \u03bb<sub>max<\/sub>(E(\u03b8)) is to 1. And this &#8230; <em>worked<\/em>, as we could easily check ourselves with no AI assistance. And I mean, maybe GPT5 had seen this or a similar construction somewhere in its training data. But there&#8217;s not the slightest doubt that, if a student had given it to me, I would&#8217;ve called it clever.  Obvious with hindsight, but many such ideas are.<\/p>\n\n\n\n<p>I had tried similar problems a year ago, with the then-new GPT reasoning models, but I didn&#8217;t get results that were nearly as good. Now, in September 2025, I&#8217;m here to tell you that AI has finally come for what my experience tells me is the most quintessentially human of all human intellectual activities: namely, proving oracle separations between quantum complexity classes. Right now, it almost certainly <em>can&#8217;t<\/em> write the whole research paper (at least if you want it to be correct and good), but it <em>can<\/em> help you get unstuck if you otherwise know what you&#8217;re doing, which you might call a sweet spot. Who knows how long this state of affairs will last? I guess I should be grateful that I have tenure.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Update (Sep. 29): Since this post has now gone semi-viral on X, Hacker News, etc., with people arguing about how trivial or nontrivial was GPT5&#8217;s &#8220;discovery,&#8221; it seems worthwhile to say something that was implicit in the post. 
Namely, GPT5-Thinking&#8217;s suggestion of a function to use &#8220;should have&#8221; been obvious to us. It would have [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"advanced_seo_description":"","jetpack_seo_html_title":"","jetpack_seo_noindex":false,"_jetpack_feature_clip_id":0,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2},"_wpas_customize_per_network":false},"categories":[5,11,4],"tags":[],"class_list":["post-9183","post","type-post","status-publish","format-standard","hentry","category-complexity","category-nerd-interest","category-quantum"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts\/9183","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=9183"}],"version-history":[{"count":11,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts\/9183\/revisions"}],"predecessor-version":[{"id":9208,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts\/9183\/revisions\/9208"}],"wp:attachment":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=9183"}],"wp:term":[{"taxonomy":"ca
tegory","embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=9183"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=9183"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}