Archive for the ‘Speaking Truth to Parallelism’ Category

More on whether useful quantum computing is “imminent”

Sunday, December 21st, 2025

These days, the most common question I get goes something like this:

A decade ago, you told people that scalable quantum computing wasn’t imminent. Now, though, you claim it plausibly is imminent. Why have you reversed yourself??

I appreciated the friend of mine who paraphrased this as follows: “A decade ago you said you were 35. Now you say you’re 45. Explain yourself!”


A couple weeks ago, I was delighted to attend Q2B in Santa Clara, where I gave a keynote talk entitled “Why I Think Quantum Computing Works” (link goes to the PowerPoint slides). This is one of the most optimistic talks I’ve ever given. But mostly that’s just because, uncharacteristically for me, here I gave short shrift to the challenge of broadening the class of problems that achieve huge quantum speedups, and just focused on the experimental milestones achieved over the past year. With every experimental milestone, the little voice in my head that asks “but what if Gil Kalai turned out to be right after all? what if scalable QC wasn’t possible?” grows quieter, until now it can barely be heard.

Going to Q2B was extremely helpful in giving me a sense of the current state of the field. Ryan Babbush gave a superb overview (I couldn’t have improved a word) of the current status of quantum algorithms, while John Preskill’s annual where-we-stand talk was “magisterial” as usual (that’s the word I’ve long used for his talks), making mine look like just a warmup act for his. Meanwhile, Quantinuum took a victory lap, boasting of their recent successes in a way that I considered basically justified.


After returning from Q2B, I then did an hour-long podcast with “The Quantum Bull” on the topic “How Close Are We to Fault-Tolerant Quantum Computing?” You can watch it here:

As far as I remember, this is the first YouTube interview I’ve ever done that concentrates entirely on the current state of the QC race, skipping any attempt to explain amplitudes, interference, and other basic concepts. Despite that (or conceivably because of it?), I’m happy with how this interview turned out. Watch if you want to know my detailed current views on hardware—as always, I recommend 2x speed.

Or for those who don’t have the half hour, a quick summary:

  • In quantum computing, there are the large companies and startups that might succeed or might fail, but are at least trying to solve the real technical problems, and some of them are making amazing progress. And then there are the companies that have optimized for doing IPOs, getting astronomical valuations, and selling a narrative to retail investors and governments about how quantum computing is poised to revolutionize optimization and machine learning and finance. Right now, I see these two sets of companies as almost entirely disjoint from each other.
  • The interview also contains my most direct condemnation yet of some of the wild misrepresentations that IonQ, in particular, has made to governments about what QC will be good for (“unlike AI, quantum computers won’t hallucinate because they’re deterministic!”)
  • The two approaches that had the most impressive demonstrations in the past year are trapped ions (especially Quantinuum but also Oxford Ionics) and superconducting qubits (especially Google but also IBM), and perhaps also neutral atoms (especially QuEra but also Infleqtion and Atom Computing).
  • Contrary to a misconception that refuses to die, I haven’t dramatically changed my views on any of these matters. As I have for a quarter century, I continue to profess a lot of confidence in the basic principles of quantum computing theory worked out in the mid-1990s, and I also continue to profess ignorance of exactly how many years it will take to realize those principles in the lab, and of which hardware approach will get there first.
  • But yeah, of course I update in response to developments on the ground, because it would be insane not to! And 2025 was clearly a year that met or exceeded my expectations on hardware, with multiple platforms now boasting >99.9% fidelity two-qubit gates, at or above the theoretical threshold for fault-tolerance. This year updated me in favor of taking more seriously the aggressive pronouncements—the “roadmaps”—of Google, Quantinuum, QuEra, PsiQuantum, and other companies about where they could be in 2028 or 2029.
  • One more time for those in the back: the main known applications of quantum computers remain (1) the simulation of quantum physics and chemistry themselves, (2) breaking a lot of currently deployed cryptography, and (3) eventually, achieving some modest benefits for optimization, machine learning, and other areas (but it will probably be a while before those modest benefits win out in practice). To be sure, the detailed list of quantum speedups expands over time (as new quantum algorithms get discovered) and also contracts over time (as some of the quantum algorithms get dequantized). But the list of known applications “from 30,000 feet” remains fairly close to what it was a quarter century ago, after you hack away the dense thickets of obfuscation and hype.

I’m going to close this post with a warning. When Frisch and Peierls wrote their now-famous memo in March 1940, estimating the mass of Uranium-235 that would be needed for a fission bomb, they didn’t publish it in a journal, but communicated the result through military channels only. As recently as February 1939, Frisch and Meitner had published in Nature their theoretical explanation of recent experiments, showing that the uranium nucleus could fission when bombarded by neutrons. But by 1940, Frisch and Peierls realized that the time for open publication of these matters had passed.

Similarly, at some point, the people doing detailed estimates of how many physical qubits and gates it’ll take to break actually deployed cryptosystems using Shor’s algorithm are going to stop publishing those estimates, if for no other reason than the risk of giving too much information to adversaries. Indeed, for all we know, that point may have been passed already. This is the clearest warning that I can offer in public right now about the urgency of migrating to post-quantum cryptosystems, a process that I’m grateful is already underway.


Update: Someone on Twitter who’s “long $IONQ” says he’ll be posting about and investigating me every day, never resting until UT Austin fires me, in order to punish me for slandering IonQ and other “pure play” SPAC IPO quantum companies. And also, because I’ve been anti-Trump and pro-Biden. He confabulates that I must be trying to profit from my stance (eg by shorting the companies I criticize), it being inconceivable to him that anyone would say anything purely because they care about what’s true.

Quantum Investment Bros: Have you no shame?

Thursday, November 20th, 2025

Near the end of my last post, I made a little offhand remark:

[G]iven the current staggering rate of hardware progress, I now think it’s a live possibility that we’ll have a fault-tolerant quantum computer running Shor’s algorithm before the next US presidential election. And I say that not only because of the possibility of the next US presidential election getting cancelled, or preempted by runaway superintelligence!

As I later clarified, I’ll consider this “live possibility” to be fulfilled even if a fault-tolerant Shor’s algorithm is “merely” used to factor 15 into 3×5—a milestone that seems a few steps, but only a few steps, away from what Google, Quantinuum, QuEra, and others have already demonstrated over the past year. After that milestone, I then expect “smooth sailing” to more and more logical qubits and gates and the factorization of larger and larger integers, however fast or slow that ramp-up proceeds (which of course I don’t know).
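
For concreteness, here’s a minimal classical sketch, purely my own illustration with the textbook choice a=7, of what “running Shor’s algorithm on 15” amounts to. Everything below is the classical scaffolding of Shor’s algorithm; the one step a fault-tolerant quantum computer would actually perform, finding the multiplicative order r, is simply brute-forced here.

```python
from math import gcd

def shor_reduction_sketch(N=15, a=7):
    """Classical skeleton of Shor's algorithm: factor N via the
    multiplicative order of a mod N. A fault-tolerant QC would find
    the order r with quantum period-finding; here we brute-force it."""
    assert gcd(a, N) == 1, "else gcd(a, N) is already a nontrivial factor"
    r, x = 1, a % N
    while x != 1:                 # find the smallest r with a^r = 1 (mod N)
        x = (x * a) % N
        r += 1
    if r % 2 == 1:
        return None               # odd order: retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None               # trivial square root: retry with another a
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_reduction_sketch())    # (3, 5), since the order of 7 mod 15 is 4
```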

In any case, the main reason I made my remark was just to tee up the wisecrack about whether there’ll even be a 2028 US presidential election.


My remark, alas, then went viral on Twitter, with people posting countless takes like this:

A quantum expert skeptic who the bears quote all the time – Scott Aaronson – recently got very excited about a number of quantum advances. He now thinks there’s a possibility of running Shor before the next US president election – a timeline that lines up ONLY with $IONQ’s roadmap, and NOBODY else’s! This represent a MAJOR capitulation of previously predicted timelines by any skeptics.

Shall we enumerate the layers of ugh here?

  1. I’ve been saying for several years now that anyone paranoid about cybersecurity should probably already be looking to migrate to quantum-resistant cryptography, because one can’t rule out the possibility that hardware progress will be fast. I didn’t “capitulate”: I mildly updated what I said before, in light of exciting recent advances.
  2. A “live possibility” is short not only of a “certainty,” but of a “probability.” It’s basically just an “I’m not confident this won’t happen.”
  3. Worst is the obsessive focus on IonQ, a company that I never mentioned (except in the context of its recently-acquired subsidiary, Oxford Ionics), but which now has a $17 billion valuation. I should explain that, at least since it decided to do an IPO, IonQ has generally been regarded within the research community as … err … a bit like the early D-Wave, intellectual-respectability-wise. They’ll eagerly sell retail investors on the use of quantum computers to recognize handwriting and suchlike, despite (I would say) virtually no basis to believe in a quantum scaling advantage for such tasks. Or they’ll aggressively market current devices to governments who don’t understand what they’re for, but just want to say they have a quantum computer and not get left behind. Or they’ll testify to Congress that quantum, unlike AI, “doesn’t hallucinate” and indeed is “deterministic.” It pains me to write this, as IonQ was founded by (and indeed, still employs) scientists who I deeply admire and respect.
  4. Perhaps none of this would matter (or would matter only to pointy-headed theorists like me) if IonQ were the world leader in quantum computing hardware, or even trapped-ion hardware. But by all accounts, IonQ’s hardware and demonstrations have lagged well behind those of its direct competitor, Quantinuum. It seems to me that, to whatever extent IonQ gets vastly more attention, it’s mostly just because it chose to IPO early, and also because it’s prioritized marketing to the degree it has.

Over the past few days, I’ve explained the above to various people, only to have them look back at me with glazed, uncomprehending eyes and say, “so then, which quantum stock should I buy? or should I short quantum?”

It would seem rude for me to press quarters into these people’s hands, explaining that they must make gain from whatever they learn. So instead I reply: “You do realize, don’t you, that I’m, like, a professor at a state university, who flies coach and lives in a nice but unremarkable house? If I had any skill at timing the market, picking winners, etc., don’t you think I’d live in a mansion with an infinity pool, and fly my Cessna to whichever conferences I deigned to attend?”


It’s like this: if you think quantum computers able to break 2048-bit cryptography within 3-5 years are a near-certainty, then I’d say your confidence is unwarranted. If you think such quantum computers, once built, will also quickly revolutionize optimization and machine learning and finance and countless other domains beyond quantum simulation and cryptanalysis—then I’d say that more likely than not, an unscrupulous person has lied to you about our current understanding of quantum algorithms.

On the other hand, if you think Bitcoin, and SSL, and all the other protocols based on Shor-breakable cryptography, are almost certainly safe for the next 5 years … then I submit that your confidence is also unwarranted. Your confidence might then be like most physicists’ confidence in 1938 that nuclear weapons were decades away, or like my own confidence in 2015 that an AI able to pass a reasonable Turing Test was decades away. It might merely be the confidence that “this still looks like the work of decades—unless someone were to gather together all the scientific building blocks that have now been demonstrated, and scale them up like a stark raving madman.” The trouble is that sometimes people, y’know, do that.

Beyond that, the question of “how many years?” doesn’t even interest me very much, except insofar as I can mine from it the things I value in life, like scientific understanding, humor, and irony.


There are, famously, many intellectual Communists who are ruthless capitalists in their day-to-day lives. I somehow wound up the opposite. Intellectually, I see capitalism as a golden goose, a miraculous engine that’s lifted the human species out of its disease-ridden hovels and into air-conditioned high-rises, whereas Communism led instead to misery and gulags and piles of skulls every single time it was tried.

And yet, when I actually see the workings of capitalism up close, I often want to retch. In case after case, it seems, our system rewards bold, confident, risk-taking ignoramuses and liars, those who can shamelessly hype a technology (or conversely, declare it flatly impossible)—with such voices drowning out the cautious experts who not only strive to tell the truth, but also made all the actual discoveries that the technology rests on. My ideal economic system is, basically, whichever one can keep the people who can clearly explain the capabilities and limits and risks and benefits of X in charge of X for as long as possible.

HSBC unleashes yet another “qombie”: a zombie claim of quantum advantage that isn’t

Thursday, September 25th, 2025

Today, I got email after email asking me to comment on a new paper from HSBC—yes, the bank—together with IBM. The paper claims to use a quantum computer to get a 34% advantage in predictions of financial trading data. (See also blog posts here and here, or numerous popular articles that you can easily find and I won’t link.) What have we got? Let’s read the abstract:

The estimation of fill probabilities for trade orders represents a key ingredient in the optimization of algorithmic trading strategies. It is bound by the complex dynamics of financial markets with inherent uncertainties, and the limitations of models aiming to learn from multivariate financial time series that often exhibit stochastic properties with hidden temporal patterns. In this paper, we focus on algorithmic responses to trade inquiries in the corporate bond market and investigate fill probability estimation errors of common machine learning models when given real production-scale intraday trade event data, transformed by a quantum algorithm running on IBM Heron processors, as well as on noiseless quantum simulators for comparison. We introduce a framework to embed these quantum-generated data transforms as a decoupled offline component that can be selectively queried by models in low-latency institutional trade optimization settings. A trade execution backtesting method is employed to evaluate the fill prediction performance of these models in relation to their input data. We observe a relative gain of up to ∼ 34% in out-of-sample test scores for those models with access to quantum hardware-transformed data over those using the original trading data or transforms by noiseless quantum simulation. These empirical results suggest that the inherent noise in current quantum hardware contributes to this effect and motivates further studies. Our work demonstrates the emerging potential of quantum computing as a complementary explorative tool in quantitative finance and encourages applied industry research towards practical applications in trading.

As they say, there are more red flags here than in a People’s Liberation Army parade. To critique this paper is not quite “shooting fish in a barrel,” because the fish are already dead before we’ve reached the end of the abstract.

They see a quantum advantage for the task in question, but only because of the noise in their quantum hardware? When they simulate the noiseless quantum computation classically, the advantage disappears? WTF? This strikes me as all but an admission that the “advantage” is just a strange artifact of the particular methods that they decided to compare—that it has nothing really to do with quantum mechanics in general, or with quantum computational speedup in particular.

Indeed, the possibility of selection bias rears its head. How many times did someone do some totally unprincipled, stab-in-the-dark comparison of a specific quantum learning method against a specific classical method, and get predictions from the quantum method that were worse than whatever they got classically … so then they didn’t publish a paper about it?
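
For anyone who wants the selection-bias worry spelled out, here’s a tiny Monte Carlo sketch, purely my own illustration with made-up coin-flip data (nothing to do with the paper’s actual pipeline), of how trying many junk methods and reporting only the best one manufactures an “advantage”:

```python
import numpy as np

rng = np.random.default_rng(0)
n_labels, n_methods, n_trials = 200, 30, 1000   # all toy sizes, chosen arbitrarily

gaps = []
for _ in range(n_trials):
    y = rng.integers(0, 2, n_labels)            # coin-flip "market outcomes"
    # n_methods pure-noise predictors, each guessing at random:
    scores = (rng.integers(0, 2, (n_methods, n_labels)) == y).mean(axis=1)
    gaps.append(scores.max() - scores[0])       # best-of-many vs. a fixed baseline

print(f"spurious 'advantage' of best noise method: {np.mean(gaps):.1%}")
```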

If it seems like I’m being harsh, it’s because to my mind, the entire concept of this sort of study is fatally flawed from the beginning, optimized for generating headlines rather than knowledge. The first task, I would’ve thought, is to show the reality of quantum computational advantage in the system or algorithm under investigation, even just for a useless benchmark problem. Only after one has done that, has one earned the right to look for a practical benefit in algorithmic trading or predicting financial time-series data or whatever, coming from that same advantage. If you skip the first step, then whatever “benefits” you get from your quantum computer are overwhelmingly likely to be cargo cult benefits.

And yet none of it matters. The paper can, more or less, openly admit all this right in the abstract, and yet it will still predictably generate lots of credulous articles in the business and financial news about HSBC using quantum computers to improve bond trading!—which, one assumes, was the point of the exercise from the beginning. Qombies roam the earth: undead narratives of “quantum advantage for important business problems” detached from any serious underlying truth-claim. And even here at one of the top 50 quantum computing blogs on the planet, there’s nothing I can do about it other than scream into the void.


Update (Sep. 26): Someone let me know that Martin Shkreli, the “pharma bro,” will be hosting a conference call for investors to push back on quantum computing hype. He announced on X that he’s offering quantum computing experts $2k each to speak in his call. On the off chance that Shkreli reads this blog: I’d be willing to do it for $50k. And if Shkreli were to complain about my jacking up the price… 😄

Above my pay grade: Jensen Huang and the quantum computing stock market crash

Thursday, January 9th, 2025

Update (Jan. 13): Readers might enjoy the Bankless Podcast, in which I and Justin Drake of the Ethereum engineering team discuss quantum computing and its impact on cryptocurrency. I learned something interesting from Justin—namely that Satoshi has about $90 billion worth of bitcoin that’s never been touched since the cryptocurrency’s earliest days, much of which (added: the early stuff, the stuff not additionally protected by a hash function) would be stealable by anyone who could break elliptic curve cryptography—for example, by using a scalable quantum computer. At what point in time, if any, would this stash acquire the moral or even legal status of (say) gold doubloons just lying on the bottom of the ocean? Arrr, ’tis avast Hilbert space!


Apparently Jensen Huang, the CEO of NVIDIA, opined on an analyst call this week that quantum computing was plausibly still twenty years away from being practical. As a direct result, a bunch of publicly-traded quantum computing companies (including IonQ, Rigetti, and D-Wave) fell 40% or more in value, and even Google/Alphabet stock fell on the news.

So then friends and family attuned to the financial markets started sending me messages asking for my reaction, as the world’s semi-unwilling Quantum Computing Opiner-in-Chief.

My reaction? Mostly just that it felt really weird for all those billions of dollars to change hands, or evaporate, based on what a microchip CEO offhandedly opined about my tiny little field, while I (like much of that field) could’ve remained entirely oblivious to it, were it not for all of their messages!

But was Jensen Huang right in his time estimate? And, relatedly, what is the “correct” valuation of quantum computing companies? Alas, however much more I know about quantum computing than Jensen Huang does, that knowledge does not enable me to answer either question.

I can, of course, pontificate about the questions, as I can pontificate about anything.

To start with the question of timelines: yes, there’s a lot still to be done, and twenty years might well be correct. But as I’ve pointed out before, within the past year we’ve seen 2-qubit gates with ~99.9% fidelity, which is very near the threshold for practical fault-tolerance. And of course, Google has now demonstrated fault-tolerance that becomes more and more of a win with increasing code size. So no, I can’t confidently rule out commercially useful quantum simulations within the next decade. Like, it sounds fanciful, but then I remember how fanciful it would’ve seemed in 2012 that we’d have conversational AI by 2022. I was alive in 2012! And speaking of which, if you really believe (as many people now do) AI will match or exceed human capabilities in most fields in the next decade, then that will scramble all the other timelines too. And presumably Jensen Huang understands these points as well as anyone.

Now for the valuation question. On the one hand, Shtetl-Optimized readers will know that there’s been plenty of obfuscation and even outright lying, to journalists, the public, and investors, about what quantum computing will be good for and how soon. To whatever extent the previous valuations were based on that lying, a brutal correction was of course in order, regardless of what triggered it.

On the other hand, I can’t say with certainty that high valuations are wrong! After all, even if there’s only a 10% chance that something will produce $100B in value, that would still justify a $10B valuation. It’s a completely different way of thinking than what we’re used to in academia.
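
The arithmetic, made explicit (risk-neutral, and ignoring discounting, dilution, and everything else a real analyst would add):

```python
p_success    = 0.10    # a 10% chance of success...
value_if_hit = 100e9   # ...at producing $100B in value
print(f"expected value: ${p_success * value_if_hit / 1e9:.0f}B")  # $10B
```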

For whatever it’s worth, my own family’s money is just sitting in index funds and CDs. I have no quantum computing investments of any kind. I do sometimes accept consulting fees to talk to quantum computing startups and report back my thoughts. When I do, my highest recommendation is: “these people are smart and honest, everything they say about quantum algorithms is correct insofar as I can judge, and I hope they succeed. I wouldn’t invest my own money, but I’m very happy if you or anyone else does.” Meanwhile, my lowest recommendation is: “these people are hypesters and charlatans, and I hope they fail. But even then, I can’t say with confidence that their valuation won’t temporarily skyrocket, in which case investing in them would presumably have been the right call.”

So basically: it’s good that I became an academic rather than an investor.


Having returned from family vacation, I hope to get back to a more regular blogging schedule … let’s see how it goes!

Podcasts!

Wednesday, December 4th, 2024

Update (Dec. 9): For those who still haven’t gotten enough, check out a 1-hour Zoom panel discussion about quantum algorithms, featuring yours truly along with my distinguished colleagues Eddie Farhi, Aram Harrow, and Andrew Childs, moderated by Barry Sanders, as part of the QTML’2024 conference held in Melbourne (although, it being Thanksgiving week, none of the four panelists were actually there in person). Part of the panel devolves into a long debate between me and Eddie about how interesting quantum algorithms are if they don’t achieve speedups over classical algorithms, and whether some quantum algorithms papers mislead people by not clearly addressing the speedup question (you get one guess as to which side I took). I resolved going in to keep my comments as civil and polite as possible—you can judge for yourself how well I succeeded! Thanks very much to Barry and the other QTML organizers for making this happen.


Do you like watching me spout about AI alignment, watermarking, my time at OpenAI, the P versus NP problem, quantum computing, consciousness, Penrose’s views on physics and uncomputability, university culture, wokeness, free speech, my academic trajectory, and much more, despite my slightly spastic demeanor and my many verbal infelicities? Then holy crap are you in luck today! Here’s 2.5 hours of me talking to former professional poker players (and now wonderful Austin-based friends) Liv Boeree and her husband Igor Kurganov about all of those topics. (Or 1.25 hours if you watch at 2x speed, as I strongly recommend.)

But that’s not all! Here I am talking to Harvard’s Hrvoje Kukina, in a much shorter (45-minute) podcast focused on quantum computing, cosmological bounds on information processing, and the idea of the universe as a computer:

Last but not least, here I am in an hour-long podcast (this one audio-only) with longtime friend Kelly Weinersmith and her co-host Daniel Whiteson, talking about quantum computing.

Enjoy!

My podcast with Brian Greene

Friday, October 18th, 2024

Yes, he’s the guy from The Elegant Universe book and TV series. Our conversation is 1 hour 40 minutes; as usual I strongly recommend listening at 2x speed. The topics, chosen by Brian, include quantum computing (algorithms, hardware, error-correction … the works), my childhood, the interpretation of quantum mechanics, the current state of AI, the future of sentient life in the cosmos, and mathematical Platonism. I’m happy with how it turned out; in particular, my verbal infelicities seem to have been at a minimum this time. I recommend skipping the YouTube comments if you want to stay sane, but do share your questions and reactions in the comments here. Thanks to Brian and his team for doing this. Enjoy!


Update (Oct. 28): If that’s not enough Scott Aaronson video content for you, please enjoy another quantum computing podcast interview, this one with Ayush Prakash and shorter (clocking in at 45 minutes). Ayush pitched this podcast to me as an opportunity to explain quantum computing to Gen Z. Thus, I considered peppering my explanations of interference and entanglement with such phrases as ‘fo-shizzle’ and ‘da bomb,’ but I desisted after reflecting that whatever youth slang I knew was probably already outdated whenever I’d picked it up, back in the twentieth century.

Quantum advantage for NP approximation? For REAL this time?

Saturday, October 5th, 2024

The other night I spoke at a quantum computing event and was asked—for the hundredth time? the thousandth?—whether I agreed that the quantum algorithm called QAOA was poised to revolutionize industries by finding better solutions to NP-hard optimization problems. I replied that while serious, worthwhile research on that algorithm continues, alas, so far I have yet to see a single piece of evidence that QAOA outperforms the best classical heuristics on any problem that anyone cares about. (Note added: in the comments, Ashley Montanaro shares a paper with empirical evidence that QAOA provides a modest polynomial speedup over known classical heuristics for random k-SAT. This is the best/only such evidence I’ve seen, and which still stands as far as I know!)

I added that I was sad to see the arXiv flooded with thousands of relentlessly upbeat QAOA papers that dodge the speedup question by simply never raising it at all. I said that, in my experience, these papers reliably led outsiders to conclude that surely there must be lots of excellent known speedups from QAOA—since otherwise, why would so many people be writing papers about it?

Anyway, the person right after me talked about a “quantum dating app” (!) they were developing.

I figured that, as usual, my words had thudded to the ground with zero impact, truth never having had a chance against what sounds good and what everyone wants to hear.

But then, the morning afterward, someone from the audience emailed me that, incredulous at my words, he went through a bunch of QAOA papers, looking for the evidence of its beating classical algorithms that he knew must be in them, and was shocked to find the evidence missing, just as I had claimed! So he changed his view.

That one message filled me with renewed hope about my ability to inject icy blasts of reality into the quantum algorithms discourse.


So, with that prologue, surely I’m about to give you yet another icy blast of quantum algorithms not helping for optimization problems?

Aha! Inspired by Scott Alexander, this is the part of the post where, having led you one way, I suddenly jerk you the other way. My highest loyalty, you see, is not to any narrative, but only to THE TRUTH.

And the truth is this: this summer, my old friend Stephen Jordan and seven coauthors, from Google and elsewhere, put out a striking preprint about a brand-new quantum algorithm for optimization problems that they call Decoded Quantum Interferometry (DQI). This week Stephen was gracious enough to explain the new algorithm in detail when he visited our group at UT Austin.

DQI can be used for a variety of NP-hard optimization problems, at least in the regime of approximation where they aren’t NP-hard. But a canonical example is what the authors call “Optimal Polynomial Intersection” or OPI, which involves finding a low-degree polynomial that intersects as many subsets as possible from a given list. Here’s the formal definition:

OPI. Given integers n < p with p prime, we’re given as input subsets S_1,…,S_{p−1} of the finite field F_p. The goal is to find a degree-(n−1) polynomial Q that maximizes the number of y ∈ {1,…,p−1} such that Q(y) ∈ S_y, i.e. that intersects as many of the subsets as possible.

For this problem, taking as an example the case p−1 = 10n and |S_y| = ⌊p/2⌋ for all y, Stephen et al. prove that DQI satisfies a 1/2 + (√19)/20 ≈ 0.7179 fraction of the p−1 constraints in polynomial time. By contrast, they say the best classical polynomial-time algorithm they were able to find satisfies a 0.55 + o(1) fraction of the constraints.
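
To make the objective concrete, here’s a minimal sketch of an OPI instance and its scoring function (my own toy illustration, not the authors’ code; the instance is tiny enough to solve by brute force, which of course misses the entire point of DQI):

```python
import random

def opi_score(Q_coeffs, subsets, p):
    """Count how many y in {1,...,p-1} have Q(y) in S_y,
    where Q is given by its coefficients over F_p (low degree first)."""
    score = 0
    for y in range(1, p):
        Qy = 0
        for c in reversed(Q_coeffs):      # Horner evaluation of Q(y) mod p
            Qy = (Qy * y + c) % p
        if Qy in subsets[y]:
            score += 1
    return score

# Toy instance in the paper's regime p - 1 = 10n: here p = 11, n = 1,
# so Q is a constant polynomial, and |S_y| = floor(p/2) for every y.
random.seed(0)
p, n = 11, 1
subsets = {y: set(random.sample(range(p), p // 2)) for y in range(1, p)}
best = max(range(p), key=lambda c: opi_score([c], subsets, p))
print(f"best constant {best} satisfies {opi_score([best], subsets, p)} of {p - 1}")
```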

To my knowledge, this is the first serious claim to get a better approximation ratio quantumly for an NP-hard problem, since Farhi et al. made the claim for QAOA solving something called MAX-E3LIN2 back in 2014, and then my blogging about it led to a group of ten computer scientists finding a classical algorithm that got an even better approximation.

So, how did Stephen et al. pull this off? How did they get around the fact that, again and again, exponential quantum speedups only seem to exist for algebraically structured problems like factoring or discrete log, and not for problems like 3SAT or Max-Cut that lack algebraic structure?

Here’s the key: they didn’t. Instead they leaned into the fact, by targeting an optimization problem that (despite being NP-hard) has loads of algebraic structure! The key insight, in their new DQI algorithm, is that the Quantum Fourier Transform can be used to reduce other NP-hard problems to problems of optimal decoding of a suitable error-correcting code. (This insight built on the breakthrough two years ago by Yamakawa and Zhandry, giving a quantum algorithm that gets an exponential speedup for an NP search problem relative to a random oracle.)

Now, sometimes the reduction to a coding theory problem is “out of the frying pan and into the fire,” as the new optimization problem is no easier than the original one. In the special case of searching for a low-degree polynomial, however, the optimal decoding problem ends up being for the Reed-Solomon code, where we’ve known efficient classical algorithms for generations, famously including the Berlekamp-Welch algorithm.
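
Since Berlekamp-Welch is carrying real weight here, a compact from-scratch sketch of it over F_p may be useful (my own illustration; production Reed-Solomon decoders are far more optimized). It finds a monic error-locator polynomial E and a product polynomial Q = P·E by solving one linear system, then recovers the message polynomial P by division:

```python
def solve_mod_p(A, b, p):
    """Gaussian elimination over F_p (p prime): return one solution of
    A x = b, with any free variables set to 0."""
    m, n = len(A), len(A[0])
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    pivots, r = [], 0
    for c in range(n):
        pr = next((i for i in range(r, m) if M[i][c]), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        inv = pow(M[r][c], p - 2, p)               # inverse via Fermat
        M[r] = [(v * inv) % p for v in M[r]]
        for i in range(m):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(vi - f * vr) % p for vi, vr in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    x = [0] * n
    for row, col in enumerate(pivots):
        x[col] = M[row][n]
    return x

def polydiv_mod_p(num, den, p):
    """Polynomial division over F_p, coefficients listed low-to-high."""
    num = num[:]
    quot = [0] * (len(num) - len(den) + 1)
    inv_lead = pow(den[-1], p - 2, p)
    for i in range(len(quot) - 1, -1, -1):
        quot[i] = (num[i + len(den) - 1] * inv_lead) % p
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - quot[i] * d) % p
    return quot, num[:len(den) - 1]

def berlekamp_welch(points, k, e, p):
    """Recover a degree-<k polynomial over F_p from >= k+2e evaluations,
    at most e of which were corrupted."""
    nq = k + e   # unknowns: Q's coeffs q_0..q_{nq-1}, then E's low coeffs e_0..e_{e-1}
    A, b = [], []
    for x, y in points:   # constraint: Q(x) = y * E(x), with E monic of degree e
        A.append([pow(x, j, p) for j in range(nq)]
                 + [(-y * pow(x, j, p)) % p for j in range(e)])
        b.append((y * pow(x, e, p)) % p)
    sol = solve_mod_p(A, b, p)
    Q, E = sol[:nq], sol[nq:] + [1]
    P, rem = polydiv_mod_p(Q, E, p)
    assert not any(rem), "more than e errors"
    return P

# Demo: recover P(x) = 5 + 7x^2 over F_97 from 7 points, 2 of them corrupted.
p, k, e = 97, 3, 2
pts = [(x, (5 + 7 * x * x) % p) for x in range(1, 8)]
pts[1] = (pts[1][0], (pts[1][1] + 40) % p)
pts[4] = (pts[4][0], (pts[4][1] + 11) % p)
print(berlekamp_welch(pts, k, e, p))   # [5, 0, 7]
```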

One open problem that I find extremely interesting is whether OPI, in the regime where DQI works, is in coNP or coAM, or has some other identifiable structural feature that presumably precludes its being NP-hard.

Regardless, though, as of this week, the hope of using quantum computers to get better approximation ratios for NP-hard optimization problems is back in business! Will that remain so? Or will my blogging about such an attempt yet again lead to its dequantization? Either way I’m happy.

Quantum Computing: Between Hope and Hype

Sunday, September 22nd, 2024

So, back in June the White House announced that UCLA would host a binational US/India workshop, for national security officials from both countries to learn about the current status of quantum computing and post-quantum cryptography. It fell to my friend and colleague Rafail Ostrovsky to organize the workshop, which ended up being held last week. When Rafi invited me to give the opening talk, I knew he’d keep emailing until I said yes. So, on the 3-hour flight to LAX, I wrote the following talk in a spiral notebook, which I then delivered the next morning with no slides. I called it “Quantum Computing: Between Hope and Hype.” I thought Shtetl-Optimized readers might be interested too, since it contains my reflections on a quarter-century in quantum computing, and prognostications on what I expect soon. Enjoy, and let me know what you think!


Quantum Computing: Between Hope and Hype
by Scott Aaronson

September 16, 2024

When Rafi invited me to open this event, it sounded like he wanted big-picture pontification more than technical results, which is just as well, since I’m getting old for the latter. Also, I’m just now getting back into quantum computing after a two-year leave at OpenAI to think about the theoretical foundations of AI safety. Luckily for me, that was a relaxing experience, since not much happened in AI these past two years. [Pause for laughs] So then, did anything happen in quantum computing while I was away?

This, of course, has been an extraordinary time for both quantum computing and AI, and not only because the two fields were mentioned for the first time in an American presidential debate (along with, I think, the problem of immigrants eating pets). But it’s extraordinary for quantum computing and for AI in very different ways. In AI, practice is wildly ahead of theory, and there’s a race for scientific understanding to catch up to where we’ve gotten via the pure scaling of neural nets and the compute and data used to train them. In quantum computing, it’s just the opposite: there’s right now a race for practice to catch up to where theory has been since the mid-1990s.

I started in quantum computing around 1998, which is not quite as long as some people here, but which does cover most of the time since Shor’s algorithm and the rest were discovered. So I can say: this past year or two is the first time I’ve felt like the race to build a scalable fault-tolerant quantum computer is actually underway. Like people are no longer merely giving talks about the race or warming up for the race, but running the race.

Within just the last few weeks, we saw the group at Google announce that they’d used the Kitaev surface code, with distance 7, to encode one logical qubit using 100 or so physical qubits, in superconducting architecture. They got a net gain: their logical qubit stays alive for maybe twice as long as the underlying physical qubits do. And crucially, they find that their logical coherence time increases as they pass to larger codes, with higher distance, on more physical qubits. With superconducting, there are still limits to how many physical qubits you can stuff onto a chip, and eventually you’ll need communication of qubits between chips, which has yet to be demonstrated. But if you could scale Google’s current experiment even to 1500 physical qubits, you’d probably get a logical qubit with error rates low enough to use it as a building block for a future scalable fault-tolerant device.
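
For a rough sanity check on those numbers, here’s the standard rotated-surface-code accounting (my gloss, with an illustrative error-suppression formula; Google’s exact layout and constants may differ): distance d uses d² data qubits plus d²−1 measurement qubits, and below threshold the logical error rate falls off exponentially in d.

```python
def surface_code_qubits(d):
    """Rotated surface code at distance d: d^2 data + d^2 - 1 measure qubits."""
    return 2 * d * d - 1

def logical_error(p_phys, p_thresh, d, A=0.1):
    """Textbook below-threshold scaling p_L ~ A * (p/p_th)^((d+1)/2).
    The prefactor A and exact exponent are illustrative assumptions."""
    return A * (p_phys / p_thresh) ** ((d + 1) // 2)

for d in (7, 17, 27):
    print(d, surface_code_qubits(d), f"p_L ~ {logical_error(1e-3, 1e-2, d):.0e}")
# distance 7 -> 97 physical qubits ("100 or so"); distance 27 -> 1457 (~1500)
```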

Then, just last week, a collaboration between Microsoft and Quantinuum announced that, in the trapped-ion architecture, they applied pretty substantial circuits to logically-encoded qubits—again in a way that gets a net gain in fidelity over not doing error-correction, modulo a debate about whether they’re relying too much on postselection. So, they made a GHZ state, which is basically like a Schrödinger cat, out of 12 logically encoded qubits. They also did a “quantum chemistry simulation,” which had only two logical qubits, but which required three logical non-Clifford gates—which is the hard kind of gate when you’re doing error-correction.

Because of these advances, as well as others—what QuEra is doing with neutral atoms, what PsiQuantum and Xanadu are doing with photonics, etc.—I’m now more optimistic than I’ve ever been that, if things continue at the current rate, either there are useful fault-tolerant QCs in the next decade, or else something surprising happens to stop that. Plausibly we’ll get there not just with one hardware architecture, but with multiple ones, much like the Manhattan Project got a uranium bomb and a plutonium bomb around the same time, so the question will become which one is most economic.

If someone asks me why I’m now so optimistic, the core of the argument is 2-qubit gate fidelities. We’ve known for years that, at least on paper, quantum fault-tolerance becomes a net win (that is, you sustainably correct errors faster than you introduce new ones) once you have physical 2-qubit gates that are ~99.99% reliable. The problem has “merely” been how far we were from that. When I entered the field, in the late 1990s, it would’ve been like a Science or Nature paper to do a 2-qubit gate with 50% fidelity. But then at some point the 50% became 90%, became 95%, became 99%, and within the past year, multiple groups have reported 99.9%. So, if you just plot the log of the infidelity as a function of year and stare at it—yeah, you’d feel pretty optimistic about the next decade too!
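
(If you want to literally make that plot: the fidelity milestones below are the ones from this talk, but the year placements are my rough illustrative guesses, so treat the extrapolation as a cartoon, not a forecast.)

```python
import numpy as np

years      = np.array([1998, 2008, 2014, 2019, 2024])   # illustrative guesses
fidelities = np.array([0.50, 0.90, 0.95, 0.99, 0.999])  # milestones from the talk

slope, intercept = np.polyfit(years, np.log10(1 - fidelities), 1)
for target in (0.9999, 0.99999):
    year = (np.log10(1 - target) - intercept) / slope   # invert the linear fit
    print(f"fidelity {target} extrapolates to ~{year:.0f}")
```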

Or pessimistic, as the case may be! To any of you who are worried about post-quantum cryptography—by now I’m so used to delivering a message of, maybe, eventually, someone will need to start thinking about migrating from RSA and Diffie-Hellman and elliptic curve crypto to lattice-based crypto, or other systems that could plausibly withstand quantum attack. I think today that message needs to change. I think today the message needs to be: yes, unequivocally, worry about this now. Have a plan.

So, I think this moment is a good one for reflection. We’re used to quantum computing having this air of unreality about it. Like sure, we go to conferences, we prove theorems about these complexity classes like BQP and QMA, the experimenters do little toy demos that don’t scale. But if this will ever be practical at all, then for all we know, not for another 200 years. It feels really different to think of this as something plausibly imminent. So what I want to do for the rest of this talk is to step back and ask, what are the main reasons why people regarded this as not entirely real? And what can we say about those reasons in light of where we are today?


Reason #1

For the general public, maybe the overriding reason not to take QC seriously has just been that it sounded too good to be true. Like, great, you’ll have this magic machine that’s gonna exponentially speed up every problem in optimization and machine learning and finance by trying out every possible solution simultaneously, in different parallel universes. Does it also dice peppers?

For this objection, I’d say that our response hasn’t changed at all in 30 years, and it’s simply, “No, that’s not what it will do and not how it will work.” We should acknowledge that laypeople and journalists and unfortunately even some investors and government officials have been misled by the people whose job it was to explain this stuff to them.

I think it’s important to tell people that the only hope of getting a speedup from a QC is to exploit the way that QM works differently from classical probability theory — in particular, that it involves these numbers called amplitudes, which can be positive, negative, or even complex. With every quantum algorithm, what you’re trying to do is choreograph a pattern of interference where for each wrong answer, the contributions to its amplitude cancel each other out, whereas the contributions to the amplitude of the right answer reinforce each other. The trouble is, it’s only for a few practical problems that we know how to do that in a way that vastly outperforms the best known classical algorithms.
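
Here’s the phenomenon in a few lines of numpy (a toy single-qubit example, not a speedup): applying a Hadamard gate twice makes the two contributions to the |1⟩ amplitude cancel, so the “wrong” outcome is never observed.

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard gate

psi = np.array([1.0, 0.0])             # start in |0>
psi = H @ psi                          # equal superposition: amplitudes (0.707, 0.707)
psi = H @ psi                          # the two paths to |1> have opposite signs
print(np.round(psi, 6))                # [1. 0.] -- destructive interference on |1>
print(np.abs(psi) ** 2)                # Born-rule probabilities: [1. 0.]
```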

What are those problems? Here, for all the theoretical progress that’s been made in these past decades, I’m going to give the same answer in 2024 that I would’ve given in 1998. Namely, there’s the simulation of chemistry, materials, nuclear physics, or anything else where many-body quantum effects matter. This was Feynman’s original application from 1981, but probably still the most important one commercially. It could plausibly help with batteries, drugs, solar cells, high-temperature superconductors, all kinds of other things, maybe even in the next few years.

And then there’s breaking public-key cryptography, which is not commercially important, but is important for other reasons well-known to everyone here.

And then there’s everything else. For problems in optimization, machine learning, finance, and so on, there’s typically a Grover’s speedup, but that of course is “only” a square root and not an exponential, which means that it will take much longer before it’s relevant in practice. And one of the earliest things we learned in quantum computing theory is that there’s no “black-box” way to beat the Grover speedup. By the way, that’s also relevant to breaking cryptography — other than the subset of cryptography that’s based on abelian groups and can be broken by Shor’s algorithm or the like. The centerpiece of my PhD thesis, twenty years ago, was the theorem that you can’t get more than a Grover-type polynomial speedup for the black-box problem of finding collisions in cryptographic hash functions.
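
To put the quadratic-versus-exponential distinction in numbers, a toy comparison:

```python
from math import isqrt, pi

N = 2 ** 64                               # unstructured search space
grover_queries = int(pi / 4 * isqrt(N))   # ~ (pi/4) * sqrt(N) quantum queries
print(f"classical: ~2^63 expected queries; "
      f"Grover: ~2^{grover_queries.bit_length() - 1}")
```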

So then what remains? Well, there are all sorts of heuristic quantum algorithms for classical optimization and machine learning problems — QAOA (Quantum Approximate Optimization Algorithm), quantum annealing, and so on — and we can hope that sometimes they’ll beat the best classical heuristics for the same problems, but it will be trench warfare, not just magically speeding up everything. There are lots of quantum algorithms somehow inspired by the HHL (Harrow-Hassidim-Lloyd) algorithm for solving linear systems, and we can hope that some of those algorithms will get exponential speedups for end-to-end problems that matter, as opposed to problems of transforming one quantum state to another quantum state. We can of course hope that new quantum algorithms will be discovered. And most of all, we can look for entirely new problem domains, where people hadn’t even considered using quantum computers before—new orchards in which to pick low-hanging fruit. Recently, Shih-Han Hung and I, along with others, have proposed using current QCs to generate cryptographically certified random numbers, which could be used in proof-of-stake cryptocurrencies like Ethereum. I’m hopeful that people will find other protocol applications of QC like that one — “proof of quantum work.” [Another major potential protocol application, which Dan Boneh brought up after my talk, is quantum one-shot signatures.]

Anyway, taken together, I don’t think any of this is too good to be true. I think it’s genuinely good and probably true!


Reason #2

A second reason people didn’t take seriously that QC was actually going to happen was the general thesis of technological stagnation, at least in the physical world. You know, maybe in the 40s and 50s, humans built entirely new types of machines, but nowadays what do we do? We issue press releases. We make promises. We argue on social media.

Nowadays, of course, pessimism about technological progress seems hard to square with the revolution that’s happening in AI, another field that spent decades being ridiculed for unfulfilled promises and that’s now fulfilling the promises. I’d also speculate that, to the extent there is technological stagnation, most of it is simply that it’s become really hard to build new infrastructure—high-speed rail, nuclear power plants, futuristic cities—for legal reasons and NIMBY reasons and environmental review reasons and Baumol’s cost disease reasons. But none of that really applies to QC, just like it hasn’t applied so far to AI.


Reason #3

A third reason people didn’t take this seriously was the sense of “It’s been 20 years already, where’s my quantum computer?” QC is often compared to fusion power, another technology that’s “eternally just over the horizon.” (Except, I’m no expert, but there seems to be dramatic progress these days in fusion power too!)

My response to the people who make that complaint was always, like, how much do you know about the history of technology? It took more than a century for heavier-than-air flight to go from correct statements of the basic principle to reality. Universal programmable classical computers surely seemed more fantastical from the standpoint of 1920 than quantum computers seem today, but then a few decades later they were built. Today, AI provides a particularly dramatic example where ideas were proposed a long time ago—neural nets, backpropagation—those ideas were then written off as failures, but no, we now know that the ideas were perfectly sound; it just took a few decades for the scaling of hardware to catch up to the ideas. That’s why this objection never had much purchase with me, even before the dramatic advances in experimental quantum error-correction of the last year or two.


Reason #4

A fourth reason why people didn’t take QC seriously is that, a century after the discovery of QM, some people still harbor doubts about quantum mechanics itself. Either they explicitly doubt it, like Leonid Levin, Roger Penrose, or Gerard ‘t Hooft. Or they say things like, “complex Hilbert space in 2^n dimensions is a nice mathematical formalism, but mathematical formalism is not reality”—the kind of thing you say when you want to doubt, but not take full intellectual responsibility for your doubts.

I think the only thing for us to say in response, as quantum computing researchers—and the thing I consistently have said—is man, we welcome that confrontation! Let’s test quantum mechanics in this new regime. And if, instead of building a QC, we have to settle for “merely” overthrowing quantum mechanics and opening up a new era in physics—well then, I guess we’ll have to find some way to live with that.


Reason #5

My final reason why people didn’t take QC seriously is the only technical one I’ll discuss here. Namely, maybe quantum mechanics is fine but fault-tolerant quantum computing is fundamentally “screened off” or “censored” by decoherence or noise—and maybe the theory of quantum fault-tolerance, which seemed to indicate the opposite, makes unjustified assumptions. This has been the position of Gil Kalai, for example.

The challenge for that position has always been to articulate, what is true about the world instead? Can every realistic quantum system be simulated efficiently by a classical computer? If so, how? What is a model of correlated noise that kills QC without also killing scalable classical computing?—which turns out to be a hard problem.

In any case, I think this position has been dealt a severe blow by the Random Circuit Sampling quantum supremacy experiments of the past five years. Scientifically, the most important thing we’ve learned from these experiments is that the fidelity seems to decay exponentially with the number of qubits, but “only” exponentially — as it would if the errors were independent from one gate to the next, precisely as the theory of quantum fault-tolerance assumes. So for anyone who believes this objection, I’d say that the ball is now firmly in their court.


So, if we accept that QC is on the threshold of becoming real, what are the next steps? There are the obvious ones: push forward with building better hardware and using it to demonstrate logical qubits and fault-tolerant operations on them. Continue developing better error-correction methods. Continue looking for new quantum algorithms and new problems for those algorithms to solve.

But there’s also a less obvious decision right now. Namely, do we put everything into fault-tolerant qubits, or do we continue trying to demonstrate quantum advantage in the NISQ (pre-fault-tolerant) era? There’s a case to be made that fault-tolerance will ultimately be needed for scaling, and anything you do without fault-tolerance is some variety of non-scalable circus trick, so we might as well get over the hump now.

But I’d like to advocate putting at least some thought into how to demonstrate a quantum advantage in the near-term. That could be via cryptographic protocols, like those that Kahanamoku-Meyer et al. have proposed. It could be via pseudorandom peaked quantum circuits, a recent proposal by me and Yuxuan Zhang—if we can figure out an efficient way to generate the circuits. Or we could try to demonstrate what William Kretschmer, Harry Buhrman, and I have called “quantum information supremacy,” where, instead of computational advantage, you try to do an experiment that directly shows the vastness of Hilbert space, via exponential advantages for quantum communication complexity, for example. I’m optimistic that that might be doable in the very near future, and have been working with Quantinuum to try to do it.

On the one hand, when I started in quantum computing 25 years ago, I reconciled myself to the prospect that I’m going to study what fundamental physics implies about the limits of computation, and maybe I’ll never live to see any of it experimentally tested, and that’s fine. On the other hand, once you tell me that there is a serious prospect of testing it soon, then I become kind of impatient. Some part of me says, let’s do this! Let’s try to achieve forthwith what I’ve always regarded as the #1 application of quantum computers, more important than codebreaking or even quantum simulation: namely, disproving the people who said that scalable quantum computing was impossible.

Book Review: “Quantum Supremacy” by Michio Kaku (tl;dr DO NOT BUY)

Friday, May 19th, 2023

Update (June 6): I wish to clarify that I did not write any of the dialogue for the “Scott Aaronson” character who refutes Michio Kaku’s quantum computing hype in this YouTube video, which uses an AI recreation of my voice. The writer appears to be physics/math blogger and podcaster Hassaan Saleem; see his website here. Luckily, the character and I do share many common views; I’m sure we’d hit it off if we met.


When I was a teenager, I enjoyed reading Hyperspace, an early popularization of string theory by the theoretical physicist Michio Kaku. I’m sure I’d have plenty of criticisms if I reread it today, but at the time, I liked it a lot. In the decades since, Kaku has widened his ambit to, well, pretty much everything, regularly churning out popular books with subtitles like “How Science Will Revolutionize the 21st Century” and “How Science Will Shape Human Destiny and Our Daily Lives.” He’s also appeared on countless TV specials, in many cases to argue that UFOs likely contain extraterrestrial visitors.

Now Kaku has a new bestseller about quantum computing, creatively entitled Quantum Supremacy. He even appeared on Joe Rogan a couple weeks ago to promote the book, surely reaching an orders-of-magnitude larger audience than I have in two decades of trying to explain quantum computing to non-experts. (Incidentally, to those who’ve asked why Joe Rogan hasn’t invited me on his show to explain quantum computing: I guess you now have an answer of sorts!)

In the spirit, perhaps, of the TikTokkers who eat live cockroaches or whatever to satisfy their viewers, I decided to oblige loyal Shtetl-Optimized fans by buying Quantum Supremacy and reading it. So I can now state with confidence: beating out a crowded field, this is the worst book about quantum computing, for some definition of the word “about,” that I’ve ever encountered.

Admittedly, it’s not obvious why I’m reviewing the book here at all. Among people who’ve heard of this blog, I expect that approximately zero would be tempted to buy Kaku’s book, at least if they flipped through a few random pages and saw the … level of care that went into them. Conversely, the book’s target readers have probably never visited a blog like this one and never will. So what’s the use of this post?

Well, as the accidental #1 quantum computing blogger on the planet, I feel a sort of grim obligation here. Who knows, maybe this post will show up in the first page of Google results for Kaku’s book, and it will manage to rescue two or three people from the kindergarten of lies.


Where to begin? Should we just go through the first chapter with a red pen? OK then: on the very first page, Kaku writes,

Google revealed that their Sycamore quantum computer could solve a mathematical problem in 200 seconds that would take 10,000 years on the world’s fastest supercomputer.

No, the “10,000 years” estimate was quickly falsified, as anyone following the subject knows. I’d be the first to stress that the situation is complicated; compared to the best currently-known classical algorithms, some quantum advantage remains for the Random Circuit Sampling task, depending on how you measure it. But to repeat the “10,000 years” figure at this point, with no qualifications, is actively misleading.

Turning to the second page:

[Quantum computers] are a new type of computer that can tackle problems that digital computers can never solve, even with an infinite amount of time. For example, digital computers can never accurately calculate how atoms combine to create crucial chemical reactions, especially those that make life possible. Digital computers can only compute on digital tape, consisting of a series of 0s and 1s, which are too crude to describe the delicate waves of electrons dancing deep inside a molecule. For example, when tediously computing the paths taken by a mouse in a maze, a digital computer has to painfully analyze each possible path, one after the other. A quantum computer, however, simultaneously analyzes all possible paths at the same time, with lightning speed.

OK, so here Kaku has already perpetuated two of the most basic, forehead-banging errors about what quantum computers can do. In truth, anything that a QC can calculate, a classical computer can calculate as well, given exponentially more time: for example, by representing the entire wavefunction, all 2^n amplitudes, to whatever accuracy is needed. That’s why it was understood from the very beginning that quantum computers can’t change what’s computable, but only how efficiently things can be computed.
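
To spell that out with a sketch (my own toy illustration): a brute-force classical simulator just stores all 2^n amplitudes in an array, which is fine for 3 qubits and hopeless for 300.

```python
import numpy as np

def apply_1q_gate(state, gate, target, n):
    """Apply a single-qubit gate to qubit `target` of an n-qubit statevector
    (all 2^n amplitudes held explicitly -- exponential memory)."""
    psi = state.reshape([2] * n)                # one tensor axis per qubit
    psi = np.moveaxis(psi, target, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, target).reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                  # |000>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for q in range(n):
    state = apply_1q_gate(state, H, q, n)       # uniform superposition
print(np.abs(state) ** 2)                       # eight equal Born-rule probabilities, 1/8 each
```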

And then there’s the Misconception of Misconceptions, about how a QC “analyzes all possible paths at the same time”—with no recognition anywhere of the central difficulty, the thing that makes a QC enormously weaker than an exponentially parallel classical computer, but is also the new and interesting part, namely that you only get to see a single, random outcome when you measure, with its probability given by the Born rule. That’s the error so common that I warn against it right below the title of my blog.

[Q]uantum computers are so powerful that, in principle, they could break all known cybercodes.

Nope, that’s strongly believed to be false, just like the analogous statement for classical computers. Despite its obvious relevance for business and policy types, the entire field of post-quantum cryptography—including the lattice-based public-key cryptosystems that have by now survived 20+ years of efforts to find a quantum algorithm to break them—receives just a single vague mention, on pages 84-85. The possibility of cryptography surviving quantum computers is quickly dismissed because “these new trapdoor functions are not easy to implement.” (But they have been implemented.)


There’s no attempt, anywhere in this book, to explain how any quantum algorithm actually works, let alone is there a word anywhere about the limitations of quantum algorithms. And yet there’s still enough said to be wrong. On page 84, shortly after confusing the concept of a one-way function with that of a trapdoor function, Kaku writes:

Let N represent the number we wish to factorize. For an ordinary digital computer, the amount of time it takes to factorize a number grows exponentially, like t ~ e^N, times some unimportant factors.

This is a double howler: first, trial division takes only ~√N time; Kaku has confused N itself with its number of digits, ~log₂N. Second, he seems unaware that much better classical factoring algorithms, like the Number Field Sieve, have been known for decades, even though those algorithms play a central role in codebreaking and in any discussion of where the quantum/classical crossover might happen.
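
To spell out the arithmetic in code (my sketch, obviously not from the book): even braindead trial division, the slowest factoring method anyone would bother to write down, already refutes the e^N claim.

    def trial_division(N):
        # Factor N by trying divisors up to sqrt(N): ~sqrt(N) steps, not e^N.
        # That cost is exponential in the number of DIGITS of N (~log2 N), not
        # in N itself. The Number Field Sieve does far better still, roughly
        # exp(c * (log N)^(1/3) * (log log N)^(2/3)), and it, not trial
        # division, is the classical baseline that Shor's algorithm must beat.
        factors, d = [], 2
        while d * d <= N:
            while N % d == 0:
                factors.append(d)
                N //= d
            d += 1
        if N > 1:
            factors.append(N)
        return factors

    print(trial_division(2 ** 32 + 1))   # [641, 6700417], in a few thousand steps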


Honestly, though, the errors aren’t the worst of it. The majority of the book is not even worth hunting for errors in, because fundamentally, it’s filler.

First there’s page after page breathlessly quoting prestigious-sounding people and organizations—Google’s Sundar Pichai, various government agencies, some report by Deloitte—about just how revolutionary they think quantum computing will be. Then there are capsule hagiographies of Babbage and Lovelace, Gödel and Turing, Planck and Einstein, Feynman and Everett.

And then the bulk of the book is actually about stuff with no direct relation to quantum computing at all—the origin of life, climate change, energy generation, cancer, curing aging, etc.—except with ungrounded speculations tacked onto the end of each chapter about how quantum computers will someday revolutionize all of this. Personally, I’d say that

  1. Quantum simulation speeding up progress in biochemistry, high-temperature superconductivity, and the like is at least plausible—though very far from guaranteed, since one has to beat the cleverest classical approaches that can be designed for the same problems (a point that Kaku nowhere grapples with).
  2. The stuff involving optimization, machine learning, and the like is almost entirely wishful thinking.
  3. Not once in the book has Kaku even mentioned the intellectual tools (e.g., looking at actual quantum algorithms like Grover’s algorithm or phase estimation, and their performance on various tasks) that would be needed to distinguish 1 from 2. (For a taste of what that looks like, see the sketch just below this list.)
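
Here’s the sketch promised above: Grover search, simulated by brute force (my illustration, in no way drawn from the book). The thing to notice is the honest version of the story: ~√N oracle calls instead of N, a quadratic speedup rather than an exponential one.

    import numpy as np

    n, marked = 8, 123
    N = 2 ** n
    state = np.full(N, 1 / np.sqrt(N))    # uniform superposition over all 2^n items

    # Grover iteration: flip the marked amplitude, then invert about the mean.
    # ~(pi/4)*sqrt(N) iterations suffice: a quadratic speedup over the ~N/2
    # queries of classical brute-force search, and provably no better.
    iterations = int(np.pi / 4 * np.sqrt(N))
    for _ in range(iterations):
        state[marked] *= -1               # oracle marks the answer
        state = 2 * state.mean() - state  # diffusion (inversion about the mean)

    print(iterations, round(float(state[marked] ** 2), 4))   # 12 iterations, success prob ~0.9999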

In his acknowledgments section, Kaku simply lists a bunch of famous scientists he’s met in his life—Feynman, Witten, Hawking, Penrose, Brian Greene, Lisa Randall, Neil deGrasse Tyson. Not a single living quantum computing researcher is acknowledged, not one.

Recently, I’d been cautiously optimistic that, after decades of overblown headlines about “trying all answers in parallel,” “cracking all known codes,” etc., the standard for quantum computing popularization was slowly creeping upward. Maybe I was just bowled over by this recent YouTube video (“How Quantum Computers Break the Internet… Starting Now”), which despite its clickbait title and its slick presentation, miraculously gets essentially everything right, shaming the hypesters by demonstrating just how much better it’s possible to do.

Kaku’s slapdash “book,” and the publicity campaign around it, represent a noxious step backwards. The wonder of it, to me, is that Kaku holds a PhD in theoretical physics. And yet the average English major who’s written a “what’s the deal with quantum computing?” article for some obscure link aggregator site has done a more careful and honest job than Kaku has. That’s setting the bar about a millimeter off the floor. I think the difference is that the English major, at least, knows they’re supposed to call an expert or two when writing about an enormously complicated subject of which they’re completely ignorant.


Update: I’ve now been immersed in the AI safety field for one year, yet I wouldn’t consider myself nearly ready to write a book on the subject. My knowledge of related parts of CS, my year studying AI in grad school, and my having created the subject of computational learning theory of quantum states would all be relevant but totally insufficient. And AI safety, for all its importance, has less in the way of difficult-to-understand concepts and results that basically everyone in the field agrees about than quantum computing does. And if I did someday write such a book, I’d be pretty terrified of getting stuff wrong, and would have multiple expert colleagues read drafts.

In case this wasn’t clear enough from my post, Kaku appears to have had zero prior engagement with quantum computing, and also to have consulted zero relevant experts who could’ve fixed his misconceptions.

Cargo Cult Quantum Factoring

Wednesday, January 4th, 2023

Just days after we celebrated my wife’s 40th birthday, she came down with COVID, meaning she’s been isolating and I’ve been spending almost all my time dealing with our kids.

But if experience has taught me anything, it’s that the quantum hype train never slows down. In the past 24 hours, at least four people have emailed to ask me about a new paper entitled “Factoring integers with sublinear resources on a superconducting quantum processor.” Even the security expert Bruce Schneier, while skeptical, took the paper surprisingly seriously.

The paper claims … well, it’s hard to pin down what it claims, but it’s certainly given many people the impression that there’s been a decisive advance on how to factor huge integers, and thereby break the RSA cryptosystem, using a near-term quantum computer. Not by using Shor’s Algorithm, mind you, but by using the deceptively similarly named Schnorr’s Algorithm. The latter is a classical algorithm based on lattices, which the authors then “enhance” using the heuristic quantum optimization method called QAOA.

For those who don’t care to read further, here is my 3-word review:

No. Just No.

And here’s my slightly longer review:

Schnorr ≠ Shor. Yes, even when Schnorr’s algorithm is dubiously “enhanced” using QAOA—a quantum algorithm that, incredibly, for all the hundreds of papers written about it, has not yet been convincingly argued to yield a speedup for any problem whatsoever, besides (as it were) the problem of reproducing its own pattern of errors. (One possible recent exception is the work of Sami Boulebnane and Ashley Montanaro.)

In the new paper, the authors spend page after page saying-without-saying that it might soon become possible to break RSA-2048, using a NISQ (i.e., non-fault-tolerant) quantum computer. They do so via two time-tested stratagems:

  1. the detailed exploration of irrelevancies (mostly, optimization of the number of qubits, while ignoring the number of gates), and
  2. complete silence about the one crucial point.

Then, finally, they come clean about the one crucial point in a single sentence of the Conclusion section:

It should be pointed out that the quantum speedup of the algorithm is unclear due to the ambiguous convergence of QAOA.

“Unclear” is an understatement here. It seems to me that a miracle would be required for the approach here to yield any benefit at all, compared to just running the classical Schnorr’s algorithm on your laptop. And if the latter were able to break RSA, it would’ve already done so.

All told, this is one of the most actively misleading quantum computing papers I’ve seen in 25 years, and I’ve seen … many. Having said that, this actually isn’t the first time I’ve encountered the strange idea that the exponential quantum speedup for factoring integers, which we know about from Shor’s algorithm, should somehow “rub off” onto quantum optimization heuristics that embody none of the actual insights of Shor’s algorithm, as if by sympathetic magic. Since this idea needs a name, I’d hereby like to propose Cargo Cult Quantum Factoring.

And with that, I feel I’ve adequately discharged my duties here to sanity and truth. If I’m slow to answer comments, it’ll be because I’m dealing with two screaming kids.