More on whether useful quantum computing is “imminent”

These days, the most common question I get goes something like this:

A decade ago, you told people that scalable quantum computing wasn’t imminent. Now, though, you claim it plausibly is imminent. Why have you reversed yourself??

I appreciated the friend of mine who paraphrased this as follows: “A decade ago you said you were 35. Now you say you’re 45. Explain yourself!”


A couple weeks ago, I was delighted to attend Q2B in Santa Clara, where I gave a keynote talk entitled “Why I Think Quantum Computing Works” (link goes to the PowerPoint slides). This is one of the most optimistic talks I’ve ever given. But mostly that’s just because, uncharacteristically for me, here I gave short shrift to the challenge of broadening the class of problems that achieve huge quantum speedups, and just focused on the experimental milestones achieved over the past year. With every experimental milestone, the little voice in my head that asks “but what if Gil Kalai turned out to be right after all? what if scalable QC wasn’t possible?” grows quieter, until now it can barely be heard.

Going to Q2B was extremely helpful in giving me a sense of the current state of the field. Ryan Babbush gave a superb overview (I couldn’t have improved a word) of the current status of quantum algorithms, while John Preskill’s annual where-we-stand talk was “magisterial” as usual (that’s the word I’ve long used for his talks), making mine look like just a warmup act for his. Meanwhile, Quantinuum took a victory lap, boasting of their recent successes in a way that I considered basically justified.


After returning from Q2B, I then did an hour-long podcast with “The Quantum Bull” on the topic “How Close Are We to Fault-Tolerant Quantum Computing?” You can watch it here:

As far as I remember, this is the first YouTube interview I’ve ever done that concentrates entirely on the current state of the QC race, skipping any attempt to explain amplitudes, interference, and other basic concepts. Despite (or conceivably because of?) that, I’m happy with how this interview turned out. Watch if you want to know my detailed current views on hardware—as always, I recommend 2x speed.

Or for those who don’t have the half hour, a quick summary:

  • In quantum computing, there are the large companies and startups that might succeed or might fail, but are at least trying to solve the real technical problems, and some of them are making amazing progress. And then there are the companies that have optimized for doing IPOs, getting astronomical valuations, and selling a narrative to retail investors and governments about how quantum computing is poised to revolutionize optimization and machine learning and finance. Right now, I see these two sets of companies as almost entirely disjoint from each other.
  • The interview also contains my most direct condemnation yet of some of the wild misrepresentations that IonQ, in particular, has made to governments about what QC will be good for (“unlike AI, quantum computers won’t hallucinate because they’re deterministic!”).
  • The two approaches that had the most impressive demonstrations in the past year are trapped ions (especially Quantinuum but also Oxford Ionics) and superconducting qubits (especially Google but also IBM), and perhaps also neutral atoms (especially QuEra but also Infleqtion and Atom Computing).
  • Contrary to a misconception that refuses to die, I haven’t dramatically changed my views on any of these matters. As I have for a quarter century, I continue to profess a lot of confidence in the basic principles of quantum computing theory worked out in the mid-1990s, and I also continue to profess ignorance of exactly how many years it will take to realize those principles in the lab, and of which hardware approach will get there first.
  • But yeah, of course I update in response to developments on the ground, because it would be insane not to! And 2025 was clearly a year that met or exceeded my expectations on hardware, with multiple platforms now boasting >99.9% fidelity two-qubit gates, at or above the theoretical threshold for fault-tolerance. This year updated me in favor of taking more seriously the aggressive pronouncements—the “roadmaps”—of Google, Quantinuum, QuEra, PsiQuantum, and other companies about where they could be in 2028 or 2029.
  • One more time for those in the back: the main known applications of quantum computers remain (1) the simulation of quantum physics and chemistry themselves, (2) breaking a lot of currently deployed cryptography, and (3) eventually, achieving some modest benefits for optimization, machine learning, and other areas (but it will probably be a while before those modest benefits win out in practice). To be sure, the detailed list of quantum speedups expands over time (as new quantum algorithms get discovered) and also contracts over time (as some of the quantum algorithms get dequantized). But the list of known applications “from 30,000 feet” remains fairly close to what it was a quarter century ago, after you hack away the dense thickets of obfuscation and hype.

I’m going to close this post with a warning. When Frisch and Peierls wrote their now-famous memo in March 1940, estimating the mass of Uranium-235 that would be needed for a fission bomb, they didn’t publish it in a journal, but communicated the result through military channels only. As recently as February 1939, Frisch and Meitner had published in Nature their theoretical explanation of recent experiments, showing that the uranium nucleus could fission when bombarded by neutrons. But by 1940, Frisch and Peierls realized that the time for open publication of these matters had passed.

Similarly, at some point, the people doing detailed estimates of how many physical qubits and gates it’ll take to break actually deployed cryptosystems using Shor’s algorithm are going to stop publishing those estimates, if for no other reason than the risk of giving too much information to adversaries. Indeed, for all we know, that point may have been passed already. This is the clearest warning that I can offer in public right now about the urgency of migrating to post-quantum cryptosystems, a process that I’m grateful is already underway.


Update: Someone on Twitter who’s “long $IONQ” says he’ll be posting about and investigating me every day, never resting until UT Austin fires me, in order to punish me for slandering IonQ and other “pure play” SPAC IPO quantum companies. And also, because I’ve been anti-Trump and pro-Biden. He confabulates that I must be trying to profit from my stance (eg by shorting the companies I criticize), it being inconceivable to him that anyone would say anything purely because they care about what’s true.

74 Responses to “More on whether useful quantum computing is “imminent””

  1. Michael Marthaler Says:

    Do you know if the Q2B talks will also be available online?

  2. Scott Says:

    Michael #1: At least some of them, I think, but not sure when.

  3. Soatok Says:

    This is the clearest warning that I can offer in public right now about the urgency of migrating to post-quantum cryptosystems, a process that I’m grateful is already underway.

    I want to share something I read a while back: Quantum is unimportant to post-quantum. The author argues that adopting post-quantum cryptography is valuable even if it turns out that quantum computers are infeasible to build in our lifetimes. And, therefore, resting on one’s laurels with the PQ rollout isn’t a good idea even if you’re highly skeptical.

    For my part, I’ve been advocating for X-Wing for ActivityPub E2EE efforts as the default KEM for encrypting private messages.
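
    In case the “hybrid” part is unfamiliar: a hybrid KEM derives the session key from both a classical and a post-quantum shared secret, so an attacker has to break both. Below is a toy Python sketch of that idea only; it is not the actual X-Wing construction, and the two encapsulation functions are random-byte stand-ins rather than real library calls.

      import hashlib
      import os

      # Stand-ins for the two real KEMs. In a genuine deployment these would be
      # X25519 key agreement and an ML-KEM encapsulation; here each just returns
      # a (ciphertext, shared_secret) pair of random bytes so the sketch runs.
      def classical_kem_encaps(recipient_pk):
          return os.urandom(32), os.urandom(32)

      def pq_kem_encaps(recipient_pk):
          return os.urandom(1088), os.urandom(32)

      def hybrid_encaps(classical_pk, pq_pk):
          ct_c, ss_c = classical_kem_encaps(classical_pk)
          ct_pq, ss_pq = pq_kem_encaps(pq_pk)
          # Hash both secrets (plus transcript material) into the final key, so it
          # stays secure as long as *either* underlying KEM remains unbroken.
          key = hashlib.sha3_256(ss_pq + ss_c + ct_c + classical_pk + b"toy-hybrid-v1").digest()
          return (ct_c, ct_pq), key

      (ct_c, ct_pq), key = hybrid_encaps(os.urandom(32), os.urandom(1184))
      print("derived session key:", key.hex())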

  4. foo Says:

    As a complete outsider, my takeaway is “I can ignore the hype about QC for a few more years”, and I’ll take that any day. 🙂

    Out of curiosity, do you have any opinion about French QC startups (Pasqal, Quandela)? They tend to have serious scientists as founders, but of course that’s usually not enough.

  5. Anon Says:

    educating policy makers becomes more important as we get closer to useful quantum computers

    you may like the following talk.

    https://www.youtube.com/live/t5HBhE-Q5xA

    the speakers emphasized a few important aspects:

    1. need more research into the safety of the current post-quantum crypto mechanisms; we might not be on very solid ground
    2. need to make crypto based systems more modular so if we notice one is broken, we can quickly and easily replace it
    3. need more funding for computer science research to find new quantum algorithms, to make quantum computers more useful (also in your talk, but I liked how they framed it not just as the statement of the current state but as an opportunity for an area that is not getting enough research funding compared to quantum hardware)

    the note that the number of operations a quantum computer can perform will be much smaller than for current classical ones had a striking sharpness. I am going to use that when I try to explain that quantum computers are not really faster classical computers, but rather different beasts that are actually slower but can do new kinds of operations, and that to make them actually useful we need new algorithms that exploit those new operations

  6. Johnny D Says:

    Scott, this is an argument for QC failure. This post seems like the place for it. Here is an argument that assumes quantum theory is correct and even fundamentally reality is a QC gauge theory.

    In quantum circuit gauge theory, the circuit has 2 types of size, number of logical qubits and amount of entanglement amongst the logical qubits. Fixing entanglement and increasing qubits can give EFT. Fixing qubits and increasing entanglement gives gravity with strength determined by the specifics of the theory.

    In the semantics of error correcting quantum computing, logical qubits exist in a background gauge theory. For universal QC, the logical qubits are defects in the gauge theory that are degenerate vacuum states. This means that in the semantic gauge theory, there is no cost to transport defects around each other to do computation. Thus the semantic gravitation emerges with zero strength and can be ignored even for highly entangled states.

    At the physical qubit level, states always have energy differentials. They are not degenerate states. That is what makes them stable enough to define. It takes energy to move entanglement around. If reality is a circuit gauge theory, then large entanglement amongst these physical qubit states does have gravitational effects with a nonzero strength. Why can this be ignored? Does it put an entanglement limit on QC? I think the effect would show up in the Born rule for the error correcting code, as there would be spurious correlations not accounted for in the QECC.

  7. Scott Says:

    Johnny D #6: I was barely able to make sense of your argument—maybe someone else will have better luck than me.

    What I can tell you is that people have estimated gravitational sources of decoherence, and they are utterly negligible compared to more prosaic sources of decoherence. As in, absent some shocking new development in physics, your QC would need to literally be on astronomical scale before they were relevant—and even then, there’s no reason why quantum error correction couldn’t handle it the same as it handles any other small decoherence source. I don’t see how you could get any other answer without a huge change to our understanding of QM.

    And certainly the experimental successes of the past couple years, which I covered in my talk and interview, have revealed no novel kinds of decoherence, only the prosaic kinds that we know how to handle, up to the scale of hundreds of qubits and thousands of operations. Do you have a prediction for the scale at which your conjectured new effect would become relevant?

  8. Johnny D Says:

    Scott #7: This is not decoherence. I am assuming the QECC works perfectly to deal with decoherence. This is about how gravity emerges from quantum systems. This is about the QECC semantics of vacuum degeneracy not being a valid assumption at the physical-qubit level, where the entanglement syntactically (physically) is. I think the estimate would depend on energy differentials of the physical qubits. I will think about an estimate, but I wouldn’t hold my breath.

  9. Johnny D Says:

    Scott #7: Sorry, I didn’t notice your comment on the understanding of quantum mechanics. This is not about quantum theory being wrong. This is about QECC ignoring this effect.

  10. Scott Says:

    Johnny D #8: Alright then, if you ever come up with a more concrete prediction for what we’ll see that differs from the standard predictions of QC theory (and when), feel free to share it here … but I’ll follow your instructions and not hold my breath! 😀

  11. Topics Everyone Is Talking About No337 - x321.org Says:

    […] ⚛️ Is Practical Quantum Computing Finally Near? A thoughtful and balanced reflection on the state of quantum computing—combining scientific rigor with a realistic view of its current limitations and potential. Scott Aaronson revisits the question of whether practical quantum computing is close at hand, following insights from the Q2B conference. He notes impressive progress from Google, Quantinuum, and QuEra, with qubit fidelity surpassing fault-tolerance thresholds. While confident in the robustness of quantum theory, he remains skeptical of overhyped claims and warns that analyses of cryptography-breaking potential may soon be restricted due to growing security risks. 🔗 Read more 🔗 […]

  12. Aha! Says:

    “unlike AI, quantum computers won’t hallucinate because they’re deterministic”
    A complexity theorist saying QCs are deterministic is a blasphemy of the highest order.

  13. Scott Says:

    Aha! #12: Can you read? De Masi, who I was quoting, is the CEO of IonQ, not a complexity theorist.

  14. Johnny D Says:

    Scott #7: Challenge accepted! For the case of trapped ions, where you use the spin of the electron as a physical qubit: An electron is a fundamental particle in gauge theory. Its EM interactions are fundamental gauge interactions. If you believe in holography and a finite dof density, then there is a fundamental 2d holographic gauge circuit from which electrons and EM emerge in the bulk. In the 2d theory, the entangled state is created by moving gauge defects around each other. ‘Around each other’ is well-defined because it is 2d. For massive particles in the bulk, the 2d theory has a ‘friction’ (gauge invariants in Wilson loops) that the defects feel as they move around each other. This emerges in the bulk, giving rise to energies and distances. So each entangling 2-qubit gate contributes to this effect in the bulk. The contribution is proportional to its length and energy differential, which emerges as ~ (delta E)*L, where delta E is the energy differential in the bulk qubit and L is the distance between the qubits in the bulk. Ideally you want small distance and small energy differences, but that leads to issues with the QC in the physical qubits. The total contribution in the bulk is (delta E)*(sum over entangling gates of L). This has units of area in the bulk which translates to the number of qubits in 2d. Looking at this effect per 2d qubit is (ave L)*(delta E)*n where n is the complexity. I would guess that when this number, which is dimensionless in Planck units, is of order 1 an effect would be seen. Doing some arithmetic shows this might be at a complexity of thousands? It depends on the design.

  15. Scott Says:

    Johnny D #14: Google and Quantinuum and QuEra have already demonstrated circuits with several thousand operations, and haven’t detected any hint of a deviation from the usual predictions of QM (if they had, it would’ve been the biggest news in science). At what scale would you agree your hypothesis has been falsified?

  16. Craig Gidney Says:

    Scott #15: I would be cautious about this reasoning. Yes, existing experiments haven’t revealed any surprises. But if there is a surprise waiting, I don’t think existing experiments are particularly likely to have caught it.

    For example, consider figure 3a of https://arxiv.org/pdf/2408.13687v1#page=5 . It shows a noise floor (attributed to high energy events like cosmic rays) in rep code memory circuits. Specifically, it shows the noise floor improving from 1e-6 to 1e-10 due to the introduction of gap engineering. Now notice that the surface code memory experiment in the same paper has per-round error rates around 1e-3. That error rate would make it very hard to notice even the original 1e-6 floor (you could perhaps notice errors being slightly too bursty). The expected noise is high enough that it would mask the unexpected noise. This is why I say it’s plausible we wouldn’t know about hypothetical nasty stuff yet.

    Personally, I intend to declare victory over Kalai and other skeptics somewhere around 1e-12 error per logical Clifford. I want it to be legitimately hard to see expected failures, so that the unexpected failures would be blatant. I don’t want a complicated victory, I want a “don’t even need to do statistics” victory (as in https://xkcd.com/2400/). And I think there’s a >90% chance of that being achieved before 2030 by at least one quantum computing group.
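
    To give a feel for what a 1e-12 target means in hardware terms, here is a back-of-envelope sketch using the standard surface-code suppression heuristic; the prefactor, physical error rate, and threshold below are illustrative assumptions of mine, not measured numbers.

      # Rough heuristic for surface-code logical error suppression:
      #     p_logical(d) ~ A * (p_phys / p_th) ** ((d + 1) / 2)
      # Question: what code distance d pushes the expected logical error rate to ~1e-12?

      A = 0.1          # assumed prefactor
      p_phys = 1e-3    # assumed physical error rate per operation
      p_th = 1e-2      # assumed threshold
      target = 1e-12

      d = 3
      while A * (p_phys / p_th) ** ((d + 1) / 2) > target:
          d += 2       # surface-code distances are odd

      p_logical = A * (p_phys / p_th) ** ((d + 1) / 2)
      qubits = 2 * d * d   # rough physical-qubit count (data + ancilla) per logical qubit
      print(f"distance {d}: p_logical ~ {p_logical:.1e}, ~{qubits} physical qubits per logical qubit")

    Under those assumptions it lands around distance 21, i.e., on the order of a thousand physical qubits per logical qubit.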

  17. QC+Thermodynamics Says:

    If the 2nd law of thermodynamics is violated, what would improve in quantum mechanics? Notice I am not asking about a failure of quantum mechanics.

  18. Dan Says:

    Scott, regarding the phrase “the simulation of quantum physics and chemistry themselves”- I get the impression that the QC community actually benefits from the undefined nature of the word “simulation.”

    In most (if not all) industrially interesting chemistry calculations, the properties of interest are static (i.e., ground energies). Yet, likely for business and marketing reasons, algorithms aiming to estimate these static properties are still labeled “simulations.” In physics, when we speak of “simulation,” we usually mean “time evolution” – advancing the system in time and measuring it eventually.

    While “time evolution” offers an exponential speedup via QC, inferring static properties in practical scenarios does not, as far as I’m aware. (I am not sure there is a known consequential speedup there compared to classical computing at all in practical scenarios). Unfortunately, strict “time evolution” doesn’t seem to have major consequences for industrial chemistry yet, even if it has significant research implications.

    The confusion deepens because the two goals sometimes coincide: in some algorithms for estimating ground energies, people use time evolution as a subroutine (though that doesn’t necessarily guarantee an advantage for the overall task).

    What do you think about this? Perhaps the community would benefit from separating these terms? One could be “time evolution” – acknowledged as possessing speedup – and the second “static estimation” (or similar), where the speedup should be considered hypothetical, just as with other applications? I would love to hear your thoughts.

  19. QMAnon Says:

    “Similarly, at some point, the people doing detailed estimates of how many physical qubits and gates it’ll take to break actually deployed cryptosystems using Shor’s algorithm are going to stop publishing those estimates, if for no other reason than the risk of giving too much information to adversaries.”

    I must be missing something about the “adversaries”:
    how could an organization capable of building an actual QC be totally clueless about using it, in particular implementing Shor on it?
    Given a QC that’s large enough, it shouldn’t be that hard to implement Shor on it to factor the number 15, and then figure out by extrapolation the number of physical qubits that particular QC would require to factor a 128-bit key, no?
    And then if they manage to factor 15, the magic is in scaling the system incrementally and factoring bigger and bigger numbers, unlocking more and more useful capabilities on the way.

  20. Johnny D Says:

    The 10 year path to my comments.
    2015 – proved the following elementary result. Consider n random bits with values -1,1. If we assume symmetry under all permutations and let n get large, the gigantic space of all possible joint distributions reduces to just specifying the first 2 moments. All higher moments are functions of those. Assuming a fair bit, just the second moment. I immediately thought, this is gravity. This is why Newton’s gravity is the sum of 2 point forces with no 3, 4,… points. How to go from classical to quantum? Must be Born rule violation.
    2018 – July, 90 degrees, sitting in the front yard, listening to Susskind’s Gravity and Complexity at IAS. I can only describe my feeling as religious ecstasy. Susskind quantized the classical correlation result (my words, he never saw my work) to quantum entanglement. He believed.
    2020 – contemplated a Planck length as a qubit, either energy there or vacuum. Then I shifted the length by half its length and asked, could I find the energy in the intersection by making 2 observations and getting yes for both. The answer is no, these are noncommuting observables. But it got me thinking about states that live on the ends of Planck lengths, Planck squares and Planck cubes. I discovered the octonions there. I was not smart enough to understand nonassociativity. I was inspired by Cohl Furey and John Baez, who argued that division algebras held the key to particle physics. A second religion!
    2024 – awoke at 2am with epiphany on nonassociativity, after 3 months of hard work (I am not very fast), I understood the division algebras and lengths, squares and cubes. But how to quantize?
    2025 – awoke at 2am with epiphany on quantizing division algebras. The QDA turns out to be an accounting system for quantum entanglement. The religions merge. But where are particles??
    Very recently – found the grid to put QDA on, quantum division algebra space, QDAS. QDAS is a quantum circuit on a 2d grid. Realized it is a gauge theory, and QDAS was an accounting system for gauge theories, the correct gauge theories, particles!! Unification!! Holography!! The standard model and beyond, 3+1 spacetime with gravity. Susskind’s white whale.
    Yesterday – realized how gravity is a Born rule violation.
    QDAS is easily specified, I can put it in a comment or two. I am horrible at physics and a 2nd rate mathematician, don’t take my word on it. Do the calculations yourself. It’s fun!

  21. Johnny D Says:

    Scott #15: the violation is in the ECC. I think you need 1000s of physical gates supporting ECC, then the error is in the ECC. The whole ECC will run correctly; the error will show in violations of the ECC Born rule that show up as spurious correlations. So you might think a logical qubit measurement cannot be 10 due to the logical circuit you ran, but 10 shows up: not decoherence, not error, gravity!

  22. QMAnon Says:

    A QC is deterministic in the sense that factoring 15 on it using Shor, over and over, should always give the same answer of 3 and 5! 🙂

  23. Scott Says:

    Craig Gidney #16: I was careful to leave open the possibility of surprises that haven’t shown up with ~100-qubit, ~1000-gate circuits but that will show up later. I was simply trying to pin Johnny D down on, “if you’re so confident that gravity and holography set a limit to entanglement, tell me the scale at which it happens.” Alas, the more I tried, the more verbiage I got that didn’t make sense to me. It seems clear that a QC successfully running Shor with thousands of logical qubits would refute Johnny D’s hypothesis, but I still don’t know what’s the minimal thing that would refute his hypothesis.

  24. Scott Says:

    QMAnon #19: It’s crucial to understand here that QC is already being made available, and will continue to be made available, as a commercial cloud service to anyone in the world who wants it.

    So we end up with a really devilish “Know Your Customer” problem: if someone submits a quantum circuit, supposedly for condensed matter physics simulation, how can we tell whether that circuit actually conceals a cryptographically-relevant Shor’s algorithm in some obfuscated form?

    If we can’t tell, then it seems like the least we could do would be to avoid sharing with unsophisticated hackers, whatever the most sophisticated groups in the world might know about how to optimize the implementation of Shor’s algorithm for breaking ECDSA-256 and the like. (The general principles have of course long been out of the bag, but with the earliest capable devices, a lot will also come down to implementation details.)

  25. Scott Says:

    QMAnon #22:

      A QC is deterministic in the sense that factoring 15 on it using Shor, over and over, should always give the same answer of 3 and 5!

    That’s absolutely right (well, with probability arbitrarily close to 1 anyway), but that’s not the point at issue.

    Crucially, though, notice that the “determinism” here is not a feature of quantum computers, it’s merely a feature of Shor’s algorithm—and that that feature is shared by any algorithm (quantum or classical) with any provable performance guarantee whatsoever!

    Where I’d say that de Masi misled Congress was in giving them the impression that quantum computers could somehow help in solving the hallucination problems of current generative AI. I don’t have the slightest clue how they’d do that. (No, I can’t rule it out, but the speculation seems totally disconnected from any quantum algorithm I’ve ever heard of.)

    The truth actually seems closer to the opposite of the impression he tried to create. Namely that, because we don’t know how QCs can help that much with AI problems, we therefore fall back on quantum speedups for more specialized, structured mathematical tasks, which in his lingo would be the “deterministic” speedups.

  26. QMAnon Says:

    Scott #24
    I wondered about that, but typically online services provide free access only for toy-size problems; without a paywall, the offer would be dwarfed by the demand, so at a minimum there’s gotta be some kind of auction process to prioritize access. So I doubt some random hacker could compete for access with major institutions.

    It’s still the case that, for an actual commercial use, some adversary government could pose as an enterprise user and pay a big fee to factor a list of public keys.

    That said, in practice, adversary governments have been able to totally compromise actual systems (phones of presidents) without the need to break encryption.

  27. Scott Says:

    Dan #18: On the terminological question, it seems totally fine to me to call estimating ground state properties a subclass of “quantum simulation” — namely, the subclass where the quantum systems that we’re trying to simulate are just sitting there doing nothing! As you point out yourself, there’s significant overlap in methods between that and dynamical simulation anyway.

    On the substantive question, I agree with you that the prospects for quantum speedup seem to get better, the more “dynamical” is the problem of interest—and therefore, the more useless is Quantum Monte Carlo or other classical methods that are specialized for ground states. Even with ground states, though, I think there’s a real shot at quantum speedup, from first guessing an ansatz that has non-negligible overlap with the ground state you want, and then using adiabatic evolution or Grover iterations to get closer and closer to the true ground state. It’s just that it’s hard to make rigorous statements there, since a lot depends on how good the ansatz will be, in addition to how badly QMC will suffer from the sign problem.
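
    To make the overlap point concrete, here is a minimal numerical sketch (a toy model of my own choosing, which ignores how the amplification toward the ground state would actually be implemented):

      import numpy as np

      # 6-qubit transverse-field Ising chain; the ansatz is the trivial all-|+> product
      # state. The number of Grover-style amplification rounds needed scales like
      # 1/|<ansatz|ground>|, so a non-negligible overlap is what makes the
      # guess-then-amplify strategy viable at all.

      I2 = np.eye(2)
      X = np.array([[0., 1.], [1., 0.]])
      Z = np.array([[1., 0.], [0., -1.]])

      def op_on(site_ops, n):
          """Tensor product with the given single-qubit operators, identity elsewhere."""
          out = np.array([[1.0]])
          for i in range(n):
              out = np.kron(out, site_ops.get(i, I2))
          return out

      def ising(n, g):
          H = np.zeros((2**n, 2**n))
          for i in range(n - 1):
              H -= op_on({i: Z, i + 1: Z}, n)
          for i in range(n):
              H -= g * op_on({i: X}, n)
          return H

      n, g = 6, 1.5
      energies, states = np.linalg.eigh(ising(n, g))
      ground = states[:, 0]                      # true ground state

      ansatz = np.full(2**n, 2.0 ** (-n / 2))    # the |+>^n guess
      overlap = abs(ansatz @ ground)
      rounds = np.pi / (4 * overlap)
      print(f"|<ansatz|ground>| = {overlap:.3f}, ~{rounds:.0f} amplification rounds needed")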

  28. Scott Says:

    QC+Thermodynamics #17:

      If the 2nd law of thermodynamics is violated, what would improve in quantum mechanics? Notice I am not asking about a failure of quantum mechanics.

    The Second Law is just a mathematical consequence of the reversibility of the fundamental time evolution laws, together with the special, low-entropy initial state that the universe had at the Big Bang.

    So a detailed answer to your question would depend entirely on what more fundamental change to physics causes the Second Law to be violated: are closed timelike curves discovered? Does unitarity break down?

    In any case, though, it’s hard for me to imagine an answer that wouldn’t appear to us as a breakdown of QM, with consequences for everything just as enormous as you’d imagine.
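
    If a toy illustration helps: in the throwaway example below, the dynamics are exactly reversible and the only special ingredient is a low-entropy starting condition, yet the coarse-grained entropy rises anyway, and flipping all the velocities marches it right back down.

      import numpy as np

      rng = np.random.default_rng(1)

      # 2000 free particles bouncing in a 1-D box [0, 1], all starting in the
      # leftmost tenth (the "special low-entropy initial state").
      N, n_cells, dt, steps = 2000, 20, 0.01, 300
      x = rng.uniform(0.0, 0.1, N)
      v = rng.normal(0.0, 1.0, N)

      def coarse_entropy(x):
          counts, _ = np.histogram(x, bins=n_cells, range=(0.0, 1.0))
          p = counts[counts > 0] / N
          return float(-(p * np.log(p)).sum())   # maximum possible value is ln(n_cells)

      def step(x, v):
          x += v * dt
          hi, lo = x > 1.0, x < 0.0              # reflecting walls (reversible)
          x[hi] = 2.0 - x[hi]; v[hi] *= -1.0
          x[lo] = -x[lo];      v[lo] *= -1.0

      for t in range(steps):
          if t % 100 == 0:
              print(f"t = {t*dt:4.1f}   S = {coarse_entropy(x):.3f}   (S_max = {np.log(n_cells):.3f})")
          step(x, v)
      print(f"t = {steps*dt:4.1f}   S = {coarse_entropy(x):.3f}")

      # Time-reverse: flip every velocity and run the *same* dynamics again;
      # the gas retraces its history back to the special low-entropy start.
      v *= -1.0
      for t in range(steps):
          step(x, v)
      print(f"after reversal: S = {coarse_entropy(x):.3f}   (initial S was ~{np.log(2):.3f})")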

  29. Scott Says:

    QMAnon #26:

      That said, in practice, adversary governments have been able to totally compromise actual systems (phones of presidents) without the need to break encryption.

    Yes, of course—I always make that point when I teach about crypto!

    On the other hand, we know that those sorts of attacks can’t always work, because if they did, the NSA and GCHQ and similar agencies around the world wouldn’t have large budgets devoted to cryptanalysis.

  30. Vadim Says:

    Heh, investors are a fragile lot, and I say that as an investor. Just try saying something remotely critical, even merely praise that’s too faint, about Tesla, I dare ya. Sadly, snake oil sells, and the intersection of quantum and AI is the frontier of snake oil salesmanship. Just wait until some new company manages to work “fusion” into the sales pitch.

  31. QMAnon Says:

    Since the topic of our understanding of QM came up, a naive question: in all those experiments related to entanglement, where Alice and Bob each have one half of an entangled pair, and then Bob does some measurement, and it affects Alice one way or another… what is the role of the distance between Bob and Alice? Does it ever matter whether it’s 10 meters or 10 light years? Does Alice have to wait 10 years for her system to be affected by Bob’s measurement in the latter case?
    I’m trying to understand if the so-called collapse of the wave function (or decoherence, for many-worlds) can only propagate at a finite speed c, like everything else, or whether no one knows because there’s no known experiment to test it (and the finite speed of information propagation always gets in the way).

  32. Dan Says:

    Scott #27:
    While the terminology is technically correct, I believe it is our responsibility as scientists to use language that a non-technical audience can interpret correctly. I believe this point is important to you as well. If this terminology is easily manipulated, I think that alone is a strong argument for replacing it, don’t you think?

    Regarding the algorithms themselves: Sure, there could be a speed-up for estimating ground states. However, I have a feeling that in cases where we have a good ansatz, we could likely exploit it for a very good classical algorithm as well. Most importantly, the state of these algorithms is no different than in any other field, which brings me back to the point in my first paragraph.

  33. Scott Says:

    QMAnon #31: In quantum mechanics, there’s no speed or distance limit whatsoever for entanglement, and experiments have directly confirmed that, showing that if there was a speed limit, it would need to be many times the speed of light.

    But the subtlety is that you can’t use entanglement to send a message faster than light — if you could, QM would just flat-out break special relativity! You can “only” use entanglement to produce non-classical correlations, as in the Bell experiment. Along with the nature of quantum computing speedup, this is one of the things almost every popularizer in history has mangled.
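
    For concreteness, here is a tiny calculation with the textbook singlet-state statistics (standard formulas, not tied to any particular experiment): the CHSH value exceeds the classical bound of 2, yet Alice’s marginal statistics never depend on Bob’s setting, which is exactly why no message gets sent.

      import numpy as np

      # Singlet statistics for measurement angles a (Alice) and b (Bob):
      #     P(x, y | a, b) = (1 - x*y*cos(a - b)) / 4,   with outcomes x, y in {+1, -1}

      def P(x, y, a, b):
          return (1 - x * y * np.cos(a - b)) / 4

      def E(a, b):   # correlation <x*y>
          return sum(x * y * P(x, y, a, b) for x in (1, -1) for y in (1, -1))

      a0, a1 = 0.0, np.pi / 2
      b0, b1 = np.pi / 4, 3 * np.pi / 4
      chsh = abs(E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1))
      print(f"CHSH = {chsh:.4f}  (classical bound 2, quantum bound {2*np.sqrt(2):.4f})")

      # No-signaling: Alice's marginal is 1/2 no matter which angle Bob measures.
      for b in (b0, b1):
          marginal = P(+1, +1, a0, b) + P(+1, -1, a0, b)
          print(f"P(Alice sees +1 | Bob's angle = {b:.3f}) = {marginal:.3f}")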

  34. QMAnon Says:

    Thank you, Scott!

    By the way, I thought that bitcoin/blockchain was safe from Shor’s prime factorization because it uses Elliptic Curve…, but not according to google’s AI (I think it’s hallucinating)

    “How Shor’s Algorithm Impacts Bitcoin

    Public-Key Cryptography:
    Bitcoin uses the Elliptic Curve Digital Signature Algorithm (ECDSA) to secure addresses and sign transactions. ECDSA’s security relies on the difficulty of the elliptic curve discrete logarithm problem (ECDLP), which is mathematically related to the prime factorization problem that classical computers find nearly impossible to solve for large numbers.

    Deriving Private Keys:
    Shor’s algorithm, when run on a cryptographically relevant quantum computer (CRQC), can solve the ECDLP exponentially faster than any classical supercomputer. This would allow an attacker to derive the private key from a public key exposed on the blockchain during a transaction, effectively granting them control over the associated funds.”

  35. Scott Says:

    QMAnon #34: No, Shor’s algorithm completely breaks elliptic curve crypto, because it’s still based on abelian groups (Dan Boneh wrote a note pointing that out in 1995). Our best candidates for quantum-resistant public-key crypto are generally based on problems from lattices and coding theory (which, by Oded Regev’s work, can be related to finding hidden structure in the non-abelian dihedral group).

  36. Scott Says:

    Dan #32: I do think the prospects for quantum speedups for ground state estimation problems are materially better than those for (eg) classical ML and finance and other areas where people constantly talk about quantum speedups. With ground state problems, at least we understand why there could be a speedup: because you can guess an ansatz and then improve it, while classical methods will be exponentially slower unless QMC, or something else that’s at least somewhat nontrivial and clever, works. That creates room for many shots on goal. With classical ML, by contrast, the hypesters often babble incoherently when I ask them where their speedup would be coming from in the best case, other than from Grover.

    On terminology, what do you want us to do: talk about “dynamical simulation” and “ground state estimation” as two separate things, and avoid ever classing them together as two subcategories of quantum simulation?

  37. QMAnon Says:

    Scott #35

    Thanks for clarifying, because I thought the Google AI kept flip-flopping between yes and no, but I guess I was missing some subtlety in the details (between strict prime factorization and the extension of Shor’s)

    “No, Elliptic Curve Cryptography (ECC) is not based on the prime factorization problem, and therefore cannot be broken by algorithms (classical or quantum) designed specifically for prime factorization.”

    “Yes, Elliptic Curve Cryptography (ECC) is vulnerable to Shor’s algorithm, though it relies on solving the harder Elliptic Curve Discrete Logarithm Problem (ECDLP) rather than integer factorization, which Shor’s algorithm was originally designed for.”

  38. QMAnon Says:

    I’m not an expert, but the fact that Bitcoin is vulnerable means that even if we update the cryptography for new transactions, the validity of the existing historical blockchain is vulnerable (someone could easily rewrite its history).
    That’s a much bigger problem than securing user passwords with a new encryption scheme.

  39. Dan Says:

    Scott #36:
    I believe there is a hierarchy of expectations here, and our terminology needs to reflect that.
    Let’s assume a QC will be built according to current published roadmaps (and I do believe one will be built, more or less according to plan). Given this assumption, in my view, the hierarchy of quantum algorithms should reflect the distinction between what we actually expect and what we merely hope for. We need to be explicit about the measure of that “hope”.

    If we were to order this hierarchy:

    1. Known, proven exponential speedups: Shor’s algorithm and dynamical simulations. Both of these have clear utility for some people.

    2. Hopeful practical speedup (Ground State Estimation): We are hoping for this because we know how to get an exponential speed-up if we have a good ansatz. We currently lack proof that we have any good ansatz, and we have no rigorous comparison to classical algorithms in relevant use cases.

    3. Hopeful practical speedup (Grover): We are hoping that, eventually, we could somehow push the logical clock rate far beyond 1 MHz, or find problems (apart from cryptanalysis) where the (fantastic) expected logical clock rate of 1 MHz for SC qubits is somehow relevant. We don’t have any published plan for how to do that, or even whether it’s possible.

    4. The rest of the babble, as you put it.

    In my opinion, the qualitative gap between Point 1 and the rest has significant consequences from both a public and business perspective; therefore, we should adopt terminology that reflects this reality.

  40. Del Says:

    QMAnon #31 and Scott #33

    Obviously Scott’s answer is the established one and there is nothing to “correct” there.
    However, that answer has a problem which troubled, and continues to trouble, generations of physicists starting from Einstein: local realism seems impossible in quantum mechanics (which in most interpretations is a non-local theory, though with the severe limitation that its nonlocality cannot be exploited for distant communications).

    Many-worlds interpretations seemed to have an even worse nonlocality problem, with a potentially uncountable (but certainly infinite) number of universes spawning continuously into existence at each measurement (i.e., each local measurement causing a cloning of the entire universe even at formidable distances from the measurement location).

    This paper and its associated cartoonish poster seem to have solved the problem, making the many-worlds interpretation slightly different in practice but tremendously different in principle, and removing the need for both any non-locality and any non-realism. And no, it’s not one of those loons; the author did find a loophole previously overlooked.

    Like all such papers it does not make any useful prediction different from the Copenhagen interpretation, yet I think it’s one of the least mind-bending of them all, IMHO (I mean, it’s still QM, so it’s still mind-bending, but not as badly as the alternatives).

    The poster can be read and understood in a few minutes; the paper takes a bit longer, but not much.

    https://www.mdpi.com/1099-4300/21/1/87

  41. Mitchell Porter Says:

    Johnny D #6 #8 #9 #14 #20

    So, your idea is that fundamental physics is equivalent to some kind of quantum circuit model, and that this underlying physics prevents quantum computing from scaling for some reason.

    I am interested in understanding your idea. I am reasonably conversant with how the forces are described in quantum field theory and string theory, and also with what people are trying to do in a variety of other approaches. (To some extent, your comments are reminiscent of Xiao-gang Wen’s “string nets”.)

    I am hopefully capable of understanding your concepts, and giving you a critical and informed assessment. If the discussion becomes too much for this blog, please feel free to post something at

    https://quantizinggravity.blogspot.com/2025/12/the-world-according-to-johnny-d.html

    and we can go over it in greater detail.

  42. QC+Thermodynamics Says:

    If the 2nd law is broken, are we looking at a ‘more’ deterministic interpretation of QM?

  43. Kevin Killeen Says:

    Hi Professor Aaronson, just read your post following Q2B. As always, your perspective is both grounding and genuinely exciting. I really appreciate how you separate the technical progress, which is clearly accelerating, from the distracting hype that still surrounds the field.

    Your clarification about updating in response to real-world data versus “dramatically changing your views” was a helpful distinction. It’s also refreshing to hear someone with your credibility explicitly call out companies that are “optimizing for IPOs” rather than solving real technical problems. The warning about the eventual end of open publication for cryptanalysis work is particularly sobering.

    Your Q2B talk was about why you think quantum computing works. So my question is about the next big conceptual hurdle: For the field to move beyond its core applications (simulation, crypto-breaking), what kind of discovery in quantum algorithms or complexity theory do you think would be most catalytic? Is it a new algorithmic paradigm, or a deeper theoretical insight into a specific problem class like optimization?

  44. Prasanna Says:

    Scott,

    Now that AI has solved the hard problem of protein folding, do you think it will find the right algorithms / heuristics for prime factoring at the scales used in cryptography? If so, extending this hypothesis, can it find solutions to useful quantum simulations before QCs get there?

  45. Scott Says:

    QC+Thermodynamics #42: No. Forget about the interpretation of QM (a question that assumes the validity of QM, which in this scenario is now in the garbage like so much else).

    If the Second Law is violated, then we’re presumably looking at a breakdown of causality or reversibility or both—i.e., at the most basic facts about physics since Galileo. Yet somehow, the new theory that replaces centuries of physics would need to explain why all that physics, including the Second Law, had seemed to be true the whole time. I have no idea how that could work, other than (for example) via the discovery of closed timelike curves.

  46. Scott Says:

    Prasanna #44: AI will surely help (indeed, is already helping) with simulating quantum systems. But I don’t know whether a fast classical factoring algorithm even exists. If it doesn’t exist, then AI won’t be able to find it.

  47. Scott Says:

    Kevin Killeen #43:

      So my question is about the next big conceptual hurdle: For the field to move beyond its core applications (simulation, crypto-breaking), what kind of discovery in quantum algorithms or complexity theory do you think would be most catalytic? Is it a new algorithmic paradigm, or a deeper theoretical insight into a specific problem class like optimization?

    I think the first step to enlightenment here is to acknowledge that the true answer might be: we don’t move beyond. The main quantum speedups have already been discovered (assuming that even those will stand). Whatever clever new quantum algorithms are found, even ones otherwise as important as Shor’s and Grover’s, they’ll then be matched by clever new classical algorithms for the same tasks.

    Or maybe not! Maybe there are other speedups as important as Shor and Grover and even quantum simulation itself that remain to be found. If so, we explore. We do complexity theory. We’re now asking what’s fundamentally a math and science question, not an engineering or applications one.

    My one concrete suggestion here is to look for other ways QC could be useful in cryptographic protocols, besides generating certified randomness. I feel like that’s been under-explored, and like we have many shots on goal, because unlike with most potential application areas, here we get to design the problems so that they exhibit maximum quantum advantage.

  48. QMAnon Says:

    The fact that prime factorization appears to be such an outlier (in terms of QC being such a massive win compared to classical) is truly remarkable, it would be amazing if it were the only such problem type, or if the advantage couldn’t be leveraged in a more generic way.
    Is there a lot of active research into finding out why that’s the case, and whether there could be other types of problems (beyond the classic P vs NP classification) for which it’s the case?

  49. Scott Says:

    QMAnon #48: Yes, welcome to quantum algorithms and complexity! 😀 Welcome to what the field I’m part of has been doing for the past 30 years.

    What’s special about factoring from a quantum standpoint is just that it’s reducible to finding the periods of periodic functions, and that in turn (like other problems with hidden abelian group structure) is solvable by taking a giant quantum Fourier transform. We’ve identified a bunch of other problems—often, fairly abstruse problems in algebra or number theory, and/or oracular problems—that seem to admit exponential quantum speedups based on similar techniques. The hard part is to find stuff in the intersection of “admits exponential quantum speedup” and “someone cares about the answer, for cryptographic or other reasons.” With factoring, the field of QC got really lucky (or the wider world got really unlucky…) that our cybersecurity infrastructure was based on problems that just so happen to have the right abelian group structure. In any case, none of this structure seems to be present in NP-complete problems, or other stuff people care about in industrial applications.
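
    To see how little of Shor’s algorithm is actually quantum, here is a toy sketch in which the period-finding step, the only part a quantum computer accelerates, is done by brute force for tiny N, followed by the same classical post-processing that Shor’s algorithm uses:

      from math import gcd

      def period(a, N):
          """Order of a mod N, i.e. the period of f(x) = a^x mod N.
          Brute force here; this is the one step Shor's algorithm makes fast."""
          x, r = a % N, 1
          while x != 1:
              x = (x * a) % N
              r += 1
          return r

      def factor_from_period(N, a):
          g = gcd(a, N)
          if g != 1:
              return g, N // g                  # lucky guess, no period finding needed
          r = period(a, N)
          if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
              return None                       # unlucky choice of a; pick another
          return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)

      print(factor_from_period(15, 7))   # (3, 5)
      print(factor_from_period(21, 2))   # (7, 3)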

  50. gentzen Says:

    Mitchell Porter #41
    Johnny D #6 #8 #9 #14 #20
    Both superconducting and trapped-ion realizations of the quantum circuit model are actually commutative gauge theories. The possibility to “use” virtual phase shift gates is a consequence of this. I would say this happens because both the initialization and the measurement only create or detect the 0 or 1 states of the qubits in the computational basis, where the local phase of the individual qubits is irrelevant.

    One further observation is that for controlled gates (say Controlled-X, Controlled-Y, …), only the phase relations of the target qubit are important, while the controlled gate commutes with phase shift gates on the control qubit. (For Controlled-Z or more general controlled phase shift gates, both qubits are control qubits, so phase shift gates commute with both in that case.)

  51. Mark Spinelli Says:

    Scott #47:

    I like your call to find “non-algorithmic quantum functionality”, in addition to hunting for elusive exponential algorithmic speedups that may just be white whales. My immediate thought drifts to quantum money – a quantum computer does not, e.g., algorithmically *find* a way to show knot equivalence or solve tough problems about Hecke algebras and modular forms; rather, the quantum computer coherently evaluates an invariant *in superposition* in a unique way, above and beyond what can be done classically.

    I think there are other related applications – like proof-of-presence, proof-of-delivery, chain-of-custody, etc. – perhaps waiting to be found, that are also based on something akin to coherent evaluation of a state in superposition.

  52. Prasanna Says:

    Scott #48,
    What is it about the “right abelian group structure” that seems to conjure up parallel worlds to provide exponential speedups 🙂 Or more prosaically, how does the substrate (like superconducting Josephson junctions) matter for compute speedups, challenging the Extended Church-Turing Thesis? Just as the complex numbers being algebraically closed matches the reality of QM, the QC domain seems to be wedded to this group structure?

  53. Scott Says:

    Prasanna #52: Now you’re getting to the real meat of an undergrad quantum computing course, like the one I teach every year!

    Firstly, the thing we’re talking about here has nothing to do with Josephson junctions, or any other particular technology for realizing qubits.

    Secondly, rather than looking for a direct connection between QM and abelian groups, it’s better to break your question up into bite-size steps:

    – Why hidden structure in abelian groups can be revealed using a Fourier transform
    – Why quantum mechanics lets you “implicitly” calculate an exponentially large Fourier transform using only a polynomial number of operations
    – The role of interference among amplitudes in this story
    – Why everything gets way more complicated if you try to generalize to non-abelian groups

    All these questions, except the last, are covered in detail in my lecture notes! (Or even my “Shor, I’ll do it” blog post from way back in 2007.)

    One good tip is to start with Simon’s Algorithm, and only after you understand it move on to Shor’s.
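
    And if you want to see what “start with Simon’s” buys you, here is a small classical simulation of the post-processing (a sketch of my own; the sampling step cheats by rejection-sampling the same distribution that a real quantum circuit would produce without knowing s):

      import numpy as np

      rng = np.random.default_rng(7)
      n = 6

      s = rng.integers(0, 2, n)                 # secret string, with f(x) = f(x XOR s)
      while not s.any():
          s = rng.integers(0, 2, n)

      def sample_y():
          """One fake run of Simon's circuit: a uniformly random y with y.s = 0 (mod 2)."""
          while True:
              y = rng.integers(0, 2, n)
              if int(y @ s) % 2 == 0:
                  return y

      def nonzero_nullspace_vector(rows):
          """Gaussian elimination over GF(2): return a nonzero x with rows @ x = 0 (mod 2)."""
          A = np.array(rows, dtype=int) % 2
          m, pivots, r = A.shape[0], [], 0
          for c in range(n):
              hit = next((i for i in range(r, m) if A[i, c]), None)
              if hit is None:
                  continue
              A[[r, hit]] = A[[hit, r]]
              for i in range(m):
                  if i != r and A[i, c]:
                      A[i] = (A[i] + A[r]) % 2
              pivots.append(c)
              r += 1
          x = np.zeros(n, dtype=int)
          free = [c for c in range(n) if c not in pivots]
          x[free[0]] = 1                        # with enough samples, the only free direction is s
          for i, c in enumerate(pivots):
              x[c] = int(A[i] @ x) % 2
          return x

      samples = [sample_y() for _ in range(3 * n)]   # enough that the rank is n-1 w.h.p.
      print("secret s :", s)
      print("recovered:", nonzero_nullspace_vector(samples))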

  54. QC+Thermodynamics Says:

    “If the Second Law is violated, then we’re presumably looking at a breakdown of causality or reversibility or both—i.e., at the most basic facts about physics since Galileo. Yet somehow, the new theory that replaces centuries of physics would need to explain why all that physics, including the Second Law, had seemed to be true the whole time. I have no idea how that could work, other than (for example) via the discovery of closed timelike curves”

    For the most part we already have such a theory. Our equations in physics look identical forward and backward in time. If there is no entropy, or entropy is not what we think it is, and time does not depend on entropy, why can’t we have both causality (a time-related effect) and a breakdown of the 2nd law?

  55. Scott Says:

    QC+Thermodynamics #54: Because entropy is not a fundamental physical quantity, like mass or energy, that can enter directly into fundamental laws of physics. Entropy is instead a measure of disordered information in the universe’s state, and it increases for well-understood mathematical reasons—nothing further to do with physics—whenever you have

    (1) a low-entropy initial state, and
    (2) reversible time-evolution laws that are sufficiently “generic” or “mixing.”

    Sorry, but I’m done explaining this over and over. Maybe you can find another person (or an LLM) to explain further!

  56. John Says:

    It is a little bit confusing when you write, “In quantum computing, there are the large companies and startups that might succeed or might fail, but are at least trying to solve the real technical problems, and some of them are making amazing progress. And then there are the companies that have optimized for doing IPOs, getting astronomical valuations, and selling a narrative to retail investors and governments about how quantum computing is poised to revolutionize optimization and machine learning and finance. Right now, I see these two sets of companies as almost entirely disjoint from each other.”

    Then you mention Oxford Ionics as one of the companies having made the most progress, and IonQ as one of the companies misleading retail investors and governments. They are the same company!

  57. QC+Thermodynamics Says:

    “Entropy is instead a measure of disordered information in the universe’s state, and it increases for well-understood mathematical reasons—nothing further to do with physics—whenever you have”

    I agree “Entropy is instead a measure of disordered information in the universe’s state”. I do agree on “it increases for well-understood mathematical reasons—nothing further to do with physics—whenever you have”. But I do also believe Maxwell’s demon exists in a computational sense, where the Landauer limit fails and erasure costs approach 0 (but not negative). This is what I mean when I ask “Why cannot we have both causality (a time-related effect) and a breakdown of the 2nd law?” I do not know if we will have determinism, but the traditional meanings of the different entropies – Shannon, von Neumann and black hole – would have to be shelved.

    We still incur a loss of energy, so causality is respected but the 2nd law breaks.

    Theory             | What breaks            | What survives
    Thermodynamics     | Entropy increase       | Causality
    Quantum mechanics  | Fundamental randomness | No-retrocausality
    General relativity | Absolute horizons      | Light cones

  58. Scott Says:

    John #56: By far my most important criticism of IonQ is that they say things that are false, which is only tangentially related to what technology they have or don’t have.

    But on the latter: as far as I know, buying Oxford Ionics was the #1 positive thing that IonQ has done recently. Presumably, they bought them because Oxford Ionics really has created 2-qubit trapped-ion gates whose error rates are on the frontier, in a way that IonQ has not.

    So, let’s all hope that IonQ will now change to be more like Oxford Ionics, rather than Oxford Ionics changing to be more like IonQ!

  59. Quantum Computers: A Brief Assessment of Progress in the Past Decade | Combinatorics and more Says:

    […] from 2025.December  2025: There is a new post by Scott Aaronson More on whether useful quantum computing is “imminent”, where he presents a very optimistic viewpoint regarding the expected progress in the next few […]

  60. AG Says:

    Scott #55: Just how terribly misguided would it be to construe this comment as implying that “generic”/”mixing” are possibly borderline fundamental physical notions?

  61. QC+Thermodynamics Says:

    If the Landauer lower bound and the Heisenberg uncertainty relation are both violated, in the sense that \(1\) bit is not physically \(k_B T\ln 2\) joules of energy but some number \(r_1\in(0,k_B T\ln 2)\), and \(0<\sigma_x\sigma_p<r_2<\frac{\hbar}{2}\) holds, where \(r_1,r_2\) are positive but can be made as low as the experimenter desires, then do superposition and entanglement and causality still survive?

  62. Scott Says:

    AG #60: Somewhat misguided?

    If we can mathematically derive the “mixing” part from the “simple initial state + interacting reversible dynamics” part, isn’t that a pretty conclusive proof that the “mixing” part should not be taken to be fundamental?

  63. AG Says:

    Scott #62: Not necessarily — e.g. if “interacting reversible dynamics” (which implicitly involves the “fundamental physical quantities” of “mass” and “energy”) can be derived from the “generic”/”mixing” conditions inherent in the mathematical notion of entropy.

  64. Scott Says:

    AG #63: Would you agree that the type of theory you’re asking for, where mixing was taken as fundamental and then (eg) reversibility of the dynamics was derived from it, would not be a reductionist theory (ie, not the kind that’s worked from Newton to the present)?

  65. Gil Kalai Says:

    I certainly take note of the optimistic views expressed by Scott and by Craig (#16) regarding expectations for experimental quantum computing over the next three to four years. (I have heard similar levels of optimism from several other colleagues as well.) It will be interesting to revisit the situation in 2030.

    Regarding my own analysis, I have tried to explain my argument about quantum computers as clearly as I can in various places, and I have also attempted to present other skeptical perspectives. Let me highlight three important components and consequences of my theory:

    1) The (fantastic) claims of quantum supremacy based on random circuit sampling (or boson sampling) are incorrect.

    2) Quantum error correction of the quality required for fault-tolerant quantum computation is inherently beyond reach.

    3) I make several claims concerning correlated errors. Let me emphasize that all of my claims about error correlations for highly entangled states should be observable in straightforward simulations of the noisy quantum circuits required to create those states. Any phenomena not captured by such simulations (like cosmic rays) are not part of my argument.

    The third point is closely related to Craig’s thoughtful comment (#16): all of the “hypothetical nasty effects” implied by my analysis should already appear in basic simulations of the noisy circuits used to generate the surface code (or other error-correcting codes). I would be very interested in examining data from such simulations. Is such data available?

    Craig wrote: “Personally, I intend to declare victory over Kalai and other skeptics somewhere around 1e-12 error per logical Clifford.” This seems like a reasonable—and even generous—criterion. Scott mentioned a small voice in his head asking, “But what if Gil Kalai turned out to be right after all? What if scalable quantum computing wasn’t possible?” From my perspective, “being right” depends on the validity of items (1)–(3) above.

    From my (perhaps naive) point of view, “victory” will be the resolution of this major scientific question concerning the possibility of quantum computation and the computational power of our physical world. That said, I would of course be more pleased if my perspective—and the specific claims (1)–(3)—prevail, and I do expect that this will happen.

    Happy 2026, everybody.

  66. QMAnon Says:

    Entropy boils down to the fact that the world (in the most simplified model, a 2d billiard-ball table) has small things flying around in space, bouncing off each other with conservation of momentum, and all the information is contained in position and velocity, and you can always run things backwards by inversion of velocity.
    Then, at a macro level, if you imagine you start with a space filled with static particles in a perfect grid (like a crystal), and disturb it by whacking one particle, it would lead to a cascade of perturbations that propagate outwards, and the disruption pattern is a record of this event, like a memory (a bit like a cloud chamber).

    According to a Quanta magazine video of breakthroughs in 2025, it is now understood how to start with such a model and make your way up to statistical mechanics (Hilbert’s sixth problem)

    https://youtu.be/hRpcWpAeWng?si=nJsCUFFaw4ENbre-

  67. QMAnon Says:

    “(1) the simulation of quantum physics and chemistry themselves, (2) breaking a lot of currently deployed cryptography, and (3) eventually, achieving some modest benefits for optimization, machine learning, and other areas (but it will probably be a while before those modest benefits win out in practice”

    Are there concrete examples of open materials science problems that would be solved easily once QCs reach maturity?
    Maybe superconductivity simulations?
    Because when you look at actual basic chemistry problems that seem to matter, like protein folding, they don’t appear that quantum in nature, and are more optimization problems with a strong structure and/or a very large dimensional space (similar to why deep learning works, and therefore amenable to machine learning, as shown by the success of AlphaFold).

  68. Del Says:

    Gil #65

    Maybe this? https://github.com/Quantinuum/random-circuit-sampling

    I thought they released similar data for the new Helios machine, but either I misremember or I can’t find it, however https://github.com/Quantinuum/selene could be a good starting point.

  69. johu Says:

    Johnny D #6

    There are computational states with 0 energy difference – computational states which are degenerate. For example, states defined in the polarization degrees of freedom of free-space photons, cat states of photons in linear resonators, logical states of the surface code, and the list goes on.

  70. jojo Says:

    certainly it did it for me. Great information!

  71. Jair Says:

    FYI, Scott’s talk is now on YouTube. https://www.youtube.com/watch?v=oclXVtQuQE4&list=PLh7C25oO7PW209STGTJOXRDbidmUO-FWu

  72. Johnny D Says:

    Assume reality is emergent from a quantum circuit. So this reality follows all laws of quantum mechanics. Now assume that the underlying circuit uses only Clifford gates, so that universal QC is not available. Could universal QC be available in the emergent world? No, the entanglement structure is limited. What phenomenon in fundamental physics leads to a belief that the underlying system uses non-Clifford gates? I don’t know of any.

  73. Johnny D Says:

    Scott #23: Clearer thoughts on the gravitational effect on entanglement (GEE).
    1. GEE is a physical effect, so it occurs in physical qubits. It has an effect on the entanglement of the state. It is proportional to the physical state’s complexity. The proportionality constant is system dependent. The standard QC model (SQC) leaves this effect out. QC with GEE (QCG) is more complete. As I said before, there is no claim that QM is wrong in any way, just SQC. GEE shows up in correlations of physical qubit measurements. So it would not affect any one physical qubit’s measurement stats.
    2. As logical qubits rely on entangled physical qubits, GEE affects even a single SQC logical qubit, even with no logical gates applied. Maintaining a SQC logical state of |0> could be an issue.
    3. The formula, GEE ~ complexity*delta E * ave L, is speculative. The derivation was heuristic. The formula is similar to the one that Susskind gives when he talks about QM=GR. He says something like correlations decrease exponentially with complexity per qubit. What did he mean? He says something like, you can use a qc to study gravity. I don’t think he was talking about simulating gravity theories on a qc. I think he is talking about correlations being QCG, not SQC. I have no idea if he agrees with me.

  74. Johnny D Says:

    Scott #23, Johnny D #73: SQC would be correct if the fundamental dof were unentangled. Of course, then we would have no physical space in which to build a qc. QCG includes the effects that stem from the fundamental dof entanglement.
