Quantum computing: too much to handle!

Tomorrow I’m headed to Berkeley for the Inkhaven blogging residency, whose participants need to write one blog post per day or get kicked out. I’ll be there to share my “wisdom” as a distinguished elder blogger (note that Shtetl-Optimized is now in its twentieth year). I’m acutely aware of the irony: I myself can barely muster the willpower these days to put up a post every other week.

And it’s not as if nothing is happening in this blog’s traditional stomping-ground of quantum computing! In fact, the issue is just the opposite: way too much is happening for me to do it any sort of justice. Who do people think I am, Zvi Mowshowitz? The mere thought of being comprehensive, of responsibly staying on top of all the latest QC developments, makes me want to curl up in bed, and either scroll through political Substacks or take a nap.


But then, you know, eventually a post gets written. Let me give you some vignettes about what’s new in QC, any one of which could easily have been its own post if I were twenty years younger.

(1) Google announced verifiable quantum advantage based on Out-of-Time-Order Correlators (OTOCs)—this is actually from back in June, but it’s gotten more and more attention as Google has explained it more thoroughly. See especially this recent 2-page note by King, Kothari, et al., explaining Google’s experiment in theoretical computer science language. Basically, starting from the all-|0⟩ state, they apply a random circuit C, then a single gate g, then C⁻¹, then another gate h, then C again, then g again, then C⁻¹, and then measure a qubit. If C is shallow, then the qubit is likely to still be |0⟩. If C is too deep, then the qubit is likely to be in the maximally mixed state, totally uncorrelated with its initial state—the gates g and h having caused a “butterfly effect” that completely ruined all the cancellation between C and C⁻¹. Google claims that, empirically, there’s an intermediate regime where the qubit is neither |0⟩ nor maximally mixed, but a third thing—and that this third thing seems hard to determine classically, using tensor-network algorithms or anything else they’ve thrown at it, but it can of course be determined by running the quantum computer. Crucially, because we’re just trying to estimate a few parameters here, rather than sample from a probability distribution (as with previous quantum supremacy experiments), the output can be checked by comparing it against the output of a second quantum computer, even though the problem still isn’t in NP. Incidentally, if you’re wondering why they go back and forth between C and C⁻¹ multiple times rather than just once, it’s to be extra confident that there’s no fast classical simulation. Of course there might turn out to be a fast classical simulation anyway, but if so, it will require a new idea: gauntlet thrown.
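To make the structure concrete, here’s a toy numpy sketch of the circuit sequence. It’s my caricature, not Google’s experiment: I model the “too deep” regime by drawing C as a Haar-random unitary on six qubits, whereas the interesting physics lives in the tunable-depth regime in between.

    # Toy OTOC: apply C, g, C^-1, h, C, g, C^-1 to |00...0> and ask whether
    # the measured qubit still looks like |0>. For a Haar-random ("too deep")
    # C, it should instead look maximally mixed: p0 ~ 1/2.
    import numpy as np

    n, dim = 6, 2**6
    rng = np.random.default_rng(0)

    def haar_unitary(d):
        z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
        q, r = np.linalg.qr(z)
        return q * (np.diagonal(r) / np.abs(np.diagonal(r)))  # fix column phases

    def x_on(k):  # Pauli X on qubit k of n
        X, I = np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(2)
        out = np.array([[1.0]])
        for j in range(n):
            out = np.kron(out, X if j == k else I)
        return out

    C = haar_unitary(dim)
    g, h = x_on(0), x_on(n - 1)       # "butterfly" gates on distant qubits
    Cinv = C.conj().T

    psi0 = np.zeros(dim); psi0[0] = 1.0               # the all-|0> state
    psi = Cinv @ g @ C @ h @ Cinv @ g @ C @ psi0
    p0 = (np.abs(psi) ** 2).reshape(2, -1)[0].sum()   # Pr[qubit 0 = |0>]
    print(f"Pr[measured qubit still |0>] = {p0:.3f}")  # ~0.5 here

If you instead take C close to the identity (the “too shallow” regime), p0 climbs back toward 1; the experiment lives in between.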

(2) Quantinuum, the trapped-ion QC startup in Colorado, announced its Helios processor. Quick summary of the specs: 98 qubits, all-to-all 2-qubit gates with 99.92% fidelity, the ability to choose which gates to apply “just in time” (rather than fixing the whole circuit in advance, as was needed with their previous API), and an “X”-shaped junction for routing qubits one way or the other (the sort of thing that a scalable trapped-ion quantum computer will need many of). This will enable, and is already enabling, more and better demonstrations of quantum advantage.

(3) Quantinuum and JP Morgan Chase announced the demonstration of a substantially improved version of my and Shih-Han Hung’s protocol for generating cryptographically certified random bits, using quantum supremacy experiments based on random circuit sampling. They did their demo on Quantinuum’s new Helios processor. Compared to the previous demonstration, the new innovation is to send the circuit to the quantum computer one layer at a time, rather than all at once (something that, again, Quantinuum’s new API allows). The idea is that a cheating server that wants to spoof the randomness deterministically now has much less time: using the most competitive known methods (e.g., those based on tensor-network contraction), it seems the cheater could swing into action only after learning the final layer of gates, so it would now have mere milliseconds to spoof rather than seconds, making Internet latency the dominant source of spoofing time in practice. While a complexity-theoretic analysis of the new protocol (or, in general, of “layer-by-layer” quantum supremacy protocols like it) is still lacking, I like the idea a lot.
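For intuition, the client-side check is conceptually simple, even though it takes exponential classical time (which is the point). Here’s a schematic sketch of the idea, not the paper’s actual statistical test: score the server’s samples by linear cross-entropy against ideal probabilities the client computes by brute-force simulation, and only accept, and feed the samples into a randomness extractor, if both the score and the server’s response time clear thresholds.

    # Schematic linear-XEB scoring (a sketch of the idea, not the
    # Aaronson-Hung verifier itself).
    import numpy as np

    def linear_xeb(sample_ints, ideal_probs):
        """sample_ints: measured bitstrings as integers in [0, 2^n);
           ideal_probs: all 2^n ideal output probabilities of the circuit."""
        d = len(ideal_probs)
        # Score ~1 for faithful sampling of a random circuit, ~0 for noise.
        return d * np.mean([ideal_probs[s] for s in sample_ints]) - 1.0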

(4) The startup company BlueQubit announced a candidate demonstration of verifiable quantum supremacy via obfuscated peaked random circuits, again on a Quantinuum trapped-ion processor (though not Helios). In so doing, BlueQubit is following the program that Yuxuan Zhang and I laid out last year: namely, generate a quantum circuit C that hopefully looks random to any efficient classical algorithm, but that conceals a secret high-probability output string x, which pops out if you run C on the all-|0⟩ initial state on a quantum computer. To try to hide x, BlueQubit uses at least three different circuit obfuscation techniques, which already tells you that they can’t have complete confidence in any one of them (since if they did, why the other two?). Nevertheless, I’m satisfied that they tried hard to break their own obfuscation, and failed. Now it’s other people’s turn to try.
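(The verification step really is this simple: sample the circuit and see whether one string towers over the uniform 2^(-n) background. A toy sketch of that check, with made-up thresholds:)

    # Toy peak check (illustrative thresholds only, not BlueQubit's criteria).
    from collections import Counter

    def find_peak(samples, n_qubits, min_weight=0.05):
        """samples: list of measured bitstrings; returns the claimed peak
        (string, empirical weight), or None if nothing rises far enough
        above the 2^-n floor that a uniform distribution would give."""
        best, freq = Counter(samples).most_common(1)[0]
        p_hat = freq / len(samples)
        return (best, p_hat) if p_hat >= max(min_weight, 100 / 2**n_qubits) else None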

(5) Deshpande, Fefferman, et al. announced a different theoretical proposal for quantum advantage from peaked quantum circuits, based on error-correcting codes. This seems tempting to try to demonstrate along the way to quantum fault-tolerance.

(6) A big one: John Bostanci, Jonas Haferkamp, Chinmay Nirkhe, and Mark Zhandry announced a proof of a classical oracle separation between the complexity classes QMA and QCMA, something they’ve been working on for well over a year. Their candidate problem is basically a QMA-ified version of my Forrelation problem, which Raz and Tal previously used to achieve an oracle separation between BQP and PH. I caution that their paper is 91 pages long and hasn’t yet been vetted by independent experts, and there have been serious failed attempts on this exact problem in the past. If it stands, however, it finally settles a problem that’s been open since 2002 (and which I’ve worked on at various points ever since), and shows a strong sense in which quantum proofs are more powerful than classical proofs. Note that in 2006, Greg Kuperberg and I gave a quantum oracle separation between QMA and QCMA—introducing the concept of quantum oracles for the specific purpose of that result—and since then, there’s been progress on making the oracle steadily “more classical,” but the oracle was always still randomized or “in-place,” or had restrictions on how it could be queried.
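(For readers meeting Forrelation for the first time: it measures how well one Boolean function f correlates with the Fourier transform of another function g. Here’s the bare definition in brute-force form, exponential time and all; nothing in this snippet is from the new paper.)

    # Phi(f,g) = 2^(-3n/2) * sum_{x,y} f(x) * (-1)^(x.y) * g(y),
    # where f, g map {0,1}^n to {-1,+1}. A quantum computer estimates Phi
    # with one query each to f and g; this is the quantity underlying Raz
    # and Tal's oracle separation of BQP from PH.
    from itertools import product

    def forrelation(f, g, n):
        total = 0.0
        for x in product((0, 1), repeat=n):
            for y in product((0, 1), repeat=n):
                dot = sum(a * b for a, b in zip(x, y)) % 2
                total += f(x) * (-1) ** dot * g(y)
        return total / 2 ** (1.5 * n)

    f = lambda x: (-1) ** (x[0] * x[1])  # "bent": its Fourier spectrum is flat
    print(forrelation(f, f, 2))          # 1.0, a maximally forrelated pair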

(7) Oxford Ionics (which is now owned by IonQ) announced a 2-qubit gate with 99.99% fidelity: a record, and significantly past the threshold for quantum fault-tolerance. However, as far as I know, it remains to demonstrate this sort of fidelity in a large programmable system with dozens of qubits and hundreds of gates.

(8) Semi-announcement: Quanta reports that “Physicists Take the Imaginary Numbers Out of Quantum Mechanics,” and this seems to have gone viral on my social media. The article misses the opportunity to explain that “taking the imaginary numbers out” is as trivial as choosing to call each complex amplitude “just an ordered pair of reals, obeying such-and-such rules, which happen to mimic the rules for complex numbers.” Thus, the only interesting question here is whether one can take imaginary numbers out of QM in various more-or-less “natural” ways: a technical debate that the recent papers are pushing forward. For what it’s worth, I don’t expect that anything coming out of this line of work will ever be “natural” enough for me to stop explaining QM in terms of complex numbers in my undergraduate class, for example.
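(To spell out just how trivial the trivial version is, the entire “construction” fits in a few lines:)

    # Removing imaginary numbers, trivially: encode each complex amplitude
    # a+bi as the real 2x2 matrix [[a, -b], [b, a]]. Addition and
    # multiplication of these matrices mimic complex arithmetic exactly.
    import numpy as np

    def as_real(z):
        return np.array([[z.real, -z.imag], [z.imag, z.real]])

    z, w = 0.6 + 0.8j, 1.0 - 2.0j
    assert np.allclose(as_real(z) + as_real(w), as_real(z + w))
    assert np.allclose(as_real(z) @ as_real(w), as_real(z * w))
    # Nothing about QM changed; we merely renamed the complex numbers.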

(9) The list of accepted talks for the annual QIP conference, to be held January 24-30 in Riga, Latvia, is now out. Lots of great stuff as always.

(10) There are probably other major recent developments in QC that I should’ve put into this post but forgot about. You can remind me about them in the comments.

(11) Indeed there are! I completely forgot that Phasecraft announced two simulations of fermionic systems that might achieve quantum advantage, one using Google’s Willow superconducting chip and the other using a Quantinuum device.


To summarize three takeaways:

  • Evidence continues to pile up that we are not living in the universe of Gil Kalai and the other quantum computing skeptics. Indeed, given the current staggering rate of hardware progress, I now think it’s a live possibility that we’ll have a fault-tolerant quantum computer running Shor’s algorithm before the next US presidential election. And I say that not only because of the possibility of the next US presidential election getting cancelled, or preempted by runaway superintelligence!
  • OK, but what will those quantum computers be useful for? Anyone who’s been reading this blog for the past 20 years, or any non-negligible fraction thereof, hopefully already has a calibrated sense of that, so I won’t belabor it. But briefly: yes, our knowledge of useful quantum algorithms has slowly been expanding over the past thirty years. The central difficulty is that our knowledge of useful classical algorithms has also been expanding, and the only thing that matters is the differential between the two! I’d say that the two biggest known application areas for QC remain (a) quantum simulation and (b) the breaking of public-key cryptography, just as they were thirty years ago. In any case, none of the exciting developments that I’ve chosen to highlight in this post directly addresses the “what is it good for?” question, with the exception of the certified randomness one.
  • In talks over the past three years, I’ve advocated “verifiable quantum supremacy on current hardware” as perhaps the central challenge right now for quantum computing theory. (As I love to point out, we do know how to achieve any two of: (a) quantum supremacy, (b) verifiability, and (c) running on current hardware!) So I’m gratified that three of the recent developments I chose to highlight, namely (1), (4), and (5), directly address this challenge. Of course, we’re not yet sure whether any of these three attempts will stand—that is, whether they’ll resist all attempts to simulate them classically. But the more serious shots on goal we have (and all three of these are quite serious), the better the chances that at least one will stand! So I’m glad that people are sticking their necks out, proposing these things, and honestly communicating what they know and don’t know about them: this is exactly what I’d hoped would happen. Of course, complexity-theoretic analysis of these proposals would also be great, perhaps from people with more youth and/or energy than me. Now it’s time for me to sleep.

51 Responses to “Quantum computing: too much to handle!”

  1. Anon Says:

    Scott, I think you have gotten so used to QC as an insider that you sometimes downplay the importance of quantum simulation. That one, I think, could potentially open up many advances in medicine, material design, …

    I get that you want to be clear it won’t solve many classical computation problems, or at least that we don’t know whether it can (that might change once we have actual quantum computers and more people start thinking about solving problems with them). And there is a lot of hype from companies working on building quantum computers.

    Still, I feel very excited about the progress being made and what it will lead to, both in terms of a better understanding of nature and of utilizing nature toward beneficial goals for humanity.

    And thank you for taking the time to keep us informed about the progress.

  2. Scott Says:

    Anon #1: Oh I agree that quantum simulation might be a huge deal, and nowhere in this post did I say otherwise! Even with quantum simulation, though — QC’s “home turf” — we’ll still need to compete against DFT and QMC and tensor-network algorithms and all sorts of other clever classical simulation methods, and now also against the use of deep learning to guess the right classical ansatz for a quantum system in cases where humans weren’t able to (as for example with AlphaFold). That’s why I’m not certain about it.

  3. Charbel Says:

    Hi Scott- thanks for the post.

    I have a question that you might have answered elsewhere.

    Do you think that if we’re able to build a scalable fault-tolerant quantum computer running millions of qubits (and more), successfully implementing Shor’s algorithm, etc., would that falsify the objective-collapse interpretations of quantum mechanics? (i.e., Penrose’s view, Pearle’s, …)

  4. Quantum Computers: A Brief Assessment of Progress in the Past Decade | Combinatorics and more Says:

    […] stay ahead of classical computing on these benchmarks. Update, (November 2025) : Scott Aaronson now gives a good chance that we’ll have a fault-tolerant quantum computer running Shor’s algorithm before the end of […]

  5. Aaronson handles some of that quantum hype - Evolution IT Says:

    […] Article URL: https://scottaaronson.blog/?p=9325 […]

  6. Franz J. Schreiber Says:

    Hi Scott, regarding the usefulness of qc’s, are you aware of any in-principle obstructions to super-quadratic (but still poly) speedups on NP-hard problems like say 3-SAT? If not, do you have a gut feeling regarding this question?

  7. Craig Gidney Says:

    > For what it’s worth, I don’t expect that anything coming out of this line of work will ever be “natural” enough for me to stop explaining QM in terms of complex numbers in my undergraduate class, for example.

    I will try to describe a situation that you may find natural. Probably not natural enough to teach it, but it should at least seem more natural than the setup for MIP*=RE! Here we go…

    You find yourself in a quantum computer store, looking to buy a quantum computer. There you find Eve, the untrustworthy salesperson. You tell her: “I need a quantum computer. I’m doing transduction from a telescope and applying mirror corrections in software, so it’s very important I have a *truly* universal gateset like Clifford+T rather than a merely computationally universal gateset like Hadamard+Toffoli. Do you have a Clifford+T machine?”.

    Eve says “Oh! Sure, I have just the thing!”. She walks you over to a machine labelled ‘TOFAMARD’ and promises “This machine right here can do everything you need. It’s definitely not a Hadamard+Toffoli machine! You can trust me on that!”

    You don’t trust Eve. And you unfortunately don’t have a trusted quantum computer with you, to check the states produced by the untrusted computer. Is there any classically verifiable test you can force Eve to perform, that will distinguish a Hadamard+Toffoli computer from a Clifford+T computer?

    Renou et al. proved that there is such a test, similar to a Bell test, using three isolated computers. However, crucially, they needed to assume Eve did not distribute entanglement between the computers before you arrived. There exists a catalyst state that allows Eve to pass any proposed test of the Hadamard+Toffoli machines, even tests where spacelike-separated computations occur.

    The fact that, with minimal preparation, the real-only gateset Hadamard+Toffoli can pass any test attempting to distinguish it from the complex gateset Clifford+T is the sense in which complex numbers are provably “not necessary”. At least… not for distributed quantum computations.

  8. Sasan Says:

    Very informative, Prof. Aaronson. I always enjoy reading your brilliant opinions about quantum physics, politics, etc.

  9. Mark Spinelli Says:

    I also like how Phasecraft near-simultaneously dropped results of their experiments on the Fermi-Hubbard model for both superconducting Willow and ion-trap H2 devices.

    About the numbers on Helios – how deep of a random Boolean function \(f(x)\) from (say) 49 qubits in a first register to 49 qubits in a second register, composed of Toffoli and NOT/CNOT gates, could be run, and still have a decent overall fidelity?

    I think Toffolis need a handful of Mølmer–Sørensen gates, but still, even throwing in SPAM and shuffling errors: suppose one could prepare \(\sum_x|x\rangle|f(x)\rangle\) for an \(f\) built from 500-1000 or so random CCNOTs, then measure the first register \(x\) and the second register \(y\) in the computational basis. If even one shot in a hundred gave \(y=f(x)\), that would show that Helios coherently held a highly entangled state of our own choosing in a Hilbert space of enormous dimension – at least for the successful shots. The odds that a random pair of 49-bit strings would satisfy \(y=f(x)\) are only \(1/2^{49}\).

    It’s not a supremacy experiment (an abacus could do the same, and more accurately) but…
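    A quick back-of-envelope in Python, using the 99.92% two-qubit fidelity quoted in the post (my arithmetic, ignoring SPAM and memory error):

        # Crude depth budget: overall fidelity ~ f^G for G two-qubit gates.
        import math

        f = 0.9992
        for target in (0.5, 0.1, 0.01):
            G = math.log(target) / math.log(f)
            print(f"overall fidelity {target}: ~{G:,.0f} two-qubit gates")
        # ~870 gates at 50%, ~2,900 at 10%, ~5,800 at 1%. At ~6 two-qubit
        # gates per Toffoli, 500-1000 CCNOTs is 3,000-6,000 gates, so
        # one-in-a-hundred success looks tight but maybe not crazy.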

  10. Clint Says:

    Hi Scott,

    Sorry you missed your nap …

    Our social media algorithms must be in the same family … thank you for the sanity check … thought for a minute there that I didn’t understand that complex numbers were pairs of real numbers with a certain structure … I’ll stick with these results for now until someone shows me they can formalize QM/C with “just two real numbers” without then adding “and oh yeah they have to be in this structure …”

    And because it is also all over my feeds, for anyone else wondering, (best of my understanding) Gödel’s theorems do not provide evidence that the universe is not a simulation. The theorems are a consequence of computability … not the other way around. Incompleteness follows from effective axiomatization, the computability condition. IMHO, if they say anything about it at all, Gödel’s theorems, or physical evidence of incompleteness, would thus support the simulation hypothesis.

  11. Scott Says:

    Charbel #3:

      Do you think that if we’re able to build a scalable fault-tolerant quantum computer running millions of qubits (and more), successfully implementing Shor’s algorithm, etc., would that falsify the objective-collapse interpretations of quantum mechanics? (i.e., Penrose’s view, Pearle’s, …)

    That’s a surprisingly complicated question, because under some objective-collapse proposals, the objective collapse could be treated as just one more source of decoherence to handle using quantum error correction, and scalable QC would still be possible! Part of the issue is that these proposals are often underspecified (something I learned from students who did a project for my graduate class on their implications for QC), and the prospects for QC could strongly depend on how you fill in missing details.

    In any case, if we succeed at building scalable QCs, there’s one thing we’ll be able to say for certain about objective collapse: namely that, even if it’s there, it clearly isn’t potent enough to kill scalable QC! 😀

  12. Scott Says:

    Franz J. Schreiber #6:

      regarding the usefulness of qc’s, are you aware of any in-principle obstructions to super-quadratic (but still poly) speedups on NP-hard problems like say 3-SAT? If not, do you have a gut feeling regarding this question?

    No, there’s no known in-principle obstruction to this. In fact superquadratic (typically 4th-power) quantum speedups are now known for some special optimization problems, using something called the Kikuchi method, although at least some of those speedups recently went away (reverting back to the Grover speedup) when the classical algorithms improved.

    My gut feeling is that there will indeed be superquadratic quantum speedups for some special NP-complete problems, but for the most “generic” problems like CircuitSAT, probably the Grover speedup only. I could go either way on 3SAT.

  13. Scott Says:

    Craig Gidney #7: Thanks so much for that excellent clarification!

    But do we agree that, if I remember to bring to the store my own rudimentary trusted device for measuring qubits, then I can verify that the quantum computer for sale really does natively use complex amplitudes?

    If so, then the situation reminds me a little of left/right symmetry, if the weak interaction didn’t violate it. Purely by sending messages back and forth with aliens (and not using weak decays, or eg astronomical bodies that we can both see), I can never learn whether my left is the aliens’ right and vice versa. But as soon as I can meet a single one of the aliens in person, the symmetry is broken for their entire civilization.

  14. Scott Says:

    Clint #10: Yes, frankly, that “Gödel proves we’re not in a simulation” thing was so groan-inducingly awful that I felt like no comment from me would be necessary…

  15. Sheng-Hsuan Says:

    I would also like to make a self-advertisement for Quantinuum’s work on Fermi-Hubbard model simulation using the new Helios machine, which came out simultaneously:
    https://arxiv.org/pdf/2511.02125

    We demonstrate that it is possible to simulate a relevant physical phenomenon, light-induced superconductivity. At the same time, we find that simulating the deepest circuits is challenging using the classical methods we know. The system size we simulate (6×6) is the same as the largest system size Phasecraft ran on the Willow chip, and we observe accurate raw data up to time = 1 (in units of 1/t), benchmarked against the free-fermion case.

  16. Craig Gidney Says:

    > If I remember to bring to the store my own rudimentary trusted device for measuring qubits, then I can verify that the quantum computer for sale really does natively use complex amplitudes?

    Yes. As long as Eve also didn’t sell you that one and so forth and so on all the way back to the beginning of time, which is a legitimate concern in the “testing the nature of nature” case.

    > If so, then the situation reminds me a little of left/right symmetry, if the weak interaction didn’t violate it.

    That’s actually an excellent thing to be reminded of here, because the spoofing construction starts by e.g. replacing T gates with phase kickback from conditionally incrementing a phase gradient state \(\sum_{k=0}^7 T^k |k\rangle\). An intermediate step is to prove the circuit must behave identically when replacing the state with its opposite-winding (“left-handed”) variant \(\sum_{k=0}^7 T^{-k} |k\rangle\), due to the global conjugation symmetry of the complex plane.
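    (A tiny numpy check of that kickback fact, my toy version with the 8-level gradient register from above: the gradient state is an eigenvector of the decrement permutation with eigenvalue \(e^{i\pi/4}\), exactly T’s phase, so it’s left unchanged as a catalyst.)

        # The state sum_k T^k |k> is an eigenvector of |k> -> |k-1 mod 8>
        # with eigenvalue e^{i pi/4}: a controlled decrement therefore kicks
        # T's phase onto the control while leaving the gradient state intact.
        import numpy as np

        m, w = 8, np.exp(1j * np.pi / 4)
        grad = np.array([w**k for k in range(m)]) / np.sqrt(m)
        dec = np.roll(np.eye(m), -1, axis=0)      # permutation |k> -> |k-1 mod 8>
        assert np.allclose(dec @ grad, w * grad)  # phase kicked back, catalyst intact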

  17. Henrik Says:

    Regarding point (11), I think it would be fair to mention Quantinuum’s https://arxiv.org/abs/2511.02125, which almost certainly is harder to simulate than both of the Phasecraft papers and appeared the same week (which does not diminish the Phasecraft work in the slightest! Also, full disclaimer: both myself and Sheng-Hsuan #15 are authors on that work).

  18. Del Says:

    I don’t have much to add sorry, but I do have two small things.

    First and foremost, this post is great. Thanks!

    Second, definitely take a nap if you feel so inclined, but forget about politics please. We can think about it when the next election comes close (or if there is a march or something to participate in), but what’s the point of doing it now?
    It’s just so depressing that dedicating any time would just hurt all the good people here. Let’s instead talk about quantum for now. Looking forward to your one blog post per day now 🙂

  19. Vladimir Says:

    From a condensed matter point of view, Quantinuum’s Fermi-Hubbard paper is much better than Phasecraft’s. The latter merely evolve an arbitrary initial state, i.e. solve QC’s home turf problem, a problem which – as I never tire of pointing out – holds little scientific interest and no technological potential *in itself*. Quantinuum actually make some effort to find a relevant initial state to evolve, but the abysmal (by the standards of available classical methods) energies they obtain aren’t exactly cause for optimism.

  20. Justin Says:

    What a week if QMA vs QCMA (using bosons in the proof!) is all the way at (6).

    I’m surprised they didn’t mention BQP/qpoly vs BQP/poly. Any sense if we’ll soon see an oracle separation there?

  21. Scott Says:

    Justin #20: Good question, I don’t know! Yes, it would seem surprising if a similar argument couldn’t separate BQP/poly from BQP/qpoly.

    I figured that QMA vs. QCMA deserves its own post later—but after I’ve understood something of the proof and it’s been verified.

  22. Alessandro Strumia Says:

    Is the quantum computing attempt at Fermilab competitive with industry?

  23. Scott Says:

    Alessandro Strumia #22:

    Short answer: No.

    Long answer: But there’s no reason to expect it to be! There are lots of smaller efforts that are doing things that are interesting and potentially relevant, even if they themselves are not close to the scaling frontier. Looking it up just now, apparently Fermilab has been studying shielding qubits from cosmic radiation by doing things deep underground, and also using long-lived qubits as dark matter detectors? I’d be happy to learn more from someone who knows about their work.

  24. notGPT Says:

    Please give some advice on good writing. I don’t think I have seen that discussed in detail on this blog. (Maybe something to blog about from Inkhaven?!) Just a few good sources or pointers would also be great. Also, what are some “easy” mistakes to be aware of, and how can one easily fix them? Getting word order wrong, and other simple stuff like that.
    Does anyone know where to find a list of such mistakes (writing books, maybe?) and how to properly fix them? It would also be great if Scott could say a little about how he maintains the standard of his own style. So, if not a generator, then at least a verifier algorithm for Scott-level great writing for blogs (or even academic writing) would be amazing.

  25. Scott Says:

    notGPT #24: Alright then. Here at Inkhaven, I’ve been meeting with blogger after blogger who hands me a printed draft of their latest post. I start reading and then I say, “I don’t understand what argument you’re trying to make here. I don’t understand the context you seem to be assuming. I don’t even understand the words and acronyms you’re using.”

    And the person says to me, “well, what I was actually trying to say was …”

    So then I listen for a while, and they tell me something that, whether I agree with it or not, makes 100x more sense than the blog post.

    So then I say, “that’s great!! what you just told me, that’s what you’re going to start your post with.”

    And then they thank me (!!).

    I feel like 80% of good writing is just learning to simulate this whole process internally, without needing to go through the routine.

  26. Soumik Ghosh Says:

    Thanks so much for the shoutout to our paper, Scott! Let me give a bit more context to the interested reader. This is adding to what Scott wrote in (5).

    Our key observation is a simple one—quantum error correction inherently generates peaked circuits. Using an error-correcting code, consider initializing k logical qubits in the logical |0> state using n physical qubits. Say the code is robust against bit-flip errors, and say some bit-flip errors indeed occur, changing a few of the logical |0>-s to |1>-s. Now, if you measure the syndrome of this error, that uniquely specifies what the error pattern is. It has to—for error correction to work!

    In other words, if you were to measure all the logical qubits in the standard basis, then conditioned on the syndrome measurement, you would see a unique string—ie, a “peak”— in the logical registers. This phenomenon is true not just for bit-flip errors and codes that correct for that, but for any error correcting code and any choice of errors that is correctable by the code.

    The rest of our paper is concerned with turning this observation into a verifiable quantum advantage experiment with plausible hardness guarantees, making it NISQ-friendly, and ruling out some obvious spoofers in the verification step. A nice feature of this scheme is that preparing logical qubits is what one would have to do anyway, en route to fault tolerance. So, one can think of this scheme as a “side quest” that experimentalists can merrily do to claim verifiable quantum advantage, without getting too distracted from building a fully fault-tolerant quantum computer. A win-win situation, so to speak!
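    To see the mechanism in miniature, here’s a toy with the 3-qubit repetition code (an illustration of mine, far simpler than the codes in our paper):

        # 3-qubit repetition code: every correctable bit-flip error has a
        # distinct syndrome, so conditioned on the syndrome the measured
        # string is unique -- a perfectly "peaked" output.
        def syndrome(bits):
            return (bits[0] ^ bits[1], bits[1] ^ bits[2])  # checks Z1Z2, Z2Z3

        codeword = (0, 0, 0)  # physical support of logical |0>
        for err in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]:
            word = tuple(b ^ e for b, e in zip(codeword, err))
            print("error", err, "-> output", word, "syndrome", syndrome(word))
        # Four errors, four distinct syndromes, four unique outputs: the
        # syndrome pins down exactly which string you'll see.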

  27. Scott Says:

    Soumik Ghosh #26: Thanks so much for that clarification, and I look forward to hearing more when you visit UT to give a talk about it!

  28. A Says:

    Hi Scott, I had a question related to the discussion on peaked circuits and BlueQubit’s protocol — fixing the circuit depth to be constant for the random/peaked layers and accounting for the obfuscation, would you expect the peakedness to decay asymptotically to 0 as the number of qubits is increased (e.g. inverse polynomially in n) or be bounded away from 0?

    I’m not sure if theoretical results are already known about peakedness in this shallow depth setting, but they seem useful. The broader question is, supposing BlueQubit’s construction does work in the NISQ regime, can it also be extended to large scale QCs easily?

  29. Julian Says:

    Will you be participating in the “one blog post a day” challenge while you’re at Inkhaven?

  30. Scott Says:

    Julian #29: As you can see, no, alas. But I have been inspired by being here to work on some writing and hopefully you’ll see the results soon!

  31. Scott Says:

    A #28: I don’t know the answer to that — it would be a good question to direct to Hrant Gharibyan or Hayk Tepanyan. Or if you wanted me to think about it, remind me of the relevant obfuscation steps (those that might decrease the peak weight) in a short self-contained way?

  32. Raoul Ohio Says:

    Speaking of Zvi Mowshowitz, it is hard to imagine how any one person can write that much every day, let alone learn enough for each day’s writing.

  33. Hayk Says:

    A #28: The peak doesn’t degrade with the number of qubits in our protocol. As Scott hinted in his reply #31, there might be degradation related to specific obfuscation steps. In our protocol, however, we choose the obfuscation so as not to degrade the peak at all.
    To answer your second question: we don’t see any reason why our protocol couldn’t easily be extended to large-scale QCs.

  34. Mark Spinelli Says:

    Soumik Ghosh #26:

    Thank you and your colleagues for your paper!

    I’ve got a potentially dumb question about it – the proposal seems to support a public (re)verification? In particular, based on my very poor and incomplete understanding, letting Alice be the classical verifier and Bob be the purportedly quantum prover, then as I understand it Alice publishes a codeword \(C_Z\), as well as an IQP circuit \(U\) and an error angle \(\theta\); Bob runs \(U(\theta)\) a number of times, measuring in the Hadamard basis, and then Alice uses knowledge of another code \(C_X\) with \(C_Z^\perp\subset C_X\) to decode and test Bob’s runs.

    The paper notes Alice keeps \(C_X\) as her own secret. But couldn’t Alice, or anyone else for that matter, find a good peakedness code \(C_X\) only after publishing the hardness code \(C_Z\)? Could Alice publish \(C_Z\) and \(U(\theta)\), then Bob runs \(U(\theta)\) measuring in the \(X\) basis and publishing his runs, then a third party Charlie pick his own \(C_X\) to decode Bob’s runs?

  35. Raoul Ohio Says:

    Re the second of the three takeaways: “what is QC useful for?”

    1. It might be that both QC and AI turn out to be very useful for a few things, and not so much for most stuff — who knows? Zvi recently (today?) discussed the few situations where he finds it worth the time and bother to ask AI for help.

    2. As for the frequent claimed “Quantum Supremacy” breakthroughs, I suspect that the vast majority of people in the CS/Math/Physics area (like me) usually have no idea what the calculation is or if it matters. Thus it would be nice if some results appeared about something that everyone understands. Factoring integers is a prime example, being easy to understand and the first thing most of us have heard about for QC. Although it might turn out that factoring is not a good fit for QC, a timeline of factoring results would be most illuminating.

    Folklore has it that long ago some QC factored 15 into 3*5. Is it true that no QC has been able to factor 21? If not, any idea what the biggest number factored is? Comments?

    3. Online financial advice for those wanting to invest in QC stocks is rather entertaining, seemingly devoid of any knowledge of QM. At any rate, there are several darlings of the QC investment world, including our old pals DWave. DWave’s attraction appears to be that they are perhaps the only company to have actually made and sold anything. There was once much debate posted to SO as to what DWave computers actually did, if anything. Any updates on what they are doing?

  36. Chris Vale Says:

    Scott, when you talk about running Shor’s algorithm before the next election, are you suggesting that encryption will be broken? Or some sort of small scale demonstration?

    Thanks!

    PS, huge fan, love your book, wish I understood the half of it I don’t, but the half I do understand is great!

  37. Jim Says:

    “I now think it’s a live possibility that we’ll have a fault-tolerant quantum computer running Shor’s algorithm before the next US presidential election.”

    Thank you Scott for being the perfect example of a Thanksgiving turkey – everything seems fine up until the day of the holiday!

    Just a year ago I believe you were saying this point was decades away.

  38. Scott Says:

    Jim #37: Yes, of course I’ve updated somewhat based on the surprisingly rapid progress and lack of showstoppers … but show me the actual recent quote where I said we could have any confidence that scalable QC was “decades away.” On the contrary, I’ve been saying for several years now that anyone for whom it really matters should already be making plans to migrate to post-quantum encryption.

  39. Scott Says:

    Chris Vale #36: To clarify — if, before the 2028 presidential election, a fully fault-tolerant Shor’s algorithm was used even just to factor 15 into 3×5, I would view the “live possibility” here as having come to pass.

    The point is, from that point forward, it seems like mostly a predictable matter of adding more fault-tolerant qubits and scaling up, and I find it hard to understand what the showstopper would be.

  40. Dan Says:

    What are your thoughts on A. King’s recent paper claiming quantum advantage on multi-objective optimization (MaxCUT) problems? (https://arxiv.org/pdf/2511.01762)

  41. Scott Says:

    Dan #40: My thoughts are that it would be phenomenal if someone (not me) looks carefully at whether the claims in that paper survive dequantization.

  42. Andrew King Says:

    Dan #40: Since I happened by: per the abstract, the only claim is that we handily beat IBM and all the classical solvers they ran against QAOA in their paper. In other words, as far as I know, all evidence still points to QA being much more effective than QAOA in practice (even when QAOA is allowed to use a noise-free simulation of “practice”).

  43. Chaoyang Lu Says:

    > I now think it’s a live possibility that we’ll have a fault-tolerant quantum computer running Shor’s algorithm before the next US presidential election.

    Hi Scott, a very interesting point – what would be your best guess for the logical qubit error rate (10^-x), the number (y) of logical qubits, and the number to be factorized (N)?

  44. Julian Says:

    Do you find yourself writing less these days because you’re too busy, or because the world is too depressing to write about?

  45. Scott Says:

    Chaoyang Lu #43: Sorry, I’m not going to guess at that level of detail, except maybe to make the safest prediction, that the numbers to be factored will include 15. 🙂

  46. Scott Says:

    Julian #44: Some of both.

  47. OhMyGoodness Says:

    You should invite a motivational speaker from DEVGRU to toughen up the quantum computing organization at UT. “The only easy day was yesterday” and “There will be time to sleep when I am dead” kinds of motivational tidbits.

  48. Ethereum Cofounder Issues Stark Crypto Warning That Could Spell Disaster For Bitcoin Amid Sudden Price Sell-Off – PI Global Investments Says:

    […] before the next U.S. presidential election,” quantum computer researcher Scott Aaronson wrote in blog post this month, referring to how a quantum computer could break the encryption that underpins […]

  49. Ethereum Co-founder Flags Quantum Threat to Bitcoin Says:

    […] of much of modern cryptography. As quantum computer researcher Scott Aaronson noted in a recent blog post, “Given the current staggering rate of hardware progress, I now think it’s a live […]

  50. Ethereum Cofounder Issues Stark BlackRock Warning That Could Spell Disaster For Bitcoin Amid Sudden Price Sell-Off - Invest Insider News Says:

    […] before the next U.S. presidential election,” quantum computer researcher Scott Aaronson wrote in blog post this month, referring to how a quantum computer could break the encryption that underpins […]

  51. Soumik Ghosh Says:

    Mark Spinelli #34: Yes, you are indeed right! The $C_X$ code can be chosen much later. And yes, a third party could also do the checking. There are exponentially many $C_X$ codes that are compatible with the same $C_Z$ code. A verifier could just pick any one of them.
