Archive for the ‘Quantum’ Category

Before we start on quantum

Tuesday, April 7th, 2026

Imagine that every week for twenty years, people message you asking you to comment on the latest wolf sighting, and every week you have to tell them: I haven’t seen a wolf, I haven’t heard a wolf, I believe wolves exist but I don’t yet see evidence of them anywhere near our town.

Then one evening, you hear a howl in the distance, and sure enough, on a hill overlooking the town is the clear silhouette of a large wolf. So you point to it — and all the same people laugh and accuse you of “crying wolf.”

Now you know how it’s been for me with cryptographically relevant quantum computing.


I’ve been writing about QC on this blog for a while, and have done hundreds of public lectures and interviews and podcasts on the subject. By now, I can almost always predict where a non-expert’s QC question is going from its first few words, and have a well-rehearsed answer ready to go the moment they stop talking. Yet sometimes I feel like it’s all for naught.

Only today did it occur to me that I should write about something more basic. Not quantum computing itself, but the habits of mind that seem to prevent some listeners from hearing whatever I or other researchers have to tell them about QC. The stuff that, unless we get past it, means we're wasting our breath.

Which habits of mind am I talking about?

  1. The Tyranny of Black and White. Hundreds of times, I’ve answered someone’s request to explain QC, only to have them nod impatiently, then interrupt as soon as they can with: “So basically, the take-home message is that quantum is coming, and it’ll change everything?” Someone else might respond to exactly the same words from me with: “So basically, you’re saying it’s all hype and I shouldn’t take any of it seriously?” As in my wolf allegory, the same person might even jump from one reaction to the other. Seeing this, I’ve become a fervent believer in horseshoe theory, in QC no less than in politics. Which sort of makes sense: if you think QCs are “the magic machines of the future that will revolutionize everything,” and then you learn that they’re not, why wouldn’t you jump to the opposite extreme and conclude you’ve been lied to and it’s all a scam?
  2. The Unidimensional Hype-Meter. “So … [long, thoughtful pause] … you’re actually telling me that some of what I hear about QC is real … but some of it is hype? Or—yuk yuk, I bet no one ever told you this one before—it’s a superposition of real and hype?” OK, that’s better. But it’s still trying to project everything down onto a 1-dimensional subspace that loses almost all the information!
  3. Words As Seasoning. I often get the sense that a listener is treating all the words of explanation—about amplitudes and interference, Shor versus Grover, physical versus logical qubits, etc.—as seasoning, filler, an annoying tic, a stalling tactic to put off answering the only questions that matter: “is Quantum real or not real? If it’s real, when is it coming? Which companies will own the Quantum space?” In reality, explanations are the entire substance of what I can offer. For my experience has consistently been that, if someone has no interest in learning what QC is, which classes of problems it helps for, etc., then even if I answer their simplistic questions like “which QC companies are good or bad?,” they won’t believe my answers anyway. Or they’ll believe my answers only until the next person comes along and tells them the opposite.
  4. Black-Boxing. Sometimes these days, I’ll survey the spectacular recent progress in fault-tolerance, 2-qubit gate fidelities, programmable hundred-qubit systems, etc., only to be answered with a sneer: “What’s the biggest number that Shor’s algorithm has factored? Still 15 after all these years? Haha, apparently the emperor has no clothes!” I’ve commented that this is sort of like dismissing the Manhattan Project as hopelessly stalled in 1944, on the ground that so far it hasn’t produced even a tiny nuclear explosion. Or the Apollo program in 1967, on the ground that so far it hasn’t gotten any humans even 10% of the way to the moon. Or GPT in 2020, on the ground that so far it can’t even do elementary-school math. Yes, sometimes emperors are naked—but you can’t tell until you actually look at the emperor! Engage with the specifics of quantum error correction. If there’s a reason why you think it can’t work beyond a certain scale, say so. But don’t fixate on one external benchmark and ignore everything happening under the hood, if the experts are telling you that under the hood is where all the action now is, and your preferred benchmark is only relevant later.
  5. Questions with Confused Premises. “When is Q-Day?” I confess that this question threw me for a loop the first few times I heard it, because I had no idea what “Q-Day” was. Apparently, it’s the single day when quantum computing becomes powerful enough to break all of cryptography? Or: “What differentiates quantum from binary?” “How will daily life be different once we all have quantum computers in our homes?” My advice: try to minimize the number of presuppositions baked into your question.
  6. Anchoring on Specific Marketing Claims. “What do you make of D-Wave’s latest quantum annealing announcement?” “What about IonQ’s claim to recognize handwriting with a QC?” “What about Microsoft’s claim to have built a topological qubit?” These questions can be fine as part of a larger conversation. Again and again, though, someone who doesn’t know the basics will lead with them—with whichever specific, contentious thing they most recently read. Then the entire conversation gets stuck at a deep node within the concept tree, and it can’t progress until we backtrack about five levels.

Anyway—sorry for yet another post of venting and ranting. Maybe this will help:

The wise child asks, “what are the main classes of problems that are currently known to admit superpolynomial quantum speedups?” To this child, you can talk about quantum simulation and finding hidden structures in abelian and occasionally nonabelian groups, as well as Forrelation, glued trees, HHL, and DQI—explaining how the central challenge has been to find end-to-end speedups for non-oracular tasks.

The wicked child asks, “so can I buy a quantum computer right now to help me pick stocks and search for oil and turbocharge LLMs, or is this entire thing basically a fraud?” To this child you answer: “the quantum computing people who seek you as their audience are frauds.”

The simple child asks, “what is quantum computing?” You answer: “it’s a strange new way of harnessing nature to do computation, one that dramatically speeds up certain tasks, but doesn’t really help with others.”

And to the child who doesn’t know how to ask—well, to that child you don’t need to bring up quantum computing at all. That child is probably already fascinated to learn classical stuff.

Quantum computing bombshells that are not April Fools

Wednesday, April 1st, 2026

For those of you who haven’t seen, there were actually two “bombshell” QC announcements this week. One, from Caltech, including friend-of-the-blog John Preskill, showed how to do quantum fault-tolerance with lower overhead than was previously known, by using high-rate codes, which could work for example in neutral-atom architectures (or possibly other architectures that allow nonlocal operations, like trapped ions). The second bombshell, from Google, gave a lower-overhead implementation of Shor’s algorithm to break 256-bit elliptic curve cryptography.

Notably, out of an abundance of caution, the Google team chose to “publish” its result via a cryptographic zero-knowledge proof that their circuit exists (so, without revealing the details to attackers). This is the first time I’ve ever seen a new mathematical result actually announced that way, although I understand that there’s precedent in the 1500s, when mathematicians would (for example) prove their ability to solve cubic equations by challenging their rivals to mathematical duels. I’m not sure how much it will actually help, as once other groups know that a smaller circuit exists, it might be only a short time until they’re able to find it as well.

Neither of these results changes the basic principles of QC that we’ve known for decades, but they do change the numbers.

When you put both of them together, Bitcoin signatures for example certainly look vulnerable to quantum attack earlier than was previously expected!  In particular, the Caltech group estimates that a mere 25,000 physical qubits might suffice for this, where a year ago the best estimates were in the millions. How much time will this save — maybe a year?  Subtracted, of course, from a number of years that no one knows.

In any case, these results provide an even stronger impetus for people to upgrade now to quantum-resistant cryptography.  They—meaning you, if relevant—should really get on that!

When I got an early heads-up about these results—especially the Google team’s choice to “publish” via a zero-knowledge proof—I thought of Frisch and Peierls, calculating how much U-235 was needed for a chain reaction in 1940, but not publishing it, even though the latest results on nuclear fission had been openly published just the year prior. Will we, in quantum computing, also soon cross that threshold? But I got strong pushback on that analogy from the cryptography and cybersecurity people who I most respect. They said: we have decades of experience with this, and the answer is that you publish. And, they said, if publishing causes people still using quantum-vulnerable systems to crap their pants … well, maybe that’s what needs to happen right now.

Naturally, journalists have been hounding me for comments, though it was the worst possible week, when I needed to host like four separate visitors in Austin. I hope this post helps! Please feel free to ask questions or post further details in the comments.

And now, with no time for this blog post to leaven and rise, I need to go home for my family’s Seder. Happy Passover!

Congrats to Bennett and Brassard on the Turing Award!

Wednesday, March 18th, 2026

I’m on a spring break vacation-plus-lecture-tour with Dana and the kids in Mexico City this week, and wasn’t planning to blog, but I see that I need to make an exception. Charles Bennett and Gilles Brassard have won the Turing Award, for their seminal contributions to quantum computing and information including the BB84 quantum key distribution scheme. This is the first-ever Turing Award specifically for quantum stuff (though previous Turing Award winners, including Andy Yao, Leslie Valiant, and Avi Wigderson, have had quantum among their interests).

As a practical proposal, BB84 is already technologically feasible but has struggled to find an economic niche, in a world where conventional public-key encryption already solves much the same problem using only the standard Internet—and where, even after scalable quantum computers become able to break many of our current encryption schemes, post-quantum encryption (again running on the standard Internet) stands ready to replace those schemes. Nevertheless, as an idea, BB84 has already been transformative, playing a central role in the birth of quantum information science itself. Beyond BB84, Bennett and Brassard have made dozens of other major contributions to quantum information science, with a personal favorite of mine being the 1994 BBBV (Bennett–Bernstein–Brassard–Vazirani) paper, which first established the limitations of quantum computers at solving unstructured search problems (and indeed, proved the optimality of Grover’s algorithm before Grover’s algorithm even existed).

While I take my kids to see Aztec artifacts, you can learn much more from Ben Brubaker’s Quanta article, to which I contributed without even knowing that it would be about Bennett and Brassard winning the Turing Award (info that was strictly embargoed before today). It’s an honor to have known Charlie and Gilles as well as I have for decades, and to have been able to celebrate one of their previous honors, the Wolf Prize, with them in Jerusalem. Huge congrats to two of the founders of our field!

The “JVG algorithm” is crap

Saturday, March 7th, 2026

Sorry to interrupt your regular programming about the AI apocalypse, etc., and return to the traditional beat of this blog’s very earliest years … but I’ve now gotten multiple messages asking me to comment on something called the “JVG (Jesse–Victor–Gharabaghi) algorithm” (yes, the authors named it after themselves). This is presented as a massive improvement over Shor’s factoring algorithm, which could (according to popular articles) allow RSA-2048 to be broken using only 5,000 physical qubits.

On inspection, the paper’s big new idea is that, in the key step of Shor’s algorithm where you compute x^r mod N in a superposition over all r’s, you instead precompute the x^r mod N’s on a classical computer and then load them all into the quantum state.

Alright kids, why does this not work? Shall we call on someone in the back of the class—like, any undergrad quantum computing class in the world? Yes class, that’s right! There are exponentially many r’s. Computing them all takes exponential time, and loading them into the quantum computer also takes exponential time. We’re out of the n^2-time frying pan but into the 2^n-time fire. This can only look like it wins on tiny numbers; on large numbers it’s hopeless.
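For anyone who wants to see the blowup concretely, here’s a toy sketch (my own illustration, not the paper’s code). For an n-bit modulus N, Shor’s algorithm needs x^r mod N over roughly 2^(2n) values of r. Computing any one value is cheap via repeated squaring; the “precompute them all classically” approach is not:

```python
# Toy illustration of why precomputing all x^r mod N defeats the purpose.
# One modular power costs ~poly(n) time; the full table costs ~2^(2n).

def one_value(x: int, r: int, N: int) -> int:
    """A single modular power via repeated squaring: polynomial in the bit-length."""
    return pow(x, r, N)  # Python's three-argument pow uses fast modular exponentiation

def all_values(x: int, N: int, n_bits: int) -> list[int]:
    """The 'precompute everything' approach: 2^(2n) time AND memory."""
    return [pow(x, r, N) for r in range(2 ** (2 * n_bits))]

# For N = 15 (n = 4 bits), the table has 2^8 = 256 entries: trivial.
# For an RSA-2048 modulus, it would have 2^4096 entries, vastly more than
# the number of atoms in the observable universe.
table = all_values(7, 15, 4)
print(len(table))            # 256
print(one_value(7, 3, 15))   # 7^3 = 343, and 343 mod 15 = 13
```

The point is visible already here: the single-value path scales with the bit-length, while the table scales with the exponential of it.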

If you want to see people explaining the same point more politely and at greater length, try this from Hacker News or this from Postquantum.com.

Even for those who know nothing about quantum algorithms, is there anything that could’ve raised suspicion here?

  1. The paper didn’t appear on the arXiv, but someplace called “Preprints.org.” Come to think of it, I should add this to my famous Ten Signs a Claimed Mathematical Breakthrough is Wrong! It’s not that there isn’t tons of crap on the arXiv as well, but so far I’ve seen pretty much only crap on preprint repositories other than arXiv, ECCC, and IACR.
  2. Judging from a Google search, the claim seems to have gotten endlessly amplified on clickbait link-farming news sites, but ignored by reputable science news outlets—yes, even the usual quantum hypesters weren’t touching this one!

Often, when something is this bad, the merciful answer is to let it die in obscurity. In this case, I feel like there was a sufficient level of intellectual hooliganism, just total lack of concern for what’s true, that those involved deserve to have this Shtetl-Optimized post as a tiny bit of egg on their faces forever.

Moar Updatez

Thursday, March 5th, 2026

To start on a somber note: those of us at UT Austin are in mourning this week for Savitha Shan, an undergrad double major here in economics and information systems, who was murdered over the weekend by an Islamist terrorist who started randomly shooting people on Sixth Street, apparently angry about the war in Iran. Two other innocents were also killed.

As it happens, these murders happened just a few hours after the end of my daughter’s bat mitzvah, and in walking distance from the venue. The bat mitzvah itself was an incredibly joyful and successful event that consumed most of my time lately, and which I might or might not say more about—the nastier the online trolls get, the more I need to think about my family’s privacy.


Of all the many quantum computing podcasts/interviews I’ve done recently, I’m probably happiest with this one, with Yuval Boger of QuEra. It covers all the main points about where the hardware currently is, the threat to public-key cryptography, my decades-long battle against quantum applications hype, etc. etc., and there’s even an AI-created transcript that eliminates my verbal infelicities!


A month ago, I blogged about “The Time I Didn’t Meet Jeffrey Epstein” (basically, because my mom warned me not to). Now the story has been written up in Science magazine, under the clickbaity headline “Meet Three Scientists Who Said No to Epstein.” (Besides yours truly, the other two scientists are friend-of-the-blog Sean Carroll, whose not-meeting-Epstein story I’d already heard directly from him, and David Agus, whose story I hadn’t heard.)

To be clear: as I explained in my post, I never actually said “no” to Epstein. Instead, based on my mom’s advice, I simply failed to follow up with his emissary, to the point where no meeting ever happened.

Anyway, ever since Science ran this story and it started making the rounds on social media, my mom has been getting congratulatory messages from friends of hers who saw it!


I’ve been a huge fan of the philosopher-novelist Rebecca Newberger Goldstein ever since I read her celebrated debut work, The Mind-Body Problem, back in 2005. Getting to know Rebecca and her husband, Steven Pinker, was a highlight of my last years at MIT. So I’m thrilled that Rebecca will be visiting UT Austin next week to give a talk on Spinoza, related to her latest book The Mattering Instinct (which I’m reading right now), and hosted by me and my colleague Galen Strawson in UT’s philosophy department. More info is in the poster below. If you’re in Austin, I hope to see you there!


The 88-year-old Donald Knuth has published a 5-page document about how Claude was able to solve a tricky graph theory problem that arose while he was working on the latest volume of The Art of Computer Programming—a series that Knuth is still writing after half a century. As you’d expect from Knuth, the document is almost entirely about the graph theory problem itself and Claude’s solution to it, eschewing broader questions about the nature of machine intelligence and how LLMs are changing life on Earth. To anyone who’s been following AI-for-math lately, the fact that Claude now can help with this sort of problem won’t come as a great shock. The virality is presumably because Knuth is such a legend that to watch him interact productively with an LLM is sort of like watching Leibniz, Babbage, or Turing do the same.


John Baez is a brilliant mathematical physicist and writer, who was blogging about science before the concept of “blogging” even existed, and from whom I’ve learned an enormous amount. But regarding John’s quest for the past 15 years — namely, to use category theory to help solve the climate crisis (!) — I always felt like the Cookie Monster would, with equal intellectual justification, say that the key to arresting climate change was for him to eat more Oreos. Then I read this Quanta article on the details of Baez’s project, and … uh … I confess it failed to change my view. Maybe someday I’ll understand why it’s better to say using category theory what I would’ve said in a 100x simpler way without category theory, but I fear that day is not today.

Updatez!

Friday, February 20th, 2026
  1. The STOC’2026 accepted papers list is out. It seems to me that there’s an embarrassment of riches this year. I felt especially gratified to see the paper on the determination of BusyBeaver(5) on the list, reflecting a broad view of what theory of computing is about.
  2. There’s a phenomenal profile of Henry Yuen in Quanta magazine. Henry is now one of the world leaders of quantum complexity theory, involved in breakthroughs like MIP*=RE and now pioneering the complexity theory of quantum states and unitary transformations (the main focus of this interview). I’m proud that Henry tells Quanta that he learned about the field in 2007 or 2008 from a blog called … what was it again? … Shtetl-Optimized? I’m also proud that I got to help mentor Henry when he was a PhD student of my wife Dana Moshkovitz at MIT. Before I read this Quanta profile, I didn’t even know the backstory about Henry’s parents surviving and fleeing the Cambodian genocide, or about Henry growing up working in his parents’ restaurant. Henry never brought any of that up!
  3. See Lance’s blog for an obituary of Joe Halpern, a pioneer of the branch of theoretical computer science that deals with reasoning about knowledge (e.g., the muddy children puzzle), who sadly passed away last week. I knew Prof. Halpern a bit when I was an undergrad at Cornell. He was a huge presence in the Cornell CS department who’ll be sorely missed.
  4. UT Austin has announced the formation of a School of Computing, which will bring together the CS department (where I work) with statistics, data science, and several other departments. Many of UT’s peer institutions have recently done the same. Naturally, I’m excited for what this says about the expanded role of computing at UT going forward. We’ll be looking to hire even more new faculty than we were before!
  5. When I glanced at the Chronicle of Higher Education to see what was new, I learned that researchers at OpenAI had proposed a technical solution, called “watermarking,” that might help tackle the crisis of students relying on AI to write all their papers … but that OpenAI had declined to deploy that solution. The piece strongly advocates a legislative mandate in favor of watermarking LLM outputs, and addresses some of the main counterarguments to that position.
  6. For those who can’t get enough podcasts of me, here are the ones I’ve done recently. Quantum: Science vs. Mythology on the Peggy Smedley Show. AI Alignment, Complexity Theory, and the Computability of Physics, on Alexander Chin’s Philosophy Podcast. And last but not least, What Is Quantum Computing? on the Robinson Erhardt Podcast.
  7. Also, here’s an article that quotes me, entitled “Bitcoin needs a quantum upgrade. So why isn’t it happening?” And here’s a piece that interviews me in Investor’s Business Daily, entitled “Is quantum computing the next big tech shift?” (I have no say over these titles.)

On reducing the cost of breaking RSA-2048 to 100,000 physical qubits

Sunday, February 15th, 2026

So, a group based in Sydney, Australia has put out a preprint with a new estimate of the resource requirements for Shor’s algorithm, claiming that if you use LDPC codes rather than the surface code, you should be able to break RSA-2048 with fewer than 100,000 physical qubits, which is an order-of-magnitude improvement over the previous estimate by friend-of-the-blog Craig Gidney. I’ve now gotten sufficiently many inquiries about it that it’s passed the threshold of blog-necessity.

A few quick remarks, and then we can discuss more in the comments section:

  • Yes, this is serious work. The claim seems entirely plausible to me, although it would be an understatement to say that I haven’t verified the details. The main worry I’d have is simply that LDPC codes are harder to engineer than the surface code (especially for superconducting qubits, less so for trapped-ion), because you need wildly nonlocal measurements of the error syndromes. Experts (including Gidney himself, if he likes!) should feel free to opine in the comments.
  • I have no idea by how much this shortens the timeline for breaking RSA-2048 on a quantum computer. A few months? Dunno. I, for one, had already “baked in” the assumption that further improvements were surely possible by using better error-correcting codes. But it’s good to figure it out explicitly.
  • On my Facebook, I mused that it might be time for the QC community to start having a conversation about whether work like this should still be openly published—a concern that my friends on the engineering side of QC have expressed to me. I got strong pushback from cryptographer and longtime friend Nadia Heninger, who told me that the crypto community has already had this conversation for decades, and has come down strongly on the side of open publication, albeit with “responsible disclosure” waiting periods, which are often 90 days. While the stakes would surely be unusually high with a full break of RSA-2048, Nadia didn’t see that the basic principles were any different. Nadia’s arguments updated me toward saying that groups with further improvements to the resource requirements for Shor’s algorithm should probably just go ahead and disclose what they’ve learned, and the crypto community, drawing on decades of experience, will have their backs for having done the right thing. Certainly, any advantage that such disclosure would give to hackers, who could take the new Shor circuits and simply submit them to the increasingly powerful QCs that will gradually come online via cloud services, needs to be balanced against the loud, clear, open warning the world will get to migrate faster to quantum-resistant encryption.
  • I’m told that these days, the biggest practical game is breaking elliptic curve cryptography, not breaking Diffie-Hellman or RSA. Somewhat ironically, elliptic curve crypto is likely to fall to quantum computers a bit before RSA and Diffie-Hellman do, because ECC’s “better security” (against classical attacks, that is) led people to use 256-bit keys rather than 2,048-bit keys, and Shor’s algorithm mostly just cares about the key size.
  • In the acknowledgments of the paper, I’m thanked for “thoughtful feedback on the title.” Indeed, their original title was about “breaking RSA-2048” with 100,000 physical qubits. When they sent me a draft, I pointed out that they needed to change it, since journalists would predictably misinterpret it to mean that they’d already done it, rather than simply saying that it could be done.
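To see why key size dominates, here’s a deliberately crude back-of-envelope model (entirely my own illustration, not the paper’s resource estimates). If Shor’s circuit size for an n-bit key scales very roughly like n^3 (about n^2 per modular multiplication, times about n multiplications), then the key’s bit-length is the main input; real estimates differ in their constants and in the details of discrete log versus factoring:

```python
# Crude toy model (my illustration only): Shor circuit size ~ n^3 for an
# n-bit key. Real resource estimates are far more involved, but the cubic
# scaling conveys why 256-bit ECC keys are a softer target than RSA-2048.

def toy_shor_cost(n_bits: int) -> int:
    """An n^3 proxy for Shor circuit size on an n-bit key (toy model)."""
    return n_bits ** 3

ecc = toy_shor_cost(256)    # 256-bit elliptic-curve key
rsa = toy_shor_cost(2048)   # 2048-bit RSA modulus
print(rsa // ecc)           # (2048/256)^3 = 8^3 = 512x more work in this model
```

In this toy model, attacking RSA-2048 is about 512 times more work than attacking ECC-256, despite ECC being the classically “stronger” scheme.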

Quantum is my happy place

Wednesday, January 28th, 2026
  • Here’s a 53-minute podcast that I recorded this afternoon with a high school student named Micah Zarin, and which ended up covering …[checks notes] … consciousness, free will, brain uploading, the Church-Turing Thesis, AI, quantum mechanics and its various interpretations, quantum gravity, quantum computing, and the discreteness or continuity of the laws of physics. I strongly recommend 2x speed as usual.
  • QIP’2026, the world’s premier quantum computing conference, is happening right now in Riga, Latvia, locally organized by a team headed by the great Andris Ambainis, who I’ve known since 1999 and who’s played a bigger role in my career than almost anyone else. I’m extremely sorry not to be there, despite what I understand to be the bitter cold. Family and teaching obligations mean that I jet around the world so much less than I used to. But many of my students and colleagues are there, and I’ll plan a blog post on news from QIP next week.
  • Greg Burnham of Epoch AI tells me that Epoch has released a list of AI-for-math challenge problems—i.e., open math problems that are below the level of P vs. NP and the Riemann Hypothesis but still of very serious research interest, and that they’re putting forward as worthy targets right now for trying to solve with AI assistance. A few examples that should be familiar to some Shtetl-Optimized readers: degree vs. sensitivity of Boolean functions, improving the constant in the exponent of the General Number Field Sieve, giving an algorithm to test whether a knot has unknotting number of 1, and extending Apéry’s proof of the irrationality of ζ(3) to other constants. Notably, for each problem, alongside a beautifully written description by a (human) expert, they also show you what the state-of-the-art models were able to do on that problem when they tried.
  • There’s been a major advance in understanding constant-depth quantum circuits, by my former PhD student Daniel Grier (now a professor at UCSD), along with his PhD student Jackson Morris and Kewen Wu of IAS. Namely, they show that any function computable in TC0 (constant-depth, polynomial-size classical circuits with threshold gates) is also computable in QAC0 (constant-depth quantum circuits with 1-qubit and generalized Toffoli gates), as long as you provide many copies of the input. Two examples of such TC0 functions, which we therefore now know to be in QAC0 given many copies of the input, are Parity and Majority. It’s been a central open problem of quantum complexity theory for a quarter-century to prove that Parity is not in QAC0, complementing the celebrated result from the 1980s that Parity is not in classical AC0 (a constant-depth circuit class that, for all we know, might be incomparable with QAC0). It’s known that showing Parity∉QAC0 is equivalent to showing that QAC0 can’t implement the “fanout” function, which makes many copies of an input bit. To say that we’ve gained a new understanding of why this problem is so hard would be an understatement.
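For concreteness, the containments at issue in that last item can be written out (standard complexity-class notation; the “given many input copies” qualifier is my shorthand for the paper’s input-copy assumption):

```latex
% Classical lower bound from the 1980s:
\mathrm{Parity} \notin \mathsf{AC}^0
% The new Grier-Morris-Wu result, assuming many copies of the input:
\mathsf{TC}^0 \subseteq \mathsf{QAC}^0 \quad \text{(given many input copies)}
% The quarter-century-old open problem, equivalent to QAC^0 not computing fanout:
\mathrm{Parity} \stackrel{?}{\notin} \mathsf{QAC}^0
```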

More on whether useful quantum computing is “imminent”

Sunday, December 21st, 2025

These days, the most common question I get goes something like this:

A decade ago, you told people that scalable quantum computing wasn’t imminent. Now, though, you claim it plausibly is imminent. Why have you reversed yourself??

I appreciated the friend of mine who paraphrased this as follows: “A decade ago you said you were 35. Now you say you’re 45. Explain yourself!”


A couple weeks ago, I was delighted to attend Q2B in Santa Clara, where I gave a keynote talk entitled “Why I Think Quantum Computing Works” (link goes to the PowerPoint slides). This is one of the most optimistic talks I’ve ever given. But mostly that’s just because, uncharacteristically for me, here I gave short shrift to the challenge of broadening the class of problems that achieve huge quantum speedups, and just focused on the experimental milestones achieved over the past year. With every experimental milestone, the little voice in my head that asks “but what if Gil Kalai turned out to be right after all? what if scalable QC wasn’t possible?” grows quieter, until now it can barely be heard.

Going to Q2B was extremely helpful in giving me a sense of the current state of the field. Ryan Babbush gave a superb overview (I couldn’t have improved a word) of the current status of quantum algorithms, while John Preskill’s annual where-we-stand talk was “magisterial” as usual (that’s the word I’ve long used for his talks), making mine look like just a warmup act for his. Meanwhile, Quantinuum took a victory lap, boasting of their recent successes in a way that I considered basically justified.


After returning from Q2B, I then did an hour-long podcast with “The Quantum Bull” on the topic “How Close Are We to Fault-Tolerant Quantum Computing?” You can watch it here:

As far as I remember, this is the first YouTube interview I’ve ever done that concentrates entirely on the current state of the QC race, skipping any attempt to explain amplitudes, interference, and other basic concepts. Despite (or conceivably because of?) that, I’m happy with how this interview turned out. Watch if you want to know my detailed current views on hardware—as always, I recommend 2x speed.

Or for those who don’t have the hour, a quick summary:

  • In quantum computing, there are the large companies and startups that might succeed or might fail, but are at least trying to solve the real technical problems, and some of them are making amazing progress. And then there are the companies that have optimized for doing IPOs, getting astronomical valuations, and selling a narrative to retail investors and governments about how quantum computing is poised to revolutionize optimization and machine learning and finance. Right now, I see these two sets of companies as almost entirely disjoint from each other.
  • The interview also contains my most direct condemnation yet of some of the wild misrepresentations that IonQ, in particular, has made to governments about what QC will be good for (“unlike AI, quantum computers won’t hallucinate because they’re deterministic!”)
  • The two approaches that had the most impressive demonstrations in the past year are trapped ions (especially Quantinuum but also Oxford Ionics) and superconducting qubits (especially Google but also IBM), and perhaps also neutral atoms (especially QuEra but also Infleqtion and Atom Computing).
  • Contrary to a misconception that refuses to die, I haven’t dramatically changed my views on any of these matters. As I have for a quarter century, I continue to profess a lot of confidence in the basic principles of quantum computing theory worked out in the mid-1990s, and I also continue to profess ignorance of exactly how many years it will take to realize those principles in the lab, and of which hardware approach will get there first.
  • But yeah, of course I update in response to developments on the ground, because it would be insane not to! And 2025 was clearly a year that met or exceeded my expectations on hardware, with multiple platforms now boasting >99.9% fidelity two-qubit gates, at or above the theoretical threshold for fault-tolerance. This year updated me in favor of taking more seriously the aggressive pronouncements—the “roadmaps”—of Google, Quantinuum, QuEra, PsiQuantum, and other companies about where they could be in 2028 or 2029.
  • One more time for those in the back: the main known applications of quantum computers remain (1) the simulation of quantum physics and chemistry themselves, (2) breaking a lot of currently deployed cryptography, and (3) eventually, achieving some modest benefits for optimization, machine learning, and other areas (but it will probably be a while before those modest benefits win out in practice). To be sure, the detailed list of quantum speedups expands over time (as new quantum algorithms get discovered) and also contracts over time (as some of the quantum algorithms get dequantized). But the list of known applications “from 30,000 feet” remains fairly close to what it was a quarter century ago, after you hack away the dense thickets of obfuscation and hype.
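To make the “at or above the theoretical threshold” claim in the hardware bullet above a bit more concrete: once physical error rates fall below the fault-tolerance threshold, logical error rates get suppressed exponentially in the code distance. Here’s a minimal back-of-envelope sketch, using the standard surface-code heuristic p_L ≈ 0.1·(p/p_th)^((d+1)/2) with a rough ~1% threshold — textbook-style constants, not numbers from any company’s roadmap:

```python
# Back-of-envelope surface-code scaling. The threshold value and the
# prefactor below are rough textbook heuristics, not numbers from any
# actual hardware roadmap.

P_TH = 1e-2  # rough surface-code threshold (~1% physical error rate)

def logical_error_rate(p: float, d: int) -> float:
    """Heuristic logical error rate per round at code distance d:
    p_L ~ 0.1 * (p / p_th)^((d + 1) / 2)."""
    return 0.1 * (p / P_TH) ** ((d + 1) / 2)

def distance_for_target(p: float, target: float) -> int:
    """Smallest odd code distance whose logical error rate meets target."""
    d = 3
    while logical_error_rate(p, d) > target:
        d += 2  # surface-code distances are odd
    return d

def physical_qubits_per_logical(d: int) -> int:
    """One surface-code patch uses roughly 2*d^2 physical qubits
    (data plus measurement qubits)."""
    return 2 * d * d

# A 99.9% two-qubit gate fidelity corresponds to p ~ 1e-3, comfortably
# below the ~1% threshold, so error suppression kicks in.
p = 1e-3
d = distance_for_target(p, target=1e-12)
print(f"distance {d}, ~{physical_qubits_per_logical(d)} physical qubits per logical qubit")
```

The point of the sketch is just the qualitative shape: below threshold, each increase in distance multiplies the logical error rate down by the ratio p/p_th, which is why crossing 99.9% fidelity matters so much more than the raw number suggests.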

I’m going to close this post with a warning. When Frisch and Peierls wrote their now-famous memo in March 1940, estimating the mass of Uranium-235 that would be needed for a fission bomb, they didn’t publish it in a journal, but communicated the result through military channels only. As recently as February 1939, Frisch and Meitner had published in Nature their theoretical explanation of recent experiments, showing that the uranium nucleus could fission when bombarded by neutrons. But by 1940, Frisch and Peierls realized that the time for open publication of these matters had passed.

Similarly, at some point, the people doing detailed estimates of how many physical qubits and gates it’ll take to break actually deployed cryptosystems using Shor’s algorithm are going to stop publishing those estimates, if for no other reason than the risk of giving too much information to adversaries. Indeed, for all we know, that point may have been passed already. This is the clearest warning that I can offer in public right now about the urgency of migrating to post-quantum cryptosystems, a process that I’m grateful is already underway.


Update: Someone on Twitter who’s “long $IONQ” says he’ll be posting about and investigating me every day, never resting until UT Austin fires me, in order to punish me for slandering IonQ and other “pure play” SPAC IPO quantum companies. And also, because I’ve been anti-Trump and pro-Biden. He confabulates that I must be trying to profit from my stance (e.g., by shorting the companies I criticize), it being inconceivable to him that anyone would say anything purely because they care about what’s true.

Podcasts!

Saturday, November 22nd, 2025

A 9-year-old named Kai (“The Quantum Kid”) and his mother interviewed me about closed timelike curves, wormholes, Deutsch’s resolution of the Grandfather Paradox, and the implications of time travel for computational complexity:

This is actually one of my better podcasts (and only 24 minutes long), so check it out!


Here’s a podcast I did a few months ago with “632nm” about P versus NP and my other usual topics:


For those who still can’t get enough, here’s an interview about AI alignment for the “Hidden Layers” podcast that I did a year ago, and that I think I forgot to share on this blog at the time:


What else is in the back-catalog? Ah yes: the BBC interviewed me about quantum computing for a segment on Moore’s Law.


As you may have heard, Steven Pinker recently wrote a fantastic popular book about the concept of common knowledge, entitled When Everyone Knows That Everyone Knows… Steve’s efforts render largely obsolete my 2015 blog post Common Knowledge and Aumann’s Agreement Theorem, one of the most popular posts in this blog’s history. But I’m willing to live with that, not only because Steven Pinker is Steven Pinker, but also because he used my post as a central source for the topic. Indeed, you should watch his podcast with Richard Hanania, where Steve lucidly explains Aumann’s Agreement Theorem, noting how he first learned about it from this blog.