Archive for the ‘Quantum’ Category

Quantum Complexity Theory Student Project Showcase #5 (2025 Edition)!

Thursday, July 31st, 2025

Sorry for the long blog-hiatus! I was completely occupied for weeks, teaching an intensive course on theoretical computer science to 11-year-olds (!), at a math camp in St. Louis that was also attended by my 8-year-old son. Soon I’ll accompany my 12-year-old daughter to a science camp in Connecticut, where I’ll also give lectures.

There’s a great deal to say about these experiences, but for now: it’s been utterly transformative and life-affirming to spend my days in the real world, teaching precocious, enthusiastic, non-jaded children the material I love, rather than (let’s say) scrolling through depressing world news and social media posts and emails from hateful trolls on my phone. It’s made me feel 25 years younger (well, at least until I need to walk up a flight of stairs). It’s made me want to go back to actual research and teaching, which besides family and friends have been the main sources of joy in my life.


So on that note, and without further ado: I hereby present the latest Quantum Complexity Theory Student Project Showcase! As the name suggests, this is where I share a selection of the best research projects from the students who took my graduate class on Quantum Complexity Theory at UT Austin this past spring.

See here for the four previous iterations of the Showcase.

(As you can see, the timing hasn’t been 100% consistent.)

I expect that, as in past editions, many of this year’s projects will lead to published research papers, or at the very least, preprints on the arXiv.


And now, really without further ado, the projects!

(1) Quantum Hermite Transform and Gaussian Goldreich-Levin, by Vishnu Iyer and Siddhartha Jain.

Vishnu and Sid propose a new primitive for quantum algorithms—the Hermite transform, as opposed to the Fourier transform—and give at least one successful example of its use. They’d be eager to know if anyone can think of other applications!

(2) Quantum Statistical Witness Indistinguishability, by Shafik Nassar and Ronak Ramachandran.

In modern cryptography, even if it isn’t statistical zero-knowledge (SZK), a proof protocol might have the weaker property of being statistically witness-indistinguishable (SWI): that is, Arthur can’t tell which of two possible yes-witnesses Merlin holds. Here Shafik and Ronak initiate the study of quantum SWI, and prove the basic properties of this notion, such as the equivalence of honest and dishonest verifier. Hopefully this will serve as a springboard for someone to find an actual QSWI protocol for an interesting problem.

(3) A Zero-Knowledge Protocol for Keyed Unitary Families, by David Joy and Angela Zhang.

Continuing the theme of quantum zero-knowledge, David and Angela give a protocol by which Merlin can convince Arthur that he knows a unitary relating one pure state to another, without revealing the unitary. Again continuing a theme, applications of this protocol are sought!

(4) On Query Lower Bounds for Aaronson-Kuperberg Unitary Synthesis Circuits, by Arko Banerjee.

Back in 2006, when we formulated our so-called “Unitary Synthesis Conjecture,” Greg Kuperberg and I showed that if a quantum algorithm applies an n-qubit unitary U(f) after making a single query to a Boolean function f, then as we range over f’s, there can be at most 4^n possible values of U(f). Here Arko improves our bound to 2^n, which is tight. He also tries extremely hard to generalize our bound to the two-query case, not quite succeeding but proving partial results that hopefully will be helpful to others.

(5) Quantum Search with Non-Interacting Bosons and Fermions, by Aravind Karthigeyan.

This one really made me think. Aravind studies the problem of searching for a single marked vertex on the complete graph with N vertices, using either M bosons or M fermions that can hop between the vertices. With M bosons, he shows that the search succeeds in Θ(√(N/M)) time with high probability, which is just the usual runtime for Grover search with M parallel searchers. With fermions, by contrast, he shows that more time is needed. Why? Because of the Pauli Exclusion Principle! The fermions all “step on each other’s toes,” competing to be the one that jumps onto the marked vertex, which limits the advantage of having M fermions searching in parallel.

(6) Limits to Pseudodeterminism in Quantum Communication Protocols, by Jiawei Li.

Jiawei revisits the famous Hidden Matching Problem, which is known to have an exponential gap between its randomized one-way communication complexity of ~√n, and its quantum one-way communication complexity of ~log(n). He makes a new observation about this problem: namely, if you want the exponential quantum communication advantage, then you must content yourself with a protocol that can generate many different possible correct answers with appreciable probabilities (i.e., that generates large min-entropy). In other words, no good quantum protocol for the problem is pseudodeterministic. This complements, for example, my and Shih-Han Hung’s work, which showed the same statement for quantum supremacy experiments based on Random Circuit Sampling, or the long line of works that showed it for experiments that violate the Bell/CHSH inequality.

Congratulations to my students for their accomplishments, and thanks to them for giving me permission to include their work in this showcase!

Raymond Laflamme (1960-2025)

Tuesday, June 24th, 2025

Even with everything happening in the Middle East right now, even with (relatedly) everything happening in my own family (my wife and son sheltering in Tel Aviv as Iranian missiles rained down), even with all the rather ill-timed travel I’ve found myself doing as these events unfolded (Ecuador and the Galapagos and now STOC’2025 in Prague) … there’s been another thing, a huge one, weighing on my soul.

Ray Laflamme played a major role in launching the whole field of quantum computing and information, and also a major role in launching my own career. The world has lost him too soon. I’ve lost him too soon.

After growing up in Quebec—I still hear his French-Canadian accent, constantly on the verge of laughter, as I’m writing this—Ray went into physics and became a PhD student of Stephen Hawking. No, not a different Stephen Hawking. If you’ve read or watched anything by or about Hawking, including A Brief History of Time, you might remember the story where Hawking believed for a while that time would reverse itself as the universe contracted in a Big Crunch, with omelettes unscrambling themselves, old people turning into children, etc. etc., but then two graduate students persuaded him that that was totally wrong, and entropy would continue to increase like normal. Anyway, Ray was one of those students (Don Page was the other). I’d always meant to ask Ray to explain what argument changed Hawking’s mind, since the idea of entropy decreasing during contraction just seemed obviously wrong to me! Only today, while writing this post, did I find a 1993 paper by Hawking, Laflamme, and Lyons that explains the matter perfectly clearly, including three fallacious intuitions that Hawking had previously held. (Even though, as they comment, “the anatomy of error is not ruled by logic.”)

Anyway, in the mid-1990s, starting at Los Alamos National Lab and continuing at the University of Waterloo, Ray became a pioneer of the then-new field of quantum computing and information. In 1997, he was a coauthor of one of the seminal papers that proved the possibility of fault-tolerant quantum computation with a constant error rate, what we now call the Threshold Theorem (Aharonov and Ben-Or had such a result independently). He made lots of other key early contributions to the theory of quantum error-correcting codes and fault-tolerance.

When it comes to Ray’s scientific achievements after his cosmology work with Hawking and after quantum fault-tolerance—well, there are many, but let me talk about two. Perhaps the biggest is the KLM (Knill-Laflamme-Milburn) Theorem. It would be fair to say that KLM started the entire field of optical or photonic quantum computation, as it’s existed in the 21st century. In one sentence, what KLM showed is that it’s possible to build a universal quantum computer using only

  1. identical single-photon states,
  2. a network of “linear-optical elements” (that is, beamsplitters and phaseshifters) that the photons travel through, and
  3. feedforward measurements—that is, measurements of an optical mode that tell you how many photons are there, in such a way that you can choose (using a classical computer) which optical elements to apply next, conditioned on the outcome of the measurement.

All of a sudden, there was a viable path to building a quantum computer out of photons, where you wouldn’t need to get pairs of photons to interact with each other, which had previously been the central sticking point. The key insight was that feedforward measurements, combined with the statistical properties of identical bosons (which is what photons are), are enough to simulate the effect of two-photon interactions.
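To make that insight a little more tangible, here’s a minimal numpy sketch (my own toy illustration, not anything from the KLM paper) of the relevant boson statistics: the Hong-Ou-Mandel effect, in which two identical photons entering a 50/50 beamsplitter always exit bunched in the same mode. It uses the fact, which comes up again below, that linear-optical transition amplitudes are permanents of submatrices of the mode-transformation matrix.

```python
# Toy illustration (mine, not from the KLM paper) of boson statistics:
# the Hong-Ou-Mandel effect.  The amplitude for identical photons passing
# through a linear-optical network is the permanent of a submatrix of the
# mode-transformation matrix, with rows/columns repeated according to the
# input/output occupation numbers.
import itertools
import math
import numpy as np

def permanent(M):
    # Brute force over permutations; fine for tiny matrices.
    n = M.shape[0]
    return sum(math.prod(M[i, p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

def amplitude(U, s, t):
    # Transition amplitude from input occupations s to output occupations t.
    rows = [i for i, si in enumerate(s) for _ in range(si)]
    cols = [j for j, tj in enumerate(t) for _ in range(tj)]
    norm = math.sqrt(math.prod(math.factorial(x) for x in s + t))
    return permanent(U[np.ix_(rows, cols)]) / norm

U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # 50/50 beamsplitter

# One photon enters each input mode; the "coincidence" outcome (1,1) vanishes.
for t in [(2, 0), (1, 1), (0, 2)]:
    print(t, round(abs(amplitude(U, (1, 1), t)) ** 2, 6))
# Prints probability 0.5 for (2,0) and (0,2), and 0 for (1,1).
```

The vanishing coincidence term is pure interference between indistinguishable bosons, with no photon-photon interaction anywhere in sight, and that’s exactly the kind of resource KLM showed how to leverage.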

Have you heard of PsiQuantum, the startup in Palo Alto with a $6 billion valuation and hundreds of employees that’s right now trying to build an optical quantum computer with a million qubits? Or Xanadu, its competitor in Toronto? These, in some sense, are companies that grew out of a theorem: specifically the KLM Theorem.

For me, though, the significance of KLM goes beyond the practical. In 2011, I used the KLM Theorem, together with the fact (known since the 1950s) that photonic amplitudes are the permanents of matrices, to give a new proof of Leslie Valiant’s celebrated 1979 theorem that calculating the permanent is a #P-complete problem. Thus, as I pointed out in a talk two years ago at Ray’s COVID-delayed 60th birthday conference, entitled Ray Laflamme, Complexity Theorist (?!), KLM had said something new about computational complexity, without any intention of doing so. More generally, KLM was crucial backdrop to my and Alex Arkhipov’s later work on BosonSampling, where we gave strong evidence that some classical computational hardness—albeit probably not universal quantum computation—remains in linear optics, even if one gets rid of KLM’s feedforward measurements.

(Incidentally, I gave my talk at Ray’s birthday conference by Zoom, as I had a conflicting engagement. I’m now sad about that: had I known that that would’ve been my last chance to see Ray, I would’ve cancelled any other plans.)

The second achievement of Ray’s that I wanted to mention was his 1998 creation, again with his frequent collaborator Manny Knill, of the One Clean Qubit or “DQC1” model of quantum computation. In this model, you get to apply an arbitrary sequence of 2-qubit unitary gates, followed by measurements at the end, just like in standard quantum computing—but the catch is that the initial state consists of just a single qubit in the state |0⟩, and all other qubits in the maximally mixed state. If all qubits started in the maximally mixed state, then nothing would ever happen, because the maximally mixed state is left invariant by all unitary transformations. So it would stand to reason that, if all but one of the qubits start out maximally mixed, then almost nothing happens. The big surprise is that this is wrong. Instead you get a model that, while probably not universal for quantum computation, can do a variety of things in polynomial time that we don’t know how to do classically, including estimating the traces of exponentially large unitary matrices and the Jones polynomials of trace closures of braids (indeed, both of these problems turn out to be DQC1-complete). The discovery of DQC1 was one of the first indications that there’s substructure within BQP. Since then, the DQC1 model has turned up again and again in seemingly unrelated investigations in quantum complexity theory—way more than you’d have any right to expect a priori.
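To illustrate the trace-estimation trick, here’s a toy density-matrix simulation (my own sketch, with a Haar-random unitary standing in for whatever U you care about; on a real DQC1 device you’d estimate the expectations from repeated measurements rather than computing them exactly):

```python
# Toy simulation (mine) of DQC1 trace estimation: one clean qubit in |0>,
# n maximally mixed qubits, then a Hadamard and a controlled-U.  The clean
# qubit's X and Y expectations recover tr(U)/2^n.
import numpy as np
from scipy.stats import unitary_group

n = 3                                    # number of "dirty" qubits
d = 2 ** n
U = unitary_group.rvs(d)                 # the unitary whose trace we want

Id = np.eye(d)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])

# Initial state: |0><0| on the clean qubit, I/2^n on the rest.
rho = np.kron(np.diag([1.0, 0.0]), Id / d)

HI = np.kron(H, Id)                      # Hadamard on the clean qubit
CU = np.block([[Id, np.zeros((d, d))],   # controlled-U (clean qubit = control)
               [np.zeros((d, d)), U]])
rho = CU @ HI @ rho @ HI.conj().T @ CU.conj().T

ex = np.trace(np.kron(X, Id) @ rho).real   # = Re tr(U) / 2^n
ey = np.trace(np.kron(Y, Id) @ rho).real   # = Im tr(U) / 2^n
print(ex + 1j * ey, np.trace(U) / d)       # these agree
```

Note that nothing here ever purifies the dirty qubits: the lone clean qubit’s coherence does all the work.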

Beyond his direct contributions to quantum information, Ray will be remembered as one of the great institution-builders of our field. He directed the Institute for Quantum Computing (IQC) at the University of Waterloo in Canada, from its founding in 2002 until he finally stepped down in 2017. This includes the years 2005-2007, when I was a postdoc at IQC—two of the most pivotal years of my life, when I first drove a car and went out on dates (neither of which I do any longer, for different reasons…), when I started this blog, when I worked on quantum money and learnability of quantum states and much more, and when I taught the course that turned into my book Quantum Computing Since Democritus. I fondly remember Ray, as my “boss,” showing me every possible kindness. He even personally attended the Quantum Computing Since Democritus lectures, which is why he appears as a character in the book.

As if that wasn’t enough, Ray also directed the quantum information program of the Canadian Institute for Advanced Research (CIFAR). If you ever wondered why Canada, as a nation, has punched so far above its weight in quantum computing and information for the past quarter-century—Ray Laflamme is part of the answer.

At the same time, if you imagine the stereotypical blankfaced university administrator, who thinks and talks only in generalities and platitudes (“how can we establish public-private partnerships to build a 21st-century quantum workforce?”) … well, Ray was whatever is the diametric opposite of that. Despite all his responsibilities, Ray never stopped being a mensch, a friend, an intellectually curious scientist, a truth-teller, and a jokester. Whenever he and I talked, probably at least a third of the conversation was raucous laughter.

I knew that Ray had spent many years battling cancer. I naïvely thought he was winning, or had won. But as so often with cancer, it looks like the victory was only temporary. I miss him already. He was a ray of light in the world—a ray that sparkles, illuminates, and as we now know, even has the latent power of universal quantum computation.

Quantum! AI! Everything but Trump!

Wednesday, April 30th, 2025
  • Grant Sanderson, of 3blue1brown, has put up a phenomenal YouTube video explaining Grover’s algorithm, and dispelling the fundamental misconception about quantum computing, that QC works simply by “trying all the possibilities in parallel.” Let me not futz around: this video explains, in 36 minutes, what I’ve tried to explain over and over on this blog for 20 years … and it does it better. It’s a masterpiece. Yes, I consulted with Grant for this video (he wanted my intuitions for “why is the answer √N?”), and I even have a cameo at the end of it, but I wish I had made the video. Damn you, Grant!
  • The incomparably great, and absurdly prolific, blogger Zvi Mowshowitz and yours truly spend 1 hour and 40 minutes discussing AI existential risk, education, blogging, and more. I end up “interviewing” Zvi, who does the majority of the talking, which is fine by me, as he has many important things to say! (Among them: his searing critique of those K-12 educators who see it as their life’s mission to prevent kids from learning too much too fast—I’ve linked his best piece on this from the header of this blog.) Thanks so much to Rick Coyle for arranging this conversation.
  • Progress in quantum complexity theory! In 2000, John Watrous showed that the Group Non-Membership problem is in the complexity class QMA (Quantum Merlin-Arthur). In other words, if some element g is not contained in a given subgroup H of an exponentially large finite group G, which is specified via a black box, then there’s a short quantum proof that g∉H, with only ~log|G| qubits, which can be verified on a quantum computer in time polynomial in log|G|. This soon raised the question of whether Group Non-Membership could be used to separate QMA from QCMA by oracles, where QCMA (Quantum Classical Merlin Arthur), defined by Aharonov and Naveh in 2002, is the subclass of QMA where the proof needs to be classical, but the verification procedure can still be quantum. In other words, could Group Non-Membership be the first example of a problem where quantum proofs actually help?

    In 2006, alas, Greg Kuperberg and I showed that the answer was probably “no”: Group Non-Membership has “polynomial QCMA query complexity.” This means that there’s a QCMA protocol for the problem where Arthur makes only polylog|G| quantum queries to the group oracle—albeit possibly with a number of additional quantum computation steps that’s exponential in log|G|! To prove our result, Greg and I needed to make mild use of the Classification of Finite Simple Groups, one of the crowning achievements of 20th-century mathematics (its proof is about 15,000 pages long). We conjectured (but couldn’t prove) that someone else, who knew more about the Classification than we did, could show that Group Non-Membership was simply in QCMA outright.

    Now, after almost 20 years, François Le Gall, Harumichi Nishimura, and Dhara Thakkar have finally proven our conjecture—showing that Group Order, and therefore also Group Non-Membership, are indeed in QCMA. They did indeed need to use the Classification, doing one thing for almost all finite groups covered by the Classification, but a different thing for groups of “Ree type” (whatever those are).

    Interestingly, the Group Membership problem had also been a candidate for separating BQP/qpoly, or quantum polynomial time with polynomial-size quantum advice—my personal favorite complexity class—from BQP/poly, or the same thing with polynomial-size classical advice. And it might conceivably still be! The authors explain to me that their protocol doesn’t put Group Membership (with group G and subgroup H depending only on the input length n) into BQP/poly, the reason being that their short classical witnesses for g∉H depend on both g and H, in contrast to Watrous’s quantum witnesses which depended only on H. So there’s still plenty that’s open here! Actually, for that matter, I don’t know of good evidence that the entire Group Membership problem isn’t in BQP—i.e., that quantum computers can’t just solve the whole thing outright, with no Merlins or witnesses in sight!

    Anyway, huge congratulations to Le Gall, Nishimura, and Thakkar for peeling back our ignorance of these matters a bit further! Reeeeeeeee!
  • Potential big progress in quantum algorithms! Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone (GLM) have given what they present as a quantum algorithm to estimate the determinant of an n×n matrix A, exponentially faster in some contexts than we know how to do it classically.

    [Update (May 5): In the comments, Alessandro Luongo shares a paper where he and Changpeng Shao describe what appears to be essentially the same algorithm back in 2020.]

    The algorithm is closely related to the 2008 HHL (Harrow-Hassidim-Lloyd) quantum algorithm for solving systems of linear equations. Which means that anyone who knows the history of this class of quantum algorithms knows to ask immediately: what’s the fine print? A couple weeks ago, when I visited Harvard and MIT, I had a chance to catch up with Seth Lloyd, so I asked him, and he kindly told me. Firstly, we assume the matrix A is Hermitian and positive semidefinite. Next, we assume A is sparse, and not only that, but there’s a QRAM data structure that points to its nonzero entries, so you don’t need to do Grover search or the like to find them, and can query them in coherent superposition. Finally, we assume that all the eigenvalues of A are at least some constant λ>0. The algorithm then estimates det(A), to multiplicative error ε, in time that scales linearly with log(n), and polynomially with 1/λ and 1/ε.

    Now for the challenge I leave for ambitious readers: is there a classical randomized algorithm to estimate the determinant under the same assumptions and with comparable running time? In other words, can the GLM algorithm be “Ewinized”? Seth didn’t know, and I think it’s a wonderful crisp open question! (For a concrete classical starting point, see the sketch at the end of this post.) On the one hand, if Ewinization is possible, it wouldn’t be the first time that publicity on this blog had led to the brutal murder of a tantalizing quantum speedup. On the other hand … well, maybe not! I also consider it possible that the problem solved by GLM—for exponentially-large, implicitly-specified matrices A—is BQP-complete, as for example was the general problem solved by HHL. This would mean, for example, that one could embed Shor’s factoring algorithm into GLM, and that there’s no hope of dequantizing it unless P=BQP. (Even then, though, just like with the HHL algorithm, we’d still face the question of whether the GLM algorithm was “independently useful,” or whether it merely reproduced quantum speedups that were already known.)

    Anyway, quantum algorithms research lives! So does dequantization research! If basic science in the US is able to continue at all—the thing I promised not to talk about in this post—we’ll have plenty to keep us busy over the next few years.
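Since I posed the Ewinization challenge above, let me at least spell out the natural classical baseline (my own sketch, not a known dequantization of GLM): write log det(A) = tr(log A), then combine Hutchinson’s stochastic trace estimator with a truncated Taylor series for the matrix logarithm, assuming for simplicity that A is symmetric with eigenvalues in [λ, 1]. The catch is that this runs in time linear in n (times poly(1/λ, 1/ε) factors), whereas GLM claim time polylogarithmic in n under their QRAM access assumption, so the sketch doesn’t answer the question; it just makes concrete what a dequantizer would need to beat.

```python
# Classical baseline (my sketch, not a dequantization of GLM): estimate
# log det(A) = tr(log A) via Hutchinson probes plus the series
# log(A) = -sum_{k>=1} (I - A)^k / k, valid when A's eigenvalues lie in
# (0, 2); here we assume they lie in [lam, 1].
import numpy as np

rng = np.random.default_rng(0)

def logdet_estimate(A, num_probes=30, num_terms=60):
    n = A.shape[0]
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        v = z.copy()
        acc = 0.0
        for k in range(1, num_terms + 1):
            v = v - A @ v                     # now v = (I - A)^k z
            acc -= (z @ v) / k                # accumulates z^T log(A) z
        total += acc
    return total / num_probes

# Random symmetric test matrix with spectrum in [lam, 1].
n, lam = 200, 0.1
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
eigs = rng.uniform(lam, 1.0, size=n)
A = (Q * eigs) @ Q.T                          # A = Q diag(eigs) Q^T
print(logdet_estimate(A), np.sum(np.log(eigs)))   # estimate vs. exact
```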

Theoretical Computer Science for AI Alignment … and More

Thursday, April 10th, 2025

In this terrifying time for the world, I’m delighted to announce a little glimmer of good news. I’m receiving a large grant from the wonderful Open Philanthropy, to build up a group of students and postdocs over the next few years, here at UT Austin, to do research in theoretical computer science that’s motivated by AI alignment. We’ll think about some of the same topics I thought about in my time at OpenAI—interpretability of neural nets, cryptographic backdoors, out-of-distribution generalization—but we also hope to be a sort of “consulting shop,” to which anyone in the alignment community can come with theoretical computer science problems.

I already have two PhD students and several undergraduate students working in this direction. If you’re interested in doing a PhD in CS theory for AI alignment, feel free to apply to the CS PhD program at UT Austin this coming December and say so, listing me as a potential advisor.

Meanwhile, if you’re interested in a postdoc in CS theory for AI alignment, to start as early as this coming August, please email me your CV and links to representative publications, and arrange for two recommendation letters to be emailed to me.


The Open Philanthropy project will put me in regular contact with all sorts of people who are trying to develop complexity theory for AI interpretability and alignment. One great example of such a person is Eric Neyman—previously a PhD student of Tim Roughgarden at Columbia, now at the Alignment Research Center, the Berkeley organization founded by my former student Paul Christiano. Eric has asked me to share an exciting announcement, along similar lines to the above:

The Alignment Research Center (ARC) is looking for grad students and postdocs for its visiting researcher program. ARC is trying to develop algorithms for explaining neural network behavior, with the goal of advancing AI safety (see here for a more detailed summary). Our research approach is fairly theory-focused, and we are interested in applicants with backgrounds in CS theory or ML. Visiting researcher appointments are typically 10 weeks long, and are offered year-round.

If you are interested, you can apply here. (The link also provides more details about the role, including some samples of past work done by ARC.) If you have any questions, feel free to email hiring@alignment.org.

Some of my students and I are working closely with the ARC team. I like what I’ve seen of their research so far, and would encourage readers with the relevant background to apply.


Meantime, I of course continue to be interested in quantum computing! I’ve applied for multiple grants to continue doing quantum complexity theory, though whether or not I can get such grants will alas depend (among other factors) on whether the US National Science Foundation continues to exist as more than a shadow of what it was. The signs look ominous; Science magazine reports that the NSF just cut by half the number of awarded graduate fellowships, and this has almost certainly directly affected students who I know and care about.


Meantime we all do the best we can. My UTCS colleague, Chandrajit Bajaj, is currently seeking a postdoc in the general area of Statistical Machine Learning, Mathematics, and Statistical Physics, for up to three years. Topics include:

  • Learning various dynamical systems through their Stochastic Hamiltonians. This involves many subproblems in geometry, stochastic optimization, and stabilized flows, which would be interesting in their own right.
  • Optimizing task dynamics on different algebraic varieties of applied interest — Grassmannians, the Stiefel and Flag manifolds, Lie groups, etc.

If you’re interested, please email Chandra at bajaj@cs.utexas.edu.


Thanks so much to the folks at Open Philanthropy, and to everyone else doing their best to push basic research forward even while our civilization is on fire.

On the JPMC/Quantinuum certified quantum randomness demo

Wednesday, March 26th, 2025

These days, any quantum computing post I write ought to begin with the disclaimer that the armies of Sauron are triumphing around the globe, this is the darkest time for humanity most of us have ever known, and nothing else matters by comparison. Certainly not quantum computing. Nevertheless stuff happens in quantum computing and it often brings me happiness to blog about it—certainly more happiness than doomscrolling or political arguments.


So then: today JP Morgan Chase announced that, together with Quantinuum and DoE labs, they’ve experimentally demonstrated the protocol I proposed in 2018, and further developed in a STOC’2023 paper with Shih-Han Hung, for using current quantum supremacy experiments to generate certifiable random bits for use in cryptographic applications. See here for our paper in Nature—the JPMC team was gracious enough to include me and Shih-Han as coauthors.

Mirroring a conceptual split in the protocol itself, Quantinuum handled the quantum hardware part of my protocol, while JPMC handled the rest: modification of the protocol to make it suitable for trapped ions, as well as software to generate pseudorandom challenge circuits to send to the quantum computer over the Internet, then to verify the correctness of the quantum computer’s outputs (thereby ensuring, under reasonable complexity assumptions, that the outputs contained at least a certain amount of entropy), and finally to extract nearly uniform random bits from the outputs. The experiment used Quantinuum’s 56-qubit trapped-ion quantum computer, which took a couple of seconds to respond to each challenge. Verification of the outputs was done using the Frontier and Summit supercomputers. The team estimates that about 70,000 certified random bits were generated over 18 hours, in such a way that, using the best currently-known attack, you’d need at least about four Frontier supercomputers working continuously to spoof the quantum computer’s outputs, and get the verifier to accept non-random bits.

We should be clear that this gap, though impressive from the standpoint of demonstrating quantum supremacy with trapped ions, is not yet good enough for high-stakes cryptographic applications (more about that later). Another important caveat is that the parameters of the experiment aren’t yet good enough for my and Shih-Han’s formal security reduction to give assurances: instead, for the moment one only has “practical security,” or security against a class of simplified yet realistic attackers. I hope that future experiments will build on the JPMC/Quantinuum achievement and remedy these issues.


The story of this certified randomness protocol starts seven years ago, when I had lunch with Or Sattath at a Japanese restaurant in Tel Aviv. Or told me that I needed to pay more attention to the then-recent Quantum Lightning paper by Mark Zhandry. I already know that paper is great, I said. You don’t know the half of it, Or replied. As one byproduct of what he’s doing, for example, Mark gives a way to measure quantum money states in order to get certified random bits—bits whose genuine randomness (not pseudorandomness) is certified by computational intractability, something that wouldn’t have been possible in a classical world.

Well, why do you even need quantum money states for that? I asked. Why not just use, say, a quantum supremacy experiment based on Random Circuit Sampling, like the one Google is now planning to do (i.e., the experiment Google would do a year after this conversation)? Then, the more I thought about that question, the more I liked the idea that these “useless” Random Circuit Sampling experiments would do something potentially useful despite themselves, generating certified entropy as just an inevitable byproduct of passing our benchmarks for sampling from certain classically-hard probability distributions. Over the next couple weeks, I worked out some of the technical details of the security analysis (though not all! it was a big job, and one that only got finished years later, when I brought Shih-Han to UT Austin as a postdoc and worked with him on it for a year).

I emailed the Google team about the idea; they responded enthusiastically. I also got in touch with UT Austin’s intellectual property office to file a provisional patent, the only time I’ve done that in my career. UT and I successfully licensed the patent to Google, though the license lapsed when Google’s priorities changed. Meantime, a couple years ago, when I visited Quantinuum’s lab in Broomfield, Colorado, I learned that a JPMC-led collaboration toward an experimental demonstration of the protocol was then underway. The protocol was well-suited to Quantinuum’s devices, particularly given their ability to apply two-qubit gates with all-to-all connectivity and fidelity approaching 99.9%.

I should mention that, in the intervening years, others had also studied the use of quantum computers to generate cryptographically certified randomness; indeed it became a whole subarea of quantum computing. See especially the seminal work of Brakerski, Christiano, Mahadev, Vazirani, and Vidick, which gave a certified randomness protocol that (unlike mine) relies only on standard cryptographic assumptions and allows verification in classical polynomial time. The “only” downside is that implementing their protocol securely seems to require a full fault-tolerant quantum computer (capable of things like Shor’s algorithm), rather than current noisy devices with 50-100 qubits.


For the rest of this post, I’ll share a little FAQ, adapted from my answers to a journalist’s questions. Happy to answer additional questions in the comments.

  • To what extent is this a world-first?

Well, it’s the first experimental demonstration of a protocol to generate cryptographically certified random bits with the use of a quantum computer.

To remove any misunderstanding: if you’re just talking about the use of quantum phenomena to generate random bits, without certifying the randomness of those bits to a faraway skeptic, then that’s been easy to do for generations (just stick a Geiger counter next to some radioactive material!). The new part, the part that requires a quantum computer, is all about the certification.

Also: if you’re talking about the use of separated, entangled parties to generate certified random bits by violating the Bell inequality (see eg here) — that approach does give certification, but the downside is that you need to believe that the two parties really are unable to communicate with each other, something that you couldn’t certify in practice over the Internet.  A quantum-computer-based protocol like mine, by contrast, requires just a single quantum device.

  • Why is the certification element important?

In any cryptographic application where you need to distribute random bits over the Internet, the fundamental question is, why should everyone trust that these bits are truly random, rather than being backdoored by an adversary?

This isn’t so easy to solve.  If you consider any classical method for generating random bits, an adversary could substitute a cryptographic pseudorandom generator without anyone being the wiser.

The key insight behind the quantum protocol is that a quantum computer can solve certain problems efficiently, but only (it’s conjectured, and proven under plausible assumptions) by sampling an answer randomly — thereby giving you certified randomness, once you verify that the quantum computer really has solved the problem in question.  Unlike with a classical computer, there’s no way to substitute a pseudorandom generator, since randomness is just an inherent part of a quantum computer’s operation — specifically, when the entangled superposition state randomly collapses on measurement.
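To make that concrete, here’s a toy model (my own sketch, not the actual JPMC/Quantinuum pipeline, which uses trapped-ion circuits, different thresholds, and a final extraction step) of the scoring behind verification. A Haar-random unitary stands in for a pseudorandom challenge circuit, and the verifier computes a linear cross-entropy score from the returned samples:

```python
# Toy model (mine, not the JPMC/Quantinuum code) of verification for
# certified randomness.  The verifier scores samples x_1..x_m by
# XEB = 2^n * mean(p_ideal(x_i)): ~2 for honest quantum sampling,
# ~1 for guessing that ignores the circuit.
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(1)
n, shots = 8, 500
d = 2 ** n

U = unitary_group.rvs(d)               # stand-in for one challenge circuit
probs = np.abs(U[:, 0]) ** 2           # ideal output distribution on |0...0>
probs /= probs.sum()

def xeb(samples):
    return d * np.mean(probs[samples])

honest = rng.choice(d, size=shots, p=probs)    # what a quantum computer returns
uniform = rng.choice(d, size=shots)            # guessing without the circuit
cheater = np.full(shots, np.argmax(probs))     # unbounded classical spoofer

print(xeb(honest), xeb(uniform), xeb(cheater))   # ~2.0, ~1.0, ~6 respectively
```

Note the third line: an adversary who can classically compute all the p(x)’s passes the test with zero entropy. That’s exactly why the entropy guarantee rests on the classical hardness of simulating the challenge circuits, and also why honest verification (which likewise needs the p(x)’s) is so expensive.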

  • What are the applications and possible uses?

One potential application is to proof-of-stake cryptocurrencies, like Ethereum.  These cryptocurrencies are vastly more energy-efficient than “proof-of-work” cryptocurrencies (like Bitcoin), but they require lotteries to be run constantly to decide which currency holder gets to add the next block to the blockchain (and get paid for it).  Billions of dollars are riding on these lotteries being fair.

Other potential applications are to zero-knowledge protocols, lotteries and online gambling, and deciding which precincts to audit in elections. See here for a nice perspective article that JPMC put together discussing these and other potential applications.

Having said all this, a major problem right now is that verifying the results using a classical computer is extremely expensive — indeed, basically as expensive as spoofing the results would be.  This problem, and other problems related to verification (eg “why should everyone else trust the verifier?”), are the reasons why most people will probably pass on this solution in the near future, and generate random bits in simpler, non-quantum-computational ways.

We do know, from e.g. Brakerski et al.’s work, that the problem of making the verification fast is solvable with sufficient advancements in quantum computing hardware.  Even without hardware advancements, it might also be solvable with new theoretical ideas — one of my favorite research directions.

  • Is this an early win for quantum computing?

It’s not directly an advancement in quantum computing hardware, but yes, it’s a very nice demonstration of such advancements — of something that’s possible today but wouldn’t have been possible just a few short years ago.  It’s a step toward using current, non-error-corrected quantum computers for a practical application that’s not itself about quantum mechanics but that really does inherently require quantum computers.

Of course it’s personally gratifying to see something I developed get experimentally realized after seven years.  Huge congratulations to the teams at JP Morgan Chase and Quantinuum, and thanks to them for the hard work they put into this.


Unrelated Announcement: See here for a podcast about quantum computing that I recorded with, of all organizations, the FBI. As I told the gentlemen who interviewed me, I’m glad the FBI still exists, let alone its podcast!

Jacob Barandes and Me

Tuesday, March 4th, 2025

Please enjoy Harvard’s Jacob Barandes and yours truly duking it out for 2.5 hours on YouTube about the interpretation of quantum mechanics, and specifically Jacob’s recent proposal involving “indivisible stochastic dynamics,” with Curt Jaimungal as moderator. As always, I strongly recommend watching with captions turned on and at 2X speed.

To summarize what I learned in one paragraph: just like in Bohmian mechanics, Jacob wants classical trajectories for particles, which are constructed so as to reproduce the predictions of QM perfectly. But unlike the Bohmians, Jacob doesn’t want to commit to any particular rule for the evolution of those particle trajectories. He merely asserts, metaphysically, that the trajectories exist. My response was basically, “OK fine, you can do that if you want, but what does it buy me?” We basically went around in circles on that question the entire time, though hopefully with many edutaining digressions.

Despite the lack of resolution, I felt pretty good about the conversation afterward: Jacob got an extensive opportunity to explain his ideas to listeners, along with his detailed beefs against both the Many-Worlds and Copenhagen interpretations. Meanwhile, even though I spoke less than Jacob, I did get some opportunities to do my job, pushing back and asking the kinds of questions I imagined most physicists would ask (even though I’m not a physicist, I felt compelled to represent them!). Jacob and I ended the conversation much as we began: disagreeing on extremely friendly terms.

Then, alas, I read the comments on YouTube and got depressed. Apparently, I’m a hidebound academic elitist who’s failed to grasp Jacob’s revolutionary, paradigm-smashing theory, and who kept arrogantly interrupting with snide, impertinent questions (“OK, but what can I do with this theory that I couldn’t do before?”). And, I learned, the ultimate proof of my smug, ivory-tower malice was to be found in my body language, the way I constantly smiled nervously and rocked back and forth. I couldn’t help but wonder: have these people watched any other YouTube videos that I’m in? I don’t get to pick how I look and sound. I came out of the factory this way.

One commenter opined that I must hate Jacob’s theory only because I’ve poured my life into quantum computing, which depends on superposition, the confusing concept that Jacob has now unmasked as a farce. Presumably it’s beyond this person’s comprehension that Jacob makes exactly the same predictions as I make for what a quantum computer will do when built; Jacob just prefers a different way of talking about it.

I was reminded that optimizing for one’s scientific colleagues is wildly different from optimizing for YouTube engagement. In science, it’s obvious to everyone that the burden of proof is on whoever is presenting the new idea—and that this burden is high (albeit not infinitely high), especially with anything as well-trodden and skull-strewn as the foundations of quantum mechanics. The way the game works is: other people try as hard as they can to shoot the new idea down, so we see how it fares under duress. This is not a sign of contempt for new ideas, but of respect for them.

On YouTube, the situation is precisely reversed. There, anyone perceived as the “mainstream establishment” faces a near-insurmountable burden of proof, while anyone perceived as “renegade” wins by default if they identify any hole whatsoever in mainstream understanding. Crucially, the renegade’s own alternative theories are under no particular burden; indeed, the details of their theories are not even that important or relevant. I don’t want to list science YouTubers who’ve learned to exploit that dynamic masterfully, though I’m told one rhymes with “Frabine Schlossenfelder.” Of course this mirrors what’s happened in the wider world, where RFK Jr. now runs American health policy, Tulsi Gabbard runs the intelligence establishment, and other conspiracy theorists have at last fired all the experts and taken control of our civilization, and are eagerly mashing the buttons to see what happens. I’d take Jacob Barandes, or even Sabine, a billion times over the lunatics in power. But I do hope Jacob turns out to be wrong about Many-Worlds, because it would give me solace to know that there are other branches of the wavefunction where things are a little more sane.

FAQ on Microsoft’s topological qubit thing

Thursday, February 20th, 2025

Q1. Did you see Microsoft’s announcement?
A. Yes, thanks, you can stop emailing to ask! Microsoft’s Chetan Nayak was even kind enough to give me a personal briefing a few weeks ago. Yesterday I did a brief interview on this for the BBC’s World Business Report, and I also commented for MIT Technology Review.

Q2. What is a topological qubit?
A. It’s a special kind of qubit built using nonabelian anyons, which are excitations that can exist in a two-dimensional medium, behaving neither as fermions nor as bosons. The idea grew out of seminal work by Alexei Kitaev, Michael Freedman, and others starting in the late 1990s. Topological qubits have proved harder to create and control than ordinary qubits.

Q3. Then why do people care about topological qubits?
A. The dream is that they could eventually be more resilient to decoherence than regular qubits, since an error, in order to matter, needs to change the topology of how the nonabelian anyons are braided around each other. So you’d have some robustness built into the physics of your system, rather than having to engineer it laboriously at the software level (via quantum fault-tolerance).

Q4. Did Microsoft create the first topological qubit?
A. Well, they say they did! [Update: Commenters point out to me that buried in Nature‘s review materials is the following striking passage: “The editorial team wishes to point out that the results in this manuscript do not represent evidence for the presence of Majorana zero modes in the reported devices. The work is published for introducing a device architecture that might enable fusion experiments using future Majorana zero modes.” So, the situation is that Microsoft is unambiguously claiming to have created a topological qubit, and they just published a relevant paper in Nature, but their claim to have created a topological qubit has not yet been accepted by peer review.]

Q5. Didn’t Microsoft claim the experimental creation of Majorana zero modes—a building block of topological qubits—back in 2018, and didn’t they then need to retract their claim?
A. Yep. Certainly that history is making some experts cautious about the new claim. When I asked Chetan Nayak how confident I should be, his response was basically “look, we now have a topological qubit that’s behaving fully as a qubit; how much more do people want?”

Q6. Is this a big deal?
A. If the claim stands, I’d say it would be a scientific milestone for the field of topological quantum computing and physics beyond. The number of topological qubits manipulated in a single experiment would then have finally increased from 0 to 1, and depending on how you define things, arguably a “new state of matter” would even have been created, one that doesn’t appear in nature (but only in Nature).

Q7. Is this useful?
A. Not yet! If anyone claims that a single qubit, or even 30 qubits, are already useful for speeding up computation, you can ignore anything else that person says. (Certainly Microsoft makes no such claim.) On the question of what we believe quantum computers will or won’t eventually be useful for, see like half the archives of this blog over the past twenty years.

Q8. Does this announcement vindicate topological qubits as the way forward for quantum computing?
A. Think of it this way. If Microsoft’s claim stands, then topological qubits have finally reached some sort of parity with where more traditional qubits were 20-30 years ago. I.e., the non-topological approaches like superconducting, trapped-ion, and neutral-atom have an absolutely massive head start: there, Google, IBM, Quantinuum, QuEra, and other companies now routinely do experiments with dozens or even hundreds of entangled qubits, and thousands of two-qubit gates. Topological qubits can win if, and only if, they turn out to be so much more reliable that they leapfrog the earlier approaches—sort of like the transistor did to the vacuum tube and electromechanical relay. Whether that will happen is still an open question, to put it extremely mildly.

Q9. Are there other major commercial efforts to build topological qubits?
A. No, it’s pretty much just Microsoft [update: apparently Nokia Bell Labs also has a smaller, quieter effort, and Delft University in the Netherlands also continues work in the area, having ended an earlier collaboration with Microsoft]. Purely as a scientist who likes to see things tried, I’m grateful that at least one player stuck with the topological approach even when it ended up being a long, painful slog.

Q10. Is Microsoft now on track to scale to a million topological qubits in the next few years?
A. In the world of corporate PR and pop-science headlines, sure, why not? As Bender from Futurama says, “I can guarantee anything you want!” In the world of reality, a “few years” certainly feels overly aggressive to me, but good luck to Microsoft and good luck to its competitors! I foresee exciting times ahead, provided we still have a functioning civilization in which to enjoy them.

Update (Feb 20): Chetan Nayak himself comments here, to respond to criticisms about Microsoft’s Nature paper lacking direct evidence for Majorana zero modes or topological qubits. He says that the paper, though published this week, was submitted a year ago, before the evidence existed. Of course we all look forward to the followup paper.

Good news for once! A faster Quantum Fourier Transform

Thursday, January 23rd, 2025

Update: In the comments, Craig Gidney points out that Ronit’s O(n log^2 n) quantum circuits for the exact QFT were already published by Cleve and Watrous in 2000 (in a paper whose main point was something else, parallelization). Ronit’s O(n (log log n)^2) circuits for the approximate QFT still appear to be new (Gidney says he and others knew related techniques but had never explicitly combined them). Of course, while the exact result was Platonically “known,” it wasn’t sufficiently well known that any of the four quantum algorithms experts I’d consulted had heard of it! Hopefully this very post will go some way toward fixing the situation.

Another Update: Richard Cleve writes in to say that the approximate QFT circuits were known also—albeit, in an unpublished 2-page abstract by Ahokas, Hales, and himself from the 2003 ERATO conference, as well as a followup Master’s thesis by Ahokas. Unlike with the exact case, I’m not kicking myself trying to understand how I missed these.

Ironically, I hope this post helps get this prior work a well-deserved mention when the QFT is covered in introductory quantum information classes.

Meanwhile, my hope that Ronit returns to do more theory is undiminished! When I was a kid, I too started by rediscovering things (like the integral for the length of a curve) that were centuries old, then rediscovering things (like an efficient algorithm for isotonic regression) that were decades old, then rediscovering things (like BQP⊆PP) that were about a year old … until I finally started discovering things (like the collision lower bound) that were zero years old. This is the way.


In my last post, I tried to nudge the arc of history back onto the narrow path of reasoned dialogue, walking the mile-high tightrope between shrill, unsupported accusation and naïve moral blindness. For my trouble, I was condemned about equally by leftists for my right-wing sympathies and by rightists for my left-wing ones. So today, I’ll ignore the fate of civilization and return to quantum computing theory: a subject that’s reliably brought joy to my life for a quarter-century, and still does, even as my abilities fade. It turns out there is a consolation for advancing age and senility, and it’s called “students.”

This fall, I returned from my two-year leave at OpenAI to teach my undergrad Introduction to Quantum Information Science course at UT Austin. This course doesn’t pretend to bring students all the way to the research frontier, and yet sometimes it’s done so anyway. It was in my first offering of Intro to QIS, eight years ago, that I encountered the then 17-year-old Ewin Tang, who broke the curve and then wanted an independent study project. So I gave her the problem of proving that the Kerenidis-Prakash quantum algorithm achieves an exponential speedup over any classical algorithm for the same task, not expecting anything to come of it. But after a year of work, Ewin refuted my conjecture by dequantizing the K-P algorithm—a breakthrough that led to the demolition of many other hopes for quantum machine learning. (Demolishing people’s hopes? In complexity theory, we call that a proud day’s work.)

Today I’m delighted to announce that my undergrad quantum course has led to another quantum advance. One day, after my lecture, a junior named Ronit Shah came to me with an idea for how best to distinguish three possible states of a qubit, rather than only two. For some reason I didn’t think much of it at the time, even though it would later turn out that Ronit had essentially rediscovered the concept of POVMs, the Pretty Good Measurement (PGM), and the 2002 theorem that the PGM is optimal for distinguishing sets of states subject to a transitive group action.

Later, after I’d lectured about Shor’s algorithm, and one of its centerpieces, the O(n^2)-gate recursive circuit for the Quantum Fourier Transform, Ronit struck a second time. He told me it should be possible to give a smaller circuit by recursively reducing the n-qubit QFT to two (n/2)-qubit QFTs, rather than to a single (n-1)-qubit QFT.
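(For reference, here’s a sketch of the textbook construction Ronit was proposing to beat, in an ad-hoc gate-list format of my own: each qubit gets a Hadamard followed by controlled phase rotations from all later qubits, for n Hadamards plus n(n-1)/2 rotations plus a final reversal, i.e. Θ(n^2) gates in all. Dropping the rotations whose control and target are more than about log n apart gives the O(n log n) approximate QFT that was the previous record in the approximate case.)

```python
# Sketch of the standard O(n^2)-gate QFT circuit (the textbook
# construction, not Ronit's new recursion), as an ad-hoc gate list.
def qft_gates(n):
    gates = []
    for i in range(n):
        gates.append(("H", i))
        for j in range(i + 1, n):
            # rotation by angle 2*pi / 2^(j-i+1), control j, target i
            gates.append(("CPHASE", j, i, 2 ** (j - i + 1)))
    gates += [("SWAP", i, n - 1 - i) for i in range(n // 2)]  # reverse order
    return gates

for n in (4, 8, 16):
    print(n, len(qft_gates(n)))   # 12, 40, 144: growing like n^2/2
```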

This was surely just a trivial confusion, perfectly excusable in an undergrad. Did Ronit perhaps not realize that an n-qubit unitary is actually a 2^n×2^n matrix, so he was proposing to pass directly from 2^n×2^n to 2^(n/2)×2^(n/2), rather than to 2^(n-1)×2^(n-1)?

No, he said, he understood that perfectly well. He still thought the plan would work. Then he emailed me a writeup—claiming to implement the exact n-qubit QFT in O(n log^2 n) gates, the first-ever improvement over O(n^2), and also the approximate n-qubit QFT in O(n (log log n)^2) gates, the first-ever improvement over O(n log n). He used fast integer multiplication algorithms to make the new recursions work.

At that point, I did something I’m still ashamed of: I sat on Ronit’s writeup for three weeks. When I at last dug it out of my inbox and read it, I could discover no reason why it was wrong, or unoriginal, or unimportant. But I didn’t trust myself, so with Ronit’s permission I sent the work to some of my oldest quantum friends: Ronald de Wolf, Cris Moore, Andrew Childs, and Wim van Dam. They agreed, after some back-and-forth, that the new circuits looked legit. A keystone of Shor’s algorithm, of quantum computing itself, and of my undergrad class had seen its first real improvement since 1994.

Last night, Ronit’s paper appeared on the arXiv, where you can read it.

In case anyone asks: no, this probably has no practical implication for speeding up factoring on a quantum computer, since the QFT wasn’t the expensive part of Shor’s algorithm anyway—that’s the modular exponentiation—and also, the O(n log n) approximate QFT would already have been used in practice. But it’s conceivable that Ronit’s circuits could speed up other practical quantum computing tasks! And no, we have no idea what’s the ultimate limit here, as usual in circuit complexity. Could the exact n-qubit QFT even be doable in O(n) gates?

I’d love for Ronit to continue in quantum computing theory. But in what’s surely a sign of the times, he’s just gone on leave from UT to intern at an AI hardware startup. I hope he returns and does some more theory, but if he doesn’t, I’m grateful that he shared this little gem with us on his way to more world-changing endeavors.

Above my pay grade: Jensen Huang and the quantum computing stock market crash

Thursday, January 9th, 2025

Update (Jan. 13): Readers might enjoy the Bankless Podcast, in which I and Justin Drake of the Ethereum engineering team discuss quantum computing and its impact on cryptocurrency. I learned something interesting from Justin—namely that Satoshi has about $90 billion worth of bitcoin that’s never been touched since the cryptocurrency’s earliest days, much of which (added: the early stuff, the stuff not additionally protected by a hash function) would be stealable by anyone who could break elliptic curve cryptography—for example, by using a scalable quantum computer. At what point in time, if any, would this stash acquire the moral or even legal status of (say) gold doubloons just lying on the bottom of the ocean? Arrr, ’tis avast Hilbert space!


Apparently Jensen Huang, the CEO of NVIDIA, opined on an analyst call this week that quantum computing was plausibly still twenty years away from being practical. As a direct result, a bunch of publicly-traded quantum computing companies (including IonQ, Rigetti, and D-Wave) fell 40% or more in value, and even Google/Alphabet stock fell on the news.

So then friends and family attuned to the financial markets started sending me messages asking for my reaction, as the world’s semi-unwilling Quantum Computing Opiner-in-Chief.

My reaction? Mostly just that it felt really weird for all those billions of dollars to change hands, or evaporate, based on what a microchip CEO offhandedly opined about my tiny little field, while I (like much of that field) could’ve remained entirely oblivious to it, were it not for all of their messages!

But was Jensen Huang right in his time estimate? And, relatedly, what is the “correct” valuation of quantum computing companies? Alas, however much more I know about quantum computing than Jensen Huang does, the knowledge does not enable me to answer either question.

I can, of course, pontificate about the questions, as I can pontificate about anything.

To start with the question of timelines: yes, there’s a lot still to be done, and twenty years might well be correct. But as I’ve pointed out before, within the past year we’ve seen 2-qubit gates with ~99.9% fidelity, which is very near the threshold for practical fault-tolerance. And of course, Google has now demonstrated fault-tolerance that becomes more and more of a win with increasing code size. So no, I can’t confidently rule out commercially useful quantum simulations within the next decade. Like, it sounds fanciful, but then I remember how fanciful it would’ve seemed in 2012 that we’d have conversational AI by 2022. I was alive in 2012! And speaking of which, if you really believe (as many people now do) AI will match or exceed human capabilities in most fields in the next decade, then that will scramble all the other timelines too. And presumably Jensen Huang understands these points as well as anyone.

Now for the valuation question. On the one hand, Shtetl-Optimized readers will know that there’s been plenty of obfuscation and even outright lying, to journalists, the public, and investors, about what quantum computing will be good for and how soon. To whatever extent the previous valuations were based on that lying, a brutal correction was of course in order, regardless of what triggered it.

On the other hand, I can’t say with certainty that high valuations are wrong! After all, even if there’s only a 10% chance that something will produce $100B in value, that would still justify a $10B valuation. It’s a completely different way of thinking than what we’re used to in academia.

For whatever it’s worth, my own family’s money is just sitting in index funds and CDs. I have no quantum computing investments of any kind. I do sometimes accept consulting fees to talk to quantum computing startups and report back my thoughts. When I do, my highest recommendation is: “these people are smart and honest, everything they say about quantum algorithms is correct insofar as I can judge, and I hope they succeed. I wouldn’t invest my own money, but I’m very happy if you or anyone else does.” Meanwhile, my lowest recommendation is: “these people are hypesters and charlatans, and I hope they fail. But even then, I can’t say with confidence that their valuation won’t temporarily skyrocket, in which case investing in them would presumably have been the right call.”

So basically: it’s good that I became an academic rather than an investor.


Having returned from family vacation, I hope to get back to a more regular blogging schedule … let’s see how it goes!

The Google Willow thing

Tuesday, December 10th, 2024

Yesterday I arrived in Santa Clara for the Q2B (Quantum 2 Business) conference, which starts this morning, and where I’ll be speaking Thursday on “Quantum Algorithms in 2024: How Should We Feel?” and also closing the conference via an Ask-Us-Anything session with John Preskill. (If you’re at Q2B, reader, come and say hi!)

And to coincide with Q2B, yesterday Google’s Quantum group officially announced “Willow,” its new 105-qubit superconducting chip with which it’s demonstrated an error-corrected surface code qubit as well as a new, bigger quantum supremacy experiment based on Random Circuit Sampling. I was lucky to be able to attend Google’s announcement ceremony yesterday afternoon at the Computer History Museum in Mountain View, where friend-of-the-blog-for-decades Dave Bacon and other Google quantum people explained exactly what was done and took questions (the technical level was surprisingly high for this sort of event). I was also lucky to get a personal briefing last week from Google’s Sergio Boixo on what happened.

Meanwhile, yesterday Sundar Pichai tweeted about Willow, and Elon Musk replied “Wow.” It cannot be denied that those are both things that happened.

Anyway, all day yesterday I read comments on Twitter, Hacker News, etc., complaining that, since there wasn't yet a post on Shtetl-Optimized, how could anyone possibly know what to think of this?? For 20 years I've been trying to teach the world how to fish in Hilbert space, but (sigh) I suppose I'll just hand out some more fish. So, here are my comments:

  1. Yes, this is great. Yes, it’s a real milestone for the field. To be clear: for anyone who’s been following experimental quantum computing these past five years (say, since Google’s original quantum supremacy milestone in 2019), there’s no particular shock here. Since 2019, Google has roughly doubled the number of qubits on its chip and, more importantly, increased the qubits’ coherence time by a factor of 5. Meanwhile, their 2-qubit gate fidelity is now roughly 99.7% (for controlled-Z gates) or 99.85% (for “iswap” gates), compared to ~99.5% in 2019. They then did the more impressive demonstrations that predictably become possible with more and better qubits. And yet, even if the progress is broadly in line with what most of us expected, it’s still of course immensely gratifying to see everything actually work! Huge congratulations to everyone on the Google team for a well-deserved success.
  2. I already blogged about this!!! Specifically, I blogged about Google’s fault-tolerance milestone when its preprint appeared on the arXiv back in August. To clarify, what we’re all talking about now is the same basic technical advance that Google already reported in August, except now with the PR blitz from Sundar Pichai on down, a Nature paper, an official name for the chip (“Willow”), and a bunch of additional details about it.
  3. Scientifically, the headline result is that, as they increase the size of their surface code, from 3×3 to 5×5 to 7×7, Google finds that their encoded logical qubit stays alive for longer rather than shorter. So, this is a very important threshold that's now been crossed. As Dave Bacon put it to me, "eddies are now forming"—or, to switch metaphors, after 30 years we're now finally tickling the tail of the dragon of quantum fault-tolerance, the dragon that (once fully awoken) will let logical qubits be preserved and acted on for basically arbitrary amounts of time, allowing scalable quantum computation. (A toy model of this error-suppression scaling appears in the first code sketch after this list.)
  4. Having said that, Sergio Boixo tells me that Google will only consider itself to have created a "true" fault-tolerant qubit, once it can do fault-tolerant two-qubit gates with an error of ~10^-6 (and thus, on the order of a million fault-tolerant operations before suffering a single error). We're still some ways from that milestone: after all, in this experiment Google created only a single encoded qubit, and didn't even try to do encoded operations on it, let alone on multiple encoded qubits. But all in good time. Please don't ask me to predict how long, though empirically, the time from one major experimental QC milestone to the next now seems to be measured in years, which are longer than weeks but shorter than decades.
  5. Google has also announced a new quantum supremacy experiment on its 105-qubit chip, based on Random Circuit Sampling with 40 layers of gates. Notably, they say that, if you use the best currently-known simulation algorithms (based on Johnnie Gray's optimized tensor network contraction), as well as an exascale supercomputer, their new experiment would take ~300 million years to simulate classically if memory is not an issue, or ~10^25 years if memory is an issue (note that a mere ~10^10 years have elapsed since the Big Bang). Probably some people have come here expecting me to debunk those numbers, but as far as I know they're entirely correct, with the caveats stated. Naturally it's possible that better classical simulation methods will be discovered, but meanwhile the experiments themselves will also rapidly improve.
  6. Having said that, the biggest caveat to the "10^25 years" result is one to which I fear Google drew insufficient attention. Namely, for the exact same reason why (as far as anyone knows) this quantum computation would take ~10^25 years for a classical computer to simulate, it would also take ~10^25 years for a classical computer to directly verify the quantum computer's results!! (For example, by computing the "Linear Cross-Entropy" score of the outputs; a sketch of that score appears in the second code block after this list.) For this reason, all validation of Google's new supremacy experiment is indirect, based on extrapolations from smaller circuits, ones for which a classical computer can feasibly check the results. To be clear, I personally see no reason to doubt those extrapolations. But for anyone who wonders why I've been obsessing for years about the need to design efficiently verifiable near-term quantum supremacy experiments: well, this is why! We're now deeply into the unverifiable regime that I warned about.
  7. In his remarks yesterday, Google Quantum AI leader Hartmut Neven talked about David Deutsch’s argument, way back in the 1990s, that quantum computers should force us to accept the reality of the Everettian multiverse, since “where else could the computation have happened, if it wasn’t being farmed out to parallel universes?” And naturally there was lots of debate about that on Hacker News and so forth. Let me confine myself here to saying that, in my view, the new experiment doesn’t add anything new to this old debate. It’s yet another confirmation of the predictions of quantum mechanics. What those predictions mean for our understanding of reality can continue to be argued as it’s been since the 1920s.
  8. Cade Metz did a piece about Google’s announcement for the New York Times. Alas, when Cade reached out to me for comment, I decided that it would be too awkward, after what Cade did to my friend Scott Alexander almost four years ago. I talked to several other journalists, such as Adrian Cho for Science.
  9. No doubt people will ask me what this means for superconducting qubits versus trapped-ion or neutral-atom or photonic qubits, or for Google versus its many competitors in experimental QC. And, I mean, it's not bad for Google or for superconducting QC! These past couple of years, I'd sometimes commented that, since Google's 2019 announcement of quantum supremacy via superconducting qubits, the trapped-ion and neutral-atom approaches had seemed to be pulling ahead, with spectacular results from Quantinuum (trapped ions) and QuEra (neutral atoms) among others. One could think of Willow as Google's reply, putting the ball back in its competitors' courts to demonstrate, likewise, better logical-qubit lifetime with increasing code size (or, better yet, full operations on logical qubits exceeding that threshold, without resorting to postselection). The great advantage of trapped-ion qubits continues to be that you can move the qubits around (and their two-qubit gate fidelities also seem somewhat ahead of superconducting ones). But to compensate, superconducting qubits have the advantage that their gates are a thousand times faster, which makes it feasible to do experiments that require collecting millions of samples.
  10. Of course the big question, the one on everyone's lips, was always how quantum computing skeptic Gil Kalai was going to respond. But we need not wonder! On his blog, Gil writes: "We did not study yet these particular claims by Google Quantum AI but my general conclusion apply to them: 'Google Quantum AI's claims (including published ones) should be approached with caution, particularly those of an extraordinary nature. These claims may stem from significant methodological errors and, as such, may reflect the researchers' expectations more than objective scientific reality.'" Most of Gil's post is devoted to re-analyzing data from Google's 2019 quantum supremacy experiment, which Gil continues to believe can't possibly have done what was claimed. Gil's problem is that the 2019 experiment was long ago superseded anyway: besides the new and more inarguable Google result, IBM, Quantinuum, QuEra, and USTC have now all also reported Random Circuit Sampling experiments with good results. I predict that Gil, and others who take it as axiomatic that scalable quantum computing is impossible, will continue to have their work cut out for them in this new world.
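To make the error-suppression point from items 3 and 4 a bit more quantitative: below the fault-tolerance threshold, the logical error rate is expected to fall exponentially with code distance d, dropping by some factor Λ > 1 for each step from d to d+2. Here's a minimal toy sketch of that scaling; the constants A and Lambda below are made-up placeholders, not Google's measured values:

```python
# A toy model of surface-code error suppression (items 3 and 4 above).
# Below threshold, the logical error rate per cycle is expected to fall
# exponentially with code distance d:  eps(d) ~ A / Lambda**((d+1)/2),
# so each step d -> d+2 divides the error rate by Lambda.
# A and Lambda are illustrative placeholders, not measured values.

import math

A = 0.03        # hypothetical prefactor setting the overall error scale
Lambda = 2.0    # hypothetical suppression factor per step in code distance

def logical_error(d):
    """Logical error rate per cycle of a distance-d code (toy model)."""
    return A / Lambda ** ((d + 1) / 2)

for d in (3, 5, 7):  # the 3x3, 5x5, 7x7 codes in Google's experiment
    print(f"d={d}: eps ~ {logical_error(d):.1e}")

# How large must d be for ~1e-6 logical error per operation, i.e.,
# on the order of a million fault-tolerant operations per error?
target = 1e-6
d_needed = 2 * math.log(A / target) / math.log(Lambda) - 1
print(f"d needed for eps ~ 1e-6: about {math.ceil(d_needed)}")
```

With these toy constants the required distance comes out around d ≈ 29; the real answer of course depends on the measured values.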
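And to make the verification point from item 6 concrete: the linear cross-entropy score is F_XEB = 2^n · E[p_ideal(x)] − 1, where the average is over the observed samples x. Here's a minimal sketch with toy numbers standing in for real circuit data:

```python
# A minimal sketch of the linear cross-entropy benchmark (item 6 above).
# F_XEB = 2^n * (mean of p_ideal(x) over observed samples x) - 1.
# All values below are toy placeholders for illustration.

import numpy as np

def linear_xeb(ideal_probs, samples, n_qubits):
    """Linear XEB score: ~0 for a uniform sampler, ~1 for ideal sampling
    from a random circuit's (Porter-Thomas-distributed) outputs."""
    mean_p = np.mean([ideal_probs[x] for x in samples])
    return 2 ** n_qubits * mean_p - 1

# Toy usage with n = 2, where the ideal probabilities are easy to compute:
toy_probs = {"00": 0.5, "01": 0.2, "10": 0.2, "11": 0.1}
toy_samples = ["00", "00", "01", "10", "00", "11"]
print(linear_xeb(toy_probs, toy_samples, n_qubits=2))  # ~0.33

# The catch at full scale: for 105 qubits and 40 layers, computing
# ideal_probs at all is precisely the ~10^25-year classical task, which
# is why direct verification is infeasible and validation relies on
# extrapolation from smaller circuits.
```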

Update: Here’s Sabine Hossenfelder’s take. I don’t think she and I disagree about any of the actual facts; she just decided to frame things much more negatively. Ironically, I guess 20 years of covering hyped, dishonestly-presented non-milestones in quantum computing has inclined me to be pretty positive when a group puts in this much work, demonstrates a real milestone, and talks about it without obvious falsehoods!