Archive for the ‘Quantum’ Category

HSBC unleashes yet another “qombie”: a zombie claim of quantum advantage that isn’t

Thursday, September 25th, 2025

Today, I got email after email asking me to comment on a new paper from HSBC—yes, the bank—together with IBM. The paper claims to use a quantum computer to get a 34% advantage in predictions of financial trading data. (See also blog posts here and here, or numerous popular articles that you can easily find and I won’t link.) What have we got? Let’s read the abstract:

The estimation of fill probabilities for trade orders represents a key ingredient in the optimization of algorithmic trading strategies. It is bound by the complex dynamics of financial markets with inherent uncertainties, and the limitations of models aiming to learn from multivariate financial time series that often exhibit stochastic properties with hidden temporal patterns. In this paper, we focus on algorithmic responses to trade inquiries in the corporate bond market and investigate fill probability estimation errors of common machine learning models when given real production-scale intraday trade event data, transformed by a quantum algorithm running on IBM Heron processors, as well as on noiseless quantum simulators for comparison. We introduce a framework to embed these quantum-generated data transforms as a decoupled offline component that can be selectively queried by models in low-latency institutional trade optimization settings. A trade execution backtesting method is employed to evaluate the fill prediction performance of these models in relation to their input data. We observe a relative gain of up to ∼ 34% in out-of-sample test scores for those models with access to quantum hardware-transformed data over those using the original trading data or transforms by noiseless quantum simulation. These empirical results suggest that the inherent noise in current quantum hardware contributes to this effect and motivates further studies. Our work demonstrates the emerging potential of quantum computing as a complementary explorative tool in quantitative finance and encourages applied industry research towards practical applications in trading.

As they say, there are more red flags here than in a People’s Liberation Army parade. To critique this paper is not quite “shooting fish in a barrel,” because the fish are already dead before we’ve reached the end of the abstract.

They see a quantum advantage for the task in question, but only because of the noise in their quantum hardware? When they simulate the noiseless quantum computation classically, the advantage disappears? WTF? This strikes me as all but an admission that the “advantage” is just a strange artifact of the particular methods that they decided to compare—that it has nothing really to do with quantum mechanics in general, or with quantum computational speedup in particular.

Indeed, the possibility of selection bias rears its head. How many times did someone do some totally unprincipled, stab-in-the-dark comparison of a specific quantum learning method against a specific classical method, and get predictions from the quantum method that were worse than whatever they got classically … so then they didn’t publish a paper about it?

If it seems like I’m being harsh, it’s because to my mind, the entire concept of this sort of study is fatally flawed from the beginning, optimized for generating headlines rather than knowledge.  The first task, I would’ve thought, is to show the reality of quantum computational advantage in the system or algorithm under investigation, even just for a useless benchmark problem. Only after one has done that, has one earned the right to look for a practical benefit in algorithmic trading or predicting financial time-series data or whatever, coming from that same advantage. If you skip the first step, then whatever “benefits” you get from your quantum computer are overwhelmingly likely to be cargo cult benefits.

And yet none of it matters. The paper can, more or less, openly admit all this right in the abstract, and yet it will still predictably generate lots of credulous articles in the business and financial news about HSBC using quantum computers to improve bond trading!—which, one assumes, was the point of the exercise from the beginning. Qombies roam the earth: undead narratives of “quantum advantage for important business problems” detached from any serious underlying truth-claim. And even here at one of the top 50 quantum computing blogs on the planet, there’s nothing I can do about it other than scream into the void.


Update (Sep. 26): Someone let me know that Martin Shkreli, the “pharma bro,” will be hosting a conference call for investors to push back on quantum computing hype. He announced on X that he’s offering quantum computing experts $2k each to speak in his call. On the off chance that Shkreli reads this blog: I’d be willing to do it for $50k. And if Shkreli were to complain about my jacking up the price… 😄

Quantum Information Supremacy

Friday, September 12th, 2025

I’m thrilled that our paper entitled Demonstrating an unconditional separation between quantum and classical information resources, based on a collaboration between UT Austin and Quantinuum, is finally up on the arXiv. I’m equally thrilled that my coauthor and former PhD student William Kretschmer — who led the theory for this project, and even wrote much of the code — is now my faculty colleague at UT Austin! My physics colleague Nick Hunter-Jones and my current PhD student Sabee Grewal made important contributions as well. I’d especially like to thank the team at Quantinuum for recognizing a unique opportunity to test and showcase their cutting-edge hardware, and collaborating with us wild-eyed theorists to make it happen. This is something that, crucially, would not have been feasible with the quantum computing hardware of only a couple years ago.

Here’s our abstract, which I think explains what we did clearly enough, although do read the paper for more:

A longstanding goal in quantum information science is to demonstrate quantum computations that cannot be feasibly reproduced on a classical computer. Such demonstrations mark major milestones: they showcase fine control over quantum systems and are prerequisites for useful quantum computation. To date, quantum advantage has been demonstrated, for example, through violations of Bell inequalities and sampling-based quantum supremacy experiments. However, both forms of advantage come with important caveats: Bell tests are not computationally difficult tasks, and the classical hardness of sampling experiments relies on unproven complexity-theoretic assumptions. Here we demonstrate an unconditional quantum advantage in information resources required for a computational task, realized on Quantinuum’s H1-1 trapped-ion quantum computer operating at a median two-qubit partial-entangler fidelity of 99.941(7)%. We construct a task for which the most space-efficient classical algorithm provably requires between 62 and 382 bits of memory, and solve it using only 12 qubits. Our result provides the most direct evidence yet that currently existing quantum processors can generate and manipulate entangled states of sufficient complexity to access the exponentiality of Hilbert space. This form of quantum advantage — which we call quantum information supremacy — represents a new benchmark in quantum computing, one that does not rely on unproven conjectures.

I’m very happy to field questions about this paper in the comments section.


Unrelated Announcement: As some of you might have seen, yesterday’s Wall Street Journal carried a piece by Dan Kagan-Kans on “The Rise of ‘Conspiracy Physics.'” I talked to the author for the piece, and he quoted this blog in the following passage:

This resentment of scientific authority figures is the major attraction of what might be called “conspiracy physics.” Most fringe theories are too arcane for listeners to understand, but anyone can grasp the idea that academic physics is just one more corrupt and self-serving establishment. The German physicist Sabine Hossenfelder has attracted 1.72 million YouTube subscribers in part by attacking her colleagues: “Your problem is that you’re lying to the people who pay you,” she declared. “Your problem is that you’re cowards without a shred of scientific integrity.”

In this corner of the internet, the scientist Scott Aaronson has written, “Anyone perceived as the ‘mainstream establishment’ faces a near-insurmountable burden of proof, while anyone perceived as ‘renegade’ wins by default if they identify any hole whatsoever in mainstream understanding.”

Updates!

Wednesday, August 13th, 2025

(1) My 8-year-old son asked me last week, “daddy, did you hear that GPT-5 is now out?” So yes, I’m indeed aware that GPT-5 is now out! I’ve just started playing around with it. For detailed reports on what’s changed and how impressive it is compared to previous models, see for example Zvi #1, #2, #3. Briefly, it looks like there are major reductions in hallucinations and sycophancy, and improvements in practical usefulness for coding and other tasks, even while the “raw intelligence” is unlikely to blow away someone who was already well-acquainted with o3, Opus 4, and other state-of-the-art models, the way ChatGPT and then GPT-4 blew away people who had no idea what was possible in late 2022 and early 2023. How impressive a result you see also depends partly on which of several GPT-5 models your query gets routed to, which you don’t entirely control. Anyway, there’s grist here for the people who claim that progress toward AGI is slowing down, but also grist for the people who claim that it continues pretty much as expected within our post-ChatGPT reality!

(2) In other belated news, OpenAI and DeepMind (and then, other companies) announced that they achieved Gold Medal performance on the International Math Olympiad, by solving 5 of the 6 problems (there was one problem, the 6th and hardest, that all of the AIs struggled with). Most importantly, this means that I’ve won $100 from my friend Ernest Davis, AI expert at NYU, who bet me $100 that no AI would earn a Gold Medal at the International Math Olympiad by December 4, 2026. Even though I’m normally risk-averse and reluctant to take bets, I considered this one to be extremely safe, and indeed I won it with more than a year to spare.

(3) I’ve signed an open letter to OpenAI, along with many of my fellow former OpenAI employees as well as distinguished scientists and writers (Geoffrey Hinton, Stuart Russell, Sheldon Glashow, Sean Carroll, Matt Yglesias…), asking for more transparency about OpenAI’s continuing efforts to change its own structure. The questions basically ask OpenAI to declare, in writing, whether it has or hasn’t now completely abandoned the original nonprofit goals with which the organization was founded in 2015.

(4) At Lighthaven, the rationalist meeting space in Berkeley that I recently visited (and that our friend Cade Metz recently cast aspersions on in the New York Times), there’s going to be a writer’s residency called Inkhaven for the whole month of November. The idea—which I love—is that you either write a new blog post every day, or else you get asked to leave (while you also attend workshops, etc. to improve your writing skills). I’d attend myself for the month if teaching and family obligations didn’t conflict; someone standing over me with a whip to make me write is precisely what I need these days! As it is, I’m one of the three advisors to Inkhaven, along with Scott Alexander and Gwern, and I’ll be visiting for a long weekend to share my blogging wisdom, such as I have. Apply now if you’re interested!

(5) Alas, the Springer journal Frontiers of Computer Science has published a nonsense paper, entitled “SAT requires exhaustive search,” claiming to solve (or dissolve, or reframe, or something) the P versus NP problem. It looks indistinguishable from the stuff I used to get in my inbox every week—and now, in the ChatGPT era, get every day. That this was published indicates a total breakdown of the peer review process. Worse, when Eric Allender, Ryan Williams, and others notified the editors of this, asking for the paper to be retracted, the editor-in-chief declined to do so: see this guest post on Lance’s blog for a detailed account. As far as I’m concerned, Frontiers of Computer Science has now completely discredited itself as a journal; publication there means nothing more than publication in viXra. Minus 10 points for journals themselves as an institution, plus 10 points for just posting stuff online and letting it be filtered by experts who care.

(6) Uma Girish and Rocco Servedio released an arXiv preprint called Forrelation is Extremally Hard. Recall that, in the Forrelation problem, you’re given oracle access to two n-bit Boolean functions f and g, and asked to estimate the correlation between f and the Fourier transform of g. I introduced this problem in 2009, as a candidate for an oracle separation between BQP and the polynomial hierarchy—a conjecture that Ran Raz and Avishay Tal finally proved in 2018. What I never imagined was that Forrelation could lead to an oracle separation between EQP (that is, Exact Quantum Polynomial Time) and the polynomial hierarchy. For that, I thought you’d need to go back to the original Recursive Fourier Sampling problem of Bernstein and Vazirani. But Uma and Rocco show, using “bent Boolean functions” (get bent!) and totally contrary to my intuition, that the exact (zero-error) version of Forrelation is already classically hard, taking Ω(2^(n/4)) queries by any randomized algorithm. They leave open whether exact Forrelation needs Ω̃(2^(n/2)) randomized queries, which would match the upper bound, and also whether exact Forrelation is not in PH.
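For concreteness, here’s a brute-force evaluation of the Forrelation quantity (my own exponential-time sketch, purely for intuition; the function names are illustrative, not from the paper). It also shows the bent-function connection: the 2-bit inner-product function is its own ±1-valued Fourier transform, so it’s perfectly forrelated with itself.

```python
def forrelation(f, g, n):
    """Brute-force Forrelation: Phi(f,g) = 2^(-3n/2) * sum_{x,y} f(x) (-1)^(x.y) g(y),
    i.e., the correlation between f and the Fourier transform of g.
    Inputs f, g map integers in [0, 2^n) to +/-1."""
    total = 0.0
    for x in range(2 ** n):
        for y in range(2 ** n):
            dot = bin(x & y).count("1") % 2   # inner product x.y mod 2
            total += f(x) * (-1) ** dot * g(y)
    return total / 2 ** (1.5 * n)

# The inner-product function on 2 bits is "bent": its Fourier transform is again
# +/-1-valued (and happens to equal the function itself), giving Phi = 1 exactly.
ip = lambda z: (-1) ** ((z & 1) * ((z >> 1) & 1))
phi = forrelation(ip, ip, 2)   # equals 1.0
```

By contrast, for a non-bent choice like f = g = the all-ones function, the quantity is only 2^(-n/2), since g’s Fourier mass then sits entirely on the zero character.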

(7) The Google quantum group, to little fanfare, published a paper entitled Constructive interference at the edge of quantum ergodic dynamics. Here, they use their 103-qubit superconducting processor to measure Out-of-Time-Order Correlators (OTOCs) in a many-body scrambling process, and claim to get a verifiable speedup over the best classical methods. If true, this is a great step toward verifiable quantum supremacy for a useful task, for some definition of “useful.”

(8) Last night, on the arXiv, the team at USTC in China reported that it’s done Gaussian BosonSampling with 3,050 photons and 8,176 modes. They say that this achieves quantum supremacy, much more clearly than any previous BosonSampling demonstration, beating (for example) all existing simulations based on tensor network contraction. Needless to say, this still suffers from the central problem of all current sampling-based quantum supremacy experiments, namely the exponential time needed for direct classical verification of the outputs.

Quantum Complexity Theory Student Project Showcase #5 (2025 Edition)!

Thursday, July 31st, 2025

Sorry for the long blog-hiatus! I was completely occupied for weeks, teaching an intensive course on theoretical computer science to 11-year-olds (!), at a math camp in St. Louis that was also attended by my 8-year-old son. Soon I’ll accompany my 12-year-old daughter to a science camp in Connecticut, where I’ll also give lectures.

There’s a great deal to say about these experiences, but for now: it’s been utterly transformative and life-affirming to spend my days teaching precocious, enthusiastic, non-jaded children the material I love in the real world, rather than (let’s say) scrolling through depressing world news and social media posts and emails from hateful trolls on my phone. It’s made me feel 25 years younger (well, at least until I need to walk up a flight of stairs). It’s made me want to go back to actual research and teaching, which besides family and friends have been the main sources of joy in my life.


So on that note, and without further ado: I hereby present the latest Quantum Complexity Theory Student Project Showcase! As the name suggests, this is where I share a selection of the best research projects, from the students who took my graduate class on Quantum Complexity Theory at UT Austin this past spring.

See here for the four previous iterations of the Showcase:

(As you can see, the timing hasn’t been 100% consistent.)

I expect that, as in past editions, many of this year’s projects will lead to published research papers, or at the very least, preprints on the arXiv.


And now, really without further ado, the projects!

(1) Quantum Hermite Transform and Gaussian Goldreich-Levin, by Vishnu Iyer and Siddhartha Jain.

Vishnu and Sid propose a new primitive for quantum algorithms—the Hermite transform, as opposed to the Fourier transform—and give at least one successful example of its use. They’d be eager to know if anyone can think of other applications!

(2) Quantum Statistical Witness Indistinguishability, by Shafik Nassar and Ronak Ramachandran.

In modern cryptography, even if it isn’t statistical zero-knowledge (SZK), a proof protocol might have the weaker property of being statistically witness-indistinguishable (SWI): that is, Arthur can’t tell which of two possible yes-witnesses Merlin holds. Here Shafik and Ronak initiate the study of quantum SWI, and prove the basic properties of this notion, such as the equivalence of honest and dishonest verifier. Hopefully this will serve as a springboard for someone to find an actual QSWI protocol for an interesting problem.

(3) A Zero-Knowledge Protocol for Keyed Unitary Families, by David Joy and Angela Zhang.

Continuing the theme of quantum zero-knowledge, David and Angela give a protocol by which Merlin can convince Arthur that he knows a unitary relating one pure state to another, without revealing the unitary. Again continuing a theme, applications of this protocol are sought!

(4) On Query Lower Bounds for Aaronson-Kuperberg Unitary Synthesis Circuits, by Arko Banerjee.

Back in 2006, when we formulated our so-called “Unitary Synthesis Conjecture,” Greg Kuperberg and I showed that if a quantum algorithm applies an n-qubit unitary U(f) after making a single query to a Boolean function f, then as we range over f’s, there can be at most 4^n possible values of U(f). Here Arko improves our bound to 2^n, which is tight. He also tries extremely hard to generalize our bound to the two-query case, not quite succeeding but proving partial results that hopefully will be helpful to others.

(5) Quantum Search with Non-Interacting Bosons and Fermions, by Aravind Karthigeyan.

This one really made me think. Aravind studies the problem of search for a single marked vertex, on the complete graph with N vertices, using either M bosons or M fermions that can hop between the vertices. With M bosons, he shows that the search succeeds in Θ(√(N/M)) time with high probability, which is just the usual runtime for Grover search with M parallel searchers. With fermions, by contrast, he shows that more time is needed. Why? Because of the Pauli Exclusion Principle! The fermions all “step on each other’s toes,” competing to be the one that jumps onto the marked vertex, which limits the advantage of having M fermions searching in parallel.

(6) Limits to Pseudodeterminism in Quantum Communication Protocols, by Jiawei Li.

Jiawei revisits the famous Hidden Matching Problem, which is known to have an exponential gap between its randomized one-way communication complexity of ~√n, and its quantum one-way communication complexity of ~log(n). He makes a new observation about this problem: namely, if you want the exponential quantum communication advantage, then you must content yourself with a protocol that can generate many different possible correct answers with appreciable probabilities (i.e., that generates large min-entropy). In other words, no good quantum protocol for the problem is pseudodeterministic. This complements, for example, my and Shih-Han Hung’s work, which showed the same statement for quantum supremacy experiments based on Random Circuit Sampling, or the long line of works that showed it for experiments that violate the Bell/CHSH inequality.

Congratulations to my students for their accomplishments, and thanks to them for giving me permission to include their work in this showcase!

Raymond Laflamme (1960-2025)

Tuesday, June 24th, 2025

Even with everything happening in the Middle East right now, even with (relatedly) everything happening in my own family (my wife and son sheltering in Tel Aviv as Iranian missiles rained down), even with all the rather ill-timed travel I’ve found myself doing as these events unfolded (Ecuador and the Galapagos and now STOC’2025 in Prague) … there’s been another thing, a huge one, weighing on my soul.

Ray Laflamme played a major role in launching the whole field of quantum computing and information, and also a major role in launching my own career. The world has lost him too soon. I’ve lost him too soon.

After growing up in Quebec—I still hear his French-Canadian accent, constantly on the verge of laughter, as I’m writing this—Ray went into physics and became a PhD student of Stephen Hawking. No, not a different Stephen Hawking. If you’ve read or watched anything by or about Hawking, including A Brief History of Time, you might remember the story where Hawking believed for a while that time would reverse itself as the universe contracted in a Big Crunch, with omelettes unscrambling themselves, old people turning into children, etc. etc., but then two graduate students persuaded him that that was totally wrong, and entropy would continue to increase like normal. Anyway, Ray was one of those students (Don Page was the other). I’d always meant to ask Ray to explain what argument changed Hawking’s mind, since the idea of entropy decreasing during contraction just seemed obviously wrong to me! Only today, while writing this post, did I find a 1993 paper by Hawking, Laflamme, and Lyons that explains the matter perfectly clearly, including three fallacious intuitions that Hawking had previously held. (Even though, as they comment, “the anatomy of error is not ruled by logic.”)

Anyway, in the mid-1990s, starting at Los Alamos National Lab and continuing at the University of Waterloo, Ray became a pioneer of the then-new field of quantum computing and information. In 1997, he was a coauthor of one of the seminal original papers that proved the possibility of fault-tolerant quantum computation with a constant error rate, what we now call the Threshold Theorem (Aharonov and Ben-Or had such a result independently). He made lots of other key early contributions to the theory of quantum error-correcting codes and fault-tolerance.

When it comes to Ray’s scientific achievements after his cosmology work with Hawking and after quantum fault-tolerance—well, there are many, but let me talk about two. Perhaps the biggest is the KLM (Knill-Laflamme-Milburn) Theorem. It would be fair to say that KLM started the entire field of optical or photonic quantum computation, as it’s existed in the 21st century. In one sentence, what KLM showed is that it’s possible to build a universal quantum computer using only

  1. identical single-photon states,
  2. a network of “linear-optical elements” (that is, beamsplitters and phaseshifters) that the photons travel through, and
  3. feedforward measurements—that is, measurements of an optical mode that tell you how many photons are there, in such a way that you can condition (using a classical computer) which optical elements to apply next on the outcome of the measurement.

All of a sudden, there was a viable path to building a quantum computer out of photons, where you wouldn’t need to get pairs of photons to interact with each other, which had previously been the central sticking point. The key insight was that feedforward measurements, combined with the statistical properties of identical bosons (what the photons are), are enough to simulate the effect of two-photon interactions.

Have you heard of PsiQuantum, the startup in Palo Alto with a $6 billion valuation and hundreds of employees that’s right now trying to build an optical quantum computer with a million qubits? Or Xanadu, its competitor in Toronto? These, in some sense, are companies that grew out of a theorem: specifically the KLM Theorem.

For me, though, the significance of KLM goes beyond the practical. In 2011, I used the KLM Theorem, together with the fact (known since the 1950s) that photonic amplitudes are the permanents of matrices, to give a new proof of Leslie Valiant’s celebrated 1979 theorem that calculating the permanent is a #P-complete problem. Thus, as I pointed out in a talk two years ago at Ray’s COVID-delayed 60th birthday conference, entitled Ray Laflamme, Complexity Theorist (?!), KLM had said something new about computational complexity, without any intention of doing so. More generally, KLM was crucial backdrop to my and Alex Arkhipov’s later work on BosonSampling, where we gave strong evidence that some classical computational hardness—albeit probably not universal quantum computation—remains in linear optics, even if one gets rid of KLM’s feedforward measurements.
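Since the permanent is the combinatorial object at the center of that connection, here’s its brute-force definition as a short sketch (my illustration for intuition only, nothing from Valiant’s proof or from KLM):

```python
from itertools import permutations
from math import prod

def permanent(A):
    """Permanent of an n x n matrix: like the determinant's sum over
    permutations, but with every sign set to +1. This brute force sums
    n! terms, in line with the permanent's #P-completeness (no efficient
    classical algorithm is known, and Valiant's theorem says none is
    expected)."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

# per([[a, b], [c, d]]) = a*d + b*c, versus determinant a*d - b*c
```

Dropping the permutation signs is exactly what turns an easy polynomial-time quantity (the determinant) into a #P-complete one, which is why photonic amplitudes being permanents carries computational-hardness consequences.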

(Incidentally, I gave my talk at Ray’s birthday conference by Zoom, as I had a conflicting engagement. I’m now sad about that: had I known that that would’ve been my last chance to see Ray, I would’ve cancelled any other plans.)

The second achievement of Ray’s that I wanted to mention was his 1998 creation, again with his frequent collaborator Manny Knill, of the One Clean Qubit or “DQC1” model of quantum computation. In this model, you get to apply an arbitrary sequence of 2-qubit unitary gates, followed by measurements at the end, just like in standard quantum computing—but the catch is that the initial state consists of just a single qubit in the state |0⟩, and all other qubits in the maximally mixed state. If all qubits started in the maximally mixed state, then nothing would ever happen, because the maximally mixed state is left invariant by all unitary transformations. So it would stand to reason that, if all but one of the qubits start out maximally mixed, then almost nothing happens. The big surprise is that this is wrong. Instead you get a model that, while probably not universal for quantum computation, can do a variety of things in polynomial time that we don’t know how to do classically, including estimating the traces of exponentially large unitary matrices and the Jones polynomials of trace closures of braids (indeed, both of these problems turn out to be DQC1-complete). The discovery of DQC1 was one of the first indications that there’s substructure within BQP. Since then, the DQC1 model has turned up again and again in seemingly unrelated investigations in quantum complexity theory—way more than you’d have any right to expect a priori.
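For intuition about why one clean qubit already buys you something, here’s a minimal density-matrix simulation (my own sketch, with illustrative names) of the standard DQC1 trace-estimation trick: a Hadamard test whose control is the single clean qubit and whose target register is maximally mixed.

```python
import numpy as np

def dqc1_trace_signal(U):
    """One-clean-qubit trace estimation, simulated with density matrices.
    Clean qubit starts in |0><0|; the n-qubit target register starts
    maximally mixed. After H, controlled-U, H on the clean qubit, its
    Z-expectation equals Re tr(U) / 2^n."""
    d = U.shape[0]
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Z = np.diag([1.0, -1.0])
    I = np.eye(d)
    rho = np.kron(np.diag([1.0, 0.0]), I / d)   # |0><0| (x) maximally mixed
    cU = np.block([[I, np.zeros((d, d))],       # controlled-U on the target
                   [np.zeros((d, d)), U]])
    C = np.kron(H, I) @ cU @ np.kron(H, I)
    rho = C @ rho @ C.conj().T
    return np.real(np.trace(rho @ np.kron(Z, I)))  # <Z> on the clean qubit

# Sanity check on a random 2-qubit unitary (n = 2, so d = 4):
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, _ = np.linalg.qr(M)            # a random unitary via QR decomposition
signal = dqc1_trace_signal(Q)     # should equal Re tr(Q) / 4
```

Of course this classical simulation takes exponential space in n; the point of DQC1 is that the quantum device gets the same signal from repeated single-qubit measurements.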

Beyond his direct contributions to quantum information, Ray will be remembered as one of the great institution-builders of our field. He directed the Institute for Quantum Computing (IQC) at the University of Waterloo in Canada, from its founding in 2002 until he finally stepped down in 2017. This includes the years 2005-2007, when I was a postdoc at IQC—two of the most pivotal years of my life, when I first drove a car and went out on dates (neither of which I do any longer, for different reasons…), when I started this blog, when I worked on quantum money and learnability of quantum states and much more, and when I taught the course that turned into my book Quantum Computing Since Democritus. I fondly remember Ray, as my “boss,” showing me every possible kindness. He even personally attended the Quantum Computing Since Democritus lectures, which is why he appears as a character in the book.

As if that wasn’t enough, Ray also directed the quantum information program of the Canadian Institute for Advanced Research (CIFAR). If you ever wondered why Canada, as a nation, has punched so far above its weight in quantum computing and information for the past quarter-century—Ray Laflamme is part of the answer.

At the same time, if you imagine the stereotypical blank-faced university administrator, who thinks and talks only in generalities and platitudes (“how can we establish public-private partnerships to build a 21st-century quantum workforce?”) … well, Ray was whatever is the diametric opposite of that. Despite all his responsibilities, Ray never stopped being a mensch, a friend, an intellectually curious scientist, a truth-teller, and a jokester. Whenever he and I talked, probably at least a third of the conversation was raucous laughter.

I knew that Ray had spent many years battling cancer. I naïvely thought he was winning, or had won. But as so often with cancer, it looks like the victory was only temporary. I miss him already. He was a ray of light in the world—a ray that sparkles, illuminates, and as we now know, even has the latent power of universal quantum computation.

Quantum! AI! Everything but Trump!

Wednesday, April 30th, 2025
  • Grant Sanderson, of 3blue1brown, has put up a phenomenal YouTube video explaining Grover’s algorithm, and dispelling the fundamental misconception about quantum computing, that QC works simply by “trying all the possibilities in parallel.” Let me not futz around: this video explains, in 36 minutes, what I’ve tried to explain over and over on this blog for 20 years … and it does it better. It’s a masterpiece. Yes, I consulted with Grant for this video (he wanted my intuitions for “why is the answer √N?”), and I even have a cameo at the end of it, but I wish I had made the video. Damn you, Grant!
  • The incomparably great, and absurdly prolific, blogger Zvi Mowshowitz and yours truly spend 1 hour and 40 minutes discussing AI existential risk, education, blogging, and more. I end up “interviewing” Zvi, who does the majority of the talking, which is fine by me, as he has many important things to say! (Among them: his searing critique of those K-12 educators who see it as their life’s mission to prevent kids from learning too much too fast—I’ve linked his best piece on this from the header of this blog.) Thanks so much to Rick Coyle for arranging this conversation.
  • Progress in quantum complexity theory! In 2000, John Watrous showed that the Group Non-Membership problem is in the complexity class QMA (Quantum Merlin-Arthur). In other words, if some element g is not contained in a given subgroup H of an exponentially large finite group G, which is specified via a black box, then there’s a short quantum proof that g∉H, with only ~log|G| qubits, which can be verified on a quantum computer in time polynomial in log|G|. This soon raised the question of whether Group Non-Membership could be used to separate QMA from QCMA by oracles, where QCMA (Quantum Classical Merlin Arthur), defined by Aharonov and Naveh in 2002, is the subclass of QMA where the proof needs to be classical, but the verification procedure can still be quantum. In other words, could Group Non-Membership be the first non-quantum example where quantum proofs actually help?

    In 2006, alas, Greg Kuperberg and I showed that the answer was probably “no”: Group Non-Membership has “polynomial QCMA query complexity.” This means that there’s a QCMA protocol for the problem where Arthur makes only polylog|G| quantum queries to the group oracle—albeit, possibly an exponential-in-log|G| number of quantum computation steps besides that! To prove our result, Greg and I needed to make mild use of the Classification of Finite Simple Groups, one of the crowning achievements of 20th-century mathematics (its proof is about 15,000 pages long). We conjectured (but couldn’t prove) that someone else, who knew more about the Classification than we did, could show that Group Non-Membership was simply in QCMA outright.

    Now, after almost 20 years, François Le Gall, Harumichi Nishimura, and Dhara Thakkar have finally proven our conjecture—showing that Group Order, and therefore also Group Non-Membership, are indeed in QCMA. They did indeed need to use the Classification, doing one thing for almost all finite groups covered by the Classification, but a different thing for groups of “Ree type” (whatever those are).

    Interestingly, the Group Membership problem had also been a candidate for separating BQP/qpoly, or quantum polynomial time with polynomial-size quantum advice—my personal favorite complexity class—from BQP/poly, or the same thing with polynomial-size classical advice. And it might conceivably still be! The authors explain to me that their protocol doesn’t put Group Membership (with group G and subgroup H depending only on the input length n) into BQP/poly, the reason being that their short classical witnesses for g∉H depend on both g and H, in contrast to Watrous’s quantum witnesses which depended only on H. So there’s still plenty that’s open here! Actually, for that matter, I don’t know of good evidence that the entire Group Membership problem isn’t in BQP—i.e., that quantum computers can’t just solve the whole thing outright, with no Merlins or witnesses in sight!

    Anyway, huge congratulations to Le Gall, Nishimura, and Thakkar for peeling back our ignorance of these matters a bit further! Reeeeeeeee!
  • Potential big progress in quantum algorithms! Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone (GLM) have given what they present as a quantum algorithm to estimate the determinant of an n×n matrix A, exponentially faster in some contexts than we know how to do it classically.

    [Update (May 5): In the comments, Alessandro Luongo shares a paper where he and Changpeng Shao describe what appears to be essentially the same algorithm back in 2020.]

    The algorithm is closely related to the 2008 HHL (Harrow-Hassidim-Lloyd) quantum algorithm for solving systems of linear equations. Which means that anyone who knows the history of this class of quantum algorithms knows to ask immediately: what’s the fine print? A couple weeks ago, when I visited Harvard and MIT, I had a chance to catch up with Seth Lloyd, so I asked him, and he kindly told me. Firstly, we assume the matrix A is Hermitian and positive semidefinite. Next, we assume A is sparse, and not only that, but there’s a QRAM data structure that points to its nonzero entries, so you don’t need to do Grover search or the like to find them, and can query them in coherent superposition. Finally, we assume that all the eigenvalues of A are at least some constant λ>0. The algorithm then estimates det(A), to multiplicative error ε, in time that scales linearly with log(n), and polynomially with 1/λ and 1/ε.

    Now for the challenge I leave for ambitious readers: is there a classical randomized algorithm to estimate the determinant under the same assumptions and with comparable running time? In other words, can the GLM algorithm be “Ewinized”? Seth didn’t know, and I think it’s a wonderful crisp open question! On the one hand, if Ewinization is possible, it wouldn’t be the first time that publicity on this blog had led to the brutal murder of a tantalizing quantum speedup. On the other hand … well, maybe not! I also consider it possible that the problem solved by GLM—for exponentially-large, implicitly-specified matrices A—is BQP-complete, as for example was the general problem solved by HHL. This would mean, for example, that one could embed Shor’s factoring algorithm into GLM, and that there’s no hope of dequantizing it unless P=BQP. (Even then, though, just like with the HHL algorithm, we’d still face the question of whether the GLM algorithm was “independently useful,” or whether it merely reproduced quantum speedups that were already known.)
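    For readers who want to take a crack at the challenge, here is a minimal classical baseline for the same task: Hutchinson trace estimation of log det(A) = tr log(A), using the series log(A) = −Σ_{k≥1} (I−A)^k/k. To be clear, this is a standard classical technique and not a dequantization of GLM: it needs only matrix-vector products with A, but its running time is polynomial in n rather than log(n), so it leaves the actual open question (whether polylog-time classical estimation is possible under GLM-style QRAM access assumptions) untouched. The sketch assumes A is positive semidefinite with eigenvalues in [λ, 1] (rescale if necessary), so that ‖I−A‖ ≤ 1−λ < 1 and the series converges.

    ```python
    import numpy as np

    def logdet_estimate(A, num_probes=10, order=60, rng=None):
        """Estimate log det(A) = tr log(A) for PSD A with eigenvalues in [lam, 1],
        via Hutchinson trace estimation applied to the convergent series
        log(A) = -sum_{k>=1} (I - A)^k / k.  Only matrix-vector products with A
        are used, so for sparse A the cost is O(num_probes * order * nnz(A))."""
        rng = np.random.default_rng(rng)
        n = A.shape[0]
        estimates = []
        for _ in range(num_probes):
            z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
            v, acc = z.copy(), 0.0
            for k in range(1, order + 1):
                v = v - A @ v                     # v = (I - A)^k z
                acc -= (z @ v) / k                # accumulate -z^T (I-A)^k z / k
            estimates.append(acc)
        return float(np.mean(estimates))          # E[z^T log(A) z] = tr log(A)
    ```

    As a sanity check, for a diagonal A each Rademacher probe is exact (since each z_i² = 1), so the only error is series truncation; det(A) itself is then exp of the returned value.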

    Anyway, quantum algorithms research lives! So does dequantization research! If basic science in the US is able to continue at all—the thing I promised not to talk about in this post—we’ll have plenty to keep us busy over the next few years.

Theoretical Computer Science for AI Alignment … and More

Thursday, April 10th, 2025

In this terrifying time for the world, I’m delighted to announce a little glimmer of good news. I’m receiving a large grant from the wonderful Open Philanthropy, to build up a group of students and postdocs over the next few years, here at UT Austin, to do research in theoretical computer science that’s motivated by AI alignment. We’ll think about some of the same topics I thought about in my time at OpenAI—interpretability of neural nets, cryptographic backdoors, out-of-distribution generalization—but we also hope to be a sort of “consulting shop,” to whom anyone in the alignment community can come with theoretical computer science problems.

I already have two PhD students and several undergraduate students working in this direction. If you’re interested in doing a PhD in CS theory for AI alignment, feel free to apply to the CS PhD program at UT Austin this coming December and say so, listing me as a potential advisor.

Meanwhile, if you’re interested in a postdoc in CS theory for AI alignment, to start as early as this coming August, please email me your CV and links to representative publications, and arrange for two recommendation letters to be emailed to me.


The Open Philanthropy project will put me in regular contact with all sorts of people who are trying to develop complexity theory for AI interpretability and alignment. One great example of such a person is Eric Neyman—previously a PhD student of Tim Roughgarden at Columbia, now at the Alignment Research Center, the Berkeley organization founded by my former student Paul Christiano. Eric has asked me to share an exciting announcement, along similar lines to the above:

The Alignment Research Center (ARC) is looking for grad students and postdocs for its visiting researcher program. ARC is trying to develop algorithms for explaining neural network behavior, with the goal of advancing AI safety (see here for a more detailed summary). Our research approach is fairly theory-focused, and we are interested in applicants with backgrounds in CS theory or ML. Visiting researcher appointments are typically 10 weeks long, and are offered year-round.

If you are interested, you can apply here. (The link also provides more details about the role, including some samples of past work done by ARC.) If you have any questions, feel free to email hiring@alignment.org.

Some of my students and I are working closely with the ARC team. I like what I’ve seen of their research so far, and would encourage readers with the relevant background to apply.


Meantime, I of course continue to be interested in quantum computing! I’ve applied for multiple grants to continue doing quantum complexity theory, though whether I can get such grants will alas depend (among other factors) on whether the US National Science Foundation continues to exist as more than a shadow of what it was. The signs look ominous: Science magazine reports that the NSF just cut the number of awarded graduate fellowships in half, and this has almost certainly directly affected students who I know and care about.


Meantime we all do the best we can. My UTCS colleague, Chandrajit Bajaj, is currently seeking a postdoc in the general area of Statistical Machine Learning, Mathematics, and Statistical Physics, for up to three years. Topics include:

  • Learning various dynamical systems through their Stochastic Hamiltonians. This involves many subproblems in geometry, stochastic optimization, and stabilized flows, which would be interesting in their own right.
  • Optimizing task dynamics on different algebraic varieties of applied interest — Grassmannians, the Stiefel and Flag manifolds, Lie groups, etc.

If you’re interested, please email Chandra at bajaj@cs.utexas.edu.


Thanks so much to the folks at Open Philanthropy, and to everyone else doing their best to push basic research forward even while our civilization is on fire.

On the JPMC/Quantinuum certified quantum randomness demo

Wednesday, March 26th, 2025

These days, any quantum computing post I write ought to begin with the disclaimer that the armies of Sauron are triumphing around the globe, this is the darkest time for humanity most of us have ever known, and nothing else matters by comparison. Certainly not quantum computing. Nevertheless stuff happens in quantum computing and it often brings me happiness to blog about it—certainly more happiness than doomscrolling or political arguments.


So then: today JP Morgan Chase announced that, together with Quantinuum and DoE labs, they’ve experimentally demonstrated the protocol I proposed in 2018, and further developed in a STOC’2023 paper with Shih-Han Hung, for using current quantum supremacy experiments to generate certifiable random bits for use in cryptographic applications. See here for our paper in Nature—the JPMC team was gracious enough to include me and Shih-Han as coauthors.

Mirroring a conceptual split in the protocol itself, Quantinuum handled the quantum hardware part of my protocol, while JPMC handled the rest: modification of the protocol to make it suitable for trapped ions, as well as software to generate pseudorandom challenge circuits to send to the quantum computer over the Internet, then to verify the correctness of the quantum computer’s outputs (thereby ensuring, under reasonable complexity assumptions, that the outputs contained at least a certain amount of entropy), and finally to extract nearly uniform random bits from the outputs. The experiment used Quantinuum’s 56-qubit trapped-ion quantum computer, which took a couple of seconds to respond to each challenge. Verification of the outputs was done using the Frontier and Summit supercomputers. The team estimates that about 70,000 certified random bits were generated over 18 hours, in such a way that, using the best currently-known attack, you’d need at least about four Frontier supercomputers working continuously to spoof the quantum computer’s outputs and get the verifier to accept non-random bits.
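To illustrate just the last of those steps, here’s a toy randomness extractor based on Toeplitz hashing, a standard 2-universal hash family. (I’m not claiming this is the extractor the JPMC team actually used; take it purely as a generic sketch.) Given n raw bits assumed to contain a certain amount of min-entropy, plus a uniform seed of n + m − 1 bits, the leftover hash lemma guarantees that the m output bits are close to uniform, provided m is somewhat less than the min-entropy.

```python
import numpy as np

def toeplitz_extract(raw_bits, seed_bits, out_len):
    """Randomness extraction via a random Toeplitz matrix over GF(2).
    raw_bits:  n bits assumed to carry min-entropy comfortably above out_len.
    seed_bits: n + out_len - 1 uniformly random bits defining the matrix.
    Returns out_len nearly uniform bits (by the leftover hash lemma)."""
    n = len(raw_bits)
    assert len(seed_bits) == n + out_len - 1
    # T[i][j] = seed_bits[i - j + n - 1]: constant along diagonals (Toeplitz).
    # Built explicitly for clarity; at scale one would use FFT-based products.
    T = np.array([[seed_bits[i - j + n - 1] for j in range(n)]
                  for i in range(out_len)], dtype=np.uint8)
    return [int(b) for b in (T @ np.array(raw_bits, dtype=np.uint8)) % 2]
```

Because the hash family is 2-universal, the adversary gains nothing by knowing the seed after the fact, which is what makes the extracted bits usable in the protocol even though the seed is public.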

We should be clear that this gap, though impressive from the standpoint of demonstrating quantum supremacy with trapped ions, is not yet good enough for high-stakes cryptographic applications (more about that later). Another important caveat is that the parameters of the experiment aren’t yet good enough for my and Shih-Han’s formal security reduction to give assurances: instead, for the moment one only has “practical security,” or security against a class of simplified yet realistic attackers. I hope that future experiments will build on the JPMC/Quantinuum achievement and remedy these issues.


The story of this certified randomness protocol starts seven years ago, when I had lunch with Or Sattath at a Japanese restaurant in Tel Aviv. Or told me that I needed to pay more attention to the then-recent Quantum Lightning paper by Mark Zhandry. I already know that paper is great, I said. You don’t know the half of it, Or replied. As one byproduct of what he’s doing, for example, Mark gives a way to measure quantum money states in order to get certified random bits—bits whose genuine randomness (not pseudorandomness) is certified by computational intractability, something that wouldn’t have been possible in a classical world.

    Well, why do you even need quantum money states for that? I asked. Why not just use, say, a quantum supremacy experiment based on Random Circuit Sampling, like the one Google is now planning to do (i.e., the experiment Google would do a year after this conversation)? Then, the more I thought about that question, the more I liked the idea that these “useless” Random Circuit Sampling experiments would do something potentially useful despite themselves, generating certified entropy as just an inevitable byproduct of passing our benchmarks for sampling from certain classically-hard probability distributions. Over the next couple weeks, I worked out some of the technical details of the security analysis (though not all! it was a big job, and one that only got finished years later, when I brought Shih-Han to UT Austin as a postdoc and worked with him on it for a year).

    I emailed the Google team about the idea; they responded enthusiastically. I also got in touch with UT Austin’s intellectual property office to file a provisional patent, the only time I’ve done that in my career. UT and I successfully licensed the patent to Google, though the license lapsed when Google’s priorities changed. Meantime, a couple years ago, when I visited Quantinuum’s lab in Broomfield, Colorado, I learned that a JPMC-led collaboration toward an experimental demonstration of the protocol was then underway. The protocol was well-suited to Quantinuum’s devices, particularly given their ability to apply two-qubit gates with all-to-all connectivity and fidelity approaching 99.9%.

I should mention that, in the intervening years, others had also studied the use of quantum computers to generate cryptographically certified randomness; indeed it became a whole subarea of quantum computing. See especially the seminal work of Brakerski, Christiano, Mahadev, Vazirani, and Vidick, which gave a certified randomness protocol that (unlike mine) relies only on standard cryptographic assumptions and allows verification in classical polynomial time. The “only” downside is that implementing their protocol securely seems to require a full fault-tolerant quantum computer (capable of things like Shor’s algorithm), rather than current noisy devices with 50-100 qubits.


For the rest of this post, I’ll share a little FAQ, adapted from my answers to a journalist’s questions. Happy to answer additional questions in the comments.

  • To what extent is this a world-first?

Well, it’s the first experimental demonstration of a protocol to generate cryptographically certified random bits with the use of a quantum computer.

To remove any misunderstanding: if you’re just talking about the use of quantum phenomena to generate random bits, without certifying the randomness of those bits to a faraway skeptic, then that’s been easy to do for generations (just stick a Geiger counter next to some radioactive material!). The new part, the part that requires a quantum computer, is all about the certification.

Also: if you’re talking about the use of separated, entangled parties to generate certified random bits by violating the Bell inequality (see e.g. here) — that approach does give certification, but the downside is that you need to believe that the two parties really are unable to communicate with each other, something that you couldn’t certify in practice over the Internet. A quantum-computer-based protocol like mine, by contrast, requires just a single quantum device.

  • Why is the certification element important?

In any cryptographic application where you need to distribute random bits over the Internet, the fundamental question is, why should everyone trust that these bits are truly random, rather than being backdoored by an adversary?

This isn’t so easy to solve.  If you consider any classical method for generating random bits, an adversary could substitute a cryptographic pseudorandom generator without anyone being the wiser.

The key insight behind the quantum protocol is that a quantum computer can solve certain problems efficiently, but only (it’s conjectured, and proven under plausible assumptions) by sampling an answer randomly — thereby giving you certified randomness, once you verify that the quantum computer really has solved the problem in question.  Unlike with a classical computer, there’s no way to substitute a pseudorandom generator, since randomness is just an inherent part of a quantum computer’s operation — specifically, when the entangled superposition state randomly collapses on measurement.
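As a toy illustration of that verify-then-trust loop, here is the linear cross-entropy benchmark (XEB) used to score Random Circuit Sampling experiments, with a Haar-random state standing in for the output state of a pseudorandom challenge circuit. This is an idealization: the sketch ignores noise, and a real verifier must compute the ideal output probabilities by heavy classical simulation, which is exactly why verification is so expensive.

```python
import numpy as np

def linear_xeb(p, samples):
    """Linear cross-entropy score N * E[p(x)] - 1: concentrates near 1 for
    samples drawn from p itself (Porter-Thomas statistics), and near 0 for
    uniformly random samples, so a high score certifies hard sampling."""
    return len(p) * float(np.mean(p[samples])) - 1.0

rng = np.random.default_rng(0)
n = 8                                    # toy size; the experiment used 56 qubits
N = 2 ** n

# Haar-random state as a stand-in for a random circuit's output state.
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)
p = np.abs(psi) ** 2                     # ideal output distribution

honest = rng.choice(N, size=5000, p=p)   # honest sampler (the "quantum computer")
spoof = rng.integers(N, size=5000)       # lazy classical spoofer sampling uniformly

print(f"honest XEB: {linear_xeb(p, honest):.3f}, spoof XEB: {linear_xeb(p, spoof):.3f}")
```

In the actual protocol the verifier also enforces a strict response deadline (so a classical spoofer can’t take hours to simulate each circuit), and only then feeds the accepted samples into a randomness extractor.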

  • What are the applications and possible uses?

One potential application is to proof-of-stake cryptocurrencies, like Ethereum.  These cryptocurrencies are vastly more energy-efficient than “proof-of-work” cryptocurrencies (like Bitcoin), but they require lotteries to be run constantly to decide which currency holder gets to add the next block to the blockchain (and get paid for it).  Billions of dollars are riding on these lotteries being fair.

Other potential applications are to zero-knowledge protocols, lotteries and online gambling, and deciding which precincts to audit in elections. See here for a nice perspective article that JPMC put together discussing these and other potential applications.

Having said all this, a major problem right now is that verifying the results using a classical computer is extremely expensive — indeed, basically as expensive as spoofing the results would be. This problem, and other problems related to verification (e.g., “why should everyone else trust the verifier?”), are the reasons why most people will probably pass on this solution in the near future, and generate random bits in simpler, non-quantum-computational ways.

We do know, from e.g. Brakerski et al.’s work, that the problem of making the verification fast is solvable with sufficient advancements in quantum computing hardware.  Even without hardware advancements, it might also be solvable with new theoretical ideas — one of my favorite research directions.

  • Is this an early win for quantum computing?

It’s not directly an advancement in quantum computing hardware, but yes, it’s a very nice demonstration of such advancements — of something that’s possible today but wouldn’t have been possible just a few short years ago.  It’s a step toward using current, non-error-corrected quantum computers for a practical application that’s not itself about quantum mechanics but that really does inherently require quantum computers.

Of course it’s personally gratifying to see something I developed get experimentally realized after seven years.  Huge congratulations to the teams at JP Morgan Chase and Quantinuum, and thanks to them for the hard work they put into this.


Unrelated Announcement: See here for a podcast about quantum computing that I recorded with, of all organizations, the FBI. As I told the gentlemen who interviewed me, I’m glad the FBI still exists, let alone its podcast!

Jacob Barandes and Me

Tuesday, March 4th, 2025

Please enjoy Harvard’s Jacob Barandes and yours truly duking it out for 2.5 hours on YouTube about the interpretation of quantum mechanics, and specifically Jacob’s recent proposal involving “indivisible stochastic dynamics,” with Curt Jaimungal as moderator. As always, I strongly recommend watching with captions turned on and at 2X speed.

To summarize what I learned in one paragraph: just like in Bohmian mechanics, Jacob wants classical trajectories for particles, constructed so as to reproduce the predictions of QM perfectly. But unlike the Bohmians, Jacob doesn’t want to commit to any particular rule for the evolution of those particle trajectories. He merely asserts, metaphysically, that the trajectories exist. My response was basically, “OK fine, you can do that if you want, but what does it buy me?” We basically went around in circles on that question the entire time, though hopefully with many edutaining digressions.

Despite the lack of resolution, I felt pretty good about the conversation afterward: Jacob got an extensive opportunity to explain his ideas to listeners, along with his detailed beefs against both the Many-Worlds and Copenhagen interpretations. Meanwhile, even though I spoke less than Jacob, I did get some opportunities to do my job, pushing back and asking the kinds of questions I imagined most physicists would ask (even though I’m not a physicist, I felt compelled to represent them!). Jacob and I ended the conversation much as we began: disagreeing on extremely friendly terms.

Then, alas, I read the comments on YouTube and got depressed. Apparently, I’m a hidebound academic elitist who’s failed to grasp Jacob’s revolutionary, paradigm-smashing theory, and who kept arrogantly interrupting with snide, impertinent questions (“OK, but what can I do with this theory that I couldn’t do before?”). And, I learned, the ultimate proof of my smug, ivory-tower malice was to be found in my body language, the way I constantly smiled nervously and rocked back and forth. I couldn’t help but wonder: have these people watched any other YouTube videos that I’m in? I don’t get to pick how I look and sound. I came out of the factory this way.

One commenter opined that I must hate Jacob’s theory only because I’ve poured my life into quantum computing, which depends on superposition, the confusing concept that Jacob has now unmasked as a farce. Presumably it’s beyond this person’s comprehension that Jacob makes exactly the same predictions as I make for what a quantum computer will do when built; Jacob just prefers a different way of talking about it.

I was reminded that optimizing for one’s scientific colleagues is wildly different from optimizing for YouTube engagement. In science, it’s obvious to everyone that the burden of proof is on whoever is presenting the new idea—and that this burden is high (though not infinitely high), especially with anything as well-trodden and skull-strewn as the foundations of quantum mechanics. The way the game works is: other people try as hard as they can to shoot the new idea down, so we see how it fares under duress. This is not a sign of contempt for new ideas, but of respect for them.

On YouTube, the situation is precisely reversed. There, anyone perceived as the “mainstream establishment” faces a near-insurmountable burden of proof, while anyone perceived as “renegade” wins by default if they identify any hole whatsoever in mainstream understanding. Crucially, the renegade’s own alternative theories are under no particular burden; indeed, the details of their theories are not even that important or relevant. I don’t want to list science YouTubers who’ve learned to exploit that dynamic masterfully, though I’m told one rhymes with “Frabine Schlossenfelder.” Of course this mirrors what’s happened in the wider world, where RFK Jr. now runs American health policy, Tulsi Gabbard runs the intelligence establishment, and other conspiracy theorists have at last fired all the experts and taken control of our civilization, and are eagerly mashing the buttons to see what happens. I’d take Jacob Barandes, or even Sabine, a billion times over the lunatics in power. But I do hope Jacob turns out to be wrong about Many-Worlds, because it would give me solace to know that there are other branches of the wavefunction where things are a little more sane.

FAQ on Microsoft’s topological qubit thing

Thursday, February 20th, 2025

Q1. Did you see Microsoft’s announcement?
A. Yes, thanks, you can stop emailing to ask! Microsoft’s Chetan Nayak was even kind enough to give me a personal briefing a few weeks ago. Yesterday I did a brief interview on this for the BBC’s World Business Report, and I also commented for MIT Technology Review.

Q2. What is a topological qubit?
A. It’s a special kind of qubit built using nonabelian anyons, which are excitations that can exist in a two-dimensional medium, behaving neither as fermions nor as bosons. The idea grew out of seminal work by Alexei Kitaev, Michael Freedman, and others starting in the late 1990s. Topological qubits have proved harder to create and control than ordinary qubits.

Q3. Then why do people care about topological qubits?
A. The dream is that they could eventually be more resilient to decoherence than regular qubits, since an error, in order to matter, needs to change the topology of how the nonabelian anyons are braided around each other. So you’d have some robustness built in to the physics of your system, rather than having to engineer it laboriously at the software level (via quantum fault-tolerance).

Q4. Did Microsoft create the first topological qubit?
A. Well, they say they did! [Update: Commenters point out to me that buried in Nature‘s review materials is the following striking passage: “The editorial team wishes to point out that the results in this manuscript do not represent evidence for the presence of Majorana zero modes in the reported devices. The work is published for introducing a device architecture that might enable fusion experiments using future Majorana zero modes.” So, the situation is that Microsoft is unambiguously claiming to have created a topological qubit, and they just published a relevant paper in Nature, but their claim to have created a topological qubit has not yet been accepted by peer review.]

Q5. Didn’t Microsoft claim the experimental creation of Majorana zero modes—a building block of topological qubits—back in 2018, and didn’t they then need to retract their claim?
A. Yep. Certainly that history is making some experts cautious about the new claim. When I asked Chetan Nayak how confident I should be, his response was basically “look, we now have a topological qubit that’s behaving fully as a qubit; how much more do people want?”

Q6. Is this a big deal?
A. If the claim stands, I’d say it would be a scientific milestone for the field of topological quantum computing and physics beyond. The number of topological qubits manipulated in a single experiment would then have finally increased from 0 to 1, and depending on how you define things, arguably a “new state of matter” would even have been created, one that doesn’t appear in nature (but only in Nature).

Q7. Is this useful?
A. Not yet! If anyone claims that a single qubit, or even 30 qubits, are already useful for speeding up computation, you can ignore anything else that person says. (Certainly Microsoft makes no such claim.) On the question of what we believe quantum computers will or won’t eventually be useful for, see like half the archives of this blog over the past twenty years.

Q8. Does this announcement vindicate topological qubits as the way forward for quantum computing?
A. Think of it this way. If Microsoft’s claim stands, then topological qubits have finally reached some sort of parity with where more traditional qubits were 20-30 years ago. I.e., the non-topological approaches like superconducting, trapped-ion, and neutral-atom have an absolutely massive head start: there, Google, IBM, Quantinuum, QuEra, and other companies now routinely do experiments with dozens or even hundreds of entangled qubits, and thousands of two-qubit gates. Topological qubits can win if, and only if, they turn out to be so much more reliable that they leapfrog the earlier approaches—sort of like the transistor did to the vacuum tube and electromechanical relay. Whether that will happen is still an open question, to put it extremely mildly.

Q9. Are there other major commercial efforts to build topological qubits?
A. No, it’s pretty much just Microsoft [update: apparently Nokia Bell Labs also has a smaller, quieter effort, and Delft University in the Netherlands also continues work in the area, having ended an earlier collaboration with Microsoft]. Purely as a scientist who likes to see things tried, I’m grateful that at least one player stuck with the topological approach even when it ended up being a long, painful slog.

Q10. Is Microsoft now on track to scale to a million topological qubits in the next few years?
A. In the world of corporate PR and pop-science headlines, sure, why not? As Bender from Futurama says, “I can guarantee anything you want!” In the world of reality, a “few years” certainly feels overly aggressive to me, but good luck to Microsoft and good luck to its competitors! I foresee exciting times ahead, provided we still have a functioning civilization in which to enjoy them.

Update (Feb 20): Chetan Nayak himself comments here, to respond to criticisms about Microsoft’s Nature paper lacking direct evidence for Majorana zero modes or topological qubits. He says that the paper, though published this week, was submitted a year ago, before the evidence existed. Of course we all look forward to the followup paper.