Archive for the ‘Complexity’ Category

Quantum Complexity Theory Student Project Showcase #5 (2025 Edition)!

Thursday, July 31st, 2025

Sorry for the long blog-hiatus! I was completely occupied for weeks, teaching an intensive course on theoretical computer science to 11-year-olds (!), at a math camp in St. Louis that was also attended by my 8-year-old son. Soon I’ll accompany my 12-year-old daughter to a science camp in Connecticut, where I’ll also give lectures.

There’s a great deal to say about these experiences, but for now: it’s been utterly transformative and life-affirming to spend my days teaching precocious, enthusiastic, non-jaded children the material I love, in the real world, rather than (let’s say) scrolling through depressing world news and social media posts and emails from hateful trolls on my phone. It’s made me feel 25 years younger (well, at least until I need to walk up a flight of stairs). It’s made me want to go back to actual research and teaching, which, besides family and friends, have been the main sources of joy in my life.


So on that note, and without further ado: I hereby present the latest Quantum Complexity Theory Student Project Showcase! As the name suggests, this is where I share a selection of the best research projects from the students who took my graduate class on Quantum Complexity Theory at UT Austin this past spring.

See here for the four previous iterations of the Showcase.

(As you can see, the timing hasn’t been 100% consistent.)

I expect that, as in past editions, many of this year’s projects will lead to published research papers, or at the very least, preprints on the arXiv.


And now, really without further ado, the projects!

(1) Quantum Hermite Transform and Gaussian Goldreich-Levin, by Vishnu Iyer and Siddhartha Jain.

Vishnu and Sid propose a new primitive for quantum algorithms—the Hermite transform, as opposed to the Fourier transform—and give at least one successful example of its use. They’d be eager to know if anyone can think of other applications!

(2) Quantum Statistical Witness Indistinguishability, by Shafik Nassar and Ronak Ramachandran.

In modern cryptography, a proof protocol that isn’t statistical zero-knowledge (SZK) might still enjoy the weaker property of being statistically witness-indistinguishable (SWI): that is, Arthur can’t tell which of two possible yes-witnesses Merlin holds. Here Shafik and Ronak initiate the study of quantum SWI, and prove the basic properties of the notion, such as the equivalence of the honest-verifier and dishonest-verifier versions. Hopefully this will serve as a springboard for someone to find an actual QSWI protocol for an interesting problem.
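To pin the property down, here’s the standard classical definition, in my paraphrase; the quantum version studied in the project, as I understand it, replaces the verifier’s view with its quantum analogue.

```latex
% Statistical witness indistinguishability (classical version, paraphrased):
% an interactive proof (P, V) for an NP relation R is SWI if, for every
% (even computationally unbounded) verifier V* and every x, w_1, w_2 with
% (x, w_1) \in R and (x, w_2) \in R, the statistical distance satisfies
\Delta\Big(\mathrm{View}_{V^*}\big[P(w_1), V^*\big](x),\;
           \mathrm{View}_{V^*}\big[P(w_2), V^*\big](x)\Big) \le \mathrm{negl}(|x|).
```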

(3) A Zero-Knowledge Protocol for Keyed Unitary Families, by David Joy and Angela Zhang.

Continuing the theme of quantum zero-knowledge, David and Angela give a protocol by which Merlin can convince Arthur that he knows a unitary relating one pure state to another, without revealing the unitary. Again continuing a theme, applications of this protocol are sought!

(4) On Query Lower Bounds for Aaronson-Kuperberg Unitary Synthesis Circuits, by Arko Banerjee.

Back in 2006, when we formulated our so-called “Unitary Synthesis Conjecture,” Greg Kuperberg and I showed that if a quantum algorithm applies an n-qubit unitary U(f) after making a single query to a Boolean function f, then as we range over f’s, there can be at most 4^n possible values of U(f). Here Arko improves our bound to 2^n, which is tight. He also tries extremely hard to generalize our bound to the two-query case, not quite succeeding but proving partial results that hopefully will be helpful to others.

(5) Quantum Search with Non-Interacting Bosons and Fermions, by Aravind Karthigeyan.

This one really made me think. Aravind studies the problem of searching for a single marked vertex, on the complete graph with N vertices, using either M bosons or M fermions that can hop between the vertices. With M bosons, he shows that the search succeeds in Θ(√(N/M)) time with high probability, which is just the usual runtime for Grover search with M parallel searchers. With fermions, by contrast, he shows that more time is needed. Why? Because of the Pauli Exclusion Principle! The fermions all “step on each other’s toes,” competing to be the one that jumps onto the marked vertex, which limits the advantage of having M fermions searching in parallel.

(6) Limits to Pseudodeterminism in Quantum Communication Protocols, by Jiawei Li.

Jiawei revisits the famous Hidden Matching Problem, which is known to have an exponential gap between its randomized one-way communication complexity of ~√n, and its quantum one-way communication complexity of ~log(n). He makes a new observation about this problem: namely, if you want the exponential quantum communication advantage, then you must content yourself with a protocol that can generate many different possible correct answers with appreciable probabilities (i.e., that generates large min-entropy). In other words, no good quantum protocol for the problem is pseudodeterministic. This complements, for example, my and Shih-Han Hung’s work, which showed the same statement for quantum supremacy experiments based on Random Circuit Sampling, or the long line of works that showed it for experiments that violate the Bell/CHSH inequality.
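For reference, here’s the Hidden Matching problem (introduced by Bar-Yossef, Jayram, and Kerenidis), stated as I remember it:

```latex
% Hidden Matching (one-way communication, Alice -> Bob):
% Alice receives x \in \{0,1\}^n; Bob receives a perfect matching M on \{1,\dots,n\}.
% Bob must output any triple (i, j, b) such that
\{i, j\} \in M \quad \text{and} \quad b = x_i \oplus x_j.
```

Since any of the n/2 edges of M yields a correct answer, a protocol has enormous freedom in what to output. Jiawei’s result says that a quantum protocol achieving the exponential advantage must genuinely exploit that freedom, spreading its probability over many different answers.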

Congratulations to my students for their accomplishments, and thanks to them for giving me permission to include their work in this showcase!

Quantum! AI! Everything but Trump!

Wednesday, April 30th, 2025
  • Grant Sanderson, of 3blue1brown, has put up a phenomenal YouTube video explaining Grover’s algorithm, and dispelling the fundamental misconception about quantum computing, that QC works simply by “trying all the possibilities in parallel.” Let me not futz around: this video explains, in 36 minutes, what I’ve tried to explain over and over on this blog for 20 years … and it does it better. It’s a masterpiece. Yes, I consulted with Grant for this video (he wanted my intuitions for “why is the answer √N?”), and I even have a cameo at the end of it, but I wish I had made the video. Damn you, Grant!
  • The incomparably great, and absurdly prolific, blogger Zvi Mowshowitz and yours truly spend 1 hour and 40 minutes discussing AI existential risk, education, blogging, and more. I end up “interviewing” Zvi, who does the majority of the talking, which is fine by me, as he has many important things to say! (Among them: his searing critique of those K-12 educators who see it as their life’s mission to prevent kids from learning too much too fast—I’ve linked his best piece on this from the header of this blog.) Thanks so much to Rick Coyle for arranging this conversation.
  • Progress in quantum complexity theory! In 2000, John Watrous showed that the Group Non-Membership problem is in the complexity class QMA (Quantum Merlin-Arthur). In other words, if some element g is not contained in a given subgroup H of an exponentially large finite group G, which is specified via a black box, then there’s a short quantum proof that g∉H, with only ~log|G| qubits, which can be verified on a quantum computer in time polynomial in log|G|. This soon raised the question of whether Group Non-Membership could be used to separate QMA from QCMA by oracles, where QCMA (Quantum Classical Merlin Arthur), defined by Aharonov and Naveh in 2002, is the subclass of QMA where the proof needs to be classical, but the verification procedure can still be quantum. In other words, could Group Non-Membership be the first example of a classical problem for which quantum proofs actually help?

    In 2006, alas, Greg Kuperberg and I showed that the answer was probably “no”: Group Non-Membership has “polynomial QCMA query complexity.” This means that there’s a QCMA protocol for the problem where Arthur makes only polylog|G| quantum queries to the group oracle—albeit, possibly a number of additional quantum computation steps exponential in log|G|! To prove our result, Greg and I needed to make mild use of the Classification of Finite Simple Groups, one of the crowning achievements of 20th-century mathematics (its proof is about 15,000 pages long). We conjectured (but couldn’t prove) that someone else, who knew more about the Classification than we did, could show that Group Non-Membership was simply in QCMA outright.

    Now, after almost 20 years, François Le Gall, Harumichi Nishimura, and Dhara Thakkar have finally proven our conjecture—showing that Group Order, and therefore also Group Non-Membership, are indeed in QCMA. They did indeed need to use the Classification, doing one thing for almost all finite groups covered by the Classification, but a different thing for groups of “Ree type” (whatever those are).

    Interestingly, the Group Membership problem had also been a candidate for separating BQP/qpoly, or quantum polynomial time with polynomial-size quantum advice—my personal favorite complexity class—from BQP/poly, or the same thing with polynomial-size classical advice. And it might conceivably still be! The authors explain to me that their protocol doesn’t put Group Membership (with group G and subgroup H depending only on the input length n) into BQP/poly, the reason being that their short classical witnesses for g∉H depend on both g and H, in contrast to Watrous’s quantum witnesses which depended only on H. So there’s still plenty that’s open here! Actually, for that matter, I don’t know of good evidence that the entire Group Membership problem isn’t in BQP—i.e., that quantum computers can’t just solve the whole thing outright, with no Merlins or witnesses in sight!

    Anyway, huge congratulations to Le Gall, Nishimura, and Thakkar for peeling back our ignorance of these matters a bit further! Reeeeeeeee!
  • Potential big progress in quantum algorithms! Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone (GLM) have given what they present as a quantum algorithm to estimate the determinant of an n×n matrix A, exponentially faster in some contexts than we know how to do it classically.

    [Update (May 5): In the comments, Alessandro Luongo shares a paper where he and Changpeng Shao describe what appears to be essentially the same algorithm back in 2020.]

    The algorithm is closely related to the 2008 HHL (Harrow-Hassidim-Lloyd) quantum algorithm for solving systems of linear equations. Which means that anyone who knows the history of this class of quantum algorithms knows to ask immediately: what’s the fine print? A couple weeks ago, when I visited Harvard and MIT, I had a chance to catch up with Seth Lloyd, so I asked him, and he kindly told me. Firstly, we assume the matrix A is Hermitian and positive semidefinite. Next, we assume A is sparse, and not only that, but there’s a QRAM data structure that points to its nonzero entries, so you don’t need to do Grover search or the like to find them, and can query them in coherent superposition. Finally, we assume that all the eigenvalues of A are at least some constant λ>0. The algorithm then estimates det(A), to multiplicative error ε, in time that scales linearly with log(n), and polynomially with 1/λ and 1/ε.

    Now for the challenge I leave for ambitious readers: is there a classical randomized algorithm to estimate the determinant under the same assumptions and with comparable running time? In other words, can the GLM algorithm be “Ewinized”? Seth didn’t know, and I think it’s a wonderful crisp open question! On the one hand, if Ewinization is possible, it wouldn’t be the first time that publicity on this blog had led to the brutal murder of a tantalizing quantum speedup. On the other hand … well, maybe not! I also consider it possible that the problem solved by GLM—for exponentially-large, implicitly-specified matrices A—is BQP-complete, as for example was the general problem solved by HHL. This would mean, for example, that one could embed Shor’s factoring algorithm into GLM, and that there’s no hope of dequantizing it unless P=BQP. (Even then, though, just like with the HHL algorithm, we’d still face the question of whether the GLM algorithm was “independently useful,” or whether it merely reproduced quantum speedups that were already known.) For a concrete sense of what the classical side of this question involves, see the sketch right after this list.

    Anyway, quantum algorithms research lives! So does dequantization research! If basic science in the US is able to continue at all—the thing I promised not to talk about in this post—we’ll have plenty to keep us busy over the next few years.
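For readers wondering what a classical attempt at the challenge above would even look like, here’s a minimal sketch (mine, not from the GLM paper) of the standard randomized log-determinant estimator: it combines the identity log det(A) = tr(log A) with Hutchinson’s trace estimator and a truncated Taylor series for the matrix logarithm. To be clear, this baseline takes time linear in n just to perform its matrix-vector multiplications, so it doesn’t answer the open question; “Ewinizing” GLM would mean doing something comparable in polylog(n) time, given sample-and-query access to A.

```python
import numpy as np

def logdet_estimate(matvec, n, lam, eps=0.05, num_probes=100, rng=None):
    """Estimate log det(A) = tr(log A) for Hermitian PSD A with all
    eigenvalues in [lam, 1], given only a matrix-vector oracle.

    Combines Hutchinson's trace estimator (random +/-1 probe vectors)
    with the Taylor series log A = -sum_{m>=1} (I - A)^m / m, which
    converges because ||I - A|| <= 1 - lam < 1."""
    rng = rng or np.random.default_rng(0)
    num_terms = int(np.ceil(np.log(1 / eps) / lam))   # truncation order
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        v = z.copy()                  # v holds (I - A)^m z as m increases
        acc = 0.0
        for m in range(1, num_terms + 1):
            v = v - matvec(v)         # one multiplication by (I - A)
            acc -= (z @ v) / m        # accumulate -z^T (I - A)^m z / m
        total += acc
    return total / num_probes

# Sanity check on a random matrix with spectrum inside [0.3, 0.9]:
n = 300
rng = np.random.default_rng(1)
eigs = rng.uniform(0.3, 0.9, size=n)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (Q * eigs) @ Q.T                  # A = Q diag(eigs) Q^T
print(logdet_estimate(lambda v: A @ v, n, lam=0.3))
print(np.sum(np.log(eigs)))           # exact answer, for comparison
```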

Theoretical Computer Science for AI Alignment … and More

Thursday, April 10th, 2025

In this terrifying time for the world, I’m delighted to announce a little glimmer of good news. I’m receiving a large grant from the wonderful Open Philanthropy, to build up a group of students and postdocs over the next few years, here at UT Austin, to do research in theoretical computer science that’s motivated by AI alignment. We’ll think about some of the same topics I thought about in my time at OpenAI—interpretability of neural nets, cryptographic backdoors, out-of-distribution generalization—but we also hope to be a sort of “consulting shop,” to whom anyone in the alignment community can come with theoretical computer science problems.

I already have two PhD students and several undergraduate students working in this direction. If you’re interested in doing a PhD in CS theory for AI alignment, feel free to apply to the CS PhD program at UT Austin this coming December and say so, listing me as a potential advisor.

Meanwhile, if you’re interested in a postdoc in CS theory for AI alignment, to start as early as this coming August, please email me your CV and links to representative publications, and arrange for two recommendation letters to be emailed to me.


The Open Philanthropy project will put me in regular contact with all sorts of people who are trying to develop complexity theory for AI interpretability and alignment. One great example of such a person is Eric Neyman—previously a PhD student of Tim Roughgarden at Columbia, now at the Alignment Research Center, the Berkeley organization founded by my former student Paul Christiano. Eric has asked me to share an exciting announcement, along similar lines to the above:

The Alignment Research Center (ARC) is looking for grad students and postdocs for its visiting researcher program. ARC is trying to develop algorithms for explaining neural network behavior, with the goal of advancing AI safety (see here for a more detailed summary). Our research approach is fairly theory-focused, and we are interested in applicants with backgrounds in CS theory or ML. Visiting researcher appointments are typically 10 weeks long, and are offered year-round.

If you are interested, you can apply here. (The link also provides more details about the role, including some samples of past work done by ARC.) If you have any questions, feel free to email hiring@alignment.org.

Some of my students and I are working closely with the ARC team. I like what I’ve seen of their research so far, and would encourage readers with the relevant background to apply.


Meantime, I of course continue to be interested in quantum computing! I’ve applied for multiple grants to continue doing quantum complexity theory, though whether or not I can get such grants will alas depend (among other factors) on whether the US National Science Foundation continues to exist, as more than a shadow of what it was. The signs look ominous; Science magazine reports that the NSF just cut by half the number of awarded graduate fellowships, and this has almost certainly directly affected students who I know and care about.


Meantime we all do the best we can. My UTCS colleague, Chandrajit Bajaj, is currently seeking a postdoc in the general area of Statistical Machine Learning, Mathematics, and Statistical Physics, for up to three years. Topics include:

  • Learning various dynamical systems through their Stochastic Hamiltonians. This involves many subproblems in geometry, stochastic optimization, and stabilized flows that would be interesting in their own right.
  • Optimizing task dynamics on different algebraic varieties of applied interest — Grassmannians, the Stiefel and Flag manifolds, Lie groups, etc.

If you’re interested, please email Chandra at bajaj@cs.utexas.edu.


Thanks so much to the folks at Open Philanthropy, and to everyone else doing their best to push basic research forward even while our civilization is on fire.

On the JPMC/Quantinuum certified quantum randomness demo

Wednesday, March 26th, 2025

These days, any quantum computing post I write ought to begin with the disclaimer that the armies of Sauron are triumphing around the globe, this is the darkest time for humanity most of us have ever known, and nothing else matters by comparison. Certainly not quantum computing. Nevertheless stuff happens in quantum computing and it often brings me happiness to blog about it—certainly more happiness than doomscrolling or political arguments.


So then: today JP Morgan Chase announced that, together with Quantinuum and DoE labs, they’ve experimentally demonstrated the protocol I proposed in 2018, and further developed in a STOC’2023 paper with Shih-Han Hung, for using current quantum supremacy experiments to generate certifiable random bits for use in cryptographic applications. See here for our paper in Nature—the JPMC team was gracious enough to include me and Shih-Han as coauthors.

Mirroring a conceptual split in the protocol itself, Quantinuum handled the quantum hardware part of my protocol, while JPMC handled the rest: modification of the protocol to make it suitable for trapped ions, as well as software to generate pseudorandom challenge circuits to send to the quantum computer over the Internet, then to verify the correctness of the quantum computer’s outputs (thereby ensuring, under reasonable complexity assumptions, that the outputs contained at least a certain amount of entropy), and finally to extract nearly uniform random bits from the outputs. The experiment used Quantinuum’s 56-qubit trapped-ion quantum computer, which took a couple of seconds to respond to each challenge. Verification of the outputs was done using the Frontier and Summit supercomputers. The team estimates that about 70,000 certified random bits were generated over 18 hours, in such a way that, using the best currently-known attack, you’d need at least about four Frontier supercomputers working continuously to spoof the quantum computer’s outputs, and get the verifier to accept non-random bits.
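For concreteness, here’s a schematic of the classical side of the protocol: an illustrative sketch with made-up function names, not JPMC’s actual code. The verifier’s simulation step is the exponentially expensive part, which is why supercomputers were needed for it.

```python
import numpy as np

def xeb_score(ideal_probs, samples):
    """Linear cross-entropy benchmark for an n-qubit circuit: the average
    of 2^n * p_C(x) over the returned samples, which is ~2 for ideal
    quantum samples and ~1 for uniform random guessing."""
    n = int(np.log2(len(ideal_probs)))
    return 2**n * np.mean([ideal_probs[x] for x in samples])

def certified_randomness(rounds, gen_challenge, quantum_sample,
                         simulate, threshold, extract, seed):
    accepted = []
    for _ in range(rounds):
        circuit = gen_challenge()          # pseudorandom challenge circuit
        samples = quantum_sample(circuit)  # QC must answer within seconds
        probs = simulate(circuit)          # classical check: the 2^n-cost step
        if xeb_score(probs, samples) < threshold:
            return None                    # verification failed: abort
        accepted.extend(samples)
    # Under the hardness assumption, any device that passes verification
    # quickly enough must have produced samples with high min-entropy;
    # a seeded extractor (e.g., Toeplitz hashing) then distills them
    # into nearly uniform bits.
    return extract(accepted, seed)
```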

We should be clear that this gap, though impressive from the standpoint of demonstrating quantum supremacy with trapped ions, is not yet good enough for high-stakes cryptographic applications (more about that later). Another important caveat is that the parameters of the experiment aren’t yet good enough for my and Shih-Han’s formal security reduction to give assurances: instead, for the moment one only has “practical security,” or security against a class of simplified yet realistic attackers. I hope that future experiments will build on the JPMC/Quantinuum achievement and remedy these issues.


The story of this certified randomness protocol starts seven years ago, when I had lunch with Or Sattath at a Japanese restaurant in Tel Aviv. Or told me that I needed to pay more attention to the then-recent Quantum Lightning paper by Mark Zhandry. I already know that paper is great, I said. You don’t know the half of it, Or replied. As one byproduct of what he’s doing, for example, Mark gives a way to measure quantum money states in order to get certified random bits—bits whose genuine randomness (not pseudorandomness) is certified by computational intractability, something that wouldn’t have been possible in a classical world.

Well, why do you even need quantum money states for that? I asked. Why not just use, say, a quantum supremacy experiment based on Random Circuit Sampling, like the one Google is now planning to do (i.e., the experiment Google would do a year after this conversation)? Then, the more I thought about that question, the more I liked the idea that these “useless” Random Circuit Sampling experiments would do something potentially useful despite themselves, generating certified entropy as just an inevitable byproduct of passing our benchmarks for sampling from certain classically-hard probability distributions. Over the next couple weeks, I worked out some of the technical details of the security analysis (though not all! it was a big job, and one that only got finished years later, when I brought Shih-Han to UT Austin as a postdoc and worked with him on it for a year).

I emailed the Google team about the idea; they responded enthusiastically. I also got in touch with UT Austin’s intellectual property office to file a provisional patent, the only time I’ve done that in my career. UT and I successfully licensed the patent to Google, though the license lapsed when Google’s priorities changed. Meantime, a couple years ago, when I visited Quantinuum’s lab in Broomfield, Colorado, I learned that a JPMC-led collaboration toward an experimental demonstration of the protocol was then underway. The protocol was well-suited to Quantinuum’s devices, particularly given their ability to apply two-qubit gates with all-to-all connectivity and fidelity approaching 99.9%.

I should mention that, in the intervening years, others had also studied the use of quantum computers to generate cryptographically certified randomness; indeed it became a whole subarea of quantum computing. See especially the seminal work of Brakerski, Christiano, Mahadev, Vazirani, and Vidick, which gave a certified randomness protocol that (unlike mine) relies only on standard cryptographic assumptions and allows verification in classical polynomial time. The “only” downside is that implementing their protocol securely seems to require a full fault-tolerant quantum computer (capable of things like Shor’s algorithm), rather than current noisy devices with 50-100 qubits.


For the rest of this post, I’ll share a little FAQ, adapted from my answers to a journalist’s questions. Happy to answer additional questions in the comments.

  • To what extent is this a world-first?

Well, it’s the first experimental demonstration of a protocol to generate cryptographically certified random bits with the use of a quantum computer.

To remove any misunderstanding: if you’re just talking about the use of quantum phenomena to generate random bits, without certifying the randomness of those bits to a faraway skeptic, then that’s been easy to do for generations (just stick a Geiger counter next to some radioactive material!). The new part, the part that requires a quantum computer, is all about the certification.

Also: if you’re talking about the use of separated, entangled parties to generate certified random bits by violating the Bell inequality (see e.g. here) — that approach does give certification, but the downside is that you need to believe that the two parties really are unable to communicate with each other, something that you couldn’t certify in practice over the Internet.  A quantum-computer-based protocol like mine, by contrast, requires just a single quantum device.

  • Why is the certification element important?

In any cryptographic application where you need to distribute random bits over the Internet, the fundamental question is, why should everyone trust that these bits are truly random, rather than being backdoored by an adversary?

This isn’t so easy to solve.  If you consider any classical method for generating random bits, an adversary could substitute a cryptographic pseudorandom generator without anyone being the wiser.

The key insight behind the quantum protocol is that a quantum computer can solve certain problems efficiently, but only (it’s conjectured, and proven under plausible assumptions) by sampling an answer randomly — thereby giving you certified randomness, once you verify that the quantum computer really has solved the problem in question.  Unlike with a classical computer, there’s no way to substitute a pseudorandom generator, since randomness is just an inherent part of a quantum computer’s operation — specifically, when the entangled superposition state randomly collapses on measurement.

  • What are the applications and possible uses?

One potential application is to proof-of-stake cryptocurrencies, like Ethereum.  These cryptocurrencies are vastly more energy-efficient than “proof-of-work” cryptocurrencies (like Bitcoin), but they require lotteries to be run constantly to decide which currency holder gets to add the next block to the blockchain (and get paid for it).  Billions of dollars are riding on these lotteries being fair.

Other potential applications are to zero-knowledge protocols, lotteries and online gambling, and deciding which precincts to audit in elections. See here for a nice perspective article that JPMC put together discussing these and other potential applications.

Having said all this, a major problem right now is that verifying the results using a classical computer is extremely expensive — indeed, basically as expensive as spoofing the results would be.  This problem, and other problems related to verification (eg “why should everyone else trust the verifier?”), are the reasons why most people will probably pass on this solution in the near future, and generate random bits in simpler, non-quantum-computational ways.

We do know, from e.g. Brakerski et al.’s work, that the problem of making the verification fast is solvable with sufficient advancements in quantum computing hardware.  Even without hardware advancements, it might also be solvable with new theoretical ideas — one of my favorite research directions.

  • Is this an early win for quantum computing?

It’s not directly an advancement in quantum computing hardware, but yes, it’s a very nice demonstration of such advancements — of something that’s possible today but wouldn’t have been possible just a few short years ago.  It’s a step toward using current, non-error-corrected quantum computers for a practical application that’s not itself about quantum mechanics but that really does inherently require quantum computers.

Of course it’s personally gratifying to see something I developed get experimentally realized after seven years.  Huge congratulations to the teams at JP Morgan Chase and Quantinuum, and thanks to them for the hard work they put into this.


Unrelated Announcement: See here for a podcast about quantum computing that I recorded with, of all organizations, the FBI. As I told the gentlemen who interviewed me, I’m glad the FBI still exists, let alone its podcast!

Ryan Williams strikes again

Monday, February 24th, 2025

Update (Feb. 27): While we’re on the subject of theoretical computer science, friends-of-the-blog Adam Klivans and Raghu Meka have asked me to publicize that STOC’2025 TheoryFest, to be held June 23-27 in Prague, is eagerly seeking proposals for workshops. The deadline is March 9th.


  • Because of a recent breakthrough by Cook and Mertz on Tree Evaluation, Ryan now shows that every problem solvable in t time on a multitape Turing machine is also solvable in close to √t space
  • As a consequence, he shows that there are problems solvable in O(n) space that require nearly quadratic time on multitape Turing machines
  • If this could be applied recursively to boost the polynomial degree, then P≠PSPACE
  • On Facebook, someone summarized this result as “there exists an elephant that can’t fit through a mouse hole.” I pointed out that for decades, we only knew how to show there was a blue whale that didn’t fit through the mouse hole
  • I’ll be off the Internet for much of today (hopefully only today?) because of jury duty! Good thing you’ll have Ryan’s amazing new paper to keep y’all busy…

Update (Feb. 25): It occurs to me that the new result is yet another vindication for Ryan’s style of doing complexity theory—a style that I’ve variously described with the phrases “ironic complexity theory” and “caffeinated alien reductions,” and that’s all about using surprising upper bounds for one thing to derive unsurprising lower bounds for a different thing, sometimes with a vertigo-inducing chain of implications in between. This style has a decidedly retro feel to it: it’s been clear since the 1960s both that there are surprising algorithms (for example for matrix multiplication), and that the time and space hierarchy theorems let us prove at least some separations. The dream for decades was to go fundamentally beyond that, separating complexity classes by “cracking their codes” and understanding the space of all possible things they can express. Alas, except for low-level circuit classes, that program has largely failed, for reasons partly explained by the Natural Proofs barrier. So Ryan achieves his successes by simply doubling down on two things that have worked since the beginning: (1) finding even more surprising algorithms (or borrowing surprising algorithms from other people), and then (2) combining those algorithms with time and space hierarchy theorems in clever ways to achieve new separations.

Good news for once! A faster Quantum Fourier Transform

Thursday, January 23rd, 2025

Update: In the comments, Craig Gidney points out that Ronit’s O(n log² n) quantum circuits for the exact QFT were already published by Cleve and Watrous in 2000 (in a paper whose main point was something else, parallelization). Ronit’s O(n (log log n)²) circuits for the approximate QFT still appear to be new (Gidney says he and others knew related techniques but had never explicitly combined them). Of course, while the exact result was Platonically “known,” it wasn’t sufficiently well known that any of the four quantum algorithms experts I’d consulted had heard of it! Hopefully this very post will go some way toward fixing the situation.

Another Update: Richard Cleve writes in to say that the approximate QFT circuits were known also—albeit, in an unpublished 2-page abstract by Ahokas, Hales, and himself from the 2003 ERATO conference, as well as a followup Master’s thesis by Ahokas. Unlike with the exact case, I’m not kicking myself trying to understand how I missed these.

Ironically, I hope this post helps get this prior work a well-deserved mention when the QFT is covered in introductory quantum information classes.

Meanwhile, my hope that Ronit returns to do more theory is undiminished! When I was a kid, I too started by rediscovering things (like the integral for the length of a curve) that were centuries old, then rediscovering things (like an efficient algorithm for isotonic regression) that were decades old, then rediscovering things (like BQP⊆PP) that were about a year old … until I finally started discovering things (like the collision lower bound) that were zero years old. This is the way.


In my last post, I tried to nudge the arc of history back onto the narrow path of reasoned dialogue, walking the mile-high tightrope between shrill, unsupported accusation and naïve moral blindness. For my trouble, I was condemned about equally by leftists for my right-wing sympathies and by rightists for my left-wing ones. So today, I’ll ignore the fate of civilization and return to quantum computing theory: a subject that’s reliably brought joy to my life for a quarter-century, and still does, even as my abilities fade. It turns out there is a consolation for advancing age and senility, and it’s called “students.”

This fall, I returned from my two-year leave at OpenAI to teach my undergrad Introduction to Quantum Information Science course at UT Austin. This course doesn’t pretend to bring students all the way to the research frontier, and yet sometimes it’s done so anyway. It was in my first offering of Intro to QIS, eight years ago, that I encountered the then 17-year-old Ewin Tang, who broke the curve and then wanted an independent study project. So I gave her the problem of proving that the Kerenidis-Prakash quantum algorithm achieves an exponential speedup over any classical algorithm for the same task, not expecting anything to come of it. But after a year of work, Ewin refuted my conjecture by dequantizing the K-P algorithm—a breakthrough that led to the demolition of many other hopes for quantum machine learning. (Demolishing people’s hopes? In complexity theory, we call that a proud day’s work.)

Today I’m delighted to announce that my undergrad quantum course has led to another quantum advance. One day, after my lecture, a junior named Ronit Shah came to me with an idea for how best to distinguish three possible states of a qubit, rather than only two. For some reason I didn’t think much of it at the time, even though it would later turn out that Ronit had essentially rediscovered the concept of POVMs, the Pretty Good Measurement (PGM), and the 2002 theorem that the PGM is optimal for distinguishing sets of states subject to a transitive group action.
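For the curious, here’s a quick numerical illustration of the circle of ideas Ronit rediscovered: my own minimal example, not his writeup. For the three “trine” states, whose Bloch vectors are spaced 120° apart, with equal priors, the PGM succeeds with probability 2/3, which is in fact optimal for that ensemble.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

# Three "trine" qubit states: real unit vectors 60 degrees apart,
# i.e. Bloch vectors 120 degrees apart, with equal prior probabilities.
kets = [np.array([np.cos(np.pi * k / 3), np.sin(np.pi * k / 3)])
        for k in range(3)]
states = [np.outer(psi, psi) for psi in kets]
priors = [1 / 3] * 3

rho = sum(p * s for p, s in zip(priors, states))  # average state (here I/2)
r = fractional_matrix_power(rho, -0.5)            # rho^{-1/2}

# Pretty Good Measurement: E_i = p_i * rho^{-1/2} rho_i rho^{-1/2}
povm = [p * r @ s @ r for p, s in zip(priors, states)]
assert np.allclose(sum(povm), np.eye(2))          # elements sum to identity

p_success = sum(p * np.trace(E @ s) for p, E, s in zip(priors, povm, states))
print(p_success)   # 2/3, optimal for the trine ensemble
```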

Later, after I’d lectured about Shor’s algorithm, and one of its centerpieces, the O(n2)-gate recursive circuit for the Quantum Fourier Transform, Ronit struck a second time. He told me it should be possible to give a smaller circuit by recursively reducing the n-qubit QFT to two (n/2)-qubit QFTs, rather than to a single (n-1)-qubit QFT.

This was surely just a trivial confusion, perfectly excusable in an undergrad. Did Ronit perhaps not realize that an n-qubit unitary is actually a 2^n×2^n matrix, so he was proposing to pass directly from 2^n×2^n to 2^(n/2)×2^(n/2), rather than to 2^(n-1)×2^(n-1)?

No, he said, he understood that perfectly well. He still thought the plan would work. Then he emailed me a writeup—claiming to implement the exact n-qubit QFT in O(n log² n) gates, the first-ever improvement over O(n²), and also the approximate n-qubit QFT in O(n (log log n)²) gates, the first-ever improvement over O(n log n). He used fast integer multiplication algorithms to make the new recursions work.
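To make concrete what was being improved (my gloss, not Ronit’s construction): the textbook QFT circuit peels off one qubit at a time, applying a Hadamard followed by controlled phase rotations from all the later qubits, for Θ(n²) gates in total. Ronit’s recursion instead splits the register in half and uses fast integer multiplication to implement the phases that glue the two halves together. A sketch of the textbook baseline:

```python
import numpy as np

def textbook_qft_gates(n):
    """Gate list for the standard exact n-qubit QFT: for each qubit, a
    Hadamard followed by controlled phase rotations from every later
    qubit, then SWAPs to undo the bit reversal. Theta(n^2) gates total."""
    gates = []
    for t in range(n):
        gates.append(("H", t))
        for c in range(t + 1, n):
            gates.append(("CPHASE", c, t, 2 * np.pi / 2 ** (c - t + 1)))
    gates += [("SWAP", i, n - 1 - i) for i in range(n // 2)]
    return gates

print(len(textbook_qft_gates(8)))   # 40 = 8 H + 28 CPHASE + 4 SWAP
```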

At that point, I did something I’m still ashamed of: I sat on Ronit’s writeup for three weeks. When I at last dug it out of my inbox and read it, I could discover no reason why it was wrong, or unoriginal, or unimportant. But I didn’t trust myself, so with Ronit’s permission I sent the work to some of my oldest quantum friends: Ronald de Wolf, Cris Moore, Andrew Childs, and Wim van Dam. They agreed, after some back-and-forth, that the new circuits looked legit. A keystone of Shor’s algorithm, of quantum computing itself, and of my undergrad class had seen its first real improvement since 1994.

Last night Ronit’s paper appeared on the arXiv, where you can read it.

In case anyone asks: no, this probably has no practical implication for speeding up factoring on a quantum computer, since the QFT wasn’t the expensive part of Shor’s algorithm anyway—that’s the modular exponentiation—and also, the O(n log n) approximate QFT would already have been used in practice. But it’s conceivable that Ronit’s circuits could speed up other practical quantum computing tasks! And no, we have no idea what the ultimate limit is here, as usual in circuit complexity. Could the exact n-qubit QFT even be doable in O(n) gates?

I’d love for Ronit to continue in quantum computing theory. But in what’s surely a sign of the times, he’s just gone on leave from UT to intern at an AI hardware startup. I hope he returns and does some more theory, but if he doesn’t, I’m grateful that he shared this little gem with us on his way to more world-changing endeavors.

Podcasts!

Wednesday, December 4th, 2024

Update (Dec. 9): For those who still haven’t gotten enough, check out a 1-hour Zoom panel discussion about quantum algorithms, featuring yours truly along with my distinguished colleagues Eddie Farhi, Aram Harrow, and Andrew Childs, moderated by Barry Sanders, as part of the QTML’2024 conference held in Melbourne (although, it being Thanksgiving week, none of the four panelists were actually there in person). Part of the panel devolves into a long debate between me and Eddie about how interesting quantum algorithms are if they don’t achieve speedups over classical algorithms, and whether some quantum algorithms papers mislead people by not clearly addressing the speedup question (you get one guess as to which side I took). I resolved going in to keep my comments as civil and polite as possible—you can judge for yourself how well I succeeded! Thanks very much to Barry and the other QTML organizers for making this happen.


Do you like watching me spout about AI alignment, watermarking, my time at OpenAI, the P versus NP problem, quantum computing, consciousness, Penrose’s views on physics and uncomputability, university culture, wokeness, free speech, my academic trajectory, and much more, despite my slightly spastic demeanor and my many verbal infelicities? Then holy crap are you in luck today! Here’s 2.5 hours of me talking to former professional poker players (and now wonderful Austin-based friends) Liv Boeree and her husband Igor Kurganov about all of those topics. (Or 1.25 hours if you watch at 2x speed, as I strongly recommend.)

But that’s not all! Here I am talking to Harvard’s Hrvoje Kukina, in a much shorter (45-minute) podcast focused on quantum computing, cosmological bounds on information processing, and the idea of the universe as a computer.

Last but not least, here I am in an hour-long podcast (this one audio-only) with longtime friend Kelly Weinersmith and her co-host Daniel Whiteson, talking about quantum computing.

Enjoy!

Steven Rudich (1961-2024)

Saturday, November 2nd, 2024

I was sure my next post would be about the election—the sword of Damocles hanging over the United States and civilization as a whole. Instead, I have sad news, but also news that brings memories of warmth, humor, and complexity-theoretic insight.

Steven Rudich—professor at Carnegie Mellon, central figure of theoretical computer science since the 1990s, and a kindred spirit and friend—has died at the too-early age of 63. While I interacted with him far less often than I wish I had, it would be no exaggeration to call him one of the biggest influences on my life and career.

I first became aware of Steve at age 17, when I read the Natural Proofs paper that he coauthored with Razborov. I was sitting in the basement computer room at Telluride House at Cornell, and still recall the feeling of awe that came over me with every page. This one paper changed my scientific worldview. It expanded my conception of what the P versus NP problem was about and what theoretical computer science could even do—showing how it could turn in on itself, explain its own difficulties in proving problems hard in terms of the truth of those same problems’ hardness, and thereby transmute defeat into victory. I may have been bowled over by the paper’s rhetoric as much as by its results: it was like, you’re allowed to write that way?

I was nearly as impressed by Steve’s PhD thesis, which was full of proofs that gave off the appearance of being handwavy, “just phoning it in,” but were in reality completely rigorous. The result that excited me the most said that, if a certain strange combinatorial conjecture was true, then there was essentially no hope of proving that P≠NP∩coNP relative to a random oracle with probability 1. I played around with the combinatorial conjecture but couldn’t make headway on it; a year or two later, I was excited when I met Clifford Smyth and he told me that he, Kahn, and Saks had just proved it. Rudich’s conjecture directly inspired me to work on what later became the Aaronson-Ambainis Conjecture, which is still unproved, but which, if true, similarly implies that there’s no hope of proving P≠BQP relative to a random oracle with probability 1.

When I applied to CS PhD programs in 1999, I wrote about how I wanted to sing the ideas of theoretical computer science from the rooftops—just like Steven Rudich had done, with the celebrated Andrew’s Leap summer program that he’d started at Carnegie Mellon. (How many other models were there? Indeed, how many other models are there today?) I was then honored beyond words when Steve called me on the phone, before anyone else had, and made an hourlong pitch for me to become his student. “You’re what I call a ‘prefab’,” he said. “You already have the mindset that I try to instill in students by the end of their PhDs.” I didn’t have much self-confidence then, which is why I can still quote Steve’s words a quarter-century later. In the ensuing years, when (as often) I doubted myself, I’d think back to that phone call with Steve, and my burning desire to be what he apparently thought I was.

Alas, when I arrived in Pittsburgh for CMU’s visit weekend, I saw Steve holding court in front of a small crowd of students, dispensing wisdom and doing magic tricks. I was miffed that he never noticed or acknowledged me: had he already changed his mind about me, lost interest? It was only later that I learned that Steve was going blind at the time, and literally hadn’t seen me.

In any case, while I came within a hair of accepting CMU’s offer, in the end I chose Berkeley. I wasn’t yet 100% sure that I wanted to do quantum computing (as opposed to AI or classical complexity theory), but the lure of the Bay Area, of the storied CS theory group where Steve himself had studied, and of Steve’s academic sibling Umesh Vazirani proved too great.

Full of regrets about the road not taken, I was glad that, in the summer between undergrad and PhD, I got to attend the PCMI summer school on computational complexity at the Institute for Advanced Study in Princeton, where Steve gave a spectacular series of lectures. By that point, Steve was almost fully blind. He put transparencies up, sometimes upside-down until the audience corrected him, and then lectured about them entirely from memory. He said that doing CS theory sightless was a new, more conceptual experience for him.

Even in his new condition, Steve’s showmanship hadn’t left him; he held the audience spellbound as few academics do. And in a special lecture on “how to give talks,” he spilled his secrets.

“What the speaker imagines the audience is thinking,” read one slide. And then, inside the thought bubbles: “MORE! HARDER! FASTER! … Ahhhhh yes, QED! Truth is beauty.”

“What the audience is actually thinking,” read the next slide, below which: “When is this over? I need to pee. Can I get a date with the person next to me?” (And this was before smartphones.) And yet, Steve explained, rather than resenting the many demands on the audience’s attention, a good speaker would break through, meet people where they were, just as he was doing right then.

I listened, took mental notes, resolved to practice this stuff. I reflected that, even if my shtick only ever became 10% as funny or fluid as Steve’s, I’d still come out way ahead.

It’s possible that the last time I saw Steve was in 2007, when I visited Carnegie Mellon to give a talk about algebrization, a new barrier to solving P vs. NP (and other central problems of complexity theory) that Avi Wigderson and I had recently discovered. When I started writing the algebrization paper, I very consciously modeled it after the Natural Proofs paper; the one wouldn’t have been thinkable without the other. So you can imagine how much it meant to me when Steve liked algebrization—when, even though he couldn’t see my slides, he got enough from the spoken part of the talk to burst with “conceptual” questions and comments.

Steve not only peeled back the mystery of P vs NP insofar as anyone has. He did it with exuberance and showmanship and humor and joy and kindness. I won’t forget him.


I’ve written here only about the tiniest sliver of Steve’s life: namely, the sliver where it intersected mine. I wish that sliver were a hundred times bigger, so that there’d be a hundred times more to write. But CS theory, and CS more broadly, are communities. When I posted about Steve’s passing on Facebook, I got inundated by comments from friends of mine who (as it turned out) had taken Steve’s courses, or TA’d for him, or attended Andrew’s Leap, or otherwise knew him, and on whom he’d left a permanent impression—and I hadn’t even known any of this.

So I’ll end this post with a request: please share your Rudich stories in the comments! I’d especially love specific recollections of his jokes, advice, insights, or witticisms. We now live in a world where, even in the teeth of the likelihood that P≠NP, powerful algorithms running in massive datacenters nevertheless try to replicate the magic of human intelligence, by compressing and predicting all the text on the public Internet. I don’t know where this is going, but I can’t imagine that it would hurt for the emerging global hive-mind to know more about Steven Rudich.


My podcast with Brian Greene

Friday, October 18th, 2024

Yes, he’s the guy from The Elegant Universe book and TV series. Our conversation is 1 hour 40 minutes; as usual I strongly recommend listening at 2x speed. The topics, chosen by Brian, include quantum computing (algorithms, hardware, error-correction … the works), my childhood, the interpretation of quantum mechanics, the current state of AI, the future of sentient life in the cosmos, and mathematical Platonism. I’m happy with how it turned out; in particular, my verbal infelicities seem to have been at a minimum this time. I recommend skipping the YouTube comments if you want to stay sane, but do share your questions and reactions in the comments here. Thanks to Brian and his team for doing this. Enjoy!


Update (Oct. 28): If that’s not enough Scott Aaronson video content for you, please enjoy another quantum computing podcast interview, this one with Ayush Prakash and shorter (clocking in at 45 minutes). Ayush pitched this podcast to me as an opportunity to explain quantum computing to Gen Z. Thus, I considered peppering my explanations of interference and entanglement with such phrases as ‘fo-shizzle’ and ‘da bomb,’ but I desisted after reflecting that whatever youth slang I knew was probably already outdated whenever I’d picked it up, back in the twentieth century.

My Nutty, Extremist Beliefs

Sunday, October 13th, 2024

In nearly twenty years of blogging, I’ve unfortunately felt more and more isolated and embattled. It now feels like anything I post earns severe blowback, from ridicule on Twitter, to pseudonymous comment trolls, to scary and aggressive email bullying campaigns. Reflecting on this, though, I came to see that such strong reactions are an understandable response to my extremist stances. When your beliefs smash the Overton Window into tiny shards like mine do, what do you expect? Just consider some of the intransigent, hard-line stances I’ve taken here on Shtetl-Optimized:

(1) US politics. I’m terrified of right-wing authoritarian populists and their threat to the Enlightenment. For that and many other reasons, I vote straight-ticket Democrat, donate to Democratic campaigns, and encourage everyone else to do likewise. But I also wish my fellow Democrats would rein in the woke stuff, stand up more courageously to the world’s autocrats, and study more economics, so they understand why rent control, price caps, and other harebrained interventions will always fail.

(2) Quantum computing. I’m excited about the prospects of QC, so much so that I’ve devoted most of my career to that field. But I also think many of QC’s commercial applications have been wildly oversold to investors, funding agencies, and the press, and I haven’t been afraid to say so.

(3) AI. I think the spectacular progress of AI over the past few years raises scary questions about where we’re headed as a species.  I’m neither in the camp that says “we’ll almost certainly die unless we shut down AI research,” nor the camp that says “the good guys need to race full-speed ahead to get AGI before the bad guys get it.” I’d like us to proceed in AI research with caution and guardrails and the best interests of humanity in mind, rather than the commercial interests of particular companies.

(4) Climate change. I think anthropogenic climate change is 100% real and one of the most urgent problems facing humanity, and those who deny this are being dishonest or willfully obtuse.  But because I think that, I also think it’s way past time to explore technological solutions like modular nuclear reactors, carbon capture, and geoengineering. I think we can’t virtue-signal or kumbaya our way out of the climate crisis.

(5) Feminism and dating. I think the emancipation of women is one of the modern world’s greatest triumphs.  I reserve a special hatred for misogynistic, bullying men. But I also believe, from experience, that many sensitive, nerdy guys severely overcorrected on feminist messaging, to the point that they became terrified of the tiniest bit of assertiveness or initiative in heterosexual courtship. I think this terror has led millions of them to become bitter “incels.”  I want to figure out ways to disrupt the incel pipeline, by teaching shy nerdy guys to have healthy, confident dating lives, without thereby giving asshole guys license to be even bigger assholes.

(6) Israel/Palestine. I’m passionately in favor of Israel’s continued existence as a Jewish state, without which my wife’s family and many of my friends’ and colleagues’ families would have been exterminated. However, I also despise Bibi and the messianic settler movement to which he’s beholden. I pray for a two-state solution where Israelis and Palestinians will coexist in peace, free from their respective extremists.

(7) Platonism. I think that certain mathematical questions, like the Axiom of Choice or the Continuum Hypothesis, might not have any Platonic truth-value, there being no fact of the matter beyond what can be proven from various systems of axioms. But I also think, with Gödel, that statements of elementary arithmetic, like the Goldbach Conjecture or P≠NP, are just Platonically true or false independent of any axiom system.

(8) Science and religion. As a secular rationalist, I’m acutely aware that no ancient religion can be “true,” in the sense believed by either the ancients or modern fundamentalists. Still, the older I’ve gotten, the more I’ve come to see religions as vast storehouses containing (among much else) millennia of accumulated wisdom about how humans can or should live. As in the parable of Chesterton’s Fence, I think this wisdom is often far from obvious and nearly impossible to derive from first principles. So I think that, at the least, secularists will need to figure out their own long-term methods to encourage many of the same things that religion once did—such as stable families, childbirth, self-sacrifice and courage in defending one’s community, and credible game-theoretic commitments to keeping promises and various other behaviors.

(9) Foreign policy and immigration. I’d like the US to stand more courageously against evil regimes, such as those of China, Russia, and Iran. At the same time, I’d like the US to open our gates much wider to students, scientists, and dissidents from those nations who seek freedom in the West. I think our refusal to do enough of this is a world-historic self-own.

(10) Academia vs. industry. I think both have advantages and disadvantages for people in CS and other technical fields. At their best, they complement each other. When advising a student which path to pursue, I try to find out all I can about the student’s goals and personality.

(11) Population ethics. I’m worried about how the earth will support 9 or 10 billion people with first-world living standards, which is part of why I’d like career opportunities for women, girls’ education, contraception, and (early-term) abortion to become widely available everywhere on earth. All the same, I’m not an antinatalist. I think raising one or more children in a loving home should generally be celebrated as a positive contribution to the world.

(12) The mind-body problem. I think it’s possible that there’s something profound we don’t yet understand about consciousness and its relation to the physical world. At the same time, I think the burden is clearly on the mind-body dualists to articulate what that something might be, and how to reconcile it with the known laws of physics. I admire the audacity of Roger Penrose in tackling this question head-on, but I don’t think his solution works.

(13) COVID response. I think the countries that did best tended to be those that had some coherent strategy—whether that was “let the virus rip, keep schools open, quarantine only the old and sick,” or “aggressively quarantine everyone and wait for a vaccine.” I think countries torn between these strategies, like the US, tended to get the worst of all worlds. On the other hand, I think the US did one huge thing right, which was greatly to accelerate (by historical standards) the testing and distribution of the mRNA vaccines. For the sake of the millions who died and the billions who had their lives interrupted, I only wish we’d rushed the vaccines much more. We ought now to be spending trillions on a vaccine pipeline that’s ready to roll within weeks as soon as the next pandemic hits.

(14) P versus NP. From decades of intuition in math and theoretical computer science, I think we can be fairly confident of P≠NP—but I’d “only” give it, say, 97% odds. Here as elsewhere, we should be open to the possibility of world-changing surprises.

(15) Interpretation of QM. I get really annoyed by bad arguments against the Everett interpretation, which (contrary to a popular misconception) I understand to result from scientifically conservative choices. But I’m also not an Everettian diehard. I think that, if you push questions like “but is anyone home in the other branches?” hard enough, you arrive at questions about personal identity and consciousness that were profoundly confusing even before quantum mechanics. I hope we someday learn something new that clarifies the situation.

Anyway, with extremist, uncompromising views like those, is it any surprise that I get pilloried and denounced so often?

All the same, I sometimes ask myself: what was the point of becoming a professor, seeking and earning the hallowed protections of tenure, if I can’t then freely express radical, unbalanced, batshit-crazy convictions like the ones in this post?