Archive for the ‘Quantum’ Category

Podcasts!

Wednesday, December 4th, 2024

Update (Dec. 9): For those who still haven’t gotten enough, check out a 1-hour Zoom panel discussion about quantum algorithms, featuring yours truly along with my distinguished colleagues Eddie Farhi, Aram Harrow, and Andrew Childs, moderated by Barry Sanders, as part of the QTML’2024 conference held in Melbourne (although, it being Thanksgiving week, none of the four panelists were actually there in person). Part of the panel devolves into a long debate between me and Eddie about how interesting quantum algorithms are if they don’t achieve speedups over classical algorithms, and whether some quantum algorithms papers mislead people by not clearly addressing the speedup question (you get one guess as to which side I took). I resolved going in to keep my comments as civil and polite as possible—you can judge for yourself how well I succeeded! Thanks very much to Barry and the other QTML organizers for making this happen.


Do you like watching me spout about AI alignment, watermarking, my time at OpenAI, the P versus NP problem, quantum computing, consciousness, Penrose’s views on physics and uncomputability, university culture, wokeness, free speech, my academic trajectory, and much more, despite my slightly spastic demeanor and my many verbal infelicities? Then holy crap are you in luck today! Here’s 2.5 hours of me talking to former professional poker players (and now wonderful Austin-based friends) Liv Boeree and her husband Igor Kurganov about all of those topics. (Or 1.25 hours if you watch at 2x speed, as I strongly recommend.)

But that’s not all! Here I am talking to Harvard’s Hrvoje Kukina, in a much shorter (45-minute) podcast focused on quantum computing, cosmological bounds on information processing, and the idea of the universe as a computer:

Last but not least, here I am in an hour-long podcast (this one audio-only) with longtime friend Kelly Weinersmith and her co-host Daniel Whiteson, talking about quantum computing.

Enjoy!

Thanksgiving

Thursday, November 28th, 2024

I’m thankful to the thousands of readers of this blog.  Well, not the few who submit troll comments from multiple pseudonymous handles, but the 99.9% who don’t. I’m thankful that they’ve stayed here even when events (as they do more and more often) send me into a spiral of doomscrolling and just subsisting hour-to-hour—when I’m left literally without words for weeks.

I’m thankful for Thanksgiving itself.  As I often try to explain to non-Americans (and to my Israeli-born wife), it’s not primarily about the turkey but rather about the sides: the stuffing, the mashed sweet potatoes with melted marshmallows, the cranberry jello mold.  The pumpkin pie is good too.

I’m thankful that we seem to be on the threshold of getting to see the birth of fault-tolerant quantum computing, nearly thirty years after it was first theorized.

I’m thankful that there’s now an explicit construction of pseudorandom unitaries — and that, with further improvement, this would lead to a Razborov-Rudich natural proofs barrier for the quantum circuit complexity of unitaries, explaining for the first time why we don’t have superpolynomial lower bounds for that quantity.

I’m thankful that there’s been recent progress on QMA versus QCMA (that is, quantum versus classical proofs), with a full classical oracle separation now possibly in sight.

I’m thankful that, of the problems I cared about 25 years ago — the maximum gap between classical and quantum query complexities of total Boolean functions, relativized BQP versus the polynomial hierarchy, the collision problem, making quantum computations classically verifiable — there’s now been progress if not a full solution for almost all of them. And yet I’m thankful as well that lots of great problems remain open.

I’m thankful that the presidential election wasn’t all that close (by contemporary US standards, it was a “landslide,” 50%-48.4%).  Had it been a nail-biter, not only would I fear violence and the total breakdown of our constitutional order, I’d kick myself that I hadn’t done more to change the outcome.  As it is, there’s no denying that a plurality of Americans actually chose this, and now they’re going to get it good and hard.

I’m thankful that, while I absolutely do see Trump’s return as a disaster for the country and for civilization, it’s not a 100% unmitigated disaster.  The lying chaos monster will occasionally rage for things I support rather than things I oppose.  And if he actually plunges the country into another Great Depression through tariffs, mass deportations, and the like, hopefully that will make it easier to repudiate his legacy in 2028.

I’m thankful that, whatever Jews around the world have had to endure over the past year — both the physical attacks and the moral gaslighting that it’s all our fault — we’ve already endured much worse on both fronts, not once but countless times over 3000 years, and this is excellent Bayesian evidence that we’ll survive the latest onslaught as well.

I’m thankful that my family remains together, and healthy. I’m thankful to have an 11-year-old who’s a striking wavy-haired blonde who dances and does gymnastics (how did that happen?) and wants to be an astrophysicist, as well as a 7-year-old who now often beats me in chess and loves to solve systems of two linear equations in two unknowns.

I’m thankful that, compared to what I imagined my life would be as an 11-year-old, my life is probably in the 50th percentile or higher.  I haven’t saved the world, but I haven’t flamed out either.  Even if I do nothing else from this point, I have a stack of writings and results that I’m proud of. And I fully intend to do something else from this point.

I’m thankful that the still-most-powerful nation on earth, the one where I live, is … well, more aligned with good than any other global superpower in the miserable pageant of human history has been.  I’m thankful to live in the first superpower in history that has some error-correction machinery built in, some ability to repudiate its past sins (and hopefully its present sins, in the future).  I’m thankful to live in the first superpower that has toleration of Jews and other religious minorities built in as a basic principle, with the possible exception of the Persian Empire under Cyrus.

I’m thankful that all eight of my great-grandparents came to the US in 1905, back when Jewish mass immigration was still allowed.  Of course there’s a selection effect here: if they hadn’t made it, I wouldn’t be here to ponder it.  Still, it seems appropriate to express gratitude for the fact of existing, whatever metaphysical difficulties might inhere in that act.

I’m thankful that there’s now a ceasefire between Israel and Lebanon that Israel’s government saw fit to agree to.  While I fear that this will go the way of all previous ceasefires — Hezbollah “obeys” until it feels ready to strike again, so then Israel invades Lebanon again, then more civilians die, then there’s another ceasefire, rinse and repeat, etc. — the possibility always remains that this time will be the charm, for all people on both sides who want peace.

I’m thankful that our laws of physics are so constructed that G, c, and ℏ, three constants that are relatively easy to measure, can be combined to tell us the fundamental units of length and time, even though those units — the Planck time, 10^-43 seconds, and the Planck length, 10^-33 centimeters — are themselves below the reach of any foreseeable technology, and are to atoms roughly as atoms are to the solar system.
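For the dimensionally curious, here’s the back-of-the-envelope in a few lines of Python, using the standard formulas t_P = √(ℏG/c^5) and ℓ_P = √(ℏG/c^3) with rounded values of the constants:

import math
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34    # SI units: m^3 kg^-1 s^-2, m/s, J*s
print(math.sqrt(hbar * G / c**5))             # Planck time:   ~5.4e-44 s,  i.e. ~10^-43 s
print(math.sqrt(hbar * G / c**3) * 100)       # Planck length: ~1.6e-33 cm, i.e. ~10^-33 cm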

I’m thankful that, almost thirty years after I could have and should have, I’ve now finally learned the proof of the irrationality of π.

I’m thankful that, if I could go back in time to my 14-year-old self, I could tell him firstly, that female heterosexual attraction to men is a real phenomenon in the world, and secondly, that it would sometimes fixate on him (the future him, that is) in particular.

I’m thankful for red grapefruit, golden mangos, seedless watermelons, young coconuts (meat and water), mangosteen, figs, dates, and even prunes.  Basically, fruit is awesome, the more so after whatever selective breeding and genetic engineering humans have done to it.

I’m thankful for Futurama, and for the ability to stream every episode of it in order, as Dana, the kids, and I have been doing together all fall.  I’m thankful that both of my kids love it as much as I do—in which case, how far from my values and worldview could they possibly be? Even if civilization is destroyed, it will have created 100 episodes of something this far out on the Pareto frontier of lowbrow humor, serious intellectual content, and emotional depth for a future civilization to discover.  In short: “good news, everyone!”

My podcast with Brian Greene

Friday, October 18th, 2024

Yes, he’s the guy from The Elegant Universe book and TV series. Our conversation is 1 hour 40 minutes; as usual I strongly recommend listening at 2x speed. The topics, chosen by Brian, include quantum computing (algorithms, hardware, error-correction … the works), my childhood, the interpretation of quantum mechanics, the current state of AI, the future of sentient life in the cosmos, and mathematical Platonism. I’m happy with how it turned out; in particular, my verbal infelicities seem to have been at a minimum this time. I recommend skipping the YouTube comments if you want to stay sane, but do share your questions and reactions in the comments here. Thanks to Brian and his team for doing this. Enjoy!


Update (Oct. 28): If that’s not enough Scott Aaronson video content for you, please enjoy another quantum computing podcast interview, this one with Ayush Prakash and shorter (clocking in at 45 minutes). Ayush pitched this podcast to me as an opportunity to explain quantum computing to Gen Z. Thus, I considered peppering my explanations of interference and entanglement with such phrases as ‘fo-shizzle’ and ‘da bomb,’ but I desisted after reflecting that whatever youth slang I knew was probably already outdated whenever I’d picked it up, back in the twentieth century.

My Nutty, Extremist Beliefs

Sunday, October 13th, 2024

In nearly twenty years of blogging, I’ve unfortunately felt more and more isolated and embattled. It now feels like anything I post earns severe blowback, from ridicule on Twitter, to pseudonymous comment trolls, to scary and aggressive email bullying campaigns. Reflecting on this, though, I came to see that such strong reactions are an understandable response to my extremist stances. When your beliefs smash the Overton Window into tiny shards like mine do, what do you expect? Just consider some of the intransigent, hard-line stances I’ve taken here on Shtetl-Optimized:

(1) US politics. I’m terrified of right-wing authoritarian populists and their threat to the Enlightenment. For that and many other reasons, I vote straight-ticket Democrat, donate to Democratic campaigns, and encourage everyone else to do likewise. But I also wish my fellow Democrats would rein in the woke stuff, stand up more courageously to the world’s autocrats, and study more economics, so they understand why rent control, price caps, and other harebrained interventions will always fail.

(2) Quantum computing. I’m excited about the prospects of QC, so much so that I’ve devoted most of my career to that field. But I also think many of QC’s commercial applications have been wildly oversold to investors, funding agencies, and the press, and I haven’t been afraid to say so.

(3) AI. I think the spectacular progress of AI over the past few years raises scary questions about where we’re headed as a species.  I’m neither in the camp that says “we’ll almost certainly die unless we shut down AI research,” nor the camp that says “the good guys need to race full-speed ahead to get AGI before the bad guys get it.” I’d like us to proceed in AI research with caution and guardrails and the best interests of humanity in mind, rather than the commercial interests of particular companies.

(4) Climate change. I think anthropogenic climate change is 100% real and one of the most urgent problems facing humanity, and those who deny this are being dishonest or willfully obtuse.  But because I think that, I also think it’s way past time to explore technological solutions like modular nuclear reactors, carbon capture, and geoengineering. I think we can’t virtue-signal or kumbaya our way out of the climate crisis.

(5) Feminism and dating. I think the emancipation of women is one of the modern world’s greatest triumphs.  I reserve a special hatred for misogynistic, bullying men. But I also believe, from experience, that many sensitive, nerdy guys severely overcorrected on feminist messaging, to the point that they became terrified of the tiniest bit of assertiveness or initiative in heterosexual courtship. I think this terror has led millions of them to become bitter “incels.”  I want to figure out ways to disrupt the incel pipeline, by teaching shy nerdy guys to have healthy, confident dating lives, without thereby giving asshole guys license to be even bigger assholes.

(6) Israel/Palestine. I’m passionately in favor of Israel’s continued existence as a Jewish state, without which my wife’s family and many of my friends’ and colleagues’ families would have been exterminated. However, I also despise Bibi and the messianic settler movement to which he’s beholden. I pray for a two-state solution where Israelis and Palestinians will coexist in peace, free from their respective extremists.

(7) Platonism. I think that certain mathematical questions, like the Axiom of Choice or the Continuum Hypothesis, might not have any Platonic truth-value, there being no fact of the matter beyond what can be proven from various systems of axioms. But I also think, with Gödel, that statements of elementary arithmetic, like the Goldbach Conjecture or P≠NP, are just Platonically true or false independent of any axiom system.

(8) Science and religion. As a secular rationalist, I’m acutely aware that no ancient religion can be “true,” in the sense believed by either the ancients or modern fundamentalists. Still, the older I’ve gotten, the more I’ve come to see religions as vast storehouses containing (among much else) millennia of accumulated wisdom about how humans can or should live. As in the parable of Chesterton’s Fence, I think this wisdom is often far from obvious and nearly impossible to derive from first principles. So I think that, at the least, secularists will need to figure out their own long-term methods to encourage many of the same things that religion once did—such as stable families, childbirth, self-sacrifice and courage in defending one’s community, and credible game-theoretic commitments to keeping promises and various other behaviors.

(9) Foreign policy and immigration. I’d like the US to stand more courageously against evil regimes, such as those of China, Russia, and Iran. At the same time, I’d like the US to open our gates much wider to students, scientists, and dissidents from those nations who seek freedom in the West. I think our refusal to do enough of this is a world-historic self-own.

(10) Academia vs. industry. I think both have advantages and disadvantages for people in CS and other technical fields. At their best, they complement each other. When advising a student which path to pursue, I try to find out all I can about the student’s goals and personality.

(11) Population ethics. I’m worried about how the earth will support 9 or 10 billion people with first-world living standards, which is part of why I’d like career opportunities for women, girls’ education, contraception, and (early-term) abortion to become widely available everywhere on earth. All the same, I’m not an antinatalist. I think raising one or more children in a loving home should generally be celebrated as a positive contribution to the world.

(12) The mind-body problem. I think it’s possible that there’s something profound we don’t yet understand about consciousness and its relation to the physical world. At the same time, I think the burden is clearly on the mind-body dualists to articulate what that something might be, and how to reconcile it with the known laws of physics. I admire the audacity of Roger Penrose in tackling this question head-on, but I don’t think his solution works.

(13) COVID response. I think the countries that did best tended to be those that had some coherent strategy—whether that was “let the virus rip, keep schools open, quarantine only the old and sick,” or “aggressively quarantine everyone and wait for a vaccine.” I think countries torn between these strategies, like the US, tended to get the worst of all worlds. On the other hand, I think the US did one huge thing right, which was greatly to accelerate (by historical standards) the testing and distribution of the mRNA vaccines. For the sake of the millions who died and the billions who had their lives interrupted, I only wish we’d rushed the vaccines much more. We ought now to be spending trillions on a vaccine pipeline that’s ready to roll within weeks as soon as the next pandemic hits.

(14) P versus NP. From decades of intuition in math and theoretical computer science, I think we can be fairly confident of P≠NP—but I’d “only” give it, say, 97% odds. Here as elsewhere, we should be open to the possibility of world-changing surprises.

(15) Interpretation of QM. I get really annoyed by bad arguments against the Everett interpretation, which (contrary to a popular misconception) I understand to result from scientifically conservative choices. But I’m also not an Everettian diehard. I think that, if you push questions like “but is anyone home in the other branches?” hard enough, you arrive at questions about personal identity and consciousness that were profoundly confusing even before quantum mechanics. I hope we someday learn something new that clarifies the situation.

Anyway, with extremist, uncompromising views like those, is it any surprise that I get pilloried and denounced so often?

All the same, I sometimes ask myself: what was the point of becoming a professor, seeking and earning the hallowed protections of tenure, if I can’t then freely express radical, unbalanced, batshit-crazy convictions like the ones in this post?

Quantum advantage for NP approximation? For REAL this time?

Saturday, October 5th, 2024

The other night I spoke at a quantum computing event and was asked—for the hundredth time? the thousandth?—whether I agreed that the quantum algorithm called QAOA was poised to revolutionize industries by finding better solutions to NP-hard optimization problems. I replied that while serious, worthwhile research on that algorithm continues, alas, so far I have yet to see a single piece of evidence that QAOA outperforms the best classical heuristics on any problem that anyone cares about. (Note added: in the comments, Ashley Montanaro shares a paper with empirical evidence that QAOA provides a modest polynomial speedup over known classical heuristics for random k-SAT. This is the best/only such evidence I’ve seen, and it still stands as far as I know!)

I added that I was sad to see the arXiv flooded with thousands of relentlessly upbeat QAOA papers that dodge the speedup question by simply never raising it at all. I said that, in my experience, these papers reliably led outsiders to conclude that surely there must be lots of excellent known speedups from QAOA—since otherwise, why would so many people be writing papers about it?

Anyway, the person right after me talked about a “quantum dating app” (!) they were developing.

I figured that, as usual, my words had thudded to the ground with zero impact, truth never having had a chance against what sounds good and what everyone wants to hear.

But then, the morning afterward, someone from the audience emailed me that, incredulous at my words, he went through a bunch of QAOA papers, looking for the evidence of its beating classical algorithms that he knew must be in them, and was shocked to find the evidence missing, just as I had claimed! So he changed his view.

That one message filled me with renewed hope about my ability to inject icy blasts of reality into the quantum algorithms discourse.


So, with that prologue, surely I’m about to give you yet another icy blast of quantum algorithms not helping for optimization problems?

Aha! Inspired by Scott Alexander, this is the part of the post where, having led you one way, I suddenly jerk you the other way. My highest loyalty, you see, is not to any narrative, but only to THE TRUTH.

And the truth is this: this summer, my old friend Stephen Jordan and seven coauthors, from Google and elsewhere, put out a striking preprint about a brand-new quantum algorithm for optimization problems that they call Decoded Quantum Interferometry (DQI). This week Stephen was gracious enough to explain the new algorithm in detail when he visited our group at UT Austin.

DQI can be used for a variety of NP-hard optimization problems, at least in the regime of approximation where they aren’t NP-hard. But a canonical example is what the authors call “Optimal Polynomial Intersection” or OPI, which involves finding a low-degree polynomial that intersects as many subsets as possible from a given list. Here’s the formal definition:

OPI. Given integers n&lt;p with p prime, we’re given as input subsets S_1,…,S_{p-1} of the finite field F_p. The goal is to find a degree-(n-1) polynomial Q that maximizes the number of y∈{1,…,p-1} such that Q(y)∈S_y, i.e. that intersects as many of the subsets as possible.

For this problem, taking as an example the case p-1=10n and |S_y|=⌊p/2⌋ for all y, Stephen et al. prove that DQI satisfies a 1/2 + (√19)/20 ≈ 0.7179 fraction of the p-1 constraints in polynomial time. By contrast, they say the best classical polynomial-time algorithm they were able to find satisfies a 0.55+o(1) fraction of the constraints.
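To make the objective concrete, here’s a tiny brute-force Python sketch of what’s being optimized — just the scoring function and an exhaustive search over a toy instance, which of course has nothing to do with how DQI itself finds good polynomials:

import numpy as np
from itertools import product

p, n = 11, 2                                   # toy instance: F_11, polynomials of degree n-1 = 1
rng = np.random.default_rng(0)
S = {y: set(rng.choice(p, size=p // 2, replace=False).tolist()) for y in range(1, p)}

def score(coeffs):                             # how many y in {1,...,p-1} have Q(y) in S_y?
    return sum(1 for y in range(1, p)
               if sum(c * pow(y, i, p) for i, c in enumerate(coeffs)) % p in S[y])

best = max(product(range(p), repeat=n), key=score)
print(best, score(best), "of", p - 1, "constraints satisfied")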

To my knowledge, this is the first serious claim of a better quantum approximation ratio for an NP-hard problem since 2014, when Farhi et al. made such a claim for QAOA solving something called MAX-E3LIN2—after which my blogging about it led to a group of ten computer scientists finding a classical algorithm that achieved an even better approximation.

So, how did Stephen et al. pull this off? How did they get around the fact that, again and again, exponential quantum speedups only seem to exist for algebraically structured problems like factoring or discrete log, and not for problems like 3SAT or Max-Cut that lack algebraic structure?

Here’s the key: they didn’t. Instead, they leaned into that fact, by targeting an optimization problem that (despite being NP-hard) has loads of algebraic structure! The key insight, in their new DQI algorithm, is that the Quantum Fourier Transform can be used to reduce other NP-hard problems to problems of optimal decoding of a suitable error-correcting code. (This insight built on the breakthrough two years ago by Yamakawa and Zhandry, giving a quantum algorithm that gets an exponential speedup for an NP search problem relative to a random oracle.)

Now, sometimes the reduction to a coding theory problem is “out of the frying pan and into the fire,” as the new optimization problem is no easier than the original one. In the special case of searching for a low-degree polynomial, however, the optimal decoding problem ends up being for the Reed-Solomon code, where we’ve known efficient classical algorithms for generations, famously including the Berlekamp-Welch algorithm.

One open problem that I find extremely interesting is whether OPI, in the regime where DQI works, is in coNP or coAM, or has some other identifiable structural feature that presumably precludes its being NP-hard.

Regardless, though, as of this week, the hope of using quantum computers to get better approximation ratios for NP-hard optimization problems is back in business! Will that remain so? Or will my blogging about such an attempt yet again lead to its dequantization? Either way I’m happy.

Quantum Computing: Between Hope and Hype

Sunday, September 22nd, 2024

So, back in June the White House announced that UCLA would host a binational US/India workshop, for national security officials from both countries to learn about the current status of quantum computing and post-quantum cryptography. It fell to my friend and colleague Rafail Ostrovsky to organize the workshop, which ended up being held last week. When Rafi invited me to give the opening talk, I knew he’d keep emailing until I said yes. So, on the 3-hour flight to LAX, I wrote the following talk in a spiral notebook, which I then delivered the next morning with no slides. I called it “Quantum Computing: Between Hope and Hype.” I thought Shtetl-Optimized readers might be interested too, since it contains my reflections on a quarter-century in quantum computing, and prognostications on what I expect soon. Enjoy, and let me know what you think!


Quantum Computing: Between Hope and Hype
by Scott Aaronson

September 16, 2024

When Rafi invited me to open this event, it sounded like he wanted big-picture pontification more than technical results, which is just as well, since I’m getting old for the latter. Also, I’m just now getting back into quantum computing after a two-year leave at OpenAI to think about the theoretical foundations of AI safety. Luckily for me, that was a relaxing experience, since not much happened in AI these past two years. [Pause for laughs] So then, did anything happen in quantum computing while I was away?

This, of course, has been an extraordinary time for both quantum computing and AI, and not only because the two fields were mentioned for the first time in an American presidential debate (along with, I think, the problem of immigrants eating pets). But it’s extraordinary for quantum computing and for AI in very different ways. In AI, practice is wildly ahead of theory, and there’s a race for scientific understanding to catch up to where we’ve gotten via the pure scaling of neural nets and the compute and data used to train them. In quantum computing, it’s just the opposite: there’s right now a race for practice to catch up to where theory has been since the mid-1990s.

I started in quantum computing around 1998, which is not quite as long as some people here, but which does cover most of the time since Shor’s algorithm and the rest were discovered. So I can say: this past year or two is the first time I’ve felt like the race to build a scalable fault-tolerant quantum computer is actually underway. Like people are no longer merely giving talks about the race or warming up for the race, but running the race.

Within just the last few weeks, we saw the group at Google announce that they’d used the Kitaev surface code, with distance 7, to encode one logical qubit using 100 or so physical qubits, in a superconducting architecture. They got a net gain: their logical qubit stays alive for maybe twice as long as the underlying physical qubits do. And crucially, they find that their logical coherence time increases as they pass to larger codes, with higher distance, on more physical qubits. With superconducting qubits, there are still limits to how many you can stuff onto a chip, and eventually you’ll need communication of qubits between chips, which has yet to be demonstrated. But if you could scale Google’s current experiment even to 1500 physical qubits, you’d probably push the logical error rate low enough to use that as a building block for a future scalable fault-tolerant device.

Then, just last week, a collaboration between Microsoft and Quantinuum announced that, in the trapped-ion architecture, they applied pretty substantial circuits to logically-encoded qubits—again in a way that gets a net gain in fidelity over not doing error-correction, modulo a debate about whether they’re relying too much on postselection. So, they made a GHZ state, which is basically like a Schrödinger cat, out of 12 logically encoded qubits. They also did a “quantum chemistry simulation,” which had only two logical qubits, but which required three logical non-Clifford gates—which is the hard kind of gate when you’re doing error-correction.

Because of these advances, as well as others—what QuEra is doing with neutral atoms, what PsiQuantum and Xanadu are doing with photonics, etc.—I’m now more optimistic than I’ve ever been that, if things continue at the current rate, either there are useful fault-tolerant QCs in the next decade, or else something surprising happens to stop that. Plausibly we’ll get there not just with one hardware architecture, but with multiple ones, much like the Manhattan Project got a uranium bomb and a plutonium bomb around the same time, so the question will become which one is most economic.

If someone asks me why I’m now so optimistic, the core of the argument is 2-qubit gate fidelities. We’ve known for years that, at least on paper, quantum fault-tolerance becomes a net win (that is, you sustainably correct errors faster than you introduce new ones) once you have physical 2-qubit gates that are ~99.99% reliable. The problem has “merely” been how far we were from that. When I entered the field, in the late 1990s, it would’ve been like a Science or Nature paper to do a 2-qubit gate with 50% fidelity. But then at some point the 50% became 90%, became 95%, became 99%, and within the past year, multiple groups have reported 99.9%. So, if you just plot the log of the infidelity as a function of year and stare at it—yeah, you’d feel pretty optimistic about the next decade too!
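Here’s roughly what that stare-at-the-plot exercise looks like in code. The fidelity milestones are the ones I just listed; the years attached to them below are my own illustrative guesses, not data from any particular source:

import numpy as np
years      = np.array([1998, 2005, 2011, 2017, 2024])   # illustrative year assignments only
fidelities = np.array([0.50, 0.90, 0.95, 0.99, 0.999])  # the milestones mentioned above
slope, _ = np.polyfit(years, np.log10(1 - fidelities), 1)
print(f"~{abs(slope):.2f} orders of magnitude of infidelity shaved off per year")
# roughly one order of magnitude per decade; one more (1e-3 -> 1e-4) reaches the ~99.99% regime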

Or pessimistic, as the case may be! To any of you who are worried about post-quantum cryptography: by now I’m used to delivering the message that maybe, eventually, someone will need to start thinking about migrating from RSA and Diffie-Hellman and elliptic curve crypto to lattice-based crypto, or other systems that could plausibly withstand quantum attack. I think today that message needs to change. I think today the message needs to be: yes, unequivocally, worry about this now. Have a plan.

So, I think this moment is a good one for reflection. We’re used to quantum computing having this air of unreality about it. Like sure, we go to conferences, we prove theorems about these complexity classes like BQP and QMA, the experimenters do little toy demos that don’t scale. But if this will ever be practical at all, then for all we know, not for another 200 years. It feels really different to think of this as something plausibly imminent. So what I want to do for the rest of this talk is to step back and ask, what are the main reasons why people regarded this as not entirely real? And what can we say about those reasons in light of where we are today?


Reason #1

For the general public, maybe the overriding reason not to take QC seriously has just been that it sounded too good to be true. Like, great, you’ll have this magic machine that’s gonna exponentially speed up every problem in optimization and machine learning and finance by trying out every possible solution simultaneously, in different parallel universes. Does it also dice peppers?

For this objection, I’d say that our response hasn’t changed at all in 30 years, and it’s simply, “No, that’s not what it will do and not how it will work.” We should acknowledge that laypeople and journalists and unfortunately even some investors and government officials have been misled by the people whose job it was to explain this stuff to them.

I think it’s important to tell people that the only hope of getting a speedup from a QC is to exploit the way that QM works differently from classical probability theory — in particular, that it involves these numbers called amplitudes, which can be positive, negative, or even complex. With every quantum algorithm, what you’re trying to do is choreograph a pattern of interference where for each wrong answer, the contributions to its amplitude cancel each other out, whereas the contributions to the amplitude of the right answer reinforce each other. The trouble is, it’s only for a few practical problems that we know how to do that in a way that vastly outperforms the best known classical algorithms.
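Here’s about the smallest worked example of that choreography, in Python: a single qubit sent through two Hadamard gates. The two paths to the outcome 1 contribute amplitudes +1/2 and −1/2 and cancel, while the two paths to 0 contribute +1/2 and +1/2 and reinforce — the whole game in miniature:

import numpy as np
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
psi = H @ H @ np.array([1.0, 0.0])             # |0>, then H, then H again
print(psi)                                     # [1. 0.]: the amplitude of |1> has cancelled exactly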

What are those problems? Here, for all the theoretical progress that’s been made in these past decades, I’m going to give the same answer in 2024 that I would’ve given in 1998. Namely, there’s the simulation of chemistry, materials, nuclear physics, or anything else where many-body quantum effects matter. This was Feynman’s original application from 1981, but probably still the most important one commercially. It could plausibly help with batteries, drugs, solar cells, high-temperature superconductors, all kinds of other things, maybe even in the next few years.

And then there’s breaking public-key cryptography, which is not commercially important, but is important for other reasons well-known to everyone here.

And then there’s everything else. For problems in optimization, machine learning, finance, and so on, there’s typically a Grover’s speedup, but that of course is “only” a square root and not an exponential, which means that it will take much longer before it’s relevant in practice. And one of the earliest things we learned in quantum computing theory is that there’s no “black-box” way to beat the Grover speedup. By the way, that’s also relevant to breaking cryptography — other than the subset of cryptography that’s based on abelian groups and can be broken by Shor’s algorithm or the like. The centerpiece of my PhD thesis, twenty years ago, was the theorem that you can’t get more than a Grover-type polynomial speedup for the black-box problem of finding collisions in cryptographic hash functions.
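To see where the square root comes from, here’s a toy classical simulation of Grover search over N = 2^20 items: about (π/4)·√N ≈ 804 iterations, each making one query, suffice to find the marked item with near certainty, versus ~N/2 queries on average for classical exhaustive search:

import numpy as np
n = 20
N = 2 ** n
marked = 123456                                # the item we're searching for
psi = np.full(N, 1 / np.sqrt(N))               # uniform superposition over all N items
iters = int(np.pi / 4 * np.sqrt(N))            # ~804 quantum queries, vs ~N/2 classical
for _ in range(iters):
    psi[marked] *= -1                          # oracle: flip the sign of the marked amplitude
    psi = 2 * psi.mean() - psi                 # diffusion: inversion about the mean
print(iters, psi[marked] ** 2)                 # success probability close to 1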

So then what remains? Well, there are all sorts of heuristic quantum algorithms for classical optimization and machine learning problems — QAOA (Quantum Approximate Optimization Algorithm), quantum annealing, and so on — and we can hope that sometimes they’ll beat the best classical heuristics for the same problems, but it will be trench warfare, not just magically speeding up everything. There are lots of quantum algorithms somehow inspired by the HHL (Harrow-Hassidim-Lloyd) algorithm for solving linear systems, and we can hope that some of those algorithms will get exponential speedups for end-to-end problems that matter, as opposed to problems of transforming one quantum state to another quantum state. We can of course hope that new quantum algorithms will be discovered. And most of all, we can look for entirely new problem domains, where people hadn’t even considered using quantum computers before—new orchards in which to pick low-hanging fruit. Recently, Shih-Han Hung and I, along with others, have proposed using current QCs to generate cryptographically certified random numbers, which could be used in proof-of-stake cryptocurrencies like Ethereum. I’m hopeful that people will find other protocol applications of QC like that one — “proof of quantum work.” [Another major potential protocol application, which Dan Boneh brought up after my talk, is quantum one-shot signatures.]

Anyway, taken together, I don’t think any of this is too good to be true. I think it’s genuinely good and probably true!


Reason #2

A second reason people didn’t take seriously that QC was actually going to happen was the general thesis of technological stagnation, at least in the physical world. You know, maybe in the 40s and 50s, humans built entirely new types of machines, but nowadays what do we do? We issue press releases. We make promises. We argue on social media.

Nowadays, of course, pessimism about technological progress seems hard to square with the revolution that’s happening in AI, another field that spent decades being ridiculed for unfulfilled promises and that’s now fulfilling the promises. I’d also speculate that, to the extent there is technological stagnation, most of it is simply that it’s become really hard to build new infrastructure—high-speed rail, nuclear power plants, futuristic cities—for legal reasons and NIMBY reasons and environmental review reasons and Baumol’s cost disease reasons. But none of that really applies to QC, just like it hasn’t applied so far to AI.


Reason #3

A third reason people didn’t take this seriously was the sense of “It’s been 20 years already, where’s my quantum computer?” QC is often compared to fusion power, another technology that’s “eternally just over the horizon.” (Except, I’m no expert, but there seems to be dramatic progress these days in fusion power too!)

My response to the people who make that complaint was always, like, how much do you know about the history of technology? It took more than a century for heavier-than-air flight to go from correct statements of the basic principle to reality. Universal programmable classical computers surely seemed more fantastical from the standpoint of 1920 than quantum computers seem today, but then a few decades later they were built. Today, AI provides a particularly dramatic example: ideas proposed a long time ago—neural nets, backpropagation—were written off as failures, but we now know the ideas were perfectly sound; it just took a few decades for the scaling of hardware to catch up to them. That’s why this objection never had much purchase with me, even before the dramatic advances in experimental quantum error-correction of the last year or two.


Reason #4

A fourth reason why people didn’t take QC seriously is that, a century after the discovery of QM, some people still harbor doubts about quantum mechanics itself. Either they explicitly doubt it, like Leonid Levin, Roger Penrose, or Gerard ‘t Hooft. Or they say things like, “complex Hilbert space in 2^n dimensions is a nice mathematical formalism, but mathematical formalism is not reality”—the kind of thing you say when you want to doubt, but not take full intellectual responsibility for your doubts.

I think the only thing for us to say in response, as quantum computing researchers—and the thing I consistently have said—is man, we welcome that confrontation! Let’s test quantum mechanics in this new regime. And if, instead of building a QC, we have to settle for “merely” overthrowing quantum mechanics and opening up a new era in physics—well then, I guess we’ll have to find some way to live with that.


Reason #5

My final reason why people didn’t take QC seriously is the only technical one I’ll discuss here. Namely, maybe quantum mechanics is fine but fault-tolerant quantum computing is fundamentally “screened off” or “censored” by decoherence or noise—and maybe the theory of quantum fault-tolerance, which seemed to indicate the opposite, makes unjustified assumptions. This has been the position of Gil Kalai, for example.

The challenge for that position has always been to articulate, what is true about the world instead? Can every realistic quantum system be simulated efficiently by a classical computer? If so, how? What is a model of correlated noise that kills QC without also killing scalable classical computing?—which turns out to be a hard problem.

In any case, I think this position has been dealt a severe blow by the Random Circuit Sampling quantum supremacy experiments of the past five years. Scientifically, the most important thing we’ve learned from these experiments is that the fidelity seems to decay exponentially with the number of qubits, but “only” exponentially — as it would if the errors were independent from one gate to the next, precisely as the theory of quantum fault-tolerance assumes. So for anyone who believes this objection, I’d say that the ball is now firmly in their court.


So, if we accept that QC is on the threshold of becoming real, what are the next steps? There are the obvious ones: push forward with building better hardware and using it to demonstrate logical qubits and fault-tolerant operations on them. Continue developing better error-correction methods. Continue looking for new quantum algorithms and new problems for those algorithms to solve.

But there’s also a less obvious decision right now. Namely, do we put everything into fault-tolerant qubits, or do we continue trying to demonstrate quantum advantage in the NISQ (pre-fault-tolerant) era? There’s a case to be made that fault-tolerance will ultimately be needed for scaling, and anything you do without fault-tolerance is some variety of non-scalable circus trick, so we might as well get over the hump now.

But I’d like to advocate putting at least some thought into how to demonstrate a quantum advantage in the near-term. That could be via cryptographic protocols, like those that Kahanamoku-Meyer et al. have proposed. It could be via pseudorandom peaked quantum circuits, a recent proposal by me and Yuxuan Zhang—if we can figure out an efficient way to generate the circuits. Or we could try to demonstrate what William Kretschmer, Harry Buhrman, and I have called “quantum information supremacy,” where, instead of computational advantage, you try to do an experiment that directly shows the vastness of Hilbert space, via exponential advantages for quantum communication complexity, for example. I’m optimistic that that might be doable in the very near future, and have been working with Quantinuum to try to do it.

On the one hand, when I started in quantum computing 25 years ago, I reconciled myself to the prospect that I’m going to study what fundamental physics implies about the limits of computation, and maybe I’ll never live to see any of it experimentally tested, and that’s fine. On the other hand, once you tell me that there is a serious prospect of testing it soon, then I become kind of impatient. Some part of me says, let’s do this! Let’s try to achieve forthwith what I’ve always regarded as the #1 application of quantum computers, more important than codebreaking or even quantum simulation: namely, disproving the people who said that scalable quantum computing was impossible.

My podcast with Dan Faggella

Sunday, September 15th, 2024

Dan Faggella recorded an unusual podcast with me that’s now online. He introduces me as a “quantum physicist,” which is something that I never call myself (I’m a theoretical computer scientist) but have sort of given up on not being called by others. But the ensuing 85-minute conversation has virtually nothing to do with physics, or anything technical at all.

Instead, Dan pretty much exclusively wants to talk about moral philosophy: my views about what kind of AI, if any, would be a “worthy successor to humanity,” and how AIs should treat humans and vice versa, and whether there’s any objective morality at all, and (at the very end) what principles ought to guide government regulation of AI.

So, I inveigh against “meat chauvinism,” and expand on the view that locates human specialness (such as it is) in what might be the unclonability, unpredictability, and unrewindability of our minds, and plead for comity among the warring camps of AI safetyists.

The central disagreement between me and Dan ended up being about moral realism: Dan kept wanting to say that a future AGI’s moral values would probably be as incomprehensible to us as ours are to a sea snail, and that we need to make peace with that. I replied that, firstly, things like the Golden Rule strike me as plausible candidates for moral universals, which all thriving civilizations (however primitive or advanced) will agree about in the same way they agree about 5 being a prime number. And secondly, that if that isn’t true—if the morality of our AI or cyborg descendants really will be utterly alien to us—then I find it hard to have any preferences at all about the future they’ll inhabit, and just want to enjoy life while I can! That which (by assumption) I can’t understand, I’m not going to issue moral judgments about either.

Anyway, rewatching the episode, I was unpleasantly surprised by my many verbal infelicities, my constant rocking side-to-side in my chair, my sometimes talking over Dan in my enthusiasm, etc. etc., but also pleasantly surprised by the content of what I said, all of which I still stand by despite the terrifying moral minefields into which Dan invited me. I strongly recommend watching at 2x speed, which will minimize the infelicities and make me sound smarter. Thanks so much to Dan for making this happen, and let me know what you think!

Added: See here for other podcasts in the same series and on the same set of questions, including with Nick Bostrom, Ben Goertzel, Dan Hendrycks, Anders Sandberg, and Richard Sutton.

Quantum fault-tolerance milestones dropping like atoms

Tuesday, September 10th, 2024

Update: I’d been wavering—should I vote for the terrifying lunatic, ranting about trans criminal illegal aliens cooking cat meat, or for the nice woman constantly making faces as though the lunatic was completely cracking her up? But when the woman explicitly came out in favor of AI and quantum computing research … that really sealed the deal for me.


I’m happy to have done some nice things in quantum computing theory between roughly 2001 and 2018, from the quantum lower bound for the collision problem to the invention of shadow tomography.  I hope that’s not the end of it.  QC research brought me about as much pleasure as anything in life did.  So I hope my tired brain can be revved up a few more times, between now and whenever advances in AI or my failing health or the collapse of civilization makes the issue moot. If not, though, there are still many other quantum activities to fill my days: teaching (to which I’ve returned after two years), advising my students and postdocs, popular writing and podcasts and consulting, and of course, learning about the latest advances in quantum computing so I can share them with you, my loyal readers.

On that note, what a time it is in QC!  Basically, one experimental milestone after another that people have talked about since the ’90s is finally being achieved, to the point where it’s become hard to keep up with it all. Briefly though:

A couple weeks ago, the Google group announced an experiment that achieved net gain from the use of Kitaev’s surface code, using 101 physical qubits to encode 1 logical qubit. The headline result here is that, in line with theory, they see the performance improve as they pass to larger codes with more physical qubits and higher distance. Their best demonstrated code has a distance of 7, which is enough to get “beyond break-even” (their logical qubit lasts more than twice as long as the underlying physical qubits), and is also enough that any future improvements to the hardware will get amplified a lot. With superconducting qubits, one is (alas) still limited by how many one can cram onto a single chip. On paper, though, they say that scaling the same setup to a distance-27 code with ~1500 physical qubits would get them down to an error rate of 10^-6, good enough to be a building block in a future fault-tolerant QC. They also report correlated bursts of errors that come about once per hour, from a still-unknown source that appears not to be cosmic rays. I hope it’s not Gil Kalai in the next room.
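If you want to see what that kind of extrapolation looks like, here’s a sketch using the usual rule of thumb that the logical error per cycle is suppressed by a factor of Λ^((d−7)/2) relative to its distance-7 value as the code distance d grows. The two numbers plugged in below — a distance-7 error rate and a suppression factor Λ — are illustrative stand-ins of roughly the right magnitude, not figures from the paper:

eps_d7, Lam = 1.4e-3, 2.1    # assumed distance-7 logical error per cycle and suppression factor (illustrative)
for d in (7, 11, 15, 19, 23, 27):
    print(d, eps_d7 / Lam ** ((d - 7) / 2))
# by d = 27 this lands around 1e-6, the same ballpark as the number quoted above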

Separately, just this morning, Microsoft and Quantinuum announced that they entangled 12 logical qubits on a 56-physical-qubit trapped-ion processor, building on earlier work that I blogged about in April. They did this by applying a depth-3 logical circuit with 12 logical CNOT gates, to prepare a cat state. They report a 0.2% error rate when they do this, which is 11x better than they would’ve gotten without using error-correction. (Craig Gidney, in the comments, says that these results still involve postselection.)

The Microsoft/Quantinuum group also did what they called a “chemistry simulation” involving 13 physical qubits. The latter involved “only” 2 logical qubits and 4 logical gates, but 3 of those gates were non-Clifford, which are the hard kind when one is doing error-correction using a transversal code. (CNOT, by contrast, is a Clifford gate.)

Apart from the fact that Google is using superconducting qubits while Microsoft/Quantinuum are using trapped ions, the two results are incomparable in terms of what they demonstrate. Google is just scaling up a single logical qubit, but showing (crucially) that their error rate decreases with increasing size and distance. Microsoft and Quantinuum are sticking with “small” logical qubits with insufficient distance, but they’re showing that they can apply logical circuits that entangle up to 12 of these qubits.

Microsoft also announced today a new collaboration with the startup company Atom Computing, headquartered near Quantinuum in Colorado, which is trying to build neutral-atom QCs (like QuEra in Boston). Over the past few years, Microsoft’s quantum group has decisively switched from a strategy of “topological qubits or bust” to a strategy of “anything that works,” although they assure me that they also remain committed to the topological approach.

Anyway, happy to hear in the comments from anyone who knows more details, or wants to correct me on any particular, or has questions which I or others can try our best to answer.

Let me end by sticking my neck out. If hardware progress continues at the rate we’ve seen for the past year or two, then I find it hard to understand why we won’t have useful fault-tolerant QCs within the next decade. (And now to retreat my neck a bit: the “if” clause in that sentence is important and non-removable!)

Quantum developments!

Thursday, July 11th, 2024

Perhaps like the poor current President of the United States, I can feel myself fading, my memory and verbal facility and attention to detail failing me, even while there’s so much left to do to battle the nonsense in the world. I started my career on an accelerated schedule—going to college at 15, finishing my PhD at 22, etc. etc.—and the decline is (alas) also hitting me early, at the ripe age of 43.

Nevertheless, I do seem to remember that this was once primarily a quantum computing blog, and that I was known to the world as a quantum computing theorist. And exciting things continue to happen in quantum computing…


First, a company in the UK called Oxford Ionics has announced that it now has a system of trapped-ion qubits in which it’s prepared two-qubit maximally entangled states with 99.97% fidelity. If true, this seems extremely good. Indeed, it seems better than the numbers from bigger trapped-ion efforts, and quite close to the ~99.99% that you’d want for quantum fault-tolerance. But maybe there’s a catch? Will they not be able to maintain this kind of fidelity when doing a long sequence of programmable two-qubit gates on dozens of qubits? Can the other trapped-ion efforts actually achieve similar fidelities in head-to-head comparisons? Anyway, I was surprised to see how little attention the paper got on SciRate. I look forward to hearing from experts in the comment section.


Second, I almost forgot … but last week Quantinuum announced that it’s done a better quantum supremacy experiment based on Random Circuit Sampling with 56 qubits—similar to what Google and USTC did in 2019-2020, but this time using 2-qubit gates with 99.84% fidelities (rather than merely ~99.5%). This should set a new standard for those looking to simulate these things using tensor network methods.


Third, a new paper by Schuster, Haferkamp, and Huang gives a major advance on k-designs and pseudorandom unitaries. Roughly speaking, the paper shows that even in one dimension, a random n-qubit quantum circuit, with alternating brickwork layers of 2-qubit gates, forms a “k-design” after only O(k polylog k log n) layers of gates. Well, modulo one caveat: the “random circuit” isn’t from the most natural ensemble, but has to have some of its 2-qubit gates set to the identity, namely those that straddle certain contiguous blocks of log n qubits. This seems like a purely technical issue—how could randomizing those straddling gates make the mixing behavior worse?—but future work will be needed to address it. Notably, the new upper bound is off from the best-possible k layers by only logarithmic factors. (For those tuning in from home: a k-design informally means a collection of n-qubit unitaries such that, from the perspective of degree-k polynomials, choosing a unitary randomly from the collection looks the same as choosing randomly among all n-qubit unitary transformations—i.e., from the Haar measure.)
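To picture the object in question, here’s a quick numpy sketch of the plain 1D brickwork ensemble — alternating layers of Haar-random two-qubit gates on adjacent pairs, acting on a small state vector. (Note this is the vanilla ensemble, without the identity-gates-at-block-boundaries tweak that the paper’s proof actually requires.)

import numpy as np
from scipy.stats import unitary_group

def apply_2q(psi, U, q, n):
    # apply a 4x4 unitary U to adjacent qubits (q, q+1) of an n-qubit state vector
    psi = psi.reshape(2 ** q, 4, 2 ** (n - q - 2))
    return np.einsum('ab,xby->xay', U, psi).reshape(-1)

n, layers = 8, 6
psi = np.zeros(2 ** n, dtype=complex)
psi[0] = 1.0
for layer in range(layers):
    for q in range(layer % 2, n - 1, 2):       # even layers act on (0,1),(2,3),...; odd on (1,2),(3,4),...
        psi = apply_2q(psi, unitary_group.rvs(4), q, n)
print(np.linalg.norm(psi))                     # still 1.0, as it must be for a unitary circuit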

Anyway, even in my current decrepit state, I can see that such a result would have implications for … well, all sorts of things that quantum computing and information theorists care about. Again I welcome any comments from experts!


Incidentally, congratulations to Peter Shor for winning the Shannon Award!

UmeshFest

Friday, May 10th, 2024

Unrelated Announcements: See here for a long interview with me in The Texas Orator, covering the usual stuff (quantum computing, complexity theory, AI safety). And see here for a podcast with me and Spencer Greenberg about a similar mix of topics.


A couple weeks ago, I helped organize UmeshFest: Don’t Miss This Flight, a workshop at UC Berkeley’s Simons Institute to celebrate the 2^6th birthday of my former PhD adviser Umesh Vazirani. Peter Shor, John Preskill, Manuel Blum, Madhu Sudan, Sanjeev Arora, and dozens of other luminaries of quantum and classical computation were on hand to help tell the story of quantum computing theory and Umesh’s central role in it. There was also constant roasting of Umesh—of his life lessons from the squash court, his last-minute organizational changes and phone calls at random hours. I was delighted to find that my old coinage of “Umeshisms” was simply standard usage among the attendees.


At Berkeley, many things were as I remembered them—my favorite Thai eatery, the bubble tea, the Campanile—but not everything was the same. Here I am in front of Berkeley’s Gaza encampment, a.k.a. its “Anti Zionism Zone” or what was formerly Sproul Plaza (zoom into the chalk):

I felt a need to walk through the Anti Zionism Zone day after day (albeit unassumingly, neither draped in an Israeli flag nor looking to start an argument with anyone), for more-or-less the same reasons why the US regularly sends aircraft carriers through the Strait of Taiwan.


Back in the more sheltered environment of the Simons Institute, it was great to be among friends, some of whom I hadn’t seen since before Covid. Andris Ambainis and I worked together for a bit on an open problem in quantum query complexity, for old times’ sake (we haven’t solved it yet).

And then there were talks! I thought I’d share my own talk, which was entitled The Story of BQP (Bounded-Error Quantum Polynomial-Time). Here are the PowerPoint slides, but I’ll also share screen-grabs for those of you who constantly complain that you can’t open PPTX files.

I was particularly proud of the design of my title slide:

Moving on:

The class BQP/qpoly, I should explain, is all about an advisor who’s all-wise and perfectly benevolent, but who doesn’t have a lot of time to meet with his students, so he simply doles out the same generic advice to all of them, regardless of their thesis problem x.

I then displayed my infamous “Umeshisms” blog post from 2005—one of the first posts in the history of this blog:

As I explained, now that I hang out with the rationalist and AI safety communities, which are also headquartered in Berkeley, I’ve learned that my “Umeshisms” post somehow took on a life of its own. Once, when dining at one of the rationalists’ polyamorous Berkeley group houses, I said this has been lovely but I’ll now need to leave, to visit my former PhD adviser Umesh Vazirani. “You mean the Umesh?!” the rationalists excitedly exclaimed. “Of Umeshisms? If you’ve never missed a flight?”

But moving on:

(Note that by “QBPP,” Berthiaume and Brassard meant what we now call BQP.)

Feynman and Deutsch asked exactly the right question—does simulating quantum mechanics on a classical computer inherently produce an exponential slowdown, or not?—but they lacked most of the tools to start formally investigating the question. A factor-of-two quantum speedup for the XOR function could be dismissed as unimpressive, while a much greater quantum speedup for the “constant vs. balanced” problem could be dismissed as a win against only deterministic classical algorithms, rather than randomized algorithms. Deutsch-Jozsa may have been the first time that an apparent quantum speedup faltered in an honest comparison against classical algorithms. It certainly wasn’t the last!

Ah, but this is where Bernstein and Vazirani enter the scene.

Bernstein and Vazirani didn’t merely define BQP, which remains the central object of study in quantum complexity theory. They also established its most basic properties:

And, at least in the black-box model, Bernstein and Vazirani gave the first impressive quantum speedup for a classical problem that survived in a fair comparison against the best classical algorithm:

The Recursive Bernstein-Vazirani problem, also called Recursive Fourier Sampling, is constructed as a “tree” of instances of the Bernstein-Vazirani problem, where to query the Boolean function at any given level, you need to solve a Bernstein-Vazirani problem for a Boolean function at the level below it, and then run the secret string s through a fixed Boolean function g. For more, see my old paper Quantum Lower Bound for Recursive Fourier Sampling.

Each Bernstein-Vazirani instance has classical query complexity n and quantum query complexity 1. So, if the tree of instances has depth d, then overall the classical query complexity is n^d, while the quantum query complexity is only 2^d. Where did the 2 come from? From the need to uncompute the secret strings s at each level, to enable quantum interference at the next level up—thereby forcing us to run the algorithm twice. A key insight.
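If it helps to see the base case in code, here’s a small numpy simulation of a single Bernstein-Vazirani instance: the hidden string s is recovered from one query to the phase oracle (the recursion, and the uncomputation step that costs the factor of 2, aren’t shown):

import numpy as np
from functools import reduce
from itertools import product

n = 6
s = np.random.randint(0, 2, n)                      # hidden string; f(x) = s.x mod 2
xs = np.array(list(product([0, 1], repeat=n)))      # all inputs, ordered to match the state vector
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = reduce(np.kron, [H] * n)                       # Hadamard on every qubit

psi = np.ones(2 ** n) / np.sqrt(2 ** n)             # H^n applied to |0...0>
psi = psi * (-1.0) ** (xs @ s % 2)                  # a single query to the phase oracle (-1)^f(x)
psi = Hn @ psi                                      # H^n again
print(s, xs[np.argmax(np.abs(psi))])                # all the amplitude now sits on |s>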

The Recursive Fourier Sampling separation set the stage for Simon’s algorithm, which gave a more impressive speedup in the black-box model, and thence for the famous Shor’s algorithm for factoring and discrete log:

But Umesh wasn’t done establishing the most fundamental properties of BQP! There’s also the seminal 1994 paper by Bennett, Bernstein, Brassard, and Vazirani:

In light of the BV and BBBV papers, let’s see how BQP seems to fit with classical complexity classes—an understanding that’s remained largely stable for the past 30 years:

We can state a large fraction of the research agenda of the whole field, to this day, as questions about BQP:

I won’t have time to discuss all of these questions, but let me at least drill down on the first few.

Many people hoped the list of known problems in BQP would now be longer than it is. So it goes: we don’t decide the truth, we only discover it.

As a 17-year-old just learning about quantum computing in 1998 by reading the Bernstein-Vazirani paper, I was thrilled when I managed to improve their containment BQP ⊆ P^#P to BQP ⊆ PP. I thought that would be my big debut in quantum complexity theory. I was then crushed when I learned that Adleman, DeMarrais, and Huang had proved the same thing a year prior. OK, but at least it wasn’t, like, 50 years prior! Maybe if I kept at it, I’d reach the frontier soon enough.

Umesh, from the very beginning, raised the profound question of BQP’s relation to the polynomial hierarchy. Could we at least construct an oracle relative to which BQP⊄PH—or, closely related, relative to which P=NP≠BQP? Recursive Fourier Sampling was already a candidate for such a separation. I spent months trying to prove that candidate wasn’t in PH, but failed. That led me eventually to propose a very different problem, Forrelation, which seemed like a stronger candidate, although I couldn’t prove that either. Finally, in 2018, after four years of effort, Ran Raz and Avishay Tal proved that my Forrelation problem was not in PH, thereby resolving Umesh’s question after a quarter century.

We now know three different ways by which a quantum computer can not merely solve any BQP problem efficiently, but prove its answer to a classical skeptic via an interactive protocol! Using quantum communication, using two entangled (but non-communicating) quantum computers, or using cryptography (this last a breakthrough of Umesh’s PhD student Urmila Mahadev). It remains a great open problem, first posed to my knowledge by Daniel Gottesman, whether one can do it with none of these things.

To see many of the advantages of quantum computation over classical, we’ve learned that we need to broaden our vision beyond BQP (which is a class of languages), to promise problems (like estimating the expectation values of observables), sampling problems (like BosonSampling and Random Circuit Sampling), and relational problems (like the Yamakawa-Zhandry problem, subject of a recent breakthrough). It’s conceivable that quantum advantage could remain for such problems even if it turned out that P=BQP.

A much broader question is whether BQP captures all languages that can be efficiently decided using “reasonable physical resources.” What about chiral quantum field theories, like the Standard Model of elementary particles? What about quantum theories of gravity? Good questions!

Since it was Passover during the talk, I literally said “Dayenu” to Umesh: “if you had only given us BQP, that would’ve been enough! but you didn’t, you gave us so much more!”

Happy birthday Umesh!! We look forward to celebrating again on all your subsequent power-of-2 birthdays.