Quantum Computing: Between Hope and Hype
So, back in June the White House announced that UCLA would host a binational US/India workshop, for national security officials from both countries to learn about the current status of quantum computing and post-quantum cryptography. It fell to my friend and colleague Rafail Ostrovsky to organize the workshop, which ended up being held last week. When Rafi invited me to give the opening talk, I knew he’d keep emailing until I said yes. So, on the 3-hour flight to LAX, I wrote the following talk in a spiral notebook, which I then delivered the next morning with no slides. I called it “Quantum Computing: Between Hope and Hype.” I thought Shtetl-Optimized readers might be interested too, since it contains my reflections on a quarter-century in quantum computing, and prognostications on what I expect soon. Enjoy, and let me know what you think!
Quantum Computing: Between Hope and Hype
by Scott Aaronson
September 16, 2024
When Rafi invited me to open this event, it sounded like he wanted big-picture pontification more than technical results, which is just as well, since I’m getting old for the latter. Also, I’m just now getting back into quantum computing after a two-year leave at OpenAI to think about the theoretical foundations of AI safety. Luckily for me, that was a relaxing experience, since not much happened in AI these past two years. [Pause for laughs] So then, did anything happen in quantum computing while I was away?
This, of course, has been an extraordinary time for both quantum computing and AI, and not only because the two fields were mentioned for the first time in an American presidential debate (along with, I think, the problem of immigrants eating pets). But it’s extraordinary for quantum computing and for AI in very different ways. In AI, practice is wildly ahead of theory, and there’s a race for scientific understanding to catch up to where we’ve gotten via the pure scaling of neural nets and the compute and data used to train them. In quantum computing, it’s just the opposite: there’s right now a race for practice to catch up to where theory has been since the mid-1990s.
I started in quantum computing around 1998, which is not quite as long as some people here, but which does cover most of the time since Shor’s algorithm and the rest were discovered. So I can say: this past year or two is the first time I’ve felt like the race to build a scalable fault-tolerant quantum computer is actually underway. Like people are no longer merely giving talks about the race or warming up for the race, but running the race.
Within just the last few weeks, we saw the group at Google announce that they’d used the Kitaev surface code, with distance 7, to encode one logical qubit using 100 or so physical qubits, in a superconducting architecture. They got a net gain: their logical qubit stays alive for maybe twice as long as the underlying physical qubits do. And crucially, they find that their logical coherence time increases as they pass to larger codes, with higher distance, on more physical qubits. With superconducting qubits, there are still limits to how many physical qubits you can stuff onto a chip, and eventually you’ll need communication of qubits between chips, which has yet to be demonstrated. But if you could scale Google’s current experiment even to 1500 physical qubits, you’d probably be far enough below the error threshold to use that as a building block for a future scalable fault-tolerant device.
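(To give a rough sense of why code distance matters, here is a toy version of the textbook scaling heuristic. The threshold, physical error rate, and prefactor below are illustrative assumptions, not Google’s measured numbers.)

```python
# Toy model of surface-code error suppression. The standard heuristic says the
# logical error rate per round scales roughly like A * (p / p_th)^((d+1)/2),
# where p is the physical error rate, p_th the threshold, and d the code
# distance. All constants here are illustrative assumptions.

def logical_error_rate(p_phys, p_th=0.01, distance=7, prefactor=0.1):
    """Heuristic logical error rate per error-correction round."""
    return prefactor * (p_phys / p_th) ** ((distance + 1) / 2)

for d in (3, 5, 7, 9, 11):
    print(f"distance {d:2d}: ~{logical_error_rate(0.003, distance=d):.1e} per round")
```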
Then, just last week, a collaboration between Microsoft and Quantinuum announced that, in the trapped-ion architecture, they applied pretty substantial circuits to logically-encoded qubits—again in a way that gets a net gain in fidelity over not doing error-correction, modulo a debate about whether they’re relying too much on postselection. So, they made a GHZ state, which is basically like a Schrödinger cat, out of 12 logically encoded qubits. They also did a “quantum chemistry simulation,” which had only two logical qubits, but which required three logical non-Clifford gates—which is the hard kind of gate when you’re doing error-correction.
Because of these advances, as well as others—what QuEra is doing with neutral atoms, what PsiQuantum and Xanadu are doing with photonics, etc.—I’m now more optimistic than I’ve ever been that, if things continue at the current rate, either there will be useful fault-tolerant QCs within the next decade, or else something surprising will happen to stop that. Plausibly we’ll get there not just with one hardware architecture, but with multiple ones, much like the Manhattan Project got a uranium bomb and a plutonium bomb around the same time, so the question will become which one is most economical.
If someone asks me why I’m now so optimistic, the core of the argument is 2-qubit gate fidelities. We’ve known for years that, at least on paper, quantum fault-tolerance becomes a net win (that is, you sustainably correct errors faster than you introduce new ones) once you have physical 2-qubit gates that are ~99.99% reliable. The problem has “merely” been how far we were from that. When I entered the field, in the late 1990s, it would’ve been like a Science or Nature paper to do a 2-qubit gate with 50% fidelity. But then at some point the 50% became 90%, became 95%, became 99%, and within the past year, multiple groups have reported 99.9%. So, if you just plot the log of the infidelity as a function of year and stare at it—yeah, you’d feel pretty optimistic about the next decade too!
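(If you want to play along at home, here is a minimal sketch of that extrapolation. The milestone years are loose guesses on my part; only the qualitative trend matters.)

```python
import numpy as np

# Rough milestones paraphrased from the paragraph above; the years attached
# to each fidelity level are illustrative assumptions, not a curated dataset.
# 2-qubit gate infidelity: 0.5 (~1998) down to 0.001 (~2023).
years      = np.array([1998, 2008, 2014, 2019, 2023])
infidelity = np.array([0.5, 0.1, 0.05, 0.01, 0.001])

# Fit a straight line to log10(infidelity) vs. year, then extrapolate to the
# ~1e-4 infidelity (99.99% fidelity) regime where, on paper, fault-tolerance
# becomes a net win.
slope, intercept = np.polyfit(years, np.log10(infidelity), 1)
year_at_target = (np.log10(1e-4) - intercept) / slope
print(f"trend: ~{10**abs(slope):.2f}x reduction in infidelity per year")
print(f"naive extrapolation hits 99.99% fidelity around {year_at_target:.0f}")
```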
Or pessimistic, as the case may be! To any of you who are worried about post-quantum cryptography—by now I’m so used to delivering a message of, maybe, eventually, someone will need to start thinking about migrating from RSA and Diffie-Hellman and elliptic curve crypto to lattice-based crypto, or other systems that could plausibly withstand quantum attack. I think today that message needs to change. I think today the message needs to be: yes, unequivocally, worry about this now. Have a plan.
So, I think this moment is a good one for reflection. We’re used to quantum computing having this air of unreality about it. Like sure, we go to conferences, we prove theorems about these complexity classes like BQP and QMA, the experimenters do little toy demos that don’t scale. But if this will ever be practical at all, then for all we know, it won’t be for another 200 years. It feels really different to think of this as something plausibly imminent. So what I want to do for the rest of this talk is to step back and ask: what are the main reasons why people regarded this as not entirely real? And what can we say about those reasons in light of where we are today?
Reason #1
For the general public, maybe the overriding reason not to take QC seriously has just been that it sounded too good to be true. Like, great, you’ll have this magic machine that’s gonna exponentially speed up every problem in optimization and machine learning and finance by trying out every possible solution simultaneously, in different parallel universes. Does it also dice peppers?
For this objection, I’d say that our response hasn’t changed at all in 30 years, and it’s simply, “No, that’s not what it will do and not how it will work.” We should acknowledge that laypeople and journalists and unfortunately even some investors and government officials have been misled by the people whose job it was to explain this stuff to them.
I think it’s important to tell people that the only hope of getting a speedup from a QC is to exploit the way that QM works differently from classical probability theory — in particular, that it involves these numbers called amplitudes, which can be positive, negative, or even complex. With every quantum algorithm, what you’re trying to do is choreograph a pattern of interference where for each wrong answer, the contributions to its amplitude cancel each other out, whereas the contributions to the amplitude of the right answer reinforce each other. The trouble is, it’s only for a few practical problems that we know how to do that in a way that vastly outperforms the best known classical algorithms.
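(For concreteness, here is the smallest possible interference demo, written as a sketch in plain linear algebra rather than any particular quantum SDK: apply a Hadamard gate twice to |0>, and the two paths into |1> cancel while the two paths into |0> reinforce.)

```python
import numpy as np

# Interference in the smallest possible example: apply a Hadamard to |0>,
# then another Hadamard. The two paths into |1> carry amplitudes +1/2 and -1/2
# and cancel; the two paths into |0> carry +1/2 and +1/2 and reinforce.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

state = np.array([1.0, 0.0])        # |0>
after_one = H @ state               # equal superposition of |0> and |1>
after_two = H @ after_one           # the paths interfere

print("after one H  :", after_one)          # [0.707, 0.707]
print("after two H  :", after_two)          # [1, 0]: the |1> amplitude cancels
print("probabilities:", np.abs(after_two) ** 2)
```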
What are those problems? Here, for all the theoretical progress that’s been made in these past decades, I’m going to give the same answer in 2024 that I would’ve given in 1998. Namely, there’s the simulation of chemistry, materials, nuclear physics, or anything else where many-body quantum effects matter. This was Feynman’s original application from 1981, but probably still the most important one commercially. It could plausibly help with batteries, drugs, solar cells, high-temperature superconductors, all kinds of other things, maybe even in the next few years.
And then there’s breaking public-key cryptography, which is not commercially important, but is important for other reasons well-known to everyone here.
And then there’s everything else. For problems in optimization, machine learning, finance, and so on, there’s typically a Grover’s speedup, but that of course is “only” a square root and not an exponential, which means that it will take much longer before it’s relevant in practice. And one of the earliest things we learned in quantum computing theory is that there’s no “black-box” way to beat the Grover speedup. By the way, that’s also relevant to breaking cryptography — other than the subset of cryptography that’s based on abelian groups and can be broken by Shor’s algorithm or the like. The centerpiece of my PhD thesis, twenty years ago, was the theorem that you can’t get more than a Grover-type polynomial speedup for the black-box problem of finding collisions in cryptographic hash functions.
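(To put numbers on “only a square root”: the standard textbook estimates are roughly (pi/4)·sqrt(N) Grover iterations to find one marked item among N, versus ~N/2 classical queries on average. A quick sketch:)

```python
import math

# Grover's speedup is "only" quadratic: ~(pi/4)*sqrt(N) iterations to find one
# marked item among N, versus ~N/2 classical queries on average. These are the
# standard asymptotic estimates, ignoring all constant-factor overheads.
def grover_iterations(n_items):
    return math.floor((math.pi / 4) * math.sqrt(n_items))

for n_bits in (20, 40, 80):
    N = 2 ** n_bits
    print(f"{n_bits}-bit search: classical ~{N // 2:.1e} queries, "
          f"Grover ~{grover_iterations(N):.1e} iterations")
```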
So then what remains? Well, there are all sorts of heuristic quantum algorithms for classical optimization and machine learning problems — QAOA (Quantum Approximate Optimization Algorithm), quantum annealing, and so on — and we can hope that sometimes they’ll beat the best classical heuristics for the same problems, but it will be trench warfare, not just magically speeding up everything. There are lots of quantum algorithms somehow inspired by the HHL (Harrow-Hassidim-Lloyd) algorithm for solving linear systems, and we can hope that some of those algorithms will get exponential speedups for end-to-end problems that matter, as opposed to problems of transforming one quantum state to another quantum state. We can of course hope that new quantum algorithms will be discovered. And most of all, we can look for entirely new problem domains, where people hadn’t even considered using quantum computers before—new orchards in which to pick low-hanging fruit. Recently, Shih-Han Hung and I, along with others, have proposed using current QCs to generate cryptographically certified random numbers, which could be used in proof-of-stake cryptocurrencies like Ethereum. I’m hopeful that people will find other protocol applications of QC like that one — “proof of quantum work.” [Another major potential protocol application, which Dan Boneh brought up after my talk, is quantum one-shot signatures.]
Anyway, taken together, I don’t think any of this is too good to be true. I think it’s genuinely good and probably true!
Reason #2
A second reason people didn’t take seriously that QC was actually going to happen was the general thesis of technological stagnation, at least in the physical world. You know, maybe in the 40s and 50s, humans built entirely new types of machines, but nowadays what do we do? We issue press releases. We make promises. We argue on social media.
Nowadays, of course, pessimism about technological progress seems hard to square with the revolution that’s happening in AI, another field that spent decades being ridiculed for unfulfilled promises and that’s now fulfilling the promises. I’d also speculate that, to the extent there is technological stagnation, most of it is simply that it’s become really hard to build new infrastructure—high-speed rail, nuclear power plants, futuristic cities—for legal reasons and NIMBY reasons and environmental review reasons and Baumol’s cost disease reasons. But none of that really applies to QC, just like it hasn’t applied so far to AI.
Reason #3
A third reason people didn’t take this seriously was the sense of “It’s been 20 years already, where’s my quantum computer?” QC is often compared to fusion power, another technology that’s “eternally just over the horizon.” (Except, I’m no expert, but there seems to be dramatic progress these days in fusion power too!)
My response to the people who make that complaint was always, like, how much do you know about the history of technology? It took more than a century for heavier-than-air flight to go from correct statements of the basic principle to reality. Universal programmable classical computers surely seemed more fantastical from the standpoint of 1920 than quantum computers seem today, but then a few decades later they were built. Today, AI provides a particularly dramatic example where ideas were proposed a long time ago—neural nets, backpropagation—those ideas were then written off as failures, but no, we now know that the ideas were perfectly sound; it just took a few decades for the scaling of hardware to catch up to the ideas. That’s why this objection never had much purchase with me, even before the dramatic advances in experimental quantum error-correction of the last year or two.
Reason #4
A fourth reason why people didn’t take QC seriously is that, a century after the discovery of QM, some people still harbor doubts about quantum mechanics itself. Either they explicitly doubt it, like Leonid Levin, Roger Penrose, or Gerard ‘t Hooft. Or they say things like, “complex Hilbert space in 2^n dimensions is a nice mathematical formalism, but mathematical formalism is not reality”—the kind of thing you say when you want to doubt, but not take full intellectual responsibility for your doubts.
I think the only thing for us to say in response, as quantum computing researchers—and the thing I consistently have said—is man, we welcome that confrontation! Let’s test quantum mechanics in this new regime. And if, instead of building a QC, we have to settle for “merely” overthrowing quantum mechanics and opening up a new era in physics—well then, I guess we’ll have to find some way to live with that.
Reason #5
My final reason why people didn’t take QC seriously is the only technical one I’ll discuss here. Namely, maybe quantum mechanics is fine but fault-tolerant quantum computing is fundamentally “screened off” or “censored” by decoherence or noise—and maybe the theory of quantum fault-tolerance, which seemed to indicate the opposite, makes unjustified assumptions. This has been the position of Gil Kalai, for example.
The challenge for that position has always been to articulate, what is true about the world instead? Can every realistic quantum system be simulated efficiently by a classical computer? If so, how? What is a model of correlated noise that kills QC without also killing scalable classical computing?—which turns out to be a hard problem.
In any case, I think this position has been dealt a severe blow by the Random Circuit Sampling quantum supremacy experiments of the past five years. Scientifically, the most important thing we’ve learned from these experiments is that the fidelity seems to decay exponentially with the number of qubits, but “only” exponentially — as it would if the errors were independent from one gate to the next, precisely as the theory of quantum fault-tolerance assumes. So for anyone who believes this objection, I’d say that the ball is now firmly in their court.
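(As a toy illustration of that point: if each gate fails independently with some small probability, the whole-circuit fidelity decays exponentially in the total gate count, but no faster. The error rate and circuit sizes below are arbitrary choices for illustration, not any experiment’s actual numbers.)

```python
import math

# If each gate fails independently with probability eps, circuit fidelity is
# roughly (1 - eps)^G for G gates: exponential decay in circuit size, but "only"
# exponential, which is what the theory of fault-tolerance assumes.
def circuit_fidelity(n_qubits, depth, eps=0.003):
    gates = n_qubits * depth          # crude count: one gate per qubit per layer
    return (1 - eps) ** gates

for n in (20, 40, 53, 70):
    print(f"{n} qubits, depth 20: fidelity ~ {circuit_fidelity(n, 20):.3e}")
```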
So, if we accept that QC is on the threshold of becoming real, what are the next steps? There are the obvious ones: push forward with building better hardware and using it to demonstrate logical qubits and fault-tolerant operations on them. Continue developing better error-correction methods. Continue looking for new quantum algorithms and new problems for those algorithms to solve.
But there’s also a less obvious decision right now. Namely, do we put everything into fault-tolerant qubits, or do we continue trying to demonstrate quantum advantage in the NISQ (pre-fault-tolerant) era? There’s a case to be made that fault-tolerance will ultimately be needed for scaling, and anything you do without fault-tolerance is some variety of non-scalable circus trick, so we might as well get over the hump now.
But I’d like to advocate putting at least some thought into how to demonstrate a quantum advantage in the near-term. That could be via cryptographic protocols, like those that Kahanamoku-Meyer et al. have proposed. It could be via pseudorandom peaked quantum circuits, a recent proposal by me and Yuxuan Zhang—if we can figure out an efficient way to generate the circuits. Or we could try to demonstrate what William Kretschmer, Harry Buhrman, and I have called “quantum information supremacy,” where, instead of computational advantage, you try to do an experiment that directly shows the vastness of Hilbert space, via exponential advantages for quantum communication complexity, for example. I’m optimistic that that might be doable in the very near future, and have been working with Quantinuum to try to do it.
On the one hand, when I started in quantum computing 25 years ago, I reconciled myself to the prospect that I’m going to study what fundamental physics implies about the limits of computation, and maybe I’ll never live to see any of it experimentally tested, and that’s fine. On the other hand, once you tell me that there is a serious prospect of testing it soon, then I become kind of impatient. Some part of me says, let’s do this! Let’s try to achieve forthwith what I’ve always regarded as the #1 application of quantum computers, more important than codebreaking or even quantum simulation: namely, disproving the people who said that scalable quantum computing was impossible.
Comment #3 September 22nd, 2024 at 7:27 pm
What we know about QC amply justifies saying that “In quantum computing, it’s just the opposite: there’s right now a race for practice to catch up to where theory has been since the mid-1990s”, in contrast to the situation for AI, but I think another side of theory, that of foundations of QM, still restricts interest in QC to enthusiasts. My sense is that decision makers at all corporate levels are waiting until they understand QM, as in this article from yesterday, for example: https://thequantuminsider.com/2024/09/21/how-to-talk-to-your-boss-about-quantum-the-business-chat-you-didnt-know-you-needed/. One of the headings is “But It’s Too Hard to Understand!” The suggested talking points for that particular complaint look to me convincing only to the already convinced. I think investment is limited partly because the sales pitch to corporate buyers is on life support: you **could** get fired for buying a quantum computer.
The ideas that QM/QC “involves these numbers called amplitudes, which can be positive, negative, or even complex” and that “With every quantum algorithm, what you’re trying to do is choreograph a pattern of interference” are both decent takes, of course, but I think they’re not great.
I think focusing on “the way that QM works differently from classical probability theory” is problematic, because Generalized Probability Theory (the other GPT) is classically understandable as a theory of expected relative frequencies for classically incompatible prepared states (when joint probabilities are not an appropriate model), and because the Poisson bracket gives us a noncommutative transformation algebra for Classical Mechanics which, if it isn’t straw-manned, can model contextuality as well as QM can.
*If* we accept that noncommutativity is classically natural, that leaves us wondering what the difference between QM and CM+noncommutativity might be. A look at the Lorentz invariant vacuum state of QFT suggests that what can be called quantum noise is Lorentz invariant, with its amplitude determined by the action ℏ, making it different from thermal noise in QFT, which is not Lorentz invariant and which has its amplitude determined by the energy kT.
There is a lot more needed to fill this out and you may not post this under your new comments policy, so I will stop at this, except to say that in this perspective it is the classically understandable differences between the symmetry groups and amplitudes of quantum and thermal noise that are pivotal: noncommutativity is not much of an issue. I have linked to my YouTube channel, so you can look at the most recent talk there if it appeals (there’s a link in the video description to a PDF for a quicker look).
Apart from this response to the beginning of your #1, #2-#5 seem to me mostly well-taken. I’m uncertain when we will have what can be called quantum computers, but I think the reasons they might not happen are not technological. Better understanding the noise presumably will help us better understand #5.
Comment #4 September 22nd, 2024 at 8:19 pm
Very nice summary. Thank you.
Comment #5 September 22nd, 2024 at 9:25 pm
> Either they explicitly doubt it, like Leonid Levin, Roger Penrose, Gerard ‘t Hooft.
> maybe quantum mechanics is fine but fault-tolerant quantum computing is fundamentally “screened off” or “censored” by decoherence or noise—and maybe the theory of quantum fault-tolerance, which seemed to indicate the opposite, makes unjustified assumptions.
> The challenge for that position has always been to articulate, what is true about the world instead?
I guess one can steelman some of it with the Penrose-style argument that the Shapiro time delay differences due to the differential curvature inevitably result in decoherence at macroscopic scales. But it seems like we are lacking a microscopic description of gravity to make it precise. Sadly, the Bouwmeester proposal for testing this regime experimentally does not seem to be closer to reality 20-odd years later.
Comment #7 September 23rd, 2024 at 1:39 am
What if someone proves P=NP, or that SAT needs only 2^(n^a) time for some a<1, before quantum computers are built?
Comment #8 September 23rd, 2024 at 5:11 am
anonymous #7: Then that changes the story! A huge advance on 3SAT could actually leave quantum simulation in place as a central application area for QC, but it would plausibly kill codebreaking and the other application areas (depending on whether the new algorithm was practical).
Having said that, if QC could only be rendered irrelevant via a revolution in classical complexity theory of this magnitude, then that would seem like a pretty strong argument in favor of QC.
Comment #9 September 23rd, 2024 at 5:18 am
Shmi #5: Yes, Penrose is (famously) one of the only QM skeptics who takes seriously his responsibility to provide an alternative vision of what would be true about the world. But it’s not only that there remains zero evidence for Penrose’s vision, if you don’t accept his philosophical arguments about Gödel and consciousness (which I don’t). It’s also that it’s totally unclear that Penrose’s gravitational decoherence mechanism, if it existed, would actually kill QC! Plausibly it could be treated as just yet another decoherence source to be overcome using error-correction. And of course, whatever it is that gives our microtubules uncomputable powers, could presumably be exploited to go far beyond BQP anyway!
Comment #10 September 23rd, 2024 at 6:09 am
I didn’t much expect you would post my #3, but I expected even less that you would post it and not reply. Your house, your (new) rules :-) More to the point, it’s your time and you have many other things you want to do or must do with it. I will add one comment, then I’ll drop it if you choose not to respond.
I call CM+Noncommutativity+LorentzInvariantNoise “CM+”, which has representations as a noncommutative algebra of operators acting on Hilbert spaces. As such, CM+ and QM are different, in a well-defined but not entirely straightforward way, but they have exactly the same measurement theory, so that anything that can be described using QM can be described by CM+. As such, any given hardware does what it does, unaffected by how it is described. [How we describe hardware affects what new hardware we choose to construct, however.]
*In this perspective*, love it or not, what then distinguishes hardware that we might call ‘quantum’ is that the effects of the Lorentz Invariant Noise on what it does cannot be described by a mean field theory: Planck’s constant, as the amplitude of that Lorentz Invariant Noise, contributes more substantively to what the hardware does than thermal noise does. That physics will find technological applications, which in some cases will justifiably be called computation. Analog computation with Noncommutativity and Lorentz Invariant Noise as resources is entirely reasonable from a (sophisticated) classical physics perspective. What my #3 suggested was that being incrementally closer to being able to say that we understand the relationship between CM/CM+ and QM will make a difference to investment.
A final addition: Analog computation with Noncommutativity and *Thermal* Noise as resources is also a possibility according to the mathematical formalism, which might perhaps be more practical than running dilution fridges (though the physics might not oblige). A ‘Boltzmann computer’ or a ‘Gibbs computer’, as you will.
Comment #11 September 23rd, 2024 at 6:10 am
Hi Scott,
Thanks for the interesting writeup. I want to add a 6th reason which I have encountered a lot: that even if you can do it, it will be so expensive that it’ll be economically unfeasible. Like, space-elevator kind of ideas. And even if it’s feasible, there’s still the question of whether it’ll be profitable, or whether running and maintaining these things will ever make commercial sense. I don’t have a strong opinion on that one way or another, but I think it’s a reasonable concern. Best,
Sabine
Comment #12 September 23rd, 2024 at 8:17 am
Sabine #11: Thanks! Your sixth reason would have been a strong one circa 2006. Since then, though, it’s become apparent that (commercially justified or not) tech companies, VCs, and governments can and will invest many billions of dollars to make this happen. Do you think people are simply unaware of the scale of the current effort, or do they think that even it can’t suffice, that this will take trillions rather than mere billions?
Comment #13 September 23rd, 2024 at 9:48 am
> Namely, there’s the simulation of chemistry, materials, nuclear physics, or anything else where many-body quantum effects matter. This was Feynman’s original application from 1981, but probably still the most important one commercially
… which is sad, since as I keep pointing out (supported by the AWS survey https://arxiv.org/abs/2310.03011), there’s no known quantum algorithm which can reliably solve a many-body quantum problem.
Comment #14 September 23rd, 2024 at 10:16 am
“namely, disproving the people who said that scalable quantum computing was impossible”
The asymmetry in enthusiasm is that those people would have to try and build every possible Quantum Computer architecture under the sun with the expectation that they would all fail one way or another… clearly that’s not the way to live one’s life, so the only strategy is to sit back and relax, waiting for everyone else to fail, and if someone does it, they can just go “ah well… nice… I guess I was wrong! LOL” 😛
Comment #15 September 23rd, 2024 at 10:18 am
I don’t expect to persuade Vladimir #13, but for the benefit of others here, I should say: virtually every expert in materials science, quantum chemistry, and many-body physics who I’ve talked to about it has expressed at least some optimism that QCs will speed up the simulation of quantum systems in scientifically interesting and potentially commercially useful ways—starting with (e.g.) the Fermi-Hubbard model and proceeding to the FeMoCo complex and more. Based on past exchanges, I suspect Vladimir is using a restrictive definition of “reliably solve a many-body quantum problem” that would rule out most of this stuff on technicalities.
Comment #16 September 23rd, 2024 at 11:11 am
Scott #15:
Let me quote from the caveats subsection of the Fermi-Hubbard section in the AWS survey:
> […] when preparing the ground state via quantum phase estimation or eigenstate filtering methods, it is necessary to prepare an initial state with an overlap that decays no worse than polynomially with system size; otherwise, the overall complexity will be superpolynomial.
My view is that if you can reliably prepare an initial state with no worse than polynomially decaying overlap with a many-body ground state, you’ve basically already solved the problem. I encourage you to run this statement by experts in materials science, quantum chemistry, and many-body physics.
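To make the stakes concrete, here is a minimal sketch of why the overlap matters: with phase estimation, the chance of projecting onto the ground state in a single run is the squared overlap of the initial state with the ground state, so the expected number of repetitions scales like 1/overlap^2. The decay rates below are illustrative assumptions, not estimates for any specific system.

```python
# Phase estimation projects onto the ground state with probability
# |<psi_init|ground>|^2, so the expected number of repetitions is ~1/overlap^2.
# A polynomially decaying overlap keeps the cost polynomial; an exponentially
# decaying one blows it up. Both decay rates here are made-up illustrations.
def expected_repetitions(overlap):
    return 1.0 / overlap ** 2

for size in (10, 20, 40, 80):
    poly_overlap = size ** -1.0         # polynomial decay: cost stays tame
    expo_overlap = 2.0 ** (-size / 4)   # exponential decay: cost explodes
    print(f"n={size:3d}: poly ~{expected_repetitions(poly_overlap):.1e} runs, "
          f"expo ~{expected_repetitions(expo_overlap):.1e} runs")
```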
Comment #17 September 23rd, 2024 at 11:17 am
Scott #12
“Your sixth reason would have been a strong one circa 2006. Since then, though, it’s become apparent that (commercially justified or not) tech companies, VCs, and governments can and will invest many billions of dollars to make this happen.”
But the current investments from government and wall street are based on wrong understanding (as you wrote) and somewhat unrealistic expectations about what can be delivered and when, so who knows when those investments may dry up (“internal” effort in publicly owned big tech can shift on a dime).
Maybe the QC field should start to float the idea that QCs are the thing that will make AI much better … throw in some mumbo-jumbo about how quantum amplitudes add up and cancel contributions, very much like in a neural net!
Another thing is whether the manufacturing of QC circuits can be done at scale. Classical electronics certainly can be made at scale, but even in that case only a handful of companies have the know-how and capital to build cutting-edge chip fabs (which is a decades-long investment), and the margins of return are quite slim. So that would be a massive investment effort, competing with everything else.
Comment #18 September 23rd, 2024 at 5:45 pm
Fred #17:
> the current investments from government and wall street are based on wrong understanding (as you wrote)
I don’t believe Scott ever wrote that. The existence of bullsh*t quantum computing hype does not imply that every dollar of investment in QC is due to said hype.
Comment #19 September 24th, 2024 at 1:44 am
Hi Scott,
Yes, what I am saying is that it might turn out to be >100 times as expensive as anticipated, putting us into the trillion dollar range or above. Leaving aside the question of where to even get that amount of money, you’d then also have to ask whether you will realistically ever make sufficient profit to justify the investment.
Even when it comes to cryptography: how much money can & will the Chinese possibly spend to decrypt stolen US secrets that are plausibly decades old? A trillion? 10 trillion? 100 trillion? At some point it’ll be so ridiculously expensive that it’s no longer worth it, or that much money is just impossible to get together.
I’m not saying that this *will* happen. Maybe there’ll be a way to scale up QC without blowing up costs. I just mean that seeing the issues with scaling up fridges and connecting traps etc it’s a valid concern.
Sabine
Comment #20 September 24th, 2024 at 9:37 am
Martin Mertens #18
Well, Scott wrote this:
“We should acknowledge that laypeople and journalists and unfortunately even some investors and government officials have been misled by the people whose job it was to explain this stuff to them.”
It’s just impossible to tell what portion of investment is based on reasonable expectations vs unreasonable expectations. In the short term, accurate understanding from investors doesn’t necessarily matter all that much, as long as the tech progresses enough, thanks to initial investments, to keep the ball rolling and then bring in further, more mature investments.
Comment #21 September 24th, 2024 at 4:00 pm
I think all the applications you list are sensible, but miss the following case for optimism about QC: no one has ever had the chance to actually use a large error free machine! So many important advances in computing have been discovered by accident or trial and error, and I think you can make a good case that the entire AI revolution was only possible because our society first happened to really like playing 3d video games and there’s a good hardware overlap between that computation and training and evaluating neural networks. Pretty much all of our optimism for QC relies only on applications where we can make a good performance estimate from pen and paper math, and not to denigrate those efforts in the slightest, but they can’t possibly cover the full space of performant and useful quantum algorithms, and I’m sure there will be some crazy surprises once these machines actually work at scale.
Comment #22 September 24th, 2024 at 4:09 pm
Eliot K #21: I explicitly mentioned the possibility of new quantum algorithms being discovered, and advocated continuing effort there. But the fundamental difficulty with quantum algorithms is, always, since the beginning, that it’s not good enough to notice a positive (that a quantum computer can do something). You also need to convince yourself of a negative (that a classical computer can’t do the same thing just as efficiently). This is why I expect it to be much harder to find new applications of QCs by “just messing around,” than it was to find new applications of classical computers that way.
Comment #23 September 24th, 2024 at 7:48 pm
@Scott #22:
You don’t need to do that, though. If the quantum computer solves a problem, it still solves the problem even if a classical computer could solve it faster. So for the quantum computer to be preferable, all that’s needed is that we don’t know how to solve the problem classically.
Comment #24 September 24th, 2024 at 7:55 pm
Ben Standeven #23: Yes, of course. The trouble is that thinking that way creates a strong incentive not to look too hard to see whether a classical algorithm with similar performance might exist. Ask how I know! 😀
Comment #25 September 24th, 2024 at 9:42 pm
“if QC could only be rendered irrelevant via a revolution in classical complexity theory of this magnitude, then that would seem like a pretty strong argument in favor of QC.”
Really? A 2^(n^a)-time algorithm for 3SAT, with a<1, could derail QC?
Why India? India does not seem to have any reasonable quantum base.
Comment #26 September 24th, 2024 at 10:32 pm
anonymous #25: I mean, either the improvement for classical algorithms for NP-complete problems is not a huge deal and doesn’t derail QC, or else it derails QC because it’s a huge deal. There’s no option of “it derails QC despite not being a huge deal,” was my point.
I’m not sure how this workshop came about, to be honest, but I think maybe it was when Modi visited the US?
Comment #27 September 25th, 2024 at 7:44 am
Scott #22 and #24: Respectfully, I’m not sure that I agree with you. I think the nature of the algorithms under exploration will change qualitatively once we have useful hardware QCs. One of the major difficulties with current quantum supremacy demonstrations stems from the fact that they tend to be about (respectfully) obscure complexity-theoretic math problems that haven’t been subjected to much classical optimization before. So it isn’t immediately obvious what their classical difficulty is, and figuring that out requires extensive and nontrivial classical optimization.
But I think that once useful hardware QCs come around, most people will start messing around with potential algorithms for practically (i.e. economically) useful applications. And, by their nature, these problems will have much better-understood classical difficulty than today’s quantum supremacy problems do. Most of the hard work on the classical side will have already been done!
So I doubt that we’ll see many examples of a practical quantum advantage that survive any level of serious scrutiny for very long. It’ll be hard to sustain them once industry has actual pre-existing benchmarks for the exact same problems.
And, even in the (probably unlikely) case that (a) someone actually does discover a quantum algorithm that gives a practical advantage (over the current classical state of the art) for an economically useful problem, but then (b) someone else discovers a classical algorithm that beats it, I think it’d be fair to count that as a genuine practical “win” for QC – even one that may not be satisfying to a complexity theorist.
Comment #28 September 25th, 2024 at 9:00 am
Scott #22: I completely agree that finding useful quantum algorithms is much harder than classical ones, both because we don’t really understand the space of possible speedup mechanisms all that well and because, as you say, any quantum approach needs to dramatically outperform the best classical option to be viable. Realistic estimates put the prefactor advantage of parallel silicon over fault tolerant quantum computers as at minimum a trillion (I’m thinking of Babbush et al’s PRX from a few years ago but there are likely more recent estimates). And given the cost differential quantum computing is really a tool of absolute last resort, to be applied to problems where all known classical methods have been exhausted. So my guess is that most QC exploration will aim at problems where a lot of work has already been put into classical methods, and they’ve failed. That doesn’t preclude other researchers from following a quantum algorithms breakthrough with an improved classical method that beats it, of course (as you say, this happens!), but it does raise the bar.
All those caveats in mind, I guess I’m still more optimistic than you are that the actual availability of the devices will be a major step change in algorithm discovery. Right now there are really only a handful of core speedup mechanisms doing most of the work in almost every performant quantum algorithm; if even one new qualitatively distinct mechanism is discovered through exploration on real hardware that could unlock a huge array of new applications.
Comment #29 September 25th, 2024 at 9:15 am
Scott –
Regarding the second-to-last paragraph in the main posting on demonstrations of quantum computational advantage, I recall some talk about repeatedly running a random circuit sampling experiment just long enough to find a collision, and using (e.g.) Feynman’s path integral formulation on the collision to determine its amplitude.
If I understand it, the idea is that if there was no error-free execution of the circuit then the outputs would be uniform and the time to find a collision would be bounded by the birthday paradox, but if the quantum computer faithfully executed the circuit, at least sometimes, then a collision could be found at least quadratically faster (depending on the fidelity of the circuit.)
I also gathered that back-of-the-envelope calculations suggested that a collision could be found with decent fidelities and with ~50-60 qubits in a couple of weeks of continuous operation of a superconducting-qubit computer. I guess the time to find a collision would scale very nicely with the fidelity of the two-qubit gates.
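For intuition, here is a minimal sketch of the birthday-bound baseline being compared against; the distribution size and trial count are just illustrative, not the proposal’s actual parameters.

```python
import random, math

# Birthday-bound baseline for the collision idea: sampling uniformly from N
# outcomes, the first repeat shows up after roughly sqrt(pi*N/2) samples. Any
# bias toward "heavy" outputs, as an ideal random circuit would produce, makes
# collisions arrive sooner than this baseline.
def samples_until_collision(sampler):
    seen, count = set(), 0
    while True:
        count += 1
        x = sampler()
        if x in seen:
            return count
        seen.add(x)

N = 2 ** 20
trials = 200
avg = sum(samples_until_collision(lambda: random.randrange(N))
          for _ in range(trials)) / trials
print(f"uniform over 2^20: ~{avg:.0f} samples to a collision "
      f"(birthday estimate ~{math.sqrt(math.pi * N / 2):.0f})")
```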
To me it’s an interesting experiment that at least could show that the computer, at least sometimes, held possession of a bespoke highly-entangled state. It’s maybe easier to do than some other approaches, although it takes a full month of dedication on the QC. Can you say whether that idea got legs? Perhaps a couple of weeks is too long, or the advantages over linear cross-entropy benchmarking are not so clear.
Comment #30 September 25th, 2024 at 12:11 pm
Given the current pace of technological progress, it seems inevitable that any major improvements can come only with huge state-based support: if not in the trillions, then at least in the tens of billions (the US government is finding out the hard way that with anything less, you can’t even have semiconductor fabs as advanced as those in Taiwan). This is true for fusion, QC, curing cancer, and even areas where there is no dearth of private funding, like AI. Not only do you need this kind of monetary investment; it needs to be concentrated in a single effort akin to Apollo or the LHC, not spread thin among many. Now the question is whether governments across the world have the appetite for this kind of investment when no national pride or competition is involved. Take for example curing cancer, which neither the US nor the EU has taken up as a priority the way they did Apollo or the LHC, in spite of the fact that any breakthrough would immensely benefit the populace. So until China throws up that Sputnik moment in one of these fields, it will be hard to get Western governments out of their slumber.
Comment #31 September 25th, 2024 at 2:22 pm
Mark Spinelli #29: Yeah, I talked to the authors of that paper enough that they offered me coauthorship (which I turned down)! I completely agree, I think their collision-based variant of Random Circuit Sampling is worth a try, and I’m pretty optimistic that some experimental group will bite.
Comment #32 September 25th, 2024 at 2:38 pm
Ted #27: I mean, it’s not like we’re pondering this question in a vacuum! We now have 20+ years of experience with people trying to design quantum algorithms for economically useful problems by messing around with heuristics, complexity theory be damned. And we’ve seen exactly how it goes: namely, they confidently claim quantum advantage, whether it’s there or not. And because that’s the narrative everyone wants to hear, it’s gotten halfway around the world before the skeptics have even started the work of refuting it. And then comes the next unfounded quantum advantage claim, and so on.
Your position depends on the belief that, once we have actual scalable QCs to experiment with, this dynamic will suddenly and permanently change. Maybe—I hope so! But I beg some forgiveness if the history makes me less optimistic than you.
Comment #33 September 25th, 2024 at 2:43 pm
You are correct. Even if P=NP and SAT is in, say, n^3 complexity, people may not care. To use it against factoring, you have to unravel the multiplication algorithm into circuits at the level of AND-OR-NOT gates and look for a witness using the P=NP algorithm. Forget about discrete logarithm and the involved circuits for modular exponentiation: yikes, that looks like SHA256 in depth complexity. Why bother using P=NP at all when we can use Shor directly? Outside cryptography, in other fields, what use can P=NP even provide that traditional optimization cannot? It seems P=NP would be only of theoretical value.
Comment #34 September 25th, 2024 at 4:54 pm
Dear Scott,
This is my first time commenting, I’ve been following your blog and research for two years now but I wanted to say that I really appreciate this most recent blog and in general the work you put into the spaces of QC and AI.
I wanted to add on to the idea of unique and niche applications of QC in the future by telling you about the research I am starting in my PhD at the University of Plymouth this fall, and how I ended up in the position I am now.
Currently, I am about to start a PhD in Quantum Computer Music, led by Dr. Eduardo Reck Miranda who helped write the first book on the subject in 2022. Dr. Miranda did his PhD on AI in Music in the early 1990s, as well as much more that I encourage you to read online. The idea and premise of the research is pointing out that since the 1950s with the Mark I and II computers, music technology and computer science have always been growing together, and that this won’t stop in the 2020s with the emergence of more viable quantum computers.
I want to make it clear that as a research scientist I align myself with telling people in my circles that QC is commonly misunderstood to be faster and better than classical computers in every way when in reality there are very niche advantages in the form of Shor’s/Grover’s/Bernstein-Vazirani’s etc. algorithms and not just some God machine that puts all of classical computers to rest.
I was first introduced to QC in the summer of 2022 where I took a class at Indiana University, and have been completely obsessed with quantum information science since then. During this time I got to explore creating quantum circuits in Qiskit, and bashing my head against Mike and Ike and Thomas Wong’s books until I felt I came away with 0.5% more than I did the week before.
During my undergrad, which I just finished this past spring studying Oboe Performance at Indiana University’s Jacobs School of Music, I had the opportunity to be a Science Undergraduate Laboratory Intern (SULI) at Argonne National Laboratory under the mentorship of Dr. Yuri Alexeev.
Since this was the summer of 2023, I came out the other end with my first publication, “Sonifying Quantum Decoherence on IBM Quantum Devices: Mapping T1 Decoherence Values from Microseconds to Hertz”. As complicated as that might sound, it was quite a simple mapping: a lower T1 value was representative not just of a shorter coherence time but of a lower musical pitch, and vice versa, a higher T1 value represented not only a longer coherence time (I mean, it’s still millionths of a second) but a higher musical pitch.
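A toy version of that kind of mapping might look like the following; the ranges and constants are my illustrative choices here, not necessarily the paper’s.

```python
# A guess at the shape of the mapping described above (the paper's actual
# formula may differ): linearly rescale a qubit's T1 time, in microseconds,
# onto an audible frequency range, so shorter coherence times sound lower.
def t1_to_pitch(t1_us, t1_min=50.0, t1_max=300.0, f_min=110.0, f_max=880.0):
    """Map a T1 value in microseconds to a frequency in Hz (roughly A2 to A5)."""
    t1_clamped = max(t1_min, min(t1_max, t1_us))
    frac = (t1_clamped - t1_min) / (t1_max - t1_min)
    return f_min + frac * (f_max - f_min)

for t1 in (60, 120, 250):
    print(f"T1 = {t1} us -> {t1_to_pitch(t1):.0f} Hz")
```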
Eventually with another paper I wrote with my colleagues, in October 2023 I ended up in Berlin, Germany presenting my research at the 2nd International Symposium on Quantum Computing and Musical Creativity (ISQCMC). This conference is where I got to meet my now PhD advisor Dr. Miranda in person, as well hear about the upcoming PhD program that I am starting October 1st.
The company sponsoring my PhD is called Moth, a startup founded in February 2024 that focuses on quantum computing applications in the arts, music, and gaming; I’m the music side of that, while everyone else in the company has a stronger STEM background and does the research.
There’s a lot to learn and Moth is trying to establish themselves as the first to look ahead at what in 2024 seems to be a completely bonkers application of QC, but the goal is one day to build research that can benefit music producers whose tasks on Digital Audio Workstations (DAWs) are limited computationally. Who knows how the upcoming prospect of fault-tolerant QC will impact this research in the next decade.
Sorry for writing so much, I wanted to tell you my story since I very much value what you have to say in QC and AI, and I find your perspectives in science (and politics) to be refreshing 🙂
All my best,
Alex Alani
Comment #35 September 25th, 2024 at 7:05 pm
Scott #32: Well, you certainly have much more experience than I do with the sociology of QC hucksters. But it seems to me that scalable QCs will indeed cause the dynamic to “suddenly and permanently change”.
Today, it’s really hard – essentially impossible – to disprove a claim of quantum advantage based on heuristics. We can only say “Well, there’s no theoretical evidence for an advantage – and while we can’t classically simulate this heuristic quantum algorithm out to the regime where its proponents claim an advantage, if you extrapolate the curves out, then they seem to be converging to the same asymptotic scaling behavior as the classical algorithm…” – and by that point, you’ve already long since lost 99% of the general public (including the popular media). So the media can semi-defensibly run the story on the grounds that “It hasn’t been ruled out, and it’s ‘big if true’.”
But once we have actual hardware that we can test on, we’ll be able to give a much, much simpler refutation: “We tried their proposed algorithm, and it didn’t work.” I think that the general public will find this counterargument to be much, much more understandable than the current ones. So anyone claiming a quantum advantage will face a much higher bar to being taken seriously.
(Of course, even after we have scalable QCs, hucksters may always be able to claim that “If we just scale up the QC compute by one more order of magnitude, then all kinds of magic will suddenly kick in with my heuristic algorithm.” But I think that once we’ve developed some kind of quantitative empirical intuition for the actual dollar costs and benefits of various quantum algorithms, people will take these kinds of claims much less seriously than they do today.)
Comment #36 September 25th, 2024 at 9:50 pm
I am not sure if this is exactly a contribution to human knowledge, but here’s a (novel?) naysay for QC.
It is clearly very difficult to build a quantum computer. Another name for a quantum computer is a reduction from a problem in BQP to an experiment. In other words if we had a lot of experiments that were BQP-complete, we would already have a lot of quantum computers, and since we don’t, they aren’t. All of our economic hopes appear to rest on the expectation that the macroscopic natural world is in BQP, but not in P, even though it isn’t BQP-complete.
DFT corrections for electron correlation continue to improve, expanding the range of quantum chemical problems solvable accurately in P. Perhaps no asymptote will ever be reached and chemistry will turn out to lie in P, with parameters coming from a finite amount of computation in BQP, after all.
Comment #37 September 25th, 2024 at 9:59 pm
Concerned #36: Alas, your naysay is not novel. 🙂 I think I was debating that possibility on this blog with Gil Kalai 15 years ago. Of course, if the ordinary natural world is all in P, with only some strange engineered systems (e.g. fault-tolerant quantum computers) jutting out into BQP-P, then I’d very much like to understand what new principles cause that to happen. Hopefully the attempt to build fault-tolerant QCs will elucidate those principles.
Comment #38 September 26th, 2024 at 9:46 pm
All agreed, except PsiQ – not gonna happen
Comment #39 September 27th, 2024 at 6:12 am
Hi Scott, here are some comments on the two nice posts on quantum computing that dealt especially with progress in quantum error correction.
1) Overall, there are several points where I agree with Scott, such as the crucial importance of improving the quality of 2-qubit gates.
2) I see nothing wrong with Scott’s optimism and hopes, and these hopes are shared by many in the quantum computing community (and outside of it as well). I myself am hopeful that we are now in the “money time” for quantum computers. Naturally, I expect that the outcome will be negative—for instance, efforts to improve quantum 2-qubit gates will hit a wall.
3) In terms of progress toward demonstrating “blatantly good” quantum error correction and achieving “blatant” quantum advantage, what people now hope to accomplish in the next five years is very similar to what they aimed to achieve ten years ago (in the then-upcoming five years). In fact, for blatant quantum advantage, some even prematurely claimed they achieved it five years ago. Of course, it’s possible that what was unrealistic optimism ten years ago is realistic now (or will be realistic ten years from now).
4) My suggestion is to make future efforts more transparent and allow researchers to examine the details of the proposals to demonstrate advantage, as well as the specific experimental claims and data.
5) Of course, I always took quantum computing research very seriously, and, in fact, I consider myself as part of this research. So the general framing of the present post does not fit my point of view. It’s true that some regard the whole idea as “unserious,” which is also a legitimate point of view.
For me, it’s fascinating not only to understand the physical reality (where I tend to think our world is devoid of quantum computing powers) but also to consider others’ viewpoints. Of course, there is much uncertainty surrounding various issues related to this endeavor.
One of the open questions is to what extent the new (and very understandable) comment policy on this blog is suitable for discussions that have been very interesting (and, I believe, useful) for the past 15 years.
6) Scott wrote: “The challenge for that position has always been to articulate, what is true about the world instead?”
Absolutely!
(As a matter of fact it is not only “instead”. The question also applies to quantum systems that do not involve any form of quantum error correction, even if quantum error correction can be manifested in some systems. See comment #37.)
7) “What is a model of correlated noise that kills QC without also killing scalable classical computing?”
a) When it comes to correlated errors, I have an old result from 2005 or 2006 that (for sufficiently small error rate) no matter how severe the correlations are, log-depth quantum computing (and thus efficient factoring) is possible.
b) The very low computational class (LDP) that emerged from my work with Guy Kindler (and my subsequent works) allows robust classical information but not robust quantum information.
8) “In any case, I think this position has been dealt a severe blow by the Random Circuit Sampling quantum supremacy experiments of the past five years. Scientifically, the most important thing we’ve learned from these experiments is that the fidelity seems to decay exponentially with the number of qubits, but “only” exponentially — as it would if the errors were independent from one gate to the next, precisely as the theory of quantum fault-tolerance assumes.”
a) This phenomenon that Scott mentions (if true) is encouraging for the possibility of quantum error correction but it is *not* precisely what is needed. The assumption of independent errors between gates still allows for the possibility that a single error-event could affect many qubits.
b) Specifically, in the case of Google’s 2019 experiment, the fidelity’s excellent decay with the number of qubits is cause for concern regarding the quality of the experimental claims. This issue (among others) is explored in the 2023 arXiv paper by Rinott, Shoham, and me. In general, finding ways to scrutinize the quality of experimental claims is an important issue.
9) “So for anyone who believes this objection, I’d say that the ball is now firmly in their court.”
The ball is *always* firmly in my court.
Comment #40 September 29th, 2024 at 2:05 pm
What could cause QC to fundamentally cost trillions? Is there an indication that there is an exponentially increasing energy requirement?
Comment #41 September 29th, 2024 at 3:43 pm
Oh My Goodness. I can’t contain my admiration for the IDF. What a supremely competent organisation. King David’s chest would swell with pride to lead them into battle. King Leonidas would have welcomed them to stand with the Spartans against the Persians at Thermopylae. They are a historically outstanding military.
Comment #42 September 29th, 2024 at 6:24 pm
Gil #39: Thanks as always for sharing your thoughts here!
Chris #40: No, there are no indications that I know about of exponentially increasing energy requirements (and that seems like something that would kill QC entirely, rather than “merely” making it cost trillions). The fear is just that the “mundane” engineering problems end up being several orders of magnitude harder than currently anticipated.
Comment #43 October 1st, 2024 at 3:39 am
The Israeli air strike on the Houthis required air refueling. A video was released showing a video-based boom-control system in the Israeli tanker that uses 2D and 3D (special glasses) monitoring technologies. This system came from the US along with the refueling tankers. It has never worked properly in US tankers, but worked fine in the Israeli tankers after presumed Israeli modifications. Interesting choice for release.
Related to this, the modernization/replacement program for the aged US in-air tankers is another hard-to-believe case study of failure due to inept engineering. It goes on the ever-lengthening list of current US weapons programs that are essentially failures due to poor engineering.
Comment #44 October 2nd, 2024 at 3:38 am
I focus on the revolving-door corruption (in my view, though not illegal) endemic to oversight of US weapons programs and the deterioration of engineering quality, and compare them to Israeli competence, but in fairness, near peers have similar issues. Satellite reconnaissance indicates that the latest Chinese nuclear submarine sank immediately when put into the water at the shipyard.
Comment #45 October 2nd, 2024 at 11:14 pm
Why is breaking RSA the most talked-about quantum computing problem, and not discrete logarithm, even though Shor’s algorithm breaks both? Is the qubit count necessary for factoring much smaller than that for breaking discrete logarithm?
Comment #46 October 5th, 2024 at 4:22 pm
You may have seen this video from Jordan of Iranian ballistic missiles streaming overhead toward Israel. It looks surreal but unfortunately very real.
https://www.instagram.com/reel/DAl5kOyMEsc/?utm_source=ig_embed&ig_rid=b3016225-2fdf-4e34-bf28-63b96e7cb0b2&ig_mid=E5CE295C-B0F4-452C-9C87-C7D0AA03411B
Comment #47 October 7th, 2024 at 6:19 am
Love your blog! Have you seen Garnet Chan’s work on the potential for a speedup in quantum chemistry? It is much less clear than you make out; see here: https://www.nature.com/articles/s41467-023-37587-6 and https://arxiv.org/abs/2407.11235
I assume you’re basing your claim of a speedup on quantum phase estimation, but if you look into the resource estimates, they are prohibitively expensive (https://journals.aps.org/pra/abstract/10.1103/PhysRevA.106.032428), so even ground-state energies would take a huge amount of resources on any plausible near-term quantum computer. And that would not be all that useful anyway: we can already get good-quality ground-state energies for materials like lithiated graphite:
https://pubs.rsc.org/en/content/articlelanding/2014/ra/c3ra47187j
and yet it hasn’t transformed materials science, because the problem is that we can’t get those energies fast enough. You need very high-throughput evaluations of energies/forces, either to screen materials or to run molecular simulations.
There may be a few cases where we don’t have good enough classical algorithms, though that seems unlikely to me; and even if it’s true, it won’t be that useful unless it’s really fast.
Unsurprisingly, perhaps, this problem has already been basically solved with AI: you simply train a large neural network to predict energies and forces from a large database of high-quality calculations, and this gives you the speed you need. This approach is already here, already useful, and transforming the field of materials science and molecular simulation. And yet government officials and others have no idea about this, and continue to pour money into quantum computing on the false premise that it will ‘revolutionize’ chemistry, especially here in Australia.
Here’s one of many examples of the potential of this approach, in which standard simulation methods were shown to be wrong for protein simulations: https://www.science.org/doi/10.1126/sciadv.adn4397. And here is another, in which this approach was shown to provide state-of-the-art protein-ligand binding values for drugs: https://arxiv.org/abs/2401.16062
There is still the potential that a quantum computer could be useful for providing some training data to these neural nets if there are cases where truly no good classical approaches work but that will be a fairly minor contribution and seems fairly unlikely to me and certainly not worth the price.
If you could help raise awareness of this discussion instead of just repeating that chemistry will be the one useful application that would be great.
Comment #48 October 7th, 2024 at 6:37 am
Scott # 15
FeMoCo is a bad example; it is highly solvable using classical algorithms:
https://www.arxiv.org/abs/2407.07411
Also, Vlad is totally right about the state-preparation problem. Either you have a good initial state and classical algorithms are fine, or you don’t, but then quantum computers will have the same problem, since there’s no evidence of a speedup for finding good initial states.
Comment #49 January 31st, 2025 at 12:47 am
Great post, Scott! I really appreciated your insights on the race to build scalable fault-tolerant quantum computers. The updates on Google’s Kitaev surface code and Microsoft’s collaboration with Quantinuum were especially interesting. Thanks for sharing your thoughts—very helpful and inspiring!