That Financial Times QC skepticism piece
Several people have asked me to comment about a Financial Times opinion piece entitled The Quantum Computing Bubble (subtitle: “The industry has yet to demonstrate any real utility, despite the fanfare, billions of VC dollars and three Spacs”) (archive link). The piece is purely deflationary—not a positive word in it—though it never goes so far as to suggest that QC is blocked by any Gil-Kalai-like fundamental principle, nor does it even evince curiosity about that question.
As it happens, the author, physicist Nikita Gourianov, had emailed me a few days ago with some nice words about my own skeptical efforts on Shtetl-Optimized, and a request for comment on his article. So, as a way to get back into blogging after a 2-week hiatus, I figured I’d share my response.
Hi Nikita,
Thanks for the kind words about my blog, and for your piece, which I just read. There’s a great deal of truth in what you write, but I also take issue with a few points. You say:
A convincing strategy for overcoming these errors has not yet been demonstrated, making it unclear as to when — if ever — it will become possible to build a large-scale, fault-tolerant quantum computer.
In one sense this is tautologically true — the only fully convincing and clear demonstration that something is possible is to do it, as with the Wright brothers or the Trinity nuclear test. In another sense, though, we’ve known the “strategy” since the 1990s. It’s just that the fault-tolerance theorem called for gate fidelities 5-6 orders of magnitude better than anything achievable at the time. In the 25 years since, about 3 of those orders of magnitude have been achieved, so it doesn’t take any great imagination to foresee that the remainder could be as well. A layperson reading your piece might not understand this.
As for applications, my position has always been that if there were zero applications, it would still be at least as scientifically important to try to build QCs as it was to build the LHC, LIGO, or the James Webb telescope. If there are real applications, such as simulating chemical dynamics, or certifiable randomness — and there very well might be — then those are icing on the cake. This, of course, radically differs from the vision that now gets presented to investors and the press (hence all the railing on my blog!), but it also differs from what a reader of your piece would take away.
Anyway, thanks again for sharing!
Best,
Scott
Comment #1 August 29th, 2022 at 1:39 pm
What a polite response. Maybe even too polite, in that I’m not sure it conveys your issues with the piece. But I guess it’s already been published, so what’s the point of arguing?
Comment #2 August 29th, 2022 at 2:39 pm
With regard to your statement:
“As for applications, my position has always been that if there were zero applications, it would still be at least as scientifically important to try to build QCs as it was to build the LHC, LIGO, or the James Webb telescope.”
I would assume the author of the Financial Times article agrees with this, but would amend your position — which your following statements seem to echo — by stating that this is best carried out with government funding, rather than by misleading VCs and investors and potentially inflating a financial “bubble”.
Comment #3 August 29th, 2022 at 2:42 pm
I have a question.
Quantum computers use basic unitary gates. For example, the CNOT gate takes two qubits as input, one acting as a control: the other qubit’s state is flipped (a NOT operation) or left alone, depending on the control qubit.
My understanding is that, for spin, such a gate could be realized by applying a magnetic field for a specific time duration.
But since the control qubit is typically in some superposition state (spin up and spin down), the entire gadget producing the magnetic field of the CNOT gate would have to be kept in a superposition of both applying and not applying the field, right?
As a result a CNOT gate would have to be a quantum system, but is this really achievable? How big would such a gate have to be, in terms of equivalent digital qubits?
The issue of “initializing” such a CNOT gate doesn’t seem obvious either (in the same way qubits have to be prepared in their correct initial state).
PS: I hope I don’t get an answer of the type “of course, that’s why building a QC is hard!”… yet we’re not supposed to be skeptical.
Comment #4 August 29th, 2022 at 3:19 pm
I could be entirely wrong about this, but my main skepticism about realizing a QC is that, unlike classical circuits, you can’t easily “modularize” quantum circuits: there’s just no obvious way to take two separate quantum circuits and add them together such that the sum still behaves like a bigger quantum circuit with (roughly) double the number of qubits.
With classical circuits we use impedance matching to add circuits together (perfectly) to form bigger ones (Moore’s law), but with quantum circuits there’s no magic trick to extend coherence when doubling the size of the circuit.
In other words, two sub-circuits of N qubits each have 2^N possible states on their own, for a total of 2^(N+1) states, but, when joined together, they would suddenly have to handle 2^(2N) states (2N qubits now have to be in perfect coherence instead of just N), and such a joining can’t be achieved through some “standard” port.
Therefore scalability is probably exponentially hard.
Without such modularity, I just don’t see how we’re ever gonna build QCs with millions of physical qubits, let alone millions of digital qubits.
Comment #5 August 29th, 2022 at 3:20 pm
In case anyone is interested, here are some additional messages I sent to Nikita, which also address some of the comments above. (If Nikita wants to post his part of the exchange here, he’s more than welcome to, but I didn’t think to ask permission.)
I don’t think anyone really knows whether superconducting qubits will be the way to go. They do have a problem of cross-talk between nearby qubits, although they can now achieve >99.5% fidelity for 2-qubit nearest-neighbor gates and >99.9% for 1-qubit gates, and ~99.99% would surely be enough to achieve fault-tolerant QC using surface codes if anyone cared enough and spent enough money. In the meantime, though, photonics or trapped ions or neutral atoms or maybe even topological qubits could pull ahead over the next decade — each one has its strong partisans, and while there are difficulties with scaling each (obviously so, or we’d already be there 🙂 ), what’s interesting is that they’re DIFFERENT difficulties. So as I see it, there are two possibilities: either there’s eventually a platform that just works, the way silicon transistors just worked for scalable classical computing. Or, if there’s some principle that explains why an insurmountable obstacle will arise for EVERY platform, then that itself would constitute a giant new discovery about the physical world! Either way, though, the way to find out the truth is to try it and see! That’s why part of me is happy that money is finally being invested in QC commensurate with its scientific importance, even though another part is unhappy that it’s often for the wrong reasons — and intellectual honesty compels me to say so out loud.
What the fault-tolerance theorem demonstrates, I’d say, is that based on known physics together with very weak assumptions about the nature of errors, there is no fundamental, thermodynamics-like obstacle to building a scalable QC. There’s “merely” the practical problem — as there was in the past for heavier-than-air flight or fission, and as there is today for controlled fusion. And to say that something hasn’t yet been proven feasible in practice strikes me as no different from just reiterating that it hasn’t already been done!
Comment #6 August 29th, 2022 at 3:23 pm
fred #3:
But since the control qubit is typically in some superposition state (spin up and spin down), the entire gadget producing the magnetic field of the CNOT gate would have to be kept in a superposition of both applying and not applying the field, right?
No, that’s flatly incorrect. The CNOT’s control qubit potentially has to be in superposition — which is why the CNOT is a 2-qubit gate! — but that’s all. The “gadget producing the magnetic field” is typically a big, macroscopic piece of classical machinery, and there’s no quantum uncertainty about whether it’s applying a CNOT at a given time: it either is or it isn’t.
Comment #7 August 29th, 2022 at 3:38 pm
fred #4:
I could be entirely wrong about this…
I think so, yes. 😉
You can build bigger quantum circuits out of smaller ones, just like you can do with classical circuits. Indeed, modularity is a standard part of quantum algorithm design. Of course you can’t combine quantum circuits in arbitrary ways (e.g., ways that break unitarity) and still end up with something valid, but who ever said you could?
As for scaling, there’s the extremely well-known difficulty that you ultimately need fault-tolerant qubits, and that induces a large overhead and also requires physical qubits with error rates below some threshold, even as they’re interacting with each other. I don’t see that the need for modularity imposes any additional difficulty beyond that one.
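To make that concrete, here’s a minimal numpy sketch of the relevant linear algebra (just matrices, nothing about any physical platform; the particular gates are arbitrary): composing two small circuits is a tensor product, and a single gate across the boundary then entangles the two halves.

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# A small 2-qubit "module": Hadamard on the first qubit, then a CNOT
module = CNOT @ np.kron(H, I)                # 4x4 unitary

# Composing two modules on a 4-qubit register is just a tensor product
combined = np.kron(module, module)           # 16x16 unitary

# A gate straddling the boundary (CNOT on the middle two qubits) then
# entangles the two halves -- no special "port" is needed
straddle = np.kron(I, np.kron(CNOT, I))

state = np.zeros(16); state[0] = 1.0         # |0000>
state = straddle @ (combined @ state)
print(np.round(state, 3))                    # amplitudes of an entangled 4-qubit state
```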
Comment #8 August 29th, 2022 at 3:42 pm
Scott #6
“The CNOT’s control qubit potentially has to be in superposition — which is why the CNOT is a 2-qubit gate! — but that’s all. […] there’s no quantum uncertainty about whether it’s applying a CNOT at a given time: it either is or it isn’t.”
So you mean that the CNOT gate is like a Schrödinger cat?
From an external observer’s point of view, its state has to be perfectly hidden (both dead and alive) based on the control qubit in superposition?
But, still, if the gate is a macro system, it’s not just about hiding its state from the outside world, but also about making sure that it doesn’t radiate energy (noise) that would cause decoherence of the rest of the QC, no?
Basically you don’t want any of your gates to start acting as observers of the rest of the QC.
Comment #9 August 29th, 2022 at 3:45 pm
Gourianov’s comment
“the commonly forgotten caveat here is that there are many alternative cryptographic schemes that are not vulnerable to quantum computers. It would be far from impossible to simply replace these vulnerable schemes with so-called “quantum-secure” ones”
is rather unfortunate given that just a few weeks ago one of the more prominent schemes (isogeny-based SIKE) was suddenly and decisively broken, even by a classical algorithm. I doubt that there’s enough confidence in any of the surviving “quantum-secure” schemes to roll them out to replace RSA/ECC any time soon.
Comment #10 August 29th, 2022 at 4:11 pm
“The CNOT’s control qubit potentially has to be in superposition — which is why the CNOT is a 2-qubit gate! — but that’s all. The “gadget producing the magnetic field” is typically a big, macroscopic piece of classical machinery, and there’s no quantum uncertainty about whether it’s applying a CNOT at a given time: it either is or it isn’t.”
I don’t get it.
How isn’t the gate in a superposition of both applying its effect and not applying it?
My understanding is that a CNOT gate has two inputs qb1 and qb2 (the control).
If qb2 was entirely spin up, the CNOT gate would switch the spin of qb1 by applying a magnetic field for a certain period of time, long enough to cause qb1 to rotate its spin.
If qb2 was entirely spin down (0), then the CNOT gate would have nothing to do on qb1 (i.e. no magnetic field is applied).
But, if qb2 is in a superposition of 0/1 (spin up/spin down), how is it that the gate isn’t also in a superposition of being applied and not applied to qb1? At least for the duration of the “computation”, until the state of qb1 (and all the other qubits) settles?
In other words, the entirety of the CNOT gate has to be entangled with the control qubit, no?
Comment #11 August 29th, 2022 at 4:35 pm
Fred #3: I believe that from a many-worlds perspective, the very thing that makes a computer quantum (as opposed to classical) is that the qubits in the quantum processing unit don’t get entangled with whatever macroscopic apparatus physically enacts the quantum logic gates. In the many-worlds picture, a regular classical computer performs bit logic operations by applying voltages or magnetic fields (or whatever) – not so different from a quantum computer. The only difference is that a regular computer’s bits get entangled with the outside environment just as you describe, so the bits’ quantum state quickly decoheres and becomes effectively classical within each decohered “world”.
From a many-worlds perspective, the hardest part of building a large quantum computer isn’t entangling the qubits together – that happens inevitably – but instead is keeping the qubit entanglement entirely “internal” and controlled, and preventing the qubits from also getting entangled with the macroscopic control apparatus. (At least that’s the case for “delicate” qubits, like superconducting transmons. Maybe less so for weakly interacting qubits like photonic qubits.)
The power of quantum computing lies in the delicate interplay between the highly entangled qubits in the QPU and the classical control operations (including the human operators!), which (ideally) are in a perfectly unentangled tensor product state with the QPU. It’s just as important to keep the classical side classical as it is to keep the quantum side quantum.
Comment #12 August 29th, 2022 at 4:59 pm
Ronald de Wolf #9: It’s a fair point. I thought about calling attention to the recent downfall of much of isogeny-based crypto, but then decided that lattice-based crypto still looks pretty solid, so that it would be more of a quibble around the edges than a response to the core claim.
Comment #13 August 29th, 2022 at 5:05 pm
fred #10: I fully endorse how Ted #11 explained this. It might help to forget about the “if-then” interpretation of the CNOT gate, and just think about it as one particular 4×4 unitary matrix, which you can either apply to a pair of qubits, or not apply, but which you’re never in a superposition of applying and not.
Incidentally, networks of CNOTs acting on up to a few dozen entangled qubits are already a pretty common experimental reality (in ion traps, for example).
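For concreteness, here’s the whole story in a few lines of numpy (abstract linear algebra only, not any particular hardware):

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

plus = np.array([1, 1]) / np.sqrt(2)   # control qubit in superposition
zero = np.array([1, 0])                # target qubit in |0>
state_in = np.kron(plus, zero)         # amplitudes over |00>, |01>, |10>, |11>

# The gate is one fixed matrix; there's no superposition of "applying it"
# and "not applying it". Applied to a superposed control, it simply
# produces an entangled output state.
state_out = CNOT @ state_in
print(np.round(state_out, 3))          # [0.707 0. 0. 0.707] = (|00> + |11>)/sqrt(2)
```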
Comment #14 August 29th, 2022 at 6:48 pm
Why do you think that certifiable randomness is a “real application” for QC? It is nearly infinitely cheaper to use a non-aligned group of people to produce certifiable randomness, and those results are nearly infinitely cheaper to certify.
Related: you listed only “simulating chemical dynamics”, not any possibly interesting and hyper-difficult CS problems that might be amenable to simplification with Grover’s, Shor’s, or NextPerson’s algorithms. Do you not have as much confidence in finding those types of problems?
Comment #15 August 29th, 2022 at 7:29 pm
Paul Hoffman #14: Which certifiable randomness scheme do you favor instead, then, and what exactly do you mean by “non-aligned”? For what it’s worth, I’m not at all confident that this will succeed as an application of QC, but the general idea of using QC to prove cryptographically-desirable behavior does now strike me as one of the most promising directions that I know.
The trouble with Grover’s algorithm is that, with any foreseeable fault-tolerance scheme, you need a staggeringly large problem before the square-root speedup wins out over the fault-tolerance overhead. And Shor’s algorithm and its variants are mostly good for breaking crypto. And the other quantum algorithms for classical problems that have been discovered in the past 28 years mostly have a limited range of application and/or unclear prospects for speedup, even while some of them have been extremely important theoretically.
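To put rough numbers on the Grover point (the overhead factor below is a placeholder, not a measured figure; real overheads depend on the error-correcting code, the hardware, and the clock speed):

```python
# Suppose each fault-tolerant logical Grover iteration costs the equivalent of
# C elementary classical steps, once error-correction overhead and slower gate
# times are folded in. Grover needs ~sqrt(N) iterations versus ~N classical
# steps, so the quantum approach only wins once C*sqrt(N) < N, i.e. N > C**2.
for C in (1e3, 1e6, 1e9):
    print(f"overhead factor {C:.0e}: Grover wins only for N > {C**2:.0e}")
```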
Comment #16 August 29th, 2022 at 8:06 pm
The certifiable randomness scheme I favor instead has been described by various people over the past few decades. A group of well-known people are each asked to publicly provide a 128-bit value. Everyone’s number is concatenated and the result is hashed with a function that is believed not to have any pre-image attacks. The only way that the result is not random is if every member of the group colludes with everyone else in the group before announcing their value.
Even in that very unlikely case, the number of non-random bits is bounded by the amount of computation they can perform between the time the group is selected and the values are announced. Certification comes from each well-known person saying “yes, that’s the value I provided”. (Various refinements to this simplified description have been suggested to reduce the ability of a fully colluding group to make more than a few bits predictable.)
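In code, the core of the scheme is just a few lines (names and values below are made up for illustration; SHA-256 stands in for a hash with no known pre-image attacks):

```python
import hashlib
import secrets

# Each well-known participant publicly provides a 128-bit value
contributions = {name: secrets.token_bytes(16)
                 for name in ("alice", "bob", "carol")}

# Concatenate in an agreed order and hash
concatenated = b"".join(contributions[name] for name in sorted(contributions))
public_random = hashlib.sha256(concatenated).digest()
print(public_random.hex())

# Certification: each participant confirms the value attributed to them,
# and anyone can recompute the hash from the published values.
```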
Thanks for your view on Grover’s/Shor’s/NextPerson’s. 28 years is indeed a long time, and maybe even billions of dollars is not enough to find problems that fit the algorithms.
Comment #17 August 29th, 2022 at 8:54 pm
What I got out of the many discussions on this blog about QC uses is that Feynman’s original proposal of simulating quantum systems is still the likeliest and most interesting application.
Comment #18 August 30th, 2022 at 12:57 am
Hello Scott (#15), Security folks and crypto engineers have been saying for a while that, in crypto, quantum computing solves problems that we already have good-enough solutions for. When you ask for a classical scheme for “certifiable randomness”, the exact scheme is likely to depend on your exact requirements, but this is a problem that has been studied for decades and has good solutions. I suggest looking at coin-flipping protocols (e.g., every participant picks a random number, publishes a commitment to it, then everyone reveals, and you hash their concatenation).
Some reading to learn more about generating cryptographically random numbers without various certifiability properties: https://crypto.stackexchange.com/q/1858/, https://eprint.iacr.org/2012/643, https://eprint.iacr.org/2016/1067.
Some reading to see other cryptographers expressing that, from a practical/engineering standpoint, quantum brings little value in this area: https://crypto.stackexchange.com/q/64810/ (“complete hype”, “attacking the part of the problem that doesn’t need new technology”), https://www.schneier.com/blog/archives/2008/10/quantum_cryptog.html (“still unbelievably cool, in theory, and nearly useless in real life”), https://www.schneier.com/blog/archives/2018/08/gchq_on_quantum.html (“clever idea, but basically useless in practice” – see also the GCHQ report and the section on QRNGs).
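For anyone who hasn’t seen it, here’s roughly what the commit-then-reveal version looks like (purely illustrative; the hard part in practice is handling participants who refuse to reveal after seeing others’ values):

```python
import hashlib
import secrets

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Hash-based commitment: the nonce hides the value until it's revealed."""
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + value).digest(), nonce

# Commit phase: everyone publishes only their commitment
values = {name: secrets.token_bytes(16) for name in ("alice", "bob", "carol")}
commitments = {name: commit(v) for name, v in values.items()}

# Reveal phase: everyone opens; anyone can check the openings and compute the result
for name, v in values.items():
    c, nonce = commitments[name]
    assert hashlib.sha256(nonce + v).digest() == c, f"{name}'s opening doesn't match"

output = hashlib.sha256(b"".join(values[n] for n in sorted(values))).digest()
print(output.hex())
```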
Comment #19 August 30th, 2022 at 2:03 am
Is this “three orders of magnitude in 25 years” evenly spaced? So is it like 8 years for each, or is it more like 2-8-15? The two seem to me to indicate very different futures.
Comment #20 August 30th, 2022 at 5:01 am
Some Security Person #18 (and Paul Hoffman #16): I’m familiar with coin-flipping protocols. If you want a continual source of random bits, rather than having it be a one-off thing, isn’t the issue typically that an individual can always strategically withdraw participation (claiming their Internet connection was down or whatever) if they see that the result isn’t going to go their way?
QKD is a completely different topic from QC.
Comment #21 August 30th, 2022 at 8:46 am
Ted, Scott #13
Thanks! Mathematically, I understood that the gate is represented as a matrix acting on the two qubits at once and giving the right output, but I was misunderstanding the physical implementation of such a gate: the “control” qubit isn’t acting as an actual “input” to the gate, i.e. it’s not controlling the gate operation; rather, the gate mechanism is applied to both qubits (like applying a laser or magnetic pulse), and the two qubits are arranged so that the interplay between their states (under the effect of the gate) is what ends up making one qubit appear to be the “control” one.
So then indeed the gate is a classical macroscopic gadget; it doesn’t (and shouldn’t) become entangled with the qubits.
I found an example of this in the paper
https://arxiv.org/pdf/cond-mat/0102482.pdf (they talk about realizing a C-NOT/C-ROT operation).
Comment #22 August 30th, 2022 at 9:00 am
Ronald #9 and Scott #12 – Gourianov makes the ill-informed statement that, “most cryptography currently used to protect our internet traffic are based on the assumed hardness of the prime factorisation problem”. This doesn’t matter for his point, since all currently standard public-key cryptography is vulnerable to Shor’s algorithm. Still, it suggests that he is not all that familiar with cryptography.
Gourianov goes on to say that “there are *many* alternative cryptographic schemes that are not vulnerable to quantum computers”. Lattice-based cryptography is not *many* alternatives, to the extent that all variations of it rest on the hardness of the same thing, namely the shortest-vector or closest-vector problem. I wonder if his essay does not recognize the distinction between private-key cryptography, where there truly are a huge number of options, and public-key, which is a delicate and counterintuitive swindle and is where the real fight is.
Comment #23 August 30th, 2022 at 12:43 pm
Scott 12,
It’s not just one break. There were three last-minute attacks on three totally unrelated systems. The finalist Rainbow was taken out almost as badly as SIKE. The third attack was on lattice systems (CRYSTALS). It wasn’t bad enough to stop NIST from choosing them, but it’s not reassuring.
Comment #24 August 30th, 2022 at 12:57 pm
Regarding cryptography, is it also the case that all past communication relying on RSA and Diffie-Hellman will have been retroactively rendered insecure once a quantum computer is built? If so, it seems like this is an issue that should be better publicized, and that we should be rushing to switch to a post-quantum cryptographic scheme ASAP!
Comment #25 August 30th, 2022 at 1:10 pm
SR #24: Yes, that’s a major reason for the worry. Anyone with secrets that need to stay secret for decades should probably be using post-quantum crypto already!
Comment #26 August 30th, 2022 at 4:57 pm
Hi Scott,
I am curious how quantum error correction addresses all the sources of error. I read about how error-correcting codes can redundantly represent a logical qubit with many qubits, and are able to undo bit-flip and sign-flip errors without measuring the logical qubit. However, it’s not clear to me why those are the only types of error possible. What if there is a small chance that the logical qubit gets measured inadvertently (equivalent to getting entangled with the macroscopic classical environment)? How does quantum error correction fix these issues?
Comment #27 August 30th, 2022 at 5:46 pm
Former Student #26: Peter Shor’s fundamental observation, which started the whole subject of quantum error-correction back in 1995, was that every 1-qubit error can be represented via some appropriate linear combination of no error, an X error (i.e., bit-flip), a Z error (i.e., phase-flip), and an XZ error (i.e., both), all acting on the qubit in question. And therefore, if your code detects and corrects bit-flips and phase-flips, then it automatically detects and corrects all other 1-qubit errors as well, even including measurements. I explain this in more detail in Chapter 27 of my QIS undergrad lecture notes—although, even having seen the proof, you might have to try some examples before you really understand and believe it.
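If you want to check the linear-algebra fact for yourself, here’s a quick numpy verification (just the decomposition claim, not a full error-correction demo):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
XZ = X @ Z

# Draw a random 2x2 matrix E -- i.e., an arbitrary 1-qubit error operator --
# and solve for its coefficients in the {I, X, Z, XZ} basis.
rng = np.random.default_rng(0)
E = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

basis = np.stack([M.flatten() for M in (I, X, Z, XZ)], axis=1)   # columns = basis matrices
coeffs = np.linalg.solve(basis, E.flatten())

reconstructed = sum(c * M for c, M in zip(coeffs, (I, X, Z, XZ)))
print(np.allclose(reconstructed, E))   # True: E = a*I + b*X + c*Z + d*XZ
```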
Comment #28 August 30th, 2022 at 9:35 pm
Former student #26: The correct model of an arbitrary map on states of a quantum system — including environmental entanglements, hidden measurements, etc. — is a transformation known as a TPCP map (trace-preserving completely positive map), or, in Nielsen and Chuang, a quantum operation. A general TPCP map outputs a mixed state or density matrix, even when the input is a pure or vector state. There is a fundamental result, going back to Kraus and Stinespring, that every TPCP map is a classical mixture of quantum superpositions of your favorite basis of linear operators on the Hilbert space of the system. The decomposition is not unique, but that doesn’t matter; it’s valid. In quantum error correction, people usually choose tensor products of Pauli matrices — the bit and phase flips that you mention — as the favorite basis. I call this the multi-Pauli basis; the more usual, somewhat confusing name is the Pauli basis, even for multiple qubits.
Some other basis of matrices would also be valid for this purpose, but that doesn’t matter, the multi-Pauli basis works. Peter Shor indeed discovered, in a rather hands-on way, what is now understood in a more abstract form: If you can correct multi-Pauli errors, then you can correct quantum superpositions of them, and then you can correct classical mixtures of quantum superpositions of them. Now, in a realistic error model, there is more to it than this; there is also some amount of statistical dust (Kraus-Stinespring style) supported on large errors that you cannot correct. But that doesn’t matter much either, precisely because it is statistical dust. It is analogous to the negligible chance in classical error correction that a robust code will be swamped by too much error.
A good thought experiment is what happens if you have (say) n qubits and they are all simultaneously rotated by a small angle epsilon. At first glance, that looks very different from X, Y, and Z multi-Pauli errors.
However, if epsilon is small enough relative to n, then you get a power series expansion in multi-Pauli errors that converges rapidly. It is an important moment to reconstruct your intuition (or at least it was for me), but it all checks out in the end.
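Here is that thought experiment in numbers, as a rough sketch (real error models are messier, but the rapid falloff is the point):

```python
from math import comb, sin

# Rotate each of n qubits about Z by a small angle eps. Since
#   Rz(eps) = cos(eps/2)*I - i*sin(eps/2)*Z,
# the per-qubit probability weight on the Z term is p = sin(eps/2)**2, and the
# total weight on multi-Pauli errors acting on at least k qubits is a binomial
# tail -- which falls off rapidly when n*p is small.
n, eps = 100, 0.01
p = sin(eps / 2) ** 2

for k in range(4):
    tail = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))
    print(f"weight on errors hitting >= {k} qubits: {tail:.2e}")
```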
Comment #29 August 31st, 2022 at 8:22 am
But since each qubit is in a state of superposition, a mix of 0 and 1, with the proportions drawn from a continuous domain (complex amplitudes), doesn’t that make a QC an analog machine?
And like every analog computer, you can only represent your input data imprecisely (that’s true of digital computers too), but also the errors would accumulate with the number of qubits and gates/operations (unlike with classical computers, where the manipulation is truly perfect because it’s digital), right? And all those errors eventually lead to less-than-perfect cancellations of the final wave function where you expect them, resulting in Shor’s algorithm starting to output wrong answers with some probability? Is it just a matter of redoing the computation multiple times to average it all out?
Comment #30 August 31st, 2022 at 11:23 am
fred #29 – “But since each qubit is in a state of superposition, a mix of 0 and 1, the proportion being a continuous domain (complex numbers), doesn’t that make a QC an analog machine?”
No it doesn’t, because quantum amplitudes are statistical features and not stored information. Even a randomized classical bit, a coin flip if you like, can be in a continuous state in the sense that it has some probability p of being 1 (and probability 1-p of being 0), where p is a real number between 0 and 1. The key point is that a bit doesn’t actually know its own chance of being 0 or 1, it just knows a binary answer if you ask it. A qubit has the same relationship to its quantum amplitudes.
What is cool is that even though n qubits can’t store any more information than n bits, they can still be manipulated with gates (and Hamiltonians, etc.) with continuous parameters. In this sense, a quantum computer can slightly more resemble an analog machine, even though at heart it isn’t one.
“Is it just a matter of redoing the computation multiple times to average it all out?”
You do need to account for deviations in the quantum state, that much is true, but quantum error correction is far better than averaging out the error with repeated trials. Just as with classical error correction, a set of noisy physical qubits can encode a smaller number of logical qubits that are less noisy than their physical host. The methods for this are essentially digital and not analog, so in that sense, you have to figure out a way to have it both ways. But you can, since quantum amplitudes are statistical features and not stored information.
There is one last trap whose escape is particularly subtle, and is solved with the Solovay-Kitaev theorem. It is one thing to protect logical qubits from environmental noise that can be pumped out from the physical qubits with error correction steps. But how do you protect the logical qubits from *your* imprecision, if you want to change them continuously? The answer is that you don’t change the logical qubits continuously, you only modify them using an allowed, finite set of discrete jumps. But the finite gate set can be used to approximate any desired continuous change, like a flea that hops around several times and eventually lands near its target. The Solovay-Kitaev theorem says that this can be computationally feasible, with a manageable amount of overhead.
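To illustrate the flea analogy (this is not the Solovay-Kitaev algorithm itself, which is far cleverer; just a brute-force search over a small discrete gate set):

```python
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])

theta = 0.7   # an arbitrary continuous rotation angle about Z
target = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def dist(U, V):
    """A distance between 2x2 unitaries that ignores the irrelevant global phase."""
    return np.sqrt(abs(2 - abs(np.trace(U.conj().T @ V))))

# Brute-force over all H/T sequences of length <= 10; longer sequences get
# closer, and Solovay-Kitaev guarantees the sequence length grows only
# polylogarithmically with the desired accuracy.
best = (np.inf, "")
for length in range(1, 11):
    for seq in product("HT", repeat=length):
        U = np.eye(2)
        for g in seq:
            U = (H if g == "H" else T) @ U
        best = min(best, (dist(U, target), "".join(seq)))

print(f"best approximation up to length 10: {best[1]} (distance {best[0]:.3f})")
```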
Comment #31 August 31st, 2022 at 1:24 pm
Greg #30
thanks for the detailed explanations!
Comment #32 September 1st, 2022 at 6:54 am
Hi Scott. I very much like what you said: “if there were zero applications, it would still be at least as scientifically important to try to build QCs as it was to build the LHC, LIGO, or the James Webb telescope. If there are real applications, such as simulating chemical dynamics, or certifiable randomness — and there very well might be — then those are icing on the cake.”
Recently, many quantum start-ups have been claiming they are doing quantum finance, and one even published a “white paper”. It seems quantum finance is the most attractive field for these start-ups (easier money?). Many big banks and finance giants seem to be involved too.
There are also many research papers, e.g. https://arxiv.org/pdf/2201.02773.pdf, where for example they claim that “In fact, finance is estimated to be the first industry sector to benefit from quantum computing, not only in the medium and long terms but even in the short term.”
My intuition is that these financial systems usually have large uncertainties, so the quantum advantage, if any, will be very weak and unstable, and could easily be beaten by optimizing some classical algorithm that tolerates a certain error level.
Looking forward to hearing your opinion on this “quantum finance” stuff…
Comment #33 September 1st, 2022 at 7:44 am
Chaoyang Lu #32: Hi from CQIQC in Toronto, and good to have you here as always!
For finance problems, I expect many square-root Grover-style speedups, which are open to the usual objection that it’s a long, long, long time before they win against classical in practice. I’d be much more surprised if there turned out to be exponential speedups!
Comment #34 September 9th, 2022 at 2:36 am
Scott wrote
“So as I see it, there are two possibilities: either there’s eventually a platform that just works, the way silicon transistors just worked for scalable classical computing. Or, if there’s some principle that explains why an insurmountable obstacle will arise for EVERY platform, then that itself would constitute a giant new discovery about the physical world!”
I agree with pretty much everything else you wrote above, but I have to disagree on that one. The third possibility, which I think is the most likely one, is that it’s just much more difficult, and will take considerably longer, to get into the range where the interesting applications are than the quantum enthusiasts like to believe.
The prime example is nuclear fusion. We know it works, there’s no reason why it should stop working when scaled up, and yet despite 50+ years and god-knows-how-many billion dollars we’re still not getting energy out.
The most likely thing to happen with quantum computing is exactly the same: incremental progress over decades with very little return on investment.
Comment #35 September 9th, 2022 at 4:10 am
Sabine #34: I don’t really disagree, I was just taking a longer-term view. There’s a reason why I put the word “eventually” in my statement! 😀
Comment #36 September 9th, 2022 at 4:23 pm
Sabine, I should’ve added: while I can’t rule out your pessimistic scenario for the next decade or two, I don’t think it’s obviously correct either! There are efforts right now that as far as I can tell have a non-negligible chance of actually succeeding at building scalable QCs in the next decade (I’d say the same for fusion with net energy gain, incidentally). And if one or more of the efforts succeed, all that will remain will be to figure out what the devices are good for, besides scientific exploration and codebreaking! 🙂
Comment #37 September 9th, 2022 at 6:46 pm
Quantum computing may follow the path that virtual reality has. Back in the late 1980s it was hyped as the next big thing. By 2000 you almost never heard the term, but now it is appearing all around us. Quantum processors may start to make a common appearance in computers in a couple of decades. It may be longer still before they play more of a central role.
Comment #38 September 11th, 2022 at 11:01 am
Is there any example of something useful that could be achieved by simulating chemical dynamics?
Comment #39 September 26th, 2022 at 6:51 am
Hi Scott,
I wonder if this analogy is helpful.
Imagine the task of balancing basketballs on each other.
Two is easy, three is hard. We could make a machine that is better than humans and maybe it could do 4. But could a machine ever do 50? There is no reason it couldn’t in principle but it would seem like a waste of time trying to get there.
Could this be the way adding qubits to a qc will turn out?
Not impossible in principle but obviously impossible in practice?
-Zac
Comment #40 September 26th, 2022 at 8:38 pm
Maximilian #38: Sure, simulating the chemical reactions that are used to make fertilizer? Or the binding of proteins with receptors? The reactions relevant to photovoltaics? All sorts of other chemical reactions of scientific and industrial significance?