Quantum Sabinacy

Sabine Hossenfelder—well-known to readers of Shtetl-Optimized for opposing the building of a higher-energy collider, and various other things—has weighed in on “quantum supremacy” in this blog post and this video. Sabine consulted with me by phone before doing the video and post, and despite what some might see as her negative stance, I agree with what she has to say substantially more than I disagree.

I do, however, have a few quibbles:

1. We don’t know that millions of physical qubits will be needed for useful simulations of quantum chemistry.  It all depends on how much error correction is needed and how good the error-correcting codes and simulation algorithms become. Like, sure, you can generate pessimistic forecasts by plugging numbers into the best known codes and algorithms. But “the best known” is a rapidly moving target—one where there have already been orders-of-magnitude improvements in the last decade.

2. To my mind, there’s a big conceptual difference between a single molecule that you can’t efficiently simulate classically, and a programmable computer that you can’t efficiently simulate classically.  The difference, in short, is that only for the computer, and not for the molecule, would it ever make sense to say it had given you a wrong answer! In other words, a physical system becomes a “computer” when, and only when, you have sufficient understanding of, and control over, its state space and time evolution that you can ask the system to simulate something other than itself, and then judge whether it succeeded or failed at that goal.

3. The entire point of my recent work, on certified randomness generation (see for example here or here), is that sampling random bits with a NISQ-era device could have a practical application. That application is … I hope you’re sitting down for this … sampling random bits! And then, more importantly and nontrivially, proving to a faraway skeptic that the bits really were randomly generated.

4. While I was involved in some of the first proposals for NISQ quantum supremacy experiments (such as BosonSampling), I certainly can’t take sole credit for the idea of quantum supremacy!  The term, incidentally, was coined by John Preskill.

5. The term “NISQ” (Noisy Intermediate Scale Quantum) was also coined by John Preskill.  He had no intention of misleading investors—he just needed a term to discuss the quantum computers that will plausibly be available in the near future.  As readers of this blog know, there certainly has been some misleading of investors (and journalists, and the public…) about the applications of near-term QCs. But I don’t think you can lay it at the feet of the term “NISQ.”

23 Responses to “Quantum Sabinacy”

  1. Geoffrey Irving Says:

    Oh no! A while ago I was defending you as being quoted out of context in the “quantum computers are a practically useful way to make randomness” article, but now you’ve said it explicitly: “That [practical] application is…sampling random bits”.

    I’d like to give you the opportunity to walk this back, since maybe I’m misinterpreting. I agree that random bits may be a great route to demonstrating quantum supremacy, but existing computers have no trouble generating sufficient randomness for practical applications (all you need is enough to seed a PRNG, which is only a few hundred bits (and you don’t need to know which bits those are)).
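    (To make that concrete, here is a toy sketch using only Python's standard library: 256 bits of genuine entropy, stretched into arbitrarily much pseudorandomness by hashing a counter. The construction is purely illustrative, not a vetted DRBG.)

        import os, hashlib

        # 256 bits of true entropy -- this is all the "real" randomness consumed.
        seed = os.urandom(32)

        def prg_block(seed: bytes, counter: int) -> bytes:
            """Illustrative PRG: SHA-256 of seed || counter gives 32 pseudorandom bytes."""
            return hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()

        # A megabyte of output, entirely determined by the 256-bit seed.
        stream = b"".join(prg_block(seed, i) for i in range(32768))
        print(len(stream), "pseudorandom bytes from a", 8 * len(seed), "bit seed")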

    Would you agree with the statement that there’s very little probability (say <1%) that any mainstream crypto library on a mainstream computer will derive randomness (even indirectly) from a source using this kind of quantum supremacy-derived algorithm? (Slightly hard to phrase this since existing computers do use randomness coming from quantum sources.)

  2. Geoffrey Irving Says:

    In the above I forgot to set a time limit on the prediction: it needs to be restricted to mostly-classical computers or something similar.

  3. Scott Says:

    Geoffrey: As I explicitly said in the post, the whole point of my scheme is to prove to a faraway skeptic—one who doesn’t trust your hardware—that the bits you generated are really random. If you don’t have that requirement, then generating random bits is obviously trivial with existing technology. If you do have the requirement, on the other hand, then you’ll have to do something interesting—and as far as I know, as long as it’s rooted in physics, it will either involve Bell inequality violation or quantum computation.

    The weird thing is, I’ve given hour-long talks where I’ve hammered home the above idea at least 20 times (“the entire point of this scheme is to prove the bits are random to a faraway skeptic…”), and then gotten questions afterward that showed that people completely missed it anyway (“why not just use my local hardware RNG? isn’t that random enough?”). Is there something about the requirement of provability that’s particularly hard to grasp?? 🙂

  4. Geoffrey Irving Says:

    That’s fair, but I think it’s this sleight of hand that unfortunately fools otherwise solid organizations like Quanta: you’re saying it’s a *practical application* with a constraint, but not discussing the fact that in practice no one will need that constraint except as a demonstration of quantum supremacy. So I think the cautious pedagogical thing to do is simply to not refer to it as a practical application.

  5. Scott Says:

    Geoffrey #4: But it’s not obvious that no one will need the constraint. In proof-of-stake cryptocurrencies (like the next-generation Ethereum), there’s a pressing need for a large source of public random bits, to run lotteries to decide who gets to add a new block to the blockchain. And crucially, everyone needs to trust that these random bits are not secretly biased. And there are other applications for public, verified randomness—which is why, for example, NIST started its Randomness Beacon a few years ago (though note that that requires users to place their trust in NIST).
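    (To make the lottery application concrete, here is a minimal sketch; the names and stake numbers are made up for illustration. The point is that anyone who could bias the public beacon bits could bias who wins.)

        import hashlib

        def pick_proposer(beacon_bits: bytes, stakes: dict) -> str:
            """Map a public random value to a staker, weighted by stake.  Anyone can
            recompute this, so fairness reduces entirely to the beacon bits being
            unbiased and unpredictable."""
            total = sum(stakes.values())
            r = int.from_bytes(hashlib.sha256(beacon_bits).digest(), "big") % total
            for staker, stake in sorted(stakes.items()):
                if r < stake:
                    return staker
                r -= stake

        print(pick_proposer(b"beacon output for round 42", {"alice": 60, "bob": 30, "carol": 10}))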

    As I discuss in the talk, there are several issues that could limit the practical value of my protocol, including the high cost of doing the verifications, and the question of “who trusts the verifier.” But a lack of applications for certified random bits is emphatically not the issue here.

  6. Bram Cohen Says:

    By far the most plausible practical result for quantum computation would be breaking public key cryptography. Not that it’s likely to break it in any ‘practical’ way, mind you, just break things enough that all of us cryptographers have to switch to much more expensive and inconvenient algorithms, to guard against attacks that cost a billion dollars to pull off each time.

  7. asdf Says:

    Why stop with quantum computers when you can have industrial hypercomputation? That’s my next startup! It’s merely a question of technical feasibility! 🙂

    https://www.cs.york.ac.uk/nature/workshop/papers/gc-hypercomp.pdf

  8. Scott Says:

    asdf #7: A lot of the stuff that’s been written on ‘hypercomputation’ gives all of us working on crazy-sounding models of computation a bad name. 😀

  9. P. Peng Says:

    Bram Cohen #6:
    Quite the ironic comment given that you are working on some protocols yourself that rely on problem hardness assumptions that are known not to apply to quantum computers. Your work assumes that the size of the class group of binary quadratic forms (of a given discriminant) is hard to calculate. But this is a finite abelian group: the very type of “hidden subgroup” problem that kicked off the excitement about quantum computing with Shor’s algorithm.

  10. RandomOracle Says:

    One thing Sabine suggests in passing (in the video) is that one could use a quantum computer to (efficiently) determine the color of a molecule, given the description of that molecule. Is this true?

    More formally, let MOLECOLOR be the following problem: we’re given a local Hamiltonian H acting on n qubits and values a, b with b - a = const (representing the frequency/energy range of some color of interest in the visible spectrum), as well as a constant temperature T; does the thermal state of H at temperature T have non-trivial support in the interval [a,b]?

    Is MOLECOLOR in BQP? In principle one could solve this by preparing a few copies of the thermal state and doing energy measurements to see if they fall within the range [a,b]. For 1D Hamiltonians, there’s a result by Bilgin and Boixo showing that the thermal state can be prepared in quantum polynomial time, provided the temperature is constant (which it is in our case, so we’re good). However, they mention that for higher dimensions their algorithm would not work, nor should we expect it to work because of the PCP theorem prohibiting even constant factor approximations (unless bad complexity theoretic stuff happens). Ok, but the constants matter here so maybe for a sufficiently large range (and temperature) such as those that we would care about in practice, it would work?
    Alternatively, is there some natural constraint that one could add to the problem to make it quantum tractable even in higher dimensions? For instance, would requiring that the Hamiltonian be geometrically local help?
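    (To pin down the quantity being asked about, here is a brute-force classical sketch, exponential time of course. It computes exactly what the quantum sampler would be estimating: prepare copies of the thermal state, measure the energy, and see how much weight lands in [a,b]. The 2x2 example Hamiltonian and the acceptance threshold are made up for illustration.)

        import numpy as np

        def thermal_weight_in_window(H: np.ndarray, T: float, a: float, b: float) -> float:
            """Exact (exponential-time) weight of the Gibbs state of H at temperature T
            inside the energy window [a, b]; units with k_B = 1."""
            energies = np.linalg.eigvalsh(H)
            weights = np.exp(-energies / T)
            weights /= weights.sum()
            return float(weights[(energies >= a) & (energies <= b)].sum())

        # MOLECOLOR, decision version: accept iff the window carries non-negligible weight.
        H = np.array([[1.0, 0.2], [0.2, -1.0]])   # toy 2x2 "molecule"
        print(thermal_weight_in_window(H, T=1.0, a=0.5, b=1.5) >= 0.01)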

  11. Scott Says:

    RandomOracle: Yes, calculating properties of the emission spectrum of a given molecule (like color) is a good example of a problem in BQP (or more precisely PromiseBQP).

  12. James Warren Says:

    Scott,

    <>

    But in practice quantum computers behave like they are well within BPP^NP (with the exception of Boson Sampling, forrelation, etc.). If quantum computers can solve a few problems outside of the PH, why can’t that ability be amplified to solve general #P problems or at least general NP problems like happens time and time again in computational complexity theory?

    If you showed that BQP \in BPP^NP and then showed that the NP-oracle could be simulated in BPP, and hence BQP \in BPP^BPP = BPP, would that run afoul of any relativization barriers? My understanding is that BPP=BQP would collapse the PH, but moving around universal quantifiers in polytime (but still exponential in the number of alternations) might not be expected to require the level of cleverness one would expect from a polytime algorithm for SAT or TQBF.

    So, the count of positive and negative terms for a given output string can easily be translated into #SAT, and so can the count over all output strings (with both the Hadamard random bits and the output-string bits as the variables). If the underlying SAT instances could solve general NP problems, it would probably violate unitarity, so they are presumably fairly restricted. (If we’re lucky, they reduce to parity check matrices.)

    Output strings typically involve exponentially many positive and negative terms, where all but an exponentially smaller number of such terms cancel. (This necessarily holds in the case of a single decision qubit.)
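    Concretely, the identity I have in mind is the standard path-sum one: for a circuit C over the Hadamard+Toffoli gate set with h Hadamard gates, <y|C|0^n> = (A_y - B_y) / 2^(h/2), where A_y and B_y count the Feynman paths reaching output y with sign +1 and -1 respectively, and each count is the number of satisfying assignments of a polynomial-size Boolean formula, i.e. a #SAT instance.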

    So, we could simulate BQP in BPP^NP if we could:

    (1) Bias the positive and negative terms enough (|A-B|/B > 1/poly(n)) that an FPRAS for #P could be used. I’m not sure how to go about this, but exponentially many positive and negative terms could be used for each output string, so it might require few ancillary qubits and less error correction.

    (2) Flatten the distribution of terms so that there are only polynomially many per output string (i.e. |A+B| < poly(n)). This seems like the most straightforward way to go about it. One can add ancillary qubits and use controlled logic gates to scramble the output to enough output strings. The obvious difficulty is that there are round-off errors that need to be corrected somehow. But, unless all error correction procedures are too "Hadamard-heavy" (and thus the number of terms hitting each output remains superpolynomial), it seems unlikely that there's much of a barrier.

    (3) Some other way.

    Scott #8,

    Wouldn't P!=PSPACE rule out hypercomputation in physical systems because noise would destroy the hypercomputer in finite time? Does the #P hardness of Boson Sampling make it likely that consistently violating conservation of energy in GR would require FP=#P?

  13. Scott Says:

    James #12: There are sufficiently many errors in what you wrote that my genuine suggestion is to work your way through a complexity textbook (doing the exercises) and then come back here if you’re still interested.

    Briefly, though: yes, any proof of BQP=BPP would have to be non-relativizing, and we learned last year from Raz-Tal that the same is true even for any proof of BQP ⊆ BPP^NP. This is something we know for certain.

    Also, I don’t agree with your central premise that “BQP is in BPP^NP except for a few weird exceptions like BosonSampling and Forrelation.” I agree that it could seem that way if you were previously focused on a few specific problems like factoring, but from today’s perspective, the results on BosonSampling, etc. and the Raz-Tal oracle suggest a picture wherein the general problem of simulating a quantum circuit is outside the polynomial hierarchy, and only some extremely special cases (like simulating the circuits for factoring) are in classes like BPP^NP.

    Also, even if no deterministic oracle weaker than a #P oracle seems to suffice to solve a sampling problem, that DOESN’T AT ALL imply that an oracle for the sampling problem would let you solve #P. Just like, even if you need superluminal communication to simulate QM in the context of a hidden-variable theory, that DOESN’T AT ALL imply that QM itself gives you the power of superluminal communication. Indeed, I regard the former distinction as one of the central insights to have emerged in this entire subject—which is why Arkhipov and I spent so much time on it in our 2011 BosonSampling paper. If one doesn’t understand how it could be true, then one really needs to work through the definitions until one does understand, before going further.

  14. fred Says:

    Hope this doesn’t count as “drive-by” linking as it seems related to the topic (and quite short)

    Generation and sampling of quantum states of light in a silicon chip:
    https://www.sciencedaily.com/releases/2019/07/190702112732.htm

  15. fred Says:

    Scott wrote:
    “the whole point of my scheme is to prove to a faraway skeptic—one who doesn’t trust your hardware—that the bits you generated are really random. ”

    I’m confused, I thought that randomness can’t be proven, and, in the past, you wrote:

    “under reasonable assumptions, these ideas can be used in principle to “certify” that a string of numbers was really produced randomly—something that one might’ve imagined impossible a priori. Unfortunately, the article also explains why this fact is of limited use in practice: because Kolmogorov complexity is uncomputable!”

    Mixing in the same sentence expressions like “reasonable assumptions”, “in principle”, then “limited use in practice”… are you basically saying that God would find it useful?

  16. Scott Says:

    fred #15: Randomness can’t be proven by a static proof, but it can be proven via a suitable interactive protocol, under plausible assumptions (e.g., “this untrusted device could do no more than such-and-such number of quantum computation steps per second”). That’s, like, the entire point of what we’re talking about, and the entire reason why it’s interesting.

    This is very different from the Kolmogorov complexity ideas covered in Part I of my American Scientist piece. It’s much more similar to the randomness-from-Bell-experiment ideas covered in Part II of the piece.

    We don’t know yet whether the new protocol will be useful in practice. The question did not escape our notice. We (= me and various people at Google) are working on it.

  17. Bram Cohen Says:

    P. Peng #9: Yes I’m well aware, hence the snark. We’re using those particular things because nothing else gets their features: Unique outputs, succinct proofs, no trusted setup, simple to calculate. And it rotates discriminants every few minutes, so an attack has to work on that time scale to be effective at all. But if anybody knows an algorithm for hashing to primes whose output can be verified faster than the time to do a bunch of primality checks, I’d like to hear it.

  18. Viv Kendon Says:

    Hi Scott — another potentially “drive-by” comment picking up on your point 2. “…a physical system becomes a “computer” when…” fits with the definitions of physical computing in relation to science and engineering that I have been working on with colleagues including Susan Stepney http://susan-stepney.blogspot.com/ — see https://cacm.acm.org/magazines/2017/8/219600-the-natural-science-of-computing/abstract and https://royalsocietypublishing.org/doi/full/10.1098/rspa.2014.0182 — it is important to agree on what computing is before arguing about what quantum computing is or isn’t. Our framework makes clear the difference between doing an experiment on a molecule and simulating a model of that molecule.

  19. Scott Says:

    Viv #18: Glad to know that we see eye-to-eye on this! My personal definition of “computer,” namely “any physical system capable of giving you a wrong answer,” originated not from any systematic thought about the question, but simply from annoyance at all the people who said “but every molecule is just a QC for simulating itself!,” convinced that they were making an original, devastating anti-QC argument that had never occurred to anyone in the field. 🙂

  20. wolfgang Says:

    I read that breaking bitcoin addresses would require about 1500 qubits (not millions) and could become feasible in a decade, if one extrapolates current progress.
    In my opinion this would be the time when quantum computing would become “real” in the public perception – sooner than Sabine thinks.

  21. Scott Says:

    wolfgang #20: That would’ve been 1500 logical qubits. The way you get an estimate of millions is by multiplying that by the thousands of physical qubits per logical qubit needed for current fault-tolerant schemes. On the other hand, as I said, no one really knows yet how far those schemes can be improved (or maybe even mostly avoided, as with topological QC).
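    (For a rough sense of the arithmetic, with the overhead number as a placeholder rather than a prediction:)

        # Back-of-envelope: algorithmic (logical) qubits times fault-tolerance overhead.
        logical_qubits = 1500          # the figure quoted in the comment above
        physical_per_logical = 1000    # placeholder; "thousands" per logical qubit for current schemes
        print(logical_qubits * physical_per_logical)   # 1,500,000 physical qubits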

  22. anon Says:

    Scott #3:

    The current way of obtaining verifiable randomness is by precommit: You send me a hash, I send you a hash, we both acknowledge receipt, you send me a preimage, I send you a preimage, the xor of preimages is the random number. I rest assured that the random number is either at least as random as my preimage, or the world needs to move on to SHA(n+1).
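    In code, the two-party version is just this (toy sketch, with SHA-256 as the commitment):

        import os, hashlib

        def commit(preimage: bytes) -> bytes:
            return hashlib.sha256(preimage).digest()

        # Each party picks a secret preimage and publishes its hash first.
        alice_pre, bob_pre = os.urandom(32), os.urandom(32)
        alice_commit, bob_commit = commit(alice_pre), commit(bob_pre)

        # ...both commitments are exchanged and acknowledged, then both preimages are revealed...

        assert commit(alice_pre) == alice_commit and commit(bob_pre) == bob_commit
        shared_random = bytes(x ^ y for x, y in zip(alice_pre, bob_pre))
        print(shared_random.hex())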

    What do quantum computers add to this? Of course it would be nice if you could generate and publish a random number, and forevermore could prove to doubters that these really are random. Alas, this cannot work: You could generate a bunch of such numbers+proofs, and only keep your favorite.

    Is this about shaving off latency of some round-trip times? Then I’d be very skeptical about “practical”, since earth is small and light is fast.

    Or is it about multi-party protocols? I.e. you want to generate random numbers and prove their randomness to a very large number N of skeptics, some of which are trolls and some of which are your sockpuppets, with O(1) / O(N log N) communications and O(1) / O(log N) latency? Then the naive protocol fails, but I would be surprised if no classical protocol exists.

  23. Scott Says:

    anon #22: No, it has nothing to do with latency. The main application we have in mind is indeed generating verifiable random bits to publish to a large number of people over the Internet—for example, for proof-of-stake cryptocurrencies. A second application is private randomness, if (for example) you trust Google to maintain your private key for you, but are worried that their hardware random number generators might be faulty or compromised by an adversary.

    Of course cryptography has furnished us with wonderful coin-flipping protocols, which generate bits that are guaranteed to be random (under computational assumptions) as long as either of the two parties wanted them to be (and had the ability to generate random bits at all). However, when you try to generalize these protocols to N parties, there’s a well-known difficulty, which is that whoever moves last in the protocol could abort (pretend their Internet connection is down or whatever) if they don’t like the outcome. In any case this sort of thing is quite expensive if your goal is a randomness “beacon,” constantly broadcasting trusted random bits to anyone who might want them.

    With the QC-based protocol, a stream of challenges arrive at the QC that (we assume) were not predictable in advance by the QC, and in response, the QC has to generate a stream of bits that
    (1) can be verified as having come from the QC, and
    (2) can be verified (under computational assumptions) as containing a lot of real entropy, since not even a QC would’ve been able to quickly answer the challenges in a secretly deterministic way.
    From there you can use a randomness extractor to get nearly uniform random bits.
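    Schematically, the shape of the loop is something like the sketch below. Every component is a placeholder: the sampler stands for the untrusted QC answering random-circuit challenges, the verifier for the expensive ~2^n spot check, and the SHA-256 accumulator for a proper seeded randomness extractor (which it is not).

        import hashlib
        from typing import Callable, Iterable, Iterator

        def certified_bits(challenges: Iterable[bytes],
                           quantum_sampler: Callable[[bytes], bytes],
                           verify: Callable[[bytes, bytes], bool],
                           spot_check_every: int = 1000) -> Iterator[bytes]:
            """Placeholder sketch of the protocol's shape.  The untrusted device must
            answer each unpredictable challenge quickly, which (under the hardness
            assumption) forces its answers to contain fresh entropy; the verifier
            spot-checks occasionally; accepted responses are fed to an extractor."""
            acc = hashlib.sha256()
            for i, challenge in enumerate(challenges):
                response = quantum_sampler(challenge)              # untrusted device's answer
                if i % spot_check_every == 0 and not verify(challenge, response):
                    raise RuntimeError("spot check failed: abort, device not trusted")
                acc.update(challenge + response)
                yield acc.digest()                                 # extractor stand-in only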

    I should say upfront, there are two drawbacks of the scheme that may limit its practicality.

    First, the classical verification is expensive (taking ~2^n time if we use an n-qubit QC). The good news is that ~2^n can still be done if n is 50 or 60—i.e., the best anyone will be able to hope for in the near future—and also the verification only needs to be done occasionally anyway.

    The second drawback is that the stream of challenges needs to be unpredictable to the QC. So you could reasonably ask: however we’re generating those challenges—for example, with a cryptographic protocol like the one you mentioned—why not just generate our random bits that way, skip the QC, and be done with it??

    The short answer is that, if you’re very paranoid, the QC gives you an upgrade in the level of security. Starting from bits that are merely unpredictable to the QC now, you get bits with no pattern that can ever be found—even if (for example) the crypto you’re using were to be broken in the future. Cryptographers call this property “forward secrecy.”

    Anyway, bottom line, it’s not obvious to me that the QC-based scheme will be useful in practice, but it’s also not obvious to me that it won’t be. It all depends on people’s security requirements, how efficient an implementation can be practically achieved, etc. etc.