Of course cryptography has furnished us with wonderful coin-flipping protocols, which generate bits that are guaranteed to be random (under computational assumptions) as long as *either* of the two parties wanted them to be (and had the ability to generate random bits at all). However, when you try to generalize these protocols to N parties, there’s a well-known difficulty, which is that whoever moves last in the protocol could abort (pretend their Internet connection is down or whatever) if they don’t like the outcome. In any case this sort of thing is quite expensive if your goal is a randomness “beacon,” constantly broadcasting trusted random bits to anyone who might want them.

With the QC-based protocol, a stream of challenges arrives at the QC that (we assume) were not predictable in advance by the QC, and in response, the QC has to generate a stream of bits that

(1) can be verified as having come from the QC, and

(2) can be verified (under computational assumptions) as containing a lot of real entropy, since not even a QC would’ve been able to quickly answer the challenges in a secretly deterministic way.

From there you can use a randomness extractor to get nearly uniform random bits.
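As an illustration of that last step, here is a minimal sketch of the simplest extractor there is, the von Neumann extractor. (The actual protocol would use a seeded extractor suitable for min-entropy sources, since the QC's output bits are only guaranteed to contain entropy, not to be independent coin flips; von Neumann's trick assumes independent, identically biased bits and is shown here purely for intuition.)

```python
def von_neumann_extract(bits):
    """Von Neumann extractor: turns independent, identically biased
    coin flips into unbiased bits. Each pair (0,1) emits 0, each pair
    (1,0) emits 1, and equal pairs (0,0)/(1,1) are discarded."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

von_neumann_extract([0, 1, 1, 0, 1, 1, 0, 0])  # -> [0, 1]
```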

I should say upfront, there are two drawbacks of the scheme that may limit its practicality.

First, the classical verification is expensive, taking ~2^{n} time for an n-qubit QC. The good news is that ~2^{n} is still feasible when n is 50 or 60—i.e., the best anyone will be able to hope for in the near future—and the verification only needs to be done occasionally anyway.

The second drawback is that the stream of challenges needs to be unpredictable to the QC. So you could reasonably ask: however we’re generating those challenges—for example, with a cryptographic protocol like the one you mentioned—why not just generate our random bits that way, skip the QC, and be done with it??

The short answer is that, if you’re very paranoid, the QC gives you an upgrade in the level of security. Starting from bits that are merely unpredictable to the QC now, you get bits with no pattern that can *ever* be found—even if (for example) the crypto you’re using were to be broken in the future. Cryptographers call this property “forward secrecy.”

Anyway, bottom line, it’s not obvious to me that the QC-based scheme will be useful in practice, but it’s also not obvious to me that it won’t be. It all depends on people’s security requirements, how efficient an implementation can be practically achieved, etc. etc.

The current way of obtaining verifiable randomness is by precommit: You send me a hash, I send you a hash, we both acknowledge receipt, you send me a preimage, I send you a preimage, the xor of preimages is the random number. I rest assured that the random number is either at least as random as my preimage, or the world needs to move on to SHA(n+1).
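A minimal Python sketch of this commit-reveal exchange, using SHA-256 as the hash (function names and the 32-byte preimage length are my own illustrative choices, not anything specified in the comment):

```python
import hashlib
import secrets

def commit(preimage: bytes) -> bytes:
    # The commitment is just the hash of the preimage: binding under
    # collision resistance, hiding if the preimage has enough entropy.
    return hashlib.sha256(preimage).digest()

def open_and_verify(commitment: bytes, preimage: bytes) -> bool:
    # Check that a revealed preimage matches the earlier commitment.
    return hashlib.sha256(preimage).digest() == commitment

# Two-party coin flip: exchange commitments first, then reveal.
a = secrets.token_bytes(32)        # my secret randomness
b = secrets.token_bytes(32)        # your secret randomness
ca, cb = commit(a), commit(b)      # round 1: swap hashes
assert open_and_verify(ca, a)      # round 2: swap preimages, verify
assert open_and_verify(cb, b)
shared = bytes(x ^ y for x, y in zip(a, b))  # xor of preimages
```

The xor guarantees the result is at least as random as either party's preimage, which is exactly the "I rest assured" property above.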

What do quantum computers add to this? Of course it would be nice if you could generate and publish a random number, and forevermore could prove to doubters that these really are random. Alas, this cannot work: You could generate a bunch of such numbers+proofs, and only keep your favorite.

Is this about shaving off the latency of some round-trip times? Then I’d be very skeptical about “practical”, since earth is small and light is fast.

Or is it about multi-party protocols? I.e. you want to generate random numbers and prove their randomness to a very large number N of skeptics, some of which are trolls and some of which are your sockpuppets, with O(1) / O(N log N) communications and O(1) / O(log N) latency? Then the naive protocol fails, but I would be surprised if no classical protocol exists.
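For reference, the naive N-party generalization of the commit-reveal protocol looks like the following sketch (my own illustrative code; 32-byte values are an arbitrary choice). It makes concrete the well-known failure mode mentioned at the top of the post: the last party to reveal already knows the outcome and can simply abort.

```python
import hashlib
import secrets

def n_party_coin_flip(n: int) -> bytes:
    """Naive N-party commit-reveal: everyone broadcasts a hash, then
    everyone reveals a preimage; the xor of preimages is the result.
    Caveat: whoever reveals last sees the outcome before committing to
    finish, so they can abort if they dislike it."""
    preimages = [secrets.token_bytes(32) for _ in range(n)]
    commitments = [hashlib.sha256(p).digest() for p in preimages]  # round 1
    # Round 2: each reveal is checked against the earlier commitment.
    for p, c in zip(preimages, commitments):
        assert hashlib.sha256(p).digest() == c
    result = bytes(32)
    for p in preimages:
        result = bytes(a ^ b for a, b in zip(result, p))
    return result
```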

In my opinion this would be the time when quantum computing would become “real” in the public perception – sooner than Sabine thinks.

This is very different from the Kolmogorov complexity ideas covered in Part I of my *American Scientist* piece. It’s much more similar to the randomness-from-Bell-experiment ideas covered in Part II of the piece.

We don’t know yet whether the new protocol will be useful in practice. The question did not escape our notice. We (= me and various people at Google) are working on it.

“the whole point of my scheme is to prove to a faraway skeptic—one who doesn’t trust your hardware—that the bits you generated are really random.”

I’m confused, I thought that randomness can’t be proven, and, in the past, you wrote:

“under reasonable assumptions, these ideas can be used in principle to “certify” that a string of numbers was really produced randomly—something that one might’ve imagined impossible a priori. Unfortunately, the article also explains why this fact is of limited use in practice: because Kolmogorov complexity is uncomputable!”

Mixing in the same sentence expressions like “reasonable assumptions”, “in principle”, then “limited use in practice”… are you basically saying that God would find it useful?

Generation and sampling of quantum states of light in a silicon chip:

https://www.sciencedaily.com/releases/2019/07/190702112732.htm