On the JPMC/Quantinuum certified quantum randomness demo

These days, any quantum computing post I write ought to begin with the disclaimer that the armies of Sauron are triumphing around the globe, this is the darkest time for humanity most of us have ever known, and nothing else matters by comparison. Certainly not quantum computing. Nevertheless stuff happens in quantum computing and it often brings me happiness to blog about it—certainly more happiness than doomscrolling or political arguments.


So then: today JP Morgan Chase announced that, together with Quantinuum and DoE labs, they’ve experimentally demonstrated the protocol I proposed in 2018, and further developed in a STOC’2023 paper with Shih-Han Hung, for using current quantum supremacy experiments to generate certifiable random bits for use in cryptographic applications. See here for our paper in Nature—the JPMC team was gracious enough to include me and Shih-Han as coauthors.

Mirroring a conceptual split in the protocol itself, Quantinuum handled the quantum hardware part of my protocol, while JPMC handled the rest: modification of the protocol to make it suitable for trapped ions, as well as software to generate pseudorandom challenge circuits to send to the quantum computer over the Internet, then to verify the correctness of the quantum computer’s outputs (thereby ensuring, under reasonable complexity assumptions, that the outputs contained at least a certain amount of entropy), and finally to extract nearly uniform random bits from the outputs. The experiment used Quantinuum’s 56-qubit trapped-ion quantum computer, which received each challenge over the Internet and took a couple of seconds to respond to it. Verification of the outputs was done using the Frontier and Summit supercomputers. The team estimates that about 70,000 certified random bits were generated over 18 hours, in such a way that, using the best currently-known attack, you’d need at least about four Frontier supercomputers working continuously to spoof the quantum computer’s outputs and get the verifier to accept non-random bits.
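To make the division of labor concrete, here is a minimal sketch of the verifier’s side of the protocol, with the challenge generator, the remote quantum computer, the LXEB check, and the randomness extractor all abstracted as placeholder callables (this is an illustration, not the actual JPMC software):

```python
from typing import Callable, Sequence
import secrets

def certified_randomness_client(
    num_rounds: int,
    make_challenge: Callable[[bytes, int], object],        # (seed, round) -> challenge circuit
    query_qc: Callable[[object], Sequence[int]],           # circuit -> sampled bitstrings
    lxeb_score: Callable[[object, Sequence[int]], float],  # classical verification (expensive!)
    extract: Callable[[list, bytes], bytes],               # seeded randomness extractor
    threshold: float,
) -> bytes:
    """Sketch of the client/verifier loop; the callables are hypothetical placeholders."""
    seed = secrets.token_bytes(32)   # the verifier's only trusted resource: a short random seed
    accepted = []
    for i in range(num_rounds):
        circuit = make_challenge(seed, i)             # pseudorandom challenge circuit
        samples = query_qc(circuit)                   # must come back within the time limit
        if lxeb_score(circuit, samples) < threshold:  # the exponentially costly classical check
            raise RuntimeError("verification failed: response rejected")
        accepted.append(list(samples))
    # Under the complexity assumptions, the accepted samples carry min-entropy;
    # a seeded extractor turns them into nearly uniform output bits.
    return extract(accepted, seed)
```

The point to notice is that the only resource the verifier has to trust is its own short random seed; the challenges, the quantum computer’s responses, and the verification can all be public.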

We should be clear that this gap, though impressive from the standpoint of demonstrating quantum supremacy with trapped ions, is not yet good enough for high-stakes cryptographic applications (more about that later). Another important caveat is that the parameters of the experiment aren’t yet in the regime where my and Shih-Han’s formal security reduction applies: instead, for the moment, one only has “practical security,” i.e., security against a class of simplified yet realistic attackers. I hope that future experiments will build on the JPMC/Quantinuum achievement and remedy these issues.


The story of this certified randomness protocol starts seven years ago, when I had lunch with Or Sattath at a Japanese restaurant in Tel Aviv. Or told me that I needed to pay more attention to the then-recent Quantum Lightning paper by Mark Zhandry. I already know that paper is great, I said. You don’t know the half of it, Or replied. As one byproduct of what he’s doing, for example, Mark gives a way to measure quantum money states in order to get certified random bits—bits whose genuine randomness (not pseudorandomness) is certified by computational intractability, something that wouldn’t have been possible in a classical world.

Well, why do you even need quantum money states for that? I asked. Why not just use, say, a quantum supremacy experiment based on Random Circuit Sampling, like the one Google is now planning to do (i.e., the experiment Google would do about a year after this conversation)? Then, the more I thought about that question, the more I liked the idea that these “useless” Random Circuit Sampling experiments would do something potentially useful despite themselves, generating certified entropy as just an inevitable byproduct of passing our benchmarks for sampling from certain classically-hard probability distributions. Over the next couple of weeks, I worked out some of the technical details of the security analysis (though not all! it was a big job, and one that only got finished years later, when I brought Shih-Han to UT Austin as a postdoc and worked with him on it for a year).

I emailed the Google team about the idea; they responded enthusiastically. I also got in touch with UT Austin’s intellectual property office to file a provisional patent, the only time I’ve done that in my career. UT and I successfully licensed the patent to Google, though the license lapsed when Google’s priorities changed. Meanwhile, a couple of years ago, when I visited Quantinuum’s lab in Broomfield, Colorado, I learned that a JPMC-led collaboration toward an experimental demonstration of the protocol was already underway. The protocol was well-suited to Quantinuum’s devices, particularly given their ability to apply two-qubit gates with all-to-all connectivity and fidelity approaching 99.9%.

I should mention that, in the intervening years, others had also studied the use of quantum computers to generate cryptographically certified randomness; indeed it became a whole subarea of quantum computing. See especially the seminal work of Brakerski, Christiano, Mahadev, Vazirani, and Vidick, which gave a certified randomness protocol that (unlike mine) relies only on standard cryptographic assumptions and allows verification in classical polynomial time. The “only” downside is that implementing their protocol securely seems to require a full fault-tolerant quantum computer (capable of things like Shor’s algorithm), rather than current noisy devices with 50-100 qubits.


For the rest of this post, I’ll share a little FAQ, adapted from my answers to a journalist’s questions. Happy to answer additional questions in the comments.

  • To what extent is this a world-first?

Well, it’s the first experimental demonstration of a protocol to generate cryptographically certified random bits with the use of a quantum computer.

To remove any misunderstanding: if you’re just talking about the use of quantum phenomena to generate random bits, without certifying the randomness of those bits to a faraway skeptic, then that’s been easy to do for generations (just stick a Geiger counter next to some radioactive material!). The new part, the part that requires a quantum computer, is all about the certification.

Also: if you’re talking about the use of separated, entangled parties to generate certified random bits by violating the Bell inequality (see eg here) — that approach does give certification, but the downside is that you need to believe that the two parties really are unable to communicate with each other, something that you couldn’t certify in practice over the Internet.  A quantum-computer-based protocol like mine, by contrast, requires just a single quantum device.

  • Why is the certification element important?

In any cryptographic application where you need to distribute random bits over the Internet, the fundamental question is, why should everyone trust that these bits are truly random, rather than being backdoored by an adversary?

This isn’t so easy to solve.  If you consider any classical method for generating random bits, an adversary could substitute a cryptographic pseudorandom generator without anyone being the wiser.

The key insight behind the quantum protocol is that a quantum computer can solve certain problems efficiently, but only (it’s conjectured, and proven under plausible assumptions) by sampling an answer randomly — thereby giving you certified randomness, once you verify that the quantum computer really has solved the problem in question.  Unlike with a classical computer, there’s no way to substitute a pseudorandom generator, since randomness is just an inherent part of a quantum computer’s operation — specifically, when the entangled superposition state randomly collapses on measurement.

  • What are the applications and possible uses?

One potential application is to proof-of-stake cryptocurrencies, like Ethereum.  These cryptocurrencies are vastly more energy-efficient than “proof-of-work” cryptocurrencies (like Bitcoin), but they require lotteries to be run constantly to decide which currency holder gets to add the next block to the blockchain (and get paid for it).  Billions of dollars are riding on these lotteries being fair.

Other potential applications are to zero-knowledge protocols, lotteries and online gambling, and deciding which precincts to audit in elections. See here for a nice perspective article that JPMC put together discussing these and other potential applications.

Having said all this, a major problem right now is that verifying the results using a classical computer is extremely expensive — indeed, basically as expensive as spoofing the results would be.  This problem, and other problems related to verification (eg “why should everyone else trust the verifier?”), are the reasons why most people will probably pass on this solution in the near future, and generate random bits in simpler, non-quantum-computational ways.

We do know, from e.g. Brakerski et al.’s work, that the problem of making the verification fast is solvable with sufficient advancements in quantum computing hardware.  Even without hardware advancements, it might also be solvable with new theoretical ideas — one of my favorite research directions.

  • Is this an early win for quantum computing?

It’s not directly an advancement in quantum computing hardware, but yes, it’s a very nice demonstration of such advancements — of something that’s possible today but wouldn’t have been possible just a few short years ago.  It’s a step toward using current, non-error-corrected quantum computers for a practical application that’s not itself about quantum mechanics but that really does inherently require quantum computers.

Of course it’s personally gratifying to see something I developed get experimentally realized after seven years.  Huge congratulations to the teams at JP Morgan Chase and Quantinuum, and thanks to them for the hard work they put into this.


Unrelated Announcement: See here for a podcast about quantum computing that I recorded with, of all organizations, the FBI. As I told the gentlemen who interviewed me, I’m glad the FBI still exists, let alone its podcast!

50 Responses to “On the JPMC/Quantinuum certified quantum randomness demo”

  1. Ilyas Says:

    Scott. First of all, congratulations. I have been looking forward to today for a very long time. Secondly, thanks for the fulsome and careful description of the “what”. Even by your standards this is a really lovely piece. Lastly, I think that you and the rest of the technical team have also done a great job of outlining where next. We accept that challenge (the ‘we’ here being Quantinuum) and look forward to a fascinating future. Congrats again, you have been nurturing this experiment for a while now.

  2. Don McKenzie Says:

    re: your first sentence. This would really be a good time for some short being to throw a gold ring into a volcano. Hang in there.

  3. joe Says:

    Your comment about the FBI still existing reminded me of an article I saw yesterday about DOGE coming for the atomic spectroscopy group at NIST: https://www.wired.com/story/nist-doge-layoffs-atomic-spectroscopy/

  4. Oscar Says:

    What is the advantage over this of a protocol like
    Alice generates N bits of (potentially quantum) random data and sends Bob the hash
    Bob generates N bits of (potentially quantum) random data and sends Alice the hash
    Now that both parties have committed to what their random data is, they share the data and xor it together.
    This way, both parties can be certain that they have together generated a secure random number without needing a giant physics experiment.
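    For concreteness, here is a minimal sketch of that commit-reveal idea, using SHA-256 as the commitment (a real protocol would also salt the commitments so short inputs can’t be brute-forced):

```python
import hashlib, secrets

def commit(data: bytes) -> bytes:
    # Commitment is just a hash; in practice you'd hash in a random salt as well.
    return hashlib.sha256(data).digest()

# Each party generates its own (possibly quantum) randomness and exchanges commitments first.
alice_bits, bob_bits = secrets.token_bytes(32), secrets.token_bytes(32)
alice_c, bob_c = commit(alice_bits), commit(bob_bits)

# Only after both commitments are exchanged do the parties reveal their bits.
# Each side checks the other's reveal against the earlier commitment, so neither
# party can choose its bits after seeing the other's.
assert commit(alice_bits) == alice_c and commit(bob_bits) == bob_c
shared = bytes(a ^ b for a, b in zip(alice_bits, bob_bits))  # the jointly generated string
```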

  5. Bram Cohen Says:

    This concept is very confusing to me. It seems that what’s happening is I want randomness out of you, so I can send you a challenge, then you make a response to that challenge which contains certified randomness, meaning you had limited control over what it contained. This seems on its face to have a few fundamental limitations. First of all, there was in some sense already some randomness because I sent you a challenge. Or maybe it’s okay to use sequential nonces? More seriously, there doesn’t seem to be anything stopping you from making multiple valid responses to my query and returning whichever one of them has properties you like. For uses in blockchain these seem like inferior relatives of verifiable delay functions, which have the properties that they have a canonical correct answer and that they enforce an approximate amount of wall-clock time to generate the responses.

  6. Dylan Mahoney Says:

    Does the certified-ness of the outputs still allow for the possibility of some systematic bias? E.g. if somebody was generating bitstrings of length n for a lottery, and instead of generating a single uniformly random bitstring, they generated 10 of them and only publicly released the one that was “best” according to some hidden criterion, could your protocol detect this?

  7. A C Says:

    Congratulations, Scott!!

  8. Mark Spinelli Says:

    Congrats! Your patent application was just recently allowed in the US (a couple of months ago even).

    Is there a way to modify the scheme to have a pool of classical machines do the verification, rather than Frontier or Summit?

    In particular, could you use this to modify Brakerski et al.’s or Zhandry’s ideas: prepare \(\sum_x|x\rangle|f(x)\rangle\) for some quickly chosen Boolean f(x) composed entirely of classical gates (CCNOT, CSWAP, etc.), measure the second register y in the computational basis to collapse the first register onto the preimages, measure the first register d in the Hadamard basis or some other non-computational basis such as Square-Root-of-NOT, and then outsource the inversion of f(x) to classical machines to find all preimages of f(x)=y?

    The “certificate” is that d is a good sample of the Hadamard or Square-Root-of-NOT of all preimages of f(x).

  9. Adam Treat Says:

    Scott, this is very very cool. Congratulations! One question is whether the other protocol that relies on a fully fault tolerant QC is somehow better than yours for the “major problem” of the cost of verification or will it have the same problems?

    The “why should anyone trust the verifier” seems to be a hard one to overcome in the areas of certified cryptographic contracts since the whole thing is about not having to trust any centralized source. Then the cost of verification is extremely important to commoditize I guess.

  10. Alan Ho Says:

    Having been on the team at Google, I have an idea why Google changed its priorities, and what the community needs to do next. The problem came down to the fact that everyone thought that, without verification that can be done in polynomial time, it isn’t a practical application. However, I think that is a mistake made by theorists – as long as there is a significant computational gap between spoofing and verification, even if the verification is exponential it is OK to use the certified randomness for specific “public randomness” scenarios – for example, leader election in a blockchain protocol to replace verifiable delay functions (which are not quantum safe). @Scott – happy to discuss licensing your patent since I have my own company now. Let’s make this happen.

  11. Pradeep Niroula Says:

    Oscar #4, Bram #5: The coin-flipping protocol you describe lets Alice and Bob combine their length-n bit-strings into a common string, also of length n. However, our protocol lets you achieve an expansion where the end result is a much larger bitstring than what you started with. The question you raise is subtle enough that we point it out in a footnote of our paper surveying the applications of certified randomness: https://arxiv.org/pdf/2503.19759

    Bram #5, Dylan #6: Of course, someone may use the protocol to generate k samples and only return one, and that would only correspond to a reduction of about log k bits of entropy. Indeed, the guarantees we provide are unrelated to downstream misuse and biasing of randomness — the only way to prevent that is by having a binding protocol with all the participants, so that no single party can do such postselection. We discuss such distributed protocols in our survey of applications https://arxiv.org/pdf/2503.19759 (see the jointly certified randomness protocol therein).

    Mark #8: The verification is embarrassingly parallel and therefore can be achieved with any pool of classical computers. Your comments about Brakerski et al. refer to a “fault-tolerant” version of the certified randomness protocol for which verification is known to be efficient. The current experiment is “non-fault-tolerant” and verification, unfortunately, happens to be expensive, at least right now.

    Adam #9: One feature of the protocol is that anyone can verify the execution of the protocol. Further, we discuss distributed protocols without centralized trust in our survey of applications https://arxiv.org/pdf/2503.19759 (see jointly certified randomness protocol therein).

  12. Scott Says:

    Ilyas #1 (and others): Thanks so much!!

  13. Scott Says:

    Oscar #4:

      What is the advantage over this of a protocol like
      Alice generates N bits of (potentially quantum) random data and sends Bob the hash
      Bob generates N bits of (potentially quantum) random data and sends Alice the hash
      Now that both parties have committed to what their random data is, they share the data and xor it together.

    As Pradeep Niroula #11 explained, with the quantum protocol we’re trying to achieve an exponential expansion in the amount of genuine randomness that we have. Having said that, XORing together different people’s seeds is indeed one proposal for how to generate the original challenge string to send to the QC (the challenge whose entropy then needs to be exponentially expanded).

  14. Scott Says:

    Bram Cohen #5:

      First of all, there was in some sense already some randomness because I sent you a challenge.

    Yes, but exponentially less of it. Our protocol lets you take polylog(n) seed randomness, which you then pseudorandomly expand and use to generate a challenge circuit C, and convert it into n bits of genuine entropy in the outputs from C.

      More seriously, there doesn’t seem to be anything stopping you from making multiple valid responses to my query and returning whichever one of them has properties you like.

    Aha! Sure you can do that, but in polynomial time you can only generate polynomially many valid responses. And then you could select a response where (say) the first log(n) bits are all 0’s, or otherwise decrease the entropy by O(log(n)) … but you could only decrease it by O(log(n)). Which means that whatever string you send back to the verifier will still have n-O(log(n)) bits of min-entropy. So, while that string can’t directly be used for a cryptographic application, it can be fed into a seeded random extractor (since we’ve already assumed anyway that the verifier has O(log(n)) truly random bits), in order to produce order n bits that are close to uniformly random.
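    For concreteness, here’s a minimal sketch of one standard kind of seeded extractor, two-universal hashing with a Toeplitz matrix (the actual experiment’s extractor construction may differ; also note that Toeplitz hashing needs a seed about as long as the input, whereas constructions like Trevisan’s extractor get away with the short seeds mentioned above):

```python
import numpy as np

def toeplitz_extract(raw_bits: np.ndarray, seed_bits: np.ndarray, out_len: int) -> np.ndarray:
    """Compress raw_bits, assumed only to have high min-entropy, into out_len
    nearly uniform bits via a random Toeplitz matrix built from seed_bits.
    Requires len(seed_bits) == len(raw_bits) + out_len - 1."""
    n = len(raw_bits)
    assert len(seed_bits) == n + out_len - 1
    # Row i of the Toeplitz matrix is a length-n window into the seed, reversed,
    # so that entries are constant along diagonals. Built explicitly for clarity.
    T = np.array([seed_bits[i:i + n][::-1] for i in range(out_len)]) % 2
    return (T @ raw_bits) % 2
```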

  15. Scott Says:

    Dylan Mahoney #6: See my comment above replying to Bram Cohen.

    We show that, if

    (1) the responses from the quantum computer pass the Linear Cross-Entropy Benchmark and
    (2) our complexity assumptions hold,

    then after you pass the QC’s responses through a classical seeded randomness extractor, you’ll end up with a linear number of bits that are guaranteed to be exponentially close to uniformly random, and hence free from all non-negligible biases.
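    For anyone who wants to see what condition (1) amounts to, here’s a minimal sketch of the Linear Cross-Entropy Benchmark score itself (the experiment’s actual acceptance test and statistics are more involved):

```python
import numpy as np

def linear_xeb(ideal_probs: np.ndarray, samples: np.ndarray) -> float:
    """Linear XEB score for one challenge circuit.
    ideal_probs: length-2^n array of the circuit's ideal output probabilities
                 (computing these is the exponentially expensive classical step).
    samples:     the quantum computer's outputs, as integers in [0, 2^n).
    An ideal noiseless device scores about 2 on random circuits (Porter-Thomas
    statistics); a spoofer returning uniform bits scores about 1."""
    N = len(ideal_probs)              # N = 2^n
    return float(N * ideal_probs[samples].mean())
```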

  16. Scott Says:

    Mark Spinelli #8: Thanks! I confess that I hadn’t seen the update on my own patent application—do you have a link? 😀

    Yes, the work of verifying the Linear Cross-Entropy Benchmark is highly parallelizable so can certainly be split across many machines. Having said that, the Brakerski et al. scheme—where you partially measure the state of the QC, then send a commitment to the verifier, then the verifier chooses in which basis to measure the remaining qubits—is based on a very different principle than my scheme. Eventually you could simply switch to their scheme, but I don’t know of any way to interpolate between the two (I’d be happy to hear a suggestion).

  17. Scott Says:

    Adam Treat #9:

      One question is whether the other protocol that relies on a fully fault tolerant QC is somehow better than yours for the “major problem” of the cost of verification or will it have the same problems?

    Yes, it’s much better there, in that it reduces the cost of verification from exponential to polynomial (by using fancy cryptography, which is what then requires the use of a fault-tolerant QC).

  18. Mark Spinelli Says:

    Scott #16:

    Indeed, here is a link (the Serial No. is US17/428,586)
    https://patents.google.com/patent/US20220100473A1/en?oq=US17%2f428%2c586

    And here is a link on the USPTO, indicating that the application was allowed as of January 30, 2025 (a document called Notice of Allowance and Fees Due was mailed then – indicating that the application was allowed and that some fees need to be paid to the PTO to have it mature to a granted patent.)
    https://patentcenter.uspto.gov/applications/17428586/ifw/docs?application=

    I’d refer you to your (UT Austin’s) legal IP department for more details, but those are the joys and perils of a “letters patent” – it’s Latin for “open document,” meaning everyone can read all the gory details about how the sausage was made, including the legal arguments put forth between the PTO and Austin’s attorneys about why the patent should be granted. The good news is that the PTO was convinced!

    As far as how to combine this with the tradition starting with Brakerski et al., my only ideas, such as they are, would be to combine worrying about all the preimages, as done in Brakerski et al., with estimating the cross-entropy of the Hadamard transform of all the preimages, as done in your protocol. But I’m not sure what else follows.

    In particular, in Brakerski et al.’s line of work there are only two preimages, but I think generically there could be, say, m preimages if the first register had far more qubits than the second; a Hadamard test on m random n-bit strings will be slightly biased, which may be gleanable from a calculation of its cross-entropy. That is, from the pairs (d,y) published by the quantum computer, and from all the preimages \((x_1,x_2,\ldots,x_m)\) found by the classical miners with \(f(x_1)=f(x_2)=\cdots = f(x_m)=y\), one can also calculate the Hadamard transform of the preimages to see what the amplitude of d would be.

    But again that’s all fanciful and I’m not sure what benefits could be gotten.

  19. Tobias Maassen Says:

    Thank you.
    This is the best news I heard all year. JPMC is one of the biggest banks, and if they have a serious quantum crypto R&D effort, that gives me hope for safer communication than Signal groups. Thank you for bringing technology forward.

  20. asdf Says:

    And in 2025 just as in 2018, the threat model is Trump tweets :(.

  21. asdf Says:

    My understanding of this is vague but there are some cryptographic protocols that are proven secure in the random oracle model, but implemented in practice with hash functions instead of actual random oracles, to the discomfort of some cryptographers. Could this certified randomness method solve that issue?

  22. Scott Says:

    asdf #21: A random oracle is somewhat harder to get than just a stream of random bits, because the oracle needs to respond in a consistent way to exponentially many possible inputs. You could use my protocol to produce the random key for a hash function for simulating a random oracle, but I don’t see how to use it to get a random oracle itself.

  23. Ted Says:

    December 7, 2021: Quantinuum announces “the world’s first commercial product built using quantum computers” (namely, random cryptographic keys derived from the output of a quantum computer) on their blog: https://www.quantinuum.com/blog/introducing-quantum-origin

    March 26, 2025: Quantinuum announces “the first commercial application for quantum computers” (namely, random cryptographic seeds derived from the output of a quantum computer) on their blog: https://www.quantinuum.com/blog/quantinuum-introduces-first-commercial-application-for-quantum-computers

    History doesn’t repeat itself, but it rhymes. 🙂

    Congratulations Scott, Quantinuum, JPMC, DOE, and others!

  24. Nilima Nigam Says:

    Oh, very cool!! This is the best news I’ve seen since the announcement of the proof of the 3D Kakeya conjecture about a month ago. Congratulations all around!

  25. William Gasarch Says:

    I am surprised that JP Morgan is involved. This is not an objection.

    They are a finance company, so I would not have thought they have people working in quantum computing. (I would think they have ML and other more down-to-earth techniques.)

    So—why do they? Just curious.

  26. AF Says:

    Scott, a while ago, you wrote that the double-slit experiment is basically a 1-qubit quantum computer. If someone wants to use quantum hardware to generate random bits, why not generate each bit by sending a photon through that? The system can declare that the generated bit is a 0 if it lands on the left side of the detector, and a 1 if it lands on the right side of the detector.

    Is there something about the size of the quantum computer that makes the randomness of the bits more certifiable? Why should that matter if the randomness only enters because of wavefunction collapse?

    (sorry, I am a quantum computing newbie)

  27. Scott Says:

    AF #26: As I said in the post, the key question is, how do you prove to a skeptic over the Internet that you actually did the random experiment that you claimed to do, rather than just (eg) using a pseudorandom generator? The double-slit experiment doesn’t give you that. But solving a hard sampling problem with a quantum computer, in a way that we believe even a quantum computer could only solve randomly within the time limit, does.

  28. AF Says:

    Scott #27: Thanks!

    So it was never about the randomness itself, rather it is about proving with high probability that a very long string of bits came from a quantum computer?

  29. Scott Says:

    AF #28: It’s about proving that the bits actually are random—that the server didn’t cheat, violate the protocol, and produce the bits in a secretly deterministic way, whether using a quantum computer or a classical one.

  30. Ajit R. Jadhav Says:

    Dear Dr. Scott Aaronson,

    What problem, I seek to know, might the QC research, development and production community, and also the QC investment community, have if someone [e.g. me] proposed the following alternative for an RNG source that obviously (and thereafter, even certifiably) could not be blamed for a back-door entry right at the RNG stage:

    An RNG source based on the classical deterministic chaos (with each of the last three terms being separately emphasized) in place of the QC or any QMcal entanglement-based source?

    As you would surely understand, Dr. Aaronson, the issue isn’t about “what is the true `randomness’.” It rather is about phrases like this one: “the weakest possible assumptions,” which appears somewhere in your L15.pdf. (Or, maybe, therein, it was: “the strongest possible assumptions,” or similar. But you get the idea here, don’t you?)

    When I today searched on Google with strings similar to “certifiable randomness,” for about the first 4–5 pages or so, it all was about “quantum,” with the links also indirectly talking about the QMcal entanglement — but not a single one about the “classical.”

    Given the context (including the moneys involved), it was natural for me to think of the above question.

    Hope you consider it.

    Best,
    –Ajit

  31. Scott Says:

    Ajit #30: Using classical chaos can be fine if you trust the setup, but doesn’t provide certification against a dishonest server who just ignores the chaotic system and feeds you the output of a pseudorandom generator instead.

    Since I’ve already answered this question multiple times, all further questions of the form “why not just get your certified randomness from [physical process that obviously doesn’t provide the certification element]?” will be left in moderation!

  32. Jenny Says:

    > If you consider any classical method for generating random bits, an adversary could substitute a cryptographic pseudorandom generator without anyone being the wiser.

    Is this true unconditionally, or only assuming P = BPP? If you had a problem in BPP but not in P, could you use similar techniques to get certified classical randomness?

  33. Scott Says:

    Jenny #32: It’s true assuming that cryptographically secure pseudorandom generators exist (if that’s true then P=BPP, but we don’t know the converse). This is generally regarded as one of the safest assumptions in theoretical computer science.

  34. Concerned Says:

    Is there a simple way to see why measuring the output states of random polynomial-depth quantum circuits samples from a distribution that can’t classically be sampled in polynomial time?

  35. Karl Svozil Says:

    Compared to random sequence generation methods using generalized beam splitters (implementing Hadamard-like transformations) or electronic quantum noise sources, what are the specific advantages of using protocols utilizing devices like Willow? Additionally, why might sequences generated by these alternative methods (beam splitters, noise sources) fail to pass patch cross-entropy benchmarking (XEB)?

  36. Scott Says:

    Karl Svozil #35: To answer the question for the 6th or 7th time … the advantage of using a quantum computer is that you can certify to a skeptic that the bits were really generated randomly, under cryptographic assumptions.

    Random bits that were generated using beamsplitters, noise sources, etc would have no reason whatsoever to be correlated with the ideal output distribution of a specific quantum circuit acting on qubits, which is what LXEB tests for.

  37. Scott Says:

    Concerned #34: See my paper with Sam Gunn for a formal hardness reduction. At some level, though, that reduction is just formalizing what’s intuitively obvious: that there’s no particular reason for a classical algorithm to be able to predict which outputs of a random quantum circuit get more constructive rather than destructive interference, without either representing the whole wavefunction, or summing over all paths, or doing tensor network contraction, or something else whose scaling with the number of qubits n is inherently exponential.

  38. Bruce Smith Says:

    Scott #33:

    > It’s true assuming that cryptographically secure pseudorandom generators exist (if that’s true then P=BPP, but we don’t know the converse).

    Can you intuitively describe the “world” (in the “five worlds” sense) in which P = BPP but there are no cryptographically secure pseudorandom generators?

    > This is generally regarded as one of the safest assumptions in theoretical computer science.

    It’s strictly stronger than P != NP, right? To me, it’s intuitively less certain, because it’s hard to imagine a SAT solver in P, but it’s not so hard to imagine that for any specific algorithm, there is a detector of its specific deviations from randomness that can run in a machine with true randomness and in DTIME(n^k) for some super-high k compared to that specific algorithm’s runtime and TM-size. And it’s downright hard to imagine ever *proving* there is no such detection algorithm for a proposed specific pseudorandom generator.

    To put it another way, to believe P = NP you have to believe in a magic thing somehow seeing into seemingly bigger things than itself (the NTIME calculation); but to believe a specific algorithm is a pseudorandom generator, you have to believe that an infinite series of things, bigger and bigger than it, can’t see into it well enough.

    Do you or others have a different view which would explain why you find it a safe assumption?

  39. Scott Says:

    Bruce Smith #38: A priori, it’s conceivable that P=BPP but not because of a PRG—just that every language in BPP happens to be in P for some other reason (!). It’s also conceivable that PRGs exist that are good enough for P=BPP, but not good enough for cryptography (recall that PRGs sufficient for P=BPP can be constructed assuming merely that E requires exponential circuits).

    Yes, the existence of PRGs is strictly stronger than P≠NP, but it’s known to be equivalent to the existence of one-way functions. In modern cryptography and complexity theory people often consider way, way, way more boutique assumptions than that one.

  40. NoGo Says:

    If verifying the results using a classical computer is extremely expensive, I wonder what is a practical benefit of these certified numbers?

    Basically, if you have the resources to verify the certificate, say a quantum computer or an enormous classical one, it seems you could produce random numbers yourself, at much lower cost than getting them from another party and supposedly paying for their expenses in producing the certified numbers.

    Conceptually, no question, this is really cool, though!

  41. Scott Says:

    NoGo #40: Yes, that’s indeed the main practical issue. You’d need an application where the need for certification was so great that you were willing to pay the huge cost of classical verification (and also, had sufficient practical separation between verification cost and spoofing cost).

    Shtetl-Optimized: Serving You The Unhyped Truth Since 2005™

  42. Rory Says:

    William Gasarch #25

    JP Morgan, Goldman Sachs, HSBC and Wells Fargo all have quantum computing research teams, I would classify the reasoning for this as fourfold

    1) Why the hell not? We have more money than God
    2) It provides good hype for our stock price
    3) If we lose patience with the program we can shut it down and move the researchers into the commercial side of the business
    4) If practical quantum computing is as disruptive to classical cryptography as people say, having in-house expertise and the software foundations to use quantum cryptography will be incredibly important – and a giant head-start on our competitors.

  43. Raoul Ohio Says:

    Rory #42:

    Judging from a casual glance at investment advice purveyors, most of the finance industry is clueless about AI and utterly clueless about QM.

  44. Raoul Ohio Says:

    CS People:

    Update on Xiaofeng Wang and IU:

    https://arstechnica.com/security/2025/03/computer-scientist-goes-silent-after-fbi-raid-and-purging-from-university-website/

    WTF?

  45. asdf Says:

    Unrelated but topical: Prof. Xiaofeng Wang of Indiana University seems to have been disappeared by the FBI or maybe escaped from capture. He has gone silent and his profile was removed from the university web site. Very creepy. Does anyone know him or what he was working on?

    https://arstechnica.com/security/2025/03/computer-scientist-goes-silent-after-fbi-raid-and-purging-from-university-website/

  46. asdf Says:

    Scott #22, I thought that oracles, random functions, etc. were implemented by just recording the queries and responses. The first query for input X gets a random response that the oracle simulator remembers, and later queries for X get the same response. A practical protocol can’t use exponentially many separate queries.

    It’s still messy though. Apparently the more worrisome protocols, used in smart contracts and such, use nested layers of simulated machines that each need their own randomness. Matt Green (cryptographyengineering.com) has a 3 part blog post about this, which is what got me wondering about the idea.

  47. Scott Says:

    asdf #46: In complexity theory, an “oracle” doesn’t even have a memory. It’s just a function mapping inputs to outputs. That’s why one needs to do something nontrivial to go, for example, from a pseudorandom generator to a pseudorandom function (the Goldreich-Goldwasser-Micali or GGM construction).

  48. NoGo Says:

    Thanks! I wonder what combination of technical advances and specific needs would make this practical. What would be the situation where producing random numbers locally is so expensive that one would be willing to pay a remote party to do so?

    Perhaps the cost can be reduced drastically if you don’t actually verify all of the numbers, only some of them, but your random-number provider does not know which ones, so they have to be honest. Cool, but could this make it less expensive than producing them locally?

    All the situations I can think of where you would want an external source of randomness involve cheap hardware, like headless servers in a big data center. Well, and situations where you don’t trust your own hardware (including its random number generator, if any)…

  49. Scott Says:

    NoGo #48: The main applications we have in mind are ones where you need public randomness—random bits broadcast to everyone over the Internet, eg all Ethereum users—rather than just a private cryptographic key. For the latter case, you should indeed just buy a hardware RNG. My protocol would make sense for private randomness only if you suspected any hardware RNG you could possibly buy of being backdoored by adversaries, but you did trust your classical computer and the software running on it, and were willing to buy a quantum computer and then use the classical computer to (very expensively) verify its outputs.

  50. Kristen Harris Says:

    Scott — I am so very happy that you are finding the problem of quantum computing in the brain to be interesting. Congratulations on this terrific work. Thank you for helping me to start seeing how quantum theory could help us understand synapses. Fingers crossed we can get our joint endeavors funded. To more successes! Kristen
