Quantum computing for policymakers and philosopher-novelists

Last week Rebecca Newberger Goldstein, the great philosopher and novelist whom I’m privileged to call a friend, wrote to ask me whether I “see any particular political and security problems that are raised by quantum computing,” to help her prepare for a conference she’d be attending at which that question would be discussed.  So I sent her the response below, and then decided that it might be of broader interest.

Shtetl-Optimized regulars and QC aficionados will find absolutely nothing new here—move right along, you’ve been warned.  But I decided to post my (slightly edited) response to Rebecca anyway, for two reasons.  First, so I have something to send anyone who asks me the same question in the future—something that, moreover, as Feynman said about the Feynman Lectures on Physics, contains views “not far from my own.”  And second, because, while of course I’ve written many other popular-level quantum computing essays, with basically all of them, my goal was to get the reader to hear the music, so to speak.  On reflection, though, I think there might also be some value in a piece for business and policy people (not to mention humanist intellectuals) that sets aside the harmony of the interfering amplitudes, and just tries to convey some of the words to the song without egregious howlers—which is what Rebecca’s question about “political and security problems” forced me to do.  This being quantum computing, of course, much of what one finds in the press doesn’t even get the lyrics right!  So without further ado:


Dear Rebecca,

If you want something serious and thoughtful about your question, you probably won’t do much better than the recent essay “The Potential Impact of Quantum Computers on Society,” by my longtime friend and colleague Ronald de Wolf.

To elaborate my own thoughts, though: I feel like the political and security problems raised by quantum computing are mostly the usual ones raised by any new technology (national prestige competitions, haves vs have-nots, etc)—but with one added twist, coming from quantum computers’ famous ability to break our current methods for public-key cryptography.

As Ronald writes, you should think of a quantum computer as a specialized device, which is unlikely to improve all or even most of what we do with today’s computers, but which could give dramatic speedups for a few specific problems.  The three most important types of applications that we know about today are:

(1) Simulation of quantum physics and chemistry. This was Richard Feynman’s original application when he proposed quantum computing in 1981, and I think it’s still the most important one economically.  Having a fast, general-purpose quantum simulator could help a lot in designing new drugs, materials, solar cells, high-temperature superconductors, chemical reactions for making fertilizer, etc.  Obviously, these are not applications like web browsing or email that will directly affect the everyday computer user.  But they’re areas where you’d only need a few high-profile successes to generate billions of dollars of value.

(2) Breaking existing public-key cryptography.  This is the most direct political and security implication.  Every time you visit a website that begins with “https,” the authentication and encryption—including, e.g., protecting your credit card number—happen using a cryptosystem based on factoring integers or discrete logarithms or a few other related problems in number theory.  A full, universal quantum computer, if built, is known to be able to break all of this.
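
To make the threat concrete: all of the quantum computer’s work goes into finding the factors of the public modulus; once the factors are known, recovering the private key and decrypting is easy classical arithmetic.  Here’s a toy sketch in Python—textbook RSA with absurdly small, made-up numbers, no padding, and nothing resembling a real deployment—just to show where the factoring fits in:

```python
# Toy illustration: once an RSA modulus is factored, the private key follows by
# ordinary classical arithmetic. The numbers here are absurdly small and purely
# illustrative; real moduli are 2048+ bits.
p, q = 61, 53                # the secret primes -- exactly what Shor's algorithm would recover
n = p * q                    # 3233: the public modulus
e = 17                       # public encryption exponent

ciphertext = pow(42, e, n)   # someone encrypts the (toy) message 42 with the public key

# The "attack": knowing p and q, compute the private exponent d and decrypt.
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # modular inverse of e mod phi (needs Python 3.8+)
recovered = pow(ciphertext, d, n)
print(recovered)             # prints 42: the plaintext, recovered from the factorization
```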

Having said that, we all know today that hackers, and intelligence agencies, can compromise people’s data in hundreds of more prosaic ways than by building a quantum computer!  Usually they don’t even bother trying to break the encryption, relying instead on poor implementations and human error.

And it’s also important to understand that a quantum computer wouldn’t mean the end of online security.  There are public-key cryptosystems currently under development—most notably, those based on lattices—that are believed to resist attack even by quantum computers; NIST is planning to establish standards for these systems over the next few years.  Switching to these “post-quantum” systems would be a significant burden, much like fixing the Y2K bug (and they’re also somewhat slower than our current systems), but hopefully it would only need to happen once.
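
If you’re curious what “based on lattices” amounts to, the core of many of these proposals is the Learning With Errors problem: solving systems of linear equations modulo q that have been deliberately corrupted by small random noise.  Below is a minimal single-bit toy version in Python, just to show the shape of the idea; the parameters are far too small to offer any security, and the real candidate schemes NIST is evaluating are considerably more involved:

```python
import random

# Toy single-bit encryption in the style of Learning With Errors (LWE).
# Parameters are illustrative only; real lattice schemes use far larger ones.
q, n, m = 97, 8, 16   # modulus, secret dimension, number of public-key samples

def keygen():
    s = [random.randrange(q) for _ in range(n)]                    # secret vector
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.choice([-1, 0, 1]) for _ in range(m)]              # small noise
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return (A, b), s                                               # public key, secret key

def encrypt(pk, bit):
    A, b = pk
    S = [i for i in range(m) if random.random() < 0.5]             # random subset of rows
    u = [sum(A[i][j] for i in S) % q for j in range(n)]
    v = (sum(b[i] for i in S) + bit * (q // 2)) % q
    return u, v

def decrypt(sk, ct):
    u, v = ct
    d = (v - sum(u[j] * sk[j] for j in range(n))) % q
    return 0 if min(d, q - d) < q // 4 else 1                      # near 0 => 0, near q/2 => 1

pk, sk = keygen()
assert all(decrypt(sk, encrypt(pk, bit)) == bit for bit in (0, 1))
```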

As you might imagine, there’s some interest in switching to post-quantum cryptosystems even now—for example, if you wanted to encrypt messages today with some confidence that they won’t be decrypted even 30 years from now.  Google ran a trial of a post-quantum cryptosystem two years ago.  On the other hand, given that until recently a large fraction of web servers still used 512-bit “export grade” cryptography that was already breakable in the 1990s (good news: commenter Viktor Dukhovni tells me that this has now been mostly fixed, after security experts, including my childhood friend Alex Halderman, raised a stink about it a few years ago), it’s a safe bet that getting everyone to upgrade would take quite a long time, even if the experts agreed (which they don’t yet) on which of the various post-quantum cryptosystems should become the new standard.  And since, as I said, most attacks target mistakes in implementation rather than the underlying cryptography, we should expect any switch to post-quantum cryptography to make security worse, rather than better, in the short run.

As a radical alternative to post-quantum crypto, there’s also (ironically enough) quantum cryptography, which doesn’t work over the existing Internet—it requires setting up new communications infrastructure—but which has already been deployed in a tiny number of places, and which promises security based only on quantum physics (and, of course, on the proper construction of the hardware), as opposed to mathematical problems that a quantum computer or any other kind of computer could potentially solve.  According to a long-running joke (or not-quite-joke) in our field, one of the central applications of quantum computing will be to create demand for quantum cryptography!

Finally, there’s private-key cryptography—i.e., the traditional kind, where the sender and recipient meet in secret to agree on a key in advance—which is hardly threatened by quantum computing at all: you can achieve the same level of security as before, we think, by simply doubling the key lengths.  If there’s no constraint on key length, then the ultimate here is the one-time pad, which, when used correctly, is theoretically unbreakable by anything short of actual physical access to the sender or recipient (e.g., hacking their computers, or beating down their doors with an ax).  But while private-key crypto might be fine for spy agencies, it’s impractical for widespread deployment on the Internet, unless we also have a secure way to distribute the keys.  This is precisely where public-key crypto typically gets used today, and where quantum crypto could in principle be used in the future: to exchange private keys that are then used to encrypt and decrypt the actual data.
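
For concreteness, here’s what the one-time pad looks like as a few lines of Python—just XOR with a truly random key as long as the message, used once and then thrown away (a toy demo, not a recommendation to roll your own crypto):

```python
import secrets

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))                      # random key, same length as message

ciphertext = bytes(m ^ k for m, k in zip(message, key))      # encrypt: XOR each byte with the key
recovered  = bytes(c ^ k for c, k in zip(ciphertext, key))   # decrypt: XOR with the same key again
assert recovered == message

# Without the key, every equal-length plaintext is equally consistent with the
# ciphertext, so no computer (quantum or otherwise) can break it. The catch is
# the key: as long as the message, never reused, and somehow delivered securely.
```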

I should also mention that, because it breaks elliptic-curve-based signature schemes, a quantum computer might let a thief steal billions of dollars’ worth of Bitcoin.  Again, this could in principle be fixed by migrating Bitcoin (and other cryptocurrencies) to quantum-resistant cryptographic problems, but that hasn’t been done yet.

(3) Optimization and machine learning.  These are obviously huge application areas for industry, defense, and pretty much anything else.  The main issue is just that we don’t know how to get as large a speedup from a quantum computer as we’d like for these applications.  A quantum computer, we think, will often be able to solve optimization and machine learning problems in something like the square root of the number of steps that would be needed classically, using variants of what’s called Grover’s algorithm.  So, that’s significant, but it’s not the exponential speedup and complete game-changer that we’d have for quantum simulation or for breaking public-key cryptography.  Most likely, a quantum computer will be able to achieve exponential speedups for these sorts of problems only in special cases, and no one knows yet how important those special cases will be in practice.  This is a still-developing research area—there might be further theoretical breakthroughs (in inventing new quantum algorithms, analyzing old algorithms, matching the performance of the quantum algorithms by classical algorithms, etc.), but it’s also possible that we won’t really understand the potential of quantum computers for these sorts of problems until we have the actual devices and can test them out.
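
To give a feel for what a square-root speedup means in practice, here’s some back-of-the-envelope arithmetic in Python; the counts are idealized “steps” that ignore the substantial constant-factor overheads of quantum error correction:

```python
import math

# Idealized step counts for unstructured search over N possibilities:
# roughly N classically, versus roughly sqrt(N) with Grover's algorithm.
for N in (10**6, 10**12, 2**128):
    print(f"N = {N}: classical ~{N} steps, Grover ~{math.isqrt(N)} steps")

# The same arithmetic is why doubling a secret key's length is thought to restore
# its security against Grover: a 128-bit key falls to ~2^64 quantum steps, while a
# 256-bit key pushes the quantum attack back up to ~2^128.
```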

 

As for how far away all this is: given the spectacular progress by Google and others over the last few years, my guess is that we’re at most a decade away from some small, special-purpose quantum computers (with ~50-200 qubits) that could be useful for quantum simulation.  These are what the physicist John Preskill called “Noisy Intermediate-Scale Quantum” (NISQ) computers in his excellent recent essay.

However, my guess is also that it will take longer than that to get the full, error-corrected, universal quantum computers that would be needed for optimization and (most relevant to your question) for breaking public-key cryptography.  Currently, the engineering requirements for a “full, universal” quantum computer look downright scary—so we’re waiting either for further breakthroughs that would cut the costs by a few more orders of magnitude (which, by their very nature, can’t be predicted), or for some modern-day General Groves and Oppenheimer who’d be licensed to spend however many hundreds of billions of dollars it would take to make it happen sooner.

The race to build “NISQ” devices has been heating up, with a shift from pure academic research to venture capitalists and industrial efforts just within the last 4-5 years, noticeably changing the character of our field.

In this particular race, I think that the US is the clear world leader right now—specifically, Google, IBM, Intel, Microsoft, University of Maryland / NIST, and various startups—followed by Europe (with serious experimental efforts in the Netherlands, Austria, and the UK among other places).  Here I should mention that the EU has a new 1-billion-Euro initiative in quantum information.  Other countries that have made or are now making significant investments include Canada, Australia, China, and Israel.  Surprisingly, there’s been very little investment in Russia in this area, and less than I would’ve expected in Japan.

China is a very interesting case.  They’ve chosen to focus less on quantum computing than on the related areas of quantum communication and cryptography, where they’ve become the world leader.  Last summer, in a big upset, China launched the first satellite (“Micius”) specifically for quantum communications, and were able to use it to do quantum cryptography and to distribute entanglement over thousands of miles (from one end of China to the other), the previous record being maybe 100 miles.  If the US has anything comparable to this, it isn’t publicly known (my guess is that we don’t).

This past year, there were hearings in Congress about the need for the US to invest more in quantum information, for example to keep up with China, and it looks likely to happen.  As indifferent or hostile as the current administration has been toward science more generally, the government and defense people I’ve met are very much on board with quantum information—often more so than I am!  I’ve even heard China’s Micius satellite referred to as the “quantum Sputnik,” the thing that will spur the US and others to spend much more to keep up.

As you’d imagine, part of me is delighted that something so abstruse, and interesting for fundamental science, and close to my heart, is now getting attention and funding at this level.  But part of me is worried by how much of the current boom I know to be fueled by misconceptions, among policymakers and journalists and the general public, about what quantum computers will be able to do for us once we have them.  Basically, people think they’ll be magic oracles that will solve all problems faster, rather than just special classes of problems like the ones I enumerated above—and that they’ll simply allow the continuation of the Moore’s Law that we know and love, rather than being something fundamentally different.  I’ve been trying to correct these misconceptions, on my blog and elsewhere, to anyone who will listen, for all the good that’s done!  In any case, the history of AI reminds us that a crash could easily follow the current boom-time, if the results of quantum computing research don’t live up to people’s expectations.

I guess there’s one final thing I’ll say.  Quantum computers are sometimes analogized to nuclear weapons, as a disruptive technology with implications for global security that scientists theorized about decades before it became technically feasible.  But there are some fundamental differences.  Most obviously: the deterrent value of a nuclear weapon comes if everyone knows you have it but you never need to use it, whereas the intelligence value of a quantum computer comes if you use it but no one knows you have it.

(Which is related to how the Manhattan Project entered the world’s consciousness in August 1945, whereas Bletchley Park—which was much more important to the actual winning of WWII—remained secret until the 1970s.)

As I said before, once your adversaries realized that you had a universal quantum computer, or might have one soon, they could switch to quantum-resistant forms of encryption, at least for their most sensitive secrets—in which case, as far as encryption was concerned, everyone would be more-or-less back where they started.  Such a switch would be onerous, cost billions of dollars, and (in practice) probably open up its own security holes unrelated to quantum computing.  But we think we already basically understand how to do it.

This is one reason why, even in a hypothetical future where hostile powers got access to quantum computers (and despite the past two years, I still don’t think of the US as a “hostile power”—I mean, like, North Korea or ISIS or something…!)—even in that future, I’d still be much less concerned about the hostile powers having this brand-new technology, than I’d be about their having the generations-old technology of fission and fusion bombs.

Best,
Scott


Unrelated Update (June 8): Ian Tierney asked me to advertise a Kickstarter for a short film that he’s planning to make about Richard Feynman, and a letter that he wrote to his first wife Arlene after she died.

42 Responses to “Quantum computing for policymakers and philosopher-novelists”

  1. Richard Gaylord Says:

    “Rebecca Newberger Goldstein, the great philosopher and novelist”. i don’t think Goldstein qualifies as either a great philosopher or a great novelist but i do think that her first novel “The Mind-Body Problem” is superb and should be required reading in any college STEM curriculum (and even in high school). as for the article you refer to, “The Potential Impact of Quantum Computers on Society”, i think the impact will be greatest to the extent that it affects cyber-terrorism, which is an even greater threat to American society (and hence to the world) than either Trump or Zuckerberg (both of whom have used, or allowed to be used, classical computers as a political tool for destroying democracy as a viable political system).

  2. Daniel Bilar Says:

    Hi Prof. Aaronson

    I hope this note finds you well. Not to put too fine a point on it, but after reading the first page Motza’ei Shabbat of PDQP/qpoly=ALL, I said B”H”.

    In Torah (philosophical) terms, you wrote that grounding in reality (as in QM reality as experienced by vast++ majority of normal mortals), complemented by ‘advice’ (a commonly used word for Torah) and ‘non-collapsing measurements’ (viewpoints mo in meditative Prophetic tradition taught to mystical acolytes, ie Maase Merkava & lesser forms) enables answering of all questions (ie experiencing non-fragmented integrated perspective approaching the G’dhead). I love this short, accessible paper very much.

    If you permit my saying so: Although I perceive a distinct anti-Deist streak in your popular writings (can corroborate eg in Ghost in the QTM and QC since Democritus etc where your powers of precise exhaustive reasoning give way to out-of-character glibness & mockery, quite jarring & unnecessary for your argument actually), I often think you are like Moliere’s Jourdain speaking prose all your life w/o knowing it.

    Thanks for your time and your papers which I enjoy otherwise immensely.

    Daniel Bilar
    Norwich VT

  3. Scott Says:

    Daniel #2: I was not expecting a Torah analysis of my PDQP/qpoly=ALL paper! The intellectual exercise of providing one reminds me of what Scott Alexander did in Unsong.

    But I’m curious: what’s the Torah meaning of locally decodable codes, or of extending a Boolean function to a multilinear polynomial over a finite field so that one can do interpolation? Presumably, the Boolean domain {0,1} represents the light and darkness that HaShem separated on the first day of creation, while a “field” represents the dry land that He created on the third day. But where can we find the other four days of creation in my proof?

  4. Craig Says:

    Scott said, “As for how far away all this is: given the spectacular progress by Google and others over the last few years, my guess is that we’re at most a decade away from some small, special-purpose quantum computers (with ~50-200 qubits) that could be useful for quantum simulation.”

    That is a strange prediction given Google’s Bristlecone 72 qubit machine that they built recently. If that machine is successful, then we will all know soon, meaning within a year. This will not take a decade.

  5. Craig Says:

    Scott #3,

    According to Chazal, every letter in the Written Torah is important. Therefore, there is no Torah meaning of locally decodable codes, at least not with respect to the Written Torah.

    Also, you proved that PDQP/qpoly=ALL, but ALL is still a proper subset of the Urim v’Tumim.

  6. Scott Says:

    Craig #4: Bristlecone is in the process of being built—the 72 qubits need to be painstakingly calibrated one by one. (I know this because I was at Google a few weeks ago and they gave us a progress report.) But more to the point, even assuming the chip becomes fully operational within the next year or two, it will probably take a while longer until we have a NISQ device that’s actually useful for simulation (which might take 100-200 qubits at the least, and crucially, with better coherence times than we have now). In any case, there are huge unknowns about the timing, and I prefer to be very conservative with predictions; I believe in the ideal of underpromising and overdelivering. Even a statement like “my guess is we’re at most a decade away from devices that could be useful for simulation” is not one that I would’ve made a few years ago.

  7. fred Says:

    I’m so confused… What’s going to happen first / What should I worry about the most?!

    Quantum computers or Super AI?

  8. Craig Says:

    Scott 6,

    How many of the 72 qubits had Google calibrated when you visited them? Also, I assume that because QM is a linear theory, if all of the qubits are correctly calibrated but Bristlecone fails to achieve quantum supremacy, Google would not be able to make the excuse that “even though the qubits are calibrated perfectly individually, the combination of the qubits is causing noise”, correct?

  9. Scott Says:

    Craig #8: I believe that about 10 of the qubits were online. And it’s perfectly consistent with the linearity of QM for each qubit to work fine when the other ones are off, but for there to be multi-qubit noise terms. Indeed, not only “consistent”: this is what typically happens. On the other hand, the multi-qubit noise terms can in principle be isolated and understood, just like single-qubit noise can, and that’s what they’ve been doing. If any of this leads to the discovery of a violation of the axioms of QM, I guarantee you’ll hear about it. 🙂

  10. Scott Says:

    fred #7: I’d guess that building a scalable QC is an easier problem than building a super AI—or at any rate, certainly no harder, since you could just ask the super AI to build a scalable QC for you. 🙂 But if we held the time of arrival constant, I’d also say that super AI would be the one you should worry about much more.

  11. Yaqub Says:

    We take it that time travel would allow computers to solve all NP-complete problems in polynomial time. If it is mathematically shown that P≠NP, can physicists take that as proof that time travel is impossible?

  12. Scott Says:

    Yaqub #11: No, because you’re confusing two different questions: (1) the mathematical question of whether P=NP, and (2) the empirical question of whether NP-complete problems are efficiently solvable in the physical world, possibly using resources beyond P, such as quantum computers or closed timelike curves. A proof of P≠NP would have no bearing on whether closed timelike curves are possible.

  13. Scott Says:

    Richard Gaylord #1:

    i don’t think Goldstein qualifies as either a great philosopher or a great novelist but i do think that her first novel “The Mind-Body Problem” is superb…

    Would you agree to “great philosopher-novelist”? Like, surely we agree that Spinoza and Hume were greater philosophers, Twain and Dickens greater novelists, but when you ask Google for “philosopher novelists”, her picture’s right there next to some characters named Dostoyevsky, Sartre, Camus, Voltaire, and Nietzsche. 🙂

  14. Lena Hahn Says:

    Quantum computers operate on completely different principles to existing computers, which makes them really well suited to solving particular mathematical problems, like finding very large prime numbers. Since prime numbers are so important in cryptography, it’s likely that quantum computers would quickly be able to crack many of the systems that keep our online information secure. Because of these risks, researchers are already trying to develop technology that is resistant to quantum hacking, and on the flipside of that, it’s possible that quantum-based cryptographic systems would be much more secure than their conventional analogues.

  15. Scott Says:

    Lena #14: Actually, I don’t know of any interesting quantum speedup purely for finding large primes, except what follows from Grover’s algorithm. The famous speedup is for factoring a given composite.

  16. fred Says:

    It took 4.543 billion years for the Earth to assemble some of its atoms into macro structures exhibiting stable large scale quantum entanglement.

    As a cosmic achievement, that’s probably even more significant than when the Earth managed to re-assemble part of itself into structures that could reach its moon!

    Congratulations, dear Earth!

  17. Craig Says:

    Scott 9,

    Your statement, “And it’s perfectly consistent with the linearity of QM for each qubit to work fine when the other ones are off, but for there to be multi-qubit noise terms. Indeed, not only “consistent”: this is what typically happens. On the other hand, the multi-qubit noise terms can in principle be isolated and understood, just like single-qubit noise can, and that’s what they’ve been doing.” appears to lead to some serious engineering problems, unless I am misunderstanding you:

    If there were multi-qubit noise terms, then there would be 2^72 of them, each one corresponding to a subset of the set of 72 qubits. How does a Google engineer handle 2^72 many multi-qubit noise terms?

    Unless the magnitude of the errors in the multi-qubit noise terms is directly correlated with the magnitude of the errors in the single-qubit noise terms. Is that what you mean?

  18. Craig Says:

    Scott #9,

    Also you said, “If any of this leads to the discovery of a violation of the axioms of QM, I guarantee you’ll hear about it.”

    This leads to a question: Suppose that Google’s 72 qubit machine does not do everything that it was predicted to do, according to the laws of QM (for instance, it does not work properly), even if everything is calibrated perfectly. Would this in and of itself constitute a violation of the axioms of QM?

  19. Scott Says:

    Craig #17 and #18: But there aren’t 2^72 terms. In practice, the Hamiltonian is a sum of few-body terms (e.g., each qubit and its near neighbors), and there are many fewer of those. Or, if the experimental results are not consistent with their having arisen from such a spatially local Hamiltonian, then yes, that’s your violation—either of quantum mechanics itself, or of some other principle of physics (e.g., locality) that seems just as basic, if not more so.
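
    To put a rough number on “many fewer,” assuming (purely for illustration) a 9×8 grid of 72 qubits with only on-site terms and nearest-neighbor couplings—the actual Bristlecone layout differs in its details:

    ```python
    # Rough count of few-body Hamiltonian terms for a hypothetical 9x8 grid of 72
    # qubits with only on-site terms and nearest-neighbor couplings (illustrative
    # assumption; the real device's connectivity differs in detail).
    rows, cols = 9, 8
    single_qubit_terms = rows * cols                               # 72
    nearest_neighbor_pairs = rows * (cols - 1) + (rows - 1) * cols # 63 + 64 = 127 edges
    print(single_qubit_terms + nearest_neighbor_pairs)             # 199 local terms to characterize
    print(2**72)                                                   # vs ~4.7e21 subsets of the qubits
    ```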

  20. Craig Says:

    Scott 19,

    Great answer!

    It never even occurred to me to think that QC could act on qubits locally (even though nature works in this way) and one could still get quantum supremacy. Silly me.

    This makes QC even more interesting.

  21. fred Says:

    Scott #19
    When you say “near neighbors”, is that a restriction on which subset of qubits can be entangled with one another?

  22. Scott Says:

    fred #21: No, because you can entangle any qubit with any other one by simply applying a chain of nearest-neighbor operations, along some path from the first qubit to the second. Just like, in your classical life, you can (by definition) get from anywhere to anywhere else that’s connected to it by walking paths, one step at a time. 🙂
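
    As a sketch of what such a chain looks like on a line of qubits (hypothetical labels, and just printing an abstract gate list rather than using any particular quantum programming library):

    ```python
    # To interact qubit 0 with qubit 5 on a line that allows only nearest-neighbor
    # gates: swap qubit 0 rightward until it sits next to qubit 5, then apply the
    # two-qubit gate there. Labels and gate names are purely illustrative.
    def route_and_interact(a, b):
        gates = [("SWAP", i, i + 1) for i in range(a, b - 1)]  # walk a's qubit next to b
        gates.append(("CNOT", b - 1, b))                       # entangle once they're adjacent
        return gates

    for gate in route_and_interact(0, 5):
        print(gate)
    # Every SWAP is itself a nearest-neighbor operation, so no long-range
    # interaction is ever needed in hardware.
    ```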

  23. asdf Says:

    https://ai.googleblog.com/2015/12/when-can-quantum-annealing-win.html

    Some kind of new development in quantum annealing is being claimed. Supremacy? Dunno. You’re probably already aware of whatever it is in any case.

  24. Viktor Dukhovni Says:

    Scott, in the letter you say:
    > On the other hand, given that a large fraction of web servers still use 512-bit “export grade” cryptography that was already breakable in the 1990s, …

    I believe this is no longer an accurate picture: 512-bit certificates are long gone from the public Internet. Until a few years ago, many servers were exposed via Logjam to having 512-bit export DHE key-agreement parameters replayed against their clients. This, too, is no longer true of “a large fraction of web servers”. Since Logjam and DROWN, TLS stacks have been patched to reject small DH primes, to stop reusing DHE private keys across connections, and to refuse SSLv2, SSLv3, and export-grade algorithms. A lot of progress has been made over the last 3 to 4 years.

    Even for DNSSEC—where caches preclude end-to-end algorithm negotiation, and where, therefore, deprecated algorithms take longer to fade away—my survey of 7.6 signed domains shows that only ~15,000 are still using 512-bit keys, almost all hosted by just 3 small providers. The nagging will continue until even these are addressed.

  25. Scott Says:

    Viktor #24: OK, thanks very much for the update! I’m glad to hear that the situation has improved since I read about it a few years ago. From Alex Halderman, my childhood friend (and a lead author on Logjam), I hear a lot about when things are spectacularly broken (Alex’s favorite topic), and not quite as much about when they actually get fixed after he and others raised a stink about them. 😀

  26. Dmitri Says:

    Speaking of misconceptions. Are these statements true?

    1. Any algorithm taking a quantum state as an input requires several copies of that state.
    2. A quantum computer must be in coherence with its quantum input.

    Or, in other words, any quantum state is always associated with some reproducible preparation process, which must at least share entangled particles with the quantum processing device.

    So, I started doubting these statements once I came across such things as quantum money and the discussion of quantum advice in your “Quantum Computing since Democritus”. Suppose we indeed have quantum advice (which preexists in the universe and allows us to solve hard problems) safely concealed. Can we really use it? How do we make it coherent with our computer? How do we deal with the fact that it will be available only once?

  27. Scott Says:

    Dmitri #26: No, I don’t agree with either statement (insofar as I understand what you mean by them). There are often interesting things you can do given just a single copy of a quantum state—for example, verify it as a banknote, or use it as an advice state in Watrous’s group membership protocol. And these tasks need not even damage the state at all—so long as the output of your computation would be predictable given complete knowledge of the quantum state. (On the other hand, since you either don’t have complete knowledge of the quantum state, or you do but lack enough computational power to calculate the results of measuring it, you can still learn something interesting!) Also, in these sorts of protocols, there’s no need whatsoever for the money or the advice state to be pre-entangled with the quantum computer. The state presumably will become entangled with the QC’s internal qubits during the course of the quantum computation, but it doesn’t need to be entangled at the start.

  28. jonas Says:

    > Finally, there’s private-key cryptography […] which is hardly threatened by quantum computing at all: you can achieve the same level of security as before, we think, by simply doubling the key lengths.

    I’d like to mention that D. J. Bernstein, a respected security researcher, claims that there will be no practical attacks on private-key cryptography that are any easier with quantum computers than with classical computing alone. This would mean you won’t need to double key lengths just because of quantum attacks. You might, of course, still need to increase key lengths in some cryptographic system because of classical attacks. See “https://blog.cr.yp.to/20171017-collisions.html”.

  29. Scott Says:

    jonas #28: I just read that piece. Most of it is talking about finding collisions in hash functions, where, in contrast to pure Grover search, the known algorithm achieving a quantum speedup (the BHT algorithm) requires a large quantumly-accessible RAM. I have many disagreements with what Bernstein writes there—e.g., he compares a massively parallel classical algorithm against a sequential quantum algorithm, without even trying to explain or justify the discrepancy—but it’s mostly irrelevant to what I was talking about in the letter, which was attacking private-key crypto by pure brute-force search over the keys. There, the only counterargument Bernstein offers to the statement that Grover’s algorithm might force you to double the key lengths is that maybe, in practice, the constant-factor overheads (from quantum error correction, etc.) will be so large as to negate the Grover speedup. Which … well, yes, obviously. My point was: no one can rule out, on any theoretical ground, that QC will force us to increase the key lengths for private-key crypto to get the same level of security, but assuming it does, the increase is unlikely to be by more than about a factor of 2.
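
    For concreteness, the idealized query counts at issue (deliberately ignoring exactly the memory and constant-factor caveats being argued about):

    ```python
    # Idealized query counts, ignoring memory and constant-factor overheads:
    #  - brute-force search of a k-bit key: ~2^k classical, ~2^(k/2) with Grover
    #  - collisions in an n-bit hash: ~2^(n/2) classical (birthday), ~2^(n/3) with BHT,
    #    though BHT also needs a large quantumly-accessible memory
    k, n = 128, 256
    print(f"key search:  classical 2^{k}, Grover about 2^{k // 2}")
    print(f"collisions:  classical 2^{n // 2}, BHT about 2^{n // 3}")
    ```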

  30. Eric Cordian Says:

    Scott,

    We manage to do computational chemistry to great accuracy using classical techniques from linear algebra and functional analysis to approximate electron wavefunctions involved in chemical bonding, without needing special hardware that can model actual quantum superpositions.

    Could you enumerate some problems for which quantum simulation on a quantum computer would compute something useful in materials science with exponential speedup over a classical approach?

  31. The problem with gatekeepers Says:

    Thank you for sharing the note about Feynman. I read the letter and I was very moved by it. It confirms Feynman was the kind of genius we don’t see very much these days. In the letter, he pulled the opposite of this https://www.youtube.com/watch?v=ZbFM3rn4ldo . In the video he was trying to tell viewers that the scientific view of beauty is more complex than the experience the average flower gazer gets. And yet, in his letter to his dead wife, while acknowledging the incomprehensibility and irrationality of writing a letter to a dead person, he seems to cherish doing so.

    We human beings are too complex to be reduced to chemical reactions. And yet, this reductionist view of human beings is the one that seems to be the most popular among academics and intellectuals.

  32. John Sidles Says:

    Eric Cordian observes (circa #30)  “We manage to do computational chemistry to great accuracy using classical techniques from linear algebra and functional analysis …”

    Alán Aspuru-Guzik, Roland Lindh, and Markus Reiher survey recent radical advances in quantum simulation capability in their engaging review “The Matter Simulation (R)evolution” (ACS Central Science, 2018).

    Aspuru-Guzik, Lindh, and Reiher’s analysis richly supports both the hopes of quantum supremacists — “It’s incredible what quantum computers someday will accomplish” — and the expectations of quantum skepticists — “It’s incredible what classical computers already can accomplish”.

    The species of quantum skepticism that Aspuru-Guzik, Lindh, and Reiher’s review supports might reasonably be termed “quantum metaskepticism”, with a view to conveying the notion that quantum metaskepticism relates to quantum skepticism, as metamodernism relates to modernism (for a survey of metamodernism, see e.g. Hanzi Freinacht’s on-line essay “The Difference Between Post- and Meta-modernism”).

    Although metamodernist ideas and objectives are trendy among philosophers nowadays, the roots of metamodernism in scientific metaskepticism date at least as far back as Spinoza (circa 1664):

    (Ethics, Part III, Proposition 2)  “No one has hitherto laid down the limits to the powers of the body [that is, matter], that is, no one has as yet been taught by experience what the body [matter] can accomplish solely by the laws of nature, in so far as she is regarded as extension.”

    “No one hitherto has gained such an accurate knowledge of the bodily mechanism, that he can explain all its functions; nor need I call attention to the fact that many actions are observed in the lower animals, which far transcend human sagacity, and that somnambulists do many things in their sleep, which they would not venture to do when awake: these instances are enough to show, that the body can by the sole laws of its nature do many things which the mind wonders at.”

    In light of this celebrated Spinoza passage — oftimes shortened to “no one knows what matter can do” — we can be reasonably confident that Spinoza would have hilariously enjoyed the concrete analysis of Aspuru-Guzik, Lindh, and Reiher.

    Needless to say, there is plenty more to be said in regard to these richly enlightening, wonderfully entangled topics. Young researchers, especially, are fortunate to be beginning their scientific careers in an era in which the hopes of quantum supremacists and quantum (meta)skeptics alike are vibrantly alive, and alike are compatibly achievable.

  33. Viktor Dukhovni Says:

    Scott #25:
    Yeah, fixing things tends to be a bit less sexy… The other vulnerability that made a splash and helped to drive getting things fixed was DROWN. In an ironic twist of fate, I accidentally ended up a coauthor of the paper, with my name right next to Alex Halderman… [ While working on fixing things, I happened to notice that two separately reported vulnerabilities combined together to enable a much more effective attack. ]

    P.S. There’s a typo in my previous comment #24, the DANE survey covers 7.6 *million* signed domains, not just 7.6. As in TLS, in DNSSEC there are also opportunities for the unglamorous work of fixing things.

  34. James Gallagher Says:

    Eric Cordian #30

    See e.g. “Chemistry is quantum computing’s killer app”

    “If you have 125 orbitals and you want to store all possible configurations, then you need more memory in your classical computer than there are atoms in the universe,” says Matthias Troyer, who develops algorithms and applications for quantum computers at Microsoft Research in Zurich. But a quantum computer could model such a system with just 250 qubits.
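
    As a back-of-the-envelope illustration of those numbers (treating the 125 spatial orbitals as 250 spin-orbitals, each either occupied or empty, and assuming 16 bytes per stored complex amplitude):

    ```python
    # 125 spatial orbitals = 250 spin-orbitals, each occupied or empty, giving 2^250
    # basis states. Storing one complex amplitude per basis state (assume 16 bytes each):
    basis_states = 2**250
    print(f"{basis_states:.3e} amplitudes")   # ~1.8e75
    print(f"{16 * basis_states:.3e} bytes")   # ~2.9e76 bytes of classical memory
    # A quantum computer holds this state physically in 250 qubits instead of
    # writing the amplitudes down.
    ```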

  35. Scott Says:

    Eric #30: You ask an excellent question, and one that’s been receiving a great deal of concentrated attention over the last few years, by groups like Microsoft’s Quarc. You need problems in chemistry or condensed-matter that (1) people are highly economically motivated to solve, (2) they’ve been unable to solve so far, (3) are difficult for reasons of many-body QM, and (4) a QC could plausibly help with.

    My impression is that, if you look around for situations where the current approximation methods in quantum chemistry fail, it’s not hard to find examples. But then you need to ask the further question of how much insight you’ll get by being able to simulate those systems on a QC, as opposed to just studying them in the lab.

    Without passing judgment on plausibility, I can tell you the main potential examples of problems satisfying (1)-(4) that I’ve heard talked about in our community:

    – Designing better solar cells
    – Designing better batteries
    – Designing drugs that bind to a target receptor
    – Designing a way to produce fertilizer that’s less energy-intensive than the Haber process
    – Designing high-temperature superconductors

    There are many uncertainties with all of these: most obviously, do dramatically better compounds even exist for a computer search to discover, and could they be economically produced? If they do, how much is the difficulty of discovering them really related to many-body quantum effects that we can’t currently simulate? How much would a QC help?

    As I pointed out, though, these are also areas where a very small number of successes (maybe even one success) might justify the whole pursuit of QC-based quantum simulation.

    John Preskill’s NISQ article has more on these questions.

  36. Wochenrückblick 2018/23 – Raketenstiefel Says:

    […] Quantum computing for policymakers and philosopher-novelists: The three main areas of application for quantum computers are: 1. simulation of quantum physics and quantum chemistry to develop new materials and drugs; 2. decryption of today’s cryptographically secured information; 3. optimization of machine learning. […]

  37. Craig Says:

    Scott,

    Do you have any inside info about the 49 and 50 qubit quantum computers that Intel and IBM recently built? IBM announced less than a year ago that they had built one, yet we have not heard anything more.

  38. Scott Says:

    Craig #37: No, I don’t have inside info. I’m skeptical that 49-50 qubit devices already exist with anywhere near the level of control and coherence that we’d like for quantum supremacy experiments based on random circuit sampling. My sense is that building these things is extremely hard, and that some announcements were made prematurely (or hyped by the press beyond their importance) for PR/marketing reasons. Does anyone else have additional insight to provide?

  39. Richard Gaylord Says:

    scott #13.

    i don’t know what a ‘philosopher-novelist’ or ‘philosopher novelist’ is. if the term refers to an individual who presents their philosophical ideas in the form of a novel, then the google search of ‘philosopher novelist’ (which returns Philosopher > Novelist with no explanation as to what the ‘>’ means) lists only one individual who is clearly a philosopher-novelist. That is Ayn Rand, whose novels are simply a poorly written fictional pretext within which she very clearly states her philosophical views (see the 80-page speech by John Galt in “Atlas Shrugged”). i have no idea why the authors listed by google are considered ‘philosopher novelists’ since they don’t explicitly deal with philosophical issues in their novels. my confusion lies in the term philosopher. i recognize a novelist when i read one but what makes a person a philosopher is unclear unless they are discussing metaphysics or epistemology (or logic?). btw – the google list inexplicably does not include Hermann Hesse. nor does it include Robert Heinlein, with his depiction of an anarcho-capitalist society in “The Moon Is A Harsh Mistress”.

  40. SR Says:

    Scott, I know you’re a theorist but what’s your view on the role of boring mundane engineering in getting up coherence times, gate fidelities, etc. of these qubits?

    At a recent conference in the field, I heard a leading ion trap researcher state with almost a sneer of contempt that ‘Oh, we aren’t interested in the engineering, we just want to focus on the science!’ So I thought that was a bit.. concerning. And quite misguided, actually. Someone’s gotta work on the boring stuff to make sure the awesomely cool science actually sees light of day. I’d hate for us to come this close in the field, and then hit the valley of death just because we failed to immediately deliver on the hype.

  41. Michael G Says:

    Re: the Feynman movie — casting choices: I really think Jimmy Fallon looks like Feynman and might be able to pull off the “buffoon” side of him, as Freeman Dyson wrote to his mom.

  42. Shtetl-Optimized » Blog Archive » Scott’s Supreme Quantum Supremacy FAQ! Says:

    […] But the second issue is that, even in a hypothetical future with scalable, error-corrected QCs, on our current understanding they’ll only be able to crack some codes, not all of them. By an unfortunate coincidence, the public-key codes that they can crack include most of what we currently use to secure the Internet: RSA, Diffie-Hellman, elliptic curve crypto, etc. But private-key crypto should only be minimally affected. And there are even candidates for public-key cryptosystems (for example, based on lattices) that no one knows how to break quantumly after 20+ years of trying, and some efforts underway now to start migrating to those systems. For more, see for example my letter to Rebecca Goldstein. […]