Quantum computing news (98% Trump-free)

(1) Apparently Microsoft has decided to make a major investment in building topological quantum computers, which will include hiring Charles Marcus and Matthias Troyer among others.  See here for their blog post, and here for the New York Times piece.  In the race to implement QC among the established corporate labs, Microsoft thus joins the Martinis group at Google, as well as the IBM group at T. J. Watson—though both Google and IBM are focused on superconducting qubits, rather than the more exotic nonabelian anyon approach that Microsoft has long favored and is now doubling down on.  I don’t really know more about this new initiative beyond what’s in the articles, but I know many of the people involved; they’re some of the most serious in the business, and Microsoft intensifying its commitment to QC can only be good for the field.  I wish the new effort every success, despite being personally agnostic between superconducting qubits, trapped ions, photonics, nonabelian anyons, and other QC hardware proposals—whichever one gets there first is fine with me!

(2) For me, though, perhaps the most exciting QC development of the last month was a new preprint by my longtime friend Dorit Aharonov and her colleague Yosi Atia, entitled Fast-Forwarding of Hamiltonians and Exponentially Precise Measurements.  In this work, Dorit and Yosi wield the clarifying sword of computational complexity at one of the most historically confusing issues in quantum mechanics: namely, the so-called “time-energy uncertainty principle” (TEUP).

The TEUP says that, just as position and momentum are conjugate in quantum mechanics, so too are energy and time—with greater precision in energy corresponding to lesser precision in time and vice versa.  The trouble is, it was never 100% clear what the TEUP even meant—after all, time isn’t even an observable in quantum mechanics, just an external parameter—and, to whatever extent the TEUP did have a definite meaning, it wasn’t clear that it was true.  Indeed, as Dorit and Yosi’s paper discusses in detail, in 1961 Dorit’s uncle Yakir Aharonov, together with David Bohm, gave a counterexample to a natural interpretation of the TEUP.  But, despite this and other counterexamples, the general feeling among physicists—who, after all, are physicists!—seems to have been that some corrected version of the TEUP should hold “in all reasonable circumstances.”

But, OK, what do we mean by a “reasonable circumstance”?  This is where Dorit and Yosi come in.  In the new work, they present a compelling case that the TEUP should really be formulated as a tradeoff between the precision of energy measurements and circuit complexity (that is, the minimum number of gates needed to implement the energy measurement)—and in that amended form, the TEUP holds for exactly those Hamiltonians H that can’t be “computationally fast-forwarded.”  In other words, it holds whenever applying the unitary transformation e^{-iHt} requires close to t computation steps, i.e., when there’s no magical shortcut that lets you simulate t steps of time evolution with only (say) log(t) steps.  And, just as the physicists handwavingly thought, that should indeed hold for “generic” Hamiltonians H (assuming BQP≠PSPACE), although it’s possible to use Shor’s algorithm, for finding the order of an element in a multiplicative group, to devise a counterexample to it.
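
To make “fast-forwarding” concrete, here is a toy numpy sketch (my illustration, not the paper’s construction): once you have classically diagonalized a small Hamiltonian H, applying e^{-iHt} costs the same for any t, however large—exactly the kind of shortcut the amended TEUP says generic Hamiltonians shouldn’t admit. The matrix size and the function name U are of course just illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small random Hermitian "Hamiltonian" H on a 4-dimensional space.
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2

# Diagonalizing H once lets us "fast-forward": after this one-time cost,
# e^{-iHt} for ANY t is a single reweighting of the eigenbasis, with cost
# independent of t.
evals, V = np.linalg.eigh(H)

def U(t):
    return V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T

# Sanity checks: U(t) is unitary, evolutions compose additively in time,
# and a million unit-time steps cost the same as one.
I = np.eye(n)
assert np.allclose(U(3.7).conj().T @ U(3.7), I)
assert np.allclose(U(2.0) @ U(5.0), U(7.0))
assert np.allclose(U(1.0e6), np.linalg.matrix_power(U(1.0), 10**6))
```

The point of the paper is that for generic H, no analogue of this trick—nothing that beats simulating the evolution step by step—should exist, assuming BQP≠PSPACE.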

Anyway, there’s lots of other stuff in the paper, including a connection to the stuff Lenny Susskind and I have been doing about the “generic” growth of circuit complexity, in the CFT dual of an expanding wormhole (where we also needed to assume BQP≠PSPACE and closely related complexity separations, for much the same reasons).  Congratulations to Dorit and Yosi for once again illustrating the long reach of computational complexity in physics, and for giving me a reason to be happy this month!

(3) As many of you will have seen, my former MIT colleagues, Lior Eldar and Peter Shor, recently posted an arXiv preprint claiming a bombshell result: namely, a polynomial-time quantum algorithm to solve a variant of the Closest Vector Problem in lattices.  Their claimed algorithm wouldn’t yet break lattice-based cryptography, but if the approximation factors could be improved, it would be well on the way to doing so.  This has been one of the most tempting targets for quantum algorithms research for more than twenty years—ever since Shor’s “original” algorithm laid waste to RSA, Diffie-Hellman, elliptic-curve cryptography, and more in a world with scalable quantum computers, leaving lattice-based cryptography as one of the few public-key crypto proposals still standing.
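
For context on what Shor’s algorithm actually computes, here is a minimal classical sketch, with the quantum period-finding step replaced by brute force (the function names are mine, purely illustrative): finding the multiplicative order of an element mod N, and the classical post-processing that turns an even order into a factor.

```python
from math import gcd

def multiplicative_order(g, N):
    """Smallest r > 0 with g^r = 1 (mod N), by brute force. Finding r is
    the step Shor's quantum algorithm does in polynomial time; no
    polynomial-time classical method is known."""
    assert gcd(g, N) == 1
    r, x = 1, g % N
    while x != 1:
        x = (x * g) % N
        r += 1
    return r

def factor_from_order(g, N):
    """Shor's classical post-processing: if the order r is even and
    g^(r/2) != -1 (mod N), then gcd(g^(r/2) - 1, N) is a nontrivial
    factor of N."""
    r = multiplicative_order(g, N)
    if r % 2 == 1:
        return None
    y = pow(g, r // 2, N)
    if y == N - 1:
        return None
    return gcd(y - 1, N)

# Toy run: 7 has order 4 mod 15, which reveals the factor 3 of 15 = 3 * 5.
assert multiplicative_order(7, 15) == 4
assert factor_from_order(7, 15) == 3
```

The quantum speedup lives entirely in computing `multiplicative_order` for cryptographically-sized N; everything else is classical, which is why RSA falls to a scalable quantum computer while lattice problems (so far) do not.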

Unfortunately, Lior tells me that Oded Regev has discovered a flaw in the algorithm, which he and Peter don’t currently know how to fix.  So for now, they’re withdrawing the paper (because of the Thanksgiving holiday, the withdrawal won’t take effect on the arXiv until Monday).  It’s still a worthy attempt on a great problem—here’s hoping that they or someone else manage to, as Lior put it to me, “make the algorithm great again.”

65 Responses to “Quantum computing news (98% Trump-free)”

  1. Anonymous Says:

    Just loved the “make algorithm great again” part 🙂

  2. Matt Broome Says:

    There is also the Intel push at Delft in the Netherlands, supporting Vandersypen’s quantum dot program, amongst others. Only a cool $50M investment though, so probably doesn’t compare to the others 😉


  3. A Says:

    what is the flaw?

  4. Scott Says:

    A #3: Apparently it was with one of the measure-concentration lemmas. I don’t know more detail than that.

  5. Fred Mertz Says:

    Shtetl: a small Jewish town or village in eastern Europe.

    Optimized: made as perfect as possible.

    When will you tell us why you gave this blog that name?

  6. Vadim Says:

    Are there reasons to think that there should or shouldn’t be public key systems that can survive quantum attacks (assuming nothing crazy like NP being contained in BQP)? Or is the intuition weak either way?

  7. Tim Duty Says:

    I often wonder if quantum computing is being grossly oversold to the public. The Microsoft post mentions, “vast computing power that could address some of the world’s most pressing problems, from climate change and hunger to a multitude of medical challenges”. I hear this sort of thing often. Climate change, or at least its cause, can be understood with 19th-century chemistry and physics; Arrhenius pretty much nailed it, and the solution is simple.

  8. Felipe Says:

    Was there any word on publishing a writeup of the flaw?

  9. Scott Says:

    Felipe #8: I assume the arXiv retraction will explain something about it, but I’m not sure.

  10. Scott Says:

    Fred Mertz #5: I already have, several times. See for example my Scientific American interview.

  11. Scott Says:

    Vadim #6: For private-key cryptography, there’s a strong intuition that there should exist schemes that survive quantum attack: such schemes exist as long as quantum-secure one-way functions do, whose existence in turn seems almost as safe a bet as NP⊄BQP. For public-key crypto, by contrast, it’s basically just a question of which candidate schemes are on the table! I.e., as long as the lattice-based schemes stand, we’ll think quantum-secure PKC is likely possible. If those fall and nothing takes their place, the intuition will become much weaker.
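
As a toy illustration of the private-key point (a sketch, not a recommendation): here is a stream cipher whose security rests on nothing more than a hash function behaving as a (quantum-secure) pseudorandom function—there is no public-key structure for a Shor-type algorithm to exploit. SHA-256 is just a stand-in candidate here, and a real scheme would also need nonces and authentication.

```python
import hashlib
from itertools import count

def keystream(key: bytes, nbytes: int) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode as a candidate
    pseudorandom function. Breaking it would mean distinguishing the
    hash outputs from random, not solving any structured algebraic
    problem of the kind quantum algorithms are good at."""
    out = b""
    for ctr in count():
        if len(out) >= nbytes:
            break
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
    return out[:nbytes]

def xor_encrypt(key: bytes, msg: bytes) -> bytes:
    ks = keystream(key, len(msg))
    return bytes(m ^ k for m, k in zip(msg, ks))

key = b"a 32-byte shared secret key....."
ct = xor_encrypt(key, b"attack at dawn")
assert xor_encrypt(key, ct) == b"attack at dawn"   # XOR stream is its own inverse
```

The catch, of course, is the word “shared”: both parties need the key in advance, which is exactly the problem public-key cryptography was invented to solve.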

  12. Scott Says:

    Tim Duty #7: Indeed, that’s not the sort of language you’d ever find on this blog! The only question I face is whether to call it out or let it pass, which in turn depends on how much fire I know to be behind the smoke. In the case of Microsoft Station Q and the QuArC group, there’s plenty of fire.

    Like, yes, it’s possible that QCs will give us a general-purpose quantum simulation capability, which will help in designing higher-efficiency solar cells and carbon-sequestration techniques, which in turn will help solve global warming. But there are a few steps in between!

    And yes, as you say, in that case we also have a vastly simpler solution (stop doing so much of the bad stuff we’re doing), which we’ve simply lacked the political will to implement. (Though if you think of right-wing obstruction as a constraint as ironclad as the laws of physics, maybe you will need a revolution in quantum chemistry after all…)

  13. Joshua Zelinsky Says:

    One of Dorit Aharonov and Yosi Atia’s primary results is that if the 2-sparse row-computable Hamiltonians can be exponentially fast-forwarded, then the complexity class PSPACE equals BQP. This is a small fraction of what they label as physically realizable Hamiltonians.

    This raises a question which is not rigorous (and hopefully someone can turn it into a well-posed question): given a (presumably well-behaved) distribution on physically realizable Hamiltonians, when is a random collection able to be fast-forwarded? One would naively expect that an actual rigorous version of this question would lead to a 0-1 law depending on the distribution, and that for most reasonable distributions this would be 0 (especially in light of their Theorems 1 and 6, together with the fact that one would expect most Hamiltonians to be, in some sense, worse than the 2-sparse row-computable ones). But as stated this isn’t really a formal mathematical question.

  14. Tim Duty Says:

    Scott #12: Seems like quantum simulation has become the point of retreat when it is acknowledged that quantum computers will likely have very limited applications. Sure, it will be interesting to have a quantum simulator, but I question just how useful these will be for solving complex problems such as those involved in solar cell efficiency, or, for example, making HTSC wires cost-effective. Essentially, there would have to be a problem so simple that at the moment, its entire resolution is limited by classical simulation of a quantum system. Do such problems really exist, when considered in detail and context, that are genuinely significant? Also, simulation is not synonymous with understanding, as anyone who has used numerical methods knows.

  15. Scott Says:

    Tim #14: For me, quantum simulation and basic science (e.g. proving it’s possible at all) have been the biggest known applications since the beginning; it’s not a “point of retreat” because I’ve never wavered from that view or expressed a different one.

    There has been work (much of it, actually, from the Microsoft QuArC group) looking at specific applications of a quantum simulation capability, and even costing out how many qubits, etc. you’d need to learn various things of chemical or industrial interest. See this paper for example.

    It’s true that simulation is not understanding. But it’s also true that a crazy percentage of supercomputer time (here at UT, they told me 30% on the Stampede supercomputer) is used for quantum chemistry simulations. And when you talk to the people who do these simulations, they’re like, yes, absolutely, if we had a quantum computer we could learn a lot more.

  16. Tim Duty Says:

    Scott #15: Apologies as I was not pointing the finger at you. Obviously, from the banner at the top of the page, and your book (which I am currently reading), you are quite clear about what are likely outcomes of quantum computing. It’s just that I have never been to a talk or other QC event where the speaker says, “all right guys, I’ll be honest with you, the prospect of quantum computing really does address some very fundamental questions about knowledge and reality, but it’s likely to have only special purpose applications, quantum simulation being the main one, so we should really have a closer look at what to do with that.” The arxiv paper is a step in the right direction. I’d be interested to hear what relative value the authors of ref. 27 would place on such a calculation of transition rates.

  17. Scott Says:

    Tim #16:

      It’s just that I have never been to a talk or other QC event where the speaker says, “all right guys, I’ll be honest with you, the prospect of quantum computing really does address some very fundamental questions about knowledge and reality, but it’s likely to have only special purpose applications…

    So you haven’t been to any of my talks? 🙂

  18. Tim Duty Says:

    Not yet!

  19. Vadim Kosoy Says:

    Scott #11: IIUC, you and Vadim are talking about whether there are *classical* encryption schemes safe against quantum attack. What about *quantum* public-key cryptography? Is it much more likely to exist?

  20. Tim Duty Says:

    Anyway, getting back to the Microsoft announcement…it’s their money, so they can do what they want. I myself am nowhere near ready to conclude that we are at an “inflection point in which we are ready to go from research to engineering”, especially when it comes to experimental evidence for Majorana modes arising from inducing superconductivity into very disordered semiconductor nanowires…if that is what they are referring to. One finds literally a zoo of subgap states in these systems, see e.g. http://www.nature.com/nnano/journal/v9/n1/full/nnano.2013.267.html, so one needs to take care that one is not cherry-picking for signatures of “Majorana” zero modes. Until someone demonstrates braiding, or maybe even a 4π-periodic Josephson current, I will remain a skeptic.

  21. Osa Lunde Says:

    Tim #20: I wonder if something got a bit lost in the blog/article translation here, as MS also seems to acknowledge that the work ahead is not a given. From the blog post, and the links listed below it, my takeaway is as follows:

    They’ve been funding the research of Marcus, Troyer, Kouwenhoven, and Reilly for years now. I’m assuming on a contract basis, if that’s how university funding works? This stuff first got interesting for me because there was press about Microsoft giving millions of dollars to establish Station Q at Purdue University – so the new arrangement here must be similar? The website link already lists them as part of a Station Q consortium, apparently for some time now.

    Presumably, they’ve spent a lot of those years studying the “zoo of sub gap states” problem in these systems. If they’re so dedicated to this topological path, this move can’t be simply a reaction to what the competition is doing – even if we’ll likely see devices from other players first.

    They’re not totally over the technical hump, but they are now confident enough in their progress to snatch these four up and bring them into the family proper, and fund more permanent lab setups at their respective universities. This new, more significant funding would explain the P.R./Press articles, but not the reality of their engineering status. The true next steps would be to demonstrate actual qubits and braiding, but could we expect the article to tell us that :-). But I guess it makes sense to start planning these pieces now as a whole system (prototype qc), and not in isolation.

    It’s exciting, regardless.

  22. Tim Duty Says:

    Osa #21: In my humble opinion, hitching on to any particular technology at this point is very risky, but what else can you do, if getting in early is the key to winning? I’m willing to bet that any significant implementation of QC, including quantum simulation, is so far away at this point that when it does get here, it will look very different than what’s fashionable at the moment.

  23. Māris Ozols Says:

    Does P = BQP imply NP = QMA?

  24. Scott Says:

    Vadim #19:

      What about *quantum* public-key cryptography? Is it much more likely to exist?

    If you mean that a quantum computer is used to compute the encryption function, but the encrypted message is still classical: no, in 20+ years, I don’t think anyone has come up with any plausible candidates for public-key encryption schemes where that would help you (someone correct me if there’s a counterexample).

    If you mean that the encrypted messages are quantum: well, at that point, you probably might as well use quantum key distribution, which doesn’t depend on any computational assumptions at all!
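
    Here, as a toy aside, is what the famous sifting step of BB84 key distribution looks like in code—no eavesdropper, no error correction or privacy amplification, just the basis-matching bookkeeping, which is enough to see why QKD needs communication in both directions yet no computational assumptions:

```python
import random

def bb84_keybits(n=2000, seed=1):
    """Toy BB84 sifting sketch: Alice sends random bits in random bases;
    Bob measures in random bases; they publicly compare bases (not bits!)
    and keep only the positions where the bases matched."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]   # 0 = Z, 1 = X
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]

    sifted = []
    for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases):
        if ab == bb:
            sifted.append(bit)   # matching basis: Bob reads the bit exactly
        # mismatched basis: Bob's outcome would be 50/50 random, so discard
    return sifted

key = bb84_keybits()
# About half the positions survive basis sifting.
assert 800 < len(key) < 1200
```

    The security comes from quantum mechanics itself (an eavesdropper measuring in the wrong basis disturbs the states, which error-rate estimation detects), not from any problem being hard to compute.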

  25. Scott Says:

    Māris #23: Good question (even if off-topic)! No, I don’t know such an implication. The best I know is that if PromiseP=PromiseBQP, then NP=QCMA.

  26. Jay Says:

    Tim Duty #14,

    I don’t understand the logic of your point. Let’s apply it to CRISPR. Was there any (real-world) problem whose entire resolution was limited by the lack of CRISPR? Of course not, but do you have any doubts that it’s a game changer?

    In other words, do you consider it unlikely that, if some quantum simulator were developed, the practice of physics would change at least as much as genetics is changing right now because of CRISPR?

  27. Chris Says:

    Hi Scott,

    With regards to quantum computing applications, have there been any results that have altered your thoughts on quantum machine learning since your “Read the fine print” article?

  28. Scott Says:

    Chris #27: There was a paper by Iordanis Kerenidis and Anupam Prakash on quantum recommendation systems that I found quite striking. It’s another potential example of an “end-to-end” application of the phase estimation and HHL techniques for machine-learning-ish problems, of exactly the sort I was asking for in my “Read the Fine Print” essay. On the other hand, in both that case and in the others, I think the necessary work to rule out an efficient classical algorithm for the same task still remains to be done (and we could hopefully actually do so in this case, since it’s “just query complexity”). This is another problem that I’d love to work on myself in my copious free time. But everyone else, feel free to solve it first. 🙂

  29. jonas Says:

    Scott re #11, obvious followup questions. 1. Do you believe that lattice-based public key cryptography in particular is secure against quantum attacks? 2. Do you believe that there exist classical fully homomorphic encryption schemes that are secure against quantum attacks?

  30. Scott Says:

    jonas #29: I don’t know. Lattice-based schemes have by now withstood quantum attack for ~20 years, which is not a negligible amount of time. As long as they stand, the “default guess” is presumably yes to both questions, but it’s really no more than that. (Someone correct me if I’m wrong, but my understanding is that FHE no longer requires ideal lattices, as in Gentry’s original construction, but can now be based on a variety of assumptions that are close in spirit to the security of lattice-based crypto itself.) I don’t know whether Lior and Peter’s attempt should cause a Bayesian to update upwards or downwards, or neither.

  31. wolfgang Says:

    Scott, I guess this question is more for Gil, but let me ask it anyway: Do the arguments against scalable quantum computers (entangled noise etc.) even apply to a topological approach?

  32. Scott Says:

    wolfgang #31: Well, I haven’t met anyone who believes that noise will unavoidably kill non-topological QC, but not kill topological QC. As you said, it’s really a question for Gil, but the QC skeptics all seem perfectly happy to assert that their arguments, which I’ve never understood even in the non-topological case, should apply with minimal change in the topological case as well.

  33. Harold Says:

    Can Scott give an explanation of what P not equal to NP entails for quantum mechanics? Does it rule out all hidden-variable interpretations, including Bohmian mechanics?

  34. Vadim Kosoy Says:

    Scott #24: Interesting! However, quantum key distribution requires two-way communication (although it only needs to be quantum in one direction), whereas classical public-key cryptography only needs one-way communication for transmitting messages, once the keys are already established. So, maybe there is some interesting public-key cryptography protocol with quantum messages that also has this property?

  35. Scott Says:

    Harold #33: P≠NP is a purely mathematical conjecture. Assuming (as I do) that it holds, I don’t think it entails anything directly about the interpretation of quantum mechanics. But closely-related conjectures, such as BPP≠BQP, would rule out all hidden-variable theories that reproduce the predictions of quantum mechanics while using a polynomial amount of classical computation.

    Since Bohmian mechanics doesn’t demand only a polynomial amount of classical computation—on the contrary, it has the entire wavefunction of standard QM, plus the hidden variable on top of it—Bohmian mechanics isn’t affected by any of that. Now, I showed in 2005 that, if you want to see the entire history of a Bohmian-like hidden variable (i.e., not just where it is now, but everywhere it’s been in the past), then that requires resources that probably exceed what you could efficiently simulate using a standard computer. So, if you thought Nature should be “limited to the resources of standard quantum computation and no more,” then you could take that as a sort of argument against Bohmian mechanics. On the other hand, it’s interesting that even in the unphysical scenario where you could see the hidden variable’s entire history, it still looks unlikely that you could solve NP-complete problems in polynomial time.

  37. Vinod Says:

    Scott #30: (Leveled) Fully Homomorphic Encryption can now be based on essentially the same assumption as Oded R.’s public-key encryption scheme from 2005. That is, the learning with errors assumption, or alternatively, the (worst-case) hardness of approximating lattice shortest vectors to within (small) polynomial factors. “Leveled” here means that once you tell me a polynomial upper bound d on the depth of the circuits you want to homomorphically compute, I can tell you how to fix the parameters of the encryption scheme to allow depth-d computation.

    Reversing the quantifiers (i.e., an encryption scheme that can evaluate *any* polynomial depth circuits) still requires a somewhat odd and unsatisfying and non-standard “circular security” assumption.

    A (self-serving) link: https://eprint.iacr.org/2013/541.pdf
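
    To make the learning-with-errors assumption concrete, here is a toy single-bit, Regev-style encryption sketch (my illustration; parameters chosen for readability, not security, and in the real scheme the public key is a fixed list of samples rather than samples regenerated with the secret as done here for brevity):

```python
import random

# Toy single-bit, Regev-style LWE encryption (illustrative parameters only;
# a real scheme needs carefully chosen dimension, modulus, and noise).
n, q, m = 8, 257, 40      # dimension, modulus, samples summed per ciphertext
rng = random.Random(42)

s = [rng.randrange(q) for _ in range(n)]          # secret key

def lwe_sample():
    """One noisy inner-product sample (a, <a,s> + e mod q)."""
    a = [rng.randrange(q) for _ in range(n)]
    e = rng.randint(-1, 1)                        # small noise
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

def encrypt(bit):
    # Sum m samples, then shift the scalar part by bit * q//2.
    A, B = [0] * n, 0
    for _ in range(m):
        a, b = lwe_sample()
        A = [(x + y) % q for x, y in zip(A, a)]
        B = (B + b) % q
    return A, (B + bit * (q // 2)) % q

def decrypt(ct):
    A, B = ct
    d = (B - sum(ai * si for ai, si in zip(A, s))) % q
    return 0 if min(d, q - d) < q // 4 else 1     # near 0 -> 0, near q/2 -> 1

assert [decrypt(encrypt(b)) for b in (0, 1, 1, 0)] == [0, 1, 1, 0]
```

    Decryption works because the accumulated noise (at most m small errors) stays well below q/4, so the bit’s q/2 shift survives; the hardness claim is that without s, the ciphertexts look like uniformly random pairs.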

  38. Scott Says:

    Vinod #37: Thanks so much—super helpful!

  39. jonas Says:

    Scott and Vinod re #37: thanks, that sounds promising.

  40. asdf Says:

    Hey Scott, if you don’t mind a non-quantum question, this is about “is P vs NP independent?” again.

    Suppose you have a powerful effective theory T (e.g. T=ZFC+LC or whatever) and assume CON(T). Suppose T decides the P vs NP problem, i.e. if you start enumerating theorems of T, you’ll eventually find either “P=NP” or “P!=NP”. Call this proof X. Then (at least for the theories usually presented this way) you can check the correctness of X *using only Peano arithmetic*.

    I think that means PA + CON(T) proves that T decides P vs NP. So P vs NP can’t be independent of PA+CON(T) unless it’s also independent of T. You don’t need a monstrous theory like “PA + all true Pi-0-1 sentences”.

    Is this a) correct; b) interesting? It seems pretty straightforward but unless I missed something I couldn’t find it in http://www.scottaaronson.com/papers/pnp.pdf .


  41. asdf Says:

    Sorry, I mis-stated the above. I think I was still getting at something though. I’ll try to re-formulate.

  42. Scott Says:

    asdf #40: You might or might not have misstated something, but unless I’m mistaken, the broad point you’re getting at is correct. In other words: yes, PA can “simulate” much more powerful formal systems (e.g., proving all the same arithmetical statements), if you merely let it assume the consistency of such systems. I.e., at least in terms of which arithmetical theorems they prove, one can “concentrate” the power of systems for e.g. set theory into single consistency statements, and then just use the language of PA if one prefers that language. I don’t know that this tells us anything about the possible independence of P vs. NP, but yes, it’s a cool observation. Or do any experts want to disagree or add caveats?

  43. asdf Says:

    Scott, yes, I still think I misstated, but haven’t had a chance to think about it more.

    That question came out of another maybe-bogus idea: there’s a theorem of Leivant (1991) that, if I understand it right, presents a certain proof system S, such that given an arbitrary Turing machine M, S proves that M halts iff M actually runs in polynomial time. Now suppose P=NP. That means Levin’s universal search algorithm solves SAT in polynomial time, which means Leivant’s S proves that Levin’s algorithm halts. Enumerating the theorems of S should eventually find this proof, which I think is supposed to be checkable.

    That would seem to make P=NP a Pi-0-1 proposition rather than Pi-0-2. Can that be right? If it is, I’d expect the fact to be well known.

    Thanks again 🙂

  44. asdf Says:

    Note: the above would seem to contradict Rice’s theorem, fwiw. Maybe I misunderstand Leivant’s paper (doi: 10.1109/LICS.1991.151625 ), which I’m still trying to read.

  45. gentzen Says:

    Independent of whether I am an expert or not, I want to disagree or at least add caveats. Let T=PA+¬CON(PA) and assume CON(T). Obviously T decides the CON(PA) question (in the negative). It is easy to find a proof of this in T. Call this proof X. Then you can check the correctness of X *using only Peano arithmetic*.

    That means PA + CON(T) proves that T decides CON(PA). Nevertheless, CON(PA) is most probably independent of PA + CON(T). At least the conclusion that PA + CON(T) proves ¬CON(PA) is unjustified, since the standard model of PA is most probably also a model of PA + CON(T), and CON(PA) is true in the standard model.

    Not sure whether I am qualified to judge Scott’s broader point, but reverse mathematics is normally done in subsystems of second order arithmetic, and the “preferred” base systems are conservative over PRA (primitive recursive arithmetic), which is considerably weaker than PA. (It is widely agreed that all reasoning of PRA is finitist, while some people consider PA to be impredicative.) My personal guess is that the second order language cannot be avoided easily, at least for the nontrivial part of the work. Of course, you could try to resort to an axiom scheme consisting of all the theorems the strong system T proves about its natural numbers, but what would be your point then?

  46. Scott Says:

    asdf #43: I don’t think that can possibly be right. I can believe that Leivant’s system can only prove TMs halt if they halt in polynomial time. But if it were an “if and only if,” then the theorems in his system would not be recursively enumerable.

  47. Raoul Ohio Says:

    As a break from Trump news, we can share disappointment that the Mars One Consortium is delaying colonizing Mars from 2026 to 2031: http://phys.org/news/2016-12-mars-colonisation-red-planet.html
    Darn! I gotta change my vacation plans.

    In other news, Jeane Dixon has scheduled Armageddon for 2020, so even 2026 might be too late.

  48. Vladislav Says:

    And what will happen if the last remaining means of securing our privacy is broken by quantum computers? The author of this blog is worried that Trump will bring about a global catastrophe; what if it is he and his fellow QC scientists who plunge the world into chaos?

  49. Scott Says:

    Vladislav #48: In such a case, we could always switch to quantum key distribution. I wish we had such a backup in the case of Trump.

  50. Aula Says:

    Scott #49: How is quantum key distribution going to help with digital certificates? For security over the Internet, certificates are at least as important as encryption, but I don’t see any hope of implementing them using QKD.

  51. Scott Says:

    Aula #50: Actually, Rompel showed that secure digital signatures (and hence authentication) are possible under the sole assumption that one-way functions exist. And the belief in quantum-secure OWFs seems extremely safe, even supposing the lattice-based public key schemes were to fail someday.

    But besides that, once you use QKD to establish a shared secret key, why not use that key for authentication?
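
    To sketch that last point in code (a toy, assuming only an HMAC-suitable hash, i.e., nothing stronger than one-way functions): once a shared secret key exists, message authentication is a few lines.

```python
import hashlib
import hmac

# Once two parties share a secret key (say, established via QKD),
# authenticating messages needs only a hash assumed to be one-way.
key = b"32 bytes of QKD-derived secret.."

def tag(msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(msg: bytes, t: bytes) -> bool:
    # Constant-time comparison, to avoid leaking the tag byte by byte.
    return hmac.compare_digest(tag(msg), t)

t = tag(b"transfer 100 to alice")
assert verify(b"transfer 100 to alice", t)
assert not verify(b"transfer 900 to eve", t)
```

    An attacker without the key can’t forge a valid tag unless they can break the underlying hash, which is exactly the one-way-function assumption.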

  52. Aula Says:

    Scott #51: OK, two things. First, the way I interpreted Vladislav’s question is that there are no non-quantum one-way functions that can’t be reversed by quantum computers. I know it’s a very broad interpretation, but I don’t think any narrower interpretation can match what he actually wrote. And yes, of course it’s extremely unlikely to happen in practice, but that’s really not the point of his question.

    Second, the purpose of certificates is to prevent man-in-the-middle attacks. Now clearly you can do a QKD exchange with a MITM and thereby obtain a provably secure secret key that is known only to you and the MITM, but that obviously isn’t going to protect you against the MITM in any way whatsoever. For that you need prior knowledge (ie. a certificate) that you got by some other way than connecting to a website over the Internet. At present, root certificates typically are included with operating systems (which can be considered a safe way to distribute them), but in a world without public-key cryptography things would have to be done in different (and far more expensive) ways, and that would destroy most of the commerce on the net. I mean, big websites like amazon.com could afford to send you secret keys printed on paper, but most smaller sites couldn’t, which would force them out of business, because it would no longer be safe to buy anything from them.

  53. Scott Says:

    Aula #52: The context here was my discussion of Peter and Lior’s attempt to solve lattice problems using a QC, and then Vladislav’s implied accusation that QC researchers (rather than Trump!) are going to destroy civilization by making digital security impossible. Setting aside the many more obvious problems with that idea—e.g., if a quantum algorithm exists and you decide not to look for it, what makes you imagine that no one else will?—it seemed worth pointing out that even if lattice crypto fell, we’d still have QKD, as well as many, many other one-way function candidates, which is all you need if (for example) you just wanted authentication.

    Yes, it’s true that man-in-the-middle attacks can be used against QKD, but they can also be used against classical authentication schemes, if you don’t have a trusted public key authority! So, here’s an actual interesting technical question: if you do have a key authority, but you don’t have one-way functions, can QKD help you to get authentication that’s secure against man-in-the-middle attacks? Has anyone studied that? Thoughts welcome…

  54. Aula Says:

    Scott #53:

    So, here’s an actual interesting technical question: if you do have a key authority, but you don’t have one-way functions, can QKD help you to get authentication that’s secure against man-in-the-middle attacks?

    Well, “key authority” isn’t quite the right term to use here. The fundamental problem to solve is: how can you prove your identity to someone else? In the presence of one-way functions, that’s easy; you just use any digital signature scheme to create a signing key for yourself, and then you get your key signed by a root key, and then anyone who trusts that root key will automatically also trust your key. However, in the absence of one-way functions, there are no signature schemes and signing keys (at least AFAIK) and therefore no transitive trust from a root authority; you can’t prove your identity to someone else unless you have managed to create a shared secret with that someone else, by some method that both of you trust to not be vulnerable to MITM attacks. You will also need a lot of that shared secret, because once you use part of it, that part should never be used again, because otherwise you will be vulnerable to MITM. If you can get that far, then the answer to your question is yes; you can use QKD to get a key for an encrypted connection, where first you and your counterparty use parts of your shared secret to prove your identities to each other, and then communicate whatever it is that you want to communicate.
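
    The “use part of the shared secret once, then never again” bookkeeping described above can even be done with no one-way functions at all, via a Wegman-Carter-style one-time MAC. Here is a toy sketch (the prime, key values, and key-consumption details are illustrative):

```python
# One-time ("Wegman-Carter style") authentication: information-theoretically
# secure, so it needs NO one-way functions -- but it consumes a fresh key
# pair (a, b) from the shared secret for every single message.
P = (1 << 61) - 1          # a Mersenne prime; tags live mod P

def one_time_tag(msg: bytes, a: int, b: int) -> int:
    """Polynomial hash of msg keyed by a, masked by the one-time pad b.
    Each (a, b) pair must be used for ONE message only."""
    h = 0
    for byte in msg:
        h = (h * a + byte) % P
    return (h + b) % P

# In practice (a, b) would be drawn from the shared (e.g. QKD) secret.
a, b = 123456789, 987654321
t = one_time_tag(b"hello", a, b)
assert one_time_tag(b"hello", a, b) == t   # receiver recomputes and checks
assert one_time_tag(b"hellp", a, b) != t   # tampering changes the tag (w.h.p.)
```

    The forgery probability is information-theoretic (roughly message-length/P per message), which is why this style of MAC is the standard choice for authenticating the classical channel in QKD itself.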

  55. Justin Says:

    I understand the academic interest in solving hard problems, but what benefit do we gain by breaking all known public-key cryptography methods?

    Once that is done, and computers are built that can implement those attacks, those who control such computers will have unchecked spying powers over everyone else. Hardly an ideal situation. Is this really the case, or is there something important that I am missing?

  56. Scott Says:

    Justin #55: I think you’re missing several major things.

    First, quantum computers would have significant applications besides codebreaking, the most important of which is simulating chemistry and physics (to aid drug discovery, materials science, etc). It would seem a shame to give up on that, just like it would seem a shame to refuse to invent classical computers because they can also be used for nefarious things.

    Second, just because you choose not to invent something doesn’t mean no one else will. If (say) the US or UK or Canada chose not to invest in quantum computing, it wouldn’t mean RSA would remain secure forever; it would just mean that the first to break it would be Russia or China or whoever else. This is even more obvious if we’re just talking about discovering an algorithm, which anyone could in principle do using pen and paper.

    Third, the overwhelmingly likely scenario is that there ARE many forms of cryptography that will be secure even against quantum attack. But we won’t know which ones until we try our hardest to break things! The approach of “security through obscurity” (just not thinking about things too hard) has been tried, and found to lead to terrible results. That’s why modern cryptography encourages everyone to try as hard as they like to break things.

  57. asdf Says:

    If public-key cryptography becomes breakable but secret-key cryptography still works, then private communications become less convenient but they’re still doable. Right now we have encrypted content if we want it, but totally exposed metadata in almost all cases. That’s probably damaging privacy more than public-key crypto is helping it.

  58. Impossible Says:

    So you’re a computer scientist and yet your blog doesn’t even support HTTPS? I’m disappointed…

  59. Scott Says:

    Impossible #58: I’m a theoretical computer scientist. 🙂 As a Bluehost user, what would I need to do to add HTTPS support, and would it cost money?

  60. Impossible Says:

    @Scott: Great to hear that you want to switch to HTTPS! Maybe in the future you’ll even have an onion service 🙂 Fortunately there is now a FREE certificate authority known as “Let’s Encrypt” that will issue an SSL certificate for your website; here’s how you can do it: https://letsencrypt.org/getting-started/

  61. Dacyn Says:

    asdf #40: You did misstate something. What you said is right up to and including the point where you say “PA + CON(T) proves that T decides P vs NP”, but this does not allow you to conclude that PA + CON(T) decides P vs NP: from the perspective of PA + CON(T), T may prove or disprove P vs NP without P vs NP being true or false respectively.

    Gentzen #45: No, CON(PA) is not independent of PA + CON(T). Namely, since T is stronger than PA, CON(T) implies CON(PA), and thus PA + CON(T) proves CON(PA).

    Scott #42: Gentzen #45’s response to your comment is still partially correct: if T=PA+¬CON(PA), then T proves ¬CON(PA), while PA+CON(T) does not prove ¬CON(PA) (among other reasons, since PA+CON(T) is true in the standard model it cannot prove ¬CON(PA), which is false in the standard model). So in this case T can prove arithmetical statements that PA+CON(T) cannot prove, and there is no reason to suspect that the same is not true for other (more useful) extensions of PA.

    PA can still “simulate” more powerful axiom systems in the weak sense that PA can prove all true statements of the form “axiom system X proves theorem Y”. (Indeed, such statements can be proven in much weaker theories, such as PA without induction.) I think this is the strongest “simulation” claim that you could make.

  62. gentzen Says:

    Dacyn #61: You are right, the statement CON(T) for my example T=PA+¬CON(PA) implies CON(PA). I should have used the example TR=PRA+¬CON(PA) instead. Then CON(PA) is (most probably) independent of PA + CON(TR).

    (Indeed, such statements can be proven in much weaker theories, such as PA without induction.)

    PA without induction (Robinson arithmetic) is not so great: it cannot even prove x + y = y + x or Sx ≠ x. I would use
    IΔ_0, arithmetic with induction on Δ_0-predicates, or
    IΣ_1, arithmetic with induction on Σ_1-predicates,
    if the goal is to stay closer to PA than PRA. They have proof-theoretic ordinals ω² and ω^ω (like PRA), respectively.

  63. Dacyn Says:

    Gentzen #62: Even though Robinson arithmetic can’t prove some very basic Π_1 statements, it can prove all true Σ_1 statements, and in particular all true statements of the form “axiom system X proves theorem Y” (well, assuming that the encoding of finite set theory into arithmetic is reasonable enough to respect the class of Σ_1 statements; I have not seen Gödel’s original construction but hear that it is complicated). But yes, for other purposes Robinson arithmetic is not necessarily so good.
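    The key fact behind this exchange is Σ_1-completeness, which already holds for Robinson arithmetic Q: “X proves Y” asserts the existence of a proof code, hence is Σ_1, and every true Σ_1 sentence is provable in Q. In symbols (a standard formulation, with Prf_X the proof predicate for the system X):

```latex
% "X proves Y" is \Sigma_1: it asserts that some proof code p exists.
\mathrm{Prov}_X(\ulcorner Y \urcorner) \;\equiv\; \exists p \, \mathrm{Prf}_X(p, \ulcorner Y \urcorner)

% \Sigma_1-completeness: true \Sigma_1 sentences are provable already in Q.
\text{If } \mathbb{N} \models \sigma \text{ and } \sigma \in \Sigma_1, \text{ then } Q \vdash \sigma.
```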

  64. gentzen Says:

    Dacyn #63: I had some bad memories regarding Robinson arithmetic, which might be summarised as “Robinson arithmetic is not a nice and round formalisation of arithmetic”. I never really tried to work with IΔ_0 or IΣ_1, but I guess that they will be nicer and smoother. I do know in which sense PRA is nice and smooth, and how its presentation can make it look much more daunting than it really is. And I am aware that “PRA+¬CON(PA)” is unclear, since it is not obvious whether CON(PA) can be expressed in a language without quantifiers, and ¬CON(PA) makes no sense at all (without quantifiers in the language).

    But “Q+¬CON(PA)” is also unclear, since Robinson arithmetic (Q) is probably unable to prove that the different reasonable encodings of ¬CON(PA) are equivalent. Not sure whether IΣ_1 is better in this respect, since I am not even sure whether PA is able to prove the equivalence of different (reasonable) encodings.

  65. Dacyn Says:

    Gentzen #64: I think we agree now.