Archive for the ‘Quantum’ Category

Teaching quantum in junior high: special Thanksgiving guest post by Terry Rudolph

Thursday, November 22nd, 2018

Happy Thanksgiving!

People have sometimes asked me: “how do you do it?  how do you do your research, write papers, teach classes, mentor grad students, build up the quantum center at UT, travel and give talks every week or two, serve on program committees, raise two rambunctious young kids, and also blog and also participate in the comments and also get depressed about people saying mean things on social media?”  The answer is that increasingly I don’t.  Something has to give, and this semester, alas, that something has often been blogging.

And that’s why, today, I’m delighted to have a special guest post by my good friend Terry Rudolph.  Terry, who happens to be Erwin Schrödinger’s grandson, has done lots of fascinating work over the years in quantum computing and the foundations of quantum mechanics, and previously came up on this blog in the context of the PBR (Pusey-Barrett-Rudolph) Theorem.  Today, he’s a cofounder and chief architect at PsiQuantum, a startup in Palo Alto that’s trying to build silicon-photonic quantum computers.

Terry’s guest post is about the prospects for teaching quantum theory at the junior high school level—something he thought about a lot in the context of writing his interesting recent book Q is for Quantum.  I should stress that the opinions in this post are Terry’s, and don’t necessarily reflect the official editorial positions of Shtetl-Optimized.  Personally, I have taught the basics of quantum information to sharp junior high and high school students, so I certainly know that it’s possible.  (By undergrad, it’s not only possible, but maybe should become standard for both physics and CS majors.)  But I would also say that, given the current state of junior high and high school education in the US, it would be a huge step up if most students graduated fully understanding what’s a probability, what’s a classical bit, what’s a complex number, and any of dozens of other topics that feed into quantum information—so why not start by teaching the simpler stuff well?  And also, if students don’t learn the rules of classical probability first, then how will they be properly shocked when they come to quantum? 🙂

But without further ado, here’s Terry—who’s also graciously agreed to stick around and answer some comments.


Can we/should we teach Quantum Theory in Junior High?

by Terry Rudolph

Should we?

Reasons which suggest the answer is “yes” include:

Economic: We are apparently headed for a labor-market shortage of quantum engineers.  We should not, however, need the recent hype around quantum computing to make the economic case – the frontier of many disparate regions of the modern science and technology landscape is quantum.  Surely if students do decide to drop out of school at 16 they should at least be equipped to get an entry-level job as a quantum physicist?

Educational: If young people’s first exposures to science are counterintuitive and “cutting edge,” it could help excite them into STEM.  The strong modern quantum information theoretic connections between quantum physics, computer science and math can help all three subjects constructively generate common interest.

Pseudo-Philosophical: Perhaps our issues with understanding/accepting quantum theory are because we come to it late and have lost the mental plasticity for a “quantum reset” of our brain when we eventually require it late in an undergraduate degree.  It may be easier to achieve fluency in the “language of quantum” with early exposure.

Can we?

There are two distinct aspects to this question: Firstly, is it possible at the level of “fitting it in” – training teachers, adjusting curricula and so on?  Secondly, can a nontrivial, worthwhile fraction of quantum theory even be taught at all to pre-calculus students?

With regards to the first question, as the child of two schoolteachers I am very aware that an academic advocating for such disruption will not be viewed kindly by all.  As I don’t have relevant experience to say anything useful about this aspect, I have to leave it for others to consider.

Let me focus for the remainder of this post on the second aspect, namely whether it is even possible to appropriately simplify the content of the theory.  This month it is exactly 20 years since I lectured the first of many varied quantum courses I have taught at multiple universities. For most of that period I would have said it simply wasn’t possible to teach any but the most precocious of high school students nontrivial technical content of quantum theory – despite some brave attempts like Feynman’s use of arrows in QED: The Strange Theory of Light and Matter (a technique that cannot easily get at the mysteries of two-particle quantum theory, which is where the fun really starts).  I now believe, however, that it is actually possible.

A pedagogical method covering nontrivial quantum theory using only basic arithmetic

My experience talking about quantum theory to 12-15 year olds has only been in the idealized setting of spending a few hours with them at science fairs, camps and similar.  In fact it was on the way to a math camp for very young students, desperately trying to plan something non-trivial to engage them with, that I came up with a pedagogical method which I (and a few colleagues) have found does work.

I eventually wrote the method into a short book, Q is for Quantum, but if you don’t want to purchase the book then here is a pdf of Part I, which takes a student knowing only the rules of basic arithmetic through to learning enough quantum computing to understand the Deutsch–Jozsa algorithm.  In fact, not only can they do a calculation to see how it works in detail, they can appreciate conceptual nuances often under-appreciated in popular expositions: why gate speed doesn’t matter (it’s all about the number of steps), why classical computing can also have exponential growth in “possible states” so that interference is critical, why quantum computers do not compute the uncomputable, and so on.
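For readers who like to see the machinery run, here is a minimal Python sketch of Deutsch’s algorithm, the single-bit core of Deutsch–Jozsa: one oracle query suffices to decide whether f: {0,1} → {0,1} is constant or balanced.  This uses explicit 4-amplitude state vectors rather than the book’s ball diagrams, and all function names here are mine, not the book’s:

```python
from math import sqrt
from itertools import product

H = [[1 / sqrt(2), 1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]

def apply_h(state, qubit):
    """Apply the Hadamard gate to one qubit of a 2-qubit state.
    Amplitudes are indexed as state[2*x + y] for basis state |x, y>."""
    new = [0.0] * 4
    for x, y in product(range(2), repeat=2):
        bit = x if qubit == 0 else y
        for out in range(2):
            idx = 2 * out + y if qubit == 0 else 2 * x + out
            new[idx] += H[out][bit] * state[2 * x + y]
    return new

def apply_oracle(state, f):
    """U_f |x, y> = |x, y XOR f(x)>: one reversible query to f."""
    new = [0.0] * 4
    for x, y in product(range(2), repeat=2):
        new[2 * x + (y ^ f(x))] += state[2 * x + y]
    return new

def deutsch(f):
    """Decide whether f is constant or balanced with a single query."""
    state = [0.0, 1.0, 0.0, 0.0]          # start in |0>|1>
    state = apply_h(state, 0)
    state = apply_h(state, 1)
    state = apply_oracle(state, f)
    state = apply_h(state, 0)
    # Measure the first qubit: it reads 1 with certainty iff f is balanced.
    p1 = state[2] ** 2 + state[3] ** 2
    return "balanced" if p1 > 0.5 else "constant"

print(deutsch(lambda x: 0))      # constant
print(deutsch(lambda x: x))      # balanced
```

Note that the classical analogue needs two queries; the quantum version gets by with one precisely because of the interference that the book’s signed-ball diagrams make visible.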

Before pointing out a few features of the approach, here are some rules I set myself while writing the book:

  • No analogies, no jargon – if it can’t be explained quantitatively then leave it out.
  • No math more than basic arithmetic and distribution across brackets.
  • Keep clear the distinction between mathematical objects and the observed physical events they are describing.
  • Be interpretationally neutral.
  • No soap opera: Motivate by intriguing with science, not by regurgitating quasi-mythological stories about the founders of the theory.
  • No using the word “quantum” in the main text! This was partly to amuse myself, but I also thought if I was succeeding in the other points then I should be able to avoid a word almost synonymous with “hard and mysterious.”

One of the main issues to confront is how to represent and explain superposition.  It is typical in popular expositions to draw analogies between a superposition of, say, a cat which is dead and a cat which is alive by saying it is dead “and” alive.  But if superposition were equivalent to logical “and”, or, for that matter, logical “or”, then quantum computing wouldn’t be interesting, and in this and other ways the analogy is ultimately misleading.  The approach I use is closer to the latter – an unordered list of possible states for a system (which is most like an “or”) can be used to represent a superposition.  Using a list has some advantages – it is natural to apply a transformation to all elements of a list, for instance doubling the list of ingredients in a recipe.  More critically, given two independent lists of possibilities, the new joint list of combined possibilities is a natural concept.  This makes teaching the equivalent of the Kronecker (tensor) product for multiple systems easy, something that is often a bit tricky even for undergrads to become comfortable with.
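As a sketch of just how natural the joint-list operation is, here it is in a few lines of Python.  The encoding of balls as letters ('B' for black, 'W' for white) and the function name are mine, not the book’s:

```python
from itertools import product

# A "mist" (superposition) is an unordered list of possible configurations.
ball_one = ['B', 'W']     # a ball that could be black or white
ball_two = ['W']          # a ball that is definitely white

def combine(mist1, mist2):
    """Joint list of possibilities for two independent systems: every
    pairing of an entry of mist1 with an entry of mist2.  This plays
    the role of the Kronecker (tensor) product."""
    return [a + b for a, b in product(mist1, mist2)]

print(combine(ball_one, ball_two))         # ['BW', 'WW']
print(combine(['B', 'W'], ['B', 'W']))     # ['BB', 'BW', 'WB', 'WW']
```

A student who has ever doubled a recipe already has the intuition for this operation, which is exactly the point.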

Conceptually the weirdest part of the whole construction, particularly for someone biased by the standard formalism, is that I use a standard mathematical object (a negative or minus sign) applied to a diagram of a physical object (a black or white ball).  Moreover, positive and negative balls in a diagram can cancel out (interfere).  This greatly simplifies the exposition, by removing a whole level of abstraction in the standard theory (we do not need to use a vector containing entries whose specific ordering must be remembered in order to equate them to the physical objects).  While it initially seemed odd to me personally to do this, I have yet to have any young person think of it as any more weird than using the negative sign on a number.  And if it is always kept clear that drawing and manipulating the whole diagram is an abstract thing we do, which may or may not have any correspondence to what is “really going on” in the physical setups we are describing, then there really is no difference.

There are some subtleties about the whole approach – while the formalism is universal for quantum computing, it can only make use of unitary evolutions that are proportional to matrices with integer entries.  Thus the Hadamard gate (PETE box) is ok, the Controlled-NOT and Toffoli likewise, but a seemingly innocuous gate like the controlled-Hadamard cannot be incorporated (without adding a whole bunch of unintuitive and unjustified rules).  The fact that the approach covers a universal gate set means some amazing things can be explained in this simple diagrammatic language.  For example, the recent paper Quantum theory cannot consistently describe the use of itself, which led to considerable discussion on this blog, can be fully reproduced.  That is, a high school student can in principle understand the technical details of a contemporary argument between professional physicists.  I find this amazing.
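To make the cancellation rule concrete, here is a minimal Python sketch of signed balls passing through PETE boxes.  The rule W → W + B, B → W − B follows the book’s convention (up to the overall normalization, which the book also drops); the tuple representation and function names are mine:

```python
from collections import Counter

def pete(sign, color):
    """PETE box (a Hadamard-like gate) acting on a single signed ball:
    W -> W + B,  B -> W - B  (normalization omitted)."""
    if color == 'W':
        return [(sign, 'W'), (sign, 'B')]
    return [(sign, 'W'), (-sign, 'B')]

def apply_pete(mist):
    """Send every possibility in the list through a PETE box, then cancel
    equal-and-opposite entries -- this cancellation is interference."""
    totals = Counter()
    for sign, color in mist:
        for s, c in pete(sign, color):
            totals[c] += s
    out = []
    for color, n in totals.items():
        out.extend([(1 if n > 0 else -1, color)] * abs(n))
    return out

# One PETE box puts a white ball into a mist of white and black...
once = apply_pete([(1, 'W')])
print(once)                      # [(1, 'W'), (1, 'B')]

# ...but a second PETE box makes the +B and -B possibilities cancel,
# so the ball comes out white with certainty:
print(apply_pete(once))          # [(1, 'W'), (1, 'W')]
```

The two-boxes-in-a-row example is exactly the kind of calculation a student armed only with arithmetic and distribution across brackets can do by hand; the code merely automates the bookkeeping.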

Based on communication with readers I have come to realize that the people most at risk of being confused by the book are actually those already with a little knowledge – someone who has done a year or two’s worth of undergraduate quantum courses, or someone who has taken things they read in pop-sci books too literally.  Initially, as I was developing the method, I thought it would be easy to keep “touching base” with the standard vector space formalism.  But in fact it becomes very messy to do so (and irrelevant for someone learning quantum theory for the first time).  In the end I dropped that goal, but now realize I need to develop some supplementary notes to help someone in that situation.

Q is for Quantum is certainly not designed to be used as a classroom text – if nothing else my particular style and choice of topics will not be to others’ tastes, and I haven’t included all the many, many simple examples and exercises I have students doing along with me in class when I actually teach this stuff.  It should be thought of as more a “proof of principle,” that the expository challenge can be met.  Several colleagues have used parts of these ideas already for teaching, and they have given me some great feedback.  I am therefore planning on doing a revised and slightly expanded version at some point, so if you read it and have thoughts for improvement please send them to me.

Ten updates

Wednesday, November 7th, 2018

If you like quantum, complexity, etc., then please read to the end! I’ve gotten a bunch of emails lately of the form “why haven’t you ever blogged about such-and-such?,” when it turned out that I damn well did blog about it; it was just somewhere down in a multi-item post.

1. Like many of you, I watched the US midterm election results with (mostly…) disappointment and dismay.  I think that history will judge us harshly for not totally and unequivocally rebuking everything Trump stands for and every politician associated with him.  But that’s not what I wanted to blog about today.

2. There was a breakthrough in communication complexity by Arkadev Chattopadhyay, Nikhil Mande, and Suhail Sherif: the first exponential separation between randomized communication complexity and log approximate rank for a total Boolean function f.  This falsifies the longstanding conjecture that these measures are polynomially related (though it doesn’t resolve the original log rank conjecture).  For those of you keeping score at home, the quantum communication complexity of f is sandwiched in between randomized CC and log approximate rank.  So, at least one of the following must now be true: either randomized CC is exponentially separated from quantum CC, or else quantum CC is exponentially separated from log approximate rank.  My money’s on the latter.

3. Ewin Tang, who achieved fame with a quantum-inspired classical algorithm for recommendation systems (which I blogged about in July), is now back with quantum-inspired classical algorithms for principal component analysis and supervised clustering.  Well, with the announcements of such algorithms; details of the analysis are to come later.

4. A bunch of people asked me about the paper by Sergey Bravyi, David Gosset, and Robert Koenig, Quantum advantage with shallow circuits.  tl;dr: it’s great!  And it was deservedly a highlight of the QIP conference back in January!  That’s why it confused me when everyone started asking about it a couple weeks ago.  The resolution is that the paper was just recently published in Science magazine, which led to popular coverage like this, which in turn led to people asking me whether this result unconditionally proves P≠BQP (that is, quantum computers can solve more problems in polynomial time than classical computers), and if not why not.  The answer is no: the paper proves an unconditional separation, but one that’s a long long way from P≠BQP, or anything else that would entail solving the central open problems of complexity theory like P vs. PSPACE.  Basically, it shows there are problems solvable in constant time with a quantum computer that aren’t solvable in constant time classically, for suitable meanings of “problem” (namely, a relation problem) and “in constant time” (namely, NC0 circuits, in which each output bit depends on only a constant number of input bits).  I understand that a stronger separation has since been achieved, between quantum NC0 and classical AC0, in work that’s not yet on the arXiv.  The problems in question, however, are all easy to solve in P, or even in classical logarithmic time, given a polynomial number of parallel processors.

5. A bunch of people also asked me about the paper by Xun Gao and Luming Duan, Efficient classical simulation of noisy quantum computation.  This paper tries to formalize something that many of us have suspected/feared for years, namely that random quantum circuits (the whole thing is specific to random circuits) can tolerate only a tiny amount of noise and decoherence before they become efficiently simulable classically.  If true, this has obvious implications for the sampling-based quantum supremacy experiments that Google and others are planning for the next few years: namely, that all the engineering effort they’ve already been investing anyway to push down the noise rate, will actually be necessary!  However, correspondence with the authors revealed that there’s a significant gap in the analysis as it currently stands: namely, the current proof applies only to closed quantum systems, which would (for example) rule out all the techniques that people eventually hope to use to achieve quantum fault-tolerance—all of which are based on constantly measuring subsets of the qubits, doing essentially error-free classical computation on the measurement outcomes, throwing away noisy qubits, and pumping in fresh qubits.  Gao and Duan say that they’re currently working on an extension to open systems; in my personal view, such an extension seems essential for this interesting result to have the informal interpretation that the authors want.

6. My friend Bram Cohen asked me to announce that his company, Chia, has launched a competition for best implementation of its Verifiable Delay Functions (VDFs), with real money rewards.  You can find the details at this Github page.

7. The second Q2B (“Quantum 2 Business”) conference, organized by QC Ware Corp., will be held this coming December 10-12, at the Computer History Museum in Mountain View.  There will be two keynote addresses, one by John Preskill and the other by me.  I hope I’ll get a chance to meet some of you there!

8. Longtime colleague and friend-of-the-blog Ashwin Nayak asked me to announce that the 2019 Conference on Computational Complexity, to be held July 18-20 in exciting New Brunswick, NJ, is now accepting submissions.  I hope to be there!

9. OK, what the hell: the 21st annual, and nearly impossible to capitalize correctly, SQuInT (Southwest Quantum Information and Technology) workshop will be held February 2019 in Albuquerque, NM.  UT Austin is now a node of the SQuInT network, and I’ll hopefully be attending along with a posse of students and postdocs.  The deadline for abstract submission is coming up soon: Monday November 12!

10. I went to morning Shabbat services in Austin this past weekend, exactly one week after the tragedy in Pittsburgh.  There was massively increased security, with armed guards interrogating us, Israeli-style, about why we had no membership sticker on our car, whether we knew the name of the rabbi, etc.  Attendance was maybe a factor of three higher than usual.  About thirty priests, ministers, and Christian seminary students, and one Muslim, came up to the bima to say a prayer of solidarity with Jews.  The mayor of Austin, Steve Adler, was also on hand to give a speech.  Then the rabbi read a letter to the synagogue by Sen. Ted Cruz denouncing antisemitism (well, parts of it; he said the letter was really long).  There were murmurs of disapproval from the crowd when Cruz’s name was mentioned, but then everyone quieted down and listened.  The thing is, the US and large parts of the world are now so far outside the norms of liberal democracy, in territory so terrifyingly uncharted since the end of World War II, that “shooting up synagogues is bad” is actually something it’s better for powerful people to affirm explicitly than not.  Anyway, while I’m neither a believer nor much of a synagogue-goer, I found the show of unity moving.

It’s hard to think when someone Hadamards your brain

Tuesday, September 25th, 2018

“Unperformed measurements have no results.” —Asher Peres


With two looming paper deadlines, two rambunctious kids, an undergrad class, program committee work, faculty recruiting, and an imminent trip to Capitol Hill to answer congressional staffers’ questions about quantum computing (and for good measure, to give talks at UMD and Johns Hopkins), the only sensible thing to do is to spend my time writing a blog post.

So: a bunch of people asked for my reaction to the new Nature Communications paper by Daniela Frauchiger and Renato Renner, provocatively titled “Quantum theory cannot consistently describe the use of itself.”  Here’s the abstract:

Quantum theory provides an extremely accurate description of fundamental processes in physics.  It thus seems likely that the theory is applicable beyond the, mostly microscopic, domain in which it has been tested experimentally.  Here, we propose a Gedankenexperiment to investigate the question whether quantum theory can, in principle, have universal validity.  The idea is that, if the answer was yes, it must be possible to employ quantum theory to model complex systems that include agents who are themselves using quantum theory.  Analysing the experiment under this presumption, we find that one agent, upon observing a particular measurement outcome, must conclude that another agent has predicted the opposite outcome with certainty.  The agents’ conclusions, although all derived within quantum theory, are thus inconsistent.  This indicates that quantum theory cannot be extrapolated to complex systems, at least not in a straightforward manner.

I first encountered Frauchiger and Renner’s argument back in July, when Renner (who I’ve known for years, and who has many beautiful results in quantum information) presented it at a summer school in Boulder, CO where I was also lecturing.  I was sufficiently interested (or annoyed?) that I pulled an all-nighter working through the argument, then discussed it at lunch with Renner as well as John Preskill.  I enjoyed figuring out exactly where I get off Frauchiger and Renner’s train—since I do get off their train.  While I found their paper thought-provoking, I reject the contention that there’s any new problem with QM’s logical consistency: for reasons I’ll explain, I think there’s only the same quantum weirdness that (to put it mildly) we’ve known about for quite some time.

In more detail, the paper makes a big deal about how the new argument rests on just three assumptions (briefly, QM works, measurements have definite outcomes, and the “transitivity of knowledge”); and how if you reject the argument, then you must reject at least one of the three assumptions; and how different interpretations (Copenhagen, Many-Worlds, Bohmian mechanics, etc.) make different choices about what to reject.

But I reject an assumption that Frauchiger and Renner never formalize.  That assumption is, basically: “it makes sense to chain together statements that involve superposed agents measuring each other’s brains in different incompatible bases, as if the statements still referred to a world where these measurements weren’t being done.”  I say: in QM, even statements that look “certain” in isolation might really mean something like “if measurement X is performed, then Y will certainly be a property of the outcome.”  The trouble arises when we have multiple such statements, involving different measurements X1, X2, …, and (let’s say) performing X1 destroys the original situation in which we were talking about performing X2.

But I’m getting ahead of myself.  The first thing to understand about Frauchiger and Renner’s argument is that, as they acknowledge, it’s not entirely new.  As Preskill helped me realize, the argument can be understood as simply the “Wigner’s-friendification” of Hardy’s Paradox.  In other words, the new paradox is exactly what you get if you take Hardy’s paradox from 1992, and promote its entangled qubits to the status of conscious observers who are in superpositions over thinking different thoughts.  Having talked to Renner about it, I don’t think he fully endorses the preceding statement.  But since I fully endorse it, let me explain the two ingredients that I think are getting combined here—starting with Hardy’s paradox, which I confess I didn’t know (despite knowing Lucien Hardy himself!) before the Frauchiger-Renner paper forced me to learn it.

Hardy’s paradox involves the two-qubit entangled state

$$\left|\psi\right\rangle = \frac{\left|00\right\rangle + \left|01\right\rangle + \left|10\right\rangle}{\sqrt{3}}.$$

And it involves two agents, Alice and Bob, who measure the left and right qubits respectively, both in the {|+〉,|-〉} basis.  Using the Born rule, we can straightforwardly calculate the probability that Alice and Bob both see the outcome |-〉 as 1/12.

So what’s the paradox?  Well, let me now “prove” to you that Alice and Bob can never both get |-〉.  Looking at |ψ〉, we see that conditioned on Alice’s qubit being in the state |0〉, Bob’s qubit is in the state |+〉, so Bob can never see |-〉.  And conversely, conditioned on Bob’s qubit being in the state |0〉, Alice’s qubit is in the state |+〉, so Alice can never see |-〉.  OK, but since |ψ〉 has no |11〉 component, at least one of the two qubits must be in the state |0〉, so therefore at least one of Alice and Bob must see |+〉!
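The 1/12 figure is a direct Born-rule computation, and can be checked in a few lines of plain Python (no quantum library needed; the variable names are mine):

```python
from math import sqrt

# Amplitudes of |psi> = (|00> + |01> + |10>)/sqrt(3), indexed by (a, b).
psi = {(0, 0): 1 / sqrt(3), (0, 1): 1 / sqrt(3),
       (1, 0): 1 / sqrt(3), (1, 1): 0.0}

# Overlaps with the |-> state: <-|0> = 1/sqrt(2), <-|1> = -1/sqrt(2).
def minus(bit):
    return (1 if bit == 0 else -1) / sqrt(2)

# Born rule: amplitude for Alice AND Bob both obtaining |->,
# then square it to get the probability.
amp = sum(minus(a) * minus(b) * c for (a, b), c in psi.items())
print(amp ** 2)   # 0.0833... = 1/12
```

The |00〉 term contributes with a plus sign while the |01〉 and |10〉 terms contribute with minus signs, leaving a net amplitude of −1/(2√3), whose square is 1/12.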

When it’s spelled out so plainly, the error is apparent.  Namely, what do we even mean by a phrase like “conditioned on Bob’s qubit being in the state |0〉,” unless Bob actually measured his qubit in the {|0〉,|1〉} basis?  But if Bob measured his qubit in the {|0〉,|1〉} basis, then we’d be talking about a different, counterfactual experiment.  In the actual experiment, Bob measures his qubit only in the {|+〉,|-〉} basis, and Alice does likewise.  As Asher Peres put it, “unperformed measurements have no results.”

Anyway, as I said, if you strip away the words and look only at the actual setup, it seems to me that Frauchiger and Renner’s contribution is basically to combine Hardy’s paradox with the earlier Wigner’s friend paradox.  They thereby create something that doesn’t involve counterfactuals quite as obviously as Hardy’s paradox does, and so requires a new discussion.

But to back up: what is Wigner’s friend?  Well, it’s basically just Schrödinger’s cat, except that now it’s no longer a cat being maintained in coherent superposition but a person, and we’re emphatic in demanding that this person be treated as a quantum-mechanical observer.  Thus, suppose Wigner entangles his friend with a qubit, like so:

$$ \left|\psi\right\rangle = \frac{\left|0\right\rangle \left|FriendSeeing0\right\rangle + \left|1\right\rangle \left|FriendSeeing1\right\rangle}{\sqrt{2}}. $$

From the friend’s perspective, the qubit has been measured and has collapsed to either |0〉 or |1〉.  From Wigner’s perspective, no such thing has happened—there’s only been unitary evolution—and in principle, Wigner could even confirm that by measuring the joint friend-plus-qubit system in a basis that includes |ψ〉 as one of the basis vectors.  But how can they both be right?

Many-Worlders will yawn at this question, since for them, of course “the collapse of the wavefunction” is just an illusion created by the branching worlds, and with sufficiently advanced technology, one observer might experience the illusion even while a nearby observer doesn’t.  Ironically, the neo-Copenhagenists / Quantum Bayesians / whatever they now call themselves, though they consider themselves diametrically opposed to the Many-Worlders (and vice versa), will also yawn at the question, since their whole philosophy is about how physics is observer-relative and it’s sinful even to think about an objective, God-given “quantum state of the universe.”  If, on the other hand, you believed both that

  1. collapse is an objective physical event, and
  2. human mental states can be superposed just like anything else in the physical universe,

then Wigner’s thought experiment probably should rock your world.

OK, but how do we Wigner’s-friendify Hardy’s paradox?  Simple: in the state

$$\left|\psi\right\rangle = \frac{\left|00\right\rangle + \left|01\right\rangle + \left|10\right\rangle}{\sqrt{3}},$$

we “promote” Alice’s and Bob’s entangled qubits to two conscious observers, call them Charlie and Diane respectively, who can think two different thoughts that we represent by the states |0〉 and |1〉.  Using far-future technology, Charlie and Diane have been not merely placed into coherent superpositions over mental states but also entangled with each other.

Then, as before, Alice will measure Charlie’s brain in the {|+〉,|-〉} basis, and Bob will measure Diane’s brain in the {|+〉,|-〉} basis.  Since the whole setup is mathematically identical to that of Hardy’s paradox, the probability that Alice and Bob both get the outcome |-〉 is again 1/12.

Ah, but now we can reason as follows:

  1. Whenever Alice gets the outcome |-〉, she knows that Diane must be in the |1〉 state (since, if Diane were in the |0〉 state, then Alice would’ve certainly seen |+〉).
  2. Whenever Diane is in the |1〉 state, she knows that Charlie must be in the |0〉 state (since there’s no |11〉 component).
  3. Whenever Charlie is in the |0〉 state, he knows that Diane is in the |+〉 state, and hence Bob can’t possibly see the outcome |-〉 when he measures Diane’s brain in the {|+〉,|-〉} basis.

So to summarize, Alice knows that Diane knows that Charlie knows that Bob can’t possibly see the outcome |-〉.  By the “transitivity of knowledge,” this implies that Alice herself knows that Bob can’t possibly see |-〉.  And yet, as we pointed out before, quantum mechanics predicts that Bob can see |-〉, even when Alice has also seen |-〉.  And Alice and Bob could even do the experiment, and compare notes, and see that their “certain knowledge” was false.  Ergo, “quantum theory can’t consistently describe its own use”!

You might wonder: compared to Hardy’s original paradox, what have we gained by waving a magic wand over our two entangled qubits, and calling them “conscious observers”?  Frauchiger and Renner’s central claim is that, by this gambit, they’ve gotten rid of the illegal counterfactual reasoning that we needed to reach a contradiction in our analysis of Hardy’s paradox.  After all, they say, none of the steps in their argument involve any measurements that aren’t actually performed!  But clearly, even if no one literally measures Charlie in the {|0〉,|1〉} basis, he’s still there, thinking either the thought corresponding to |0〉 or the thought corresponding to |1〉.  And likewise Diane.  Just as much as Alice and Bob, Charlie and Diane both exist even if no one measures them, and they can reason about what they know and what they know that others know.  So then we’re free to chain together the “certainties” of Alice, Bob, Charlie, and Diane in order to produce our contradiction.

As I already indicated, I reject this line of reasoning.  Specifically, I get off the train at what I called step 3 above.  Why?  Because the inference from Charlie being in the |0〉 state to Bob seeing the outcome |+〉 holds for the original state |ψ〉, but in my view it ceases to hold once we know that Alice is going to measure Charlie in the {|+〉,|-〉} basis, which would involve a drastic unitary transformation (specifically, a “Hadamard”) on the quantum state of Charlie’s brain.  I.e., I don’t accept that we can take knowledge inferences that would hold in a hypothetical world where |ψ〉 remained unmeasured, with a particular “branching structure” (as a Many-Worlder might put it), and extend them to the situation where Alice performs a rather violent measurement on |ψ〉 that changes the branching structure by scrambling Charlie’s brain.

In quantum mechanics, measure or measure not: there is no if you hadn’t measured.


Unrelated Announcement: My awesome former PhD student Michael Forbes, who’s now on the faculty at the University of Illinois Urbana-Champaign, asked me to advertise that the UIUC CS department is hiring this year in all areas, emphatically including quantum computing. And, well, I guess my desire to do Michael a solid outweighed my fear of being tried for treason by my own department’s recruiting committee…


Another Unrelated Announcement: As of Sept. 25, 2018, it is the official editorial stance of Shtetl-Optimized that the Riemann Hypothesis and the abc conjecture both remain open problems.

CS and quantum information at UT Austin: come join us!

Wednesday, September 19th, 2018

Merry Yom Kippur!

This is my annual post where I tell you about opportunities available at UT Austin, which has long been a safe space for CS research, and which we hope will rapidly become (or return to its historical role as…) a safe space for quantum computing and information.

If you’re interested in faculty positions in computer science at UT, I have some great news: we plan to do a lot of hiring this year!  Because of the sheer volume of interviews we’ll be doing, we’d like to start our recruiting season already in the fall.  So we’re extending an unusual invitation: if you already have your materials ready, we encourage you to apply for faculty positions right now.  If you’re chosen for an interview, we could schedule it for the next few months.

We’ll be looking for great candidates across all parts of CS, but one particular interest is hiring another quantum computing theorist in CS (i.e., besides me), most likely a junior person.  While not everyone who reads this blog is a plausible candidate, and not every plausible candidate reads this blog, the intersection is surely non-negligible!  So again: we encourage you to apply right now, so we can start scheduling interviews already.

I’m also on the lookout for postdocs, mainly in theoretical quantum computing and information.  (I, and others in the theory group, are also collectively interested in postdocs in classical computational complexity.)  If you’re interested in doing a postdoc with me starting in Fall 2019, the procedure, like in previous years, is this:

  • Email me introducing yourself (if I don’t already know you), and include your CV and up to three representative papers.  Do this even if you already emailed me before.
  • Arrange for two recommendation letters to be emailed to me.

We’ll set a deadline for this of December 15.

Finally, if you’re interested in pursuing a PhD in CS at UT, please apply here!  The deadline, again, is December 15.  Just like every year, I’m on the lookout for superb, complexity-loving, quantum- or quantum-curious, lower-bound-hungry students of every background, and if you specify that you want to work with me, I’ll be sure to see your application.  Emailing me won’t help: everything is done through the application process.

As we like to say down here in Texas, hook ’em Hadamards!  (Well OK, no, we don’t especially like to say that.  It’s just a slogan that I found amusing a few years ago.)

My Tomassoni-Chisesi Prize talk

Saturday, September 15th, 2018

Update (Sep. 21) Video of Philip Kim’s and my talks is now available! (But not streaming, just a giant mp4 that you can download.)


On Thursday, I had the incredible honor of accepting the 2018 Tomassoni-Chisesi Prize in Physics at Università “La Sapienza” in Rome—“incredible” mostly because I’m of course not a physicist.  (I kept worrying they’d revoke the award when they realized I could barely solve the wave equation.)  This is not the first time quantum information has been recognized; the prize has previously gone to Serge Haroche and Alain Aspect.  This year, for the first time, there was both an under-40 and an over-40 award; the latter went to Philip Kim, a quantum materials researcher at Harvard who I had the privilege to meet on this trip (he’s the taller one below).

I’m unbelievably grateful, not only to the committee, and its chair Giorgio Parisi (whose seminal work on phase transitions and satisfiability I’d long known, but who I met for the first time on this trip), but to Fabio Sciarrino, Paolo Mataloni, Fernanda Lupinacci, and everyone else who graciously hosted me and helped make my hastily-planned visit to Europe a success.

The department I visited has a storied history: here are the notes that Enrico Fermi left, documenting what he covered each day in his physics class in 1938.  The reason the last squares are blank is that, when Fermi and his Jewish wife left for Stockholm on the occasion of Fermi’s Nobel Prize, they continued directly to the US rather than return to an Italy that had just passed the racial laws.

On my way to Rome, I also gave two talks at a “quantum computing hackathon” in Zurich, called QuID (Quantum Information for Developers).  Thanks so much to Lidia del Rio for arranging that visit, which was fantastic as well.

To accept the Tomassoni-Chisesi prize, I had to give a 40-minute talk summarizing all my research from 2000 to the present—the hardest part being that I had to do it while wearing a suit, and sweating at least half my body weight.  (I also had a cold and a hacking cough.)  I think there will eventually be video of my and Prof. Kim’s talks, but it’s not yet available.  In the meantime, for those who are interested, here are my PowerPoint slides, and here’s the title and abstract:

Three Questions About Quantum Computing
Scott Aaronson (University of Texas at Austin)

I’ll discuss some of my work in quantum computing over the past 18 years, organizing it in terms of three questions.  First, how can we demonstrate, using near-future hardware, that quantum computers can get any genuine speedups at all over classical computers (ideally useful speedups)?  Second, what sorts of problems would be hard even for quantum computers, and can we turn the intractability of those problems to our advantage?  Third, are there physically reasonable models of computation even more powerful than quantum computing, or does quantum computing represent an ultimate limit?

If you’re a regular reader here, most of the content will be stuff you’ve seen before, with the exception of a story or two like the following:

Last night I was talking to my mom about my grandfather, who as it happens came through Rome 73 years ago, as an engineer with the US Army.  Disabling landmines was, ironically, one of the safer ways to be a Jew in Europe at that time.  If you’d told him then that, three-quarters of a century later, his grandson would be back here in Rome to accept an award for research in quantum computational complexity … well, I’m sure he’d have any number of questions about it.  But one thing I clearly remember is that my grandfather was always full of effusive praise for the warmth of the people he met in Italy—how, for example, Italian farmers would share food with the hungry and inadequately-provisioned Allied soldiers, despite supposedly being on the opposing side.  Today, every time I’m in Italy for a conference or a talk, I get to experience that warmth myself, and certainly the food part.

(Awww!  But I meant it.  Italians are super-warm.)

There’s a view that scientists should just pursue the truth and be serenely unaffected by prizes, recognition, and other baubles.  I think that view has a great deal to be said for it.  But thinking it over recently, I struck the following mental bargain: if I’m going to get depressed on a semi-regular basis by people attacking me online—and experience shows that I will—well then, I also get to enjoy whatever’s the opposite of that with a clear conscience.  It’s not arrogance or self-importance; it’s just trying to balance things out a bit!

So again, thanks so much—to the physics department of La Sapienza, but also to my family, friends, mentors, readers, colleagues at UT Austin and around the world, and everyone else who helps make possible whatever it is that I do.

Lecture notes! Intro to Quantum Information Science

Sunday, August 26th, 2018

Someone recently wrote that my blog is “too high on nerd whining content and too low on actual compsci content to be worth checking too regularly.”  While that’s surely one of the mildest criticisms I’ve ever received, I hope that today’s post will help to even things out.

In Spring 2017, I taught a new undergraduate course at UT Austin, entitled Introduction to Quantum Information Science.  There were about 60 students, mostly CS but also with strong representation from physics, math, and electrical engineering.  One student, Ewin Tang, made a previous appearance on this blog.  But today belongs to another student, Paulo Alves, who took it upon himself to make detailed notes of all of my lectures.  Using Paulo’s notes as a starting point, and after a full year of procrastination and delays, I’m now happy to release the full lecture notes for the course.  Among other things, I’ll be using these notes when I teach the course a second time, starting … holy smokes … this Wednesday.

I don’t pretend that these notes break any new ground.  Even if we restrict to undergrad courses only (which rules out, e.g., Preskill’s legendary notes), there are already other great quantum information lecture notes available on the web, such as these from Berkeley (based on a course taught by, among others, my former adviser Umesh Vazirani and committee member Birgitta Whaley), and these from John Watrous in Waterloo.  There are also dozens of books—including Mermin’s, which we used in this course.  The only difference with these notes is that … well, they cover exactly the topics I’d cover, in exactly the order I’d cover them, and with exactly the stupid jokes and stories I’d tell in a given situation.  So if you like my lecturing style, you’ll probably like these, and if not, not (but given that you’re here, there’s hopefully some bias toward the former).

The only prerequisite for these notes is some minimal previous exposure to linear algebra and algorithms.  If you read them all, you might not be ready yet to do research in quantum information—that’s what a grad course is for—but I feel good that you’ll have an honest understanding of what quantum information is all about and where it currently stands.  (In fact, where it already stood by the late 1990s and early 2000s, but with many comments about the theoretical and experimental progress that’s been made since then.)

Also, if you’re one of the people who read Quantum Computing Since Democritus and who was disappointed by the lack of basic quantum algorithms in that book—a function of the book’s origins, as notes of lectures given to graduate students who already knew basic quantum algorithms—then consider these new notes my restitution.  If nothing else, no one can complain about a dearth of basic quantum algorithms here.

I welcome comments, bugfixes, etc.  Thanks so much, not only to Paulo for transcribing the lectures (and making the figures!), but also to Patrick Rall and Corey Ostrove for TA’ing the course, to Tom Wong and Supartha Podder for giving guest lectures, and of course, to all the students for making the course what it was.

  • Lecture 1: Course Intro, Church-Turing Thesis (3 pages)
  • Lecture 2: Probability Theory and QM (5 pages)
  • Lecture 3: Basic Rules of QM (4 pages)
  • Lecture 4: Quantum Gates and Circuits, Zeno Effect, Elitzur-Vaidman Bomb (5 pages)
  • Lecture 5: Coin Problem, Inner Products, Multi-Qubit States, Entanglement (5 pages)
  • Lecture 6: Mixed States (6 pages)
  • Lecture 7: Bloch Sphere, No-Cloning, Wiesner’s Quantum Money (6 pages)
  • Lecture 8: More on Quantum Money, BB84 Quantum Key Distribution (5 pages)
  • Lecture 9: Superdense Coding (2 pages)
  • Lecture 10: Teleportation, Entanglement Swapping, GHZ State, Monogamy (5 pages)
  • Lecture 11: Quantifying Entanglement, Mixed State Entanglement (4 pages)
  • Lecture 12: Interpretation of QM (Copenhagen, Dynamical Collapse, MWI, Decoherence) (10 pages)
  • Lecture 13: Hidden Variables, Bell’s Inequality (5 pages)
  • Lecture 14: Nonlocal Games (7 pages)
  • Lecture 15: Einstein-Certified Randomness (4 pages)
  • Lecture 16: Quantum Computing, Universal Gate Sets (8 pages)
  • Lecture 17: Quantum Query Complexity, Deutsch-Jozsa (8 pages)
  • Lecture 18: Bernstein-Vazirani, Simon (7 pages)
  • Lecture 19: RSA and Shor’s Algorithm (6 pages)
  • Lecture 20: Shor, Quantum Fourier Transform (8 pages)
  • Lecture 21: Continued Fractions, Shor Wrap-Up (4 pages)
  • Lecture 22: Grover (9 pages)
  • Lecture 23: BBBV, Applications of Grover (7 pages)
  • Lecture 24: Collision and Other Applications of Grover (6 pages)
  • Lecture 25: Hamiltonians (10 pages)
  • Lecture 26: Adiabatic Algorithm (10 pages)
  • Lecture 27: Quantum Error Correction (8 pages)
  • Lecture 28: Stabilizer Formalism (9 pages)
  • Lecture 29: Experimental Realizations of QC (9 pages)

And by popular request, here are the 2017 problem sets!

I might post solutions at a later date.

Note: If you’re taking the course in 2018 or a later year, these sets should be considered outdated and for study purposes only.


Notes and Updates (Aug. 27)

Here’s a 184-page combined file. Thanks so much to Robert Rand, Oscar Cunningham, Petter S, and Noon van der Silk for their help with this.

If it wasn’t explicit: these notes are copyright Scott Aaronson 2018, free for personal or academic use, but not for modification or sale.

I’ve freely moved material between lectures, so that topics aren’t arbitrarily cut across lecture boundaries. This is one reason why some lectures are much longer than others.

I apologize that some of the displayed equations are ugly. This is because we never found an elegant way to edit equations in Google Docs.

If you finish these notes and are still hankering for more, try my Quantum Complexity Theory or Great Ideas in Theoretical Computer Science lecture notes, or my Barbados lecture notes.  I now have links to all of them on the sidebar on the right.

Summer recapitulates life

Tuesday, July 24th, 2018

Last week, I was back at the IAS in Princeton, to speak at a wonderful PITP summer school entitled “From Qubits to Spacetime,” co-organized by Juan Maldacena and Edward Witten. This week, I’ll be back in Waterloo, to visit old and new friends at the Perimeter Institute and Institute for Quantum Computing and give a couple talks.  Then, over the weekend, I’ll be back in Boston to see old friends, colleagues, and students.  After some other miscellaneous travel, I’ll then return to Austin in late August when the semester begins.  The particular sequence IAS → Waterloo → Boston → Austin is of course one that I’ve followed before, over a longer timescale.

Two quick announcements:

First, at the suggestion of reader Sanketh Menda, I’m thinking of holding a Shtetl-Optimized meetup in Waterloo this week.  Please send me an email if you’re interested, and we’ll figure out a time and place that work for everyone.

Second, many of the videos from the IAS summer school are now available, including mine: Part I and Part II.  I cover some basics of complexity theory, the complexity of quantum states and unitary transformations, the Harlow-Hayden argument about the complexity of turning a black hole event horizon into a firewall (with my refinement), and my and Lenny Susskind’s work on circuit complexity, wormholes, and AdS/CFT.  As a special bonus, check out the super-embarrassing goof at the beginning of my first lecture—claiming a mistaken symmetry of conditional entropy and even attributing it to Edward Witten’s lecture!  (But Witten, who I met for the first time on this visit, was kind enough to call my talk “lots of fun” anyway, and give me other positive comments, which I should put on my CV or something.)

Addendum: Many of the PITP videos are well worth watching!  As one example, I found Witten’s talks to be shockingly accessible.  I’d been to a previous talk of his involving Khovanov homology, but beyond the first few minutes, it went so far over my head that I couldn’t tell you how it was for its intended audience.  I’d also been to a popular talk of Witten’s on string theory, but that’s something he could do with only 3 awake brain cells.  In these talks, by contrast, Witten proves some basic inequalities of classical and quantum information theory, then proves the Reeh-Schlieder Theorem of quantum field theory and the Hawking and Penrose singularity theorems of GR, and finally uses quantum information theory to prove positive energy conditions from quantum field theory that are often needed to make statements about GR.

Customers who liked this quantum recommendation engine might also like its dequantization

Thursday, July 12th, 2018

I’m in Boulder, CO right now for the wonderful Boulder summer school on quantum information, where I’ll be lecturing today and tomorrow on introductory quantum algorithms.  But I now face the happy obligation of taking a break from all the lecture-preparing and schmoozing, to blog about a striking new result by a student of mine—a result that will probably make an appearance in my lectures as well.

Yesterday, Ewin Tang—an 18-year-old who just finished a bachelor’s at UT Austin, and who will be starting a PhD in CS at the University of Washington in the fall—posted a preprint entitled A quantum-inspired classical algorithm for recommendation systems. Ewin’s new algorithm solves the following problem, very loosely stated: given m users and n products, and incomplete data about which users like which products, organized into a convenient binary tree data structure; and given also the assumption that the full m×n preference matrix is low-rank (i.e., that there are not too many ways the users vary in their preferences), sample some products that a given user is likely to want to buy.  This is an abstraction of the problem that’s famously faced by Amazon and Netflix, every time they tell you which books or movies you “might enjoy.”  What’s striking about Ewin’s algorithm is that it uses only polylogarithmic time: that is, time polynomial in log(m), log(n), the matrix rank, and the inverses of the relevant error parameters.  Admittedly, the polynomial involves exponents of 33 and 24: so, not exactly “practical”!  But it seems likely to me that the algorithm will run much, much faster in practice than it can be guaranteed to run in theory.  Indeed, if any readers would like to implement the thing and test it out, please let us know in the comments section!

As the title suggests, Ewin’s algorithm was directly inspired by a quantum algorithm for the same problem, which Kerenidis and Prakash (henceforth KP) gave in 2016, and whose claim to fame was that it, too, ran in polylog(m,n) time.  Prior to Ewin’s result, the KP algorithm was arguably the strongest candidate there was for an exponential quantum speedup for a real-world machine learning problem.  The new result thus, I think, significantly changes the landscape of quantum machine learning, by killing off one of its flagship applications.  (Note that whether KP gives a real exponential speedup was one of the main open problems mentioned in John Preskill’s survey on the applications of near-term quantum computers.)  At the same time, Ewin’s result yields a new algorithm that can be run on today’s computers, that could conceivably be useful to those who need to recommend products to customers, and that was only discovered by exploiting intuition that came from quantum computing. So I’d consider this both a defeat and a victory for quantum algorithms research.

This result was the outcome of Ewin’s undergraduate thesis project (!), which I supervised. A year and a half ago, Ewin took my intro quantum information class, whereupon it quickly became clear that I should offer this person an independent project.  So I gave Ewin the problem of proving a poly(m,n) lower bound on the number of queries that any classical randomized algorithm would need to make to the user preference data, in order to generate product recommendations for a given user, in exactly the same setting that KP had studied.  This seemed obvious to me: in their algorithm, KP made essential use of quantum phase estimation, the same primitive used in Shor’s factoring algorithm.  Without phase estimation, you seemed to be stuck doing linear algebra on the full m×n matrix, which of course would take poly(m,n) time.  But KP had left the problem open, I didn’t know how to solve it either, and nailing it down seemed like an obvious challenge, if we wanted to establish the reality of quantum speedups for at least one practical machine learning problem.  (For the difficulties in finding such speedups, see my essay for Nature Physics, much of which is still relevant even though it was written prior to KP.)

Anyway, for a year, Ewin tried and failed to rule out a superfast classical algorithm for the KP problem—eventually, of course, discovering the unexpected reason for the failure!  Throughout this journey, I served as Ewin’s occasional sounding board, but can take no further credit for the result.  Indeed, I admit that I was initially skeptical when Ewin told me that phase estimation did not look essential after all for generating superfast recommendations—that a classical algorithm could get a similar effect by randomly sampling a tiny submatrix of the user preference matrix, and then carefully exploiting a variant of a 2004 result by Frieze, Kannan, and Vempala.  So when I was in Berkeley a few weeks ago for the Simons quantum computing program, I had the idea of flying Ewin over to explain the new result to the experts, including Kerenidis and Prakash themselves.  After four hours of lectures and Q&A, a consensus emerged that the thing looked solid.  Only after that gauntlet did I advise Ewin to put the preprint online.
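For readers who want a feel for the classical ingredient (a toy sketch with made-up parameters, and emphatically not Ewin’s actual algorithm, which needs substantially more machinery to achieve polylogarithmic time): the Frieze-Kannan-Vempala idea starts from sampling rows of a matrix with probability proportional to their squared norms, so that a small random submatrix already captures the dominant low-rank structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy low-rank "preference matrix": m users x n products, rank k
m, n, k = 1000, 500, 3
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))

# Length-squared (norm-squared) row sampling, a la Frieze-Kannan-Vempala
row_norms_sq = np.sum(A**2, axis=1)
probs = row_norms_sq / row_norms_sq.sum()
s = 200  # number of sampled rows -- tiny compared to m in the real setting
idx = rng.choice(m, size=s, p=probs)

# Rescale the sampled rows so that S^T S equals A^T A in expectation
S = A[idx] / np.sqrt(s * probs[idx, None])
rel_err = np.linalg.norm(S.T @ S - A.T @ A) / np.linalg.norm(A.T @ A)
print(rel_err)  # modest for low-rank A, despite using only s of the m rows
```

The quantum algorithm’s binary-tree data structure exists precisely to support this kind of norm-proportional sampling, which is why a classical algorithm granted the same data structure can play the same game.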

So what’s next?  Well, one obvious challenge is to bring down the running time of Ewin’s algorithm, and (as I mentioned before) to investigate whether or not it could give a practical benefit today.  A different challenge is to find some other example of a quantum algorithm that solves a real-world machine learning problem with only a polylogarithmic number of queries … one for which the exponential quantum speedup will hopefully be Ewin-proof, ideally even provably so!  The field is now wide open.  It’s possible that my Forrelation problem, which Raz and Tal recently used for their breakthrough oracle separation between BQP and PH, could be an ingredient in such a separation.

Anyway, there’s much more to say about Ewin’s achievement, but I now need to run to lecture about quantum algorithms like Simon’s and Shor’s, which do achieve provable exponential speedups in query complexity!  Please join me in offering hearty congratulations, see Ewin’s nicely-written paper for details, and if you have any questions for me or (better yet) Ewin, feel free to ask in the comments.


Update: On the Hacker News thread, some commenters are lamenting that such a brilliant mind as Ewin’s would spend its time figuring out how to entice consumers to buy even more products that they don’t need. I confess that that’s an angle that hadn’t even occurred to me: I simply thought that it was a beautiful question whether you actually need a quantum computer to sample the rows of a partially-specified low-rank matrix in polylogarithmic time, and if the application to recommendation systems helped to motivate that question, then so much the better. Now, though, I feel compelled to point out that, in addition to the potentially lucrative application to Amazon and Netflix, research on low-rank matrix sampling algorithms might someday find many other, more economically worthless applications as well.

Another Update: For those who are interested, streaming video of my quantum algorithms lectures in Boulder is now available:

You can also see all the other lectures here.

My Y Combinator podcast

Friday, June 29th, 2018

Here it is, recorded last week at Y Combinator’s office in San Francisco.  For regular readers of this blog, there will be a few things that are new—research projects I’ve been working on this year—and many things that are old.  Hope you enjoy it!  Thanks so much to Craig Cannon of Y Combinator for inviting me.

Associated with the podcast, Hacker News will be doing an AMA with me later today.  I’ll post a link to that when it’s available.  Update: here it is.

I’m at STOC’2018 TheoryFest in Los Angeles right now, where theoretical computer scientists celebrated the 50th anniversary of the conference that in some sense was the birthplace of the P vs. NP problem.  (Two participants in the very first STOC in 1969, Richard Karp and Allan Borodin, were on a panel to share their memories, along with Ronitt Rubinfeld and Avrim Blum, who joined the action in the 1980s.)  There’s been a great program this year—if you’d like to ask me about it, maybe do so in the comments of this post rather than in the AMA.

Quantum computing for policymakers and philosopher-novelists

Wednesday, June 6th, 2018

Last week Rebecca Newberger Goldstein, the great philosopher and novelist who I’m privileged to call a friend, wrote to ask me whether I “see any particular political and security problems that are raised by quantum computing,” to help her prepare for a conference she’d be attending in which that question would be discussed.  So I sent her the response below, and then decided that it might be of broader interest.

Shtetl-Optimized regulars and QC aficionados will find absolutely nothing new here—move right along, you’ve been warned.  But I decided to post my (slightly edited) response to Rebecca anyway, for two reasons.  First, so I have something to send anyone who asks me the same question in the future—something that, moreover, as Feynman said about the Feynman Lectures on Physics, contains views “not far from my own.”  And second, because, while of course I’ve written many other popular-level quantum computing essays, with basically all of them, my goal was to get the reader to hear the music, so to speak.  On reflection, though, I think there might also be some value in a piece for business and policy people (not to mention humanist intellectuals) that sets aside the harmony of the interfering amplitudes, and just tries to convey some of the words to the song without egregious howlers—which is what Rebecca’s question about “political and security problems” forced me to do.  This being quantum computing, of course, much of what one finds in the press doesn’t even get the lyrics right!  So without further ado:


Dear Rebecca,

If you want something serious and thoughtful about your question, you probably won’t do much better than the recent essay “The Potential Impact of Quantum Computers on Society,” by my longtime friend and colleague Ronald de Wolf.

To elaborate my own thoughts, though: I feel like the political and security problems raised by quantum computing are mostly the usual ones raised by any new technology (national prestige competitions, haves vs have-nots, etc)—but with one added twist, coming from quantum computers’ famous ability to break our current methods for public-key cryptography.

As Ronald writes, you should think of a quantum computer as a specialized device, which is unlikely to improve all or even most of what we do with today’s computers, but which could give dramatic speedups for a few specific problems.  There are three main types of applications that we know about today:

(1) Simulation of quantum physics and chemistry. This was Richard Feynman’s original application when he proposed quantum computing in 1981, and I think it’s still the most important one economically.  Having a fast, general-purpose quantum simulator could help a lot in designing new drugs, materials, solar cells, high-temperature superconductors, chemical reactions for making fertilizer, etc.  Obviously, these are not applications like web browsing or email that will directly affect the everyday computer user.  But they’re areas where you’d only need a few high-profile successes to generate billions of dollars of value.

(2) Breaking existing public-key cryptography.  This is the most direct political and security implication.  Every time you visit a website that begins with “https,” the authentication and encryption—including, e.g., protecting your credit card number—happen using a cryptosystem based on factoring integers or discrete logarithms or a few other related problems in number theory.  A full, universal quantum computer, if built, is known to be able to break all of this.

Having said that, we all know today that hackers, and intelligence agencies, can compromise people’s data in hundreds of more prosaic ways than by building a quantum computer!  Usually they don’t even bother trying to break the encryption, relying instead on poor implementations and human error.

And it’s also important to understand that a quantum computer wouldn’t mean the end of online security.  There are public-key cryptosystems currently under development—most notably, those based on lattices—that are believed to resist attack even by quantum computers; NIST is planning to establish standards for these systems over the next few years.  Switching to these “post-quantum” systems would be a significant burden, much like fixing the Y2K bug (and they’re also somewhat slower than our current systems), but hopefully it would only need to happen once.

As you might imagine, there’s some interest in switching to post-quantum cryptosystems even now—for example, if you wanted to encrypt messages today with some confidence they won’t be decrypted even 30 years from now.  Google did a trial of a post-quantum cryptosystem two years ago.  On the other hand, given that a large fraction of web servers still use 512-bit “export grade” cryptography that was already breakable in the 1990s (good news: commenter Viktor Dukhovni tells me that this has now been mostly fixed, since security experts, including my childhood friend Alex Halderman, raised a stink about it a few years ago), it’s a safe bet that getting everyone to upgrade would take quite a long time, even if the experts agreed (which they don’t yet) which of the various post-quantum cryptosystems should become the new standard.  And since, as I said, most attacks target mistakes in implementation rather than the underlying cryptography, we should expect any switch to post-quantum cryptography to make security worse rather than better in the short run.

As a radical alternative to post-quantum crypto, there’s also (ironically enough) quantum cryptography, which doesn’t work over the existing Internet—it requires setting up new communications infrastructure—but which has already been deployed in a tiny number of places, and which promises security based only on quantum physics (and, of course, on the proper construction of the hardware), as opposed to mathematical problems that a quantum computer or any other kind of computer could potentially solve.  According to a long-running joke (or not-quite-joke) in our field, one of the central applications of quantum computing will be to create demand for quantum cryptography!

Finally, there’s private-key cryptography—i.e., the traditional kind, where the sender and recipient meet in secret to agree on a key in advance—which is hardly threatened by quantum computing at all: you can achieve the same level of security as before, we think, by simply doubling the key lengths.  If there’s no constraint on key length, then the ultimate here is the one-time pad, which when used correctly, is theoretically unbreakable by anything short of actual physical access to the sender or recipient (e.g., hacking their computers, or beating down their doors with an ax).  But while private-key crypto might be fine for spy agencies, it’s impractical for widespread deployment on the Internet, unless we also have a secure way to distribute the keys.  This is precisely where public-key crypto typically gets used today, and where quantum crypto could in principle be used in the future: to exchange private keys that are then used to encrypt and decrypt the actual data.
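Since the one-time pad came up, here’s how little code it takes (a textbook illustration only, not production cryptography):

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the message with the corresponding key byte.
    # The key must be truly random, as long as the message, and never
    # reused -- hence "one-time."
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))
ciphertext = otp(message, key)
# Decryption is the very same XOR operation
recovered = otp(ciphertext, key)
```

The catch, as noted above, is exactly the key distribution: sender and recipient need to share those random bytes securely in the first place.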

I should also mention that, because it breaks elliptic-curve-based signature schemes, a quantum computer might let a thief steal billions of dollars’ worth of Bitcoin.  Again, this could in principle be fixed by migrating Bitcoin (and other cryptocurrencies) to quantum-resistant cryptographic problems, but that hasn’t been done yet.

(3) Optimization and machine learning.  These are obviously huge application areas for industry, defense, and pretty much anything else.  The main issue is just that we don’t know how to get as large a speedup from a quantum computer as we’d like for these applications.  A quantum computer, we think, will often be able to solve optimization and machine learning problems in something like the square root of the number of steps that would be needed classically, using variants of what’s called Grover’s algorithm.  So, that’s significant, but it’s not the exponential speedup and complete game-changer that we’d have for quantum simulation or for breaking public-key cryptography.  Most likely, a quantum computer will be able to achieve exponential speedups for these sorts of problems only in special cases, and no one knows yet how important those special cases will be in practice.  This is a still-developing research area—there might be further theoretical breakthroughs (in inventing new quantum algorithms, analyzing old algorithms, matching the performance of the quantum algorithms by classical algorithms, etc.), but it’s also possible that we won’t really understand the potential of quantum computers for these sorts of problems until we have the actual devices and can test them out.
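To put rough numbers on that square-root speedup (simple arithmetic, not a simulation): Grover’s algorithm searches N unstructured possibilities using about (π/4)·√N quantum queries, versus ~N steps classically.

```python
import math

# Unstructured search over N possibilities:
# classical brute force ~ N steps; Grover ~ (pi/4) * sqrt(N) queries.
for exponent in (20, 40, 60):
    N = 2 ** exponent
    classical = N
    grover = math.ceil((math.pi / 4) * math.sqrt(N))
    print(f"N = 2^{exponent}: classical ~ {classical:.2e}, Grover ~ {grover:.2e}")
```

A quadratic saving is real, but it shrinks 2^60 steps only to roughly 2^30-scale, rather than to something polylogarithmic, which is the sense in which it’s not the same game-changer as an exponential speedup.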

 

As for how far away all this is: given the spectacular progress by Google and others over the last few years, my guess is that we’re at most a decade away from some small, special-purpose quantum computers (with ~50-200 qubits) that could be useful for quantum simulation.  These are what the physicist John Preskill called “Noisy Intermediate Scale Quantum” (NISQ) computers in his excellent recent essay.

However, my guess is also that it will take longer than that to get the full, error-corrected, universal quantum computers that would be needed for optimization and (most relevant to your question) for breaking public-key cryptography.  Currently, the engineering requirements for a “full, universal” quantum computer look downright scary—so we’re waiting either for further breakthroughs that would cut the costs by a few more orders of magnitude (which, by their very nature, can’t be predicted), or for some modern-day General Groves and Oppenheimer who’d be licensed to spend however many hundreds of billions of dollars it would take to make it happen sooner.

The race to build “NISQ” devices has been heating up, with a shift from pure academic research to venture capital and industrial efforts just within the last 4-5 years, noticeably changing the character of our field.

In this particular race, I think that the US is the clear world leader right now—specifically, Google, IBM, Intel, Microsoft, University of Maryland / NIST, and various startups—followed by Europe (with serious experimental efforts in the Netherlands, Austria, and the UK among other places).  Here I should mention that the EU has a new 1-billion-Euro initiative in quantum information.  Other countries that have made or are now making significant investments include Canada, Australia, China, and Israel.  Surprisingly, there’s been very little investment in Russia in this area, and less than I would’ve expected in Japan.

China is a very interesting case.  They’ve chosen to focus less on quantum computing than on the related areas of quantum communication and cryptography, where they’ve become the world leader.  Last summer, in a big upset, China launched the first satellite (“Micius”) specifically for quantum communications, and were able to use it to do quantum cryptography and to distribute entanglement over thousands of miles (from one end of China to the other), the previous record being maybe 100 miles.  If the US has anything comparable to this, it isn’t publicly known (my guess is that we don’t).

This past year, there were hearings in Congress about the need for the US to invest more in quantum information, for example to keep up with China, and it looks likely to happen.  As indifferent or hostile as the current administration has been toward science more generally, the government and defense people I’ve met are very much on board with quantum information—often more so than I am!  I’ve even heard China’s Micius satellite referred to as the “quantum Sputnik,” the thing that will spur the US and others to spend much more to keep up.

As you’d imagine, part of me is delighted that something so abstruse, and interesting for fundamental science, and close to my heart, is now getting attention and funding at this level.  But part of me is worried by how much of the current boom I know to be fueled by misconceptions, among policymakers and journalists and the general public, about what quantum computers will be able to do for us once we have them.  Basically, people think they’ll be magic oracles that will solve all problems faster, rather than just special classes of problems like the ones I enumerated above—and that they’ll simply allow the continuation of the Moore’s Law that we know and love, rather than being something fundamentally different.  I’ve been trying to correct these misconceptions, on my blog and elsewhere, to anyone who will listen, for all the good that’s done!  In any case, the history of AI reminds us that a crash could easily follow the current boom-time, if the results of quantum computing research don’t live up to people’s expectations.

I guess there’s one final thing I’ll say.  Quantum computers are sometimes analogized to nuclear weapons, as a disruptive technology with implications for global security that scientists theorized about decades before it became technically feasible.  But there are some fundamental differences.  Most obviously: the deterrent value of a nuclear weapon comes if everyone knows you have it but you never need to use it, whereas the intelligence value of a quantum computer comes if you use it but no one knows you have it.

(Which is related to how the Manhattan Project entered the world’s consciousness in August 1945, whereas Bletchley Park—which was much more important to the actual winning of WWII—remained secret until the 1970s.)

As I said before, once your adversaries realized that you had a universal quantum computer, or might have one soon, they could switch to quantum-resistant forms of encryption, at least for their most sensitive secrets—in which case, as far as encryption was concerned, everyone would be more-or-less back where they started.  Such a switch would be onerous, cost billions of dollars, and (in practice) probably open up its own security holes unrelated to quantum computing.  But we think we already basically understand how to do it.

This is one reason why, even in a hypothetical future where hostile powers got access to quantum computers (and despite the past two years, I still don’t think of the US as a “hostile power”—I mean, like, North Korea or ISIS or something…!)—even in that future, I’d still be much less concerned about the hostile powers having this brand-new technology, than I’d be about their having the generations-old technology of fission and fusion bombs.

Best,
Scott


Unrelated Update (June 8): Ian Tierney asked me to advertise a Kickstarter for a short film that he’s planning to make about Richard Feynman, and a letter that he wrote to his first wife Arlene after she died.