- I really liked this article by Chris Monroe, of the University of Maryland and IonQ, entitled “Quantum computing is a marathon not a sprint.” The crazier expectations get in this field—and right now they’re *really* crazy, believe me—the more it needs to be said.

- In a piece for *Communications of the ACM*, Moshe Vardi came out as a “quantum computing skeptic.” But it turns out what he means by that is not that he knows a reason why QC is impossible in principle, but simply that it’s often overhyped and that it will be hard to establish a viable quantum computing industry. By that standard, I’m a “QC skeptic” as well! But then what does that make Gil Kalai or Michel Dyakonov?

- Friend-of-the-blog Bram Cohen asked me to link to this second-round competition for Verifiable Delay Functions, sponsored by his company Chia. Apparently the first link I provided actually mattered in sending serious entrants their way.

- Blogging, it turns out, is really, really hard when (a) your life has become a pile of real-world obligations stretching out to infinity, *and also* (b) the Internet has become a war zone, with anything you say quote-mined by people looking to embarrass you. But don’t worry, I’ll have more to say soon. In the meantime, doesn’t anyone have more questions about the research papers discussed in the previous post? Y’know, NEEXP in MIP*? SBP versus QMA? Gentle measurement of quantum states and differential privacy turning out to be almost the same subject?

This entry was posted
on Friday, May 3rd, 2019 at 2:51 am and is filed under Procrastination, Quantum.
You can follow any responses to this entry through the RSS 2.0 feed.
Responses are currently closed, but you can trackback from your own site.

Comment #1 May 3rd, 2019 at 5:48 am

Well, since you ask… It’s known that MIP=NEXP, which is contained in EEXP; in other words, every protocol in MIP can be simulated by a deterministic classical computer in doubly-exponential time. Giving the provers an entangled quantum state should slow down the simulation by just one exponential (analogous to simulating a quantum computer by a classical computer), so MIP* should be contained in EEEXP. Does this make any sense?

Comment #2 May 3rd, 2019 at 6:17 am

Michel Dyakonov links to the preprint arXiv:1702.07688, “What determines the ultimate precision of a quantum computer?” by Xavier Waintal, with the words: “The author makes a professional analysis of errors and error correction (of which I am not capable) and his conclusions are worth considering.” (This “preprint” is actually a peer-reviewed paper: Phys. Rev. A 99, 042318, published 10 April 2019.)

Xavier Waintal seems to be a special type of QC skeptic: He believes that for current quantum computing architectures, there can be a limit in the precision of the logical qubits (irrespective of the number N_c of physical qubits used to construct the logical qubits): “Using the example of the topological surface code, we show that a single local event can provoke the failure of the logical qubit, irrespective of N_c.”

Comment #3 May 3rd, 2019 at 11:08 am

My understanding is that quantum computation *doesn’t* give any advantage over classical in the single prover case, even if the verifier is also quantum. Are there any loopholes around this view, such that some fancy setup with a single prover can get beyond PSPACE?

Vaguely related: Micali’s Computationally Sound Proofs work gets all the way to NEXP with a single prover, as long as we have access to a random oracle which the prover can access only polynomially many times. This is a reasonable setting in practice whenever the prover is “good at some hard task, but bad at some other hard task which we can use to fake a random oracle.” However, the theoretical awkwardness of the statement makes it more difficult to place into the context of other proof systems, so I’d be curious how you think about it.

Comment #4 May 3rd, 2019 at 11:14 am

Whoops, I think I misremembered and thus misstated the Computationally Sound Proofs result. Thus, a followup question: what’s the right way to state it succinctly?

Comment #5 May 3rd, 2019 at 2:57 pm

Aula #1: In general, the size of the provers’ strategies will be exponential in the number of entangled qubits that they share. And the complexity of finding an optimal prover strategy could then be exponential in that size, so doubly exponential in the number of entangled qubits. In the Natarajan-Wright protocol, the number of entangled qubits is itself exponential, and that’s how they end up with NEEXP⊆EEEXP. The reason why we don’t know any upper bound on MIP* is simply that we don’t know any upper bound on how many entangled qubits the provers might need to execute an (approximately) optimal strategy.

Comment #6 May 3rd, 2019 at 3:36 pm

Geoffrey #3: That’s correct; we’ve known since 2009 that QIP=IP=PSPACE. With quantum interactive proofs, the big surprise—proved by Kitaev and Watrous in 2000—was that a 3-round protocol already gives you the full power of QIP (i.e., of PSPACE), whereas in the classical world, any constant number of rounds gives you only AM, believed to be much smaller than PSPACE.

Comment #7 May 3rd, 2019 at 4:08 pm

gentzen #2: Thanks, I hadn’t seen that paper by Waintal and it looks kind of interesting. In the past, when people identified specific types of errors that went outside the assumptions of the fault-tolerance theorem, it was often possible to generalize the theorem to account for them. It will be interesting to see whether that can be done in this case, in addition to seeing whether there’s any plausible physics that would give rise to this sort of error (the other obvious question).

Note that this will be a more sophisticated discussion than you could find in Dyakonov’s own papers. In his most recent broadsides, he simply says that QC *obviously* can’t scale, because it involves manipulating 2^n continuous parameters. I.e., he offers an ‘argument’ that refuses to engage with the understanding of quantum fault-tolerance that was already firmly established by 1996.

Comment #8 May 4th, 2019 at 11:59 am

Put me down also as a “quantum computing skeptic”, just as I am a “controlled nuclear fusion skeptic”. There might not be any theoretical reason why either or both cannot happen, but maybe the “one damned thing after another” sequence of practical problems does not converge.

Comment #9 May 4th, 2019 at 10:26 pm

In the spirit of quote mining to embarrass you, didn’t you say in a comment to my post https://windowsontheory.org/2017/10/30/the-different-forms-of-quantum-computing-skepticism/ that “quantum simulation is EASILY the basis for a billion-dollar business”?

An interesting historical analogy is that there were plenty of “Babbage computing engine skeptics,” and indeed he ran out of money. In some sense these skeptics were right: given that NASA still employed human computers in the 1960s, I would guess that even if he had managed to build it, Babbage’s engine would have beaten human computers only for very particular applications. In another sense they were of course completely wrong.

I too am a so-called “quantum computing hype skeptic,” in the sense that I don’t foresee a huge near-term commercial impact from quantum computing (although I am even more skeptical of my own judgment of commercial potential…).

But somehow I believe that in the long run quantum computing will turn out to be very significant, mostly from a sense of aesthetics. It just doesn’t seem OK for god to make the universe behave so strangely, give up on the notion of locality and in some sense on objective truth, enable a completely new kind of computational model, and then have all that comes out of it be a machine to factor integers.

Comment #10 May 5th, 2019 at 2:59 am

Boaz #9: Rereading my comment, you’ll note that I didn’t put any timeframe on it! 🙂 I stand by the claims that:

(1) if and when QC becomes practical at scale, the simulation of chemistry and physics provides an obvious class of applications (probably *still* the clearest applications that we know about) that has billions of dollars riding on it, and

(2) given the experimental progress that there’s been, the possibility of quantum simulators really starting to become useful within the next decade or so can’t be dismissed.

Does this make me an “optimist”? I’m not sure. Most QC optimists, certainly when they’re applying for funds (but not only then), go substantially beyond the above and into territory that I disagree with…

Comment #11 May 7th, 2019 at 2:18 am

An obvious way to get a 2-second verifiable delay is to bounce a radio or laser signal off the moon ;-). Some time back I concocted what was supposed to be a hard-to-jam data-broadcasting scheme based on that; probably not of interest here.

Purely local proof-of-work schemes (as in cryptocurrency) should probably be memory-bound (like scrypt) rather than just computation-bound, on the idea that memory is about the most optimized silicon (both for performance and cost) that currently exists and it’s available everywhere, while Bitcoin showed that task-specialized VLSI processors will always beat commodity ones such as GPUs.
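As a toy illustration of the “delay” idea (emphatically *not* Chia’s actual VDF construction, whose defining feature is that verification is far faster than evaluation), an iterated hash chain forces sequential work, since each step depends on the previous one:

```python
import hashlib

def iterated_hash(seed: bytes, steps: int) -> bytes:
    """Apply SHA-256 repeatedly. Each application depends on the
    previous digest, so the chain cannot be parallelized."""
    h = seed
    for _ in range(steps):
        h = hashlib.sha256(h).digest()
    return h

def verify(seed: bytes, steps: int, claimed: bytes) -> bool:
    """Naive check: recompute the whole chain. A real VDF instead
    admits a short proof that makes verification much cheaper."""
    return iterated_hash(seed, steps) == claimed

out = iterated_hash(b"challenge", 100_000)
assert verify(b"challenge", 100_000, out)
```

Tuning `steps` against the fastest plausible hardware is what turns “sequential work” into a wall-clock delay.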

Comment #12 May 10th, 2019 at 10:14 am

Since quantum theory can be seen as a generalized probability theory that allows L2 sampling + interference, can any purported advantage of a quantum algorithm be nullified if interference is, in some sense, unnecessary to efficiently solve the problem? In particular, did Ewin Tang’s work show that “L2 sampling” alone is insufficient for a quantum advantage in recommendation?

Comment #13 May 12th, 2019 at 2:03 am

I was thinking about the “add noise first” approach to writing about my recent project (an ed project, 12 students) in a way that won’t fall foul of the IRB by invading the study subjects’ privacy, but I think that’s going to be difficult, because every way I can think of doing it approaches a state of creative nonfiction, which is definitely not what’s wanted here. That sort of thing works fine if you’re trying to write about real people/events in a literary manner and you don’t want to, say, get sued. People do it all the time. (Sometimes not well enough.) Here, I think I can subtract, but not add. Still thinking.

Comment #14 May 13th, 2019 at 11:42 am

>Gentle measurement of quantum states and differential privacy turning out to be almost the same subject?

Does that suggest new algorithms for machine learning? (classic or otherwise)

Comment #15 May 13th, 2019 at 5:43 pm

I’ve seen bio-chemistry mentioned as a field that can benefit from quantum computing.

But do you think bio-chemistry itself will be used as a computing platform?

For example, you can amplify a DNA sequence using a process called PCR. It splits up the DNA and fills in each half. You end up with 2^n copies, after n steps.

Since DNA is more or less programmable and can be scanned/manipulated (using proteins like cas9, used in CRISPR kits), there’s the potential to enable computing with large amounts of parallelism.

And it’s not unlike quantum computing in that you’d need to find a way to amplify a solution once it’s encountered – enough such that you can get it out.

Relative to a QC, it’s far more resource-intensive (the exponentiality is explicit), but it has similar dynamics.
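A minimal sketch of the growth described above, for idealized PCR that ignores reagent limits and error rates (which is exactly where the “explicit exponentiality” in resources bites):

```python
def pcr_copies(initial: int, cycles: int) -> int:
    """Idealized PCR: each cycle splits every double strand and
    fills in both halves, doubling the copy count."""
    copies = initial
    for _ in range(cycles):
        copies *= 2
    return copies

# After n cycles, one template molecule becomes 2**n copies.
assert pcr_copies(1, 30) == 2**30  # roughly a billion copies
```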

Comment #16 May 13th, 2019 at 11:09 pm

L2 sans Interference #12: Yes and yes.

Comment #17 May 13th, 2019 at 11:13 pm

Lucius #14: It’s indeed suggested new procedures for learning information *about* quantum states—most notably, the “Quantum Private Multiplicative Weights” (QPMW) algorithm, which we introduce in the paper (read it and see! 🙂 ). We don’t know yet whether it’s possible to go in the opposite direction, and use results about quantum measurement to get new insights purely for classical machine learning.

Comment #18 May 13th, 2019 at 11:18 pm

Job #15: No, DNA computing is a fundamentally different proposition from quantum computing, because ultimately, it’s an exotic way to implement the familiar class P (i.e., do classical computation). If you gain, it’s “just” in the constant factors—or in being able to program biology for medical or other applications. You don’t enlarge the set of asymptotically feasible problems in the way that BQP is believed to do.

Comment #19 May 14th, 2019 at 2:02 am

OK so it’s DNA computing.

I realize it’s still just P. The part that reminds me of a QC is the output part, since it requires amplifying the signal of whichever unit encounters the solution.

Hey and natural selection could play the role of quantum interference. Both constructive and destructive, in a very real sense.

Comment #20 May 14th, 2019 at 10:10 am

Scott #17,

I think I got the connection. Wow!

In retrospect it should not be surprising that ML meets DP: if the model you train is good, then it must get rid of most or all information about which particular set of examples you trained your model on (except that this set was taken from some larger population defined as obeying the law your model implements). So DP and ML: the same fight in disguise. And it’s even almost the same math:

- DP would say: take a database (a two-dimensional matrix where one row = one user, one column = one field), multiply each row by the query (a collection of weights, one per column), transform that into a probability (sum and exponentiate the result for each row, divide by the total if you wish), and finally pick one row at random according to its probability.

- ML would say: the database is actually a minibatch of inputs to some neuron (a two-dimensional matrix where one row = one example, one column = one feature), the weights for the query are the synaptic weights for this neuron (it’s even the same name!), and the exponentiation is an activation function (you might try some of the other activation functions we’ve found, DP folks); the only difference is we don’t pick one winner from the batch. Well, maybe we should?

I think I can see how a quantum theorist would see this: the database/inputs form a collection of physical systems one can measure (a two-dimensional matrix where one row = one system, one column = one possible measurement basis), the multiplication stuff is actually a rotation of the measurement basis, and when you actually perform the measurement the result is random according to an exponentiation (the squared amplitudes). That’s neat.

So first, do you see anything wrong with the picture above? Second, it seems that there is no place for entanglement in this picture. Would it be wrong to stipulate that it defines columns versus rows? (e.g., no entanglement between two rows)
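For what it’s worth, the DP step in the first bullet above is essentially the exponential mechanism. A minimal sketch, with toy data and weights made up purely for illustration:

```python
import math
import random

def exponential_mechanism(rows, weights, eps=1.0):
    """Score each row by its dot product with the query weights, then
    sample one row with probability proportional to exp(eps * score / 2)
    -- the sum-exponentiate-normalize-pick step described above."""
    scores = [sum(w * x for w, x in zip(weights, row)) for row in rows]
    probs = [math.exp(eps * s / 2) for s in scores]
    total = sum(probs)
    probs = [p / total for p in probs]
    return random.choices(range(len(rows)), weights=probs, k=1)[0]

# Toy database: one row per user, one column per field (illustrative values).
db = [[1, 0, 2], [0, 1, 1], [3, 1, 0]]
query = [0.5, -0.2, 0.1]
winner = exponential_mechanism(db, query)
```

Higher-scoring rows win more often, but every row retains some probability; that smoothing is where the privacy comes from.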

Comment #21 May 16th, 2019 at 11:24 am

What will happen first: robust QC or controlled fusion energy production? Which one is better for tenure?

Comment #22 May 16th, 2019 at 9:28 pm

Lucius #20: To be honest, I have difficulty mapping what you write to what’s in our paper. The mathematical connection that Guy and I made is specifically between differential privacy and *gentle measurement*—i.e., the problem of measuring a collection of quantum states without much damaging them. It’s not a connection to ML or quantum mechanics in general. If you’re interested in this stuff, please reread our intro! To understand our main point, it’s absolutely essential that you understand what a “gentle measurement” is, for example by looking at our main example of one (the Laplace noise measurement).

Comment #23 May 16th, 2019 at 9:29 pm

Douh #21: Dunno. QC is probably better for tenure than fusion at this specific moment, but these things can change.

Comment #24 June 3rd, 2019 at 6:35 am

Scott #7: Thanks for your answer. I know that I always take too long to reply (especially compared to today’s standards on the internet), but this time I really took long even by my own standards.

My problem is that even though I find the questions raised by scalable quantum computing important and interesting, I have read neither Nielsen and Chuang nor even Mermin in depth. And even though the article by Waintal is only 6 pages and feels easy to read, it uses examples like “topological surface code,” “superconducting transmon,” or “semiconducting singlet-triplet double quantum dot qubit” which are not explained in Nielsen and Chuang or Mermin. I would not have looked at such an article, but comment discussions on Scirate are so rare that the time spent reading Michel Dyakonov’s defense and the referenced article was small.

So in order to be able to give at least some sort of reply, I have now tried to read (in depth) chapter 7, “Quantum computers: physical realization,” from Nielsen and Chuang. This was an enjoyable experience, and I especially liked the NMR approach with its direct use of statistical ensembles and the density matrix. But in the end, all of the examples presented in that chapter are obviously unable to allow scalable quantum computing.

So it is an interesting fact that today’s schemes for quantum computing are no longer obviously unable to allow scalable quantum computing. Waintal’s sophisticated discussion of where they might still have issues limiting their scalability shows just how much the field of scalable quantum computing has advanced.

And still those excellent texts will likely remain (for a long time) the place for the uninitiated reader to learn the basics, even though they don’t explain a single physical scheme which would allow scalable quantum computing.