## A few quick announcements

I gave a new survey talk at Yale, entitled “When Exactly Do Quantum Computers Provide a Speedup?”  Here are the PowerPoint slides.  Thanks so much to Rob Schoelkopf for inviting me, and to everyone else at Yale for an awesome visit.

Aephraim Steinberg asks me to announce that the call for nominations for the 2015 John Stewart Bell Prize is now available.

Ronitt Rubinfeld asks me to remind people that the STOC’2015 submission deadline is November 4.  Here’s the call for papers.

Likewise, Jeff Kinne asks me to remind people that the Complexity’2015 submission deadline is November 26.  Here’s the call for papers.

### 105 Responses to “A few quick announcements”

1. Ashley Says:

Thanks for making your very nice survey talk available. One tiny point: you mention in the “Quantum Machine Learning Algorithms” slide that there could be fast randomised algorithms for the same problems. In the case of linear equation solving, at least, it was shown by Harrow, Hassidim and Lloyd that if there does exist a fast classical algorithm (i.e. one solving the same problem as the HHL algorithm, with a similar runtime), then BPP=BQP. A fun consequence of this, of course, is the fact that if you can solve large systems of sparse linear equations quickly, you can factorise large integers…

2. Mark Ettinger Says:

“…nonabelian HSP has been the Afghanistan of quantum algorithms.”

Very true. I still suffer from quantum-PTSD.

3. Scott Says:

Ashley: Thanks for the comment! Yes, I’m aware that the problem solved by HHL is BQP-complete—or, to put it another way, one can take any other quantum speedup that one knows about, and realize it a la HHL.

(Which, incidentally, leads to my own favorite way of thinking about the quantum machine learning and linear algebra algorithms: they’re not so much algorithms as “pre-algorithms,” or frameworks for algorithms. If you want an exponential speedup using one of these algorithms, then you need to “plug in” detailed problem assumptions that would lead to such a speedup—and the obvious way to get such assumptions, is to borrow them from some other situation like factoring where an exponential speedup is known! It’s still not clear to me to what extent these algorithms can be seen as “independent sources of speedups.”)

In any case, Seth himself told me that they were not able to prove exponential query complexity separations for the other cases, like Google PageRank and Support Vector Machines. (And indeed, in a few hours thinking about it, I was unable to come up with any assumptions that would lead to an exponential separation for the SVM problem: whenever I made the problem hard classically, the Grover lower bound would come into effect and make it hard quantumly as well.) Obviously, it’s well worth thinking about more. But in the meantime, if exponential separations were known for these problems, I assume Seth would’ve told me! 🙂

4. randomnumber53 Says:

Very interesting PowerPoint.

Question: Whenever I try to explain why/how quantum computers are/can be powerful to people, I usually preface my explanation by informing them that I probably don’t know what I’m talking about. Then I say that their power comes from being able to quickly compute on many possible inputs at once, but they’re limited in that trying to access information about an output destroys the other outputs. How close is this to true/a good quick explanation?

5. Me Says:

@Scott. Sorry, I always use the comment section to post links which are (in my view) interesting in regard to the discussed topics. I hope this is not seen as spammy. If so, just let me know. Anyway, if someone is interested in the HHL algorithm, I recently enjoyed this talk by Seth Lloyd in which he explains it in great detail:

6. Scott Says:

randomnumber53 #4: Well, that has a nonzero inner product with the truth, but you could probably do better with about the same number of words. 🙂 Try to say something about interference—about quantum mechanics being based on amplitudes, which are complex numbers that you use to calculate probabilities, and it being possible for the different amplitudes that lead to a given outcome to “interfere destructively” and cancel each other out, and that being a very good thing, if that outcome corresponds to a wrong answer to a computational problem. And about the entire benefit of a quantum computer, compared to a classical computer with a random-number generator, coming from choreographing these patterns of interference of amplitudes.
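The simplest instance of this cancellation is applying a Hadamard gate twice: the two paths leading into |1> carry amplitudes +1/2 and −1/2 and cancel exactly. A toy numpy sketch (an illustration, not from the original comment):

```python
import numpy as np

# Hadamard gate: splits each amplitude across |0> and |1>,
# with a minus sign on the |1>-to-|1> path.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

state = np.array([1.0, 0.0])   # start in |0>
state = H @ state              # equal amplitudes on |0> and |1>
state = H @ state              # the two paths into |1> cancel: +1/2 - 1/2

# All the probability is back on |0>: destructive interference at work.
print(np.round(state, 12))     # prints [1. 0.]
```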

(FWIW, whether there’s any mention of interference is the quickest heuristic you can use in judging popular articles about QC, to assess whether the author has any idea what he or she is talking about. In previous posts, I’ve called this the “Minus-Sign Test.” Most articles fail it!)

7. rrtucci Says:

Why do you give me ppt instead of pdf? How do I know that you are not an MIT-Russian spy?

8. Jay Says:

>amplitudes, which are complex numbers that you use to calculate probabilities

Would QM be fundamentally different if amplitudes were only positive and negative rather than complex?

9. joe Says:

Scott #3: Has anyone discovered any algorithm of practical (or even just intellectual) interest that is able to take advantage of the HHL linear equation solver routine to obtain an exponential speedup for the overall problem? (i.e., not with the caveats that currently apply to the machine learning “algorithms” from Lloyd et al., which provide no way to load data without using exponential resources)

Have you seen the paper from Hopkins APL ( http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.110.250504 ) about solving “electromagnetic scattering cross sections”? This paper is too dense for me to quickly tell whether or not their algorithm also assumes an exponential quantity of data to be loaded into quantum registers beforehand. If you’ve looked at it before, or covered it in journal club, what is your take on this?

10. Ashley Says:

Thanks. Indeed, it would be very nice if there were a provable query complexity separation for a variant of one of these problems.

11. Rahul Says:

Scott’s slides have a quote:

“The miracle, I’d say, is that this trick (i.e. to choreograph an interference pattern, where the unwanted paths cancel) yields a speedup for any classical problems, not that it doesn’t work for more of them”

This sentence makes quantum computers sound like a fait accompli. I’d rather say “it looks like it might yield a speedup for some problems, if a QC is ever built successfully.” But the jury is still out.

To date, no (practical) classical problem has been solved with a speedup by any QC. The miracle, if any, is still in the future.

Maybe it didn’t matter since this audience was quite smart. But I wish the popular press made these points clear. There’s just too much misinformation in circulation.

12. Scott Says:

Rahul #11: No, in this talk I was focusing on algorithms, not at all on implementation. And I was saying that there’s already something “miraculous” about Shor’s algorithm—about the fact that it exists, that the factoring problem is in BQP—regardless of whether anyone ever implements it.

13. Rahul Says:

Scott #12:

Understood. Thanks. I think popular articles often conflate the theoretical existence & a practical implementation.

Other than Shor’s and other quantum algorithms, are there algorithms in TCS that have been proposed but never really implemented? Any other sorts of algorithms waiting for a suitable machine to run them on? Boson Sampling comes to mind (at least in a usefully large incarnation).

In the very early days of TCS I’d imagine there were lots of such algorithms just waiting for a larger, universal computer.

14. Scott Says:

Jay #8:

Would QM be fundamentally different if amplitudes were only positive and negative rather than complex?

That’s a superb question—one of my all-time favorites! I discuss it at some length in Quantum Computing Since Democritus, as well as in Is Quantum Mechanics An Island In Theoryspace?.

Short answer: the set of efficiently solvable problems—i.e., BQP—would remain completely unchanged. (I give as an exercise in my quantum computing course to prove that!) And many, many other phenomena in quantum computing and information would remain unchanged—basically because you can always just encode a complex amplitude using a pair of real amplitudes.
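The "pair of real amplitudes" encoding is concrete enough to check in a few lines: a d-dimensional complex unitary U becomes the 2d-dimensional real orthogonal matrix [[Re U, −Im U], [Im U, Re U]]. A sketch (the random unitary is just for illustration):

```python
import numpy as np

# Build a random 2x2 unitary via QR decomposition of a complex matrix.
rng = np.random.default_rng(1)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
U, _ = np.linalg.qr(M)

# Realification: each complex entry a + bi becomes a 2x2 real block,
# so U (unitary) turns into a 4x4 real orthogonal matrix.
O = np.block([[U.real, -U.imag],
              [U.imag,  U.real]])

assert np.allclose(O @ O.T, np.eye(4))   # real and orthogonal
```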

But there are various more subtle things that would break, owing (for example) to the fact that the real numbers are not algebraically closed. Most obviously, for physicists, we would no longer have a clean picture of Hamiltonian time evolution. Indeed, it wouldn’t even be the case that every unitary (i.e., orthogonal) transformation that we could implement, had a “square root” of the same dimension!
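A minimal example of that last point (my own illustration): the reflection diag(1, −1) is orthogonal, but any real B with B² = M would need det(B)² = det(M) = −1, which is impossible over the reals—while over the complex numbers diag(1, i) works fine.

```python
import numpy as np

# A reflection: orthogonal (i.e., real unitary), with determinant -1.
M = np.diag([1.0, -1.0])
assert np.isclose(np.linalg.det(M), -1)

# No real matrix squares to M, since det(B)**2 = -1 has no real solution.
# Over the complexes, a square root exists:
B = np.diag([1.0, 1j])
assert np.allclose(B @ B, M)
```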

And there are other things that would change: for example, the number of parameters needed to describe the mixed state of a composite system, would no longer just be product of the numbers of parameters needed to describe the individual parts. That, in turn, would have implications for quantum state tomography and de Finetti theorems.
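A quick arithmetic check of the parameter-counting claim, using the standard counts (an n×n Hermitian matrix has n² real parameters; a real symmetric one has n(n+1)/2):

```python
# Real parameters in an (unnormalized) density matrix on an n-dim system:
f_complex = lambda n: n ** 2           # Hermitian matrices
f_real = lambda n: n * (n + 1) // 2    # real symmetric matrices

# Multiplicativity f(ab) = f(a)*f(b) holds with complex amplitudes...
assert all(f_complex(a * b) == f_complex(a) * f_complex(b)
           for a in range(1, 10) for b in range(1, 10))

# ...but fails with real amplitudes, already for a = b = 2:
assert f_real(4) == 10
assert f_real(2) * f_real(2) == 9
```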

Incidentally, besides forcing the amplitudes to be real numbers, another interesting toy modification that people play with sometimes is letting the amplitudes be quaternions. (There’s even an entire book by Steve Adler about quaternionic quantum mechanics!)

15. Scott Says:

Rahul #13: Yes, there are many, many algorithms in classical TCS that probably no one has bothered to implement. I hesitate to give examples, since for any example I give, it might turn out that someone has implemented it after all! But, I dunno … has anyone implemented

– Algorithms coming from Robertson-Seymour graph minors theory
– Algorithms for testing knot equivalence / checking if a given knot is the unknot
– The Coppersmith-Winograd matrix multiplication algorithm
– The n*α(n) deterministic minimum spanning tree algorithm (where α is the inverse of the Ackermann function)
– Various SDP relaxation algorithms for NP-hard problems

?

16. fred Says:

I would think that a lot of the research around trying to realize QC could also yield improvements for classical computers (fabrication, etc).

17. Scott Says:

joe #9: You ask an excellent question to which I wish I had better answers. Personally, I confess I’ve been somewhat uncomfortable with the claims of “exponential speedups for machine learning and linear algebra problems,” when the “start-to-end work” hasn’t yet been done to figure out:

(1) How are you going to prepare the initial state?
(2) How do you know the condition number isn’t going to blow up in your face, and kill the exponential speedup?
(3) What observable about the solution vector are you going to measure, that will tell you something useful and interesting?
(4) Once you’ve answered questions (1)-(3), how confident are you that there then isn’t a fast classical randomized algorithm to compute the same thing?

I went to a talk back in March about the Clader-Jacobs-Sprouse work, on applying HHL to the computation of electromagnetic scattering cross-sections. I think that work is notable as the most detailed attempt so far to think through the answers to questions (1)-(4), and to find an actual application for HHL (i.e., to turn it from what I called a “pre-algorithm” into an algorithm).

Having said that, I’m still not satisfied that a fast classical algorithm for their E&M problem has been ruled out (and not just as a matter of dotting i’s—I worry that such an algorithm might actually exist). The trouble is that they only compare against classical conjugate-gradient methods. But particularly if you only want the scattering cross-section, and not the entire solution vector—and if you have to make whatever assumptions are necessary to keep the condition number bounded—there might be a classical algorithm that achieves comparable performance (at the least, the paper doesn’t say anything to show me why that’s unlikely).

(As a secondary issue, it wasn’t clear to me whether it’s actually been proved that the condition number is bounded by polylog(N) in their application, or whether that’s just a reasonable heuristic assumption.)

You mention the time needed to load the data into quantum registers as a crucial caveat to these results. However, of all the caveats, that’s maybe the one that I’m the least worried about! 😀 In principle, you ought to be able to load lots of data into a QC using a “quantum RAM” (i.e., a RAM that can be queried in coherent superposition). Admittedly, it might be extremely hard to build a quantum RAM (harder even than building a QC itself??), without simply creating lots and lots of parallel computing elements, in which case you now have a parallel computer, and your problem is now easy for reasons having nothing to do with quantum mechanics! But at least in theory, there doesn’t seem to be an obstruction to creating lots of passive memory elements that behave as a quantum RAM.

It’s true that, when your quantum RAM gets big enough, communication latency will become an issue, and will wipe out the exponential speedup. But since exactly the same issue arises in classical computing, and since we’ve long swept it under the rug there (e.g., in saying that ordered lists can be searched in O(log n) time), it seems only fair to sweep it under the rug in the quantum case as well.

In any case, even if a quantum RAM is theoretically possible, I still think the most compelling applications for the quantum linear algebra algorithms are the ones where exponentially-large vectors and matrices can be computed “implicitly” (i.e., entry-by-entry), using some compact, efficient encoding. And E&M scattering cross-sections provide an example where that might happen (provided the object you wanted to scatter E&M waves off of was compactly describable).

I should tell you that I’ve agreed to write a commentary piece about the quantum machine learning / linear algebra algorithms for Nature Physics, to appear sometime next year. So, to prepare for writing that, I’ll be investing a fair bit of time learning more about these algorithms, and clearing up my remaining confusions. (As a first step, if anyone wants to correct anything I said above, by all means do so!)

18. Jay Says:

#14

Great paper! Follow-up on your observation that, calling f(n) the number of parameters needed to describe an n-dimensional mixed state, f(ab)=f(a)f(b) iff amplitudes are complex.

Is there some kind of number system for which f(n) would correspond to the holographic principle?

19. Douglas Knight Says:

Scott 15, the Robertson–Seymour algorithm is easy. It probably has been implemented, with one of the inputs a list of minors. But RS doesn’t tell you which minors! It doesn’t give you actual algorithms!

20. John Sidles Says:

That is a MathOverflow question — “How fast can we *really* multiply matrices?” (2012) — to which Henry Cohn, assisted by several commenters, supplies an in-depth answer that can be summarized as “Apparently no, and there would be little point to it.”

21. Dave Lewis Says:

Some of the slides in the PPT are showing up as blank for me (in Powerpoint for Mac 2011, v. 14.4.5). Any chance you could provide a PDF?

22. Rahul Says:

I wonder if we will hit an n^2 algorithm for matrix multiplication eventually (next 10 years?). There’s no fundamental barrier, right?

Has TCS produced a hard lower bound (>2) on how fast a matrix multiplication can scale?

As an aside, is there a reason why the constants seem to go up as the exponent comes down? E.g., naive matrix multiplication versus Strassen versus Coppersmith-Winograd, etc. Is that just rotten luck, or is it a trade-off that’s ineluctable in some sense?

In other words, is it conceivable (reasonable?) to hope that someone smart comes up with a Coppersmith-Winograd variant with a 2.376 exponent but a low constant term?

23. Raoul Ohio Says:

John Sidles (or anyone),

Can you briefly summarize the current status of “How fast can we *really* multiply matrices”? Obviously interesting topic.

Related question. Is FMM (Fast Matrix Multiplication, say Strassen’s method, or whatever) widely used in major applications? Perhaps algorithms running things for Google, or Amazon, or FB? (Pretty depressing thought that FB might be the poster child for the use of top algorithms!)

Is FMM used as widely as, say, FFT?

Here is an interesting metric for an algorithm: The number of computer cycles saved per (year? second?) in the world by using this algorithm compared to other available algorithms. What are the leading contenders? FFT? Quick Sort? Something from BitCoin mining? Google or FB backbone stuff?

Anyone have any info or thoughts on this?

24. joe Says:

Scott #17: Thanks for the comprehensive reply! I’m glad to hear about the upcoming Nature Physics piece, since there have been far too many experimental papers recently that make claims along the lines of “all we need is a QC with 60 qubits, and then we’ll have a machine learning computer capable of beating Google’s server farm”.

I must be missing something about the “data loading” problem though: from what I can tell, in, for example, the “quantum machine learning” algorithms (for concreteness, the “supervised cluster assignment” algorithm in http://arxiv.org/pdf/1307.0411.pdf), you want to start off by loading an N-dimensional vector into an n=logN qubit register in the quantum computer. That N-dimensional vector comes from a classical source, so I don’t see how you can avoid having to use O(N) resources to load that vector into a quantum register in the QC. If your classical source only allows one entry of the vector to be read at a time, then we require N=2^n steps to read the vector. I must be missing something, but I can’t see how to avoid this problem.

Is there a simple explanation for how, given a classical source of the N-dimensional vector(s), we can load them into a QC using only O(logN) resources? (I tried to read the qRAM PRL paper from 2008, but gave up in frustration.)

25. Itai Says:

Scott #17
Does the same problem with loading data into quantum registers apply when we want to implement Grover speedups?
Grover is effective only with a huge amount of data, and needs access to a quantum database via an oracle.

26. Scott Says:

Rahul #22: No, it remains one of the most famous open problems in CS to prove any lower bound better than Ω(n²) on the arithmetic complexity of matrix multiplication. (We do have lower bounds of Ω(n² log n) if you impose additional restrictions—e.g., that no number is allowed to become too large at intermediate stages of the algorithm.) And many of my friends suspect there really is an O(n²) algorithm (or at any rate, close to that)—for example, because of the analogy with the Fast Fourier Transform for integer and polynomial multiplication. Personally, I’m agnostic.

As for the constant factors blowing up in your face as the asymptotic complexity decreases: well, I can tell you that a similar phenomenon occurs all over TCS, not just with matrix multiplication. I like to use the following analogy: what’s the complexity of starting up a bicycle versus a car versus an airplane versus a spacecraft? In general, the faster you want to travel “asymptotically,” the greater the “constant overhead” you need to invest in learning to operate the vehicle in the first place, and in fueling it and starting it up. Concretely, what happens is that the lower the exponent you want for matrix multiplication, the more elaborate a base matrix you need for your Strassen-style recursion, and that’s what blows up the constant factor.

Having said that, there are also cases in TCS where you find an algorithm that’s faster asymptotically, and has better constants, and is easier to implement—it’s just better in every way. And no, it hasn’t been ruled out at all that something like that could happen for matrix multiplication. I don’t even think it’s unreasonable to hope for (well, it would be hard to beat the constants and ease of implementation of the O(n³) algorithm, but one could at least hope not to do too much worse). The only argument against is the 45 years of effort that have already been invested in the problem without such an algorithm being found.

Having said that, if you care about how fast matrix multiplication actually is on actual computers, then there are other factors that are at least as important in practice as the number of arithmetic operations in your algorithm. The main ones are

(1) memory latency (the sophisticated algorithms often suffer by needing to access matrix entries that are scattered all over the memory, rather than just snarfing them up in neat rows), and
(2) the degree of pipelining and parallelization that’s available (and whether your algorithm is implemented in such a way that the compiler and/or processor can “notice” the parallelization).
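For concreteness, the Strassen-style recursion behind all of this is compact enough to show in full — a sketch, assuming power-of-two matrix dimensions and an arbitrary, untuned crossover to the naive kernel:

```python
import numpy as np

def strassen(A, B):
    """Multiply square matrices (size a power of 2) using Strassen's
    7-multiplication recursion. Illustrative, not performance-tuned."""
    n = A.shape[0]
    if n <= 64:                  # crossover: fall back to the naive kernel
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    # Seven recursive products instead of eight:
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((128, 128))
B = rng.standard_normal((128, 128))
assert np.allclose(strassen(A, B), A @ B)
```

Even here the overheads Scott mentions are visible: the extra additions and the scattered submatrix accesses mean the crossover point has to be chosen empirically.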

27. Scott Says:

Itai #25: Yes, it was noticed since the beginning that there’s a similar issue with Grover’s algorithm—either you need a quantum RAM, or else you need your “database” to be specified implicitly (for example, as the evaluation function of an NP-complete problem), so that there’s no need for expensive data-loading. For that reason, I’ve always felt that the most compelling application of Grover’s algorithm was to combinatorial search problems.

Having said that, again, you could imagine a quantum RAM consistent with the laws of physics … and possibly even with the laws of engineering. 🙂 And Andris Ambainis and I have a 2003 paper where we show that, in the case of Grover search, communication latency would not asymptotically kill the speedup, the way it would for the machine learning algorithms. Indeed, when searching a database arranged in 3 spatial dimensions, you can get the full √N Grover speedup (even accounting for communication latency), while when searching in 2 spatial dimensions, you can search an N-element database in √(N log N) time, which nearly matches not only the Grover limit, but also the limit coming from the finiteness of the speed of light.
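The √N scaling of Grover search itself is easy to see in a classical simulation — a toy sketch in which the “oracle” just flips the sign of the marked item’s amplitude:

```python
import numpy as np

def grover_iterations(n_items, marked):
    """Simulate Grover search over n_items basis states; return the
    number of iterations until the marked item's probability passes 1/2."""
    amps = np.full(n_items, 1 / np.sqrt(n_items))  # uniform superposition
    iters = 0
    while amps[marked] ** 2 < 0.5:
        amps[marked] *= -1               # oracle: flip sign of marked item
        amps = 2 * amps.mean() - amps    # diffusion: inversion about the mean
        iters += 1
    return iters

# Iterations grow like sqrt(N): quadrupling N roughly doubles them.
print(grover_iterations(256, 7), grover_iterations(1024, 7))   # prints 6 13
```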

28. Scott Says:

joe #24:

Is there a simple explanation for how, given a classical source of the N-dimensional vector(s), we can load them into a QC using only O(logN) resources? (I tried to read the qRAM PRL paper from 2008, but gave up in frustration.)

Probably the simplest scenario to imagine is this: suppose your database was encoded using a collection of N beamsplitters—the ith bit is a 1 if and only if the ith beamsplitter is set in a certain way. And now suppose you have a photon that travels through all N of the beamsplitters in superposition, then recoheres. That photon will “coherently query” all N elements of the database. Yet, if we disregard latency (i.e., the finiteness of the speed of light), then there’s really no sense in which we need “linear resources” to perform the query—any more than we need linear resources to make a single, classical query to a classical database of size N. In particular, the beamsplitters aren’t doing anything active that requires a power source, etc.; they’re just sitting there passively, waiting for a photon to pass through them. And the “parallelism” of querying all the beamsplitters in superposition is handled automatically by quantum mechanics.

Granted, the above might (or might not!) be too technologically daunting to ever be practical—but it does seem to show that quantum RAM is a sensible notion in principle.

29. joe Says:

Scott #28: Thanks again.

I’m really slow, and you’ve already patiently explained more than I could hope for, so feel free to ignore this, but I’m still puzzled.

I think I follow your explanation, but it seems to me to be providing an argument for how one can perform a query in O(1) time, rather than how one can load the database in the first place.

Let’s suppose we have a quantum register with n qubits. We can encode an N-dimensional vector in this register, where N=2^n, by loading the elements of the vector as coefficients. For example, if n=5:

|psi> = a_00000 |00000> + a_00001 |00001> + … + a_11111 |11111>

The coefficients a_00000, …, a_11111 (2^5=32 of them) are the elements of the vector.
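In numpy terms, this amplitude encoding looks like the following (an illustration; the data vector is a stand-in):

```python
import numpy as np

# Amplitude encoding: a 32-dimensional classical vector becomes the
# coefficient list of a 5-qubit state, after normalization.
n = 5
vec = np.arange(1.0, 2 ** n + 1)         # stand-in classical data (32 entries)
psi = vec / np.linalg.norm(vec)          # |psi> as a length-32 amplitude vector

assert psi.shape == (32,)
assert np.isclose(np.sum(psi ** 2), 1.0)  # valid quantum state
# The amplitude of basis state |00011> (index 3) is vec[3] / ||vec||:
assert np.isclose(psi[3], 4 / np.linalg.norm(vec))
```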

From what I understand, this is the kind of preparation that the Lloyd et al. “supervised cluster assignment” algorithm calls for.

Now, the thing I’m wondering, is how you can start with your QC having a quantum register with state |00000> and prepare |psi> using only O(n=5) steps (or resources)? For example, if the classical machine originally containing the vector can only be accessed serially, then it seems like we will need to make O(N=2^n=32) reads just to get the coefficients out of the classical memory. Then once you’ve done this, it seems like you’ll have to apply O(N=32) gates on |00000> to prepare |psi>.

Is this not true? i.e., is there a way to prepare |psi> using only, for example O(n=5) steps (or resources*)?

* I have the caveat “resources” here to include the kind of situation where you don’t need O(2^n=32) time steps, but you instead, for example, manipulate 32 different magnetic fluxes for superconducting qubits in order to encode the 32 elements of the vector. (If you need 2^n physical elements to encode the database, then you’ve again just defeated the scalability of the middle part of the algorithm which runs in O(n) time.)

Again, apologies for what is probably a stupid question, and which likely comes across as a delightful mixture of pedantic and naive.

30. Scott Says:

joe #29: Yes, once you have the database, how to prepare the state |ψ⟩ in O(1) steps (or O(log N) steps, depending on how you count)—as usual, ignoring memory latency—is exactly the thing that I was trying to explain. Basically, your photon fans out into a superposition over all the memory registers. So the access to each individual register is not a step that needs to be counted separately from the accesses to all the other registers.

Now, it’s true that the database itself has size N, and that preparing the database in the first place would presumably have required linear resources. But exactly the same objection could be raised against classical log(N)-time algorithms! For example, creating an ordered list of size N requires N time (or even N log N time), but once you create it, you can search it for a given record in only log(N) time. People don’t say that classical binary search isn’t “really” log(N) time, because the log(N) doesn’t count the time to prepare the list in the first place. And if they don’t say that in the classical case, then I see no reason why they should say it in the quantum case. In both cases, preparing the list is just a “one-time investment”; once it’s there, you can do as many different log(N)-time queries as you want.
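The classical analogy can be made concrete with the standard library (a sketch):

```python
import bisect
import random

# One-time O(N log N) investment: build the sorted list...
records = sorted(random.Random(0).sample(range(10 ** 6), 10 ** 5))

# ...after which each individual query costs only O(log N) comparisons.
def contains(sorted_list, x):
    i = bisect.bisect_left(sorted_list, x)
    return i < len(sorted_list) and sorted_list[i] == x

assert contains(records, records[500])
assert not contains(records, -1)
```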

31. rrtucci Says:

Scott #30: It seems to me you are assuming that Google will put all its data into a quantum memory before applying Grover’s algorithm to that quantum memory. But the state in the quantum memory can be used only once and then it will be destroyed or corrupted. So I don’t see how Google can avoid storing its data in a classical memory and loading that data into a quantum memory every time it intends to use Grover’s algorithm. That is why I personally never assume that the initial state used in Grover comes from an unstructured database.

32. rrtucci Says:

I said;” That is why I personally never assume that the initial state used in Grover comes from an unstructured database.”

More precisely, I should have said “encodes all of” instead of “comes from”

33. Scott Says:

rrtucci #31: No, that’s exactly what my last comment was trying to explain is not the case. There’s the N-bit quantum RAM—meaning, the classical records that can be queried coherently—and then there’s the log(N)-qubit state |ψ⟩, which we use the quantum RAM to prepare. The latter gets destroyed after every use, but the former does not. And crucially, once we’ve invested linear effort to prepare the quantum RAM—which, again, despite its name, means a collection of classical records!—it then just takes log(N) additional effort (not counting memory latency, yada yada) to use the qRAM to prepare a fresh copy of |ψ⟩ whenever we want one. And we can do that as many times as we want.

You can doubt that a qRAM will be practical within the foreseeable future—that’s fine! Or you could say that it’s unfair not to count memory latency (but in that case, why don’t you level the same objection against the classical RAM model?). But what I described above is what people mean by a qRAM.

34. rrtucci Says:

I may be wrong, but it seems to me that the method with beam splitters that you propose for loading the classical data into a quantum state would produce a “good” subspace (good in the sense that it contains the database info) of dimension N=2^n embedded in a space of higher dimension N’, and the extra N’-N dimensions would be entangled with the good N dimensions. To disentangle those extra N’-N dimensions from the good N dimensions, and get a vector space of dimension N that isn’t entangled with anything else, would require > O(N) operations.

35. Scott Says:

rrtucci #34: Where is that higher dimension N’ coming from? (And how much higher than N is it, anyway?) Of course there will always be experimental imperfections, decoherence channels, etc. etc. But if we’re talking about a single photon going through N modes in superposition, then at least in principle we’re only talking about an N-dimensional Hilbert space.

36. rrtucci Says:

Again, I’m not sure of this 🙂

Suppose n=3. It seems to me your method will create an 8-dim subspace in a 2^8-dim space,
a|1000,0000> + b|0100,0000> + c|0010,0000> + … + h|0000,0001>,
but you want to map it into a state of 3 qubits, unentangled from the other 5 qubits.

37. Scott Says:

rrtucci #36: Yes, a single photon in 8 modes does span an 8-dimensional Hilbert space, whose basis vectors are |10000000>, |01000000>, etc. But, OK, suppose you wanted to “compress” such a state into a 3-qubit state, which lives in the Hilbert space spanned by the basis vectors |000>, |001>, |010>, |011>, etc. You could do that using a recursive procedure, like so:

First, in parallel, compress |10000000> and |01000000> into a single qubit, along with |00100000> and |00010000>, etc. Next, compress the |xx000000> qubit with the |00xx0000> qubit to form a 2-qubit state, and in parallel, the |0000xx00> qubit with the |000000xx> qubit. Finally compress the |xxxx0000> qubits with the |0000xxxx> qubits to form a 3-qubit state.
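One way to see the punchline numerically (my own toy illustration): the compressed 3-qubit state is the same 8-dimensional amplitude vector, just reindexed by 3-bit strings instead of photon modes.

```python
import numpy as np

# A single photon in N = 8 modes: one amplitude per mode.
amps = np.arange(1.0, 9.0)
amps /= np.linalg.norm(amps)

# The "compression" is a relabeling: the amplitude on mode i becomes the
# amplitude on the 3-qubit basis state |binary(i)>, so the photon-mode
# state and the 3-qubit state are the same 8-dim vector.
psi = amps.reshape(2, 2, 2)          # indexed as [q0, q1, q2]

# Marginal of the first qubit: the probability it reads 1 is the total
# weight on modes 4..7.
p_q0_is_1 = np.sum(np.abs(psi[1]) ** 2)
assert np.isclose(p_q0_is_1, np.sum(amps[4:] ** 2))
```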

You might complain that, in order to do this, I’ve used a linear number of operations (albeit, O(log(N)) time). But now we come to the key point: a linear amount of logic would completely analogously have been needed to access a classical RAM. The only real difference is that, in the classical case, only a single path through the logic gates would have been relevant to any given query. But it doesn’t seem fair to penalize the qRAM for having the power to make superposed queries—that’s, like, its whole thing! And it’s not as if the energy required, or any other obvious resource, is greater in the quantum case than in the classical. (A single photon in superposition costs no more energy than a single photon not in superposition.) But maybe I’m missing something?

38. aram Says:

Nice talk!
I see your point about HHL being a “pre-algorithm” but also am curious whether you’d describe Hamiltonian simulation, Jones polynomial approximation, or a hypothetical algorithm for graph isomorphism the same way, given that in each case we do not know how to construct distributions that are believed to be hard on average (apart from making use of BQP-hardness results for the first two cases). Or are you just referring to the fact that the input is specified by an oracle, and so would say the same thing about amplitude amplification, triangle-finding or the HSP?

On a different note, I totally agree with you about the substantive point that Hamiltonian simulation is still likely to be the most useful quantum algorithm among the ones invented to date. I just wouldn’t use the word “algorithm” so narrowly.

39. rrtucci Says:

I’ll quit while I’m ahead. This is like the first time in 10 years you partially agree with me 🙂

40. Scott Says:

aram #38: It’s a good question. I would say that pre-algorithm vs. algorithm is not a sharp distinction, but a matter of degree. Is this algorithm something that you take right out of the package, put in the oven, and get an exponential speedup? Or is it more like one of those brownie mixes where all you need to add is the flour, sugar, eggs, butter, and chocolate?

For graph isomorphism, it’s true that we don’t know a distribution over hard instances, but at least it’s very easy to state the problem being solved. Whereas with HHL, the problem being solved is often summarized as “solving linear systems,” but of course there are 3 caveats (how to specify the input, the condition number, what observable to measure about the solution vector). As long as those caveats are remembered, I’d say yes, it’s very similar to graph isomorphism.

41. joe Says:

Scott #30, and later: Ah, I see. The point I was missing is that a qRAM allows you to keep generating copies of |psi> (an n-qubit register encoding an N=2^n-dimensional vector in its coefficients) in time O(1), once you have expended the O(N) or O(N log N) effort in preparing the qRAM.

Now I think I understand your beamsplitter network example: the n beamsplitters somehow act to encode the N-dim vector, so that when a photon passes through, the photon is in the state |psi>, which took O(1) effort (time for the photon to pass through the beamsplitter network).

Is there any physical-implementation-agnostic way to describe how to construct a qRAM? Can we describe the construction and operation of a qRAM in terms of qubits and one- and two-qubit gates? (The small number of papers I’ve seen in the literature so far all seem to give examples of how to construct a qRAM using some very specific set of physical tools, e.g. optical beamsplitter networks, or atoms in cavities and photons.)

In the case of the beamsplitter network qRAM you describe, what is the physical meaning of the qubits in the resulting |psi>? E.g., for n=3, I first thought that |100> is the state where there is 1 photon in the first mode, and 0 photons in the second and third modes. However, I suppose that’s not right, since then if you only put a single photon into the beamsplitter network, only the states |100>, |010> and |001> can have non-zero amplitude (and |000> if you consider photon loss). Suppose I want to make a beamsplitter qRAM that can produce the following |psi>; how would I do it?

|psi>=(1|000> + 1.1|001> + 1.2|010> + 1.3|011> + 1.4|100> + 1.5|101> + 1.6|110> + 1.7|111>)/a

(where “a” is the normalization factor)
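For real, non-negative amplitudes like the example above, the standard construction is a binary tree of beamsplitters: the node above a block of paths splits the photon’s amplitude between the two halves of that block in proportion to their norms. A minimal sketch (my own, with made-up function names) that computes the angles and checks they reproduce the target state:

```python
import numpy as np

def splitter_angles(target):
    """Beamsplitter angles for a binary tree that routes one photon into
    len(target) paths with the target (real, non-negative) amplitudes.
    The node over a block of paths splits amplitude between its two
    sub-blocks in proportion to their L2 norms."""
    a = np.asarray(target, dtype=float) / np.linalg.norm(target)
    n_levels = int(np.log2(len(a)))
    angles = []
    for level in range(n_levels):
        block = len(a) // 2 ** (level + 1)
        row = []
        for j in range(2 ** level):
            seg = a[j * 2 * block:(j + 1) * 2 * block]
            wl, wr = np.linalg.norm(seg[:block]), np.linalg.norm(seg[block:])
            row.append(np.arctan2(wr, wl))  # cos -> left sub-block, sin -> right
        angles.append(row)
    return angles

def reconstruct(angles):
    """Amplitude reaching each leaf: product of cos/sin factors on its path."""
    amps = np.array([1.0])
    for row in angles:
        amps = np.concatenate(
            [[amp * np.cos(th), amp * np.sin(th)] for amp, th in zip(amps, row)]
        )
    return amps

# the example state above, up to normalization
target = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7]
angles = splitter_angles(target)
```

Complex amplitudes would additionally need a phase shifter at each node; and in gate-level terms this same tree is just a sequence of controlled rotations, one per level, which is one implementation-agnostic answer to the question.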

Thanks again, and good luck with the article. If you have even some of your discussion points from this comment thread in it, I think it’s going to be a really helpful article.

42. jonas Says:

Scott, re #37 about the RAM. The principle sounds about right to me.

There’s some magic about constant factors in the classical case. These days we use DRAM, a brilliant electronic-circuit invention that lets you store lots of bits in very little circuit-board space and with much lower energy consumption than the more obvious SRAM. But of course, since we currently have no idea how to build practical quantum computers at all, we can’t really speculate about the corresponding details for quantum computers.

43. upen Says:

joe, I don’t see how a qRAM would allow you to copy registers; there is a no-cloning theorem.

44. Scott Says:

upen #43: Dude. The No-Cloning Theorem doesn’t apply to classical information, and the information in the qRAM is completely classical. As I said, the “q” in the name only means that the information can be queried in coherent superposition, not that it itself is in a quantum state.

45. upen Says:

Thanks Scott. I have always wondered whether superposition of light waves (leaving aside all the implementation difficulties) can lead to quantum states that are more abstract than superpositions of electrons, where only 0/1 are possible. Suppose I superpose wavelength l1 with l2: what will be the size of such a quantum register? Will it still be a qubit, or is it a qDigit? And even if we can somehow implement a qDigit, what quantum operators can we define on it?

46. Raoul Ohio Says:

Please excuse me for briefly revisiting a sad topic of a few months ago: creativity, depression, and suicide. I think most of us will find this from today’s Ars Technica to be useful, insightful, and touching:

http://arstechnica.com/staff/2014/10/harnessing-depression-one-ars-writers-journey/

47. Richard Cleve Says:

Hi Scott:

In the slides, you refer to your 2009 result about Fourier Checking. I’m wondering what your current take is on the other result in that paper, about Fourier Fishing.

My interpretation of Fourier Fishing has always been that it gives a black-box problem that’s solvable in polynomial time on a quantum computer, but that is not in the classical polynomial hierarchy (that is, under a reasonably natural definition of PH for black-box problems).

To me, it seems like a pretty strong result. But, in conversations over the years, I’ve often found myself defending the interestingness of it.

If you had included it in your talk, what would you have said?

48. Scott Says:

Richard #47: I would’ve said that Fourier Fishing is basically the black-box analogue of BosonSampling. I.e., compared to BosonSampling, the advantage of Fourier Fishing is that you can prove an exponential separation (and not only that—as you said, you can prove that Fourier Fishing is not in BPP^PH), but the corresponding disadvantage is that Fourier Fishing is relative to an oracle. Indeed, “historically,” thinking about Fourier Fishing is exactly what led me to think about BosonSampling (and to suggest to Alex Arkhipov to think about it more): I was looking for an unrelativized analogue of something I already knew in the relativized setting.

Of course, you could also “instantiate” Fourier Fishing with an explicit Boolean function. If you do that, then you get something that’s basically equivalent to Bremner, Jozsa, and Shepherd’s IQP model. For that model, we have results about the hardness of exact sampling directly analogous to what we have for BosonSampling (i.e., if you can do it efficiently classically, then PH collapses to BPP^NP). But we don’t yet have results about the hardness of approximate sampling analogous even to the incomplete results that we have for BosonSampling.

49. John Sidles Says:

Another quick (and sobering) announcement:
——————–
Take my research, please!  IBM Pays GlobalFoundries $1.5 Billion To Shed Its Chip Division. This news has long been foreseen by the QM/QIT/QC research community within IBM, yet it is by no means welcome.

50. Jon Says:

Hey Scott,

I feel compelled to point out that it seems old-fashioned to post .ppt files, because there is software available that will record audio in sync with a screenshot of your slides (or whatever is going on on your laptop display) and post the whole thing to YouTube:

For IBM: Camtasia studio
For Apple: see http://www.virology.ws/2013/03/01/how-i-record-my-lectures/

Anyway, it’s a way to make video of your talks (or practice talks) with no A/V guy required, as long as you’re willing to settle for Audio + Slides, instead of Audio+Slides+video of your face.

51. Serge Says:

Jon speaking of Slides… interesting! 😉

52. anon Says:

Scott: I consider you as one of the thought leaders in theory, so I want to ask what’s your view point on the controversy surrounding Hajiaghayi’s CS department ranking proposal?

First, is ranking a necessary evil? For superstars like you, or schools like MIT or Stanford, ranking probably doesn’t matter. But for the rest of us, ranking does seem to matter (grant applications, graduate enrollment, etc.). If it is a necessary evil, do you think Hajiaghayi is taking a step in the right direction? If it is a pure evil, what should the community do as a whole to rectify the situation? Next time US News releases its ranking, should the community write an open letter to protest the practice?

Second, as evidenced by the comments on Lance’s and Luca’s blogs, there are jerks in this field, making personal attacks under a veil of anonymity. I would question the integrity of these people in their professional activities (e.g., anonymous conference paper review), and this makes me worry. What do you think about those comments, and do you think I am too paranoid?

Thanks!

53. Job Says:

Scott,

I have been looking into Deutsch’s and Simon’s algorithms, which provide oracle separations for P/QP and BPP/BQP respectively.

I was wondering: isn’t the black box “unfair” in the context of an oracle separation? What I mean is that, for the QC, f(x) is more like a “gray box,” in the sense that qubits have additional (implicit) state that may reflect the implementation of f(x).

Naturally, we can argue that this is an example of how QCs are more powerful than classical machines, but is that the case in the context of Deutsch’s/Simon’s problems, or is the black box simply establishing a priori that classical machines are not able to perform the same task just as efficiently?

For example, suppose we develop a new computational model X, where XP contains the set of problems efficiently solved by X. An X machine operates via a set of elementary gates defined by the X model. As with a QC, we impose that a black-box provided to X must be implemented using X’s gates.

We can then ask two questions:
1. Are Deutsch’s & Simon’s problems in XP?
2. Is XP contained in P? (i.e. can a TM efficiently simulate an X machine?)

What I’m getting at is: can we not build a set of X gates that in fact make f(x) totally transparent, yet are efficiently simulated by a TM?

For example, suppose that the X gates correspond to classical OR/AND gates with the added feature that an execution transcript is kept. In that case, an X machine will have some insight into what f(x) is doing. As with a QC, it’s no longer a completely black box.

If it’s then shown that X is efficiently simulated by a TM, does that mean that the Oracle separation no longer holds? If not, why not? What’s up with the black box?

54. Scott Says:

anon #52:

what’s your view point on the controversy surrounding Hajiaghayi’s CS department ranking proposal?

I saw Hajiaghayi’s ranking, but I confess I wasn’t aware that there was a controversy. Where is this controversy raging?

All Hajiaghayi is doing, is ranking American CS theory groups by the total number of papers they publish each year in the major theory conferences (not normalized by the number of people). It seems obvious that that’s one statistic that (say) a student might want to know when deciding where to go to grad school, and equally obvious that it’s very far from the only relevant piece of information—even if we restrict ourselves to the things that are objectively quantifiable.

55. Scott Says:

Job #53: I confess that I didn’t fully understand your question, but briefly—in quantum oracle separations, it’s extremely important that the classical and quantum algorithms are given black-box access to exactly the same function. And there’s no requirement that the black box can “only” be accessed quantumly, or anything like that. Yes, quantum queries turn out to let you decide some particular thing about the function much faster than classical queries, but that’s the entire point!

It’s true that the quantum black box can be queried in superposition—and therefore, one could say, it “provides a functionality” that the classical black box doesn’t. However, what justifies this is the observation that, if you had a quantum computer, and you knew a circuit to compute some function f, then you could automatically also compute f on a superposition of inputs, because of quantum-mechanical linearity.

If you’re interested in separations, the Deutsch-Jozsa algorithm is a really bad example, since it only gives a separation between quantum and deterministic query complexity, and no separation at all (or only a factor-of-2 separation) between quantum and classical randomized. Simon’s algorithm provides a much better example. And of course, Simon’s algorithm led to Shor’s period-finding algorithm, which has “non-black-box” applications such as factoring and discrete log. One would hope that fact would lay concerns about the relevance and meaningfulness of the quantum black-box model to rest.

Also, keep in mind that you only get large quantum black-box separations for certain highly-specific problems, and not others. For example, there’s an exponential separation for Simon’s problem, but not for Grover’s search problem—and for computing the parity, there’s only a factor-of-2 separation. This is an additional indication that the quantum black-box model is telling you something real about the power of quantum algorithms; it’s not just a matter of playing around with definitions.
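To make the Simon example concrete: each run of the quantum subroutine returns a uniformly random string y orthogonal (mod 2) to the hidden string s, and roughly n such samples pin down s, whereas any classical algorithm needs exponentially many queries. A classical simulation of that sampling-and-elimination loop (my own sketch; the quantum part is replaced by direct sampling from the orthogonal subspace, which is the distribution the real circuit produces):

```python
import numpy as np

rng = np.random.default_rng(1)

def simon_sample(s):
    """One run of Simon's quantum subroutine, simulated classically: the
    measurement outcome is a uniformly random y with y . s = 0 (mod 2)."""
    while True:
        y = rng.integers(0, 2, len(s))
        if np.dot(y, s) % 2 == 0:
            return y

def rank_mod2(rows):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    if not rows:
        return 0
    mat = [list(r) for r in rows]
    rank = 0
    for col in range(len(mat[0])):
        piv = next((i for i in range(rank, len(mat)) if mat[i][col]), None)
        if piv is None:
            continue
        mat[rank], mat[piv] = mat[piv], mat[rank]
        for i in range(len(mat)):
            if i != rank and mat[i][col]:
                mat[i] = [a ^ b for a, b in zip(mat[i], mat[rank])]
        rank += 1
    return rank

def recover_secret(s_hidden):
    """Sample until the y's span the subspace orthogonal to s (rank n-1),
    then find the unique nonzero s consistent with all of them (brute
    force over 2^n here, for clarity rather than efficiency)."""
    n = len(s_hidden)
    rows = []
    while rank_mod2(rows) < n - 1:
        rows.append(simon_sample(s_hidden))
    for cand in range(1, 2 ** n):
        s = [(cand >> i) & 1 for i in range(n)]
        if all(np.dot(y, s) % 2 == 0 for y in rows):
            return np.array(s)
```

The number of quantum queries here grows linearly in n, while a classical machine restricted to the same black box needs on the order of 2^(n/2) queries; that gap is the oracle separation.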

56. Job Says:

Scott,

What I’m picking on is the fact that the QC uses a quantum implementation of the black box, whereas the classical machine uses a classical black box.

This just stood out as a fallacy, because the setup may not yield an accurate test; I’m not contesting that QCs are much faster.

For example, suppose we invent a new machine X which is truly faster than classical machines, but which also leaves behind additional “residual” state.

If we show that X is able to determine an internal property of a black-box function f(x) faster than a classical machine, then is that because X was using its true computational power, or was it sufficient to process the residual state in a classical way?

Similarly, does a QC solve Simon’s/Deutsch’s problems by using its true computational abilities (as seems to be the case with Shor’s/Grover’s), or is it just processing the residual state (e.g. the resulting probability distribution) in a trivial way?

57. Job Says:

Scott, one more observation. You mentioned that:

“It’s true that the quantum black box can be queried in superposition—and therefore, one could say, it “provides a functionality” that the classical black box doesn’t. However, what justifies this is the observation that, if you had a quantum computer, and you knew a circuit to compute some function f, then you could automatically also compute f on a superposition of inputs, because of quantum-mechanical linearity.”

The same statement can be made about a machine X that logs and publishes all operations, e.g.:

It’s true that X has access to the black box’s execution transcript–and therefore, one could say, it “provides a functionality” that the classical black box doesn’t. However, what justifies this is the observation that, if you had an X computer, and you knew a (classical) circuit to compute some function f, then you could automatically also compute f while capturing its execution transcript, because X still uses classical gates (the gate’s activity just gets logged).

The underlying argument is that Simon’s & Deutsch’s problems are in P because there is an X machine that solves them and which is efficiently simulated by a classical TM.

You also mentioned that:

For example, there’s an exponential separation for Simon’s problem, but not for Grover’s search problem

Which I find strange, and indicative that the black-box setup in Simon’s problem may be the reason why it gets such an accentuated speedup.

58. Scott Says:

Job #56-57: If you want, you can think of the quantum black-box model as a “laboratory” for devising quantum speedups, or for exploring where speedups might be possible. Some black-box speedups (e.g., for Simon’s original problem, or for Recursive Fourier Sampling) “never make it out of the lab”: i.e., we haven’t found any real, unrelativized problem that they correspond to. But others (e.g., for Shor’s period-finding problem) do make it out of the lab; they led to real quantum algorithms for problems like factoring. (And FWIW, Shor’s period-finding algorithm built directly on Simon’s algorithm, and BosonSampling also had its origins in a quantum oracle separation.) And these speedups justify the laboratory’s existence, if justification were needed.

In summary, the quantum black-box model is a mathematical model that
(a) is rigorous and beautiful and nontrivial in its own right (and includes its own internal, well-defined notion of “quantum speedup”), and
(b) has led, on multiple occasions, to actual (non-black-box) speedups for actual problems. (And has also helped us understand why other problems don’t seem to have efficient quantum algorithms.)

So what more do you want? In math and TCS, ideas only have to justify themselves by their internal coherence and elegance and by the fruitfulness of what they lead to, not on some a-priori metaphysical ground.

59. Job Says:

Scott,

The quantum black-box argument as stated for Simon’s/Deutsch’s problems seems slightly flawed, in my view; I’m simply pointing this out and trying to understand whether that is in fact the case.

Note that I’m not debating whether QCs are fast. My point is that the following argument:
“Classical machines can’t solve Simon’s/Deutsch’s/Bill’s problem efficiently by design, since they can’t see through the black box.”

should take the following form instead:
“Classical machines can’t solve Simon’s/Deutsch’s/Bill’s problem efficiently because no feasible machine X exists that both solves that problem efficiently and is efficiently simulated by a classical machine.”

As you mentioned, this doesn’t change the fact that we have viable quantum algorithms such as Shor’s/Grover’s, which provide faster alternatives to classical solutions; but it would be a flaw nonetheless, and I would like to understand whether this has been argued, and why the two statements above are in fact the same.

If one of your blog readers has an opinion here, I’m interested in figuring out where my reasoning is off so I can move on.

60. Job Says:

Also,

Is Simon’s problem the only separation between BPP and BQP (other than the probable separation provided by Shor’s/Grover’s, assuming those problems are not in P/BPP), or are there other established separations between these two classes?

That by itself would warrant a closer look at black-box arguments. Are there any non-black-box separations between BPP and BQP?

61. Count Doofus Says:

Scott #14:
I used to think that quantum math is complexity-bounded for the simple reason that it tries to account for both the action and the reaction at the same time, without falling into the “incompleteness” of deriving one from the other.

It’s the magick of complex numbers, which are themselves and their negatives together; this is what I thought when I first learned about the imaginary unit during high school.

62. Ben Standeven Says:

I’m not understanding your argument here. Supposing that I have a non-deterministic Turing machine M that takes SAT instances and determines if a satisfying assignment exists, we can of course assume that the entire computation of M is explicitly logged. So M actually provides a satisfying assignment, if there is one.

Now, there is obviously a deterministic Turing machine N that takes the log/satisfying assignment produced by M and verifies that it works; this is an “XP” oracle-machine that solves SAT, and it can be efficiently simulated by a P oracle-machine, because it already is one. So would you then argue that SAT is in P, just as you [apparently] argued that the Simon and Deutsch problems are?

Maybe you meant “classical” to imply “deterministic”; but then I don’t see how there could even be a [computable] oracle separation between XP and P, since when the XP machine reads the transcript, the P machine can simply simulate the oracle. This would then produce only a polynomial slowdown.

63. Ross Snider Says:

@Scott

You regularly miss obvious military applications such as the calculation of EM scattering cross sections (for the design/defeat of stealth materials).

http://arxiv.org/abs/1301.2340

64. Scott Says:

Ben #62: Who are you addressing—me or Job?

65. Scott Says:

Ross #63: Dude, I already discussed that EM scattering paper in comment #17! As I said, I thought it was nice—I’m glad someone is trying to figure out a real, start-to-finish application for the HHL algorithm. But the idea that the paper has “obvious” military applications (!) is one that I doubt would be endorsed by the authors themselves, since it involves multiple enormous speculative leaps. First, you’d need an application where the shape of your material could be implicitly specified by a short formula—or otherwise gotten into quantum registers efficiently. Second, you’d need the condition number to be bounded. Third, you’d need the limited amount of information that HHL can give you about the solution vector (e.g., just the cross-section) to be interesting for your application. Fourth, and most importantly, once the first three conditions were satisfied, you’d need for there still not to be any efficient classical solution.

Also, since you claim that I “regularly” miss obvious military applications (!), could you perhaps provide one other example?

66. Ross Snider Says:

@Scott

Sorry I didn’t read the comment section. Doh!

I also hadn’t understood, and still don’t fully understand, all the caveats to the algorithm. Thank you for setting the record straight!

Lockheed has said a couple of times that it is interested in QC because of connections to software verification (I don’t precisely understand why that is). I tracked down one of the quotes:

“Lockheed, according to Brownell, hooked up with D-Wave out of a mutual interest in the types of calculations that a quantum computer could perform. ‘They had algorithms that were applicable to our technology,’ he says. ‘Their particular focus is software verification.’”

The same article speculates about other applications (QML) which I’ve seen before closer to first party sources. Many of the applications seem to be “do the same thing we currently do but better” – which would also include classes of optimization problems be they for communication relay and antenna designs, satellite communication systems, or ‘mundane’ vehicle design.

It’s also important to note that even polynomial speedups are of interest to military folk. An example of this may be the solving of systems of partial differential equations, which has your usual engineering applications.

67. Job Says:

Ben,

By “classical” I mean a machine in at most BPP; your XP/SAT machine is not efficiently simulated by a classical machine.

You also mentioned:

“Maybe you meant “classical” to imply “deterministic”; but then I don’t see how there could even be a [computable] oracle separation between XP and P, since when the XP machine reads the transcript, the P machine can simply simulate the oracle. This would then produce only a polynomial slowdown.”

Does that still apply to a black-box setup? That’s essentially my point.

In normal conditions, XP does not provide any advantage, since it must be efficiently simulated by a classical machine. On the other hand, I can define an X machine, and X gates, such that any input black box (which is a priori required to be implemented with X gates, just as a black box provided to a QC must be implemented with quantum gates) becomes a gray or white box.

In that case, even though XP does not provide any computational speedup, it does provide the internal details of the black box, which is relevant information when the problem is about determining a property of the black box (as with Simon’s/Deutsch’s).

Note that I’m not saying that this would automatically put Simon’s/Deutsch’s problems in P or BPP, only that a black-box separation between P/BPP and QP/BQP would need to show that no such machine X is possible.

68. asdf Says:

In case anyone has missed the latest quantum wtf:

https://news.brown.edu/articles/2014/10/electron

69. Job Says:

For context, the reason I’m picking on the quantum black box is that, after studying Deutsch’s circuit, it’s apparent (to me) that there is residue in the black box’s output that may be measured by a QC to determine whether f(x) depends on x.

It does not seem to be the case that the QC is performing any computationally significant task; it’s exploiting the fact that it has access to the residual quantum information in the black box’s output.

Sure, the QC can trigger and inspect the quantum residue in the black box’s output; that’s what makes it powerful. But that would not be sufficient to separate BPP and BQP, because it changes the definition of the problem.

As it is there are two versions of black-box problems:

The classical version
Given a black box that receives x and outputs f(x)+a, determine a property P of f(x).

The quantum version
Given a black box that receives x and outputs f(x)+b, determine a property P of f(x).

Where a and b are residual output information, and a != b.

Now, we’ve shown that a QC can solve its version faster than a classical machine.
Does that illustrate a capability of QM we can use for computation? Sure.

Does it prove that a QC is computationally more powerful than a classical machine (as Shor’s/Grover’s suggest)? Not unless we show that there is no value of a such that a classical machine is able to solve the problem just as fast.

Traditionally, we’d require that a = b, but it seems that it’s ok to abandon that requirement here. Why?

If a and b may differ, then we need to account for every legitimate value of a.

70. Scott Says:

Job #69: Again—Deutsch-Jozsa is an unconvincing example of a quantum speedup, since it’s just a factor of 2, which could be completely wiped out by details of the oracle access mechanism. Simon’s algorithm is a much better example. If you had an actual, efficiently-computable function f that satisfied the Simon promise, as well as a quantum computer, then you could actually use Simon’s algorithm to find f’s hidden shift. And for the closely-related period-finding problem, which arises when breaking RSA, you can of course use Shor’s algorithm. I don’t know how on earth you account for those facts on the view that quantum black-box speedups are just artifacts of unfair definitions.

Of course, everyone who knows anything about this subject knows that black-box separations don’t rule out the possibility of a fast classical algorithm that “opens the black box”—that’s why we say, for example, that we can separate BPP from BQP relative to an oracle, rather than that we can separate them in the unrelativized world. At this point, this discussion feels like pure word-splitting to me; I don’t know what substantive point is at issue.

71. Scott Says:

Ross #66: The Lockheed thing is widely recognized as an absurdity, with the fingerprints of pointy-haired bosses all over it. In order to claim an application of the D-Wave machines to finding bugs in jet-fighter code (or whatever), they simply refuse to ask the question of whether one could perform the same task just as well or better using a classical computer, presumably because they don’t want to know the answer to that question. (Namely, of course one could do just as well or better classically—so far, researchers who set out to do so haven’t even been able to find any clear advantage for the D-Wave machine on the problem of calculating its own ground state.) This isn’t what science looks like. The EM scattering application, despite its speculative nature, is an orders-of-magnitude better example than the D-Wave/Lockheed one.

72. Job Says:

Scott,

The behavior of this particular oracle is strange to me, because it’s “architecture-specific.”

Suppose we define an oracle O that receives x and produces f(x)+c, where f(x) is some random noise, and c is “yes” if x is a satisfiable Boolean formula and “no” if x is not satisfiable.

But here’s the catch: c is inaccessible to quantum computers; it’s always null.

That means BPP^O != BQP^O, but what is this really evidence of? What was shown by this oracle separation?

73. Ross Snider Says:

@Scott

I would presume they are looking to verify _classical_ code with quantum algorithms – especially since it would be tough to get a QC up into the air…

But again I don’t know that there is any advantage in this case either.

Finally, given the posterior implied by the (current) advances in automated verification of Probabilistic Programming Language programs, and DARPA’s funding of such, how absurd are generalizations to/from the L2 norm, and how small a prior do you give DARPA/Lockheed for having made private advances on this front?

74. Scott Says:

Job #72: But what you defined is not an “oracle” in the sense we mean in theoretical computer science. It violates the rules. The oracles that interest us are ones that compute exactly the same function in the classical and quantum cases—the only difference being that, in the quantum case, you get to query the oracle in superposition. I think you’ll find that it’s not quite so trivial to separate BPP from BQP relative to that kind of oracle! With the benefit of today’s knowledge, it’s not that hard, but it does require rediscovering one of the main quantum algorithms (like Simon, Shor, or Bernstein-Vazirani). And other oracle separations (e.g., AM from PP, SZK from BQP, PH from PSPACE, the levels of PH from each other…) are quite nontrivial even given today’s knowledge. Others (e.g., QMA from QCMA, QMA from QMA(2), SZK from PP) remain unknown to this day.

75. Scott Says:

Ross #73: Yes, they’re looking to verify classical code with quantum algorithms. But no, we have no good reason to think you can get any advantage for this task with D-Wave’s machines.

Yes, there are lots of exciting advances these days in automated verification, and yes, I think it’s perfectly reasonable for DARPA to be funding such work. (For that matter, I think it’s also reasonable for them to fund quantum computing! 🙂 Their mission is supposed to be broader than things with direct military applications; it has to do with keeping the US at the technological forefront.)

And if there were some spinoff from quantum computing research that benefited classical automated verification—well, it wouldn’t be the first time that QC had had an interesting classical spinoff.

And if you had a true fault-tolerant QC, you could try running the adiabatic algorithm or Grover search to get a speedup for software verification. We can’t say with confidence that you’d see any speedup over the best classical algorithms, but nor can we say with confidence that you wouldn’t.

The one thing I can tell you with confidence is that no one is using the quantum hardware that exists today to get any genuine speedups for software verification.

76. Job Says:

Scott,

My point was that the QC’s extra ability to query the oracle in superposition also breaks the rules (as did my oracle O), and that to fix this we’d either need to remove this quantum ability or allow the classical machine to get additional state from the oracle.

I understand your comment that the black-box setup is a thought experiment from which insight may be gained to produce algorithms like Shor’s, and that it does not by itself separate BQP from other classes; and that if we could solve Simon’s problem fast classically, by having the oracle return some additional yet trivial state, then we might well be able to do the same for integer factorization.

I think it’s pretty interesting and worth thinking about what kind of trivial state a classical machine might need from the oracle in order to solve Simon’s problem; I confined this trivial state to be any state that may be captured by an X machine (on which the black box runs) that’s efficiently simulated by the classical machine.

Anyway, that’s just one of the questions I had. I was also wondering if anyone has tried to exploit the Law of Total Probability for computation, e.g. maybe to compute the Fourier transform. You’ve mentioned that negative probabilities would be sufficient to match much of a QC’s power, but are negative probabilities that difficult to realize?

77. Ross Snider Says:

@Scott #75

Right. So I guess this is where we come full circle back to the presentation.

Today nobody is using the D-Wave machines for anything (okay, okay, besides perhaps the research itself and for solving those odd ising spin problems less quickly than commodity hardware). So you can’t demerit these applications for not yet being fun. If you were, for no reason other than consistency, you would also need to demerit Shor’s algorithm and in fact everything in the slide deck.

The presentation is not about real-being-done-today applications but possible-might-be-done-sometime applications. It’s about ‘what do we want these machines for?’ So I would argue that you should consider expanding the slides to include EM scattering and software verification, wary of course of how far they sit in speculation land. I know you’re pretty good at that – better than yours truly, so I’ll leave you to that calculus.

Anyhow thanks for the informative discussion!

78. Ross Snider Says:

*done

79. Sniffnoy Says:

Maybe it’s just me, but objecting that a quantum computer gets to query its oracles in superposition seems to me like objecting that a probabilistic computer gets to query its oracles based on a random decision (i.e. in “probabilistic superposition”, which is as much superposition as it can muster).
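For concreteness, here is the simplest case of what a superposed oracle query buys — Deutsch’s problem, simulated classically (an illustrative sketch; the helper name `deutsch` is mine, not anything from the thread):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def deutsch(f):
    """Decide whether f: {0,1} -> {0,1} is constant with ONE oracle query."""
    psi = H @ np.array([1.0, 0.0])                      # |+>: both inputs at once
    psi = np.array([(-1) ** f(0), (-1) ** f(1)]) * psi  # one phase-oracle query
    psi = H @ psi                                       # interfere the two branches
    return "constant" if abs(psi[0]) > 0.5 else "balanced"

print(deutsch(lambda x: 0))  # constant
print(deutsch(lambda x: x))  # balanced
```

A classical deterministic machine needs two queries here; the superposed query extracts a global property of f in one, which a probabilistic mixture of classical queries cannot do with certainty.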

80. Serge Says:

Scott #58:
“In math and TCS, ideas only have to justify themselves by their internal coherence and elegance and by the fruitfulness of what they lead to, not on some a-priori metaphysical ground.”

You can’t justify mathematics by metaphysics, but your conjectures may arise from such principles that aren’t a-priori in the least. The sources of invention are so diverse that dismissing all metaphysics in the first place doesn’t sound like good strategy – to me at least. Of course, internal coherence remains the ultimate judge… provided there’s actually something to check!

81. Raoul Ohio Says:

Are dB^4 (de Broglie, Bohm, Bell, Bush) right?

http://www.wired.com/2014/06/the-new-quantum-reality/

If so, how does this impact QIT and QC?

I had missed the fact that the proofs that hidden variable theories cannot work had apparently been shown to be wrong.

82. Scott Says:

Raoul #81: No. None of these “oil-drop” models can explain entanglement phenomena, like Bell inequality violation—as anyone who understood the Bell inequality could have immediately predicted. So, the models do OK at reproducing the behavior of one quantum particle, but they badly fail when there are two or more entangled particles. The reason is not something technical or fixable, but goes to the core of quantum mechanics: the state of the world really is a vector of amplitudes in a gigantic Hilbert space. So if your only degrees of freedom are the values of fields in 3-dimensional space, then you’re necessarily going to get things wrong—as, in fact, you do.
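This can be checked numerically: two entangled qubits already violate the CHSH form of the Bell inequality (a small sketch assuming numpy, using the standard textbook measurement settings):

```python
import numpy as np

# Pauli matrices and the two-qubit singlet state (|01> - |10>)/sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def corr(A, B):
    """Expectation value <psi| A (x) B |psi> of a joint measurement."""
    return psi @ np.kron(A, B) @ psi

# Standard CHSH settings for Alice (A0, A1) and Bob (B0, B1)
A0, A1 = Z, X
B0 = (Z + X) / np.sqrt(2)
B1 = (Z - X) / np.sqrt(2)

S = corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1)
print(abs(S))  # 2.828... = 2*sqrt(2) > 2, the local-hidden-variable bound
```

Any local-realistic model — oil drops included — is capped at |S| ≤ 2, which is why no classical field theory in 3-dimensional space can reproduce this number.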

To their credit, many of the people doing the oil-drop experiments are well aware of these facts. When confronted with them, they simply prefer to change the subject: “look, let’s just talk about 1 particle, and leave 2 or more particles for the future.” (Of course, Bell’s Theorem explains why the generalization to 2 or more particles will never work, as long as it remains local-realistic: it’s not a matter of getting around to it. But, again, the convention seems to be to change the subject when this comes up.)

You mentioned Bohm. Back in the 1950s, he already foresaw this fundamental problem with hidden-variable theories, and he proposed a solution: just import the entire framework of QM, exponentially-large amplitude vector and all, into your hidden-variable theory, in order to guide the hidden variables in just such a way that they always reproduce the experimental predictions of QM. If you do this, then not surprisingly, you do reproduce all the experimental predictions of QM, so the implications for QIT and QC (for example) are nil. But it remains questionable whether you gained any explanatory power for all your trouble.

By contrast, Anderson and Brady (who are quoted in the article) simply want to junk QM, with all its empirical successes, and replace it with a total nonstarter classical model. In their papers, they alternate between saying that experiments haven’t really proved Bell inequality violation, and giving nonsensical and incomprehensible “explanations” for how classical oil drops can violate the Bell inequality too. (See the previous thread where we were over this ground.) This stuff, I’m sorry to say, has crossed into crackpot territory, and doesn’t deserve any of the inexplicably-respectful attention it’s been receiving on various news websites.

83. Raoul Ohio Says:

Scott,

Thanks. I had a feeling you were not going to buy this theory.

84. Itai Says:

Scott,
I really do not understand where you get “the state of the world really is a vector of amplitudes in a gigantic Hilbert space” as a given.
You also had a weak point against pilot-wave theory in your book regarding finite-dimensional Hilbert space:
you take for granted that matrix QM should hold there, turn |0> to |+> with a unitary transformation, and say it “became random.”

I think matrix QM (Hilbert-space QM) has some serious problems and is not the best way to describe QM.
(Well, Einstein and Schrödinger didn’t like it at all, and it has all those problems with the infinite-dimensional operators for energy, momentum and position, plus problems that come from probability.)

Also, pilot-wave theory is not the only successful hidden-variable theory (it’s still a bit ugly, and not local). Nobel Prize-winning physicist ’t Hooft has a series of papers where he suggests a local deterministic theory based on some kind of cellular automaton (maybe Wolfram was right after all?):
http://arxiv.org/abs/1405.1548
http://arxiv.org/abs/0908.3408
http://physics.stackexchange.com/questions/34217/why-do-people-categorically-dismiss-some-simple-quantum-models

85. Scott Says:

Itai: The whole point of Bell’s theorem was to prove that, if you want a hidden-variable theory, and you want it to reproduce the predictions of QM, then it must be nonlocal.

And Bell’s theorem is a theorem; it’s not a suggestion. I’ve argued with enough people who think there’s some loophole in Bell’s theorem to fill 10,000 lifetimes. The discussions were always infuriating, and were never edifying. There is no loophole.

Gerard ‘t Hooft is another example of someone who thinks there’s a loophole in Bell’s theorem. In his case, the “loophole” is absolutely gobsmacking—basically, that the entire universe is in a gigantic, nonlocal conspiracy to make it look like quantum mechanics is true, and to prevent us (and the random-number generators in our computers, etc.) from choosing to do the experiments that would show that it’s not true. It’s obvious that this is a “cure” a quadrillion times worse than the disease—if anyone other than ‘t Hooft had proposed it, it would’ve been immediately laughed off the stage, which seems to me like the only sane reaction.

Finally, my argument about Bohmian mechanics (i.e., that its determinism is just an artifact of working in an infinite-dimensional Hilbert space) assumes nothing more than what Bohm himself reasonably assumed—namely, that whatever else the theory does, it had better reproduce the experimental predictions of QM, and certainly its predictions about simple 2-level systems. Those are arguably the most precisely-confirmed predictions in all of science.

86. rrtucci Says:

Scott, I hope you write a review of Interstellar. I even have the perfect Susskindesque title for your blog post: “Quantum Data is not Enough”. Not bad, eeh?

87. Itai Says:

Scott,
I read that pilot-wave theory has no relation whatsoever to Hilbert-space mathematics, so any argument from that side is void; it is built only on wave mechanics with “modifications.”

I don’t really believe you can actually prove no-go theorems in physics, because you can never validate that your mathematical assumptions are true (in mathematics the situation is different, since there is no one truth); there are always some loopholes.
What Gerard ’t Hooft is suggesting is a LOCAL theory (see the links); he actually thinks that non-local theories like pilot waves are ugly.
I read that ’t Hooft said that Bell himself told him that his arguments are not valid if we take the “free will” ability away from Alice and Bob in EPR.
So superdeterminism is by far the most obvious loophole there, and I guess everyone should agree on that.

I think that free will, or freedom of measurement, is an ill-defined thing, and should not have a place in a physical theory (maybe in philosophy).

Here is what ’t Hooft wrote about Bell and free will:
http://arxiv.org/abs/quant-ph/0701097

The reactions to ’t Hooft’s work are almost what you described, but you must admit that he knows physics better than most if not all of the critics, who just memorize what they were taught and try not to rethink anything too basic in the theory (I guess 99.99% of the critics did not really read his work).

88. Scott Says:

Itai #87: Then not everything you read is correct. The “modifications” that you make to classical mechanics to get Bohmian mechanics involve importing the entire Hilbert space structure of ordinary QM, in the form of Bohm’s “guiding potential.” Don’t take my word for it; please look it up. You have to do this, because if you didn’t, you would make wrong predictions for actual physical experiments (as soon as entanglement is involved)—a point that, incredibly, you keep refusing to engage with. If your theory makes the wrong predictions for 2 entangled particles, then it doesn’t matter whether it’s local or nonlocal, beautiful or ugly, simple or complicated. The theory is out. It’s dead. Stick a fork in it. End of discussion.

And what if your theory very obviously “wants” to say that the Bell inequality can’t be violated—and it can only deal with the inconvenient experimental fact that the Bell inequality is violated (and more generally, that every experiment ever done gives results 100% consistent with quantum mechanics), by an ad-hoc rationalization involving a cosmic conspiracy set in motion since the beginning of the universe, which for some unexplained reason only lets you do exactly the things that quantum mechanics said all along that you can do, and no superluminal communication or anything else other than that? In that case, by any sane standard, your theory is late for a date with the garbage heap—and it makes no difference whether you’re some random crackpot on the Internet or Gerard ‘t Hooft. Please don’t argue from authority here.

89. Vitruvius Says:

Any time I hear a physicist asserting certitude I become more skeptical of the physicist’s other assertions, for as Oliver Wendell Holmes Jr. noted, “Certitude is not the test of certainty. We have been cocksure of many things that were not so”. Using bold for emphasis instead of italics or the language itself doesn’t help either; that’s just another kind of shouting, like all upper-case. And that includes you, Scott, as much as anyone else. Any scientist, really, but especially physicists, given that all the rest is just stamp collecting. Needless to say, my having recently finished Smolin’s The Trouble with Physics (2006, 978-0-618-55105-7), especially considering his views on not dismissing ‘t Hooft and the other so-called crackpots (in the eyes of the bureaucracy) out of hand, has done nothing to disabuse me of my skepticism 😉

90. Scott Says:

Vitruvius: There are countless things in physics that I’m extremely uncertain about. Indeed, compared to my friends in the high-energy and quantum gravity communities, I’d say I’m much more uncertain; I question many things that they regard as established. But then there are things that I, too, am “certain” about—not in the philosophical sense, but in the sense of everyday life.

For example, I’m “certain” that physicists won’t announce tomorrow that the earth is actually flat—or that it’s a hollow sphere, and we’re all living on the inside. And I’m equally “certain” they won’t announce that the Bell inequality can’t be violated after all, and indeed that quantum mechanics was just a huge wrong turn; the entire apparatus of density matrices and amplitudes can be thrown away in favor of some 19th-century-style classical model.

Yes, the path of science is winding and uncertain, but the probability that the path would perfectly reverse itself (e.g., by going from a round earth back to a flat one, or from evolution back to creationism, or from quantum physics back to classical) is so infinitesimal that it can be ignored.

Incidentally, much of what I was shouting at Itai about wasn’t even points of physics, but points of math. For example, that the guiding potential in Bohmian mechanics is isomorphic to the entire wavefunction in ordinary quantum mechanics, is just a fact about the definition of Bohmian mechanics. It makes no more sense to argue about it than to argue about whether circles are round.

91. John Sidles Says:

Scott perceives error (#6 of Speaking Truth to Parallelism)  “I got jolted out of Neal Stephenson’s Cryptonomicon when I encountered a passage about ‘factoring huge prime numbers,’ and couldn’t continue reading.”

This passage reminds us of Dirac’s celebrated critique of Dostoevsky’s Crime and Punishment: ‘It is nice, but in one of the chapters the author made a mistake. He describes the sun as rising twice on the same day.’.

Update I  Neal Stephenson and Cory Doctorow’s book tour for Hieroglyph: Stories and Visions for a Better Future is drawing standing-room-only audiences here in Seattle and around the world. The book is associated with Arizona State’s well-known Hieroglyph project.

Update II  Prof. Kathleen Ann Goonan (of Georgia Tech) contributed a story to Hieroglyph, “Girl in wave: wave in girl,” that — as I read her story — amounts to “QIST done right.”

Update III  The mathematical infelicities of the first editions of Cryptonomicon were corrected in later versions. Good on yah for scrupulous care in getting the math right, Neal Stephenson!

These works are specially commended to student members of the student-led MIT STEAM initiative

About Us  MIT STEAM is a student-led effort to ignite communications between disparate fields in academia, business, and thought. Our focus is broad but our starting point is uniting the Arts with STEM (Science, Technology, Engineering, Mathematics).

Anyone is welcome, regardless of your major or background. We think that a diversity of backgrounds is essential to our success.

Summary  Ample reasons exist to give Neal Stephenson/Hieroglyph/MIT STEAM another look, Scott and Shtetl Optimized readers!

—————-

Scott fulminates against Lockheed (#71)  “The Lockheed thing is widely recognized as an absurdity, with the fingerprints of pointy-haired bosses all over it”

Just to clarify, Scott is (presumably) not fulminating against Lockheed’s multibillion-dollar investment in optical processors for synthetic aperture radar (1951-1980?). Because that investment was outstandingly successful.

And having succeeded once, why shouldn’t Lockheed try again to create transformative hugely-profitable technologies based upon non-digital computing technologies?

Update IV  History-minded STEAM students are aware of at least five plausible respects in which the most advanced computing hardware of the 20th century was analog:

#1  Hannibal Ford and William Newell’s ship-borne steering and fire-control computers of the 1920s-1950s (as celebrated in Norbert Wiener’s novel The Tempter).

#2  John von Neumann’s parametron computing technology (per his posthumous patent US2815488); see also the wartime computing engines of Gabriel Kron.

#3  The post-WWII optical computing engines used for synthetic aperture radar (capabilities still classified).

#4  The Adi Shamir/Weizmann Institute TWINKLE and TWIRL factoring engines (computational capabilities unknown).

#5 Geordie Rose/D-Wave’s quantum-coherent computing engine (capabilities and technologies held privately).

Update V  The history of computing technology provides ample reason to regard Aaronson/Arkhipov boson-sampling engines as the sixth — and least-secret! — in this historical series of analog-bettering-classical computing engines.

That is why — from a history-of-computing point-of-view — Scott Aaronson/Boson-sampling and Geordie Rose/D-Wave are natural allies in the quest to disprove the Extended Church-Turing Thesis.

—————-

Scott fulminates against Gerard ’t Hooft’s ideas (#71)  “If anyone other than ‘t Hooft had proposed it [post-Hilbert quantum dynamics], it would’ve been immediately laughed off the stage, which seems to me like the only sane reaction.”

Gerard ‘t Hooft’s web page “How to become a good theoretical physicist” is commended to Shtetl Optimized student-readers. In the event that following ‘t Hooft’s rigorous program of study leads to post-Hilbert dynamical models that some folks call “insane”, don’t blame me!

92. Simple Minded Says:

No matter how rigorous the program of study, developing an insane post-Hilbert quantum dynamical model shouldn’t be that difficult. 😉

93. Itai Says:

I do not understand what you mean by post-Hilbert, and why is it considered insane?
By the wiki here, unless you think it is wrong (then go fix it and give references), pilot wave is post-Hilbert — the axiom about Hilbert space is not needed.

http://en.wikipedia.org/wiki/De_Broglie%E2%80%93Bohm_theory#Operators_as_observables

94. Scott Says:

Itai: I didn’t see any place in that Wikipedia article where it says that Bohmian mechanics doesn’t require a guiding potential that is, from a mathematical standpoint, a quantum state vector evolving unitarily in a Hilbert space, just like the state vector of ordinary quantum mechanics. It would indeed be strange if the article said that, since anyone who knows Bohmian mechanics knows that it’s not true.

95. John Sidles Says:

Itai remarks “I do not understand what do you mean by ‘post-Hilbert'”

As Dirac said in response to a similar remark: “That is not a question.”

Fortunately it points directly to a very good question “What ought we choose to mean by ‘post-Hilbert dynamics’? How can we have-fun/do-good-science/discover-good-math/create-great-enterprises/evolve-wonderful-narratives, starting with this question?”

The concluding section of Gerard ‘t Hooft’s recent preprint “The Cellular Automaton Interpretation of Quantum Mechanics. A View on the Quantum Nature of our Universe, Compulsory or Impossible?” (arXiv:1405.1548) provides a starting-point.

Numbers (1)-(3) were added by me:

Conclusions (p. 194) It may seem odd that our theory, unlike most other approaches, (1) does not contain any strange kinds of stochastic differential equation, no “quantum logic”, not an infinity of other universes, no pilot wave, (2) just completely ordinary equations of motion that we have hardly been able to specify, as they could be almost anything. Our most essential point is that (3) we should not be deterred by ‘no go theorems’ if these contain small print and emotional ingredients in their arguments.

As a preliminary remark, obviously it is neither necessary, nor feasible, nor even desirable that everyone think alike in regard to ‘t Hooft’s three points … indeed, it is most helpful that ‘t Hooft has stated them in broad terms.

Here are three concrete (hence non-unique) presentations of ‘t Hooft’s post-Hilbert principles:

(1) restrict the state-space  The state-space of post-Hilbert dynamics can be a restriction of Hilbert-space to algebraic varieties … on the grounds that pretty much all quantum chemistry/condensed matter/quantum transport simulations already impose this restriction.

(2) restrict the Hamiltonians  The dynamical flow of post-Hilbert dynamics can be a restriction of Hamiltonian symbol-functions to those provided by gauge theory … on the grounds, again, that pretty much all quantum chemistry/condensed matter/quantum transport simulations already impose this restriction.

(3) disregard no-go theorems and emotional appeals  Consonant with restrictions (1) and (2), unravel the post-Hilbert trajectories of the broadest possible class of computing and metrology devices — including but not limited to the six device-classes of #91 — without regard for no-go postulates.

For example, the postulated hardness of scattershot boson-sampling simulation should not deter us from simulating this class of device.

In regard to ’t Hooft’s recommendation that we be wary of “emotional ingredients”, the concluding statement of the slides that Scott provides for his (wonderfully enjoyable) talk “When exactly do quantum computers provide a speedup?” provides a concrete example:

An emotional quantum ingredient  The single most important application of QC: Disproving the people who said QC was impossible!

Needless to say, quantum discourse would be a whole lot less fun without the passionate emotional appeal of Scott’s lecture. But a little goes a long way, and hence emotional ingredients need to be balanced against dispassionate ingredients. One such ingredient might be:

A dispassionate quantum ingredient  The single most important application of QC: Achieving a thorough understanding what already is technologically feasible (and infeasible) by quantum dynamics, and by this path conceiving new possibilities (and impossibilities) in 21st century enterprise.

As a concluding unifying tribute and celebration of the spirit of Shtetl Optimized, and the spirit of Gerard ’t Hooft too, I’m going to close this comment with something that I seldom undertake: a concrete falsifiable prediction.

Here it is:

The Permanent Entropy Postulate  No universe governed by long-range gauge theories can experimentally sample a permanent distribution at lower entropy-cost than indistinguishably simulating that distribution by a classical computation.
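For scale, the classical cost that the postulate compares against is permanent computation; Ryser’s formula is the standard exact method (a minimal sketch assuming numpy; boson-sampling hardness concerns sampling from a permanent-weighted distribution, not this single-permanent evaluation):

```python
from itertools import combinations
import numpy as np

def permanent(A):
    """Exact permanent via Ryser's formula: O(2^n * n^2) time for an n x n matrix."""
    n = len(A)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            rowsums = A[:, list(cols)].sum(axis=1)   # row sums over the column subset
            total += (-1) ** r * np.prod(rowsums)
    return (-1) ** n * total

print(permanent(np.ones((3, 3))))  # 6.0, i.e. 3! for the all-ones matrix
```

The 2^n factor is what makes classical simulation of permanent-sampling expensive, and hence what gives the postulate its bite.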

By far the most significant aspect of this postulate (as it seems to me) is that we can all be confident of learning a lot about this postulate in coming years.

We all of us owe appreciation and thanks to Scott Aaronson and Alex Arkhipov (and Gerard ’t Hooft too) for this wonderful learning opportunity, and that is the main point that this comment attempts to convey.

——-

PS  Can we please stop talking about “pilot waves”? Neither Scott nor ’t Hooft nor I nor anyone here on Shtetl Optimized has a high regard for them!

96. James Cross Says:

Any chance you might comment on Nick Bostrom’s Superintelligence book?

97. Itai Says:

Neither I nor the Wikipedia article ever said that Bohmian mechanics doesn’t need a guiding potential.
The matrix and wave formulations of QM were proved to be equivalent, but I’m not sure who gave a “correct proof”: Schrödinger and Pauli did not complete one, and maybe von Neumann ultimately proved the equivalence — I’ve never seen the full proof. I got my references here: http://www.lajpe.org/may08/09_Carlos_Madrid.pdf
(I hope he made no mistake in that proof too, as with his no-go theorem about hidden variables.)

But I have never seen an equivalent proof for Bohm’s wave theory (he has no distinction between observables and states, and the only “operator” is the Hamiltonian).

So, unless such a proof exists, you cannot “attack” pilot-wave theory with arguments from matrix mechanics, such as unitary evolution on Hilbert space and so on.

It’s not that I like Bohmian mechanics so much, but I want to distinguish what is real criticism and what is not.

98. Raoul Ohio Says:

In fourth place on the list of widely accepted physics facts that RO does not think are obviously true is the existence of the Higgs boson. There seems to be an explosion of news today about others who don’t think the case is closed quite yet.

You might want to put a hold on that Nobel Prize for right now.

99. Ben Standeven Says:

@Job #76:

Ah, OK. Your plan won’t work, because the classical machine is only allowed to make polynomially many queries to the computation graph of the oracle; but the oracle’s computation could be arbitrarily long.

@Itai #97:
The proof you’re looking for is just the one you quoted. Since the matrix and wave formulations are equivalent, it does not matter whether the Bohm particle is guided by a wave or by a matrix; its trajectories are the same either way. Even if the proof is wrong, this would simply mean that there are two versions of Bohmian mechanics: one with a wave and one with a matrix.

@Sidles: #96?:
obviously it is neither necessary, nor feasible, nor even desirable that everyone think alike in regard to pilot waves…

I should have noticed this before; but using subvarieties of a Hilbert space means using the Nullstellensatz to calculate observables; an EXPSPACE-complete problem in general. So you aren’t really getting any speedup this way.

100. John Sidles Says:

Itai remarks “It’s not that I like so much Bohmian mechanics , but I want to distinguish what is real criticism and what is not.”

It’s not clear what traits distinguish “real” criticism, e.g. from “complex” criticism … but five weighty considerations are:

(1) No new good algorithms have arisen from Bohmian dynamics.

(2) No broadly useful simulations have been based upon Bohmian dynamics.

(3) No new scientific instruments have been designed with the aid of insights from Bohmian dynamics.

(4) No substantial scientific discoveries have been stimulated by insights from Bohmian dynamics.

(5) No prosperous mathematical disciplines have arisen as abstractions of Bohmian dynamics.

These reasons help us understand why algorithm/application-oriented quantum dynamicists like Linus Pauling, Rudolph Marcus, John Pople and Walter Kohn have won Nobel Prizes … and David Bohm didn’t.

Looking ahead, at least three of the four 2014 Fields Medalists are doing work that broadly relates to post-Hilbert dynamics in the ’t Hooftian sense (of #91 and #95):

• Maryam Mirzakhani: hyperbolic geometry ⇔ ergodic dynamical flows.

• Martin Hairer: stochastic trajectories ⇔ Lindblad-Carmichael unravellings on varietal state-spaces.

• Artur Avila: nonlinear stochastic PDE ⇔ quantum metrology triangles and quantum transport dynamics.

The overlap of cutting-edge technology with cutting-edge mathematics is responsible for the burgeoning vigor of ’t Hooftian post-Hilbert dynamics.

In contrast, it’s not evident to most folks (including Scott, ’t Hooft, and me too) that Bohmian dynamics is providing comparably substantial inspiration to young mathematicians.

Question  Why should young researchers focus on Bohmian dynamics, when so many other transformational 21st century research and enterprise opportunities are presenting themselves, across the entire STEAM spectrum? The world wonders!

Summary  The prospects of post-Hilbert dynamics are bright … but (seemingly) not Bohmian.

101. Scott Says:

James #96:

Any chance you might comment on Nick Bostrom’s Superintelligence book?

Yes, there’s a chance. 🙂 I read it and found it thought-provoking, but it would take some time to gather the provoked thoughts.

102. Scott Says:

Itai #97: Bohmian mechanics is empirically indistinguishable from ordinary quantum mechanics because Bohm defined it that way. The equation that governs the time-evolution of the guiding potential is the Schrödinger equation; and the equation that governs the time-evolution of the particle positions is designed so that, if it obeys the |ψ|² probability rule at any one time, then it also obeys it at all other times. QED; that’s the complete equivalence proof.
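Spelled out in standard textbook notation for a single spinless particle (nothing here is specific to this thread):

```latex
% Guiding wave: ordinary Schrodinger evolution
i\hbar\,\partial_t \psi(x,t)
  = -\frac{\hbar^2}{2m}\nabla^2\psi(x,t) + V(x)\,\psi(x,t)

% Guidance equation for the particle position Q(t)
\frac{dQ}{dt}
  = \frac{\hbar}{m}\,\operatorname{Im}\!\left.\frac{\nabla\psi}{\psi}\right|_{x=Q(t)}

% Equivariance: the continuity equation implied by the Schrodinger equation
% transports the density |psi|^2 along exactly this velocity field, so a
% |psi|^2-distributed ensemble at one time stays |psi|^2-distributed at all times
\partial_t|\psi|^2 + \nabla\cdot\big(|\psi|^2\,v\big) = 0,
\qquad v = \frac{\hbar}{m}\,\operatorname{Im}\frac{\nabla\psi}{\psi}
```

The first equation is why the guiding wave is mathematically a state vector in Hilbert space; the third is the one-line equivalence argument.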

Please keep in mind that, if this weren’t the case—i.e., if there were an empirical way to distinguish Bohmian mechanics from “standard” QM—then it’s extremely likely that Bohmian mechanics would’ve already been ruled out by experiments, which have confirmed QM (insofar as it talks about a few particles at a time) to staggering precision.

103. John Sidles Says:

Ben Standeven remarks [correctly but narrowly] “Using subvarieties of a Hilbert space means using the Nullstellensatz to calculate observables; an EXPSPACE-complete problem in general. So you aren’t really getting any speedup this way.”

As Captain Picard’s borgified persona Locutus once remarked: A narrow vision, Number One!

Question  Shall we abandon Dantzig’s simplex algorithm, because it is EXPTIME in general? Shall we abandon Kohn-Sham methods because — let’s face it! — our understanding of how these quantum simulation methods work is highly imperfect?

The ’t Hooft Alternative  Or should we embrace these methods, and seek to extend them, per the ’t Hooftian maxim (quoted in #91): “We should not be deterred by ‘no go theorems’” [read: no-speedup theorems].

Summary  Corporations like Lockheed Martin are exhibiting an experience-grounded and mathematically well-founded ’t Hooftian appreciation that transformational enterprises can be founded — indeed commonly have been founded — upon computational hardware that affords “only” polynomial speedups, and dynamical simulation algorithms whose workings formally are EXPTIME or even mysterious outright.

Conclusion  Yet another wonderful title for a 21st century STEAM-saga would be The ’t Hooft Alternative.

104. James Cross Says:

Scott #101

I thought you would be interested.

Hoping to see your take on it.

105. Ben Standeven Says:

@Sidles, 103:

Algorithms like Dantzig’s or Gröbner-basis methods work fine on “native” linear/polynomial programming problems. But if you take a hard computational problem [or a random problem which is hard on average] and translate it into the appropriate format, they don’t offer any speedup. In fact, it is harder to solve the problems that way, since any simulation introduces some overhead. Likewise, we should expect quantum simulation algorithms to be fast on “natural” quantum systems, but not very useful on systems which are designed to do hard computations by using the rules of quantum mechanics.