## Memrefuting

*(in which I bring this blog back to the “safe, uncontroversial” territory of arguing with people who think they can solve NP-complete problems in polynomial time)*

A few people have asked my opinion about “memcomputing”: a computing paradigm that’s being advertised, by its developers, as a way to solve NP-complete problems in polynomial time. According to the paper *Memcomputing NP-complete problems in polynomial time using polynomial resources and collective states*, memcomputing “is based on the brain-like notion that one can process and store information within the *same* units (memprocessors) by means of their mutual interactions.” The authors are explicit that, in their view, this idea allows the Subset Sum problem to be solved with polynomial resources, by exploring all 2^{n} possible subsets in parallel, and that this refutes the Extended Church-Turing Thesis. They’ve actually built ‘memcomputers’ that solve small instances of Subset Sum, and they hope to scale them up, though they mention hardware limitations that have made doing so difficult—more about that later.

A bunch of people (on Hacker News, Reddit, and elsewhere) tried to explain the problems with the Subset Sum claim when the above preprint was posted to the arXiv last year. However, an overlapping set of authors has now simply repeated the claim, unmodified, in a feature article in this month’s *Scientific American*. Unfortunately the SciAm article is behind a paywall, but here’s the relevant passage:

Memcomputing really shows advantages when applied to one of the most difficult types of problems we know of in computer science: calculating all the properties of a large series of integers. This is the kind of challenge a computer faces when trying to decipher complex codes. For instance, give the computer 100 integers and then ask it to find at least one subset that adds up to zero. The computer would have to check all possible subsets and then sum all numbers in each subset. It would plow through each possible combination, one by one, which is an exponentially huge increase in processing time. If checking 10 integers took one second, 100 integers would take 10^{27} seconds—millions of trillions of years … [in contrast,] a memcomputer can calculate all subsets and sums in just one step, in true parallel fashion, because it does not have to shuttle them back and forth to a processor (or several processors) in a series of sequential steps. The single-step approach would take just a single second.

For those tuning in from home: in the Subset Sum problem, we’re given n integers a_{1},…,a_{n}, and we want to know whether there exists a subset of them that sums to a target integer k. (To avoid trivializing the problem, either k should be nonzero or else the subset should be required to be nonempty; the passage quoted above, which asks for a subset summing to zero, makes exactly this mistake.)
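As a concrete baseline, here is the brute-force approach the SciAm passage describes, in a minimal Python sketch (the function name and example numbers are my own, for illustration): it tries all 2^{n} − 1 nonempty subsets, which is exactly the exponential search the memcomputing proposal claims to avoid.

```python
from itertools import combinations

def subset_sum(nums, k):
    """Brute-force Subset Sum: try every nonempty subset.
    Runs in time exponential in len(nums)."""
    for r in range(1, len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == k:
                return combo  # a witness subset
    return None               # no subset sums to k

print(subset_sum([3, -7, 12, 5, -2], 10))  # → (12, -2)
print(subset_sum([1, 2, 4], 8))            # → None
```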

To solve Subset Sum in polynomial time, the basic idea of “memcomputing” is to generate waves at frequencies that encode the sums of all possible subsets of a_{i}’s, and then measure the resulting signal to see if there’s a frequency there that corresponds to k.
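To make the encoding concrete, here is a toy digital simulation of that idea (my own illustration, not the authors’ hardware; it assumes small nonnegative integers with total sum below the sample count T). The product signal ∏_i (1 + e^{2πi a_i t}) expands into one complex exponential per subset, so frequency k appears in its spectrum iff some subset sums to k.

```python
import numpy as np

def has_subset_sum_in_spectrum(nums, k, T=1024):
    """Toy simulation of the frequency-encoding idea.
    prod_i (1 + exp(2*pi*1j*a_i*t)) = sum over subsets S of exp(2*pi*1j*sum(S)*t),
    so the Fourier coefficient at k counts the subsets summing to k."""
    t = np.arange(T) / T
    signal = np.ones(T, dtype=complex)
    for a in nums:
        signal *= 1 + np.exp(2j * np.pi * a * t)
    coeffs = np.fft.fft(signal) / T   # coefficient at k ≈ number of subsets with sum k
    return bool(abs(coeffs[k]) > 0.5)

print(has_subset_sum_in_spectrum([3, 5, 9], 8))   # → True  (3 + 5 = 8)
print(has_subset_sum_in_spectrum([3, 5, 9], 10))  # → False
```

Of course the simulation itself spends exponential resources, since all 2^n terms are present in the sampled signal; the analog version hides that same cost in the precision and dynamic range of the measurement.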

Alas, there’s a clear scalability problem that seems to me to completely kill this proposal, as a practical way of solving NP-complete problems. The problem is that the signal being measured is (in principle!) a sum of waves of exponentially many different frequencies. By measuring this wave and taking a Fourier transform, one will not be able to make out the individual frequencies until one has monitored the signal for an exponential amount of time. There are actually two issues here:

(1) Even if there were just a single frequency, measuring the frequency to exponential precision will take exponential time. This can be easily seen by contemplating even a moderately large n. Thus, suppose n=1000. Then we would need to measure a frequency to a precision of one part in ~2^{1000}. If the lowest frequency were (say) 1Hz, then we would be trying to distinguish frequencies that differ by far less than the Planck scale. But distinguishing frequencies that close would require so much energy that one would exceed the Schwarzschild limit and create a black hole! The alternative is to make the lowest frequency slower than the lifetime of the universe, causing an exponential blowup in the amount of time we need to run the experiment.

(2) Because there are exponentially many frequencies, the amplitude of each frequency will get attenuated by an exponential amount. Again, suppose that n=1000, so that we’re talking about attenuation by a ~2^{-1000} factor. Then given any amount of input radiation that could be gathered in the physical universe, the expected amount of amplitude on each frequency would correspond to a microscopically small fraction of 1 photon — so again, it would take exponential time for us to notice any radiation at all on the frequency that interests us (unless we used an insensitive test that was liable to confuse that frequency with many other nearby frequencies).
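A quick numerical aside (my own back-of-the-envelope, with illustrative numbers): for n = 1000, the factors in (1) and (2) sit at the very edge of what double-precision floating point can even represent.

```python
from math import log10

n = 1000
# The attenuation factor 2^-1000 from point (2):
print(2.0 ** -n)        # ≈ 9.33e-302, near the floor of double precision

# Point (1): resolving one part in 2^1000 at a ~1 Hz base frequency means
# an observation time on the order of 2^1000 seconds.
digits = n * log10(2)
print(round(digits))    # → 301, i.e., a ~302-digit number of seconds
# For comparison, the age of the universe is about 4.3e17 seconds (18 digits).
```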

What do the authors have to say about these issues? Here are the key passages from the above-mentioned paper:

all frequencies involved in the collective state (1) are dampened by the factor 2^{-n}. In the case of the ideal machine, i.e., a noiseless machine, this would not represent an issue because no information is lost. On the contrary, when noise is accounted for, the exponential factor represents the hardest limitation of the experimentally fabricated machine, which we reiterate is a technological limit for this particular realization of a memcomputing machine but not for all of them …

In conclusion we have demonstrated experimentally a deterministic memcomputing machine that is able to solve an NP-complete problem in polynomial time (actually in one step) using only polynomial resources. The actual machine we built clearly suffers from technological limitations due to unavoidable noise that impair [sic] the scalability. This issue can, however, be overcome in other UMMs [universal memcomputing machines] using other ways to encode such information.

The trouble is that no other way to encode such information is ever mentioned. And that’s not an accident: as explained above, when n becomes even moderately large, this is no longer a hardware issue; it’s a fundamental physics issue.

It’s important to realize that the idea of solving NP-complete problems in polynomial time using an analog device is far from new: computer scientists discussed such ideas extensively in the 1960s and 1970s. Indeed, the whole point of my NP-complete Problems and Physical Reality paper was to survey the history of such attempts, and (hopefully!) to serve as a prophylactic against people making more such attempts without understanding the history. For computer scientists ultimately came to realize that all proposals along these lines simply “smuggle the exponentiality” somewhere that isn’t being explicitly considered, exactly like all proposals for perpetual-motion machines smuggle the entropy increase somewhere that isn’t being explicitly considered. The problem isn’t a practical one; it’s one of principle. And I find it unfortunate that the recent memcomputing papers show no awareness of this story.

(Incidentally, quantum computing is interesting precisely because, out of all “post-Extended-Church-Turing” computing proposals, it’s the only one for which we *can’t* articulate a clear physical reason why it won’t scale, analogous to the reasons given above for memcomputing. With quantum computing the tables are turned, with the skeptics forced to handwave about present-day practicalities, while the proponents wield the sharp steel of accepted physical law. But as readers of this blog well know, quantum computing doesn’t seem to promise the polynomial-time solution of NP-complete problems, only of more specialized problems.)

Comment #1 February 11th, 2015 at 6:08 pm

Speaking of analog-computing-esque machines that claim to solve NP-complete problems in non-exponential (indeed sub-linear) time, I would be interested to hear a takedown of the degenerate OPO networks proposed by Yamamoto at Stanford (e.g. http://arxiv.org/abs/1501.07030)

Comment #2 February 11th, 2015 at 6:14 pm

(side question)

There are several tricks to weaken a “subset sum looking” problem. Some are stupid enough (select instances for which k is one of the integers, or for which k is larger than the largest integer times the number of integers, etc.), some a bit less so (all integers are divisible by 1181 but k is not; k is the sum of all integers that share exactly five LCDs with the other integers).

Do you have a rule of thumb for when a case is trivial enough that one needs to define the problem in a way that avoids trivialization?

Comment #3 February 11th, 2015 at 6:19 pm

Tangential note: Subset Sum seems a particularly poor choice of “test problem” when one starts to get fuzzy on computation requirements and resorts to “we built something that works” because there are (many) heuristics that do quite well on Subset Sum problems.

But perhaps that’s really just the least of problems here.

Comment #4 February 11th, 2015 at 7:30 pm

Actually, subset sum could be a fine choice as a test problem, because it is believed to be hard on average when the integers are chosen uniformly at random from a large enough interval. This is essentially the core of lattice-based cryptography. But you need n bigger than 100.

The 2^100 ≈ 10^30 bound from the SciAm article for solving subset sum with n=100 is a gross overestimate: a classic method of Schroeppel and Shamir from 1981 already shows how to solve subset sum in dimension n in time 2^(n/2). This lowers the running time of the attack from 10^30 to 10^15.

Also, the estimate that one can check 10 integers in one second grossly underestimates modern computer power: you can probably check around 10^9 integers per second. Unless I got the numbers wrong, this brings the time required to solve subset sum with n=100 integers down to about 10^6 seconds, i.e., about ten days.

For random instances there are even faster methods based on lattice reduction. But the complexity (even for the best heuristics) on uniformly random instances is always 2^{cn} for some constant c. For n in the range 200-400 (with, say, input numbers around n bits long), this is already a good challenge problem.
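The 2^(n/2) idea mentioned above can be sketched in a few lines (this is the simpler Horowitz–Sahni split; the full Schroeppel–Shamir algorithm achieves the same running time with only 2^(n/4) memory; the function name and numbers are mine, for illustration):

```python
from itertools import combinations

def subset_sum_mitm(nums, k):
    """Meet-in-the-middle Subset Sum: enumerate the 2^(n/2) subset sums of each
    half, then look for a cross-half pair summing to k. Time O(2^(n/2)),
    versus O(2^n) for brute force. Note: the empty subset is allowed here."""
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]

    def all_sums(part):
        return {sum(combo)
                for r in range(len(part) + 1)
                for combo in combinations(part, r)}

    right_sums = all_sums(right)
    return any(k - s in right_sums for s in all_sums(left))

print(subset_sum_mitm([8, -3, 7, 21, -11, 4], 17))  # → True  (7 + 21 - 11 = 17)
print(subset_sum_mitm([1, 2, 4], 20))               # → False
```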

Comment #5 February 11th, 2015 at 7:41 pm

Long ago I was an avid reader of Scientific American. I find it sad that they’ve fallen so low.

Comment #6 February 11th, 2015 at 8:07 pm

This reminds me heavily of the soap-bubble Steiner tree solver.

Comment #7 February 11th, 2015 at 8:55 pm

Somewhat of a side issue, but you wouldn’t actually need an FFT to answer the problem, right? A sufficiently good (exponentially good) analog bandpass filter would probably work.

Then you look at the output of your k-filter, and a wave is either present or not, answering your decision problem.

Comment #8 February 11th, 2015 at 9:44 pm

Alexander #6:

This reminds me heavily of the soap bubble steiner tree solver.

OK, but the point with that one was to try to show people why it *wouldn’t* scale! 🙂

Comment #9 February 11th, 2015 at 10:20 pm

Incidentally, quantum computing is interesting precisely because, out of all “post-Extended-Church-Turing” computing proposals, it’s the only one for which we can’t articulate a clear physical reason why it won’t scale, analogous to the reasons given above for memcomputing.

The recurring validation of the Church-Turing thesis is hard to ignore.

The Church-Turing barrier comes through when attempting to scale an implementation, which is the main obstacle to Quantum Computing.

In fact, what this Memcomputing case study really underlines is that it’s not at all sufficient to have a functional prototype, or consistent experimental results, it has to scale.

We can’t measure frequencies with exponentially high precision, but exponentially small amplitudes will cancel out to an arbitrarily high degree? There’s no ceiling? I really want to see how that turns out.

But there is definitely one thing a Quantum Computer has going for it: it *can’t* solve NP-Complete problems.

Comment #10 February 11th, 2015 at 10:52 pm

Job #9:

In fact, what this Memcomputing case study really underlines is that it’s not at all sufficient to have a functional prototype, or consistent experimental results, it has to scale.

I agree. For me, the core case for quantum computing has very little to do with prototypes, except insofar as those prototypes confirm the principles of quantum mechanics. Rather, the case rests on what looks like the staggering mathematical difficulty of modifying (or adding to, or reinterpreting) quantum mechanics in any sensible way that would make quantum computing *not* scale. But even more than that, it rests on the observation that if there *is* such a modification or addition or reinterpretation, then that by itself would be a huge new discovery about physics, justifying hundreds of times over the effort to find out—in contrast to memcomputing, where well-established physics already suffices to explain why it won’t scale.

We can’t measure frequencies with exponentially high precision, but exponentially small amplitudes will cancel out to an arbitrarily high degree? There’s no ceiling?

Yes, and there’s a profound reason for that: because amplitudes are not like frequencies, or any other measurable physical quantity. Instead they’re like probabilities. And just like probability theory doesn’t break down when the probabilities get too small—the very idea that it *would* seems like a joke—so it’s equally hard to imagine how amplitude theory would break down in that limit. For more (if you’re interested), see *Quantum Computing Since Democritus*.

But there is definitely one thing a Quantum Computer has going for it: it can’t solve NP-Complete problems.

That’s also correct.

Comment #11 February 11th, 2015 at 11:58 pm

And just like probability theory doesn’t break down when the probabilities get too small—the very idea that it would seems like a joke—so it’s equally hard to imagine how amplitude theory would break down in that limit.

And that’s how a QC “smuggles the exponentiality” somewhere that isn’t being explicitly considered.

It’s difficult to argue that point either way without understanding what drives interference or an intuitive sense of what amplitudes are – which i don’t have, they’re weird.

I’m only defaulting to the Church-Turing thesis.

Comment #12 February 12th, 2015 at 4:04 am

Job #11:

And that’s how a QC “smuggles the exponentiality” somewhere that isn’t being explicitly considered.

No, I’d say that a QC puts the exponentiality somewhere that *is* explicitly considered: Hilbert space! Indeed, explicitly considering the exponentiality of Hilbert space is more-or-less the entire point of quantum computing theory.

It’s difficult to argue that point either way without understanding what drives interference or an intuitive sense of what amplitudes are – which i don’t have, they’re weird.

I agree; it’s hard to have this discussion if you don’t understand amplitudes, how they’re like probabilities in almost all ways (except the ways that they aren’t, like interference and conserving the 2-norm!), and how they’re *unlike* almost every other kind of number known to physics (masses, velocities, frequencies, voltages). That’s why I wrote QCSD and most of my other popular and semipopular articles and blog posts, to try to explain this.

The single most important point to understand, I’d say, is how the combination of

(a) amplitudes evolving only via linear, unitary, 2-norm-preserving transformations, and

(b) amplitudes entering into observed phenomena only as the square roots of probabilities,

has the effect of “shielding” us from the exponentiality of Hilbert space in many cases, just as we’re “shielded” (more so) from the exponentiality inherent in classical probability theory. The BBBV lower bound (which proves that Grover’s algorithm is the best possible unordered search algorithm even for a quantum computer), and Holevo’s Theorem (which proves that n qubits sent from Alice to Bob can’t be used to reliably communicate more than n classical bits, or 2n classical bits if Alice and Bob share entanglement), provide two excellent illustrations of this shielding.

Crucially, I don’t know a single example *other* than quantum mechanics (or a few deformations of quantum mechanics)—not even a made-up, hypothetical example—of a theory that breaks the usual Extended Church-Turing intuition, but then also provides built-in shielding that sharply limits the consequences of the break. All the other examples I know, like “memcomputing,” other analog computing models, closed timelike curves, postselection, etc., break the ECT in much more “boring, uncontrolled” ways that seem to give you vastly too much.

Beyond that, it helps to be on board with the quasi-positivist idea that the sets of things that can and can’t be done, by any experiment, are not just random collections of engineering details; rather, they’re the scaffolding on which we need to construct our understanding of what’s real.

Comment #13 February 12th, 2015 at 9:12 am

Scott #12: Would it make sense to turn quantum vs classical the other way around and say polynomial quantum resources is “what is real” and in some specific circumstances the limitations of classical algorithms are bad enough to require exponential resources?

Comment #14 February 12th, 2015 at 9:19 am

JollyJoker #13: If your criterion for “real” is, “what do the currently-understood laws of physics tell us should be feasible?,” then sure, in some sense that’s what you *have* to say.

Comment #15 February 12th, 2015 at 9:51 am

I kinda meant that the idea that the quantum world really has exponential amounts of “stuff” and we magically can’t access it except in very special cases may be misleading.

Comment #16 February 12th, 2015 at 9:55 am

Scott #8:

Really? So you felt that the soap bubble solver was more likely to scale than this one?

To me it seems that scalability is the general crux with these kinds of approaches. Thus, anyone who proposes such approaches will have to deal with this question at a very early stage.

Comment #17 February 12th, 2015 at 11:17 am

JollyJoker #15: Yes, you’re right, it’s perfectly legit to take quantum as the measure of all things and then talk about “classical slowdowns.” What breaks the symmetry, mainly, is just pedagogy: I more often find myself talking to people who understand classical computing but are confused about QC than vice versa. (Though the latter does exist, particularly within high-energy physics. 🙂 )

Comment #18 February 12th, 2015 at 11:21 am

Alexander #16: No, I didn’t feel the soap bubble solver was “likely” to scale. The point of the demonstration was just to make vivid the well-known fact that it can get stuck in local minima.

Even though soap bubbles and memcomputing are both non-scalable approaches to NP-complete problems, the *reasons* why they don’t scale are very different. With soap bubbles, the reason is local minima, and more generally, the exponential time needed to reach a global minimum. With memcomputing, the reasons are the precision needed to resolve exponentially many frequencies, and the damping involved in splitting into that many frequencies.

Comment #19 February 12th, 2015 at 11:25 am

Scott #12, if memcomputing is a “theory” then I have a theory for you that has very controlled consequences. It breaks *all* cryptography, but doesn’t compute anything useful. It doesn’t even let you factor. That theory is ESP.

Comment #20 February 12th, 2015 at 11:31 am

Douglas #19: Don’t I provide enough *real* stuff on this blog to disagree about, without needing to disagree even in cases where we don’t disagree about anything? 🙂

Comment #21 February 12th, 2015 at 12:38 pm

Jay #2: The problem is not that some instances of subset sum are easy — the problem is that in their problem specification, ALL instances are easy: one can always find a subset which sums to 0, namely the empty set! In general (this isn’t a proof!) all NP-complete problems have some instances which are easy.

In fact I probably can prove that (ish): for any NP-complete problem, I can take some SAT instances I know are trivially false, map them to your NP-complete problem, and then say “Oh, false!”

Any adjustment which makes sure there are some instances that are hard is sufficient.

Comment #22 February 12th, 2015 at 12:59 pm

I thought Scientific American was better than to publish things like this. Did they not consult with any other scientists?

Comment #23 February 12th, 2015 at 12:59 pm

It’s a neat trick, but it seems to me like it’s essentially the same trick used to define hypercomputers via arithmetic on the reals [1]. Of course, even assuming reals actually exist (which they probably don’t), measuring the output to arbitrary precision is effectively impossible.

[1] http://en.wikipedia.org/wiki/Real_computation

Comment #24 February 12th, 2015 at 1:25 pm

“The sets of things that can and can’t be done . . . are not just random collections of engineering details; rather, they’re the scaffolding on which we need to construct our understanding of what’s real.”

This sounds about right. 😉


Comment #26 February 12th, 2015 at 1:36 pm

D. Eppstein #5 and Jfr #22: I guess I wouldn’t be *that* hard on SciAm. For one thing, the subset-sum stuff is just 2 paragraphs in a longer article, so it could’ve slipped past. Most of the article is about other claimed advantages of ‘memcomputing’ (energy efficiency, etc.). I have no opinion about any of those other claims. Of course, the egregious wrongness about subset sum, and the unwillingness to retract the claim after its wrongness was explained, don’t exactly fill me with confidence! On the other hand, if I dismissed everyone who’s ever said something forehead-bangingly wrong about theoretical computer science, I’d have to write off a *lot* of scientists in other fields whose expertise in those other fields I respect.

Comment #27 February 12th, 2015 at 2:53 pm

Your book mentions the Knapsack Cryptosystem as relying on the hardness of the subset sum problem, but also that it’s been broken, though not in a way that makes the problem any easier to solve.

For computer scientists ultimately came to realize that all proposals along these lines simply “smuggle the exponentiality” somewhere that isn’t being explicitly considered,

Slightly OT, but there’s a funny example of “smuggling the linearithmicality”: SleepSort. For each element in the list, tell the OS to wait as long as the value, then output it.
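For the curious, SleepSort can be sketched in a few lines (my own toy version; it assumes small nonnegative numbers, and the scale factor and the reliability of the ordering depend on the OS scheduler, so this is illustrative only):

```python
import threading
import time

def sleep_sort(nums, scale=0.05):
    """'Sort' by spawning one thread per value; each sleeps proportionally to
    its value before appending. The work is smuggled into the OS scheduler:
    the wall-clock cost is O(max(nums)), not O(n log n)."""
    result = []
    lock = threading.Lock()

    def worker(x):
        time.sleep(x * scale)
        with lock:
            result.append(x)

    threads = [threading.Thread(target=worker, args=(x,)) for x in nums]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result

print(sleep_sort([3, 1, 4, 1, 5]))  # usually [1, 1, 3, 4, 5]
```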

Comment #28 February 12th, 2015 at 4:07 pm

Chris Jefferson,

I just didn’t know the empty set was a subset of all sets, thanks for the information. (PS: it’s easy to google if one wants to read some proofs)

But you’re raising a far more interesting question! Game to give it a try?

Start from 3SAT with some reasonable numbering of the instances, and define 3SATAV such that instance #n is positive iff one can find an even number of positive instances among the 3SAT instances #n to #n*n.

Obviously that’s NP-complete, but how would you reduce from 2SAT to 3SATAV?

Comment #29 February 12th, 2015 at 5:19 pm

I had thought that the NP-stuff was a major part of the article; apologies for my misunderstanding. Still, if someone claims to have made a major breakthrough in an area, you would expect Scientific American to run it by some experts in the field.

Comment #30 February 12th, 2015 at 5:40 pm

Scott #12,

A recurring note when the Born rule is discussed is that it works, but that we don’t understand why it works.

It’s difficult to reconcile that detail with the fact that the dynamics derived from the Born rule will scale arbitrarily – despite the fact that it is consistent with experimental results.

Crucially, I don’t know a single example other than quantum mechanics (or a few deformations of quantum mechanics)—not even a made-up, hypothetical example—of a theory that breaks the usual Extended Church-Turing intuition, but then also provides built-in shielding that sharply limits the consequences of the break.

Yes, and while that shield is compelling evidence of the viability of a QC, it’s not a guarantee that it will scale.

For example, while the possibility of efficiently determining the ith bit of a particle’s position might raise problems, reading a *roman numeral’s worth* of precision is less controversial, and better shielded from exponentiality. But that doesn’t mean that we could, given enough time, get arbitrary precision on a particle’s position.

From where i’m standing, amplitudes are a bit like probabilities and a bit like, well, something i don’t understand. The probability aspect of amplitudes is well shielded from exponentiality, but the other part, if there is any, might not be.

Comment #31 February 12th, 2015 at 6:42 pm

Regarding Scientific American:

I totally loved it when I was a teenager, say in the 1960’s. Now I get frequently annoyed by the slackness and goofiness levels. Have I just become an old fogey, or has SA slipped that far?

Comment #32 February 12th, 2015 at 8:05 pm

Job #30: It’s important to separate what is and isn’t mysterious about the Born rule. The part that some people consider mysterious—the infamous “measurement problem”—is how to reconcile the picture of a quantum state evolving linearly and reversibly via the Schrödinger equation, with discontinuous, probabilistic, and irreversible measurements. E.g., how does Nature “know” when to suspend unitary evolution and apply the measurement rule instead? What counts as a measuring apparatus, anyway? (Many physicists argue that, to whatever extent these are real questions at all, they’re answered by decoherence theory, but others vigorously oppose that view.)

The *non-mysterious* part, at least by my lights, is: once you’ve decided that “measurements” have to happen somehow, and they have to have definite but probabilistic outcomes, why should the rule governing that process (the Born rule) have the specific mathematical form it does? It’s true that people have considered that, too, enough of a question that they’ve given many different derivations of the Born rule, from various sets of starting postulates (e.g., Gleason’s Theorem, Zurek’s envariance, Sebens and Carroll’s work, etc.). But I’d say that these derivations have collectively succeeded! I.e., they’ve shown that, if you want a probability rule that will mesh with unitary evolution in a sane way, then you’re effectively forced to choose the Born rule. This is because, once you’ve said that linear evolution of the quantum state is going to conserve the 2-norm, you’ve *already* singled out the 2-norm as special—so if the probabilities were based on anything *other* than the 2-norm, you wouldn’t naturally even get conservation of probability. And if you tried to put in probability conservation by hand, you’d get absurd consequences, like superluminal communication. Crucially, these absurd consequences would follow even if you tried to “deform” the Born rule by the tiniest amount—giving a compelling explanation for why it has to have *exactly* the form it has, p=|ψ|^{2}. For more, you might enjoy my old paper Is Quantum Mechanics An Island In Theoryspace?.

Comment #33 February 13th, 2015 at 4:23 am

There is one other place where quantum computing “hides the exponentiality”: in the assumption of a continuous real-valued time parameter, or at least in the assumption of a *continuous* unitary evolution.

A discrete-time evolution model of quantum mechanics drastically reduces the “effectiveness” of the exponentiality**, although if we are talking about Planck-scale timesteps (~10^{-43} secs) it may be difficult to detect the impact on real experiments, and Shor’s algorithm for example may still be practical for hundreds of bits before it begins to fail significantly.

**(e.g., the path-integral formulation of QM can be shown to be equivalent to the Schroedinger picture of unitary evolution, but the equivalence requires one to take a limit dt → 0)

Comment #34 February 13th, 2015 at 4:27 am

Scott, do these vigorously opposing views simply correspond to MWI-denial, or do you have different kinds of opposing views in mind as well?

Comment #35 February 13th, 2015 at 4:36 am

Dániel #34: “MWI-denial” seems like a loaded term for anyone who disagrees with MWI, or simply isn’t fully on board with it! Also, there are MWI *proponents* who think there’s something mysterious, or at least still to be understood, about the Born rule and the emergence of classicality. So all I meant, really, was anyone who thinks decoherence theory hasn’t fully solved the measurement problem.

Comment #36 February 13th, 2015 at 10:35 am

Scott #35: Loaded term: it was my English, sorry. I am not fully on board with MWI myself, so I definitely didn’t intend to say it derogatorily. What I really meant to ask for is pointers to people and arguments from the symmetric difference of “does not accept that decoherence theory will lead to a full answer to the measurement problem” and “opposes MWI”.

Comment #37 February 13th, 2015 at 3:29 pm

This is probably a very dumb/obvious question.

When all the a_i’s are powers of 2, and we express everything in base 2, we can do a direct look-up of all the bits.

E.g. for { 1, 4, 16, 32 } and k = 35:

 1 = 000001

 4 = 000100

16 = 010000

32 = 100000

========

35 = 100011 (no solution: the bit for 2 is set, but 2 is not among the inputs)

Is this just so trivial/limited that it can’t be extended?

Or any sort of generalization of this would require an exponential blow up?
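fred’s observation above can be checked directly (my own sketch): when the a_i are distinct powers of 2, binary representations are unique, so a subset summing to k exists iff every set bit of k appears among the a_i.

```python
def power_of_two_subset_sum(nums, k):
    """Subset Sum for distinct powers of 2 reduces to a bit check:
    k is reachable iff each of its set bits corresponds to some a_i."""
    available = 0
    for a in nums:
        available |= a          # union of all available bits
    return (k & ~available) == 0  # every bit of k must be available

print(power_of_two_subset_sum([1, 4, 16, 32], 35))  # → False (the bit for 2 is missing)
print(power_of_two_subset_sum([1, 4, 16, 32], 21))  # → True  (16 + 4 + 1)
```

The trick relies entirely on the uniqueness of binary representations, which is exactly what fails for general integers.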

Comment #38 February 13th, 2015 at 3:35 pm

Dániel #36:

“MWI opponents” who nevertheless think decoherence theory has already answered pretty much everything worth asking about the measurement problem: err, how about Lubos, Tom Banks, probably many others in the high-energy physics community, friend-of-the-blog Greg Kuperberg, anyone who uses slogans like “quantum mechanics in your face” or “quantum mechanics needs no interpretation.” Now, the MWIers might claim that the “QM-in-your-facers” really secretly agree with MWI, and are just unwilling to state what they believe in MWI language. But the in-your-facers would of course strongly dispute that.

MWI believers who nevertheless think something important remains to be understood about the emergence of classicality and the nature of quantum measurement: Sean Carroll (though he might say that his own very-recent work has put him more at ease!); probably various philosophers who work on MWI, like David Wallace and Simon Saunders.

Comment #39 February 13th, 2015 at 3:44 pm

Scott,

“Incidentally, quantum computing is interesting precisely because, out of all “post-Extended-Church-Turing” computing proposals, it’s the only one for which we can’t articulate a clear physical reason why it won’t scale, analogous to the reasons given above for memcomputing.”

Is this because QM relies on a mathematical model where the probability amplitudes are in a “perfect” continuous space?

Could this be an approximation as well? I.e., in practice the amplitude space could turn out to be itself discrete/quantized? (a Planck-scale equivalent)

If regular space was perfectly continuous, one could store an infinite amount of information by just making an arbitrarily precise notch on a wood stick.

Comment #40 February 13th, 2015 at 3:58 pm

Whenever I see a paper that claims P=NP I always have this hope that once the smoke and mirrors are waved away there will be an idea which is useful in some limited way (when the input is small, or when the input is of a certain type). This seems to have never happened. (I doubt it will happen for this subset-sum thing either.)

Why is that? Because the people who claim P=NP are literally always cranks who don’t quite understand the real problem and hence frankly aren’t that good. By contrast, advances that really do help on some subcases (e.g., FPT) or help make problems already in P faster (e.g., randomization) are made by people who know the issues and don’t begin by saying stupid things like “hence P=NP” or even “this may lead to P=NP.”

I’ll turn this into a question: Do you know of any paper which made a claim like P=NP which ended up not showing P=NP but did have some good idea that made some problems solvable faster in some cases?

Comment #41 February 13th, 2015 at 4:09 pm

fred #37: Yes, subset sum is trivial if the numbers are all powers of some fixed integer—much like traveling salesman is trivial if the cities all lie along a line. 😉
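To make the aside in #41 concrete: powers of a fixed integer b ≥ 2 form a super-increasing sequence (each element exceeds the sum of everything before it), and for such instances a greedy scan solves Subset Sum in linear time. A minimal Python sketch (the function name is mine, not from the discussion):

```python
def greedy_subset_sum(items, target):
    """Solve Subset Sum for a super-increasing sequence (e.g. powers of b >= 2).

    Scans from the largest element down: an element must be taken whenever
    it fits, because the remaining smaller elements cannot cover the gap
    on their own. Returns the chosen subset, or None if no subset works.
    """
    chosen = []
    for x in sorted(items, reverse=True):
        if x <= target:
            chosen.append(x)
            target -= x
    return chosen if target == 0 else None

# Powers of 3 are super-increasing: 27 > 9 + 3 + 1.
print(greedy_subset_sum([1, 3, 9, 27], 31))  # -> [27, 3, 1]
print(greedy_subset_sum([1, 3, 9, 27], 14))  # -> None
```

The same greedy step is what makes the “easy” private knapsack in the cryptosystem discussed below decryptable.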

Comment #42 February 13th, 2015 at 4:26 pm

fred #39: Of course it’s possible that Hilbert space is just an approximation to some discrete structure; people have speculated about such things for almost a century. The problem is that, as I said, no one has been able to conceive of any way for that to happen that wouldn’t lead to absurd results (e.g., superluminal communication and all the rest). To see the difficulty, imagine trying to “discretize” probability theory. What would that even mean? Would it mean that, if you flipped a coin too many times (say, more than 1000), the universe would crash because the probability of whatever specific sequence of heads and tails came up was now too astronomically small? Now remember that amplitudes behave mathematically much like probabilities, not like lengths, times, angles, or other things one could imagine being discretized on a small enough scale. (If you wanted to get more technical, you could say something like: certainly for n at least 3, the group of n×n unitary matrices has no discrete subgroups that are large enough to account for known physical phenomena, but not so large that they are dense in some Lie subgroup, thereby taking us back to the continuum. And if we give up on our set of transformations of states being a subgroup of the unitary group, then we really do seem to give up on the theory making internal sense.)

Comment #43 February 13th, 2015 at 4:29 pm

gasarch #40: To be fair, the paper I linked to never claims to show P=NP, and indeed, explicitly disavows that claim. It “merely” claims to give a super-Turing but physically realizable way to solve NP-complete problems in a single time step.

Comment #44 February 13th, 2015 at 5:02 pm

Awesome, thanks for both answers, Scott!

Comment #45 February 13th, 2015 at 9:57 pm

“Crucially, these absurd consequences would follow even if you tried to “deform” the Born rule by the tiniest amount—giving a compelling explanation for why it has to have exactly the form it has, p=|ψ|^{2}.”

With that as a basis, does it follow that, under accepted theory, a failure to scale a QC will lead to an absurd consequence?

As an example, is it possible to show that, if arbitrarily small amplitudes don’t interfere, then NP-Complete problems are efficiently solvable? As you mentioned, in QC the tables have turned, so it should be possible to make this type of an argument in favor of Quantum Computing.

That’s the type of argument i’m looking for.

Comment #46 February 13th, 2015 at 11:51 pm

@fred #37: Yes, that’s the basis of the Knapsack cryptosystem: there are easy knapsacks, and hard ones. (Any super-increasing set is an easy one.) But if you multiply an easy one by k mod n, you can turn it into a hard one. Then, the hard one is your public key, and the easy one (plus k and n) is the private key.

Encryption: write the message as binary, and add up all the members of the public key that correspond to the ones.

Decryption: multiply by 1/k mod n, and break it apart with the easy knapsack.

Much easier to remember than RSA! (Anyone know the barriers to converting this into a protocol for e.g. traveling salesman?)
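For the curious, the scheme sketched in this comment is the classic Merkle-Hellman knapsack cryptosystem (broken by Shamir in 1982, so a historical toy only, never to be used in practice). A self-contained Python sketch; the specific key values are illustrative numbers I chose, not anything from the comment:

```python
def mh_keygen(easy, k, n):
    """Merkle-Hellman toy keygen: `easy` is a super-increasing knapsack with
    sum(easy) < n and gcd(k, n) == 1. The public, hard-looking knapsack
    is k * easy mod n; (easy, k, n) together form the private key."""
    assert sum(easy) < n
    return [(k * w) % n for w in easy]

def mh_encrypt(bits, public):
    # Sum the public-key elements selected by the message bits.
    return sum(b for bit, b in zip(bits, public) if bit)

def mh_decrypt(c, easy, k, n):
    # Multiply by k^-1 mod n, then solve the easy knapsack greedily.
    t = (c * pow(k, -1, n)) % n
    bits = []
    for w in reversed(easy):
        bits.append(1 if w <= t else 0)
        t -= w * bits[-1]
    assert t == 0, "decryption failed"
    return bits[::-1]

# Illustrative key material (mine, for the example).
easy, k, n = [2, 7, 11, 21, 42, 89, 180, 354], 588, 881
public = mh_keygen(easy, k, n)
msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert mh_decrypt(mh_encrypt(msg, public), easy, k, n) == msg
```

(`pow(k, -1, n)` is the modular inverse, available since Python 3.8.) As for converting this into a protocol for traveling salesman: the trapdoor here relies on a secret linear map turning hard instances into easy ones, and no analogous instance-hiding transformation is known for TSP.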

Comment #47 February 14th, 2015 at 2:09 am

“Would it mean that, if you flipped a coin too many times (say, more than 1000), the universe would crash because the probability of whatever specific sequence of heads and tails came up was now too astronomically small?”

That’s absurd, and kind of hilarious because it’s true. But amplitudes are different from conventional probabilities. For example, a simulation of the classical universe is not required to track probabilities for all possible outcomes – they can be implicit.

On the other hand, a Quantum Simulation may well need to track all amplitudes in order to produce interference.

How would a simulation implicitly achieve interference? And if an explicit process is required, why would it not eventually break down?

Comment #48 February 14th, 2015 at 8:05 am

Job #45: There’s certainly no general theorem that says that if you change quantum mechanics in any way whatsoever, NP-complete problems will become efficiently solvable. (How could there be?) But yes, it’s been found through experience that almost any modification one would naturally think of, which is able in principle to account for known phenomena (and thus doesn’t completely junk QM), would allow “crazy” effects like violating the uncertainty principle, sending superluminal signals, and solving NP-complete problems in polynomial time.

And yes, this is ironic, because some people’s motivation for modifying QM is to kill quantum computing! It must suck to not only fail at that goal, but then get the efficient solution of NP-complete problems thrown in for your trouble. 🙂

In any case, I try to adopt the most conservative position on any scientific question. And here, I’d say that position is: at least since the discovery of quantum fault-tolerance in the mid-1990s, the burden has been firmly on the people who think quantum computing is not possible, to offer a coherent picture of the world that would explain why. It hasn’t been on the proponents to prove that no such picture can possibly exist. And until there’s a concrete such picture on the table, in some sense the debate doesn’t even get started: the experimenters don’t even know where to look for the breakdown of our current understanding of QM.

Comment #49 February 14th, 2015 at 8:12 am

Job #47:

“On the other hand, a Quantum Simulation may well need to track all amplitudes in order to produce interference.”

It’s true that a simulation of a quantum computer by a classical one would, in general, incur an exponential slowdown (provably in the black-box model, and very, very plausibly in the non-black-box one). Were that not the case, we wouldn’t be so interested in the first place!

On the other hand, it’s not true that a classical simulation of a quantum computer would need to explicitly track all the amplitudes in the wavefunction: it could also just directly compute the probability of the relevant outcome using a “Feynman sum over histories,” still taking exponential time, but never storing more than polynomially many numbers in memory. Indeed, that’s exactly how one proves the complexity class containments BQP⊆PSPACE and BQP⊆PP.
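The path-sum idea in the previous paragraph can be illustrated in a few lines: the amplitude ⟨y|C|x⟩ of a circuit of single-qubit gates can be computed by recursing over the intermediate basis states, storing only the current history rather than the whole 2^{n}-entry wavefunction. A Python sketch (all names are mine; this illustrates the idea behind the PSPACE simulation, and is of course not an efficient simulator):

```python
H = [[2 ** -0.5, 2 ** -0.5], [2 ** -0.5, -(2 ** -0.5)]]  # Hadamard matrix

def path_amplitude(gates, x, y):
    """Amplitude <y|C|x> via a Feynman sum over histories.

    `gates` is a list of (2x2 matrix, target qubit) single-qubit gates,
    applied left to right; basis states are tuples of bits. Memory is
    O(#gates) recursion depth while time is exponential in #gates,
    mirroring how BQP <= PSPACE is proved.
    """
    if not gates:
        return 1.0 if x == y else 0.0
    (U, q), rest = gates[0], gates[1:]
    total = 0.0
    for b in (0, 1):  # sum over the intermediate value of qubit q
        amp = U[b][x[q]]
        if amp:
            total += amp * path_amplitude(rest, x[:q] + (b,) + x[q + 1:], y)
    return total

# Two Hadamards interfere back to |0>: amplitude ~1 to return, ~0 to flip.
print(path_amplitude([(H, 0), (H, 0)], (0,), (0,)))  # -> ~1.0
print(path_amplitude([(H, 0), (H, 0)], (0,), (1,)))  # -> ~0.0
```

The |0⟩→|1⟩ amplitude vanishes because the two histories contribute +1/2 and −1/2: interference, computed without ever storing the full state vector.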

“How would a simulation implicitly achieve interference? And if an explicit process is required, why would it not eventually break down?”

I’m not saying you were doing this, but: in arguing that quantum mechanics has to break down somewhere, one doesn’t get to assume that our universe is an efficient simulation running on a classical computer. That’s circular!

Comment #50 February 14th, 2015 at 8:59 am

“And here, I’d say that position is: at least since the discovery of quantum fault-tolerance in the mid-1990s, the burden has been firmly on the people who think quantum computing is not possible, to offer a coherent picture of the world that would explain why.”

That’s sensible but computationally it’s still at odds with the Church-Turing thesis. There is an argument that would settle this conclusively: show that one of “BQP computers” or “NP-Complete computers” must be physically realizable.

To be honest, i’m not even in the “QCs are impossible” camp; at this stage i’m in the “QCs are possible iff BQP ⊆ P” tent and would be “satisfied” by a proof that BQP != P. 🙂

“On the other hand, it’s not true that a classical simulation of a quantum computer would need to explicitly track all the amplitudes in the wavefunction”

Right, but unlike probabilities, the dynamics resulting from amplitudes seem to require some explicit process. The argument is more or less that classical probability theory has fewer requirements than “amplitude theory.”

While i can accept that amplitudes don’t have to break down because they could be implemented outside of the Universe, where there are no time or space constraints, that’s something that will require hard experimental evidence, at a large scale.

I can only afford to be this skeptical because i have no stakes in the fire.

Comment #51 February 14th, 2015 at 9:16 am

By BQP ⊆ P i really mean BQP ⊆ BPP.

Comment #52 February 14th, 2015 at 9:39 am

Job #50:

“That’s sensible but computationally it’s still at odds with the Church-Turing thesis.”

Well, it’s at odds with the Extended Church-Turing Thesis. The original Church-Turing Thesis, the one about computability, isn’t challenged in the least—nor, for that matter, is the thesis that NP-complete problems are intractable in the physical world. The claim is simply that, just like in the 1970s we had to enlarge our notion of the efficiently computable slightly, from P to BPP (i.e., to encompass randomized algorithms), so in the 1990s we had to enlarge it slightly further, from BPP to BQP. And moreover, while today it looks like ultimately P=BPP, today it also looks likely that P≠BQP.

And yes, the fact that quantum computing genuinely challenges the ECT is why some of us decided to spend our careers studying it! It’s not exactly something that escaped notice, or that we sweep under the rug. 🙂

As a close analogy, the Bell inequality was interesting in the 1960s because it really, genuinely challenged local realism. So as soon as Bell published his papers, the failure of local realism immediately became the scientifically conservative option, not certain but the thing for anyone who understood the issues to bet on—even though the failure hadn’t yet been shown by any direct experiment, even though experimental confirmation would need to wait a couple more decades. The Jobs of the 1960s could have said: “That’s sensible but it’s still at odds with the local realism thesis.” Yes, yes it is! Or: “if I’d previously imagined the universe as a 3-dimensional network of classical computing elements, something like a classical cellular automaton, then this would require me to change my view.” Yes, yes, it would! Glad you noticed! But by now, we’ve been through a century of the “quantum mechanics is true” thesis obliterating every other thesis with which it comes into conflict.

Comment #53 February 14th, 2015 at 10:56 am

“…the failure of local realism immediately became the scientifically conservative option, not certain but the thing for anyone who understood the issues to bet on…”

FWIW, i’m trying to understand the issues, but i can’t see that from “QM works” it follows that “QCs scale.”

We’re discussing whether a process that likely occurs outside of the Universe will remain consistent at a large scale – yes, i need experimental evidence for stuff that happens outside the Universe in order to push back the threshold of what’s efficiently computable.

“And moreover, while today it looks like ultimately P=BPP…”

That was the sound of P obliterating a complexity class it came into conflict with. 🙂

Comment #54 February 14th, 2015 at 1:03 pm

Job #53: I don’t know why you keep talking about “outside the universe.” A quantum computer wouldn’t require intervention from outside the universe to operate—at least, no more than an electric stapler requires intervention from outside the universe. It would simply exploit the fact that the universe—our universe—is a quantum-mechanical one.

And yes, of course it’s conceivable that P=BQP. That’s why a lot of us study quantum complexity theory, because we want to answer such questions! Note that my work with Arkhipov, as well as that of Bremner-Jozsa-Shepherd, shows that if your proof of P=BQP yielded a little more (a quantum sampler), it would imply a collapse of the polynomial hierarchy. That raises the stakes.

Comment #55 February 14th, 2015 at 8:51 pm

“Note that my work with Arkhipov, as well as that of Bremner-Jozsa-Shepherd, shows that if your proof of P=BQP yielded a little more (a quantum sampler), it would imply a collapse of the polynomial hierarchy. That raises the stakes.”

In a way that also strengthens P=BQP – in the same way that the absurd consequences of QM alternatives strengthen QC.

P=BQP is just possible enough.

“I don’t know why you keep talking about “outside the universe.””

I guess it’s the same reason why there are many-worlds interpretations of QM. Where does interference occur, and what drives it? It’s a computationally expensive process.

One might argue that it’s simply a property of the Universe but IMO that’s a lot like smuggling the exponentiality somewhere that isn’t being explicitly considered.

I’ll accept that if amplitudes are exactly like probabilities, then interference is a basic property of the Universe and there’s no clear reason why QCs won’t scale. Is that acceptable?

Comment #56 February 14th, 2015 at 10:32 pm

Job #55:

“In a way that also strengthens P=BQP – in the same way that the absurd consequences of QM alternatives strengthen QC.”

Sorry, I think you missed a negation somewhere. We observed that there’s an “absurd” consequence (namely, collapse of the polynomial hierarchy) of a slight strengthening of P=BQP. That can only make it less likely that quantum computers can be efficiently simulated classically (at least, simulated in a strong sense).

“One might argue that it’s simply a property of the Universe but IMO that’s a lot like smuggling the exponentiality somewhere that isn’t being explicitly considered.”

But it is being explicitly considered! The whole point of QC is to explicitly consider it. Fundamentally, the memcomputing and similar folks go astray because they don’t appreciate the intellectual enormity of what they’re trying to do (namely, solve NP-complete problems in polynomial time), so they don’t perceive the need for a new idea commensurate with that enormity. But you can’t accuse QC researchers of similar blindness. Challenging the ECT—not even solving NP-complete problems, just pushing the boundaries of efficient computation by a little bit—would clearly require one of the most profound ideas in the history of the world, but quantum mechanics is indisputably such an idea. If the ECT is going to fall, quantum mechanics seems like a worthy conqueror.

“I’ll accept that if amplitudes are exactly like probabilities…”

Well, of course they’re not exactly like probabilities! If they were, they’d be probabilities. Amplitudes are similar to probabilities in that they’re not directly observable, and they evolve only via norm-preserving linear transformations, and you need 2^{n} of them to describe a configuration of n bits, and they enter physics only as tools for calculating the likelihood that one thing or another is going to happen.

I regret that this conversation is going around in circles, so we should probably draw it to a close. But, to try to explain my position one last time: we don’t get to decide whether we want interference of amplitudes to be a basic property of the universe or not. Nature has decided, and there’s nothing Nature could possibly have done to make it clearer that interference of amplitudes is a basic property of the universe. So to whatever extent our intuitions have trouble with that, the problem that falls to us is to update our intuitions. The ratchet of physics turns and turns, explaining observed phenomena using mathematical structures that are more and more removed from intuition, but I can’t think of a single case in history where this ratchet has ever turned backwards, and where the analogues of the QM skeptics have turned out to be right.

Comment #57 February 15th, 2015 at 9:55 am

Job #53

“FWIW, i’m trying to understand the issues, but i can’t see that from “QM works” it follows that “QCs scale.””

QM tells us what happens with superpositions and interference. This can be used for computing. The use for computing is called QC.

QCs are just QM. If QCs don’t scale, QM doesn’t scale. QM scales.

Comment #58 February 15th, 2015 at 2:27 pm

“…they enter physics only as tools…”

IMO that’s the key difference.

Probabilities enter the classical Universe only as a description, but amplitudes are tools used by… what? The process that drives interference?

I was hoping to understand what that process is so that i might reason about why it may or may not scale.

Anyway, i hope this rather long discussion was at least on topic. Thanks for taking the time.

Comment #59 February 15th, 2015 at 3:10 pm

I am definitely not up on my physics (I went to a liberal arts school and never took physics in college), but I had read a really interesting article recently about Stephen Hawking claiming that “there are no black holes.”

http://www.pbs.org/newshour/updates/hawking-meant-black-holes/

Apparently, what Hawking meant by this is that there is no such thing as an event horizon–only an “apparent horizon” that holds matter and information for a while, and then releases it.

Does Hawking’s assertion that there are no black holes (event horizons) have any relevance to memcomputing?

Please pardon my weak physics education…

Comment #60 February 15th, 2015 at 8:36 pm

God is very mad at MIT. That is why Boston is experiencing a biblical snowcopalypse. Repent yee MIT sinners.

Comment #61 February 15th, 2015 at 9:02 pm

Philip #59

I’m gonna go out on a limb and say no.

But we could have fun speculating what happens if you throw an iPhone into a black hole. I’m going to guess, purely on the basis that NP-complete problems should be hard, that it takes exponential time/energy to get back.

Comment #62 February 16th, 2015 at 9:33 am

Scott # 56:

What you note here is mostly true, where “mostly” means something like 99.9% of the time or so. [That is one of the reasons why physics often is held as the model of doing science.] But the number is not 100%; the ratchet has at some rare times moved back too. Even in physics.

The most salient counter-example would be the atomic theory/approach. Dalton and Avogadro had it as early as the early 19th century, but physicists as a community accepted it only after Jean Perrin’s Nobel-winning work in the early 20th century (based on Einstein’s 1905 suggestion).

Suppose you say that it all was chemistry, not physics. Ok. How about Clausius → Maxwell → Boltzmann’s development of the kinetic theory/statistical mechanics? The essentials were already in place by the end of the 1860s, but in the subsequent three decades, physicists took the success of Maxwell’s own EM theory, and used its fields or continuum nature in their arguments in order to deny reality to the idea of atoms! They lionized thermodynamics not only because of the general nature of its truths, but also because it was a continuum theory, unlike stat mech. Planck’s subsequent application of thermodynamics and EM to the cavity radiation problem (a problem already highlighted by Kirchhoff in the mid-1880s) had entirely proceeded without deriving any benefit (even a mere conceptual-level benefit) from the particles approach. (In contrast, Rayleigh did feel free to use the Boltzmann statistics. But the point is, there was no unanimity.)

A re-examination of the issues in the early 20th century (by Gibbs, and separately, by Einstein and Perrin) led to the necessary correction, and the ratchet began moving “forward” again.

In short, there was a three decades-long period even in the late 19th century, when the ratchet in an important area of fundamental physics had, at least in part, definitely turned backwards.

The issue of fields and particles seems to have always confused physicists. Newton suggested a corpuscular nature of light, but Huygens’ idea of pulses (sort of like the sharp characteristics implied by the hyperbolic equations), properly developed into Young’s wave theory, came to rule physics. And then came cavity radiation.

And, we know what happened next: the ratchet first splintered into pieces; then each splinter became vague; then some of the splinters even began running back in time—who cares what other people think, you know? And then, the splintering continued at an ever greater speed, with each splinter either getting guided by a thoroughly ghostly potential (exactly like those angels shoving the planets forward in their orbits) or idly multiplying universes at an ever-increasing rate.

But, of course, as far as these later ideas in physics are concerned, the unidirectionality of their “development” is abundantly clear: it always results in ever more increasingly sophisticated mathematical structures—mathematical structures that are more and more removed from intuition. Who can argue about that point? Answer: None.

Best,

–Ajit

[E&OE]

Comment #63 February 16th, 2015 at 11:45 am

Since this is already a post-classical computing thread, I’ll pose the following question:

Why has no one yet considered the physical efficiency of analog computers like these ones, which do not use bits/qubits at all?

http://en.wikipedia.org/wiki/Differential_analyser

http://en.wikipedia.org/wiki/Op_amp_integrator

https://www.youtube.com/watch?v=NAsM30MAHLg&src_vid=6dW6VYXp9HM&feature=iv&annotation_id=annotation_2160609135

They solve computational problems used every day in engineering, like integrals, differential equations, and Fourier transforms, which appear in “nature’s computation,” i.e., physics.

There has been real progress on these problems with both classical and probabilistic methods (Monte Carlo, for example).

But in theoretical CS those problems are classified under RE and not in R!

http://www-math.mit.edu/~poonen/papers/sampler.pdf

and this is not a really well-known or famous field of research.

How do we know that an analog computer will not get stuck on the same problem instances as a classical computer? And if it does halt, wouldn’t that make it a post-Turing device?

Also, I have an interesting conjecture: in physical energy × time terms (a quantity measured in units of physical action), the amount of time and energy needed by those analog devices for the same computation (when they do halt, not counting preparation of the input) is far less than for any classical computing device.

Comment #64 February 16th, 2015 at 2:35 pm

To Lopsy #61:

“What happens if you throw an iPhone into a black hole?”

Steve Jobs is going to be waiting inside the black hole and he will immediately throw the iPhone back at you, stating: “How many times have I told you A-holes that you can’t destroy information!” 🙂

Comment #65 February 16th, 2015 at 6:01 pm

Scott: You mentioned above that TSP is easy if all the cities are in a straight line. So TSP gets a lot harder when you go from R^1 to R^2. Can anything be said about TSP in R^m compared to in R^n, for m .LT. n? (using Fortran .LT. to avoid fighting the system).

My guess is not much, because R^2 is already npc, and the tools for working within npc are pretty blunt.

Comment #66 February 16th, 2015 at 8:40 pm

> On the other hand, a Quantum Simulation may well need to track all amplitudes in order to produce interference.

Hum… that raises the question, what is the smallest amplitude for which interference has been demonstrated? 😀 But perhaps it’s a stupid question. Presumably you lose unitarity if interference goes away below some threshold, right?

Comment #67 February 17th, 2015 at 6:51 am

Raoul #65: If you want an exact solution, then TSP is already NP-complete in R^{2}, so as you say, obviously also in R^{k} for all k>2.

But with approximation algorithms the story gets much more interesting. Sanjeev Arora, in 1996, gave a famous PTAS (polynomial-time approximation scheme) for TSP in R^{2}. His PTAS can be generalized to R^{k} for all k>2, but its running time increases with k at a doubly-exponential rate.

Comment #68 February 17th, 2015 at 6:57 am

Remember, Aaron Swartz? Too bad, no one had to pay for that.

Comment #69 February 17th, 2015 at 6:53 am

Rolf #66: Yeah, that question is hard to make sense of, because you can produce amplitudes of order 2^{-n/2} for any n, and demonstrate interference effects with them, just by generating n horizontally-polarized photons and then looking at them all vertically.

Comment #70 February 17th, 2015 at 8:14 am

Hi Scott, Everyone:

I am curious what your thoughts are on how to responsibly disclose a result like a quick algorithm for factoring large integers. (Of course I don’t think anyone has, or will for the foreseeable future, but if someone did, it would have huge consequences for things such as online banking, national security, etc.)

I just don’t see how one would go about disclosing such a result and still remain responsible!

Comment #71 February 17th, 2015 at 9:14 am

@Responsible Disclosure:

Yes! I’ve been wondering the same thing. My first thought would be to anonymously post the answers to the RSA Factoring Challenge online along with a message saying that I’ll disclose the algorithm once there’s a consensus that the important crypto has been patched.

Comment #72 February 17th, 2015 at 10:16 am

It is sad that Scientific American accepted this without disclaimers.

Even if SA doesn’t understand the errors, claims that are not yet peer-reviewed and sound too good to be true (anguis oleum cures cancer! NP problems can be solved in poly time!) should not be presented as though they were scientific facts.

Comment #73 February 17th, 2015 at 1:35 pm

Scott #68

What kind of values of n have been demonstrated experimentally?

I find it confusing that there are papers like this one putting relatively weak bounds on violations of the Born Rule when we have results from particle physics which require QM to work to extremely high precision.

Maybe the reason we haven’t detected proton decay is that THAT process is at the limit of how accurate the Born Rule is obeyed by nature (!?) 🙂

Comment #74 February 17th, 2015 at 2:28 pm

Scott #68, classical probabilities kind of muddy the waters.

For example, suppose we have n Quantum Computers. Each QC has one qubit and applies two straight Hadamards.

The result is |0>,|0>, …, |0>.

Is that the result of single-bit interference (with ±1/√2 amplitudes) or multi-bit interference (with much smaller ±1/√(2^{n}) amplitudes)?

Comment #75 February 17th, 2015 at 2:42 pm

Scott #68 re #66

To demonstrate an interference effect with n horizontally-polarized photons, we need to measure that it differs from the n-1 case. Is there any such measurement that is not exponentially small -e.g. remains possible for, say, n=100?

Comment #76 February 17th, 2015 at 3:28 pm

James #73: For the reason pointed out by Job #74, I think this is fundamentally a silly question. You can get smaller and smaller amplitudes by just building a bigger and bigger source of polarized light—or for that matter, taking two sources of polarized light, in two different labs or two different galaxies, and then defining them to be two components of the same quantum state. In fact, forget about polarized light: why not just take all the degrees of freedom in the observable universe!

So at the least, we should reinterpret the question to be about something like: the smallest amplitudes that have been proven to occur in any state with global entanglement (and which was shown to contain such global entanglement). Even then, if you just take a Schrödinger cat state and view it in the Hadamard basis, you can easily get amplitudes of order 2^{-(n-1)/2}, where n is the number of qubits in your cat. So then the question boils down to how large of a Schrödinger cat has been created, which is partly a question of definition—but it’s certainly been done with n at least in the thousands. But maybe it would be better to look at ground states of 2D or 3D spin systems with pairwise near-neighbor interactions: there’s some experimental evidence that these indeed exist in condensed matter and indeed behave as quantum theory predicts (e.g., Aeppli et al.), and they have exponentially small amplitudes when viewed in any local basis.

So, bottom line, I don’t think there’s any proposal currently on the table for how you could modify QM to get rid of exponentially small amplitudes, which wouldn’t lead to bizarre predictions (contrary to what was observed) for experiments that have already been done. Once there’s such a proposal, then there’ll be something to talk about.

Comment #77 February 17th, 2015 at 4:40 pm

Here’s one possible single-qubit test for amplitude precision.

Apply a sequence of n pi/n rotation gates in between two Hadamards, for large n.

If there is a maximum amplitude precision, then the effect of the pi/n rotations will be rounded up/down and the result will not be 100% |1> – it could be way off.

This is assuming we can implement a pi/c rotation gate with low error rate for relatively large c.
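Job’s proposed test is easy to simulate exactly, which at least shows what outcome the hypothetical rounding would produce. A NumPy sketch (the `precision` knob is a made-up model of finite amplitude precision, not anything from accepted physics):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard

def rotation_test(n, precision=None):
    """Job's test: H, then n phase rotations of pi/n each, then H.

    With exact amplitudes the rotations compose to a phase of pi, so the
    final state is |1> with certainty. `precision` optionally rounds the
    rotation angle, a hypothetical model of finite amplitude precision.
    """
    theta = np.pi / n
    if precision is not None:
        theta = round(theta / precision) * precision  # hypothetical rounding
    R = np.array([[1, 0], [0, np.exp(1j * theta)]])
    state = H @ np.linalg.matrix_power(R, n) @ H @ np.array([1, 0])
    return abs(state[1]) ** 2  # probability of measuring |1>

print(rotation_test(1000))                  # -> 1.0 (up to float rounding)
print(rotation_test(1000, precision=0.01))  # pi/1000 rounds to 0: output 0.0
```

As the second call shows, once the per-gate angle drops below the hypothetical precision, the rotations do nothing at all and the result is “way off” exactly as described.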

Comment #78 February 17th, 2015 at 7:56 pm

BTW, i suppose that in order to factor n-bit numbers, we’ll need to implement the pi/2^n rotations required by the QFT, for arbitrarily large n using polynomial resources.

Before this discussion i would have asked whether there is evidence that it will be possible to implement a pi/2^1000000 rotation efficiently. Now i understand that we should be asking why, according to accepted theory, it should not be possible – so it wasn’t a total waste of time.

From that perspective, how would we be able to construct and tune a pi/2^1000000 rotation gate? Its effect is so small that we might need to perform an exponential number of measurements just to confirm that it’s working properly.

Comment #79 February 17th, 2015 at 8:09 pm

Job #78: No, that’s a different problem than the one we were talking about. If you had to rotate through an exponentially small angle, then that really would incur a big complexity blowup (“only” a polynomial blowup, if you use fault-tolerance and the Solovay-Kitaev Theorem, but still annoying).

But the solution to the problem is simple: you can simply omit all the rotations whose angle is below some threshold! (Say 1/T, where T is the total number of gates.) Because of the unitarity of QM, doing so can have only a negligible effect on the probability of getting the right answer. And as a side benefit, this also decreases the gate complexity of the quantum Fourier transform (IIRC, from O(n^{2}) to O(n log n)?) Many treatments of Shor’s algorithm have the details.

Comment #80 February 17th, 2015 at 8:57 pm

“But the solution to the problem is simple: you can simply omit all the rotations whose angle is below some threshold!”

Ah, there goes my argument that, since the problem of providing a physical implementation for a pi/2^n rotation requires exponential time to verify, and is thus not in NP, Quantum Computing is only possible at the cost of managing to find solutions to increasingly larger instances of a fabulously difficult problem.
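As an aside, the gate-count savings from dropping the small QFT rotations can be estimated with a quick count. I’m assuming the standard circuit convention that the exact QFT on n qubits applies a controlled rotation R_k = diag(1, e^{2πi/2^{k}}) between pairs of qubits (the j-th target receives up to j of them, for k = 2, …, j+1), and that we keep only k ≤ m ≈ log2(n/ε):

```python
from math import ceil, log2

def qft_rotation_counts(n, eps=1e-3):
    """Controlled-rotation counts: exact QFT vs the truncated QFT that
    omits every rotation of angle below roughly eps/n.

    Exact: one controlled R_k per pair of qubits, n(n-1)/2 in total.
    Truncated: each of the n targets keeps at most m-1 rotations
    (k = 2..m with m ~ log2(n/eps)), i.e. O(n log n) in total.
    """
    exact = n * (n - 1) // 2
    m = ceil(log2(n / eps))  # largest k whose angle we keep
    truncated = sum(min(j, m - 1) for j in range(1, n))
    return exact, truncated

print(qft_rotation_counts(1000))  # -> (499500, 18810)
```

So for a 1000-qubit QFT at ε = 10^{-3}, roughly 96% of the controlled rotations can be omitted with negligible effect on the output distribution.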

Comment #81 February 17th, 2015 at 9:59 pm

Question: If a pi/2^n rotation gate does require exponential time to verify – and is thus not in NP – then doesn’t the bounded-error, polynomial-time approximation guaranteed by the Solovay-Kitaev theorem imply that BQP != NP?

Or does the Solovay-Kitaev theorem only guarantee that such an approximation exists without actually providing one?

Incidentally, the Wikipedia page for Solovay-Kitaev Theorem is empty.

Comment #82 February 18th, 2015 at 4:20 am

Re #79, is omitting all small rotations still enough if you have quantum gates with more than two inputs?

Comment #83 February 18th, 2015 at 6:57 am

jonas #82: Yes. You can take any quantum circuit whatsoever with T gates, and replace each gate by a different gate that’s ε-close to it (or by nothing, if the gate was ε-close to the identity), and the total error this induces in the final quantum circuit will be at most about Tε.
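The Tε bound is easy to check numerically: perturb every gate in a product of unitaries and compare against the exact product. A NumPy sketch, where each gate is nudged by a small extra phase (my choice of perturbation, for concreteness):

```python
import numpy as np

def phase(delta):
    # A 2x2 unitary at spectral distance |e^{i*delta} - 1| from the identity.
    return np.diag([1, np.exp(1j * delta)])

def accumulated_error(gates, delta):
    """Compare the exact product of unitaries against one where every gate
    picks up a small extra phase. By the standard telescoping argument
    (which uses unitarity: all partial products have norm 1), the
    spectral-norm error of the product is at most the sum of the
    per-gate errors, i.e. about T * delta here."""
    exact = np.eye(2, dtype=complex)
    noisy = np.eye(2, dtype=complex)
    for U in gates:
        exact = U @ exact
        noisy = (phase(delta) @ U) @ noisy
    return np.linalg.norm(noisy - exact, 2)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T, delta = 100, 1e-4
err = accumulated_error([H] * T, delta)
per_gate = abs(np.exp(1j * delta) - 1)
print(err <= T * per_gate)  # -> True: total error bounded by ~T*delta
```

The same telescoping bound is what licenses both replacing each gate by an ε-close one and dropping gates that are ε-close to the identity.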

Comment #84 February 18th, 2015 at 7:01 am

Job #81: NP is a class of languages. I don’t understand what it even means to ask if “verifying a gate is in NP.”

But if it helps, the Solovay-Kitaev Theorem doesn’t just tell you that you can approximate a rotation through any angle to within a precision of ε, using polylog(1/ε) gates drawn from any universal gate set with inverses. It also tells you how to do it—i.e., it gives you an efficient algorithm for finding the gate sequence.

Comment #85 February 18th, 2015 at 11:09 am

Job #81: You may read this: http://michaelnielsen.org/blog/the-solovay-kitaev-algorithm/

Comment #86 February 18th, 2015 at 2:55 pm

Is there some explicit dependency between the quantization of space and time (the Planck limits) and the amplitudes? How does QM hold up at such small distances?

The scale of the QM system seems to never be considered. Yet quantization is a major characteristic of QM.

Comment #87 February 18th, 2015 at 4:13 pm

Fred #86: No, they have nothing to do with each other. Remember, again, that amplitudes are purely numbers that you use to calculate probabilities—nothing more. They no more have a length scale associated with them than probabilities have a length scale. And the current popular ideas about quantum gravity (like AdS/CFT) do NOT involve modifying quantum mechanics in any way (including at the Planck length): while I guess anything is possible, it’s hard to see how a breakdown of QM at small scales only could happen. Yet again, quantum mechanics is not a conventional physical theory, that describes phenomena at a certain scale. It’s a different calculus of probabilities.
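The point that amplitudes are just numbers for computing probabilities fits in a few lines: a state is a normalized vector of complex amplitudes, and the Born rule squares their magnitudes to produce an ordinary probability distribution; no length scale appears anywhere. A toy illustration (the state vector is arbitrary):

```python
import numpy as np

# An arbitrary 2-qubit state: four complex amplitudes with unit 2-norm.
amps = np.array([0.5, 0.5j, -0.5, 0.5])
assert np.isclose(np.linalg.norm(amps), 1.0)

# Born rule: the probability of each measurement outcome is |amplitude|^2.
probs = np.abs(amps) ** 2
print(probs)  # [0.25 0.25 0.25 0.25] -- a plain probability distribution
```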

Comment #88 February 18th, 2015 at 6:08 pm

Scott #76

Ah, ok, so you’re really talking about the exactness of the linearity of QM rather than the “accuracy” of the Born Rule – sorry, maybe I confused the issue with my earlier comment. I definitely agree that the linearity has been proven to spectacular precision experimentally.

Comment #89 February 19th, 2015 at 11:10 am

Scott #87

What makes QC conceptually difficult for me is that classical computers are all about causality/determinism (we can talk about “random algorithms”, but even in that case it’s a theoretical artifice).

QM may just be a new way to compute probabilities, but it also says that perfect randomness is a fundamental building block of nature, and uses that as the building block of the quantum computation.

Once you start dealing with probabilities the problem is that it’s very hard to talk about them in terms of absolute “precision”, besides traditional practical statistical analysis.

E.g. you can never really prove that a given practical coin is “perfect” and not biased in a subtle way.

It’s fine if you do 3 coin tosses and throw the coin away, but how do you build a computation that relies on an exponentially increasing number of imperfect coins whose results have to “interfere” with one another?

From what I’ve read it seems that many of the attacks on the scaling of QC revolve around error correction and some of the assumptions made to build that theory – i.e. the nature of the errors and their correlations, the actual way decoherence manifests itself in a practical computer vs in a theoretical model.

And that’s a key point – a classical computer is a perfect realization of a theoretical digital logical machine. All it takes is reducing the errors at each transistor past a certain threshold and regenerating the signals.
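That threshold idea, regenerating the signal at every stage so that per-gate noise never accumulates, can be illustrated with a toy saturating gate (the gain of 3 and the noise level are made-up numbers for illustration, not a real transistor model):

```python
import numpy as np

rng = np.random.default_rng(1)

def gate(v):
    """Idealized logic buffer: a saturating nonlinearity that snaps an
    analog voltage back toward the clean 0 and 1 levels."""
    return np.clip(3.0 * (v - 0.5) + 0.5, 0.0, 1.0)

v = 1.0                        # a logical "1"
for _ in range(1000):          # a long chain of gates, each adding noise
    v = gate(v + rng.normal(0.0, 0.05))

print(v)  # still 1.0: the signal is regenerated at every stage
```

Because the noise stays well below the gate's decision threshold, each stage restores the bit exactly; this is precisely the property that delicate quantum amplitudes lack without error correction.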

It’s not often that we can create truly perfect machines like that which are also perfectly scalable.

A QC on the other hand is a totally different beast – we have to deal both with correcting inherent biases and imprecision in the way qubits are realized, and at the same time maintain delicate entanglements between them. It’s a strange realm that’s both analog and digital.

Maybe the current state of confusion (DWave, QC skeptics, …) is a textbook case of leaky abstraction (“an implemented abstraction where details and limitations of the implementation leak through”):

Classical computers are very easy to build.

All you need is a bunch of stable “macro” objects.

You can make them out of gears https://www.youtube.com/watch?v=jiRgdaknJCg

out of water bubbles https://www.youtube.com/watch?v=UuNK4mDSvuQ

etc.

As a result, the entire task has been delegated to engineers.

The engineers who build the computers and the CS people who use them have been living in mostly separate worlds for the last 50 years (there are still interesting physics challenges in the fabrication of transistors, but they’re mostly unrelated).

For 50 years the CS community has been served its “perfect” bits on a silver plate.

Actually the bits have been so perfect that most top CS academics can’t even be bothered writing actual programs. The fact that real classical computers exist is irrelevant to them (but the world wouldn’t be able to operate without computers now).

So it’s been very tempting for them to jump from “bits” to “qubits” on paper, and go from classical logical gates to theoretical QC gates, etc. And build entire careers on top of machines that haven’t even been built yet and talking/hyping “qubits” as the next big thing.

But the clean hardware abstractions that made classical computers such a success just don’t exist with QC.

If you “delegate” the building of qubits to the engineers, well, you come up with the D-Wave mess.

And if you rely just on physicists/CS ppl to build the hardware, you end up with endless debates on error correction, the actual nature of QM, and what they can actually solve.

In the meantime everyone is abusing the abstraction one way or another, even though it’s totally flawed.

Comment #90 February 20th, 2015 at 10:40 am

Scott –

No, they have nothing to do with each other. Remember, again, that amplitudes are purely numbers that you use to calculate probabilities—nothing more.

For the record, people should not read into this a message of “shut up and calculate”, which in any case I don’t think Scott intended. Yes, amplitudes are numbers, and so are probabilities. Amplitudes are an analogue in quantum probability of what probabilities are in classical probability. (*An* analogue. The other analogue is probabilities.)

Comment #91 February 20th, 2015 at 10:48 am

fred – “What makes QC conceptually difficult for me is that classical computers are all about causality/determinism (we can talk about “random algorithms”, but even in that case it’s a theoretical artifice).”

It’s not just a theoretical artifice. Ever since the highly ironic Netscape secure connection vulnerability 20 years ago, it has been clear to everyone that there are fundamental uses of true randomness in practical computer technology.

You can think of a classical computer as a “machine that processes probability distributions”, to use a great phrase that I learned from Oded Schramm. Maybe the vast majority of practical algorithms do not need this interpretation, but some of them truly do.
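The "machine that processes probability distributions" picture has a concrete linear-algebra form: one step of a randomized classical computation is a stochastic matrix (columns summing to 1) acting on a probability vector, exactly where a quantum gate would be a unitary acting on amplitudes. A toy sketch (the lazy random walk is my illustrative example, not from the comment):

```python
import numpy as np

n = 4
dist = np.zeros(n)
dist[0] = 1.0                 # start in state 0 with certainty

# Lazy random walk on a 4-cycle: stay put with prob 1/2, step left or
# right with prob 1/4 each.  Every column of `step` sums to 1.
step = np.zeros((n, n))
for s in range(n):
    step[s, s] = 0.5
    step[(s - 1) % n, s] = 0.25
    step[(s + 1) % n, s] = 0.25

for _ in range(100):          # repeated steps mix toward uniform
    dist = step @ dist

print(dist)                   # essentially [0.25 0.25 0.25 0.25]
```

A deterministic computer is just the special case where every column of the matrix has a single 1 in it.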

Comment #92 February 23rd, 2015 at 4:21 pm

Greg #91

Classical computers definitely don’t require randomness to work, quite the opposite – good luck building a calculator on top of registers that randomly switch bits when a cosmic ray hits.

Some algorithms rely on some source of perfect randomness, but whether that source of randomness is internal or external to the computer is a matter of definition; it can just be considered part of the input.

It’s not hard to come up with a good source of randomness – like natural noise (which itself is traced back to QM): read the last 4 digits of an external thermometer, or link it to a Geiger counter.

But the difficulty is generating randomness with a high throughput for the practical applications that need it (encryption).

Comment #93 February 27th, 2015 at 2:38 pm

Speaking of … Solving mazes with memristors: A massively parallel approach

Phys. Rev. E 84, 046703 – Published 14 October 2011

Yuriy V. Pershin and Massimiliano Di Ventra …

… (They) wrote:

III. NUMERICAL SIMULATIONS

A. Model

“In this section, we present numerical simulations of the dynamics of memristive processors. Numerical modeling of memristive networks is easily implemented and thus offers by itself a practical computational algorithm for maze solving. This computational approach, however, requires multiple computational steps (see below) as opposed to the real memristive processor discussed in the previous section which needs only a single step to perform the whole computation.”

Their Figure 1 depicts a notional memristor network that they used to (numerically) model a one-step maze solution.

******

First question: Is their notional numeric approach to simulating a one-step solution to that particular maze valid?

Second question: Setting aside two of the issues discussed here …

(“Memcomputing NP-complete problems in polynomial time using polynomial resources and collective states, memcomputing “is based on the brain-like notion that one can process and store information within the same units (memprocessors) by means of their mutual interactions”)

and the authors’ assertion that…

(“this idea allows the Subset Sum problem to be solved with polynomial resources, by exploring all 2^n possible subsets in parallel”),

…is a memcomputing architecture likely to work according to the authors’ claims?

Do you know conclusively, or would you hazard a guess? I am more interested in the validity of the memcomputing concept than in possibly flawed claims about solving the Subset Sum problem “…in polynomial time using polynomial resources and collective states”.

Comment #94 February 27th, 2015 at 2:46 pm

dubina #93: Sorry, I don’t feel qualified to comment on other possible advantages of memcomputing, once the wrong claim about NP-complete problems has been debunked. If others do, they should feel welcome to chime in.

Comment #95 February 27th, 2015 at 4:23 pm

Thank you, Scott. I think that aspect has been given short shrift so far.

Comment #96 February 28th, 2015 at 10:09 pm

I don’t buy the other advantages they were touting either.

The article in Scientific American starts by pointing out how much electricity we collectively spend on computers. That’s of great importance. So at the end of the article I wondered, how exactly can memcomputing do the same amount of work with less electricity? They don’t answer that. I think it’s all smoke and mirrors.

Comment #97 April 6th, 2015 at 9:43 pm

i think the problem of P/NP is more metaphysical. which poses a physical problem.

well okay factoring integers requires some type of prime structure algorithm so riemann hypothesis with a local search.

other problems like traveling salesman can be solved by a physical continuum computer basically something that can solve meta-graph theory which is scanning a map for everything exterior to the points. which is really just matrix equation calculations.

well when you get to the number theory of prime structure, there seems to be no number dna, which can unfold the prime structure, basically something more complex or homeomorphically modular mathematics that satisfies quick steps to factoring integers. i think the reason is because locally well einstein said space can be finite but unbounded and so there are limits to what you can measure. so more clarification is going to be needed in the future as to what are the classes of P/NP. i think its not a “stable” question yet because of number theory concerns. but i do think more complex problems will be solved in the future. i would put number theory in the most unsolvable class.

Comment #98 July 3rd, 2015 at 6:27 pm

Not sure if you’re still checking comments for this post, but they’ve now published an actual experiment follow-up. http://advances.sciencemag.org/content/1/6/e1500031

The relevant bits seem to be

from the abstract, and

That sounds incredibly hand-wavy to me: “quantum computing does something similar to what ours is doing, therefore ours is at least as good as that”, but it’d be nice if you could analyse that claim more robustly. At the very least, would the existence of polynomial error codes allow solving NP-complete problems in polynomial time and resources, and is there reason to think such codes won’t exist?

Comment #99 July 7th, 2015 at 8:53 am

As was just mentioned by @Avi, a peer-reviewed version has just been published in the _Science Advances_ open access mega journal arm of _Science_/AAAS. Note that they attempt to give some idea of what such an encoding might look like:

which refers to:

You can find that article at http://dx.doi.org/10.1109/TNNLS.2015.2391182 or, if you don’t have an institutional subscription, via the arXiv ID. You may want to update your post (because it is frequently referenced) to address this IEEE TNNLS example encoding and whether it gets close to responding to your concerns.

Comment #100 July 7th, 2015 at 9:39 am

Ted Pavlic #99: I think it was a serious mistake for Science Advances to have published this paper. (Though in their defense, it’s hard to ensure quality reviewing across all areas of science, especially when you insist on 2-3 week turnaround times. And authors, of course, have the liberty of submitting their paper to one journal after another until they “hit the jackpot” by getting reviewers who don’t understand what they’re doing.)

I took a look at the link you provided, and I don’t see that anything substantive has changed. In particular, I’m still missing any explanation for how an NP-complete problem gets solved in any proposal of this kind without an exponential blowup in the physical resources, ultimately violating the Bekenstein bound.

If someone wants to try to explain it to me, they’re welcome to do it right here—but please answer directly, rather than providing yet another link that, when followed, turns out not to address the question (or you can’t tell where it does, if anywhere). I don’t want a URL; I want an explanation!

Comment #101 July 7th, 2015 at 5:01 pm

Great points, thanks for explaining. I noticed that most, if not all, of your arguments refuting memcomputing are echoed in a paper from Apr 2015 here:

http://arxiv.org/pdf/1412.0650.pdf

by Igor L. Markov.

Comment #102 July 7th, 2015 at 5:06 pm

@Scott #100 — I didn’t mean to annoy you. You had just asked about the encoding, and I just wanted to point out that in their new version, they have a reference that claims to provide an example encoding. That’s all! I wasn’t defending them nor attacking your post. I just thought you might be interested. Apparently you weren’t?

(And I totally agree with you about open-access mega journals such as Science Advances and PLoS ONE and the like — their results should be taken with a grain of salt. Having said that, I’ve been a reviewer for Science Advances and was not forced to get my reviews back in any shorter timeline than other journals. Moreover, the paper I reviewed most recently was not accepted. So it’s not like everything makes it through.)

At the risk of annoying you further with another URL, I thought you might also be interested in this recent result from May 2015. There’s no P=NP claim or anything like that, but it’s another example of how analog computers can, in principle, emulate some aspects of quantum computing that arise simply due to properties of waves.

Popular news: “Quantum computer emulated by a classical system”, http://phys.org/news/2015-05-quantum-emulated-classical.html

Primary source (caveat: open-access journal): “Signal-based classical emulation of a universal quantum computer”

by La Cour and Ott

New Journal of Physics (2015), 17:053017, http://dx.doi.org/10.1088/1367-2630/17/5/053017

Abstract:

Comment #103 September 6th, 2015 at 3:23 pm

[…] There’s nothing wrong with analog or distributed or stochastic computing, I’ve done them myself. But they’re still classical, since there are no system-level quantum interactions involved in the computation. Analog architectures are tricky, though, since it’s easy for a designer to “sweep complexity under the rug” without realizing it. A common error is to implicitly assume that analog signal is a synonym for real-valued signal, which can lead to absurdities in the analysis. A recent example of this is the Memcomputing fiasco. Memcomputing is a brain-inspired analog architecture which, the authors claim, can solve NP-complete problems with polynomial resources in polynomial time. It can’t, at least not in a physically realizable way, but it’s instructive to understand the “rug sweeping” errors that led them astray. Igor Markov has written an excellent summary of most of the errors here, and a more extensive discussion can be found in Scott Aaronson’s blog here. […]

Comment #104 January 18th, 2016 at 2:43 am

[…] User, theoretical computer science and other stuff. You need to read this blog post: http://www.scottaaronson.com/blo… Long story short: mostly hype. The trick works, but only for small problem instances. Written just […]

Comment #105 August 30th, 2016 at 12:09 pm

[…] I became aware of this when I read an interesting debate about a blog post by Scott Aaronson, The Toaster-Enhanced Turing Machine, and further echoes of it in other published writings, which will be mentioned below. The issue is […]