But not so much lower that it would matter to my argument, I think.

On the other hand, it seems that cosmic rays can sometimes reach energies of 10^21 eV, only about seven orders of magnitude below the Planck energy of roughly 10^28 eV.

That said, in the unlikely event that I am understanding your model correctly, the best thing to look at would be Quantum Zeno Effect experiments, since in your theory wavefunctions will collapse even in the absence of observation.

I believe that QC experiments will begin to fail with higher probability as the number of qubits increases. The reason is that the fraction of the possible “unitary” evolution paths on which the discrete Fourier transform and similar operations remain useful becomes smaller and smaller as more qubits are involved. I’m suggesting that this is a fundamental limitation of Nature, not of engineering. I think this will be very hard to demonstrate convincingly until a few hundred qubits are reached, since at low numbers of qubits the assumption that superpositions are not persistent in reality has little probabilistic effect on the predicted outcomes of the quantum algorithms.

I.e., at low qubit counts the probability is very high that the algorithms will work (the universe evolves along a path where the algorithm is effective), and any failures will be so rare as to be indistinguishable from experimental error.

Now, you might say that this is too vague and not much of a claim, but if I were able to claim anything stronger, then I would certainly have to be wrong, since it would imply that many well-established QM results shouldn’t work in practice either.

That’s why I suggested that any “discrete” time* evolution model could be better attacked through observations of events like the recent huge gamma-ray burst GRB 130427A.

(*Note that time is continuous, and so are the amplitude phases.)

But perhaps a quantitative calculation of the probability of failure, as a function of qubit number, would be possible for a specific algorithm; then I guess people would be more interested.
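As a toy illustration of the kind of calculation being suggested: suppose (purely hypothetically) that each qubit’s superposition independently fails to persist with some small probability p per run of the algorithm. Then the success probability falls off exponentially in the qubit count, staying hidden inside experimental error at small n but becoming pronounced at a few hundred qubits. A minimal sketch in Python, where the collapse probability p is an invented parameter, not derived from any theory:

```python
# Toy model: assume each qubit's superposition independently fails to
# persist with probability p during one run of a quantum algorithm,
# and that the run succeeds only if no qubit "collapses" prematurely.

def success_probability(n_qubits: int, p: float = 1e-3) -> float:
    """Probability that all n qubits keep their superposition for one run."""
    return (1.0 - p) ** n_qubits

# At small qubit counts the effect hides inside experimental error,
# but at a few hundred qubits the degradation is unmistakable.
for n in (5, 50, 300):
    print(n, success_probability(n))
```

With p = 10^-3, five qubits succeed more than 99% of the time, while three hundred qubits already fail more than a quarter of the time; a real calculation would of course need p (and its dependence on the algorithm) to come from an actual model.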

> Is this only because we don’t know enough about QC error correction to be able to prove that a certain noise model definitely does not allow QC?

I don’t know but I very much doubt it. The way I think about it, the reason why it’s so hard to kill QC without also killing classical computation is that, to get universal QC, there are basically only two requirements:

(1) A universal set of *classical* operations.

(2) A single “quantum” operation, such as the Hadamard gate.

And yes, it’s possible to design noise models (for example, pure dephasing noise) that target only (2) without targeting (1). But because they require specifying the *basis* in which the classical computation will be allowed to take place unmolested, such noise models tend to be pretty contrived. Most noise models will kill either (1) *and* (2) (in which case they don’t even allow classical computation), or neither (in which case they allow QC).
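As a concrete toy illustration of why pure dephasing targets (2) but not (1): full dephasing in the computational basis erases the off-diagonal density-matrix elements, destroying the superposition a Hadamard gate creates, while leaving computational-basis states (i.e., classical bits) untouched. A minimal NumPy sketch (the channel here is the idealized, fully-dephasing case):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def dephase(rho: np.ndarray) -> np.ndarray:
    """Full dephasing in the computational basis: zero the off-diagonals."""
    return np.diag(np.diag(rho)).astype(complex)

# A classical bit |1><1| passes through the channel unchanged ...
one = np.array([[0, 0], [0, 1]], dtype=complex)
assert np.allclose(dephase(one), one)

# ... but the superposition H|0> is reduced to the maximally mixed state,
# so requirement (2) is killed while requirement (1) survives.
zero = np.array([[1, 0], [0, 0]], dtype=complex)
plus = H @ zero @ H.conj().T   # the state |+><+|
print(dephase(plus))           # diagonal (0.5, 0.5): the coherence is gone
```

Note the contrivance the channel embodies: it singles out the computational basis as the one in which classical information is left unmolested, which is exactly the point made above.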

To be specific, he showed the plans for Babbage’s analytical engine alongside one of Leonardo da Vinci’s drawings, and said that Babbage’s invention was a hundred years ahead of its time, while da Vinci’s was conceptually impossible.

**Scott** says “It turns out to be extremely hard to design a physically-plausible noise model that would only kill QC, and not also kill classical computation!”

Steve Simon voices similarly broad claims in his (terrific!) on-line CSSQI video lecture **Topological Quantum Computing**: “You can argue that *all* noise processes are local!”

However, these broad claims are far more commonly encountered in QIT talks and in blog comments than in the peer-reviewed literature. For example, no such claims are advanced in the much-referenced survey upon which Steve Simon’s talk is based (much-referenced because it’s terrific!), namely Nayak, Simon, Stern, Freedman and Das Sarma, “Non-Abelian Anyons and Topological Quantum Computation” (http://arxiv.org/abs/0707.1889).

The reason is simple: Nature provides *plenty* of physically-plausible noise models that are problematic for QC precisely *because* they are generically non-local.

Disc drives provide an example that is both familiar and instructive: the classical memory is composed of thermodynamically stable magnetic domains in the platter. As the read-sensor flies overhead, each platter-bit transiently “sees” magnetic images of adjacent platter-bits in the conduction-band of the read-sensor, and is (non-locally) perturbed by those images.

Fortunately (or rather, by careful design) the platter-memory bits are self-correcting, in that the platter itself constitutes a thermal reservoir that continuously reads and corrects each platter bit. Yet even at the classical level, non-local memory errors are ubiquitous, in both magnetic and electrostatic memories: so much so that the unix command “memtest all 1” runs more than a dozen tests assessing vulnerability to both non-local and pattern-dependent noise.
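The pattern-dependent tests that such utilities run can be sketched in a few lines. Here is a toy “walking ones” check in Python (function name and buffer size invented for illustration); real memory testers run this logic against physical RAM, where a bit that flips only in the presence of particular neighbouring patterns would be flushed out:

```python
def walking_ones_test(buf: bytearray) -> list[int]:
    """Toy 'walking ones' memory test: write each single-bit pattern
    to every byte, read it back, and report offsets that miscompare.
    Against an ordinary bytearray this only demonstrates the pattern
    logic; against faulty DRAM it would expose pattern-dependent errors."""
    bad = []
    for bit in range(8):
        pattern = 1 << bit              # a lone 1 walks across the byte
        for i in range(len(buf)):
            buf[i] = pattern
        for i in range(len(buf)):
            if buf[i] != pattern:       # a flipped bit would show up here
                bad.append(i)
    return sorted(set(bad))

# A plain bytearray (unlike faulty DRAM) always passes:
print(walking_ones_test(bytearray(64)))  # []
```

The “all” in “memtest all 1” runs the full suite of such patterns once; the walking-ones pattern is just the easiest one to state.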

Needless to say, similar interactions dynamically entangle photon sources, photon detectors, and optical interferometer modes, such that none can be assessed in isolation from the other two. This is why assessing/demonstrating the scalability of n-photon coherent sources is comparably difficult to assessing/demonstrating the scalability of n-qubit memories, or of n-qubit computations.

Can quantum memories be as robustly and scalably self-healing as classical memories, both in principle and (hopefully someday!) in practice? That is an open question, about which the invited speakers at QStart 2013 will no doubt have much to say.
