
Hmmmm … maybe “complementary” is not the right word? Many of the blog posts are just as polemical as Dyakonov’s preprint.

When dueling polemics become widespread, isn’t that a pretty reliable indicator that neither side has good ideas or a clear plan (kinda like the Iraq War)?

Myself, I’m going to keep reading the gospel (namely, Mike and Ike Ch. 10), and also look forward enormously to reading Jonathan Israel’s second volume *Enlightenment Contested* (when, oh when, will it reach this side of the pond?).

Hopefully, these two works will prepare me for tackling the weak Dyakonov challenge over the holidays; a challenge that I am beginning to perceive as partaking equally of technology and agnotology.

Just to make two specific predictions:

(1) Over the next six months, we’ll all be reading reviews—in many different scholarly and popular journals—of both Israel’s *Enlightenment Contested* and its earlier volume *Radical Enlightenment*. Some of us will even read the books ourselves.

(2) Yes, people will take up Dyakonov’s challenge problem: physicists, mathematicians, and engineers, each in their own style, and we will all learn a lot in doing so.

We should prepare to have our agnotology perturbed … which is good.

Essential, in fact.

]]>But it takes me a while to think about these things. And as Lowell Brown used to say, “Don’t start a long calculation until you know the answer.”

The key to the Weak Dyakonov Challenge does seem to be, “Can projective measurements be implemented with weak POVMs and finite detector efficiencies?”

Projective measurements are a good example of a quantum concept that is extremely simple and natural from an algebraic point of view, yet very complicated and hard to achieve from an engineering point of view.
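To illustrate the gap between the two points of view, here is a small numpy sketch (my own toy model, not anything from Dyakonov’s preprint) in which a projective Z measurement emerges as the limit of many repeated weak POVM steps: a single qubit, subjected to a long sequence of weak Kraus updates, collapses onto a Z eigenstate. The measurement strength `theta` and step count are assumed, illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

# Weak-measurement Kraus pair of strength theta; completeness: sum K^dag K = I
theta = 0.05
K = {s: (np.cos(theta) * I2 + s * np.sin(theta) * Z) / np.sqrt(2) for s in (+1, -1)}

psi = np.array([0.6, 0.8])  # start in a superposition of Z eigenstates
for _ in range(5000):
    p_plus = np.linalg.norm(K[+1] @ psi) ** 2   # Born rule for outcome +1
    s = +1 if rng.random() < p_plus else -1
    psi = K[s] @ psi
    psi /= np.linalg.norm(psi)

# After many weak kicks the state has collapsed onto one Z eigenstate,
# i.e. the long sequence of weak POVMs has implemented a projective measurement.
z_exp = psi @ Z @ psi
print(f"<Z> after 5000 weak steps: {z_exp:+.6f}")
```

The statistics of which eigenstate one ends up in reproduce the Born rule, which is the algebraically “simple” part; the engineering headaches (detector efficiency, added noise per step) live in the many weak steps.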

What about the geometric point of view of Yau and his colleagues (that began this thread)? My present feeling is that Kähler geometry supplies the unifying conceptual link between the algebraic and engineering points of view.

The Weak Dyakonov Challenge seems to be a fun place to tie these ideas together.

]]>Last night, I did something in bed that I had never done before—I read chapter ten *Quantum Error Correction* of Nielsen and Chuang! Yes, it was “good for me”!

You see, for the great majority of practical quantum system engineering purposes, the elegant exegesis of chapters two, eight, nine, and eleven is all you need (and these chapters are pretty self-contained).

The portion of Nielsen and Chuang seemingly most relevant to Dyakonov’s challenge problem is section 10.2 *The Shor Code*. On first reading, it seems perfectly feasible to protect a single-qubit storage register by iterative applications of the Shor code. Am I right in this?

And surely, it is numerically feasible (but lengthy) to implement the Shor code in a manner consistent with the conditions of the challenge problem, i.e., white noise applied to each of the nine Shor code qubits, measurements by weak POVM, finite detector efficiency, noise also in gate transforms, errors in Bloch axis alignment, etc.

The part of the Nielsen and Chuang discussion that perturbs Dyakonov’s physical intuition, and thus motivates his challenge problem, is the assertion (p. 433) “The Shor code protects against completely *arbitrary* errors, provided they affect only a single qubit! The error can be tiny—a rotation about the Bloch axis by pi/263 radians, say.”

I have to say, this statement perturbs my physical intuition too. If I were going to attempt the challenge—and I haven’t made up my mind to do so, ‘cuz it would be a lot of work—I would simplify it to “The weak Dyakonov challenge problem” by allowing all gates to be ideal and all Bloch axes to be perfectly aligned. But I would retain the nonideal Dyakonov elements of weak POVMs and finite detection efficiency for all measurements.
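To see why the “tiny error” claim is less magical than it sounds, here is a small numpy sketch of the mechanism in its simplest setting: the three-qubit bit-flip code, the inner layer of the Shor code (this is my own toy model; the full nine-qubit Shor code does the same for arbitrary single-qubit errors). A tiny coherent X rotation on one qubit is a superposition of “no error” and “full bit flip”; the projective syndrome measurement collapses it onto one or the other, and both branches are corrected exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op_on(qop, k, n=3):
    """Embed a single-qubit operator qop on qubit k of n qubits."""
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, qop if i == k else I2)
    return out

# Encode a|0> + b|1>  ->  a|000> + b|111>
a, b = 0.6, 0.8
psi = np.zeros(8); psi[0], psi[7] = a, b

# Tiny coherent error: rotation about X by the quoted pi/263 radians, on qubit 1
theta = np.pi / 263
E = op_on(np.cos(theta/2) * I2 - 1j * np.sin(theta/2) * X, 1)
noisy = E @ psi.astype(complex)

# Projective syndrome measurement of the stabilizers Z0*Z1 and Z1*Z2
def measure(state, stab):
    Pp, Pm = (np.eye(8) + stab) / 2, (np.eye(8) - stab) / 2
    p_plus = np.linalg.norm(Pp @ state) ** 2
    if rng.random() < p_plus:
        return Pp @ state / np.sqrt(p_plus), +1
    return Pm @ state / np.sqrt(1 - p_plus), -1

post, s1 = measure(noisy, op_on(Z, 0) @ op_on(Z, 1))
post, s2 = measure(post, op_on(Z, 1) @ op_on(Z, 2))

# Syndrome -> which qubit (if any) to flip back
flip = {(+1, +1): None, (-1, +1): 0, (-1, -1): 1, (+1, -1): 2}[(s1, s2)]
if flip is not None:
    post = op_on(X, flip) @ post

fid = abs(np.vdot(psi, post)) ** 2
print(f"syndrome = ({s1:+d},{s2:+d}), fidelity after correction = {fid:.12f}")
```

Whichever syndrome is obtained, the recovered state has unit fidelity with the encoded state (up to a global phase); the measurement has discretized the continuous rotation into a correctable Pauli error.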

As for the challenge problem’s independent Markovian qubit noise, I would numerically implement it in the manner most congenial to engineers, namely, as a covert measurement process, rather than classical white noise. This is of course a purely conventional choice of noise POVMs that cannot affect the outcome of the trial.
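The conventionality of this choice can be checked in a few lines of numpy (a sketch under my own conventions): an unrecorded weak Z measurement of strength `theta` reproduces exactly the phase-flip channel with p = sin²(theta), so the “covert measurement” and “classical white noise” descriptions give the same density matrix.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

theta = 0.1   # assumed measurement strength
K = [(np.cos(theta) * I2 + s * np.sin(theta) * Z) / np.sqrt(2) for s in (+1, -1)]
assert np.allclose(sum(k.conj().T @ k for k in K), I2)  # POVM completeness

# An arbitrary pure qubit state
psi = np.array([0.6, 0.8j])
rho = np.outer(psi, psi.conj())

# (a) covert measurement: apply the Kraus pair and discard the outcome
rho_meas = sum(k @ rho @ k.conj().T for k in K)

# (b) classical phase-flip noise with probability p = sin^2(theta)
p = np.sin(theta) ** 2
rho_deph = (1 - p) * rho + p * Z @ rho @ Z

print(np.allclose(rho_meas, rho_deph))  # the two descriptions coincide
```

The practical advantage of view (a), for an engineer, is that the same Kraus machinery then handles the measurement back-action of the weak POVMs and the noise on an equal footing.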

This approach would give the Dyakonov challenge problem a unique informatic cast, and possibly some cryptographic content. “If someone is spying on your qubit, can you maintain its coherence?”

Surely, a central principle of quantum cryptography will be found lurking nearby!

]]>Whether this is a fair restriction or not depends on your point-of-view (obviously).

]]>E.g., Dyakonov could have made his point about the need for theoretical caution and humility with regard to noise mechanisms much more politely, and with greater respect for academic tradition, by reference to (from my bibtex database):

@inCollection{Orbach:72,
  author    = {R. Orbach and H. J. Stapleton},
  title     = {Electron Spin-Lattice Relaxation},
  booktitle = {Electron Paramagnetic Resonance},
  pages     = {121--216},
  editor    = {S. Geschwind},
  publisher = {Plenum Press},
  address   = {New York-London},
  year      = 1972,
  jasnote   = {This article gives a humility-inducing history of our understanding of spin-lattice relaxation, in which (a) theorists predict low relaxation rates, (b) experimentalists measure much higher relaxation rates, following which (c) theorists improve their theories, (d) experimentalists clean up their experiments, and (e) the cycle begins anew. Theorist: Waller. Experiment: Gorter. Theorists: Heitler, Teller, and Fierz; Kramers; Kronig; Van Vleck; Dyson; etc., 1932-1972. 146 references!}
}

Notice, by the way, all the famous physicists whose names are cited!

I guess the main respect in which I differ with your post is with regard to line-widths. Here it seems to me that Dyakonov is raising an interesting point, which he does not fully explain … here’s what I made of it:

Suppose we have a fault-tolerant, two-qubit storage register. Then we can identify the (time-increasing) error probability of that register with a traditional linewidth via the following *gedanken* experiment:

(1) Prepare an ensemble of 2-qubit registers, all having the same initialization.

(2) Perform the following step N times:

2A: store the registers for a time dt such that the error probability is small

2B: remove the registers, apply a magnetic field (to lift the energy degeneracy of the qubits), and apply a traditional NMR 2pi-pulse to each qubit. Then apply a reverse magnetic field to undo the energy shift.

(3) After the N steps, read out the registers. The measured quantity is the no-error probability P(omega), where omega is the carrier frequency of the NMR 2pi pulse.

The whole point of error correction (it seems to me) is to increase the on-resonance probability P(omega).

So I agree that Dyakonov’s language is imprecise (error correction makes the spectral peak P(omega) get higher, not narrower), but still, his basic point that quantum error correction has well-defined and extremely interesting implications for spectroscopy, seems correct to me.
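For the ideal limit of this gedanken experiment (a single qubit, no storage errors, one nominal 2pi pulse), the no-flip probability as a function of carrier detuning is just the textbook Rabi formula, and it already shows the spectral-line structure of P(omega); a quick numerical sketch, with assumed units in which the on-resonance Rabi frequency is 1:

```python
import numpy as np

# No-flip (return) probability of one qubit after a single nominal 2*pi pulse,
# as a function of carrier detuning Delta, from the Rabi flopping formula.
Omega = 1.0                        # on-resonance Rabi frequency (assumed units)
t = 2 * np.pi / Omega              # pulse duration: a 2*pi rotation on resonance
Delta = np.linspace(-2, 2, 401)    # detuning sweep; index 200 is Delta = 0

Omega_eff = np.sqrt(Omega**2 + Delta**2)
P_flip = (Omega / Omega_eff)**2 * np.sin(Omega_eff * t / 2)**2
P_return = 1 - P_flip

print(f"on resonance:   P = {P_return[200]:.6f}")
print(f"minimum over sweep: P = {P_return.min():.6f}")
```

In the full gedanken experiment, storage errors between pulses multiply this line shape by the register’s no-error probability, which is exactly how error correction would raise the on-resonance peak.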

]]>OK. I admit I stopped reading it after the first page or so. Here are some quotes which should suggest why

1) “The enormous literature devoted to this subject is purely mathematical. It is mostly produced by computer scientists with a limited understanding of physics”

It’s rather surprising to hear Manny Knill, Wojciech Zurek, John Preskill, Alexei Kitaev and others described as computer scientists with a limited understanding of physics.

2) “the number of continuous variables that we are supposed to control, is 2^1000 ∼ 10^300.”

No. We’re supposed to control 1000 qubits, and infinite precision is not necessary even under ideal circumstances.

3) “This striking statement implies among other things that, once the spin resonance is narrow enough, it can be made arbitrarily narrow by active intervention with imperfect instruments.”

It says nothing of the sort.

4) “How is it possible that by using only other identical pointers (also subject to random rotations) and some external fields (which cannot be controlled perfectly), it might be possible to maintain indefinitely a given pointer close to its initial position?”

It’s not.

Statements (3) and (4) are particularly troubling, as no-one who understood fault-tolerant quantum computing would ever say such things. So there is little reason to read the rest of the paper, except for the excellent drawing of a lion in Figure 1.

]]>John, the analogy with a system of classical spins does not come from the preprint but from the abstract of an old talk by Dyakonov (“quantum computing: view from the enemy camp”) that you pointed out in your first message.

]]>I wonder whether some people are criticising Dyakonov’s preprint, without having read it?

Dyakonov’s calculations do *not* treat qubits as classical objects. In fact, his calculations do not depart from quantum orthodoxy in any respect.

Rather, Dyakonov argues that because fault-tolerant algorithms have become central to building actual quantum computers (with many young people’s careers at stake, and large resources invested in expectation of a practical benefit), it is now obligatory to do detailed end-to-end design analyses of these algorithms.

In particular, Dyakonov’s thesis that we should be “extremely vigilant to the explicit or implicit presence of ideal elements within the error-correcting theoretical schemes” is a well-accepted principle in engineering.

To show why, I will mention that last week, the Chair of our Mechanical Engineering Department spent several hours evaluating (in the Chair’s office!) a “working prototype” of a perpetual motion machine.

Needless to say, this machine wasn’t *quite* working yet … it needed only a few “minor” improvements in a few “inessential” mechanisms. The point being that Dyakonov’s suspicion of ideal elements is well-founded in engineering practice.

It is often the case that elements that seem innocuous from an algebraic point of view seem unphysical from a geometric point of view. One example (of many) is projection operators that satisfy P^2=P. When one actually designs devices that implement projection operators, it is much more common to obtain operators that satisfy

(P^dagger)^2 P^2 = P^dagger P

which is not at all the same thing! The difference amounts to a phase noise that has a natural geometric interpretation as a “fuzz” on the state space trajectories.
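A toy numerical instance of this difference (with an assumed, purely illustrative phase error phi): a “projector” contaminated by a stray global phase fails P^2 = P, yet still satisfies the weaker condition above.

```python
import numpy as np

# Ideal rank-1 projector on a qubit: satisfies P^2 = P
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
assert np.allclose(P @ P, P)

# A phase-noisy "implementation": the same projector times a stray phase
# (phi is an assumed, illustrative phase error, not from any real device)
phi = 0.37
Q = np.exp(1j * phi) * P
Qd = Q.conj().T

print(np.allclose(Q @ Q, Q))            # False: Q^2 != Q, so not a projector
print(np.allclose(Qd @ Qd @ Q @ Q,      # True: (Q^dagger)^2 Q^2 = Q^dagger Q
                  Qd @ Q))              #       still holds
```

The stray phase is invisible to the weaker, engineering-accessible condition, which is exactly the “fuzz” on the state-space trajectories mentioned above.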

My own (very tentative) opinion is that, per the classic rabbi joke, “everyone is absolutely right.”

Given any one error mechanism, or a few of them, a fault-tolerant design can be found. These algorithms are wonderfully ingenious, and they point to deep theorems in many areas of mathematics; we’re surely a long way from understanding what they mean. But it is equally well-justified to ask whether the resulting ensembles of quantum trajectories are geometrically physical: they are Hilbert-space-filling, fractal-like manifolds whose resolution along certain axes can be made arbitrarily fine, according to the error-threshold theorems.

Dyakonov’s challenge problem is, IMHO, a good way to illuminate these issues.

Now, when doing direct quantum simulations, my laptop can handle Hilbert space dimensions of up to 2^18 (thanks, *Mathematica*!). But it is not clear (to me) whether 262,144 dimensions is enough to protect even a single qubit against realistic errors, much less two or three entangled qubits.
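For scale, a back-of-envelope sketch: at 2^18 dimensions a dense state vector is tiny, but a dense operator on the same space is already hopeless, which is why gates and Kraus maps must be applied matrix-free (tensor-factor by tensor-factor).

```python
# Back-of-envelope memory budget for direct state-vector simulation at n = 18 qubits
n = 18
dim = 2 ** n                 # 262,144-dimensional Hilbert space
state_bytes = dim * 16       # one complex128 amplitude per dimension
op_bytes = dim ** 2 * 16     # a dense operator on the same space

print(f"dim = {dim}")
print(f"state vector:   {state_bytes / 2**20:.0f} MiB")   # fits easily in RAM
print(f"dense operator: {op_bytes / 2**40:.0f} TiB")      # does not: go matrix-free
```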

Nonetheless, it would be fun—and highly instructive—to give Dyakonov’s challenge a try, specifically, by attempting to error-protect the entanglement of two or three qubits, implementing all calculations with weak POVMs, finite detector efficiencies, and error-prone unitary transformations.

Maybe over the Holidays?

]]>This is impossible. Fault-tolerance results tell us we can reduce the effective noise on an encoded qubit with a polylogarithmic overhead in resources. This allows us to increase the time we can reliably store an initial state, but not indefinitely.
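For concreteness, the standard concatenated-code scaling behind this statement, with assumed illustrative numbers (a below-threshold physical error rate and a 7-qubit code, e.g. Steane, at each level):

```python
# Threshold-theorem scaling for concatenated codes:
# after k levels of concatenation the effective logical error per step is
#     p_k = p_th * (p / p_th) ** (2 ** k)
# at a cost of d ** k physical qubits per logical qubit.
p_th = 1e-2   # assumed threshold error rate
p = 1e-3      # assumed physical error rate, below threshold
d = 7         # physical qubits per level (e.g. the Steane code)

for k in range(5):
    p_k = p_th * (p / p_th) ** (2 ** k)
    print(f"level {k}: {d**k:>4} physical qubits/logical, effective error ~ {p_k:.0e}")
```

The doubly-exponential suppression of p_k against the merely exponential-in-k qubit cost is the “polylogarithmic overhead”; for any fixed k the residual p_k is finite, which is why the storage time is extended but not made infinite.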

Since even a polylogarithmic overhead is presently a daunting challenge to experimenters, such proof-of-principle experiments will be extremely limited, but they will most certainly be appearing in the near future.

*shouldn’t we allow not only uncorrelated Markovian noise on the elementary bits, but also noise in the coupling constants of gates and unitary transforms?*

Threshold theorems already allow for both.

I have shamelessly adopted Scott’s retort to criticisms of fault-tolerant quantum computing. The critics are essentially saying they don’t believe quantum mechanics, and it would be nice if they’d (a) be more honest about it and (b) specify exactly where they expect quantum mechanics to break down.
