Try “The Backwards-Time Interpretation of Quantum Mechanics – Revisited With Experiment”, Paul J. Werbos – https://arxiv.org/ftp/quant-ph/papers/0008/0008036.pdf

With the class I’m considering here—call it QMA++, or QMA_{PDQP}—the crucial difference is that you can make an ordinary, collapsing measurement on part of the witness, and *then* make multiple nondestructive measurements on the remaining uncollapsed part. *That’s* the source of the (very large, I conjecture) additional power.
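A toy sketch of the distinction (my own illustration, not from the post; the state, function names, and the 2-qubit witness are all made up for exposition). An ordinary measurement samples an outcome *and* updates the state to the surviving branch; the hypothetical PDQP-style query samples from the Born distribution while leaving the state untouched, so it can be repeated at will:

```python
import random

# Toy 2-qubit "witness" stored as real amplitudes over basis states.
amps = {"00": 0.5, "01": 0.5, "10": 0.5, "11": -0.5}

def born_probs(state):
    """Born-rule probabilities |amp|^2 for each basis state."""
    return {b: a * a for b, a in state.items()}

def collapsing_measure_first_qubit(state):
    """Ordinary measurement of qubit 1: sample an outcome, then keep and
    renormalize only the surviving branch (the state is changed)."""
    probs = born_probs(state)
    p0 = probs["00"] + probs["01"]
    outcome = "0" if random.random() < p0 else "1"
    kept = {b: a for b, a in state.items() if b[0] == outcome}
    norm = sum(a * a for a in kept.values()) ** 0.5
    return outcome, {b: a / norm for b, a in kept.items()}

def non_collapsing_sample(state):
    """Hypothetical non-collapsing query: sample from the Born
    distribution WITHOUT updating the state."""
    r, acc = random.random(), 0.0
    for b, p in born_probs(state).items():
        acc += p
        if r < acc:
            return b
    return b

# QMA_{PDQP}-style protocol sketch: collapse part of the witness once...
outcome, post = collapsing_measure_first_qubit(amps)
# ...then take many non-collapsing samples of the uncollapsed remainder.
samples = [non_collapsing_sample(post) for _ in range(1000)]
```

The point of the sketch is only the asymmetry: `collapsing_measure_first_qubit` returns a new, pruned state, while `non_collapsing_sample` can be called on the same state any number of times.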

In that paper, in the proof overview of \(BQP/qpoly \subseteq QMA/poly\), when invoking \(QMA+ = QMA\) (Aharonov and Regev), you say:

“QMA = QMA+; informally, this result says that protocols in which Arthur is granted the (physically unrealistic) ability to perform ‘non-destructive measurements’ on his witness state, can be efficiently simulated by ordinary QMA protocols.”

Could you say a few words about how this is not the class you describe in #3?

I’ll admit this is my first contact with QMA+ and I haven’t been through the formal proof yet, so maybe I’ve just misunderstood QMA+…

Scott has given (at #7) what he had in mind. It turns out that I had read something a little different into it! (I too don’t know complexity theory/QMA/etc.) So, when I read:

…if we let the verifier make multiple non-collapsing measurements of the same state…

together with this:

…everyone knows you can’t sample multiple times from a conditional probability distribution, that it’s just an object that exists in your head—whereas because of interference, the multiple branches of a superposition feel like something you could almost reach out and touch!

what I read into it was that Scott had a *time-dependent* joint probability distribution in mind (say, as in RNN/RL). Once you make that assumption, everything becomes clear.

— —

Now, a bit additional (just sharing some thinking which occurred to me, on the QM side)…:

In both classical and quantum mechanics, if you interact with a system to sample from the joint probability distribution it initially has (or measure the System ket), and wish to repeat the sampling on the same initial distribution (or the same ket), then you have to *deliberately* act to restore the post-measurement distribution back to its initial state. You have to do that each time you wish to take a sample. Absent this restoration, you can’t possibly maintain even a pretense of “non-collapsing measurements”.
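A purely classical analogue of that restoration step (my own toy example; the urn and the function names are made up): drawing a ball from an urn without replacement changes the distribution, so to repeat a trial on the *same* initial distribution you must deliberately re-prepare the urn before each draw:

```python
import random

# Initial joint distribution: an urn with 3 red and 7 blue balls.
initial_urn = ["red"] * 3 + ["blue"] * 7

def sample_destructively(urn):
    """Draw and remove a ball: the post-measurement urn differs from
    the initial one, so a second draw samples a DIFFERENT distribution."""
    ball = random.choice(urn)
    urn.remove(ball)
    return ball

def repeated_trials(n):
    """Deliberately restore the urn to its initial state before every
    draw, so each sample really comes from the same distribution."""
    results = []
    for _ in range(n):
        urn = list(initial_urn)  # the deliberate restoration step
        results.append(sample_destructively(urn))
    return results

draws = repeated_trials(100)
```

Dropping the `urn = list(initial_urn)` line is exactly the failure mode described above: the sampling still runs, but each draw comes from a distribution the previous draws have already altered.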

…

Whenever people think or talk of doing repeated measurements on a System particle, the setup is not as simple as, say, in Tonomura’s famous double-slit experiment. There, the electron is emitted, travels through the chamber (with “slits”), and is *irreversibly* absorbed at the Detector.

The setup for the so-called “repeated measurements” would be more like this (which is just a guessed schematic; I am not an expert in this area):

Have a source emit a small object (massive, fermionic). Grab hold of it using optical tweezers. Use lasers to make it jump up/down in its state, and make appropriate spectroscopic measurements on photons respectively absorbed/emitted.

Note: The elementary QM particle hitting the detector is the photon, *not* the fermionic assembly trapped in the tweezers. The photon indeed is lost forever in the Detector even in such a setup; the fermionic assembly is *not*. … Doesn’t matter; photon number is never conserved anyway!

What “making repeated measurements” could mean, then, is this:

Emit the fermionic assembly only once, and reuse *it* — make it dance many times over. For instance, for emission spectroscopy, it would involve exciting the fermionic assembly to the *same* excited state as the initial ket, using the controlling lasers.

Now, getting the assembly to the same initial ket is what I meant by “restoration”. For each repeated measurement, the ket for the assembly must be ensured to be the same. This is the QMcal equivalent of ensuring the same initial joint probability distribution in classical mechanics.

Calling it a “repeated measurement” is, strictly speaking, a misnomer, because you aren’t measuring the same *photon* again and again.

In short, what we have here is just a practically convenient means of rapidly ensuring the same ket state, albeit with different particles. Measurements aren’t repeated on the same *particle*, but the ket can be ensured to be, within experimental tolerances, identical for each measurement.

BTW, the fermionic assembly is continuously interacting with the lasers. In contrast, in the classic QMcal experiments (including Tonomura’s), you can’t ensure the same initial ket for each measurement trial; Poisson statistics kick in.

— —

One more aside:

Measurements *always* (irreversibly) collapse *something*. The issue is precisely what undergoes the collapse.

The iqWaves theory (proposed by me) says that it’s the state of the *Detector* which collapses. In contrast, in the mainstream QM, it’s the state of the *System* particle itself that collapses.

Repeated measurements on the *same* QMcal particle must, as far as I can make out today, be impossible in principle. However, I haven’t yet developed the relativistic iqWaves theory, and photons can’t be handled in NRQM. So, please take this as a strictly tentative conclusion.

Thanks.

Best,

–Ajit

[PS: Sorry, there was some snag the first time I submitted the comment… Resubmitting.]

As it turns out, quantum mechanics doesn’t work that way either. Indeed, the collision lower bound, which I proved in 2001, showed that in general, any quantum algorithm needs many queries to f to find a collision pair—that is, an x≠y such that f(x)=f(y).
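For reference, here is what finding a collision pair looks like classically (my own sketch; the hash-based toy function `f` and the query budget are made up, not from the post). By the birthday bound, roughly the square root of the range size queries suffice on average; the quantum collision lower bound mentioned above says that even quantumly, many queries are still required:

```python
import hashlib

def f(x: int) -> int:
    """Toy many-to-one function: truncate a hash to a small 12-bit
    range, so collisions are guaranteed among a few thousand inputs."""
    h = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(h, "big") % (2 ** 12)

def find_collision(max_queries: int):
    """Classical birthday search: remember every output seen so far and
    stop at the first repeat. Returns (x, y) with x != y, f(x) == f(y),
    or None if the query budget runs out."""
    seen = {}
    for x in range(max_queries):
        y = f(x)
        if y in seen:
            return seen[y], x
        seen[y] = x
    return None

pair = find_collision(5000)  # 5000 > 2**12, so a pair is guaranteed
```

With 5000 queries into a 4096-element range, the pigeonhole principle guarantees a collision even in the worst case; the birthday bound just says one typically appears after far fewer, around 64 queries here.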

But, OK, this is much less obvious in the quantum case, since sometimes we actually *can* use quantum interference to learn properties of multiple preimages of f(x)—as for example with Simon’s algorithm! So in the quantum case, it might feel more tempting to bend the rules a little, imagine counterfactually that we *could* generally observe multiple preimages of f(x), and then see what the consequences would be.

There’s been a lot of work on reducing the classical cost, but mostly before April. Is advantage back?

the paper: https://arxiv.org/abs/2304.11119

That will come in handy once AIs are taking all the credit for answering questions.

The only area left for humans to take credit will be “I went out in the field for 2 weeks and found three new species of insects!”
