## Quantum miscellany

- Tomorrow at 1:30pm US Central time, I’ll be doing an online Q&A with Collective[i] Forecast about quantum computing (probably there will also be questions about AI safety). It’s open to all. Hope to see some of you there!
- Toby Cubitt of University College London is visiting UT Austin. We’ve been discussing the question: can you produce a QMA witness state using a closed timelike curve? Since QMA ⊆ PSPACE, and since Fortnow, Watrous, and I proved that closed timelike curves (or anyway, Deutsch’s model of them) let you solve PSPACE problems, clearly a closed timelike curve lets you solve QMA *decision* problems. But that’s different from producing the actual witness state as the fixed point of a polynomial-time superoperator. Toby has a neat recent result, which has as a corollary that you can produce the ground state of a local Hamiltonian using a CTC, *if* you have as auxiliary information the ground-state energy as well as (a lower bound on) the spectral gap. But you do seem to need that extra information.

Yesterday I realized there’s also a simpler construction: namely, take an n-qubit state from the CTC, and check whether it’s a valid QMA witness, having used Marriott-Watrous amplification to push the probability of error down to (say) exp(-n^{2}). If the witness is valid, then send it back in time unmodified; otherwise replace it by the maximally mixed state. If valid witnesses exist, then you can check that this sets up a Markov chain whose stationary distribution is almost entirely concentrated on such witnesses. (If no valid witnesses exist, then the stationary distribution is just the maximally mixed state, or exponentially close to it.) One drawback of this construction is that it can only produce a Marriott-Watrous state, rather than the “original” QMA witness state.
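The stationary-distribution claim in this construction can be sanity-checked in a toy classical analogue (my own sketch, not from the post): model the CTC's contents as a Markov chain over N basis states, with a hypothetical witness set W and a verifier error eps standing in for the Marriott-Watrous-amplified error exp(−n²).

```python
import numpy as np

# Toy classical analogue of the CTC construction above.
# Hypothetical parameters: N states, witness set W, verifier error eps
# (think exp(-n^2) after Marriott-Watrous amplification).
N, W, eps = 16, {3, 7}, 1e-4

# Column-stochastic transition matrix T[j, i] = Pr[next = j | current = i]:
# accepted (prob 1-eps for witnesses, eps for non-witnesses) -> sent back
# unmodified; rejected -> replaced by the uniform ("maximally mixed") state.
T = np.zeros((N, N))
for i in range(N):
    p_acc = 1 - eps if i in W else eps
    T[i, i] += p_acc               # accepted: state unchanged
    T[:, i] += (1 - p_acc) / N     # rejected: uniform over all states

# Stationary distribution = Perron eigenvector (eigenvalue 1) of T.
vals, vecs = np.linalg.eig(T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

print(pi[3] + pi[7])  # mass on the witness set: close to 1, up to O(N*eps)
```

With eps much smaller than 1/N, the non-witness states retain only O(N·eps) total stationary mass, matching the claim that the fixed point is almost entirely concentrated on valid witnesses.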

Is there a third approach, which overcomes the disadvantages of both mine and Toby’s? I’ll leave that question to my readers!

- On the theme of QMA plus weird physics, a wonderful question emerged from a recent group meeting: namely, what’s the power of QMA if we let the verifier make multiple non-collapsing measurements of the same state, as in the “PDQP” model defined by myself, Bouland, Fitzsimons, and Lee? I conjecture that this enhanced QMA goes all the way up to NEXP (Nondeterministic Exponential Time), by a construction related to the one I used to show that PDQP/qpoly = ALL (i.e., non-collapsing measurements combined with quantum advice lets you decide literally all languages), and that also uses the PCP Theorem. I even have some candidate constructions, though I haven’t yet proven their soundness.

In the past, I would’ve spent more time on such a problem before sharing it. But after giving some students a first crack, I now … just want to know the answer? Inspecting my feelings in my second year of leave at OpenAI, I realized that I still care enormously about quantum complexity theory, but only about getting answers to the questions, barely at all anymore about getting credit for them. Admittedly, it took me 25 years to reach this state of not caring.

Comment #1 September 19th, 2023 at 11:41 am

Of course Scott you know what comment I have to make, which is that for #3 we could ask the same question without the quantum, asking instead about MA and the “PDPP” model. 😀 😛

Comment #2 September 19th, 2023 at 12:09 pm

Sniffnoy #1: Absolutely correct, and that didn’t escape my notice! The difference is that the classical version feels *so* bizarre that probably no one would’ve studied it if they hadn’t first been thinking about the quantum case! Like, *everyone knows* you can’t sample multiple times from a conditional probability distribution, that it’s just an object that exists in your head—whereas because of interference, the multiple branches of a superposition *feel like* something you could almost reach out and touch! Even if the actual mathematical truth is much closer to classical probability theory than the hypesters want.

Comment #3 September 19th, 2023 at 3:57 pm

A professor of quantum information named Qubit? Nominative determinism at its finest!

Comment #4 September 20th, 2023 at 4:11 am

Scott #2: I’m curious—though I don’t know anything about MA and QMA—what do you mean by “everyone knows you can’t sample multiple times from a conditional probability distribution”? And why are the branches of a superposition analogous to this?

Comment #5 September 20th, 2023 at 9:16 am

Scott:

“Inspecting my feelings in my second year of leave at OpenAI, I realized that I still care enormously about quantum complexity theory, but only about getting answers to the questions, barely at all anymore about getting credit for them. Admittedly, it took me 25 years to reach this state of not caring.”

That will come in handy once AIs are taking all the credit for answering questions.

The only area left for humans to take credit will be “I went out in the field for 2 weeks and found three new species of insects!”

Comment #6 September 20th, 2023 at 10:39 am

On the issue of Deutsch’s model of CTC computation, has anyone tried modeling CTCs at the level of PDEs (say with time-periodic boundary conditions) to determine how CTC computations actually work and whether they conform to Deutsch’s model? I suppose one would want a way of determining the computational complexity of finding time-periodic solutions and an understanding of how “initial conditions” evolve to the solution of the problem.

Comment #7 September 20th, 2023 at 10:22 pm

Abel #4: “Everyone knows” that a probability distribution is just a representation of knowledge. So for example, if someone sampled a random string x and then computed f(x), and you observed f(x), then you might later also observe x—thereby learning, from your perspective, a random preimage of f(x). But you don’t then get the magical ability to sample *multiple* preimages of the same f(x), thereby learning y, z, etc. such that f(x)=f(y)=f(z)… (which would, for example, let you break collision-resistant hash functions). Probability just doesn’t work that way.

As it turns out, quantum mechanics doesn’t work that way either. Indeed, the collision lower bound, which I proved in 2001, showed that in general, any quantum algorithm needs many queries to f to find a collision pair—that is, an x≠y such that f(x)=f(y).
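The classical side of this point can be illustrated with a quick simulation (my own toy example, not from the comment): for a random function on M values, the expected number of fresh queries before you stumble on a collision scales like the birthday bound √M, rather than O(1) as it would if you could conjure extra preimages for free.

```python
import random

# Birthday-bound illustration: f is a hypothetical random function on M
# values. Each trial queries f at fresh random inputs until two inputs
# share an output (a collision pair x != y with f(x) = f(y)).
random.seed(0)
M = 10_000
f = [random.randrange(M) for _ in range(M)]

def queries_to_collision():
    seen = {}  # maps output value -> first input that produced it
    for q, x in enumerate(random.sample(range(M), M), start=1):
        y = f[x]
        if y in seen:
            return q  # q-th query collided with an earlier one
        seen[y] = x
    return M

trials = [queries_to_collision() for _ in range(200)]
avg = sum(trials) / len(trials)
print(avg)  # on the order of sqrt(M), not O(1)
```

(Quantumly, the collision lower bound mentioned above says you still need about M^{1/3} queries, so even interference only helps polynomially here.)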

But, OK, this is much less obvious in the quantum case, since sometimes we actually *can* use quantum interference to learn properties of multiple preimages of f(x)—as for example with Simon’s algorithm! So in the quantum case, it might feel more tempting to bend the rules a little, imagine counterfactually that we *could* generally observe multiple preimages of f(x), and then see what the consequences would be.

Comment #8 September 21st, 2023 at 2:08 am

Scott #7: Right! That makes sense, thanks!

Comment #9 September 21st, 2023 at 2:18 am

Abel #4:

Scott has given (at #7) what he had in mind. Turns out I had read something a little different into it! (I too don’t know complexity theory/QMA/etc.) So, when I read:

together with this:

what I read into it was that Scott had a *time-dependent* joint probability distribution in his mind (say as in RNN/RL). Once you make that assumption, everything becomes clear.

— —

Now, a bit in addition (just sharing some thinking that occurred to me, on the QM side):

In both classical and quantum mechanics, if you interact with a system to sample from a joint probability distribution it initially has (or measure the System ket), and wish to repeat sampling on the same initial distribution (or the same ket), then you would have to *deliberately* act to restore the post-measurement distribution back to its initial state. You would have to do that each time you wish to take a sample. Absent this restoration, you can’t possibly have even a pretense at “non-collapsing measurements”.

…

Whenever people think or talk of doing repeated measurements on a System particle, the setup is not as simple as, say, in Tonomura’s famous double-slit experiment. There, the electron is emitted, travels through the chamber (with “slits”), and is *irreversibly* absorbed at the Detector.

The setup for the so-called “repeated measurements” would be more like this (which is just a guessed schematic; I am not an expert in this area):

Have a source emit a small object (massive, fermionic). Grab hold of it using optical tweezers. Use lasers to make it jump up/down in its state, and make appropriate spectroscopic measurements on photons respectively absorbed/emitted.

Note: The elementary QM particle hitting the detector is the photon, *not* the fermionic assembly trapped in the tweezers. The photon indeed is lost forever in the Detector even in such a setup; the fermionic assembly is *not*. … Doesn’t matter; photon number is never conserved anyway!

What “making repeated measurements” could mean, then, is this:

Emit the fermionic assembly only once, and reuse *it*—make it dance many times over. For instance, for emission spectroscopy, it would involve exciting the fermionic assembly to the *same* excited state as the initial ket, using the controlling lasers.

Now, getting the assembly to the same initial ket is what I meant by “restoration”. For each repeated measurement, the ket for the assembly must be ensured to be the same. This is the QMcal equivalent of ensuring the same initial joint probability distribution from classical mechanics.

Calling it the “repeated measurement” is, strictly speaking, a misnomer, because you aren’t measuring the same *photon* again and again.

In short, what we have here is just a practically convenient means of rapidly ensuring the same ket state, albeit with different particles. Measurements aren’t repeated on the same *particle*, but the ket can be ensured to be, within experimental tolerances, identical for each measurement.

BTW, the fermionic assembly is continuously interacting with the lasers. In contrast, in the classic QMcal experiments (including Tonomura’s), you can’t ensure the same initial ket for each measurement trial. Poisson distribution kicks in.

— —

One more aside:

Measurements *always* (irreversibly) collapse *something*. The issue is what precisely it is that undergoes the collapse.

The iqWaves theory (proposed by me) says that it’s the state of the *Detector* which collapses. In contrast, in mainstream QM, it’s the state of the *System* particle itself that collapses.

Repeated measurements on the *same* QMcal particle must be in principle impossible, as far as I can make out, as of today. However, I haven’t yet developed the relativistic iqWaves theory, and photons can’t be handled in NRQM. So, please take it as a strictly tentative conclusion.

Thanks.

Best,

–Ajit

[PS: Sorry, there was some snag the first time I submitted the comment… Resubmitting.]

Comment #10 September 21st, 2023 at 7:30 am

By a coincidence of roughly 100:1, I just so happen to be procrastinating on reading “A Full Characterization of Quantum Advice” by opening this blog!

In that paper, in the proof overview of \(BQP/qpoly \subseteq QMA/poly\), when using \(QMA+ = QMA\) (Aharonov and Regev), you say:

“QMA = QMA+; informally, this result says that protocols in which Arthur is granted the (physically unrealistic) ability to perform ‘non-destructive measurements’ on his witness state, can be efficiently simulated by ordinary QMA protocols.”

Could you say a few words about how this is not the class you describe in #3?

I’ll admit this is my first contact with QMA+, and I haven’t been through the formal proof yet, so maybe I’ve just misunderstood QMA+…

Comment #11 September 21st, 2023 at 10:37 am

Marshall #10: Fantastic question!

With the class I’m considering here—call it QMA++, or QMA_{PDQP}—the crucial difference is that you can make an ordinary, collapsing measurement on part of the witness, and *then* make multiple nondestructive measurements on the remaining uncollapsed part. *That’s* the source of the (very large, I conjecture) additional power.