**Gil Kalai** says “the connection [of the Kalai postulates; per arXiv:quant-ph/0607021] with gauge theories and superradiance is very interesting.”

*Shtetl Optimized* readers might also enjoy reflecting upon two (much-cited) articles by Howard Georgi: “Unparticle Physics” (arXiv:hep-ph/0703260) and “Another Odd Thing About Unparticle Physics” (arXiv:0704.2457 [hep-ph]), together with the more recent preprint by James LeBlanc and Adolfo Grushin titled “Unparticle mediated superconductivity” (arXiv:1407.8492 [cond-mat.str-el]).

The LeBlanc/Grushin preprint in particular is interesting because (in their model) unparticle dynamics is emergent from a superradiant gauge field theory, namely the usual QED-interacting condensed-matter lattice.

Plausibly it’s only a matter of time until the first articles appear on the theme of “quantum corrections to unparticle/nonparticle dynamics” … these noise mechanisms quite plausibly (as it seems to me) can instantiate the Kalai Postulates.

Moreover, unparticle physics — or perhaps “nonparticle” physics would be better — emerges geometrically in the context of varietal dynamics, as those elements of dynamics that reflect varietal geometry rather than Hamiltonian flow. That’s why we quantum simulationists take an interest in this literature.

**Three Possibilities**

**#1** The dynamics of Nature is *strictly* varietal, such that the unparticle/nonparticle dynamical sector *strictly* accords with the Kalai Postulates.

**#2** The dynamics of Nature is *effectively* varietal, such that the unparticle/nonparticle dynamical sector *effectively* accords with the Kalai Postulates.

**#3** The dynamics of Nature is *strictly* Hilbert, such that the Kalai Postulates fail, and students can continue to worship devoutly at the Church of the Larger Hilbert Space, without much apprehension of missing out on a trendy Kalai-respecting unparticle/nonparticle physics bandwagon.

**Conclusion** Without regard for which possibility is *true*, it seems to me that the literature of #1-2 is exceedingly interesting and vigorous nowadays, and moreover its implications for mathematics, science, and engineering are sufficiently unexplored, that the #1-2 literature can be recommended in good conscience to STEM students generally.

**Ben Standeven** wonders “Surely the error rate can only grow linearly with the length of the process (so that the total amount of error grows quadratically)?”

One answer is that long-range gauge theories (*i.e.*, QED) generically exhibit *superradiance*.

For reasons that are not presently understood — and which plausibly can be understood in many ways, including Gil Kalai’s way — Nature inflexibly *requires* that the Hamiltonian flows of all quantum computers (and BosonSamplers too) be described by long-range gauge field theories.

The ubiquity of superradiant dynamics (broadly defined) has turned out to pose considerable challenges in the design of fault-tolerant universal quantum computers and scalable BosonSamplers … but are these “call to action” 20th century objectives infeasible *in principle* (given Nature’s gauge-field constraint)?

Ingenious arguments can be advanced to support both sides of this question, and inarguably the conservative answer is: no one knows. And that is the best reason to try.

I don’t see why this would block error correction, though. Surely the error rate can only grow linearly with the length of the process (so that the total amount of error grows quadratically)? You could compensate by doing more error correction.
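For concreteness, here is a minimal numeric sketch of that intuition, under the assumption of independent, uncorrelated errors at a fixed per-step rate `p` — which is exactly the assumption that superradiant (correlated) noise would violate:

```python
def expected_errors(p, steps):
    # With independent errors at rate p per step, the *expected* number
    # of errors does grow only linearly with the length of the process.
    return p * steps

def error_free_probability(p, steps):
    # But the probability of an entirely error-free run decays
    # exponentially in the number of steps, which is why fault tolerance
    # relies on active error correction below a threshold rate, not
    # merely on the slow growth of the expected error count.
    return (1 - p) ** steps

print(expected_errors(1e-3, 1000))         # 1.0 expected error in 1000 steps
print(error_free_probability(1e-3, 1000))  # about 0.37 (roughly 1/e)
```

The linear-growth picture breaks down precisely when errors are correlated across qubits or across time, as in superradiant emission.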

– Very often, for the quantum systems that people actually care about, it’s possible to make simplifying assumptions that make them tractable to simulate classically—at least if you only want to know basic things like the ground state energy, as is often the case. (E.g., a lot can be treated using Quantum Monte Carlo and DMRG.) Furthermore, even in a world where QCs become practical, we have to expect that classical simulation algorithms will continue to improve, as they have for decades.

– Recent studies by Matthias Troyer and others suggested that there will be very large polynomial overheads in simulating quantum chemistry using a qubit-based quantum computer. I’m sure the simulation algorithms will be improved (indeed, they already have been to some extent), but the ones we know right now look like they could require millions of qubits (and even more gate operations) before they’d start giving any useful information about chemistry.
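As a toy illustration of the classical side of this trade-off, here is a minimal exact-diagonalization sketch of a small transverse-field Ising chain (the model is my choice for illustration, not one from the discussion above; dense diagonalization is tractable only for small chains, which is where QMC and DMRG take over):

```python
import numpy as np

def tfim_ground_energy(L=8, J=1.0, h=1.0):
    """Ground-state energy of an open transverse-field Ising chain,
        H = -J * sum_i Z_i Z_{i+1}  -  h * sum_i X_i,
    by dense exact diagonalization of the full 2**L-dimensional matrix."""
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    Z = np.array([[1.0, 0.0], [0.0, -1.0]])
    I2 = np.eye(2)

    def site_op(op, i):
        # Embed a single-site operator at site i of the L-site chain.
        out = np.array([[1.0]])
        for j in range(L):
            out = np.kron(out, op if j == i else I2)
        return out

    H = np.zeros((2**L, 2**L))
    for i in range(L - 1):
        H -= J * site_op(Z, i) @ site_op(Z, i + 1)
    for i in range(L):
        H -= h * site_op(X, i)
    return float(np.linalg.eigvalsh(H)[0])

# Sanity check: with h = 0 the model is classical, and the ground-state
# energy is exactly -(L-1)*J (all spins aligned).
print(tfim_ground_energy(L=8, J=1.0, h=0.0))  # -7.0
```

The point of the exercise: the matrix is 2^L-dimensional, so brute force dies around L ≈ 20, and it is exactly this exponential wall that the smarter classical methods (and, one hopes, quantum simulators) are designed to get around.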

Despite the above points, I’m a guarded optimist about quantum simulation, and I tried to convey my optimism several times in the article. 🙂

My question is simply this: having a universal quantum computer would allow us to simulate a lot of quantum processes that are presumably intractable with classical computers today, no?

The article seems to say as much. I’d expect that alone would have huge and visible effects on technology, no? The article seems to really downplay this, though, and I couldn’t understand why.

It is possible that every QM state can be described well in the mathematical model we all think is good.

However, some mathematical states may not correspond to any physically possible state, thus making some sorts of theoretical QC computation impossible in practice.

It is a bit like what some people told me here: that it is not possible to have a wave function with no variance and mean, or for which the strong law of large numbers does not hold.

Another question is how Grover’s algorithm can work in practice: for it to be effective versus a classical computer, we would need a huge quantum database, if I am not wrong.

Something like a 1M-qubit database does not seem practical in the next 40 years.
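On the arithmetic behind that worry: Grover’s quadratic speedup is easy to verify in a small classical simulation (a toy sketch in plain Python; it says nothing about the hardware cost of actually holding N items in superposition, which is the commenter’s point):

```python
import math

def grover_search(n_qubits, marked=0):
    # Classically simulate Grover search over N = 2**n_qubits items:
    # start in the uniform superposition, then repeat the oracle
    # (phase-flip the marked amplitude) and the diffusion step
    # (inversion about the mean) about (pi/4) * sqrt(N) times.
    N = 2 ** n_qubits
    amp = [1.0 / math.sqrt(N)] * N
    iterations = round(math.pi / 4 * math.sqrt(N))
    for _ in range(iterations):
        amp[marked] = -amp[marked]           # oracle query
        mean = sum(amp) / N
        amp = [2.0 * mean - a for a in amp]  # diffusion operator
    # Return (oracle queries used, success probability).
    return iterations, amp[marked] ** 2

queries, prob = grover_search(10)
print(queries, prob)  # 25 queries for N = 1024 (vs ~512 classical on average)
```

The sqrt(N) query count is real, but so is the requirement of coherent access to all N items — which is why a quantum database at the millions-of-qubits scale is the bottleneck the comment points at.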