Quantum Algorithms for Quantum Field Theories

For weeks, I’ve been meaning to blog about an important recent paper by Stephen Jordan, Keith Lee, and John Preskill, entitled Quantum Algorithms for Quantum Field Theories.  So I’m now doing so.

As long as I’ve been in quantum computing, people have been wondering aloud about the computational power of realistic quantum field theories (for example, the Standard Model of elementary particles).  But no one seemed to have any detailed analysis of this question (if there’s something I missed, surely commenters will let me know).  The “obvious” guess would be that realistic quantum field theories should provide exactly the same computational power as “ordinary,” nonrelativistic quantum mechanics—in other words, the power of BQP (the class of problems solvable in polynomial time by a quantum computer).  That would be analogous to the situation in classical physics, where bringing in special relativity dramatically changes our understanding of space, time, matter, and energy, but seems (unlike quantum mechanics) to have little or no effect on which computational problems can be solved efficiently.  Analogously, it would seem strange if quantum field theories (QFTs)—which tie together quantum mechanics, special relativity, and detailed knowledge about the elementary particles and their interactions, but seen from far enough away are “just” quantum mechanics—forced any major revision to quantum computing theory.

Until now, though, there seems to have been only one detailed analysis supporting that conclusion, and it applied only to (2+1)-dimensional topological QFTs (TQFTs), rather than “realistic” (3+1)-dimensional QFTs.  This was the seminal work of Freedman, Kitaev, and Wang and of Freedman, Larsen, and Wang in 2000.  (Six years later, Aharonov, Jones, and Landau gave a more computer-science-friendly version, by directly proving the BQP-completeness of approximating the Jones polynomial at roots of unity.  The latter problem was known to be closely related to simulating TQFTs, from the celebrated work of Witten and others in the 1980s.)  To a theoretical computer scientist, dropping from three to two spatial dimensions might not sound like a big deal, but what’s important is that the relevant degrees of freedom become “topological”, making possible a clean, simple model of computation.  For “realistic” QFTs, by contrast, it wasn’t even obvious how to define a model of computation; putting realistic QFTs on a rigorous mathematical footing remains a notorious open problem.

In their new paper, Jordan, Lee, and Preskill give an algorithm, running on a “conventional” quantum computer, to estimate scattering probabilities in a class of QFTs called “continuum φ^4 theories.”  Their algorithm uses time polynomial in the number of incoming particles in the scattering experiment and in their total energy, and inverse-polynomial in the desired precision ε and in the distance λ − λ_c between the QFT’s coupling constant λ and a phase transition λ_c.  (In d=2 spatial dimensions, they say the dependence on the precision scales like (1/ε)^2.376, the 2.376 coming from matrix multiplication.  Naturally, that should now be amended to (1/ε)^2.373.)  To develop their algorithm, Jordan et al. apparently had to introduce some new techniques for coping with the error incurred by discretizing QFTs.  No classical algorithm with similar scaling is known—so when suitably formalized, the “QFT simulation problem” might indeed be in BQP \ BPP, matching the uninformed doofus intuition of complexity theorists like me.  Jordan et al. don’t say whether the problem they’re solving is also BQP-complete; I imagine that could be a topic for future research.  They also don’t say whether their precision parameter ε bounds the variation distance between the real and simulated output distributions (rather than just the differences between probabilities of individual scattering outcomes); I hope they or someone else will be able to clarify that point.
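
To get a feel for what that scaling means, here is a toy cost model in Python.  To be clear, this is purely my own illustration: apart from the (1/ε)^2.373 precision dependence quoted above for d=2, every exponent below is an invented placeholder, not a value from the paper.

    # Toy cost model: "polynomial in n and E, inverse-polynomial in the precision
    # and in the distance from the phase transition."  Illustration only: the
    # exponents a, b, and g are hypothetical placeholders; only the 2.373
    # precision exponent (for d = 2) comes from the discussion above.
    def toy_simulation_cost(n_particles, total_energy, eps, coupling_gap,
                            a=2.0, b=2.0, precision_exp=2.373, g=1.0):
        return (n_particles ** a
                * total_energy ** b
                * (1.0 / eps) ** precision_exp
                * (1.0 / coupling_gap) ** g)

    # Halving the error tolerance multiplies the cost by about 2**2.373 ≈ 5.2.
    print(toy_simulation_cost(4, 100.0, 1e-2, 0.1) /
          toy_simulation_cost(4, 100.0, 2e-2, 0.1))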

In case it isn’t obvious yet, let me make it crystal-clear that I lack the physics background to evaluate Jordan et al.’s work in a serious technical way.  All I can say with confidence is that the small number of people who (1) have the requisite background and (2) care about computational complexity, will probably spend non-negligible time discussing and understanding this paper in the weeks and months to come.


Conflict-of-Interest Warning: At a deep, subconscious level, I probably chose to blog about Jordan et al.’s paper not for any legitimate scientific reason, but simply because I know John Preskill and Stephen Jordan personally, and, despite being physicists, they’re both tremendously-respected colleagues who’ve made many outstanding contributions to quantum computing theory besides this one.  Then again, everything I’ve ever done—and everything you’ve ever done—has probably had such unsavory hidden motives as well, so who’s counting?  In all of history, there have only been ten or twenty people whose commitment to scientific objectivity has been absolute and pure, and since they comment on complexity blogs anonymously, we’ll probably never even know their names…

18 Responses to “Quantum Algorithms for Quantum Field Theories”

  1. Henry Y Says:

    Has anyone given formal evidence that computers that can take advantage of special relativity aren’t any more powerful than classical computers, as you suggested?

  2. Scott Says:

    Henry Y.: It’s an excellent question. I don’t know of any formal analysis, and will be grateful if someone can point us to one.

    The informal argument could be summarized as follows:

    Given that SR still envisions a “classical” state space—i.e., a collection of particles that, once we fix a reference frame, have definite positions and momenta that evolve forward in time by simple deterministic rules—there are basically only two aspects of SR that you might worry could change the computational situation, compared to whatever was true in Newtonian physics.

    The first aspect is the universal speed limit, which you might worry restricts computation, since now signals can’t travel from one part of the computer to another faster than c. And indeed, if you care about massively-parallel, sublinear-depth computations, then it’s clear that the finiteness of the speed of light does restrict what you can compute. On the other hand, if you only care about what’s computable in polynomial time, then we all know that Turing machines (in any number of dimensions) are polynomially equivalent to random-access machines. So the Extended Church-Turing Thesis looks safe.
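
    To make that explicit with a crude bound (my own back-of-the-envelope, nothing deeper): if the computer has N components packed into 3-dimensional space, its diameter is O(N^{1/3}), so any signal between two components arrives within time

        t_{\rm signal} \;\le\; \frac{\mathrm{diameter}}{c} \;=\; O\!\left(\frac{N^{1/3}}{c}\right),

    and respecting the speed limit therefore slows each step down by at most a factor polynomial in the machine’s size, which is exactly the kind of polynomial overhead the Turing-machine/RAM equivalence already absorbs.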

    The second aspect of SR that you might worry about is time dilation. Can you perform an arbitrarily long computation with minimal effort, by leaving your computer on Earth, boarding a spaceship that accelerates to close to the speed of light, then turning around and returning to Earth, where you find civilization collapsed, your friends long dead, and the Sun going cold, but your important computation finished?

    Here, as I like to point out in talks, the crucial problem is the energy needed to accelerate to relativistic speed. Indeed, if you want to get a superpolynomial speedup by the above means, it’s not hard to show that you need to accelerate your spaceship to faster than c-1/p(n) for any polynomial p. But that, in turn, requires a superpolynomial expenditure of energy (assuming, of course, that you have nonzero mass!). So, as long as we make the reasonable (and separately justifiable) assumption that in T seconds you can only collect poly(T) joules of energy, the Extended Church-Turing Thesis once again seems safe.
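
    To spell out the arithmetic behind that claim (a back-of-the-envelope sketch, keeping only the kinematics): if the ship travels at speed v = (1 − δ)c, the time-dilation factor is

        \gamma \;=\; \frac{1}{\sqrt{1 - v^2/c^2}} \;=\; \frac{1}{\sqrt{2\delta - \delta^2}} \;\approx\; \frac{1}{\sqrt{2\delta}},

    so a superpolynomial speedup (i.e., a superpolynomial γ) forces δ to be smaller than 1/p(n) for every polynomial p.  But the kinetic energy needed to bring a ship of rest mass m up to that speed is

        E_{\rm kin} \;=\; (\gamma - 1)\,m c^2 \;\approx\; \frac{m c^2}{\sqrt{2\delta}},

    which is then superpolynomial in n as well.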

    Now, keep in mind that people argue about the status of the CT and ECT even in the context of pre-relativistic, Newtonian physics! For example, is it possible to build a “Zeno computer,” which executes the first step in 1 second, the second step in 1/2 second, the third step in 1/4 second, the fourth step in 1/8 second, and so on? (Personally, I believe the answer is no, but that the ultimate explanation requires quantum gravity.)
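
    (For concreteness: the total running time of such a machine would be

        1 + \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots \;=\; \sum_{k=0}^{\infty} 2^{-k} \;=\; 2

    seconds, even though it executes infinitely many steps; that is exactly what would let it decide, say, the halting problem.)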

    However, to whatever extent you believed the ECT in the Newtonian case, the two intuitive arguments above are why it seems to me that SR shouldn’t change your belief.

    Indeed, I can say something stronger. Even if you doubted the ECT in the Newtonian case—because you worried about a Zeno computer that would work by repeatedly doubling the transmission speed of signals—SR should reassure you that that particular type of Zeno computer can’t exist.

  3. Moshe Says:

    Thanks for pointing out this fascinating paper, and in general for dealing with this noisy environment. Hope the various annoyances involved will not stop you from continuing.

    One quick comment, before looking at that paper.  I’d think that the main new ingredient in QFT (as compared to “non-relativistic” QM) is the infinite number of degrees of freedom.  In other words, I’d expect relativistic and non-relativistic QFTs to behave the same way.  Whether or not this new type of infinity is relevant for computational complexity is an interesting issue.

  4. Scott Says:

    Thanks, Moshe!

    You might be right about the infinite degrees of freedom issue: certainly, a large fraction of the technical work they do is related to discretization error. (Much of the rest seems related to the facts that (a) the QFTs are defined by perturbation expansions, and (b) at the simplest level, they need to simulate a “realistic system” of interacting Gaussian wavepackets using a qubit-based QC.)
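
    For concreteness, here’s a toy caricature in Python of what the discretization step looks like; this is my own illustration, not the construction in the paper, which of course works with quantum operators and encodes the field value at each lattice site into qubits.  The idea is to put the 1+1-dimensional φ^4 Hamiltonian on a spatial lattice of N sites with spacing a, replacing the spatial derivative by a nearest-neighbor finite difference:

        import numpy as np

        # Toy classical caricature of lattice discretization (not the JLP
        # construction): the continuum energy density
        #     (1/2) pi^2 + (1/2) (d phi/dx)^2 + (1/2) m^2 phi^2 + (lambda/4!) phi^4
        # becomes a sum over N sites with spacing a, with the derivative replaced
        # by a nearest-neighbor finite difference (periodic boundary conditions).
        def lattice_phi4_energy(phi, pi, a=0.1, m=1.0, lam=0.5):
            grad = (np.roll(phi, -1) - phi) / a
            density = (0.5 * pi**2 + 0.5 * grad**2
                       + 0.5 * m**2 * phi**2 + (lam / 24.0) * phi**4)
            return a * density.sum()   # Riemann sum over the lattice

        # Example: energy of a static Gaussian field profile on 64 sites.
        x = np.linspace(0.0, 6.4, 64, endpoint=False)
        print(lattice_phi4_energy(np.exp(-(x - 3.2)**2), np.zeros_like(x)))

    Roughly speaking, the discretization error they have to control is how far the physics of a lattice model like this departs from the continuum theory as a function of the spacing a.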

    But one issue that I want to understand better—and a place where SR could certainly be relevant—is the polynomial dependence of the simulation cost on the incoming particles’ total energy. As the paper points out, such a dependence seems necessary because the greater the energy, the more new particles can be created. But is there any connection to the “other” relativistic tradeoff between energy and computation time (the one I discussed in comment #2)?

    More generally, it would be extremely interesting to know which aspects (if any) of the simulation procedure become easier or harder in the case of non-relativistic QFTs.

  5. Douglas Knight Says:

    You should only need a conflict of interest disclaimer if the author went to your wedding.

  6. Gil Kalai Says:

    It is worth mentioning that the difficulty of doing QFT computations on digital computers was a major motivation mentioned in Feynman’s original paper proposing the idea of quantum computers.  So, in a sense, the paper of Jordan, Lee, and Preskill confirms a conjecture that goes back to Feynman’s paper.

  7. wolfgang Says:

    I wonder why they consider this particular model.
    It is widely assumed (known?) that phi^4 is indeed a trivial QFT.

    I think one should look at QCD models (e.g. pure SU(2) or SU(3) without fermions), which are physically more interesting and for which there is already know-how about discretization.

  8. Stephen Jordan Says:

    Hi Wolfgang,

    You raise an important point. It is believed that the only continuum limit of phi^4 in four-dimensional spacetime is the noninteracting theory. (Even so, phi^4 can be a low-energy effective theory in four dimensions.) This is one reason why we also analyze three- and two-dimensional spacetime. The main reason we focused on phi^4 is that it is a nice simple model that allows us to attack the most essential challenges of simulating QFTs without getting mired in too many technical details. We are currently working on extending our results to other models, such as those involving fermions.

  9. John Sidles Says:

    As an adjunct to the (excellent) Jordan-Lee-Preskill manuscript — and thank you for drawing attention to it, Scott — please let me commend the Reviews of Modern Physics survey by Asher Peres and Daniel Terno titled “Quantum Information and Relativity Theory” (arXiv:1111.3633).

    The Peres-Terno (PT) survey complements the Jordan-Lee-Preskill (JLP) manuscript in at least the following three respects: (1) JLP describes scattering theory as a unitary process; the PT discussion of (e.g.) Unruh radiation reminds us that relativistic noise processes are just as subtle as relativistic scattering processes. When we reflect that the unitary scattering processes that are hardest to simulate computationally are precisely those most affected by noise and/or critical-point slowing, we appreciate that this PT discussion helps illuminate the central role of open-system dynamics even in unitary scattering theory.

    (2) The JLP manuscript describes those classical simulation methods whose correctness and efficiency have been rigorously proved as “known”, which is the proper and accustomed mathematical usage of “known.” But it is well to keep in mind too the accustomed physics usage of “known” that Feynman summed up in his aphorism “Much more is known than has been proved.” Embracing the physics usage of “known” helps us recognize that even today we already “know” reasonably well how to simulate, with classical computational resources, the dynamic properties even of tremendously complex systems (e.g., the phase diagram of water, the mechanical properties of solids, the mass spectrum of hadrons, the self-replication of DNA, and even the unitary scattering matrix of \phi^4 theory). On the other hand, two key mysteries that we presently don’t understand are: (a) why our presently “known” classical simulation methods are so strikingly effective, and (b) how much further we can extend these “known” methods.

    (3) Together the JLP and PT monographs show us some of the striking limitations of today’s quantum understanding: (a) relativistic noise mechanisms (like Unruh processes) are only dimly illuminated by our understanding; (b) the back-action of quantum dynamics on space-time geometry is vaguely outlined by string theory (which perhaps is not even correct); and (c) the feasibility of describing quantum dynamics via non-Hilbert state-spaces remains shrouded in near-total mathematical mystery.

    Thus (for me) a central lesson of the JLP and PT monographs is that “Absence of a theorem is not a theorem of absence”, meaning that the future illumination of these present vast areas of darkness may well require substantial revisions of the physical postulates upon which we ground our present-day quantum informatic understanding.

  10. Physics Monkey Says:

    Great post, so great that I decided to end my long time lurking and write a response.

    It’s a cool paper. I would say, at least among the people I know, there was no real doubt that one could simulate quantum field theories using ordinary quantum computers. To support this claim I offer a proof by arXiv: it would be way too sad if all the people posting to hep-lat (who are using classical computers no less!) were to discover that their task was hopeless even with a quantum computer.

    More seriously, any real experiments take place in finite volume over a finite time and with finite energy and hence can only probe a finite number of degrees of freedom of any putative quantum field theory. This is roughly the content of effective field theory, and it’s something we quantum matter types use all the time. Field theories are always emerging as the low energy effective description of collections of qubits e.g. 2+1d TQFTs in fractional quantum Hall fluids.
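
    To put a crude number on “finite number of degrees of freedom” (a standard estimate, nothing specific to this paper): for a field in a box of linear size L with a momentum cutoff Λ, the number of independent modes is roughly

        N_{\rm modes} \;\sim\; \left(\frac{L\Lambda}{2\pi}\right)^{d},

    which is finite for any finite L and Λ; this is essentially why a cutoff effective field theory can be captured by finitely many qubits.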

    What seems to me to be really interesting in this paper is the complexity minded analysis of discretization effects. I feel that this suggests a more general program of carefully studying the resources needed to deal with the infinite number of irrelevant operators that arise in regularizations of quantum field theories. Of course, a lot is known about this issue already, but I bet a complexity perspective could add something new.

    PS Another point of view on the “weakness” of relativity comes from the existence of non-relativistic qubit models where Lorentz invariance emerges at low energies. On the other hand, the existence of the black hole entropy bound in gravity offers further evidence that quantum field theories coupled to gravity are not truly infinite objects.

  11. John Sidles Says:

    Excellent post #10 by Physics_Monkey (IMHO) … if other self-described “long-time lurkers” are tempted to post similarly well-reasoned and well-mannered comments, then please let me express the hope that they will do so.

  12. asdf Says:

    Hey Scott, speaking of relativistic computing, this is not quantum but I thought you might like it: http://arxiv.org/abs/1105.0047

    It’s about infinite computations in closed timelike curves. It’s written by logicians and is not very “physical”, but it makes the interesting claim that the speed-of-light limit is an artifact of 3+1 spacetime and suggests superluminal travel is possible in 3+3.

  13. sodabeer Says:

    Interesting disclaimer. As someone who reads these TCS blogs occasionally, it seems to me that promotion of your field is a good thing for your field, and beyond. In fact, it makes clear what problems interest people from TCS, and that there are many places to make a breakthrough. So anyone can take a shot at the problems that are deemed valuable in this field – and good PR in this field raises the value of such results, compared to equally (or more) difficult results from other fields. By sheer market logic, this should attract people to work on these problems, even if they are physicists (or mathematicians, etc.).

    However, one wonders if outsider results would get the same sort of excitement. For instance, if an outsider improved the bound on the exponent for matrix multiplication (or proved BQP=BPP, or got some new lower bounds, where TCS seems to be relatively stuck), isn’t the default reaction some sort of protective ignoring from the insiders? That does not seem very plausible for reasonably big results, so more probably it is fair game if a breakthrough is clear (the matrix exponent is perhaps not such a case – it could just as well remain ignored).

    Is it worth playing? Maybe the level of outreach (much better than in some other fields, though there are plenty of even better-sold areas, like economics, which seems to rely on inbreeding much more, and in frighteningly unobjective ways) is a bit misleading, as TCS is fighting for its share of influence, and it might be much worse off than it would appear from the blogosphere. One thing working for TCS is that it seems relevant to a large number of people, and it is also easier to understand than some obscure technical fields in, say, statistics or theoretical physics. Another good thing for TCS and its expanding influence is that it is a field that is gaining ground – there are very few jobs, hence few people working in it, hence a better chance to make a “breakthrough,” if one can afford the time and effort.

    In any case, the way you people promote your stuff gives some hope in the despairing world in which no one cares.

  14. Moshe Says:

    Having looked briefly at the paper, I am not sure that non-relativistic QFTs would differ from massive relativistic theories in any important way. On the other hand, massless field theories might have new issues, since you can excite an arbitrarily large number of modes using arbitrarily little energy. Anyhow, a thought-provoking paper; thanks again for pointing it out.

  15. Vikram Says:

    This is quite interesting, actually. I haven’t read the paper (yet), but it seems like it might complement the answer you gave to my earlier question [1], Scott. Great post though!

    http://cstheory.stackexchange.com/questions/7669/ed-wittens-new-paper-and-the-simulation-of-a-quantum-field-theory

  16. Martin Schwarz Says:

    Jordan, Lee, and Preskill have posted a 63-page extended version of their paper on the arXiv:

    http://arxiv.org/abs/1112.4833

  17. Slipper.Mystery Says:

    Sidles #9: the PT article is probably quant-ph/0212023 (Rev. Mod. Phys. 76:93-123, 2004).

  18. rrtucci Says:

    I translated Scott’s blog post into my native, lower-brow language:

    Of Bananas and String Theorists