Quantum Computing Since Democritus Lecture 16: Interactive Proofs

In which I try to give a non-rigorous taste of the interactive proofs revolution that rocked the complexity world in the 1990s, as well as its consequences for circuit lower bounds. I argue that these results matter because they offer a tiny glimpse of how one can exploit the structure of problems like 3SAT to prove lower bounds—something we know will eventually be needed for the P vs. NP question. If you got off the train before its latest tour through the Complexity Badlands, don’t worry: it will double back into Philosophers’ Valley (where everyone has an opinion and no one has a result) by Lecture 17 (“Fun With Anthropic Principles”).

47 Responses to “Quantum Computing Since Democritus Lecture 16: Interactive Proofs”

  1. matteo martini Says:

    Hi Scott,
    apparently Geordie Rose has built a 28-qubit QC, please read his blog.


  2. KaoriBlue Says:

    Seth Lloyd is a co-author on the paper matteo is referencing.

  3. Chris Drost Says:

    Isn’t the answer to the last question just 1/3?

    Because if God flipped two coins in order, with the structure (TT) = redhead, (TH) = redhead, (HT) = redhead, (HH) = greenhead, then there’s no ambiguity here: the probability that the first flip was heads given that you are a redhead is clearly 1/3.

    (I guess I could do something drawn-out with Bayes’ Theorem, but this seemed like a simpler argument.)
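
    Chris’s two-coin framing is easy to sanity-check numerically. Here is a quick Monte Carlo sketch of exactly the setup he describes (my illustration, not part of the thread):

```python
import random

def trial():
    # First flip: which world God creates (T = one redhead, H = redhead + greenhead).
    world = random.choice("HT")
    # Second flip: which inhabitant "you" wake up as, in the heads-world.
    hair = "red" if world == "T" else random.choice(["red", "green"])
    return world, hair

random.seed(0)
samples = [trial() for _ in range(100_000)]
reds = [world for world, hair in samples if hair == "red"]
print(sum(1 for world in reds if world == "H") / len(reds))  # ≈ 1/3
```

    Under Chris’s assumption the conditional frequency settles near 1/3; a different sampling model (like the one Luca proposes later in the thread) changes the answer.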

  4. Scott Says:

    Teo, in the future, I’d prefer you didn’t bring D-Wave into unrelated threads. We’ve already discussed them exhaustively in this space, I’m bored of it, and I haven’t seen anything new and unexpected (for example, a demonstration of entanglement between superconducting qubits) that would make me want to pay attention again.

    I’m also not sure exactly what your comment is referring to. Their announcement of a “28-qubit QC” was months ago already, and is open to the same objections as the earlier “16-qubit QC” (see my previous posts). Geordie’s most recent blog post is about a measurement of transitions in two-level systems, and seems completely unobjectionable to me, but also unrelated to the real sticking point: namely entanglement between different two-level systems (which apparently hasn’t yet been demonstrated).

  5. Scott Says:

    Chris: That’s a nice argument! But you’re implicitly assuming that the right way to think about it is that there’s a first fair coin flip to decide which world you’re in, and then (conditioned on being in the heads-world) a second fair coin-flip to decide whether you’re the redhead or the greenhead. Can you think of a different plausible-seeming assumption that would give an answer of 1/2? (If not, read the papers of Nick Bostrom, or tune in for Lec17…)

  6. KaoriBlue Says:

    “I’d prefer you didn’t bring D-Wave into unrelated threads…”

    Sorry for being rude with regards to that. There’s been a lot of negative press about anything having to do with the company, and (especially since Seth Lloyd was a co-author) I just wanted to point out that it appears to be a good/rigorous result.

  7. komponisto Says:

    What are the reasons for believing P != BQP, apart from the mere fact that people have yet to find efficient classical algorithms for certain problems (such as factoring)?

  8. Scott Says:

    komponisto: Currently, the reasons boil down to

    (1) The failure to find polynomial-time classical algorithms for simulating quantum physics or for problems like factoring and discrete log,

    (2) Intuitions developed over 40 years about what polynomial-time algorithms can and can’t do, which suggest to many of us that polynomial-time algorithms for these problems are exceedingly unlikely,

    (3) Results in communication complexity, query complexity, and other settings, where quantum computers provably and unconditionally give an exponential speedup over classical computers. (In particular, these results tell us that any proof of P=BQP would have to be non-relativizing and even non-algebrizing.)

    It should go without saying that a proof of P=BQP would be one of the greatest achievements in the 2300-year history of mathematics (not least because it would imply a fast factoring method). It would be not quite as world-shattering as P=NP, but in a similar league.

    Now, you might ask the following slightly more technical question: can we make our belief in P≠BQP “almost as solid” as our belief in P≠NP? For example, can we show that if P=BQP then the polynomial hierarchy collapses? Alas, even this seems to be far beyond our current ability.

    On the other hand, I believe it might be possible to show that if FBQP=FBPP (i.e., all functional/relational problems solvable in quantum polynomial time are also solvable in randomized polynomial time), then P^#P=PH (which in particular means that PH collapses). This is actually something I’ve been thinking about this summer. Were it true, it would mean our belief that “quantum computers are hard to simulate classically” is at least as secure as an extremely widely-held belief about classical complexity classes.

  9. matteo martini Says:

    Sorry for talking about D-Wave in unrelated topics.

    Keep up the good work!!

  10. Ben Heaton Says:

    Speaking of IP=PSPACE, I made a comic a couple weeks ago that’s sort of about that. Might amuse you a bit.

  11. Greg Egan Says:

    On God’s coin flip, there’s a very nice recent paper on a different but related question, The Sleeping Beauty Paradox.

    Hartle and Srednicki’s “Are We Typical?” also gives a useful perspective.

  12. harrison Says:

    On the other hand, I believe it might be possible to show that if FBQP=FBPP (i.e., all functional/relational problems solvable in quantum polynomial time are also solvable in randomized polynomial time), then P^#P=PH (which in particular means that PH collapses). This is actually something I’ve been thinking about this summer.

    Okay, I’ll bite; why do you think that result in particular might be within reach?

  13. Scott Says:

    Ben: Thanks for the comic! 🙂 You’re continuing a proud legacy of IP=PSPACE-related comic strips.

  14. Scott Says:

    Okay, I’ll bite; why do you think that result in particular might be within reach?

    Okay, I’ll tell you. Consider the problem of counting the number of roots of a degree-3 polynomial p(x_1,…,x_n) over GF[2]. It’s known that this problem is #P-complete. Suppose furthermore that it were random self-reducible (as are other #P-complete problems, like the Permanent). Then FBPP=FBQP ⇒ P^#P=AM.

    (Why? In the tradition of 16th-century mathematics, I’ll let people think about it for a bit… 🙂 )
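
    For concreteness, root-counting over GF[2] can be brute-forced on toy instances (my illustration; the cubic below is made up, and the #P-hardness of course only bites when n is too large to enumerate):

```python
from itertools import product

def count_roots(poly, n):
    """Count the points of GF(2)^n on which poly vanishes mod 2."""
    return sum(1 for x in product((0, 1), repeat=n) if poly(x) % 2 == 0)

# A made-up degree-3 polynomial over GF(2) in three variables:
#   p(x0, x1, x2) = x0*x1*x2 + x0*x1 + x2
p = lambda x: x[0] * x[1] * x[2] + x[0] * x[1] + x[2]
print(count_roots(p, 3))  # → 3
```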

  15. Douglas Knight Says:

    Maybe you should have an open thread, to encourage D-Wave discussion? It would also be a place to put comments like the following, for the closed thread on QC skepticism:

    John Sidles,
    sorry about the long delay. I wanted to look more at Ashtekar-Schilling before responding, but I didn’t, so now it’s the worst of both worlds. I don’t have much to say about the possibility of QC, except that I’d file an AS problem under “complicated states are unphysical,” not under “small amplitudes are unphysical.”

    I want to say something about communication. I think a common failure of communication is when something (like the AS framework) is very general, but one or more groups use it to refer to some more specific part. It’s a hard problem to debug, because people check the denotations of words, but don’t think to check the connotations (or aren’t really aware of them). I took AS to refer to deformations of QM, not (deformations of) more constrained systems. Another example is Kähler manifolds. When you advocate them, are you referring to the AS framework?

  16. Luca Says:

    I guess the other probabilistic set-up for the thought experiment at the end of the lecture is that first God randomly and uniformly assigns “me” to one of the three people who could possibly exist (redheadA, redheadB, greenheadB), and then picks universe A or universe B uniformly.

    Then I have probability 1/2 of not existing, and probability 1/6 of being either of rhA, rhB, or ghB. Given that my hair is not green, I am equally likely to be in universe A or B.

    According to this line of reasoning, conditioned on being alive, I am more likely to be alive in a universe in which more people live, even if all universes are equally likely. I suppose this is a ‘feature’ of the reasoning rather than a refutation of its logic.

    (Personally, I think that any reasoning that starts by postulating a probability distribution over ‘people,’ or ‘sentient beings,’ and then uses as a prior that one is a random person is self-evidently fallacious and can lead to all kinds of paradoxes, as you will probably show in the next lecture. And this before even getting to the issue that the probability space that one is sampling over is unknowable, and so the process is undefined.)
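
    Luca’s alternative assignment can be enumerated explicitly; in this sketch (my illustration, with my labels) universe A is the tails-universe with one redhead, and universe B the heads-universe with two inhabitants:

```python
from itertools import product

# Six equally likely outcomes: (who "I" am) x (which universe God picks).
people = ["redheadA", "redheadB", "greenheadB"]   # A: tails-universe; B: heads-universe
universes = ["A", "B"]
outcomes = list(product(people, universes))        # each with probability 1/6

def exists(person, universe):
    # A person exists only if the chosen universe matches their label.
    return person.endswith(universe)

# Condition on being alive with red hair:
alive_red = [(p, u) for p, u in outcomes
             if exists(p, u) and p.startswith("redhead")]
p_heads = sum(1 for _, u in alive_red if u == "B") / len(alive_red)
print(p_heads)  # → 0.5
```

    Under this model the surviving outcomes are exactly (redheadA, A) and (redheadB, B), so a living redhead assigns probability 1/2 to heads, just as Luca says.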

  17. John Sidles Says:

    Douglas Knight asks: “When you advocate Kähler manifolds, are you referring to the AS framework?”

    “Yes” is the short answer, Doug.

    The long answer is nuanced. Quantum simulations are easy on (low-dimension, algebraic) Kählerian quantum state-spaces, but hard on (large-dimension, linear) Hilbert state-spaces. It follows that quantum simulations are preferably computed on Kählerian state-spaces, provided the Kählerian predictions are close to the Hilbert predictions.

    For what quantum systems is this true?

    “Noisy and/or low-temperature systems” is the short answer. The long answer is “We don’t know rigorously, except for a few special cases, but empirically Kähler state spaces work well for broad classes of noisy and/or low-temperature quantum systems.”

    For these reasons, even if we knew that Hilbert space was geometrically linear to infinite precision, its Kählerian algebraic submanifolds would still be of considerable practical importance.

    It seems that Cornell possibly will host a summer school on these Kählerian methods next year. Anyone who is interested should email me.

  18. rrtucci Says:

    Scott said:
    God flips a fair coin. If the coin lands tails, She creates a room with a red-haired person. If the coin lands heads, She creates two rooms: one has a person with red hair and the other has a person with green hair. Suppose that you know that this is the whole situation, then wake up to find a mirror in the room. Your goal is to find out which way the coin landed. If you see that you’ve got green hair, then you know right away how the coin landed. Here’s the puzzle: if you see that you have red hair, what is the probability that the coin landed heads?

    2-node Bayesian network

    P(a=H) = P(a=T) = .5 (H=heads, T=tails)


    Find P(a=H | b=red).
    Is this all there is to this problem? Am I missing a subtlety? Question: Can a quantum computer, the ultimate probabilistic machine, be used to solve this inference problem (and more complicated ones)?
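
    rrtucci’s two-node network needs the conditionals P(b|a) filled in before the query can be answered; the sketch below (mine) assumes that, given heads, “you” are equally likely to be the red- or green-haired person:

```python
# Two-node network: a = coin, b = observed hair colour.
p_a = {"H": 0.5, "T": 0.5}  # prior, from the puzzle statement
# Conditionals (assumed): given tails the only observer is a redhead;
# given heads, "you" are equally likely to be red- or green-haired.
p_b_given_a = {"T": {"red": 1.0, "green": 0.0},
               "H": {"red": 0.5, "green": 0.5}}

def posterior(b):
    """P(a | b) by Bayes' rule."""
    joint = {a: p_a[a] * p_b_given_a[a][b] for a in p_a}
    z = sum(joint.values())
    return {a: j / z for a, j in joint.items()}

print(posterior("red")["H"])  # → 1/3 under these conditionals
```

    So the “subtlety” is entirely in the choice of P(b|a): the mechanical inference is trivial once those conditionals are postulated.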

  19. Scott Says:

    Personally, I think that any reasoning that starts by postulating a probability distribution over ‘people,’ or ‘sentient beings,’ and then uses as a prior that one is a random person is self-evidently fallacious and can lead to all kinds of paradoxes, as you will probably show in the next lecture. And this before even getting to the issue that the probability space that one is sampling over is unknowable, and so the process is undefined.

    Luca: This is often a problem, I find, with giving a lecture that builds an interesting argument up and then demolishes it — someone in the audience might demolish it first! 😉

  20. Chris Granade Says:

    Would that I were so elegant in stating my own objections to that particular form of anthropic reasoning!

  21. Greg Egan Says:

    Traditionally, the real “bite” of probability has come from betting experiments where a single person puts money at stake. Even if there’s no prospect of repeated trials of the experiment, it makes sense for that one person to care about their prospects of winning or losing when exposed to some probabilistic but locally quantifiable risk.

    That original notion based on one self-interested individual can be scaled up a bit to things like public health strategies; we can ask what the best policy is for a well-defined human population at a given time, assuming that we want to maximise some expectation value over that population.

    But what meaning or value is there in expectation values taken over all sentient beings in the history of the universe? When anthropic reasoning starts employing strategies whose only justification is that they would maximise certain whole-of-universe expectation values, all it ends up giving us is the delusion that we possess information that really only resides with that entire, unknown population.

    There’s an old Persian saying: “Everything is known by everyone. ‘Everyone’ is yet to be born.” The population of all sentient beings in the history of the universe collectively “knows” whether Boltzmann brains outnumber ordinary brains, and “knows” how long various civilisations and species actually last. But if we think we’re gaining any knowledge about these questions, here and now, simply by asking “What strategy — if employed uniformly by all sentient beings in the history of the universe — would maximise the whole-of-spacetime expectation value for the number of correct guesses?” then I think we’re just kidding ourselves.

  22. Job Says:

    How’s my understanding of relativization?

    As I understand it, for P vs. NP, there is an oracle relative to which P=NP and one relative to which P!=NP. If a proposed solution to P vs. NP is unaffected by the addition of an oracle (doesn’t relativize? relativizes?) then it can’t be valid.

    If so, how do you apply this? Do you have two oracles in hand that you commonly use?

  23. Scott Says:

    Job: Yes, your understanding is correct, except that if a technique is unaffected by the addition of an oracle then we say it does relativize.

    The usual oracle that makes P=NP is just any oracle for a PSPACE-complete problem. (It’s not hard to see that P^PSPACE = NP^PSPACE = PSPACE.) To construct an oracle that separates P from NP, you can use diagonalization — basically, you construct an infinite sequence of exponentially-hard search problems, of the form “does there exist an n-bit string that the oracle A accepts?” By construction, this problem is in NP^A. On the other hand, it’s not hard to choose A so that the nth P^A machine fails on the nth search problem — and hence, no P^A machine solves the problem correctly for every n. (Alternatively, Bennett and Gill showed that a random oracle separates P from NP with probability 1.)

  24. Job Says:

    I see, so relativization is what causes that good old (irritating) case where one attempts to prove that an NPC problem is unsolvable in polynomial time and ends up “proving” that it is unsolvable in exponential time as well.

  25. Scott Says:

    No, that’s not it at all (except in the tautological sense that a false statement implies anything). One could say that you try to prove P≠NP and end up proving PSPACE≠PSPACE.

  26. Koray Says:


    About the black ravens, I find the standard Bayesian explanation (weight of evidence) a bit dissatisfying, since I believe that in intuitionistic logic the contrapositive is not an axiom. Perhaps we’re using the wrong logic for these experiments?

  27. roland Says:

    Say God rewarded her creatures for correct guesses and repeated the procedure several times. For the red-haired persons it would be a bad idea to guess one outcome with probability different from .5, because clearly there is the same expected number of them on either side. I think the probability 1/3 is plainly wrong.

  28. Chris Granade Says:

    roland: There’s another problem very similar to God’s Coin Toss called the Sleeping Beauty Paradox (SBP). I ran across a paper on arXiv that makes a good argument that the supposed contradiction between the 1/2 and 1/3 answers in SBP arises because two different questions are being conflated by the problem. As noted, the SBP is very similar to GCT, and so I feel like the same logic applies there.

    But then again, Scott’s answer (there– I didn’t say Dr. Aaronson– oops) is coming up in Lecture 17. I’ve heard his reasoning and find myself concluding that, more than anything, the thought experiment punches some holes in the whole idea of combining Bayesian reasoning with the anthropic principle in that way. An assault on the Bayesian religion, indeed.

  29. Greg Egan Says:

    Roland, I agree that the probability is 1/2 in Scott’s experiment … but do you really want to tie that to a reward?

    Suppose you change the experiment, so that if God gets tails she creates 2 rooms, both with redheads, and if she gets heads, then [as before] she creates 2 rooms, one with a redhead and one with a greenhead.

    If people who guess correctly are rewarded, then the strategy “redhead guesses tails, greenhead guesses heads” gives an expectation value for the number of rewarded people of (1/2)*2+(1/2)*1 = 3/2, while the strategy “redhead guesses heads, greenhead guesses heads” gives an expectation value for the number of rewarded people of (1/2)*0+(1/2)*2 = 1. So if your aim is to maximise the expectation value for the number of rewarded people in the universe, then you’d choose the first strategy.

    But is maximising that number actually a route to knowledge for the individuals making the guesses? I’d argue that even in this altered experiment, a redhead should still attribute equal probabilities to God’s coin having fallen heads or tails.
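
    Greg’s two expectation values can be checked by direct enumeration of the two equally likely worlds (a small sketch of my own):

```python
# Altered experiment: tails -> two redheads; heads -> one redhead, one greenhead.
worlds = {"T": ["red", "red"], "H": ["red", "green"]}

def expected_rewards(strategy):
    """Expected number of people rewarded for a correct guess,
    with the two worlds equally likely (probability 1/2 each)."""
    return sum(0.5 * sum(1 for hair in people if strategy[hair] == world)
               for world, people in worlds.items())

print(expected_rewards({"red": "T", "green": "H"}))  # → 1.5
print(expected_rewards({"red": "H", "green": "H"}))  # → 1.0
```

    The enumeration reproduces Greg’s 3/2 vs. 1 comparison; his point is that maximizing this population-wide average is a separate question from what credence any single redhead should hold.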

  30. roland Says:

    Greg: A red-haired person has no new information about the coin flip, as it was clear there would be at least one redhead. Nevertheless the person would be better off (on average) guessing tails. It’s strange, and even better than the Sleeping Beauty because nobody gets drugged.

  31. Chris Granade Says:

    roland: The drugging part of the sleeping beauty problem was kind of odd, but hey, so many of these anthropic puzzles are. Scott mentions later one about a room with a madman who goes on a killing spree if he rolls snake-eyes, so maybe that’s a bit more queer? Anyway, I don’t want to spoil too much.

  32. John Sidles Says:

    It’s a small point, but unless I have seriously misunderstood the card example in the lecture, the answer given is incorrect.

    The correct answer is that the Q, K, and 1 all must be flipped, in order to check the inference that none have a K on the opposite side. I was waiting for someone else to post a correction … perhaps everyone was hypnotized by the hard problems? 🙂

  33. Greg Egan Says:


    you’re given four cards, each of which you’re promised has a letter on one side and a number on the other

    If you disbelieve a “promise”, how would you like the intended question to be specified? Would “assuming every card has a letter on one side and a number on the other” do it … or would you just reply “I assume nothing!”? 🙂

  34. Chris Granade Says:

    John: I got it wrong at first, but for a different reason. The idea is really to take that social situation and turn it into something abstract, and so you are guaranteed that each card has one number and one letter. That’s like saying that everyone in the bar has a single well-defined age, and is either drinking or not drinking. Allowing a K/K card would be like letting someone drink and drink, but not defining at all what their age is.

  35. John Sidles Says:

    Sorry — missed the earlier “promise”!

  36. John Sidles Says:

    Hmmm … traffic on Scott’s blog has fallen below one post per day … and the same is true for the Fortnow/Gasarch computational complexity blog … we can’t have that! … so the following remark is just to sustain life until Scott posts a new topic.

    I was plenty embarrassed to have overlooked the “promise” in Scott’s card example … but my error reflects very accurately the real-world thought processes of engineers.

    Colleague: “Hey John, that card deck you ordered has arrived!” John: “Oh boy, let’s read the manual!” The manual: “This deck has four cards, each of which you’re promised has a letter on one side and a number on the other.”

    At this point the reactions of mathematicians and engineers diverge sharply. A mathematician might well say, “What an excellent premise for logical reasoning!”, while an engineer might well say, “We had better count the cards and turn them *all* over, to see if the manual is right!”

    I was reflecting upon this while reviewing the literature on solving large-scale systems of linear equations by indirect (iterative) means, specifically Yousef Saad’s Iterative Methods, and also the recipe compendium Templates for the Solution of Linear Systems.

    Despite their obvious merits, textbooks like these are unsatisfactory for mathematicians and engineers alike in the too-common occurrence of sentences like “…. [yet another] unanswered question is how convergence can be affected by various reorderings of the rows. For general sparse matrices, the answer is not known.”

    Here is where QIT can make a big contribution. Namely, the promises that QIT makes about quantum systems are mathematically stronger than the promises that classical physics makes about classical systems … as all readers of this blog know!

    This gives good hope of a future in which textbooks about quantum simulation include stronger and more elegant mathematical theorems *and* faster and more general algorithmic recipes, relative to textbooks on classical simulation. The result will be (hopefully) a substantial increase in the robustness and vigor of the overall science and technology ecosystem, which in turn will be very good news for younger researchers.

    Stronger theorems on lower bounds, accompanied by algorithms that approach or even saturate those bounds, are especially welcomed by the engineering community … because it’s very comforting to be assured that you are deploying the best possible algorithm to solve a given practical problem … and yet with the present state of knowledge this assurance is usually not available.

    The point is that in the long run, mathematicians, scientists and engineers are like sea otters, kelp, and urchins … the prosperity of each species depends utterly upon the prosperity of the other two.

    Scott’s blog provides an important point-of-contact for these three spheres. That’s why I read this blog, anyway … and have benefited greatly from it. My thanks to all who contribute!

  37. Chris Granade Says:

    John: To be fair, I misunderstood when I was told the puzzle, and thought the rule to be tested was an if-and-only-if, which makes not too much sense in retrospect. Anyway, as for posting frequency, Scott’s out of town right now, so I’m not surprised that he isn’t posting as often. I imagine it will pick up if he ever gets bored with Foo Camp and whatever else he’s up to.

  38. asdf Says:

    Terry Tao has a new post up, for those of you with a math jones who are too impatient to wait for Scott to get back.


  39. Ian Durham Says:

    But what meaning or value is there in expectation values taken over all sentient beings in the history of the universe? … all it ends up giving us is the delusion that we possess information that really only resides with that entire, unknown population.

    Excellent point. Whether we wish to admit it or not, there is a point at which our questions, regardless of their scientific or mathematical validity, become almost rhetorical. This kind of reminds me a bit of Russell’s Paradox.

  40. Chris Granade Says:

    @Ian (#39):
    I feel split on this. It is and it isn’t rhetorical, IMHO. I mean, Dawkins used anthropic reasoning in The God Delusion (not that he’s by far unique for that– just the first example that came to mind) to justify why we don’t have to worry about how the universe came to be. I personally think that’s valid if only because it gets us out of the infinite regress of “first causes.” On the other hand, a quick perusal of anthropicisms collected by Scott shows some examples that I think we can all agree are egregious abuses of anthropic reasoning.

    Where’s the line? How can I justify accepting Dawkins’ reasoning while rejecting square-moon reasoning? Really, one could brush it all away by not explaining how the universe came to be and not dismissing it either, but simply saying “we don’t know yet, and may never know, but we’ll keep thinking and looking in between doing new physics.” Maybe that’s the real solution. I don’t know, but I’m looking forward to the thread following the next lecture, since anthropic reasoning seems to be a topic of interest here.

  41. Greg Egan Says:

    Chris, Ian, for what it’s worth my own main gripe with anthropic reasoning comes when it violates causality. That’s not the only way it can go wrong, but I think it’s the clearest.

    For example, here’s a case of what I believe is correct reasoning. In 1950, the U.S. Department of Energy decided to bury some long-lived nuclear waste beneath a town chosen at random from all those named “Springfield”. Homer Simpson discovers the secret government file describing this terrible crime … which includes an analysis of the likely health effects, but doesn’t pin down the specific town. For the sake of argument, let’s simplify things ludicrously and suppose that in the town with the waste, anyone born after 1950 who is now an adult would currently face a 1 in 10 chance of having a certain kind of thyroid nodule, as opposed to a background level of 1 in 1,000 in an ordinary town.

    Homer gets his thyroid checked out. He doesn’t confer with anyone else about their own medical history. Has he gained any probabilistic information that he didn’t have before?

    I think he has. Despite the fact that there’ll be adults both with and without nodules in both kinds of town, even Homer’s single data point is better than no data at all, and should influence, to some degree, his estimate of the likelihood that the waste is buried in his town. After all, whether he has the nodules or not, he has been directly, physically exposed to some particular level of radiation, either low or high.
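
    For concreteness, Homer’s single-data-point update can be computed with Bayes’ rule. In this sketch the flat prior of 1/10 over candidate Springfields is my own added assumption; the nodule rates are the ones Greg stipulates:

```python
def posterior_waste_here(prior, p_nodule_waste=0.1, p_nodule_clean=0.001,
                         has_nodule=True):
    """P(the waste is in Homer's town | his thyroid result), by Bayes' rule."""
    like_waste = p_nodule_waste if has_nodule else 1 - p_nodule_waste
    like_clean = p_nodule_clean if has_nodule else 1 - p_nodule_clean
    num = prior * like_waste
    return num / (num + (1 - prior) * like_clean)

# With, say, 10 candidate Springfields (assumed flat prior of 1/10):
print(posterior_waste_here(0.1, has_nodule=True))   # jumps to ≈ 0.917
print(posterior_waste_here(0.1, has_nodule=False))  # drops slightly, ≈ 0.091
```

    A positive result is a strong update (likelihood ratio 100), while a negative result barely moves the prior, which matches Greg’s claim that even one noisy data point is better than none.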

    Now, contrast this with the scenario where Homer discovers that the two candidates for the 2008 Presidential election have secret immigration policies. If Obama wins, he will flood the country with millions of immigrants, but nobody will be allowed to settle in a town whose name starts with the same letter as their surname. If McCain wins, he will flood the country with millions of immigrants, but nobody will be allowed to settle in a town unless the town’s name starts with the same letter as their surname.

    Homer notes that his own surname starts with the same letter as the name of his town. If Obama wins, then across the whole history of Springfield such a coincidence will be very unlikely. After finding a paper on Boltzmann brains in Lisa’s trash, Homer reasons anthropically that it’s close to a sure victory for McCain.

    Now, the one thing in favour of this kind of absurd reasoning is that if everyone in the history of Springfield is forced to adopt some uniform strategy for guessing the result of the 2008 election (let’s assume that in the future, the actual historical facts get lost in the mists of time), the strategy “my name starts with S means McCain won, my name starts with another letter means Obama won” will, over the centuries — assuming that no countervailing policies are applied later — yield a majority of correct guesses. It’s this kind of fact that’s used to “justify” a certain class of anthropic arguments.

    But does this mean that Homer in July 2008 has access to a “surname-based insight” into the likely winner of the November 2008 election? Of course not.

    Some cosmological parameters are like the radiation level in Springfield; you can argue that we’ve been “exposed” to them, and that observing some aspect of our nature constitutes a measurement of that parameter (albeit usually a single, noisy data point). But other parameters — such as aspects of dark energy that will only be revealed billions of years from now — are exactly like the unknown election result. We can choose to adopt a strategy that would average out favourably across all sentient beings in the history of the universe … but if we think we’re doing anything more than that, we’re kidding ourselves.

  42. Chris Granade Says:

    Good grief, we haven’t even gotten to the Anthropic lecture yet, and everyone here has dissected damn near every single argument that will be made. That Simpsons election example sounds like a way out of both the Doomsday Argument and Leslie’s Dice Room, which suits me just fine. A lot of these anthropic Bayesian arguments just leave me cold, but it’s hard for me to articulate exactly why.

  43. harrison Says:


    Bravo, sir. As usual, you can clarify these kinds of issues better than just about anyone, and I hope you don’t mind that I’m totally using your examples next time I’m discussing my stance on anthropic arguments.

  44. Greg Egan Says:

    Harrison, be my guest (just don’t tell the legal people at Fox).

  45. math idiot Says:

    I think the anthropic principle is ridiculous. We are human beings living on earth. We exist, and so some scientists suggest that the laws of nature support, or are made to support, the survival of human beings. How can we be sure that no other forms of life exist somewhere in this universe? If they exist, can we say that the laws of nature are there to support that kind of life, which is different from human beings? Are we certain that the laws of nature that we know are the same everywhere in this universe? If they aren’t, and there may be different laws of nature in the universe (for example, two different laws of gravitation), can we say that the laws of nature are not made to support the existence of human beings?

  46. John Armstrong Says:

    math idiot: the anthropic principle isn’t used to rule out universes that don’t support humans. It’s used to rule out universes that don’t support atoms.

    I mean, I agree it’s a pretty weak argument, but you’ve set up a laughable straw man there to knock down.

  47. John Sidles Says:

    Scott mentions: … the failure to find polynomial-time classical algorithms for simulating quantum physics …

    And yet, for both fluid and quantum systems, it may well be the case that computing simulations is infeasible in principle, yet perfectly feasible in practice.

    The two Millennium Prize problems that bear directly on this question are, of course, P vs. NP and the Navier-Stokes equations.

    For this parallelism to occur, there has to be a “loophole”; one evident loophole is that real-world fluids are made of discrete atoms, and real-world quantum systems are noisy … in both cases this provides a natural cut-off to dynamical complexity.