My responses to GASARCH’s P vs. NP poll

The poll is here; my (slightly-edited) responses are below.  It took heroic self-restraint, but I tried to answer with straightforward statements of what I actually think, rather than ironic humor.

1. Do you think P=NP or not? You may give other answers as well.

I think P≠NP (on more-or-less the same grounds that I think I won’t be devoured tomorrow by a 500-foot-tall salsa-dancing marmoset from Venus, despite my lack of proof in both cases).

2. When do you think it will be resolved?

In his recent book The Beginning of Infinity, David Deutsch argues that we can’t even make decent probabilistic predictions about a future event, to whatever extent that event depends on new knowledge being created.  I agree with him on this: a proof of P≠NP, like other major mathematical advances, would depend almost entirely on new knowledge, and because of that, my uncertainty applies not only to the approximate number of years but to the approximate log of that number: decades, centuries, millennia, who knows?  Maybe the question should be rephrased: “will humans manage to prove P≠NP before they either kill themselves off or are transcended by superintelligent cyborgs?  And if the latter, will the cyborgs be able to prove P≠NP?”

3. What kinds of techniques do you think will be used?

Obviously I don’t know—but if we look at the techniques used in (say) Ryan Williams’ recent result, and then remember that that proof only separates NEXP from ACC0, we can get a weak hint about the scale of the techniques that would be needed for problems like P vs. NP.  Right now, Mulmuley’s GCT is the only approach out there that even tries to grapple with the biggest barrier we know, beyond even relativization, natural proofs, and algebrization: the barrier that many nontrivial problems (including matching and linear programming) are in P!  That’s not to say Mulmuley’s specific program will succeed: indeed, I suspect that the right chain of reasoning might diverge from Mulmuley’s at an earlier rather than later point.  But even for the seemingly-easier permanent versus determinant problem, I fear Mulmuley is basically right that the key insights lie in yellow books yet to be written.

4. Will the problem still be relevant given advances in algorithms and in SAT Solvers?

Yes, in the same way the Second Law of Thermodynamics is still relevant given advances in hybrid cars.

5. Feel free to comment on anything else: Graph Isomorphism, Factoring, Derandomization, Quantum computers, and/or your own favorite problem.

Graph Isomorphism: Probably in P.

Factoring: Probably hard for classical computers, but unlike with NP-complete problems, if it isn’t then we’re still living on Earth.

Derandomization: I think P=BPP (with essentially the same strength of conviction as P≠NP), and likewise L=RL, etc.

Quantum computing: I think BPP≠BQP (though not with the same strength of conviction as P≠NP), and also predict that no bizarre changes to quantum mechanics will be discovered of the sort needed to make scalable quantum computing impossible.

For those who are still reading, as a special bonus I present my answers to the large and interesting questions asked by Mike S., a commenter on my last post.

One thing I’ve heard before about NP(-hard) problems is that often certain instances are much harder than others. What are your feelings on the physical practicality of a computer that solves only most cases of NP(-hard) problems quickly? Also, is determining the ‘difficulty’ of particular instances of NP-complete problems NP(-hard)?

It depends what you mean by “most”! I think it’s almost certainly possible to generate a probability distribution over 3SAT instances almost all of which are hard (indeed, that assumption is central to modern cryptography). As one example, the approximate shortest vector problem is known to be just as hard on average as it is in the worst case, and it can easily be reduced to 3SAT. Another candidate is random k-SAT instances at the “critical ratio” of clauses to variables, for k≥4.
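As a toy illustration of the random k-SAT model mentioned above (sketched here for k=3, whose empirically observed threshold ratio is roughly 4.27 clauses per variable; the candidate hard instances mentioned in the post use k≥4, whose thresholds are higher). The instance generator and brute-force checker below are my own illustrative sketch, not anything from the post:

```python
import itertools
import random

def random_3sat(n, ratio=4.27, seed=0):
    """Sample round(ratio*n) random 3-clauses over n variables.
    Each clause picks 3 distinct variables with random signs."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(round(ratio * n)):
        vs = rng.sample(range(n), 3)
        clauses.append([(v, rng.choice([False, True])) for v in vs])
    return clauses

def satisfiable(n, clauses):
    """Brute-force check over all 2^n assignments (tiny n only).
    A literal (v, sign) is satisfied when variable v is set to sign."""
    return any(
        all(any(bits[v] == sign for v, sign in c) for c in clauses)
        for bits in itertools.product([False, True], repeat=n)
    )

inst = random_3sat(12)
print(satisfiable(12, inst))
```

Well below the threshold ratio almost all such instances are satisfiable, well above it almost none are; right at the threshold they are maximally unpredictable, which is what makes them candidate hard instances.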

But maybe what you meant was those instances of NP-hard problems that “typically arise in real life.” Here all sorts of issues come into play: for example, often the instances that arise in practice have symmetries or other structure that makes them easy. And often your goal is not to find the best solution, but just a better solution than your competitors. And often we terminate trains of thought long before they lead to hard instances of NP-complete problems—we’re usually not even conscious that that’s what we’re doing; we just have an intuition that “such-and-such would require a hopeless search.”

But at the same time, when we do ask explicitly for optimal solutions, that request for optimality often has a way of finding the hard instances for us.

Less seriously, you said something along the lines of ‘P!=NP keeps mathematicians in business’. If math is so hard computationally, how do WE do it? Or on the other hand, if the computational complexity of certain problems is a fundamental property of the universe, and we are part of the universe, doesn’t it follow that we could make computers that are as good or better at doing math than we are?

The short answer is that math (as practiced by humans) is an extremely hit-or-miss business!  A billion years of evolution have equipped us with a lot of useful heuristics, as has the much faster evolution of mathematical ideas over the last few thousand years.

Probably even more important, we normally don’t care about arbitrary mathematical questions (does this random Turing machine halt?), but only questions that arise in some explanatory framework. And that criterion tends to select extremely strongly for questions that we can answer! Why it does so is a profound question itself, but whatever the answer, the history of math provides overwhelming evidence that it does. Goldbach’s Conjecture and the Collatz 3x+1 Conjecture are more-or-less “arbitrary” questions (at least in our present state of knowledge), and indeed they haven’t been answered yet. Fermat’s Last Theorem might have seemed pretty arbitrary at first (Gauss regarded it as such), but it wasn’t.  Indeed, in the 1980s it was embedded into the deep explanatory framework of elliptic curves and modularity, and a decade later it was solved.
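For concreteness, the Collatz map sends n to n/2 when n is even and to 3n+1 when n is odd, and the conjecture says every positive starting point eventually reaches 1. A quick illustrative check in Python (my own sketch):

```python
def collatz_steps(n):
    """Count iterations of the 3x+1 map until n reaches 1.
    (Assumes the conjecture holds for n -- verified numerically
    far beyond any range this toy code will be run on.)"""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Every n up to 10^4 reaches 1; 27 is the famous small case
# with a surprisingly long trajectory.
assert all(collatz_steps(n) < 10**4 for n in range(1, 10**4))
print(collatz_steps(27))  # 111
```

Numerical verification of this kind reaches astronomically far, yet brings us no closer to a proof, which is precisely the "no visible bridges" situation described below.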

Of course, despite these factors in mathematicians’ favor, they’re very far from having a general-purpose method to solve all the problems they want solved.

Incidentally, “P≠NP means computers can never replace human mathematicians” is a forehead-bangingly common misunderstanding. Personally, I see no reason why the brain couldn’t be simulated by computer (neuron-by-neuron if necessary), and P≠NP does nothing to challenge that belief.  All P≠NP suggests is that, once the robots do overtake us, they won’t have a general-purpose way to automate mathematical discovery any more than we do today.

95 Responses to “My responses to GASARCH’s P vs. NP poll”

  1. Bram Cohen Says:

    Do you believe multiplication is n*log(n)?

  2. Scott Says:

    Bram: In which computational model? In the unit-cost RAM model we know integer multiplication can be done in linear time. In the multitape TM model, we have algorithms that run in n log(n) loglog(n) and now n log(n) 2^(log*(n)) time; was your question whether those can be reduced to n log(n), or whether even n log(n) can be surpassed?

    I don’t have a good intuition about either question—though it does seem plausible that FFT requires Ω(n log(n)) steps (say, in the circuit model), in which case maybe n log(n) would be the limit for integer multiplication also. And n log(n) 2^(log*(n)) looks a bit nasty to be the final answer to anything… 🙂

    (Questions for those more knowledgeable: is there any reduction showing that if FFT requires Ω(n log(n)) time then integer multiplication does also? Also, what’s the exact relation between circuit complexity and multitape TM complexity?)
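As an aside for readers following the bounds above: the easiest sub-quadratic milestone to see concretely is not the FFT-based algorithms but Karatsuba's O(n^1.585) method, which replaces four half-size products with three. A compact decimal-digit version, purely illustrative:

```python
def karatsuba(x, y):
    """Multiply nonnegative integers using three half-size
    recursive products instead of four (Gauss's trick),
    giving ~n^(log2 3) ~ n^1.585 digit operations."""
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    B = 10 ** m
    xh, xl = divmod(x, B)
    yh, yl = divmod(y, B)
    lo = karatsuba(xl, yl)
    hi = karatsuba(xh, yh)
    mid = karatsuba(xl + xh, yl + yh) - lo - hi  # the saved product
    return hi * B * B + mid * B + lo

assert karatsuba(1234, 5678) == 1234 * 5678
```

Schönhage–Strassen and Fürer's algorithm then push the running time down to the n log(n) loglog(n) and n log(n) 2^(log*(n)) bounds discussed above via FFTs over suitable rings.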

  3. Anup Says:


    I fail to see the analogy between P not equal to NP and the chance that you will be eaten by a 500-foot-tall salsa-dancing marmoset from Venus. Yes, in both cases you can’t prove it, but otherwise the situations are very different.

    The case for not being eaten: You can’t prove that it won’t happen. On the other hand, it hasn’t happened to you on all other days that you have lived. Also, you have never heard of it happening to anyone. There is no reason to believe that tomorrow is any different in this regard from yesterday. So the simplest hypothesis to hold is that you will not be eaten tomorrow.

    In the case of P not equal to NP, the evidence is: many different human beings have probably tried to find algorithms for NP-complete problems and have failed. Is that even evidence? What if you replaced humans with “3rd graders”? Would that still count as evidence? If not, what’s the difference?

    Given this, I can at best draw a parallel between the two situations if I am trying to argue that someone will probably not PROVE P = NP tomorrow (which is different from asserting that P = NP). But even that is on a weak foundation, because unlike in the previous case, the chance of P=NP being known two days from now is only more likely than it being known tomorrow.

    Largely, the belief that P is not equal to NP is based on the idea that humans have not proved otherwise yet. That’s a very shaky argument, to me. Similarly, pointing out that something is a common hardness assumption in cryptography can also be interpreted as evidence of the weak foundations on which cryptography is built, rather than evidence of the hardness of the assumptions.

    I think the belief that P is not equal to NP is exactly like the belief that humans will never be able to fly, widely held in the 10th century. I haven’t thought about this carefully, but I would love to see an argument that applies to why P not equal to NP that does not also apply to that situation.

    So we should be careful when we say such things.

  4. fagagr4bh Says:

    “Fermat’s Last Theorem seemed pretty arbitrary (Gauss regarded it as such), but then in the 1980s it became part of a deep explanatory framework, and a decade later it was solved.”

    Isn’t this a roundabout way of saying “in the 1980s, we began to understand FLT, and a decade later it was solved?” One understands concepts by relating them to what one already knows — embedding FLT into a “deep explanatory framework” is just restating the problem in terms of a domain one understands and cares about. If you restate your quote in this way it doesn’t seem so surprising — or am I misunderstanding you?

    P.S. “Domain one cares about” is a point that I think is overlooked a lot: Wiles said himself that when FLT was restated in terms of elliptic curves it became “professionally acceptable” to work on it… and thus he started to work on it. I think a lot of the really hard problems (including P!=NP) are not professionally acceptable to work on. It is such a gamble that one can’t even guarantee partial results to publish. What do you tell your graduate students after 7 years of nothing? Perelman didn’t have such responsibilities. Wiles graduated more or less no one during this period (one wonders if the people he graduated came in BEFORE he started). One of his only students during this period (Richard Taylor) ended up co-publishing with him. I don’t think Mulmuley has students, does he?

  5. Pascal Says:

    Hello Scott, can you elaborate on this claim:

    “Mulmuley’s GCT is the only approach out there that even tries to grapple with the biggest barrier we know,…”

  6. Yatima Says:

    P≠NP means mathematics stays interesting.

  7. Yatima Says:


    “I think the belief that P is not equal to NP is exactly like the belief that humans will never be able to fly, widely held in the 10th century. I haven’t thought about this carefully, but I would love to see an argument that applies to why P not equal to NP that does not also apply to that situation.”

    Start here, please: NP-complete Problems and Physical Reality

  8. Luca Says:

    Why do you think Factoring is harder than Graph Isomorphism?
    We know that factoring is in BQP but nothing about GI (unless I’m missing something, very likely).
    Sure, we know that neither is np-complete (unless strange things happen, very unlikely) but I don’t see a reason why GI should be the easiest.

  9. Scott Says:

    Anup #3:

    I think the belief that P is not equal to NP is exactly like the belief that humans will never be able to fly, widely held in the 10th century.

    I think that’s an absolutely terrible analogy! People have always known that birds and insects and bats fly; by contrast, we have no evidence for any system anywhere in the universe that solves NP-complete problems efficiently. And when it comes to technologies that are unlike anything seen before (e.g., a universal classical computer, a quantum computer, a sustained fission reaction, a perpetual-motion machine, a time machine, cryonics), the question should always be how consistent they are with our best current explanations of how the universe works.

    If those people in the 10th century had had a scientific outlook (which we need to assume for the comparison to be fair), they would’ve asked: is there some fundamental physical principle that explains why flying can’t work? And I doubt they would’ve found one, even just a conjectural principle that was precisely-formulated but missing mathematical proof.

    To show how easy it is to reverse analogies: to me P=NP believers are much less like tenth-century flying believers than like the science-fiction fans of today who believe in faster-than-light travel. P≠NP is not a mathematically proven principle, but like no-superluminal-communication, it’s nevertheless a principle that explains a great deal if true and that I see no good reasons to doubt at present. (Among many other arguments, you have to believe most pairs of complexity classes are unequal, no matter how you slice things! If you believe P=NP, then you also believe NP≠EXP—but if NP and EXP are different, why not P and NP?)

    So, that’s what I meant by comparing a polytime SAT algorithm to a marmoset from Venus: not merely that I’ve never seen either one or that I can’t prove their nonexistence, but that they’d do roughly similar amounts of violence to the explanatory framework through which I understand the world.

  10. Pascal Says:

    To fagagr4bh : I do not believe that working on hard problems is professionally unacceptable. In fact, all the existing work on lower bounds in restricted models or on “barriers” can be seen as partial progress towards problems like P=NP. It has led to lots of publications and professional recognition for many of those involved. Even GCT has led to a few publications (by Mulmuley and others).

  11. Bram Cohen Says:

    Scott, to clarify: Do you think it’s possible to get rid of that stupid 2^(log*(n)) factor in the circuit model of multiplication?

    In answer to people’s questions about graph isomorphism: We have lots of good algorithms for graph isomorphism, and can easily come up with lots more; we have no problem instances which appear to cause any significant challenges for the obvious algorithms; and even to the extent that one can find an exception, it’s fairly trivial to come up with a new polynomial algorithm which fixes that one. Hence the intuition that the problem is probably easy.
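One of the simplest members of that family of “obvious algorithms” is color refinement (1-dimensional Weisfeiler–Leman), which distinguishes almost all random non-isomorphic graph pairs, though it famously stalls on e.g. regular graphs. A minimal illustrative sketch (not a real GI solver):

```python
def refine(adj):
    """1-WL color refinement: repeatedly replace each vertex's color
    by a canonical code for (its color, the sorted multiset of its
    neighbors' colors), until the partition stabilizes."""
    colors = {v: 0 for v in adj}
    for _ in range(len(adj)):
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        if new == colors:
            break
        colors = new
    return sorted(colors.values())

def maybe_isomorphic(adj1, adj2):
    """False certifies non-isomorphism; True is merely inconclusive."""
    return refine(adj1) == refine(adj2)

# Path on 3 vertices vs. triangle: refuted immediately (degrees differ).
path = {0: [1], 1: [0, 2], 2: [1]}
tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(maybe_isomorphic(path, tri))  # False
```

Returning False is a proof of non-isomorphism, while True is inconclusive — which is exactly the situation described above: the hard part is exhibiting inputs on which such cheap heuristics stall.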

  12. Scott Says:

    fagagr4bh #4:

    Isn’t this a roundabout way of saying “in the 1980s, we began to understand FLT, and a decade later it was solved?”

    I agree that from one perspective, my claim isn’t surprising at all. I suppose the non-tautological part of my claim is that there’s a “giant connected component” of interrelated mathematical knowledge, and that the means by which hard open problems are solved is very often by connecting them to that component. One could have imagined otherwise—that each problem would have to be scaled as an isolated peak—but empirically we find all these bridges between the peaks we care about, arguably more than we had any right to expect. Those bridges, I’m claiming, are the thing that’s made so much mathematical progress possible, even in the teeth of theorem-proving being NP-hard. One piece of evidence is that, when mathematicians come across a problem (like the Collatz Conjecture) with almost no visible bridges into the giant connected component, they can barely make progress on it.

    (Which is ironic, since the 3x+1 conjecture is all about proving that the integers form a giant connected component under the Collatz iteration… 🙂 )

    I think a lot of the really hard problems (including P!=NP) are not professionally acceptable to work on.

    I completely disagree: the reality is that our community recognizes and applauds progress in circuit lower bounds (e.g., NEXP⊄ACC0), and even long-term programs with few concrete successes so far (GCT).

    In the case of FLT: sure, one consequence of its being connected to elliptic curve theory is that FLT became more “professionally acceptable to work on,” but the much more important consequence was that a path was opened for actually proving FLT! 🙂 With P vs. NP, it seems to me that we’re very far from having an analogue of Ribet’s proof of Serre’s epsilon conjecture, and that the main barriers are mathematical rather than sociological.

    (Unless, by “sociological,” you wanted to take an even broader perspective, and talk about nerdy high-school kids discouraged from studying math and science or things like that.)

  13. Scott Says:

    Why do you think Factoring is harder than Graph Isomorphism?

    Luca #8: Yes, what Bram said. For GI, we know algorithms that work incredibly well in practice, so that finding graphs for which they fail is itself a hard problem. (Indeed, if I’m not mistaken, there are several extant algorithms that might already put GI in P, except that no one knows how to analyze them.)

    Factoring, by contrast, seems hard enough in practice that we’re able to base RSA on it, and even factoring a random n-digit integer is already hard as far as anyone knows.

  14. Scott Says:

    Bram #11: Alright, alright, I’ll give 2:1 odds that the 2^(log*(n)) factor can be removed.

  15. Scott Says:

    Pascal #5: I meant the following. In GCT, the central unsolved problem is how one would go about computing the multiplicities of the irreps for (say) the permanent and determinant class varieties. However, one thing we know is that even for a very special case of that problem (Littlewood-Richardson coefficients), the known polynomial-time combinatorial algorithm to solve it generalizes Edmonds’ matching algorithm. A huge part of Ketan’s dream for how we’ll someday check whether the multiplicities are nonzero also involves checking whether certain convex polytopes are nonempty, which would use the fact that linear programming is in P. So the bottom line is that, if GCT could be used to prove a circuit lower bound in the way Ketan envisions, then the proof would have to “know about” or “engage with” the facts that matching and linear programming are in P—something that (for separate reasons) one would imagine must be true of any P≠NP proof.

    Now, I readily admit that this is an extremely low bar for a circuit lower bound proposal to clear! But I can’t think of any other proposal that clears it—can you?

  16. fagagr4bh Says:

    I think I am being misunderstood re: hard problems. Circuit lower bounds such as Ryan’s result are certainly steps towards P vs NP, but they are chunks that have been broken off the main problem and made “manageable.” If you told someone “I am going to go off and prove that NEXP is not in ACC0”, you wouldn’t get the same look of incredulity that you would if you said “I am going to go off and prove P != NP”.

    At some point, you have to stop breaking off chunks, and “just solve the damn problem.” When the connection between FLT and elliptic curves was made, Wiles stopped “chunking” and spent years in solitude trying to fit the pieces together. I doubt anyone would consider it an “acceptable” goal to just sit in solitude and try to solve P vs. NP given the state of our knowledge.

    GCT is a strange example. I don’t think saying “people are applauding it” is quite accurate. GCT is something Ketan can do because he is a full professor, and has nothing left to prove. He doesn’t have to take on students. He doesn’t have to vie for tenure, much less try to get a job in the first place. If you spent as long as Ketan did on GCT and presented one (fairly poorly written, I tried to read it…) J.ACM paper to show for it, would you expect tenure?

  17. Anup Says:

    Hi Scott, arguing about the details of the analogy will not get us anywhere, but just try to imagine convincing someone from the 10th century that we will eventually be able to fly. I don’t think the fact that bats can fly is very convincing.

    Further, the way you structure this argument is itself biased. In analogy with your complaint, I ask you: where is the physical evidence to support that P is not equal to NP (lack of physical evidence for P = NP doesn’t count).

    I wasn’t advocating that we SHOULD believe that P=NP :). I am merely pointing out that I don’t buy most of the arguments I hear (including the ones you’ve given) for why we should skew our beliefs so wildly to one side.

    Why is P not equal to NP more “consistent with our best current explanations of how the universe works”? First off, if the universe is finite, then the P vs NP question has no real bearing on any computation being done in the universe, since P vs NP only makes sense if we are talking about asymptotics. Do you agree?

    I don’t know if the universe is finite or not, but it seems to be widely believed that it is. Correct me if I’m wrong. So again, I’m not sure why any physical evidence from the universe pushes you so much in any direction.

    Maybe you will say that this is nitpicking on the definitions, and you want to say that there is no 100n time algorithm for SAT. But again, for the exact same reason that I talked about before, I don’t see why the fact that naturally occurring soap bubbles and silicon haven’t solved SAT means that it is IMPOSSIBLE to solve SAT in time 100n, or even makes that more believable.

    I would buy it if you could explain to me that a universe with P not equal to NP is a simpler universe (à la God doesn’t exist because he isn’t required anymore), but I haven’t seen any argument of this type.

    But I am open to being convinced.

  18. matt Says:

    Wow, I had no idea that Scott was so open minded about the possibility that P might equal NP. Maybe I should slightly revise my probability of equality upwards.

  19. Pascal Says:

    Pascal#5 to Scott#15 :

    > Now, I readily admit that this is an extremely low bar for a circuit lower bound proposal to clear! But I can’t think of any other proposal that clears it—can you?

    Let’s consider for instance the approach to lower bounds from derandomization (e.g. of polynomial identity testing).
    In this approach, it seems to me a completely irrelevant fact that certain nontrivial problems like matching or linear programming are in P. So in a sense, you are right : the bar is not cleared. But it is a bar that does not have to be cleared.

  20. Scott Says:

    matt #18: LOL! Between the people who wonder why anyone would waste their time trying to prove something as transparently obvious as P≠NP, and the ones who think P=NP, complexity theory just gets hammered from every side…

  21. Scott Says:

    Pascal #19: OK, but I think it’s relevant that

    (1) we haven’t yet derandomized polynomial identity testing, so we don’t yet know the depth of techniques that will be needed, and

    (2) as far as anyone knows, derandomizing polynomial identity testing wouldn’t give us P≠NP, but “merely” circuit lower bounds for NEXP or arithmetic circuit lower bounds.

    Still, you raise an interesting point: what I’ve personally regarded as one of the strongest (if still very inconclusive) arguments in GCT’s favor, doesn’t apply at all if your only goal is to derandomize polynomial identity testing.

    But for PIT, one could make a different argument, that any derandomization program had better “know something about” algebraic geometry, which is another low bar that GCT clears and many other ideas don’t.

    Actually, I don’t know of any treatment of PIT from a GCT perspective, even at the speculative level at which Ketan has treated permanent vs. determinant. That would be a very good project for someone who understands GCT.

  22. Scott Says:

    fagagr4bh: What I’m trying to tell you is that I see no alternative to “breaking off manageable chunks of the problem” at this stage in history. I.e., if you sent all the world’s complexity theorists (and mathematicians, for good measure) to their respective attics to think deep thoughts about P vs. NP for the next 20 years, untroubled by any need to impress or even talk to their colleagues, I predict that you’d get

    (1) lots of interesting ideas,
    (2) lots of crazy people in their attics needing mental help, but
    (3) nothing close to a proof of P≠NP.

    As far as I can tell, P vs. NP has not yet ripened to the “genius in the attic” stage—indeed, it’s not even close. Wiles was shrewd: he didn’t sequester himself in his attic to work on FLT until Frey, Serre, Ribet, and others had finished building the long, arduous bridge he’d need to cross. Likewise Perelman wouldn’t have gone back to Russia to work on Poincare if Hamilton and others hadn’t built his arduous bridge. By contrast, I claim that there hasn’t been any profound discovery of the form,

    “to prove P≠NP, all you really need to prove is X.”

    (GCT is trying to make such a discovery, but it’s not there yet.)

  23. Simple mind Says:

    I think it’s almost certainly possible to generate a probability distribution over 3SAT instances almost all of which are hard (…) the approximate shortest vector problem is known to be just as hard on average as it is in the worst case, and it can easily be reduced to 3SAT.

    Scott, I don’t understand your wording here. You say it’s “almost certainly possible” then “is known”. Do you mean average case does not equate almost all case, or ASV might not be strictly NP-hard, or something else?

  24. Scott Says:

    Simple mind: The resolution of the “paradox” is that ASV isn’t known to be NP-hard—and in fact, at least at the approximation ratio for which we know it’s worst-case/average-case equivalent, it can’t be NP-hard unless the polynomial hierarchy collapses. Showing worst-case/average-case equivalence for an NP-complete problem (under any efficiently-samplable instance distribution) remains one of the great open problems of complexity theory.

  25. Scott Says:

    Anup #17: Let me try this. Think about all the effort that’s gone into finding approximation algorithms and hardness of approximation results for NP-complete problems. The approximation and inapproximability ratios are sometimes far, sometimes close, sometimes they even match, but somehow they never cross each other—not even once! Why is that? In a world where P=NP, it would basically be an unexplained coincidence. Do you agree that assuming P≠NP yields a simpler picture of this observed reality?
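To make the “ratios never cross” pattern concrete with the standard textbook example: vertex cover has a trivial factor-2 approximation (take both endpoints of every edge in a maximal matching), while approximating it better than roughly 1.36 is NP-hard, and better than 2−ε is hard under the Unique Games Conjecture — close, but never crossing. An illustrative sketch of the upper-bound side:

```python
def vertex_cover_2approx(edges):
    """Greedy maximal matching: whenever an edge is uncovered, add
    BOTH endpoints. The matched edges are vertex-disjoint and any
    cover needs one endpoint of each, so |cover| <= 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# A 5-cycle: the optimum cover has 3 vertices; the bound guarantees <= 6.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
c = vertex_cover_2approx(edges)
assert all(u in c or v in c for u, v in edges)
print(len(c))  # 4
```

Here the algorithm returns 4 vertices against an optimum of 3, comfortably inside the factor-2 guarantee.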

    On another point you raised: yes, assuming the cosmological constant is actually constant and the holographic entropy bound holds, our observable part of the universe is finite (consisting of no more than ~10^122 bits). But I don’t see why that has much of anything to do with P vs. NP.

    Personally, I’ve always seen P vs. NP as just a convenient formal stand-in for the question we really care about, whether there’s a practical way to solve NP-complete problems. Or to put it differently, operating in the background of complexity theory is what you might call the Linking Conjecture—which says that if P=NP then there’s a practical algorithm for NP-complete problems (on the instance sizes that humans care about), and if P≠NP then there’s no such practical algorithm. I’m about as confident in the Linking Conjecture as I am in P≠NP itself. But supposing the Linking Conjecture failed, we’d need to replace P vs. NP by a better formal stand-in for the practical question that we really mean to talk about.

  26. Raoul Ohio Says:

    It was pointed out “near the top” that the f: x -> 3x+1 iteration problem seems real hard, and so did FLT until it was embedded in a context that was much better understood.

    Given that we probably know a lot more about functions on R and C than on N, consider simple embeddings such as:

    f1(x) = (1/2)[6x+2 – cos^2(pi x/2)(5x+2)], and

    f2(x) = (1/4)[7x+2 – (-1)^x(5x+2)].

    Then the 3x+1 conjecture is that if you start iterating either of these functions from some starting point, and you hit a positive integer, “pretty soon” it will start to cycle 1 -> 4 -> 2 -> 1.

    These functions can be prodded with standard methods of real and/or complex analysis.

    Warm up exercise: Show that f1 has infinitely many fixed points on the real line, asymptotically approaching the values where x ~ ±(2/pi)ArcCos(sqrt(4/5)) (mod 2).
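Raoul’s f2, at least, can be checked mechanically: with exact integer arithmetic, the (-1)^x term selects between the even case x/2 and the odd case 3x+1, so f2 reproduces the Collatz map on every positive integer. A quick illustrative sanity check:

```python
def f2(x):
    """Raoul's interpolation (1/4)[7x + 2 - (-1)^x (5x + 2)],
    evaluated with exact integer arithmetic (// is exact here)."""
    return (7 * x + 2 - (-1) ** x * (5 * x + 2)) // 4

def collatz(x):
    """The ordinary 3x+1 map on positive integers."""
    return x // 2 if x % 2 == 0 else 3 * x + 1

# f2 agrees with the Collatz map on every integer tested.
assert all(f2(x) == collatz(x) for x in range(1, 1000))
```

Splitting into cases: for even x the bracket is 7x+2−(5x+2) = 2x, giving x/2; for odd x it is 7x+2+(5x+2) = 12x+4, giving 3x+1.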

  27. Evan Berkowitz Says:

    Hey Scott,

    I hate to be that guy, but the Collatz conjecture may indeed be (recently) proved.

    I’m definitely not in a position to judge this paper, but I am in a position to repost interesting links from slashdot, so here I am.

  28. Simple mind Says:

    Thx Scott, I was fooled by looking (too superficially) at this paper:

  29. Scott Says:

    Hey Evan,

    I hate to be that guy, but if you click the link and scroll to the second page, you’ll see that the author has withdrawn the paper. 😉

  30. Scott Says:

    Simple mind: Yeah, it’s confusing! If you want a constant-factor approximation, then SVP is NP-hard, but not known to be worst-case/average-case equivalent. If you only want a cruder approximation (like n^(3/2), I think), then SVP is worst-case/average-case equivalent, but not known or believed to be NP-hard.

  31. Simple mind Says:

    Scott, are you more confident in the “linking conjecture” or in P≠NP? I'm tempted to bet $1 on the second, which would mean you can't argue for the second based on the first. 😉

    Joking aside, from an outsider’s perspective this conjecture seems to be the blind point of complexity theory (or a paradigmatic consensus, in Kuhnian terms), to the point that it is barely even discussed. I’m curious: how would you defend it?

  32. Scott Says:

    Simple mind: The failure of either conjecture would rock my world about equally!

    IMHO, calling constant factors “the blind point of complexity theory” is sort of like calling friction “the blind point of Newtonian physics”! I.e., it’s less a blind point than a central, pervasive, universally-recognized complicating factor that had to be understood and dealt with before the theory could even get off the ground in the first place.

    Now, as for how I’d defend the linking conjecture: the key point is that we expect algorithms for a given problem to get better and better, from one conference to the next, until they finally hit some fundamental barrier intrinsic to the problem itself, after which they stop improving. But where should we expect these barriers to be? Well, there are certain complexities at which it seems reasonable to find a barrier on a priori grounds, and where empirically we do in fact often find barriers, in those models (decision trees, read-once branching programs, resolution proofs, AC0 circuits, etc.) that are simple enough that barriers can already be proved.

    So for example, 2^n is a very reasonable place for a barrier to be: it says you can’t beat brute-force search. 2^(n/2) is another reasonable place: it says you can shave a factor of 2 off the exponent, but that’s it. n^2 says you need to consider all possible pairs, n log(n) says you need log(n) linear-time iterations, and so on. By contrast, n^1000 would be a rather strange and inexplicable place to hit a fundamental barrier, in a problem that didn’t involve the constant 1000 in its definition. Why can the exponent be lowered from 1001 to 1000, but not from 1000 to 999? And indeed, we essentially never do see barriers like n^1000, in those situations where barriers can be proved.

    The point I’m trying to drive home is that, in math and science, the question is never just which conjectures are logically compatible with existing knowledge, since there are always way too many conjectures to consider them all! Instead, the question is always which conjectures are compatible with existing knowledge, and would fit reasonably well (if true) into our web of explanations. By now, there are several centuries of experience to indicate that focusing on the latter question is a winning strategy.

  33. mkatkov Says:

    Scott #9. How do you know that humans cannot solve NP-complete problems in polynomial time? Moreover, “we normally don’t care about arbitrary mathematical questions”, but once we do care – “it became part of a deep explanatory framework, and a decade later it was solved.”

    It is exactly like saying humans cannot fly because they do not have the right equipment.

  34. Scott Says:

    mkatkov: If you could solve NP-complete problems in your head, then among many other things, you could quickly factor 1000-digit numbers. Yet I don’t think there’s been anyone in history who could mentally multiply 1000-digit numbers, let alone factor them! Furthermore, this fact isn’t especially mysterious, but is a natural consequence of the selection pressures that would have been operating on the brain throughout its history (hunting and mating, yes; multiplying 1000-digit integers, not so much).

    In other words, I see no evidence whatsoever that humans can solve arbitrary P problems in polynomial time, let alone NP-complete problems! 🙂 Can I prove that we’re not all born with the latent ability to solve NP-complete problems, which for some reason no one ever bothers to use to break RSA? Of course not! But it seems obvious that the burden of proof lies with those who would posit such a far-reaching, unexplained, and wildly-underused ability.

    Incidentally, your account of FLT leaves out a crucial detail: that there was a >300-year gap between when mathematicians started caring about FLT, and when they finally embedded it in the explanatory framework that led to its being solved!

    Now, that raises an interesting side question: why did mathematicians care so much about FLT, centuries before Frey and Ribet had related it to modularity of elliptic curves? I can suggest two answers:

    (1) The extraordinary simplicity of its statement.

    (2) The enormous mathematical richness that was already evident in the problem, long before anyone had connected it to Taniyama-Shimura. (E.g., the unsuccessful attempts to prove FLT in the 19th century led to several major advances in number theory.)

  35. Janos Says:

    1. People have looked at the “continuous Collatz Problem”; see, for example, Lagarias, AMM 92 (1985), and Chamberland, “A Dynamical Systems Approach to the 3x+1 Problem.”

    This does not help with the real problem.

    2. One of the reasons P vs NP will be at least intriguing in 2100 is that the “natural” proof that P!=NP seems to defeat itself. One would like to exhibit a counterexample to every circuit/algorithm that claims to solve SAT. But a natural way to verify this — if we want to do it in P — seems to require P=NP, or at least NP=coNP (if we know how to generate counterexample formulas, we presumably know whether these formulas are satisfiable, which allows us to build a circuit to decide them). [This is what Ketan calls the “complexity barrier”]
    In the unlikely case that P=NP, we would have a very surprising algorithm.

    3. Comment on Scott 22
    A reasonable — but too general — substitute for “we need to prove X” is “we need to find clever new algorithms in P”. (To solve PIT in P, for example) — which is certainly a respectable and interesting goal of Theory.

  36. Simple mind Says:

    Scott, thanks again for clarifying the SVP issue (#30 only caught my eye after I posted #31).

    I’m surprised by their equal power to rock you, but I was not talking about constant-factor stuff; I was talking about the linking conjecture as you defined it in post #25. That’s not the same thing, unless the linking conjecture is true!

    In other words, the barriers (2^n, n log n, etc.) CT explores are all functions that “behave the same irrespective of n”. Here is the blind spot, imho. For what reason on earth should all algorithms behave like this?

    For the idea that interesting questions are always compatible with our web of explanation, sorry, but in science that’s just plain wrong. I’ll provisionally take your word that it’s true in math. 😉

  37. Scott Says:

    Simple mind:

    In other words, the barriers (2^n, n log n, etc.) CT explores are all functions that “behave the same irrespective of n”. Here is the blind spot, imho. For what reason on earth should all algorithms behave like this?

    I don’t understand what you’re getting at here. Do you want functions f(n) that oscillate wildly, unpredictably, and non-monotonically with n? If so, then setting aside the empirical fact that such functions essentially never arise as complexities (unless we construct them by hand), the key point is that even the nastiest running times have smooth upper and lower bounds, and bounds are the only thing complexity theorists generally hope for anyway. E.g., P vs. NP doesn’t ask whether there’s a SAT algorithm whose running time is exactly p(n), for some polynomial p; it asks only whether there’s a SAT algorithm whose running time can be upper-bounded by a polynomial. (Though actually, the two questions are equivalent, as can be seen by the trick of outfitting Turing machines with “polynomial-time alarm clocks.”)
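
    A minimal sketch of the alarm-clock trick (my own toy illustration; the generator-based "machine" and all helper names are invented for the example): wrap an algorithm so that every run takes exactly p(n) steps, padding with idle steps if it halts early and rejecting if the clock runs out first.

```python
def with_alarm_clock(algorithm, p):
    """Clock `algorithm` (a generator that yields once per step and
    returns its answer) so every run takes exactly p(n) steps:
    idle-pad if it halts early, reject if the budget runs out."""
    def clocked(x):
        budget = p(len(x))
        gen = algorithm(x)
        answer, done = False, False
        for _ in range(budget):
            if done:
                continue          # idle padding step
            try:
                next(gen)         # one simulated step
            except StopIteration as halt:
                answer, done = halt.value, True
        return answer if done else False  # clock ran out: reject

    return clocked

# Toy decision procedure: does the input string contain a '1'?
def contains_one(x):
    for ch in x:
        yield                     # one step per character examined
        if ch == "1":
            return True
    return False

decide = with_alarm_clock(contains_one, lambda n: n + 1)
print(decide("0010"))                                       # prints True
print(with_alarm_clock(contains_one, lambda n: 2)("0010"))  # prints False
```

    The point of the trick is visible in the wrapper: a machine whose running time is merely *bounded* by p(n) becomes one whose running time is *exactly* p(n), which is why the two formulations of P vs. NP coincide.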

    For the idea that interesting questions are always compatible with our web of explanation, sorry but in science that’s just plain wrong.

    I certainly agree that surprises happen! But I claim that, tautologically, we must always expect the answers to unsolved problems to be the ones most compatible with our existing web of explanation (assuming we can think of such answers at all). For, if we expect something else, then whatever reasons we have to expect the something else ought to be part of our web of explanation!

  38. Vadim P Says:

    Stupid thought not to be taken seriously, but what if a computer could be constructed that can create additional copies of itself from available materials? Each copy creates a few copies of itself and then checks one of the possible answers, and each of those copies does the same. There would be physical limits, i.e., the copies would have to move to create space, the speed of light would be a factor if they wanted to communicate, etc., and it wouldn’t do anything towards proving P vs NP, but what would stop such a system from solving large instances much more practically than anything else? (Other than the moral dilemma re: creating a machine that eats anything and everything in order to solve a SAT instance.)

  39. Scott Says:

    Vadim: Your observation is correct! The “only” thing that would stop such a system is that

    (1) if it were too spread out, the speed of light would become a limiting factor (not just for communicating the answer, but even for making room for the replicating computers themselves), and

    (2) if it weren’t too spread out, the system would exceed the Schwarzschild bound and collapse to a black hole before very many replications had occurred.

  40. mike Says:

    Vadim & Scott,

    Now, those are the kind of speculations I like!! This does sound like something out of Deutsch’s new book. 😉

  41. Joshua Zelinsky Says:

    Scott, regarding FLT, it seems that part of the reason people cared in the 19th century, at least, was simply social: Fermat, who was very smart and had made a lot of interesting conjectures, said he had a proof.

    In a more mathematical vein, it is important to note that having a problem in an explanatory framework is frequently not enough to solve it. In the 19th century, FLT was embedded in the framework of cyclotomic fields, and for some time it looked like that might be enough to deal with it. Similarly, there is some amount of explanatory framework to which Goldbach’s conjecture is connected. The inability to prove Goldbach’s conjecture is connected to the parity problem in sieve theory. Indeed, over the last few years, Iwaniec and a few others have developed sieve-theoretic techniques that can somewhat handle the parity issues. The Friedlander–Iwaniec theorem (which shows that there are infinitely many primes of the form a^2 + b^4) shows that such techniques are becoming more usable, although certainly Goldbach and the Twin Prime Conjecture both seem to still be a long way off.

    So the upshot is that one doesn’t just need a framework to embed the problem but one needs the right framework and that framework needs to be itself well-developed enough to be useful.

  42. Simple mind Says:

    Scott #37

    >I don’t understand what you’re getting at here.

    Let’s try this: all the barriers you mentioned apply to algorithms that are static. That’s likely good enough for physical questions. I doubt it is the best way when it comes to biology, cognition, and economics. In each of these fields we’re facing “objects” that snap back, fight, learn. Look at some n: you get some behavior. Look at some larger n, or feed the object with experience, and you get another behavior.

    You said such functions never arise as complexities. I’d argue this happens almost everywhere in real life. So there are two possibilities: either complexity is useless for these kinds of phenomena, or complexity theorists are presently blind to behaviors that are not well captured by asymptotics. I’m advocating the latter, and you can take that as a deep mark of respect. 😉

    >bounds are the only thing complexity theorists generally hope for anyway.

    Is that any different from saying that’s the blind spot?

    >tautologically, we must always expect the answers to unsolved problems to be the ones most compatible with our existing web of explanation

    That does not sound tautological: we can know we must find something not compatible with present knowledge; that doesn’t make it compatible with present knowledge (think QM and relativity).

  43. Scott Says:

    Simple mind: Alas, I don’t think any further discussion is going to be useful. You’re convinced that complexity theory has a “blind spot,” yet every time I try to pinpoint what blind spot you’re talking about, it suddenly moves: first it was the relevance of asymptotic statements for finite n, next it was the possibility of wildly-varying growth rates, and now it’s that algorithms are “static” (whatever that means—the whole point of Turing-universality is that algorithms can model pretty much any dynamic behavior in the universe!). Those are three completely different objections.

  44. Scott Says:

    Joshua Zelinsky:

    So the upshot is that one doesn’t just need a framework to embed the problem but one needs the right framework and that framework needs to be itself well-developed enough to be useful.

    Completely agreed!

  45. Pascal Says:


    > (2) as far as anyone knows, derandomizing polynomial identity testing wouldn’t give us P≠NP, but “merely” circuit lower bounds for NEXP or arithmetic circuit lower bounds.

    One can get a little more than that from black-box derandomization of polynomial identity testing, but yes, it would still fall short of P≠NP.

    Most probably, this approach will also require results from yellow books that haven’t been written yet, but not the same books as for GCT.
    One should perhaps set up another poll to decide which books will be written first. 🙂

  46. Simple mind Says:

    Scott, for the record, these were supposed to be three viewpoints on the very same idea, which you yourself formulated as “bounds are the only thing complexity theorists generally hope for anyway”.

    (and of course no one has any bound on a Turing-Universal machine)

    BTW you seem tired of this discussion, so let me just thank you for the great job at proving what was initially more a question than a statement. 🙂

  47. Joshua Zelinsky Says:

    Simple, as an outside observer, it looks like you are confused. What you call three viewpoints of the same idea seem to be three very distinct claims. And I can’t parse your claim that “of course no one has any bound on a Turing-Universal machine” in any way that is both true and mathematically coherent.

  48. Simple mind Says:

    Joshua, that’s certainly my fault if you can’t parse my sentences, but allow me to respect Scott’s wish not to discuss it further (you can ask him for my email if you wish).

  49. mkatkov Says:

    Scott #34 “Incidentally, your account of FLT leaves out a crucial detail: that there was a >300-year gap between when mathematicians started caring about FLT, and when they finally embedded it in the explanatory framework that led to its being solved!”

    It is exactly the point where we diverge. You see it as evidence for P!=NP, and I see it as pointing more toward the opposite. You are operating with finite time-scales when talking about human abilities to break RSA. I’m saying that a problem that gets sufficient attention is usually solved in finite time.

    This is entirely a matter of belief, and there is no physical experiment, given the limited resources we have, that could test whether there is an NP-complete problem that humans (whether individually or socially) cannot solve in polynomial time. As a physicist, you would probably admit that there is no way we can resolve this issue scientifically/experimentally. Therefore, we should remove this point from the scientific arguments, as theological. That is my only point.

    As for FLT: its extremely simple formulation makes it a good candidate for a hard problem with a short encoding; e.g., humans may be interested in solving the hardest instances, approaching/dealing with NP-complete problems. So my guess is that it is (1)+(2).

    That actually raises an interesting observation. It is possible that the concept of complexity is similar to the concept of probability, where (I apologize to frequentists) probability is an abstract concept that is never observed in reality, but reflects our beliefs. Its real-world counterparts are always rational numbers; fitting them to a probability simplifies our predictions, and to some degree allows us to build practical devices. Moreover, the never-observed limits are believed to approach a theoretical limit, called the probability.

  50. Gil Kalai Says:

    “and also predict that no bizarre changes to quantum mechanics will be discovered of the sort needed to make scalable quantum computing impossible.”

    Scott, do you believe that the only possibility that scalable quantum computers are impossible is that quantum mechanics itself is not valid?


  51. Bram Cohen Says:

    Scott #13: In the category of ‘polynomial-looking algorithms which we don’t know how to analyze’, do you think that simplex with a sufficiently clever heuristic for picking where to pivot can be made polynomial?

  52. jonas Says:

    Besides P != NP, which of these open questions, where the statement itself is easy for ordinary people (not only people working in theoretical computer science) to understand, do you suppose are actually important?

    * P = BPP (or some other derandomization)

    * BPP != BQP (or something else about quantum computing)

    * Graph isomorphism in P,

    * Factoring not in P,

    * NP != coNP

    * P != NP ∩ coNP

    The graph isomorphism problem is an easy showcase, but I don’t see it having many applications. It could lead to a new cryptosystem if we could generate random hard instances, but we already have much better cryptosystems.

  53. matt Says:

    Maybe I was wrong. Maybe Scott’s not overly optimistic that P=NP. Maybe he’s just too nonchalant about those Venusian marmosets. He’s got to be warned about the dangers!

    But yeah, definitely, even if we “know” P ≠ NP, a proof would be great. Or indeed a proof of almost any nontrivial lower bound (cue objections from CS types about the lower bounds they have proven). In this regard, I think the question of whether P=NP will still be “relevant” doesn’t make sense. Talking about SAT-solvers brings the problem into the practical domain. But practically, what are you going to do if you have to solve an NP-complete problem? Reasonable approaches today include applying heuristic methods and studying the approximation problem. Reasonable approaches do not include trying to prove that P=NP. So the question of P=NP has no practical relevance today, due to our high confidence in the answer to this question. So Bill can’t be asking about the practical relevance. Conversely, the theoretical relevance of these questions will be untouched. Potentially other lower-bound questions will acquire more practical relevance in the future; it is conceivable that the lower bound for matrix multiplication, for example, will become practically relevant.

  54. Scott Says:

    jonas #52: All of them are damn important, as open problems go!

  55. Scott Says:

    Bram #51:

    In the category of ‘polynomial-looking algorithms which we don’t know how to analyze’, do you think that simplex with a sufficiently clever heuristic for picking where to pivot can be made polynomial?

    Happily, the answer to that question is becoming less and less a matter of belief! It’s looking like the answer is “no”:

    Subexponential lower bounds for randomized pivoting rules for the simplex algorithm
    Oliver Friedmann and Thomas Dueholm Hansen and Uri Zwick
    (Best Paper Award co-winner at STOC’11)

    However, a fully convincing “no” answer would probably require a disproof of the “Polynomial Hirsch Conjecture” (i.e., the construction of a polytope where the only paths to the optimum traverse a superpolynomial number of vertices).

  56. Scott Says:

    Gil #50:

    Scott, do you believe that the only possibility that scalable quantum computers are impossible is that quantum mechanics itself is not valid?

    I think the only possibility that scalable quantum computers are impossible is a profound change in our current understanding of the laws of physics. That could mean a change to QM itself, or it could also mean some new principle “on top of” or “alongside” QM. Either way, I think it would be fair to say that our previous understanding of QM had been woefully incomplete.

  57. . Says:

    I disagree with your opinion on FLT. The entire field of algebraic number theory had its origins (arguably, was even invented expressly) in trying to determine the truth of FLT. So it was not even remotely close to being arbitrary.

  58. Scott Says:

    . #57: We actually don’t disagree; check out my response to mkatkov (#34):

      why did mathematicians care so much about FLT, centuries before Frey and Ribet had related it to modularity of elliptic curves? I can suggest two answers:

      (1) The extraordinary simplicity of its statement.

      (2) The enormous mathematical richness that was already evident in the problem, long before anyone had connected it to Taniyama-Shimura. (E.g., the unsuccessful attempts to prove FLT in the 19th century led to several major advances in number theory.)

    To put it differently, the fact that FLT might look arbitrary to someone seeing it for the first time doesn’t mean that it is arbitrary—quite the contrary! (And this is a common pattern in math.) I edited my response to Mike S. to make this clearer.

  59. Anup Says:

    Scott #25: Now you are using the arguments that I guessed you were relying on when I wrote an earlier comment. To repeat why I don’t buy it:

    1. Suppose no smart researchers had tried to solve this problem, and only 3rd graders had tried to solve it. Would you still think your argument about the coincidences would carry any weight? My point is just that your argument assumes that humans are smart enough that if they tried and failed to find some idea, we should consider that as evidence that the thing does not exist. That’s a very hard sell for me, especially since we ourselves are only human.

    2. Your argument only applies to whether or not a proof of P=NP is easy for us to find, not whether or not the statement is true. Perhaps the shortest proof takes 10,000 pages. And we DO know that there are tons of true statements that require such long proofs. Maybe this is the first useful statement with such high proof complexity?

  60. Gil Kalai Says:

    A question: consider a theorem of the form “linear programming is in P,” and suppose it has a short proof. Now, for general theorems we know (believing NP≠P) that having a short proof does not guarantee that we can ever find such a proof. Does this mean that for some general algorithmic question in P, finding a polynomial algorithm is, in principle, hopeless?

    (I neglected the asymptotic aspects of the original theorem, but still, isn’t it “morally” a consequence of NP≠P that, for a general problem in P, finding the polynomial algorithm is hopeless?)

  61. Scott Says:

    Anup #59: If the 3rd graders in question had managed to prove that thousands of problems from hundreds of different fields were in P, and had also managed to give thousands of reductions between equally-various NP-complete problems, but somehow not a single one of those reductions went between the two giant equivalence classes, then yes, I would weight that evidence heavily.

  62. Charles Says:

    Scott, a question about your strength of conviction on P != NP.

    Suppose that a proof was published (and checked, accepted, etc.) that SAT has an o(k^n) algorithm for every k > 1, but that the proof, when analyzed, had no new ideas that seemed relevant to P = NP. How strongly would you believe P != NP then?

  63. Kol Says:

    What do you think of the chances of proving the “easier” problem of P ≠ PSPACE?

  64. Scott Says:

    Charles #62: In that case, I guess I’d have to revise my confidence about P≠NP downward, but I don’t know by how much. The trouble with hypothetical scenarios like yours is that they ask me to make predictions in an explanatory vacuum, which is never the real situation. I.e., why would the proof of NP ⊆ DTIME(k^n) for all k>1 have “no new ideas that seemed relevant to P = NP”? What about it would break down at whatever subexponential bound it broke down at?

  65. Scott Says:

    Kol #63: At least in our current state of knowledge, there’s nothing about P vs. PSPACE that seems to make it significantly easier than P vs. NP. If you want something genuinely “easier”, you’ll probably have to set your sights a lot lower: for example, NEXP versus P/poly or NP versus ACC0 could plausibly fall while still leaving us almost as far from P≠NP as we were before.

  66. . Says:

    If you believe evolution has helped shape the human mind to solve non-trivial problems (each one separately), then probably you should also believe randomness would help solve NP problems quite quickly, since evolution is essentially a random process, albeit guided by nature’s invisible hand.

  67. Kol Says:

    Why do you think P ≠ NP is receiving so much attention when P ≠ PSPACE is just as hard? What are the barriers to proving P ≠ PSPACE?

  68. Thomas Steinke Says:

    . #66: If BPP = P and NP != P (as conjectured), then NP != BPP, which is the formalisation of “even with randomness, NP-hard problems are infeasible to solve on all instances”.
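
    Steinke's implication is pure set algebra, so it can even be checked formally. Here is a sketch in Lean 4 (my addition; it treats the classes abstractly as sets of languages over an arbitrary type, which is all the argument needs):

```lean
-- If BPP = P and NP ≠ P, then NP ≠ BPP:
-- any equality NP = BPP would chain with BPP = P to give NP = P.
example {Lang : Type} (P NP BPP : Set Lang)
    (hBPP : BPP = P) (hNP : NP ≠ P) : NP ≠ BPP :=
  fun h => hNP (h.trans hBPP)
```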

  69. Gil Kalai Says:

    (cont 60)
    Kamal Jain famously said: “If your laptop can’t find it, then neither can the market.” Taken literally, he may be wrong. You can think of the market as a one-time laptop that runs some program you will never know (or even approximate in a way that leads to useful predictions).

    Of course, Kamal’s intention may only be that if the computational complexity required to approximate a certain model of the market is not in P, then the computation does not give a good approximation of reality. If that is the intention, then it’s OK.

    This is not just a semantic difference. It weakens the case for building computationally feasible models of the market that will be useful.

    This ties in with David Deutsch’s argument that we can’t make decent probabilistic predictions about a future event, even on our laptops (and even on our quantum laptops).

    Often it has been claimed here (and also by me, in several places) that if a computational model is in P then in principle we can run it on our computers (e.g., if natural quantum processes are in BPP then, in principle, we can simulate such processes on our computers). Maybe we need to reexamine this.

  70. Scott Says:

    Kol #67: Within complexity, P vs. PSPACE gets plenty of attention too! The barriers are essentially the same as those for P vs. NP, as far as anyone knows.

    A good analogy might be the Riemann Hypothesis, which is known to the general nerd public as a single great flagship question. “Up close,” though, you find there are several different stronger and weaker Riemann Hypotheses over various fields.

  71. Anup Says:

    Scott, I have one last thing to say, and then I will return to my normal non-blog life:

    In my opinion, the biases towards believing that P is not equal to NP, and that resolving the P vs NP question is “hard,” are not only unreasonable (as I have argued with many points, many of which got no response, at least on this thread), but they become more and more unreasonable with time, in our current research environment.

    If you are a young student who comes across these questions today, you might spend 30 minutes thinking about them before you come across a blog that proclaims that P is almost certainly not equal to NP and that proving it is almost hopeless. So you don’t try to prove that P=NP.

    In fact, I have tried to give students in my undergraduate class an algorithmic problem that if resolved would show that P is not equal to NP. I told my class that it was a very hard problem, but that they should still try. The brightest students in the class came back to me in a few days. They had spent some time on google, and discovered that they should not waste their time.

    30 years pass. And now all the “evidence” (of the type you rely on in your comments) that P is not equal to NP, and that proving it is hard, has only become much, much stronger (since now it has been an open question for even longer), even though in those 30 years hardly anyone who should have thought about those questions HAS thought about them, because of all the hype that deters people from thinking about them.

    As a Phd student I spent many months thinking about problems that I found very hard, and failed to solve them. But I never thought about P vs NP, because that would be stupid (right?).

    Meanwhile, there is a research community (namely mine) that has built a significant body of work (involving many, many theses, etc.) that would become irrelevant if P = NP. So the incentives are set up so that the “experts” prefer to perpetuate the belief that P is not equal to NP. For if the chance of P = NP were as high as 1/2, then a paper that assumes P ≠ NP in order to be meaningful would be only half as impactful.

    Sorry to be such a wet blanket. But it’s a sad state of affairs, at least in my eyes.

  72. Anup Says:

    Correction: I gave my students an algorithmic problem to prove that P=NP (not the other way around as I said..)

  73. Anup Says:

    Also, I should add, that I know that there are people who still make attempts at resolving P vs NP, at least indirectly. But the amount of effort directed towards that question is much much smaller than the amount of effort directed to questions that are meaningful only if P is not equal to NP. I find that worrisome.

  74. Scott Says:

    Anup, we see the world completely differently.

    Even in (what I consider) the fantastically-unlikely event that P=NP, I’m not terribly afraid of would-be provers getting “discouraged” by the experts stating their belief that P≠NP. As a first piece of evidence, they don’t seem the slightest bit discouraged by me stating my belief! 🙂 I get another P=NP proof in my inbox every other week or so.

    Indeed, it seems just as likely to me that complexity theorists conjecturing P≠NP would encourage people to “prove those idiots wrong,” as that it would scare them away.

    But more importantly, I don’t imagine a polytime algorithm for SAT, supposing one existed, as some isolated, explanationless thing that would land on us from outer space. Rather, I think it’s vastly more likely that such an algorithm would be a natural outgrowth of … well, algorithms research, of the sort people have been doing for half a century and continue to do. That both allays the fear of someone being discouraged from seeking a P=NP proof, and makes it harder to understand why, if such a proof existed, half a century of algorithms research hasn’t turned up anything that looks remotely like it could lead to one.

    So, rather than worrying about your hypothetical student, I worry more about the actual people I meet all the time who dismiss complexity theory as frivolous because it’s “all based on conjectures.” Secure in their double standards, such people don’t notice or don’t care that physics, chemistry, and essentially every other science moves forward with probably 1% as much thought given to mathematical rigor as we give—and that P≠NP seems like an extremely secure and well-studied conjecture compared to the mathematical conjectures that other fields rely on all the time (the well-definedness of quantum field theory, the existence and smoothness of solutions to differential equations, etc).

  75. matt Says:

    I think it’s inaccurate to say that physics moves forward with 1 percent of the rigor of complexity theory. Rather, that statement itself is strictly speaking true, up to some argument about the exact percentage, but it confuses some completely different things. Computer science is a discipline, as is physics. Complexity theory is a small subfield of computer science, just as mathematical physics is a small subfield of physics. “Computer science as a discipline moves forward with about 1 percent of the rigor of mathematical physics” is just as true as Scott’s statement.

  76. Scott Says:

    Matt: I was comparing theoretical computer science (broadly construed) to theoretical physics (also broadly construed), which seems fair.

    And, far from seeing theoretical physicists’ relative lack of rigor as an indictment, I tend to think that we in TCS can and should become more like them!

  77. John Sidles Says:

    Obviously it is neither necessary, nor feasible, nor even desirable that everyone think alike in these matters. And for some delightful illustrations of this principle, we can turn to the wonderfully enjoyable web page Essays and Opinions by Oded Goldreich.

    In the essay A letter to CACM editor regarding computational complexity (2010) Oded stands four-square in agreement with (what I take to be) Scott’s position regarding P versus NP:

    Unfortunately, we currently lack good theoretical answers to most natural questions regarding efficient computation. This is the case not because we ask the wrong questions, but rather because these questions are very hard.

    The possibility that Juris Hartmanis (and perhaps Gil Kalai?) has considered, namely that our definitions of P and NP are such as to ensure that the most natural questions of complexity theory are undecidable (in other words, that complexity theorists could make more rapid progress by asking different questions than the ones they are presently asking), is not mentioned in Oded’s essay.

    On the other hand, in Oded’s essay On Quantum Computing (2004) he argues in the psychologically opposite direction, namely, that in assuming unitary Hilbert dynamics, perhaps we *are* asking the wrong question … and in this regard Oded and Scott sharply disagree (as Oded describes in his essay).

    The point is that these much-debated questions are generically subtle, open, and difficult. Given that P vs. NP really is (for better or worse) the “flagship” of present-day complexity theory … is that flagship metaphorically hard aground on a reef of undecidability? Given that quantum dynamics on linear Hilbert space really is the “flagship” of present-day physics (for better or worse) … is that flagship metaphorically sailing against a dynamical flow that is symplectic but not unitary?

    Given our present state of ignorance in both regards, it’s best to be glad (not irritated or dismayed) that researchers are pursuing mixed strategies … because for sure, in all future outcomes, plenty of very smart people are destined to change their minds regarding beliefs that they presently espouse passionately. And may that day come soon! 🙂

  78. matt Says:

    CS theory is still a small part of CS, as I understand it, while theoretical physics, broadly construed, means just under half of a typical physics department, maybe even more at MIT.

    Good point on the second. Interesting point: I read recently about some work of Gromov on random groups. Turns out that some of his proofs were lacking, and the full proofs only arose later. The way he did it, though, was very natural, assuming some independence of events which wasn’t quite true, but was close to it. It was a very “physicist” type of move. Such approaches would likely be very helpful in CS.

  79. asdf Says:

    If anyone cares, Sune Jakobsen has written an undergrad thesis on why P v NP is so hard to solve:

  80. Anup Says:

    I think the right way to respond to people who say that complexity theory is not important because it is all based on assumptions is to point to the LARGE part of complexity theory that is NOT based on any assumptions, and to participate more in that kind of research :). BTW I myself have written several papers whose importance is partly based on assumptions.

    If instead we try to defend ourselves by talking about the universe, or by arguing that our assumptions are valid because we haven’t been able to violate them yet, we are only hurting ourselves. I think these arguments can work in the short term, until people think about them deeply, but in the long term they are not strong enough to hold up. They may blow up in our faces if we are unlucky and our assumptions turn out to be false.

  81. Jon Says:


    Doesn’t your comment #12 contradict your answer to question #2 in the original post?

    In question #2, you say that you cannot make a reasonable prediction about when P != NP will be proved, and cite this as an example of the general principle that it is nearly impossible to make accurate probabilistic statements about knowledge creation before it happens.

    Yet in comment #12, you describe a loose model for theorem proving (i.e., the creation of new knowledge). You even give an example: we shouldn’t have been surprised that FLT was proved a decade after major advances in relating FLT to the rest of our mathematical knowledge.

    So given that you apparently do have a model for the creation of new knowledge, why can’t you make a probabilistic statement about the timing of a P != NP proof?

  82. Scott Says:

    Jon, I can make a probabilistic statement, but in the case of P≠NP, it won’t be a useful or informative statement. It will have the form: “it might be a decade, it might be a century, it might be a millennium, who the hell knows? Indeed, my uncertainty is dominated by my uncertainty about the future of civilization itself.”

    By contrast, sometimes math problems, while not yet solved, become well-enough understood that one can do better than the “null/trivial prediction” in predicting how much longer it will take to solve them. (Indeed, if this weren’t the case, then it would be an enormous mystery how mathematicians choose which problems to work on!) Whether many number theorists expected FLT to fall within their lifetimes after seeing the proof of the epsilon-conjecture is an interesting question to which I don’t know the answer. All I know is that one mathematician formed enough confidence about it to devote the next seven years to proving FLT…

  83. matt Says:

    I made a quick “Monte Carlo” test of whether theoretical CS was a larger part of CS than theoretical physics was of physics. Choosing 8 people from the MIT EECS department at random (close my eyes, scroll randomly, click), I found only 1 who could conceivably be called a theoretical computer scientist. Actually, he was a coding theorist, but I’ll be charitable and call it close enough. Admittedly, being the _EE_ CS department might skew things a bit. Choosing 8 from the physics department, I got 3 who were clearly theorists (and maybe one more: Ike Chuang does a lot of theory, but I didn’t count him).

    I know the sample size isn’t great, but I think it supports my case. And based on research topics of those 16 people, I think physics as a whole was _more_ mathematically careful than CS.

  84. John Sidles Says:

    Scott says: “Indeed, my uncertainty [regarding proofs of PvsNP] is dominated by my uncertainty about the future of civilization itself.”


    That is a hugely provocative remark that (we can hope) is more often felt than publicly expressed … I admire your saying it very much.

    Scott, in what respects (if any) does this dominant uncertainty condition your choice of problems to work on? Do you have in mind any theorems that effectively are bespoke, in service of retiring the risks associated to this dominant uncertainty?

  85. Scott Says:

    Anup #80: I find it telling that you never addressed my point about other fields (like theoretical physics) relying on unproved conjectures to a much greater extent than we do, and no one making a big deal about it. More than that, many of the statements that we call “conjectures” would in most other fields be called “discoveries”! My fundamental point is this: the fact that we beat ourselves up over our inability to prove P≠NP, while other fields regard their unproved mathematical conjectures more as niggling technical issues and distractions from their real concerns, is purely a cultural fact about the respective fields. It seems to me that many people observe the cultural fact, and mistakenly think they can glean useful information from it about the conjectures themselves.

    Anyway, this double standard is precisely what I was talking about in my Anti-Complexitism post. Some people doubted that the phenomenon existed—so thank you for demonstrating that not only is it real, it even affects complexity theorists themselves!

  86. Scott Says:

    John #84: It doesn’t affect my choice of research topics. Like most theoretical computer scientists, I don’t work (directly) on P≠NP, but the main reason for that is not some personal feeling about the problem’s difficulty—rather, it’s the simple observation that, if you wanted to make progress toward P≠NP, then you might as well work instead on any of the hundreds of other lower bound problems that seem strictly easier but are still unsolved.

  87. Mitch Says:

    Scott #85: I don’t think I can buy the notion that the unproved conjectures in TCS being more important is merely cultural. You yourself have written quite a few (enjoyable) articles on how different the world would be if P=NP, and on how the question itself is one of the deepest ever asked, etc. I can’t imagine a physicist saying those things about quark confinement! And certainly, if confinement were shown NOT to be a property of QCD, it would mean that some other symmetry group would be needed that looks like SU(3) at higher energies, etc.… not that protons suddenly disintegrate. Whereas if P=NP, there doesn’t seem to be a way to say “well, even though the conjecture was wrong, the world still works how we thought it does.”

  88. Scott Says:

    Mitch: Thanks for the extremely interesting point—I agree with you that that’s a genuine difference between P≠NP and (say) the quark confinement conjecture. On the other hand, that difference doesn’t seem to point at all in the direction of P≠NP being less likely to hold! If anything, it seems to point in the direction of P≠NP being more likely to hold than some specific mathematical formalization of quark confinement, since (as you correctly point out) the latter can be “rescued” or “salvaged” if false while the former probably can’t.

  89. matt Says:

    Actually, there are (roughly) two ways that the mass gap Clay conjecture could be false, and one of them would mean the end of the world, in a way even worse than having P=NP. Roughly, a successful solution to the problem requires first constructing the desired gauge theory, and then showing it has a gap. The beta function (RG flow of coupling constant) plays an important role in this. Perturbation theory says that weak coupling flows to stronger coupling. Now, we can’t predict just from perturbation theory what happens at intermediate coupling strength so it could confine or it could do something totally different. That’s the second part of the problem, and it’s the part where the lack of a gap could possibly be rescued by choosing a different field theory. But the first part of the problem is actually constructing the theory. This theory is expected to exist precisely because of the perturbation results, suggesting that we can UV complete the theory. If this part of the problem failed, it really would be the end of the world: it would mean that, basically, field theory does not exist.

  90. Mitch Says:


    I guess I just don’t see the big deal. QFT in 3+1 does not exist. Physicists can still calculate what goes on in collider experiments; it’s just that these techniques are way further from a coherent mathematical theory than anybody thought possible.

  91. matt Says:

    Mitch, I’m assuming you meant “suppose QFT in 3+1 doesn’t exist” rather than trying to claim that it doesn’t exist. This would be a big deal, in that a predictive theory would require going outside the framework of QFT, though perhaps one may say that this isn’t a big deal, as we already believe that is true given the existence of gravity.

  92. Mitch Says:

    matt, yeah, I left out a word there. I was talking about the hypothetical where the Yang–Mills problem is proved in the non-existence case. While I certainly think that it would be a “big deal,” I don’t think it would have the impact of P=NP on math, not to mention humanity as a whole. And, as Scott pointed out, that can be added to the list of reasons why experts think P probably does not equal NP.

  93. whatintheworld! Says:

    @matt “Actually, there are (roughly) two ways that the mass gap Clay conjecture could be false, and one of them would mean the end of the world, in a way even worse than having P=NP. ” Can you elaborate?

  94. John Sidles Says:

    To several people posting on this thread, I commend various letters from John von Neumann to (1) Paul Dirac in 1934, Rudolf Ortvay in 1939, and (2) Marston Morse in 1955. These letters may be found in the AMS’ recent collection John von Neumann: Selected Letters (2005).

    Von Neumann’s letters discuss specifically what would today be called quantum dynamics on state-spaces having a noncommutative geometry. More broadly, it’s von Neumann’s view that:

    [Quantum field theory] is not at all rigorous the way it is done, but it must either have a rigorous equivalent or something is totally wrong with the way that quantum mechanics is now proceeding. (I think that “public opinion” among physicists has accepted too easily that a rigorous equivalent must exist.) Either alternative would be interesting and significant, and the sooner a broad mathematical audience familiarizes itself with this challenge, the better are the chances that the matter will be resolved.

    Today, seventy years later, these questions are still regarded as subtle, difficult, and open.
