Is the P vs. NP problem ill-posed? (Answer: no.)

A couple days ago, a reader wrote to me to ask whether it’s possible that the solution to the P vs. NP problem is simply undefined—and that one should enlarge the space of possible answers using non-classical logics (the reader mentioned something called Catuṣkoṭi logic).  Since other people have emailed me with similar questions in the past, I thought my response might be of more general interest, and decided to post it here.

Thanks for your mail!  I’m afraid I don’t agree with you that there’s a problem in the formulation of P vs. NP.  Let me come at it this way:

Do you also think there might be a problem in the formulation of Goldbach’s Conjecture?  Or the Twin Prime Conjecture?  (I.e., that maybe the definition of “prime number” needs to be modified using Catuṣkoṭi logic?)  Or any other currently-unsolved problem in any other part of math?

If you don’t, then my question would be: why single out P vs. NP?

After all, P vs. NP can be expressed as a Π2-sentence: that is, as a certain relationship among positive integers, which either holds or doesn’t hold.  (In this case, the integers would encode Turing machines, polynomial upper bounds on their running time, and an NP-complete problem like 3SAT — all of which are expressible using the basic primitives of arithmetic.)  In terms of its logical form, then, it’s really no different than the Twin Prime Conjecture and so forth.
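To make the logical form concrete, here is one schematic way to render P≠NP as a Π2-sentence (schematic only; the details of encoding machines and step-counting as arithmetic are suppressed):

\[
\forall M \,\forall k \;\exists x \;\Big[\, M,\ \text{run on input } x \text{ for at most } |x|^k + k \text{ steps, fails to decide whether } x \in \mathrm{3SAT} \,\Big].
\]

The inner predicate — simulate \(M\) on \(x\) for a bounded number of steps and compare its answer against 3SAT — is computable, hence expressible in arithmetic, so the whole sentence is of the form "for all …, there exists …" over the integers.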

So then, do you think that statements of arithmetic, like there being no prime number between 24 and 28, might also be like the Parallel Postulate?  That there might be some other, equally-valid “non-Euclidean arithmetic” where there is a prime between 24 and 28?  What exactly would one mean by that?  I understand exactly what one means by non-Euclidean geometries, but to my mind, geometry is less “fundamental” (at least in a logical sense) than positive integers are.  And of course, even if one believes that non-Euclidean geometries are just as “fundamental” as Euclidean geometry — an argument that seems harder to make for, say, the positive integers versus the Gaussian integers or finite fields or p-adics  — that still doesn’t change the fact that questions about Euclidean geometry have definite right answers.
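A statement like that is just a finite computation, which anyone can check mechanically. Here is a quick sanity check in Python (illustrative only):

```python
def is_prime(n):
    """Trial division: n is prime iff n >= 2 and no d in [2, sqrt(n)] divides it."""
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

# "There is no prime strictly between 24 and 28" reduces to a finite check:
assert not any(is_prime(n) for n in range(25, 28))

# Whereas the neighboring numbers 23 and 29 are prime:
assert is_prime(23) and is_prime(29)
```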

Let me acknowledge two important caveats to what I said:

First, it’s certainly possible that P vs. NP might be independent of standard formal systems like ZF set theory (i.e., neither provable nor disprovable in them).  That’s a possibility that everyone acknowledges, even if (like me) they consider it rather unlikely.  But note that, even if P vs. NP were independent of our standard formal systems, that still wouldn’t mean that the question was ill-posed!  There would still either be a Turing machine that decided 3SAT in polynomial time, or else there wouldn’t be.  It would “only” mean that the usual axioms of set theory wouldn’t suffice to tell us which.

The second caveat is that P vs. NP, like any other mathematical question, can be generalized and extended in all sorts of interesting ways.  So for example, one can define analogues of P vs. NP over the reals and complex numbers (which are also currently open, but which might be easier than the Boolean version).  Or, even if P≠NP, one can still ask if randomized algorithms, or nonuniform algorithms, or quantum algorithms, might be able to solve NP-complete problems in polynomial time.  Or one can ask whether NP-complete problems are at least efficiently solvable “on average,” if not in the worst case.  Every one of these questions has been actively researched, and you could make a case that some of them are just as interesting as the original P vs. NP question, if not more interesting — if history had turned out a little different, any one of these might have been what we’d taken as our “flagship” question, rather than P vs. NP.  But again, this still doesn’t change the fact that the original P vs. NP question has some definite answer (like, for example, P≠NP…), even if we can’t prove which answer it is, even if we won’t be able to prove it for 500 years.

And please keep in mind that, if P vs. NP were solved after being open for hundreds of years, it would be far from the first such mathematical problem!  Fermat’s Last Theorem stayed open for 350 years, and the impossibility of squaring the circle and trisecting the angle were open for more than 2000 years.  Any time before these problems were solved, one could’ve said that maybe people had failed because the question itself was ill-posed, but one would’ve been mistaken.  People simply hadn’t invented the right ideas yet.

Best regards,

Unrelated Announcements: As most of you have probably seen, Subhash Khot won the Nevanlinna Prize, while Maryam Mirzakhani, Artur Avila, Manjul Bhargava and Martin Hairer won the Fields Medal. Mirzakhani is the first female Fields Medalist. Congratulations to all!

Also, I join the rest of the world in saying that Robin Williams was a great actor—there was no one better at playing “the Robin Williams role” in any given movie—and his loss is a loss for humanity.

141 Responses to “Is the P vs. NP problem ill-posed? (Answer: no.)”

  1. Evan Berkowitz Says:

    The point about a secret prime between 24 and 28 reminded me of this short story:

  2. william e emba Says:

    The P=NP question cannot have different answers in different set theories (meaning ZF along with any extensions). Turing machines can be encoded by a single integer, as can integral coefficient polynomials. The question of interest can then be quantified using integers only. In brief, P=NP is absolute.

    It is possible that set theory can matter, though. This would happen if “P=NP” were secretly a large cardinal. The way this works is that the consistency of large cardinals is not provable within ZF, yet the question is ultimately finitary: a set of finite strings in a finite alphabet (valid ZF-proofs), either does or does not contain a member that begins with a large cardinal axiom and ends with 0=1. Using Matiyasevich’s theorem, this corresponds to a certain (explicit) Diophantine equation having or not having solutions.

  3. Scott Says:

    william #2: OK, but even if (say) P≠NP were only provable with the help of a large-cardinal axiom, since it’s a Π2 sentence, I think it would still make perfect sense to discuss its truth or falsehood, independent of any axiom system whatsoever. (In plain terms: there’s either a Turing machine that decides SAT in polynomial time, or there isn’t one. Which set-theoretic axioms are necessary or sufficient to prove which way it goes is a subsidiary question, though obviously an important one.)

    In any case, as I said, I don’t think there’s anything here that distinguishes P vs. NP from the Twin Prime Conjecture or any other arithmetical question.

  4. Aatu Koskensilta Says:

    William, I’m pretty sure we can take it as given that Scott is not entirely ignorant of the MRDP theorem and all that jazz.

    We know, from various results in logic, that there are Pi-2 statements — that is, statements that can be put in the form “for all naturals x, there exists a natural y, such that D(x, y, a1, …, an) = 0” for a Diophantine equation D(x, y, a1, …, an) = 0 — that are independent of ZFC. While there’s really nothing to recommend the idea that P=NP is independent of ZFC, there’s also nothing to rule this possibility out as a purely logical possibility. This really has nothing to do in particular with large cardinals, or the rather bewildering suggestion that P=NP might “secretly be a large cardinal”, beyond the general observation that, from the second incompleteness theorem, we know that large cardinal axioms have arithmetical consequences.

    Now, it is true that arithmetical statements are absolute in various technical senses, in that they can’t be disturbed by moving to a forcing extension, that they are true in an inner model iff they are true in the set theoretic universe, and so on, and that this is owing to the fact they only involve quantification over the lower reaches of the cumulative hierarchy. But this does not imply, or plausibly suggest to those in the know, anything in particular about their undecidability or not in extensions of ZFC.

  5. Itai Says:

    I think that the problem with P vs NP is that computation is still ill-defined mathematically, and that is why all those questions (P vs NP, BPP vs NP, BQP vs NP) are stuck; that is because the scientific community does not grasp that computation is a physical notion and cannot be a pure mathematical notion, but instead lies on some deep physical concepts beneath the mathematical abstraction layer we all know.
    Time complexity has a misleading name, because in every computation model it counts something else that cannot be described mathematically inside out and is not physical time: movement of the head in a TM, classical boolean gates, quantum unitary gates, random gates, etc.

    My idea was to combine the fundamental connection between physical time and energy to define physically effective computation without being dependent on the computation model.
    The more energy you have, the less physical time your computation will take, as seen in Scott’s Zeno computer and the relativistic twin-paradox computer.
    So the solution is to define quantum computation complexity as quantum time multiplied by quantum energy, which yields quantum action, which has h-bar units.
    The less quantum action your computation takes, the more physically effective it is.
    From now on, this definition is governed by the law of least action, which yields almost all known physical theory, including QM and GR.

    For those who think that this idea is mad and can lead nowhere, read what Prof. Tom Toffoli wrote in this article; I am really anxious to see responses about his article.

  6. Dave R Says:

    Thanks Scott. Can you (or someone) explain these real/complex analogues of P vs NP? Is this an extension from decision problems to optimization problems?

  7. lewikee Says:

    Out of curiosity, have any non-classical logics (such as this Catuṣkoṭi logic you mention) ever been helpful in solving problems in computer science where it was clear our more conventional logic could not have done the job? I must say that wikipedia article leaves me skeptical of Catuṣkoṭi logic’s value in areas that don’t involve some sort of mysticism…

  8. william e emba Says:

    I wasn’t disagreeing with Scott. I felt his bare statement of “independence” needed amplification. In fact, one has to go to three alternating quantifiers over functions/subsets/reals to get a “number-theoretic” statement whose truth is actually independent, and not just out of reach, so to speak.

  9. John Sidles Says:

    Scott Aaronson observes “You could make a case that some of them [modifications to P vs NP] are just as interesting as the original P vs NP question, if not more interesting.”

    Consilient with Scott’s observation, the authority of the Scientific Advisory Board (SAB) of the Clay Mathematics Institute (CMI) extends to unilaterally modifying the terms of the Millennium Prizes:

    “The SAB may consider recommending the award of the prize to an individual who has published work that, in the judgement of the SAB, fully resolves the questions raised by one of the Millennium Prize Problems even if it does not exactly meet the wording in the official problem description.”

    For details, see the (well-rated) MathOverflow community wiki “For which Millennium Problems does undecidable -> true?” (which includes answers by several well-regarded researchers).

    Particularly in regard to P vs NP, candidates for propositions that are (in Scott’s phrase) “just as interesting as the original P vs NP question if not more interesting” are discussed at length in the (again well-rated) TCS StackExchange question “Are runtime bounds in P decidable? (answer: no)”, which catalyzed the (also well-rated) TCS StackExchange community wiki “Do the undecidable attributes of P pose an obstruction to deciding P versus NP? (answer: maybe)”.

    Conclusion  Complexity-theory students who aspire to win the P vs NP Millennium Prize — as guided by Scott’s above-quoted observation and by the above-quoted codicil of CMI/SAB — can reasonably adopt a strategy of seeking alternative proof frameworks that (in the CMI/SAB’s phrasing) “fully resolve the questions raised by the P vs NP problem, even if they do not exactly meet the wording in the official problem description.”

    So what is the best way to frame the P vs NP problem? Is there a better way than the Clay Institute’s framing of the problem? These questions are reasonable for young theorists to reflect upon (and older ones too).

    Note  In previous posts, Scott has remarked that provocative Shtetl Optimized comments provide a good starting point for earning reputation points by posting well-posed StackExchange questions/answers; this *DEFINITELY* is the case in regard to the P vs NP problem.

  10. Scott Says:

    lewikee #7: Non-monotonic logics (where you can retract ‘logical implications’ on gaining more information, e.g., deduce from ‘Tweety is a bird’ that ‘Tweety flies,’ but then take it back later on learning that Tweety is a penguin) have had some modest importance in knowledge representation and AI. But no, I don’t know of any examples where Catuṣkoṭi logic or anything like it was ever useful for CS. I share your sense that Catuṣkoṭi logic is basically a mystical concept.
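    As a concrete illustration of the Tweety pattern (a toy sketch only, not any standard non-monotonic logic system; the function and the facts here are made up), the key feature is that a conclusion drawn by default gets retracted when more specific information arrives:

```python
def flies(animal, facts):
    """Defeasible inference: a specific fact ('penguin') defeats the
    default rule that birds fly."""
    if ("penguin", animal) in facts:   # specific knowledge overrides the default
        return False
    if ("bird", animal) in facts:      # default rule: birds fly
        return True
    return None                        # no applicable knowledge

facts = {("bird", "tweety")}
assert flies("tweety", facts) is True   # default conclusion: Tweety flies

facts.add(("penguin", "tweety"))
assert flies("tweety", facts) is False  # retracted on learning Tweety is a penguin
```

    The point is that adding a fact can invalidate a previously drawn conclusion, which is exactly what classical (monotonic) logic forbids.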

  11. Scott Says:

    Dave R #6: No, the real- and complex-valued analogues of P vs. NP have nothing to do with the extension from decision problems to optimization problems. Stating P vs. NP in terms of optimization problems is straightforward, and gives you a question that’s precisely mathematically equivalent to the original P vs. NP question.

    By contrast, the real and complex analogues are for different models of computation: models where the inputs to your “program” are tuples of real numbers or complex numbers, and where computation consists of adding, subtracting, and multiplying the numbers (together with constants), and sometimes also doing division, equality tests, and greater-than tests. Crucially, you don’t have access to the bit-representations of the numbers in any of these continuous models, nor do you have (for example) the floor or ceiling functions, or any other truncation or rounding ability.

    Thus, these continuous models are both “stronger” and “weaker” than ordinary digital computation: stronger because you can (e.g.) multiply two real numbers in a single time step, but weaker because you can’t (e.g.) extract a bit that tells you whether floor(x) is even or odd, no matter how much time you spend on it.
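    To make the contrast vivid, here is a toy sketch in Python (illustrative only, and not the formal Blum–Shub–Smale machine definition) of an algebraic “program” whose steps are exact arithmetic operations on real-valued registers, with no access to bit representations, floor, or rounding:

```python
def algebraic_eval(program, x):
    """Run an algebraic 'program': a list of (op, i, j) steps, each combining
    two existing registers by exact arithmetic. Registers start as r[0] = x,
    r[1] = 1.0. Each step counts as one unit of time, regardless of how many
    bits the operands would need."""
    r = [x, 1.0]
    for op, i, j in program:
        if op == "add":
            r.append(r[i] + r[j])
        elif op == "sub":
            r.append(r[i] - r[j])
        elif op == "mul":
            r.append(r[i] * r[j])
    return r[-1]

# Computes x^2 + 1 in two algebraic steps:
p = [("mul", 0, 0),   # r[2] = x * x
     ("add", 2, 1)]   # r[3] = x^2 + 1
assert algebraic_eval(p, 3.0) == 10.0
```

    Notice that no sequence of such steps can extract a bit of x or compute floor(x): every register is a polynomial in x, which is exactly the “weaker” side of the trade-off described above.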

    The point of the continuous models is not to capture how any physically-realistic computer works, but rather to model how a very large number of numerical algorithms work—and, most importantly, to provide an “alternate universe of computation” where the tools of analysis, algebraic geometry, etc. can be directly brought to bear—and, for that reason, where the analogues of questions like P vs. NP might be easier to answer than they are in the Boolean world. That hope, which has its origins in the 60s and 70s, is still widely shared today, and is assumed from the very beginning in (e.g.) Mulmuley’s Geometric Complexity Theory program.

    For more about this, I recommend the book Complexity and Real Computation by Blum, Cucker, Shub, and Smale (see here to look online at Richard Karp’s foreword). In particular, that book carefully defines analogues of the P vs. NP question for Turing machines over R and C—questions that might indeed be easier than the “original” question, but that still look incredibly hard (and that, like the original, remain open)! At present, we don’t even know any implications among the Boolean, real, and complex versions of P vs. NP, although we do know “discrete” circuit lower bound conjectures that, if true, would imply PR≠NPR and PC≠NPC.

  12. Aatu Koskensilta Says:

    William, whatever it is you mean by “independent” is apparently not quite what’s usually meant by that term in logic. Earlier you mentioned absoluteness, and now allude to the third level in the analytic hierarchy as a lower-bound, so to speak, on where one might first encounter “true independence” — do you perhaps have the Shoenfield absoluteness theorem in mind? If not, I’m afraid it’s difficult to make anything of your remarks. And if you are, it would be a good idea to spell this out explicitly, and try to stick to standard terminology.

  13. John Sidles Says:

    A future-history of complexity theory, as inspired by the (well-rated) TCS StackExchange question “Does P contain languages whose existence is independent of PA or ZFC?”


    Minutes of the Scientific Advisory Board (SAB)
    of the Clay Mathematics Institute (CMI)

    January 1, 2016

    SAB Member Alpha  Good news! Researcher Alice has proved that PvsNP is independent of ZFC. It is hereby moved that Alice receive the PvsNP Millennium Prize, on the grounds that Alice’s Theorem trivially implies Alice’s Lemma: “no algorithm that provably solves 3SAT provably runs in PTIME”, and this lemma fully resolves the PvsNP questions as originally posed.

    SAB Member Bob  More good news! Researcher Bob has found an independent proof of Alice’s Lemma: that is to say, Bob’s proof neither assumes nor implies Alice’s Theorem. It is hereby moved that Bob share the PvsNP Millennium Prize with Alice.

    SAB Member Alpha (receives phone call)  “Here’s trouble! A fatal flaw has been found in Alice’s proof! That flaw however is absent from Bob’s proof.”

    SAB Member Carl (wonderingly) “By the SAB’s own vote, Bob is now the sole winner of the PvsNP Millennium Prize. But does Bob’s Theorem — as we must now call it — ‘fully resolve the PvsNP question’, in the sense that the framers of the CMI/SAB by-laws intended and required?”


    Conclusion  Tough 21st century questions like these suggest that future-mathematics is likely to be comparably controversial to past-mathematics.

  14. william e emba Says:

    “Independence” of arithmetical questions usually refers to PA or similar systems, not ZFC, precisely because of set-theoretic absoluteness. Scott’s usage was peculiar, not mine.

    And of course I had the Shoenfield theorem in mind.

  15. Scott Says:

    william #14: You had me worried that I said something wrong, so I reread my post, but I did explicitly say “independent of standard formal systems like ZF set theory.”

    Of course, Shoenfield’s theorem tells you that you can’t change the answer to P vs. NP by forcing (or, say, by imposing V=L), which is very good to know, but it doesn’t tell you that the answer couldn’t be independent of ZF in some other way. (Personally, I consider the possibility massively unlikely, with no good reason to take it seriously that wouldn’t apply about equally well to any other mathematical question that happens to be unanswered right now. But I certainly can’t rule it out.)

    On the other hand, to get that P vs. NP is “well-posed,” or has a “definite right answer,” I’d argue that we don’t need any theorem: the fact that it’s an arithmetical statement suffices. If someone rejects that arithmetical questions have definite right answers independent of the axiom system, then it seems to me that I can trap that person in an infinite regress, where they have to admit that questions about provability from the axioms don’t have definite answers either, and therefore they can’t even get started doing math.

  16. william e emba Says:

    The absoluteness of arithmetic statements means they hold in all models of ZF, regardless of how you come upon them. There is no guarantee that ZF will prove a given arithmetic statement X (it may be a secret large cardinal). However, there is a guarantee that there is a unique possible choice of X or ~X that consistent extensions of ZF might prove.

    If you want weird stuff, you have to do something like move into the right topos. I mean, there’s even a topos for all the Cantor crackpots. Or find an ω-inconsistent model of PA where a fixed polynomial time bounds the solution to every truly finite 3SAT problem, but P is not NP just the same.

    As a concrete example where this type of question comes up, modern algebraic geometry was built using inaccessible cardinals, but no one cared until Wiles proved FLT. It has fairly recently been shown that FLT is in fact a theorem of ZF, no large cardinals needed. Whether it is independent of PA, I have no idea. FLT is certainly independent of Raphael Robinson’s arithmetic Q (which means we know ahead of time that 99% of the Fermat crackpots are wrong).

  17. Darrell Burgan Says:

    Rest in Peace, Mr. Williams. Nanu, nanu.

    And may there be many more female award winners in all the sciences. It’s truly embarrassing that there are not more female scientists and engineers than there are, in this day and age.

  18. komponisto Says:

    Scott (#15):

    If someone rejects that arithmetical questions have definite right answers independent of the axiom system, then it seems to me that I can trap that person in an infinite regress, where they have to admit that questions about provability from the axioms don’t have definite answers either, and therefore they can’t even get started doing math.

    I don’t see how the last part follows. Couldn’t it be the case that some questions about provability have definite answers, and others don’t?

  19. Jim Says:

    I think the topic you write about here is very important, but I think the paragraph in the middle which starts “So then, do you think…”

    a) Doesn’t really add anything and

    b) Is kind of haranguing the reader. Especially if this presumed reader is a beginner who doesn’t know about non-Euclidean geometry or p-adic geometry or whatever.

    Also on undecidability, I’d submit that Greg Martin’s comment is the right one to make here as well

  20. Aatu Koskensilta Says:

    You’re simply mistaken here, William, which you yourself should easily realize if you just recall Gödel’s completeness and incompleteness theorems, which apply to ZF(C) and friends, just as well as to PA, and pause to think about it for a moment.

    As both Scott and I have explained, the absoluteness of arithmetical statements only means that we can’t use the standard set theoretic techniques of moving to forcing extensions, cutting them down to suitable inner models if necessary, and so on, to show the independence of arithmetical statements (such as P=NP, Goldbach’s conjecture, the twin prime conjecture, …). It doesn’t mean that there aren’t models of set theory that disagree over arithmetical matters, only that whenever such disagreement occurs, it must involve non-standard (that is, non-well-founded) models of set theory.

    As for large cardinals, they enter into this in that pretty much the only way we know today of showing arithmetical statements independent of (an extension of) ZF(C) is to show that it implies the consistency of some large cardinal axiom. But it makes little sense to say of an arithmetical statement that it is, or might be, a “secret large cardinal”. It’s true that set theorists sometimes talk as if just any old statement A such that ZFC + A is equiconsistent with ZFC + LA, for an axiom LA positing the existence of a large cardinal, was itself a large cardinal axiom, but this is just loose talk. It makes perfect sense to think of, say, projective determinacy or the existence of 0^# as moral large cardinal axioms, since even though they don’t in themselves imply the existence of any large cardinals, they do imply that there are inner models containing large cardinals, they capture the structural implications of their large cardinal equivalents, and so on. Nothing of this sort can happen with arithmetical statements.

  21. fred Says:

    Could someone explain briefly what is meant by “the absoluteness of arithmetical statements”?
    E.g. “there is no prime number between 24 and 28” is an arithmetical statement, and it’s true no matter what (after defining the set of naturals, and +,/,*,- operations)?

    What’s an example of something added to “arithmetical statements” that would suddenly put us in the realm of the relative? Is it when you start looking at uncountable sets (the reals)?

  22. Scott Says:

    Jim #19: I apologize, then; I certainly didn’t mean to “harangue” the reader. (Though should one never argue by asking rhetorical questions? Is that what you think? 🙂 )

    Indeed, let me go further: the original asker of the question has generously given me permission to use his name. His name is Daya Chinthana Wimalasuriya, and here’s a blog post where he sets out his thoughts. Daya did something very honorable: he modified his views after reading my reply. So, I come not to harangue Daya, but to praise his open-mindedness.

  23. Gil Kalai Says:

    I agree completely with Scott that NP=!P is a well-posed mathematical problem. I am not even sure what “ill-posed” can mean in this context. There are mathematical questions which are informal or vague or questions in the interface between mathematics and the “real world” which are not fully rigorous and even those can be scientifically quite useful. But NP=!P is not of this kind. It is a concrete fully rigorous mathematical question.

    There are questions related to NP=!P which are more informal: One is our belief that NP=!P has profound non-asymptotic implications. This still is (or can be made) fully mathematical, but it is a bit vague, and there are various ways to make it more precise. Let me call it the “non-asymptotic NP=!P thesis.”

    Another such related question is the “physical NP=!P” problem. This question is about the physical world so it is not fully within mathematics (at least not until we write all the laws of physics in precise mathematical terms 🙂 ) and it also refers not just to asymptotic. Still it is a useful and important question.

    We can still ask, for NP=!P (as for other questions in mathematics), if it is possible that P=!NP and yet there are some obstacles to proving it: either obstacles which will make proving it hard, or obstacles which will make proving it impossible.

    There are three (and more) directions one can consider.

    a) The known obstacles for proving NP=!P (such as “natural proofs”) are the tip of an iceberg which makes the NP=!P problem not only difficult but essentially impossible.

    b) NP=!P is undecidable (in ZFC).

    c) Proving NP=!P genuinely reflects a computational difficulty which exemplifies the NP=!P phenomenon itself, and hence is not possible.

    It is correct that even b) – c) and some analog of a) may be asked about many other mathematical questions. We also have to be careful about c) and understand that by “the NP=!P phenomenon” we must refer to the non-asymptotic NP=!P thesis.

    Some people regard the self-referential and “logic-theory” nature of the NP=!P conjecture as making it more amenable to such concerns, and indeed people studied related things.

    One can hope that “proof complexity,” which is an important part of computational complexity, could be developed so as to show interesting intractability theorems for proving NP=!P. But the reality so far (to the best of my knowledge) is that proof complexity gets stuck at more or less the same place (a little earlier, in fact) that circuit complexity gets stuck.

    Of course another issue that some people ask is if NP=!P is “the only” or “the best” or “the correct” way to express mathematically computational intractability, or perhaps there are other ways, with perhaps better interaction with other areas of mathematics. Maybe “P” is too inclusive? Maybe NP is too inclusive? etc etc.

  24. william e emba Says:

    Aatu: I’m sorry. While I did explicitly refer to nonstandard models in my #16, this was after my statement about X and ~X, and I should have said that before. To clarify, within standard models, there is at most one answer X or ~X for X arithmetic. Because of this, people don’t normally use “independence” for arithmetic statements vis-a-vis ZF. It’s either true or false in ZF+LA. They use “independence” in set theory to refer to statements that are expected to go either way. Arithmetic statements are typically called “independent” in the context of models of PA, since there the statement can go either way.

    Essentially, set theorists typically consider ZFC plus as many large cardinals as you want to get away with to be the “known truth”, and independence is with respect to this.

    By “secret large cardinal” I mean something like Con(ZF+LA), a purely arithmetic statement. Translate it into a Diophantine equation whose solutions correspond to proofs that ZF+LA is inconsistent, show it to your number-theoretic friend, and laugh when he’s not looking. (And cry when he finds a solution.)

    fred: Three alternating quantifiers referring to reals/functions/subsets gives you possibly “non-absolute” statements. The expression “for all integers n, there exists a Turing machine M, for all integer polynomials p(k)” has three alternating arithmetic quantifiers, and these do not contribute to the set-theoretic complexity. An expression like “for all functions f:N→N, there exists a real number x, for all subsets A⊆N” has three alternating “real” quantifiers, and these raise the specter of set theory not being able to decide the statement.

  25. Vadim Says:

    If P!=NP (or any well-defined conjecture for that matter) proved to be independent of ZF, how likely is it that it would be provable in a stronger system? Are there any examples of that happening, excluding systems where the conjecture is taken as an axiom? For that matter, Scott, I remember you writing (and I apologize if I’m mis-paraphrasing) that hypotheses like the AoC and CH are different from P vs. NP because they deal with abstractions (transfinite cardinalities) that have no existence outside of the formal system in which they’re defined. Is there an example of a P vs. NP-like problem being proven independent of ZF, or have all examples of formal independence been for AoC/CH-like problems?

  26. Scott Says:

    Vadim #25: That’s an excellent question. Three answers:

    (1) There are “P vs. NP-like statements” (i.e., more-or-less “natural”, arithmetical statements) that have been proved independent of Peano arithmetic, which is much weaker than ZF set theory. Indeed, these statements generally imply the consistency of PA. The famous examples here are the non-losability of the hydra game and Goodstein’s Theorem.

    (2) There are the statements like Con(ZF) that you get from Gödel’s Theorem, which are “purely arithmetical” (i.e., they state that a certain computer program never halts, and can even be phrased as the unsolvability of particular Diophantine equations), and which Gödel tells you are unprovable in ZF. Of course, these are not the sorts of statements that would ever arise in a non-self-referential, non-Gödelian context.

    (3) Harvey Friedman has been working for 45 years on finding “natural” arithmetical statements that are provably independent of ZF. Moreover, he’s made what he considers very exciting progress on this just within the last few months. For more, see his list of manuscripts, such as this one. (You can decide for yourself whether you consider such statements “natural” or not.)

  27. Douglas Knight Says:

    Gil, I believe Scott means something much stronger by “well-posed” than merely “rigorous mathematical question.” In particular, I think he would say that the continuum hypothesis and axiom of choice are “rigorous” but not “well-posed.” They depend on what we mean by “sets,” whereas arithmetic statements depend only on what we mean by the “integers,” about which Scott does not believe that there is any ambiguity.

    Of course, other people disagree. Some finitists don’t believe in the integers and others don’t believe that there is consensus on them. More common are Platonists who believe that there is a correct notion of sets.

  28. John Sidles Says:

    Vadim #25: you and Gil Kalai and Shtetl Optimized readers generally might enjoy (University of Hawaii professor) Bjørn Kjos-Hanssen’s thoughtful answer to the MathOverflow question “For which Millennium Problems does undecidable -> true?”

    As I understand Kjos-Hanssen’s answer (perhaps imperfectly), a “computably bounded P≠NP conjecture” is stated, which by construction is an explicit \(\Pi^0_1\) statement, whose proof in ZFC would plausibly suffice — in the view of many? most? almost all? mathematicians — to “fully resolve the questions raised by the PvsNP problem”, per the CMI/SAB criterion of #9.

    Conclusion  There’s more than one plausible way to grapple with the tricky issues of decidability / verifiability / independence that are associated to the PvsNP problem. And this is why Lance Fortnow’s survey “The Status of the P Versus NP Problem” (2009) is entirely right (as it seems to me) to conclude

    “None of us truly understands the P versus NP problem, we have only begun to peel the layers around this increasingly complex question.”

  29. wolfgang Says:

    >> There would still either be a Turing machine that decided 3SAT in polynomial time, or else there wouldn’t be.

    If P=NP is undecidable, it means it can neither be proved nor refuted – this further means that we will *never* be able to see a Turing machine that decided 2SAT in polynomial time; otherwise the existence of this TM would constitute a proof.

    So does this not mean that “P=NP is undecidable” is equivalent to “P != NP” for *all* practical purposes?

  30. Scott Says:

    wolfgang #29: Firstly, you mean 3SAT, not 2SAT.

    More important, it’s perfectly conceivable that there could be a polynomial-time algorithm for 3SAT, but no proof that it always worked (or that it always halted in polynomial time). If P=NP but the question were independent of ZF set theory (and any other formal system that people believed), then that would be our actual situation.

  31. Aatu Koskensilta Says:

    William: since this is Scott’s blog, and since presumably he doesn’t want to make it into the go-to-site for Usenet logic wars reenactment enthusiasts, allow me too to apologize for the in hindsight perhaps unnecessarily snarky tone of my replies to you. I’m a tedious stickler, a humourless git of a pedantic bore, when it comes to logical matters, you see, and sometimes I do indulge to excess.

    It appears, in light of your further comments, that you’re not in fact mistaken on any point of mathematical substance, so at this juncture I hereby take back all my vile insinuations to that effect, and offer instead a heartfelt mea culpa and apologize; but, owing, no doubt, to various debilitating flaws in my person and intellectual constitution, I must still insist that you are mistaken as to the linguistic habits of set theorists, about what sort of thing they are in the habit of saying. As a moving testament to my humility, and also to my all too uncommon willingness to entertain the possibility I might be mistaken, I did have a look at “The Higher Infinite” by Kanamori, a few papers by Shelah, Kunen’s classic text on set theoretic independence results, Peter Koellner’s insightful and careful waffle on set theoretic independence, and a few other canonical references. And, as it turns out, it transpired that set theorists do mean by independence just what is usually meant in logic by independence.

  32. John Sidles Says:

    Scott’s observation (of #30) has a natural restriction that (as it seems to me) is comparably thought-provoking to Scott’s original observation:

    Scott remarks  “More important, it’s perfectly conceivable that there could be a polynomial-time algorithm for ~~3SAT~~ BosonSampling, but no proof that it always worked (or that it always halted in polynomial time).”

    Three consilient reasons for contemplating this restriction are:

    • SAT-solving and BosonSampling similarly compose natural venues for verifying and validating broad classes of optimization and simulation frameworks.

    • SAT-solving and BosonSampling alike are reasonably unencumbered by secrecy and cryptolects.

    • Existing SAT-solving competitions compose an outstandingly effective teaching-and-research venue; BosonSampling simulation similarly lends itself naturally to (future?) competitions.

    Quantum metrology triangles (QMTs; see for example arXiv:1204.6500) are another class of optimization/simulation problems that exhibit these three traits … my simulationist colleagues and I would be grateful for further suggestions of problem-classes that possess this three-fold advantage.

    To paraphrase Jane Austen:

    “It is a truth universally acknowledged, that a simulationist in possession of a framework, must be in want of validation and verification tests that are maximally attractive yet publicly presentable.”

    Conclusion  SAT-solving competitions are teaching us that broad classes of practical problems in optimization and simulation, that rigorously are thought to be EXPTIME, effectively are PTIME; it is natural to wonder whether this is true in particular of QMT simulation, BosonSampling, and (perhaps) many other simulation/optimization problems.

  33. Gil Kalai Says:

    Douglas (#27) “Gil, I believe Scott means something much stronger by “well-posed” than merely “rigorous mathematical question.” In particular, I think he would say that the continuum hypothesis and axiom of choice are “rigorous” but not “well-posed.” ”

    It would be a little funny to speculate on what Scott means when we can simply ask. As I said, I don’t know what well-posed and ill-posed mean in this context, and I would regard the continuum hypothesis as a well-posed mathematical question.

    One test case: Is finding the Ramsey number R(10,10) a well-posed mathematical question?

  34. Jay Says:

    Wolfgang #29

    Actually, Levin already provided a TM which solves NP problems in polynomial time if and only if P=NP (see the Wikipedia page).
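    The gist of Levin-style universal search can be sketched in a few lines of Python. This is only a toy — the finite `programs` list and the generator interface stand in for a genuine enumeration of all Turing machines — but it shows the key point: because 3SAT answers are checkable in polynomial time, the search never has to trust any candidate program’s output.

```python
from itertools import count, product

def check_3sat(clauses, assignment):
    """Polynomial-time verifier: does the assignment satisfy every clause?
    Clauses are lists of nonzero ints; literal k means var |k| is True iff k > 0."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

def universal_search(clauses, programs, max_steps=10**6):
    """Toy Levin-style search: interleave candidate programs, giving the
    i-th program roughly a 2^-i share of the time, and verify every
    proposed assignment with the polynomial-time checker above."""
    gens = [prog(clauses) for prog in programs]
    for t in count(1):
        if t > max_steps:
            return None
        for i, gen in enumerate(gens):
            if t % (2 ** i) == 0:  # program i gets a step every 2^i ticks
                try:
                    candidate = next(gen)
                except StopIteration:
                    continue
                if candidate is not None and check_3sat(clauses, candidate):
                    return candidate

def brute_force(clauses):
    """One 'program' in the enumeration: exhaustive search, one guess per step."""
    n = max(abs(lit) for clause in clauses for lit in clause)
    for bits in product([False, True], repeat=n):
        yield {v + 1: bits[v] for v in range(n)}
```

    If P=NP, some program in the (true, infinite) enumeration finds satisfying assignments in polynomial time, and the dovetailing adds only a constant-factor slowdown — even though we may never be able to prove any of this about the machine in question.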

  35. vzn Says:

    (hey dude has it been more than a year yet?)
    there will long be some lingering suspicion that there could be some glitch in the P=?NP question as long as it’s unsolved. you yourself have one of the best surveys available on the possibility of its undecidability… how could you not even cite that? huge oversight there. :p

    also blums axioms re “speedup theorem” show that sometimes these questions can be very subtle, goldreich refers to so called “pathological languages” wrt it. can anyone show me the simple proof that P=?NP is not subject to the blum speedup theorem? (never got the nerve to ask this on… might try it sometime if nobody answers here)

    (more on math awards & robin williams vs math/CS in TMachine blog)

  36. Anonymous Coward Says:

    If you give me a one hour exam and have me sword fight you for the first 45 minutes, I don’t think you could really call it an hour long exam. Likewise, I find the statement that squaring the circle was open for 2000 years to be a little disingenuous, as true as it may be.

  37. Scott Says:

    Douglas and Gil: I would say that, regardless of what philosophical position one takes on whether questions like AC and CH are “well-posed” or not, arithmetical questions like P vs. NP are certainly well-posed.

  38. Scott Says:

    vzn #35: Sorry, I don’t agree. Would you also say that there will be lingering suspicion of a “glitch” in the Riemann Hypothesis (i.e., not in its truth or falsehood, but in the meaningfulness of the question) until the day it’s resolved? Mathematical questions have the amazing property that they can remain open for 200 years, without the slightest doubt about what they mean (or what would count as an answer to them) arising the entire time.

    Also, my survey is here, for those who are interested. My biggest oversight in the survey is that, in my excitement to explain various points about logic I had recently learned, I failed to make clear the lack of any positive reason for thinking it’s actually true that P vs. NP is independent of strong axiom systems. (Maybe I considered that so obvious that it didn’t need to be stated.) Even then, notice that in the conclusion section of the survey, I stress the distinction between a question’s being independent of set theory and its “not having a definite answer,” and I explain why, even if an arithmetical question like P vs. NP turns out to be independent of set theory, it certainly has an answer (even if we can never know it).

  39. John Sidles Says:

    Scott asserts “Even if an arithmetical question like P vs. NP turns out to be independent of set theory, it certainly has an answer (even if we can never know it).”

    Without venturing to disagree with this statement, it is well for students (especially) to be well-read in contrasting views, of which a particularly celebrated passage appears in Bill Thurston’s “On Proof and Progress in Mathematics” (1994, 480 citations):

    “On the most fundamental level, the foundations of mathematics are much shakier than the mathematics that we do. Most mathematicians adhere to foundational principles that are known to be polite fictions.”

    “For example, it is a theorem that there does not exist any way to ever actually construct or even define a well-ordering of the real numbers.”

    “There is considerable evidence (but no proof) that we can get away with these polite fictions without being caught out, but that doesn’t make them right.”

    The entirety of Thurston’s essay (on-line at arXiv:math/9404236 [math.HO]) is heartily recommended to those Shtetl Optimized readers who are so fortunate as to still be able to read it for the first time.

    The set of PTIME Turing Machines (P-TMs) has lesser cardinality than the reals, and so the runtime ordering of this set is presumably easier than ordering the reals, right?

    Wrong! Fundamental problems associated to runtime-ordering of P-TMs lie much nearer the surface than problems in ordering the reals, and moreover (as Juris Hartmanis’ writings emphasize) the evidence that complexity theory can continue to “get away with polite fictions” (in Thurston’s phrase) in regard to run-time ordering is more ad hoc than any rigor-minded mathematician would prefer.

    More broadly, questions associated to the foundations of mathematics approach the surface more closely in the P vs NP Problem, than in any other of the Millennium Prize problems.

    Lessons from Quantum Field Theory  It is interesting that the Clay Institute’s Millennium Prize problem description for “Quantum Yang–Mills Theory” — authored by Arthur Jaffe and Edward Witten — explicitly acknowledges that fundamental difficulties can occur:

    “One does not yet have a mathematically complete example of a quantum gauge theory in four-dimensional space-time, nor even a precise definition of quantum gauge theory in four dimensions. Will this change in the 21st century? We hope so!”

    It is striking that no such foundational humility is evident in the P vs NP Problem statement. Why is this? (don’t ask me)

    Conclusion I  To pose the Quantum Yang–Mills Problem with even a partial approach to mathematical rigor, it was necessary to impose the restriction of gauge-invariance (with its attendant superradiant dynamics and asymptotic freedom). And so it’s not unreasonable to suppose, that posing the P vs NP Problem with even a partial approach to mathematical rigor, may similarly require a restriction sufficient to at least partially order a canonical set of TM’s that accept the languages of (and thereby define) the complexity class P.

    Conclusion II  Arguably the main virtue of discussing the decidability (or not) of P vs NP, is the very many wonderful papers, written by great mathematicians whose works students are well-advised to read, that grapple with this class of questions.

    Perhaps in regard to this second conclusion, we can all “splice hands!”

  40. Gil Kalai Says:

    Scott (#37): and R(10,10)=?

  41. yannick Says:

    I would like to argue that logical independence of P vs. NP would be a little different from the other independence results which we know, even though there are reasons to suspect it might be logically independent (e.g., the results of Hartmanis and Hopcroft, 1976).

    Changing the rules of a game, say chess, has a major impact on the game. Whether you may move your bishop one square per move or any number of squares changes the game in a very significant way. But both games are playable! Even if one might not have much in common with the other.

    In that sense, the rules of chess don’t collide with our world. But if you claim chess would make a statement about our world, like being a model for some observable phenomenon, one may dispute that claim.

    Take the parallel postulate for example. Adding or removing it from your set of axioms results in different, albeit consistent, constructs, just like in the chess example above. Only if someone also makes a claim about our world does this choice become disputable. For example, if someone claimed that Euclidean geometry could be used to describe all physical phenomena, then the choice of axioms with regard to this claim would become disputable.

    This kind of logical independence has two parts: one part is which abstract “game” we want to play; the other is which game we are playing in our world.

    Now, the logical independence of the P vs. NP question is a little different, because it makes a statement about the existence of efficient algorithms. P vs. NP is never just the abstract question of which game we want to play, since it always comes with a claim about our world attached. It is also always a statement about what kind of world we live in.

  42. Neel Krishnaswami Says:

    So then, do you think that statements of arithmetic, like there being no prime number between 24 and 28, might also be like the Parallel Postulate? That there might be some other, equally-valid “non-Euclidean arithmetic” where there is a prime between 24 and 28? What exactly would one mean by that?

    It’s actually well-known how to construct “non-Euclidean arithmetics”: build upon linear logic rather than classical or intuitionistic logic. The normalization problems for different variants of linear logic correspond to different complexity classes, and you can use this fact to build set theories with varying notions of total relation. See, for example, Kazushige Terui’s paper Light Affine Set Theory, in which he shows how to construct set theory over linear logic. In this set theory, the Peano and binary natural numbers do not form isomorphic sets — induction in LAST is weaker than in ordinary set theory, which changes the character of arithmetic in a fundamental way. (In fact, it turns out that you can show a function is polytime if and only if it is provably total in LAST.)

    So even if it turns out that the exponentiation function is not total, as Edward Nelson fears, we can still do mathematics.

  43. Serge Says:

    Is the existence property the relevant criterion for an algorithm in complexity theory? In my view, the distinctive feature of hypothetical polynomial SAT-solvers is rather whether they’re accessible or not. A world where P=NP but where the algorithm is infeasible or unknown isn’t very different from one where P!=NP, especially when the proof that P!=NP is also infeasible or unknown. I admire the symmetry of this situation, which probably has a physical interpretation that’s yet to be found.

    So we may keep wondering if Nature has made NP-complete problems exponential, or if it’s just their polynomial solution that’s impossible for us to find. The more I think about it, the less I find such a distinction meaningful. If Andrew Wiles was able to eventually prove Fermat’s Last Theorem, it’s also because his problem had more physical reality than ours. In any case, his was much more falsifiable than ours. Which isn’t to say that P!=NP is an ill-posed problem, but rather that it looks like an ill-answerable one.

  44. Scott Says:

    Neel #42: No, you’re just talking about different formal systems that can prove different sets of theorems about the natural numbers (e.g., because they have stronger or weaker induction principles). The distinction I’ve been stressing is between what’s provable or unprovable in various formal systems, and what’s metatheoretically true of the natural numbers.

    Related to that, I don’t believe any of the formal systems you mentioned do suggest the existence of an “undiscovered prime” between 24 and 28 … do they? 🙂

  45. Scott Says:

    Gil #40: Sure, that’s perfectly well-posed also. As is the question of the value of the 10,000th Busy Beaver, or (say) whether it’s even or odd.
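    Indeed, R(10,10) is determined by a finite (if astronomically large) search. A minimal Python sketch of that search, run at a size where it is actually feasible, confirms R(3,3) = 6:

```python
from itertools import combinations, product

def has_mono_triangle(coloring, n):
    """Does this 2-coloring of K_n's edges contain a monochromatic triangle?"""
    for a, b, c in combinations(range(n), 3):
        if coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]:
            return True
    return False

def ramsey_holds(n):
    """True iff every 2-coloring of K_n's edges has a monochromatic K_3,
    i.e. iff R(3,3) <= n.  Checks all 2^(n choose 2) colorings."""
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(dict(zip(edges, colors)), n)
        for colors in product((0, 1), repeat=len(edges))
    )
```

    Here ramsey_holds(5) is False (the pentagon/pentagram coloring of K_5 has no monochromatic triangle) while ramsey_holds(6) is True. The analogous search with K_10 in place of K_3 defines R(10,10) perfectly well, even though no computer could ever finish running it.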

  46. Scott Says:

    Serge #43: But “undiscoverable” is only a statement about the current state of human civilization. An algorithm might be “undiscoverable” by the best mathematicians alive today, but discoverable 500 years from now, or by an extraterrestrial civilization. If so, then I’d say that the algorithm “exists,” in exactly the same sense that a prime number “exists” regardless of whether anyone has named it yet.

    (Also, of course, we do understand much, much more about efficient computation than we did 50 years ago. The progress we’ve made turned out not to be “beyond the capacity of human civilization”—even though someone 50 years ago could have declared it would be, and thereby dissuaded people from making it.)

    The goal, in math just like in science, is to discover what’s “actually out there to be found,” in this case in the world of computational patterns. To conflate our own failure to find something with its objective nonexistence—to say that the former is not merely evidence for the latter (weaker or stronger depending on the circumstances), but amounts to the same thing—is to commit an act of violence against the whole mathematical tradition going back to Euclid.

    Incidentally, I don’t understand why you think that Fermat’s Last Theorem had more “physical reality” than P vs. NP does. Is it simply because FLT is a Π1-sentence, whereas P≠NP is a Π2-sentence? If so, then I can easily remedy that problem, by giving you a Π1-sentence of enormous interest to complexity theory. For example,

      “For every n, there is no arithmetic circuit of size 1.1^{n/10000} that computes the permanent of an n×n matrix.”

    Exactly like FLT, the above statement can be phrased in terms of a certain computer program not halting. So then, would you admit that this statement has as much “physical reality” as FLT?

    And conversely, if I picked a Π2-sentence from number theory—like, say, the conjecture that there are infinitely many twin primes—would you say that it’s just as “unreal” and “unfalsifiable” as P vs. NP (which has exactly the same logical form)—and therefore, that it too can probably never be resolved? (At least, if you didn’t know about the massive progress on Twin Primes that was actually made in the last few years, starting with Yitang Zhang’s breakthrough?)

  47. Scott Says:

    yannick #41:

      P vs. NP is never just the abstract question, which game we want to play, since it always comes with a claim about our world attached.

    No, not really. P vs. NP is defined as the question of whether there exists a polynomial-time Turing machine that decides 3SAT—full stop.

    As I mentioned in the post, you can also ask whether NP-complete problems become feasible if you “change the game”: for example, could they be efficiently solvable on average? By a quantum computer? By any physical device?

    Those are all extremely interesting questions as well, ones that I and others spend a great deal of time thinking about—and any answer to the last one would unavoidably “come with a claim about our world attached.” But they’re also different questions than the purely mathematical question of whether P (the class of languages decided by a polynomial-time TM) equals NP (the class of languages decided by a polynomial-time NDTM).

  48. fred Says:

    Is there anything remarkable about the existence of problems such as graph isomorphism or factorization? I.e., the fact that we can’t seem to classify them as either in P or NP-complete?
    Could that point to the possibility that the P!=NP question isn’t ill-defined but maybe just too “coarse,” e.g. there is a zone of interest somewhere in between?

  49. Scott Says:

    John Sidles #39: But I don’t think there’s any disagreement between what I said and what Thurston said, or even much of a “contrast of views.” Thurston’s example was well-ordering the real numbers, which is known to be doable assuming the Axiom of Choice, but not otherwise. I completely agree with him that this might cause us to question exactly what one means in saying the reals are well-orderable, or to treat their well-orderability as a “polite fiction.”

    By contrast, P vs. NP—despite the way it touches on deep foundational issues—is, at the end of the day, an arithmetical question. And that gives it a very different character from the well-orderability of reals, as I’m sure Thurston (or any other mathematician) would agree. As just one small illustration of the difference, Shoenfield’s absoluteness theorem (mentioned earlier) assures us that, if P≠NP or P=NP are provable with the help of the Axiom of Choice, then they’re also provable without it.

  50. Scott Says:

    fred #48: Complexity theory doesn’t start and end with the P vs. NP question. The NP-intermediate problems, in particular, have been a focus of enormous interest for the past 40-odd years: first from a structural perspective, then from a cryptographic one, and now also from a quantum computing perspective. Look up some recent STOC/FOCS/CCC proceedings if you don’t believe me; you’ll find more about the no-man’s-land between P and NP-complete than you ever wanted to know. 🙂

    Before proving P≠NP, there are numerous “weaker” statements that one would want to prove first, such as L≠NP, P≠PSPACE, NEXP⊄P/poly, and the permanent not having poly-size arithmetic circuits. One would also want to delineate the boundary of NP-hardness more precisely, e.g. by proving or disproving the Unique Games Conjecture.

    And yes, after proving P≠NP, one would then want to go further, and show, for example, that the polynomial hierarchy is infinite (which would imply, in particular, that factoring and graph isomorphism can’t be NP-complete), and that hard-on-average NP problems and one-way functions exist. One would also want to resolve the status of specific “intermediate” problems, like factoring, discrete log, graph isomorphism, Nash equilibrium, and the turnpike problem.

  51. Jay Says:

    Gil #23

    >I am not even sure what “ill-posed” can mean in this context.

    Suppose you find an algorithm for 3SAT whose running time presents an erratic behavior, so that:
    - for any (n, c) there exists a larger n′ such that the running time is above c^n′ for more than n′^c consecutive instances;
    - for any (n, c) there exists a larger n′ such that the running time is below n′^c for more than c^n′ consecutive instances.

    In other words, P versus NP is about asymptotic behaviors. What if the running times of the fastest algorithms are not well described as an asymptotic behavior?

  52. John Sidles Says:

    Gil Kalai asks (#23)  “Maybe P is too inclusive? Maybe NP is too inclusive?”

    Scott Aaronson opines (#49)  “P vs. NP — despite the way it touches on deep foundational issues — is, at the end of the day, an arithmetical question.”

    Perhaps one teaching of ZFC is that Gil and Scott can both be right. After all, without the Axiom of Choice, the non-orderability of the reals would have greatly impoverished 20th century mathematics.

    Similarly, perhaps Gil’s postulate is correct: the present-day non-orderability of languages in P — by TM runtime or any other natural measure — is a signifier that our present (oracle-dependent) definition of P is sufficiently over-inclusive as to obstruct progress in complexity theory.

    In this hybrid worldview, well-crafted restrictions to P (e.g., non-oracular definitions of P) might invigorate 21st century complexity theory — comparably to the Axiom of Choice’s invigoration of 20th century mathematics — without substantially altering the meta-mathematical worldview that “P vs NP is arithmetical question”, and moreover without substantially altering the practical implications of the separability (or not) of P and NP.

    Question  Does progress on the P vs NP problem await a complexity-theoretic “Axiom of Choice”, which refines both our definition of P and our understanding of it, such that Scott and Gil can both be right?

    The world wonders!

  53. gentzen Says:

    Scott #38 How do your definitions of \(\Pi_1\) and \(\Pi_2\)-sentences from the survey relate to the definition of \(\Pi^0_1\) and \(\Pi^0_2\)-sentences from the arithmetical hierarchy? It seems that “formula with only bounded quantifiers” has been replaced with “provably (in PA) recursive function”. Is this just another way to formulate the condition from the arithmetical hierarchy, or is this a notation from a related but slightly different hierarchy?
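    For reference, the standard prenex forms in the arithmetical hierarchy, with \(\varphi\) containing only bounded quantifiers, are:

```latex
\Pi^0_1:\quad \forall x_1 \cdots \forall x_k \;\varphi(x_1,\dots,x_k)
\qquad\qquad
\Pi^0_2:\quad \forall x_1 \cdots \forall x_k \,\exists y_1 \cdots \exists y_m \;\varphi(\vec{x},\vec{y})
```

    Since the graph of a provably (in PA) recursive function has both a \(\Sigma^0_1\) and a \(\Pi^0_1\) definition, substituting such a function for the bounded-quantifier matrix appears not to change these classes up to provable equivalence — suggesting the survey’s usage is a notational variant of the same hierarchy rather than a different one.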

  54. Gil Kalai Says:

    Jay, what you proposed seems technically impossible, but it would not make the problem ill-posed either, in any sense I can think of. If the question is: are there infinitely many n’s and a fixed k so that the best algorithm is below n^k, then in a scenario like yours the answer is “YES”. If the question is: is there a k so that for all n’s the best algorithm is below n^k, then the answer is “NO”.

    The one thing I am not clear about in this discussion is this: for those people who regard CH as ill-posed (I don’t), in the case that the NP=!P problem is independent of ZFC (and there are no strong reasons that I am aware of to think so), will this make NP=!P as “ill-posed” as CH? It seems that Scott insists that the answer is no, but I don’t see the conceptual difference between NP=!P and other undecidable problems in this hypothetical case. (I can see (vaguely) the argument that the *methods* for proving independence do not apply here.) Whatever this conceptual difference is, I am sure that you do not need to refer to the term “ill-posed” to explain it.

  55. Sam Hopkins Says:

    Say I have a strong belief that statements that are true for the standard integers are true. Is it so obvious that the metatheoretic truth of P vs NP doesn’t depend on the way in which we encode Turing machines into the integers?

  56. Jay Says:

    Gil, thx for your position (I take for granted it’s Scott’s position too). But is there a typo? I think you intended to write: *above* => yes; below => no; otherwise I’m afraid I don’t understand your point.

    As for your question, it would be interesting to hear from someone who actually regards CH as ill-posed.

  57. fred Says:

    Do we know of any problems for which we’ve proven the best avg/worst-case lower bound to be exponential (for sure)?
    If so, is it trivial to show why they can’t be reduced in polynomial time to 3SAT?

  58. Scott Says:

    Sam #55: Yes, I think that’s pretty obvious. As far as I can see, the situation here is really no different than for other conjectures like Poincare, ERH, Birch-Swinnerton-Dyer, Navier-Stokes smoothness, etc. whose encodings as arithmetical statements can also be complicated and subject to many different choices. If anything, the difficulties are less for P vs. NP than for conjectures involving manifolds, differential equations, or other continuous structures. Once again, then, I don’t see any reason to single out P vs. NP for special treatment: the issues in formalizing it are no greater than the issues in formalizing algebraic geometry, analysis, or any other part of “ordinary math,” and indeed are probably less.

  59. Jay Says:

    Scott #58: you mentioned that, when young, you once tried to solve P vs NP, thinking it could be quite easy and straightforward. Have you ever believed the same for other conjectures like Poincare, ERH, Birch-Swinnerton-Dyer, Navier-Stokes smoothness, etc?

    If there’s a reason to single out P vs. NP, maybe it’s that we don’t really understand why the proof is not easy.

  60. Scott Says:

    Jay #51: Sure, something like the situation you describe is theoretically possible. (Your exact conditions would, I think, force the running time to be a non-monotone function of n, a bizarre state of affairs that could be remedied by simply adding dummy variables and then running the SAT-solver on a larger input length. But one could modify your conditions to something that no one knows how to rule out.)

    But suppose something like this happened. Then for purposes of worst-case analysis, the algorithm would just be an “exponential-time algorithm,” full stop! The strange fact that the exponential growth happened to slow down for certain particular input lengths would no more change its worst-case exponential-time character, than would the algorithm’s happening to be fast for certain particular inputs (e.g., those that were 2SAT instances).

    So in particular, if one could prove that any algorithm for 3SAT behaved exponentially on infinitely many input lengths, then one would have proved P≠NP, full stop. This is not a gray area in the statement of the problem.

    Of course, one would then want to go further, and show not only that the exponential behavior occurred for some input lengths, but that it occurred for all but finitely many lengths. (In particular, statements of the latter sort are much closer to what’s needed for cryptography.) Technically, this is known as the difference between infinitely-often (i.o.) and almost-everywhere (a.e.) hardness.
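    The dummy-variable padding mentioned above is easy to see numerically: to solve a length-n instance, pad it up to whichever length n′ ≥ n the solver handles fastest, so the achievable running time becomes the minimum over the tail of the original profile — automatically non-decreasing. A minimal sketch, with made-up runtime values:

```python
def monotonized(T):
    """Given T[n] = the solver's (erratic) running time at input length n,
    padding a length-n instance up to the cheapest length n' >= n yields
    running time min(T[n:]) -- a non-decreasing function of n."""
    return [min(T[n:]) for n in range(len(T))]

# hypothetical, non-monotone runtime profile
print(monotonized([5, 100, 3, 80, 4, 7]))  # [3, 3, 3, 4, 4, 7]
```

    Of course this presumes the cheap lengths are known in advance, which they need not be; the point is only that erratic length-dependence per se creates no gray area in the problem statement.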

    Like so many other questions on this thread, I think it all comes down to three simple points, which people might want to read over and over until they sink in:

    1. A mathematical question (like P vs. NP) is a single, definite thing that, once satisfactorily answered, is completely answered for all time—no ifs, ands, or buts. (As many of them have been.)

    2. A field (like complexity theory) can give rise to an infinite number of questions, with followups to followups continuing forever.

    3. From this it follows that we should avoid identifying an entire field with a single question, even if it’s a famous one.

  61. william e emba Says:

    Aatu: the confusion I caused was entirely my own fault. Ergo, I was not the least bit offended.

    The examples you implicitly cite are in regard to set-theoretic statements, not arithmetic statements. And typically no one says “independent” in these large-cardinal cases; rather, one says “equiconsistent” in the best cases, and one refers to upper/lower consistency strength bounds in general. Look through Kanamori again, especially the last chapter. In brief: set theorists consider large cardinals as part of the foundations of set theory. In contrast, CH remains baffling, despite our ability to dissect it in dozens of ways.

    Suppose you prove P≠NP using highly advanced algebraic geometry, one that seemingly requires the full Grothendieck universes treatment. And you prove that inaccessible cardinals are necessary. What do you put on your resume? What do you tell the Clay people? What do you tell NYT? Answer: you proved P≠NP. And the large cardinals involved? They will become the new normal.

    And the models of ZF + P=NP? They will be non-standard, and outsiders everywhere will scratch their head trying to understand how anyone can say P=NP when the “polynomial-time” algorithm has a polynomial of infinite degree, with infinite coefficients. A freak, not part of the usual discourse, similar to the classic logic qual question: describe as explicitly as possible a model of PA+~Con(PA).

  62. John Sidles Says:

    Scott opines  “Conjectures like Poincare, ERH, Birch-Swinnerton-Dyer, Navier-Stokes smoothness, etc. [have] encodings as arithmetical statements [that] can also be complicated and subject to many different choices. If anything, the difficulties are less for P vs. NP than for conjectures involving manifolds, differential equations, or other continuous structures.”

    Scott’s assertion calls to mind a wonderful essay by Donald Knuth, which serves as the introduction to a wonderful book by Marko Petkovsek, Herbert Wilf, and Doron Zeilberger, whose wonderfully short title is “\(A=B\)”. According to Knuth:

    Science is what we understand well enough to explain to a computer. Art is everything else we do. … Science advances whenever an Art becomes a Science. And the state of the Art advances too, because people always leap into new territory once they have understood more about the old.

    Knuth’s Art-versus-Science criterion helps us to appreciate that the P vs NP problem is uniquely “artistic” relative to the other Millennium Prize problems. Concretely, every decent computer language includes arbitrary-magnitude integers and arbitrary-precision reals as datatypes that are explicitly well-ordered. Then upon this well-ordered foundation are constructed (commonly as built-in functions) thousands of computational capabilities like continuous-valued functions, coordinate charts, trajectory integration, curvature computations (etc).

    New options for 21st century pedagogy  Depending essentially upon well-ordering of the integers and reals, it is entirely feasible for 21st century students to take pretty much any textbook on number theory, algebraic geometry, differential geometry, transport theory (etc) and solve every problem in the book by a blend of symbolic and numerical computations.

    Indeed, among the very best ways for a student to understand a textbook is to teach its contents to a computer — precisely in the sense of Knuth.

    Learning complexity theory as an art  This uniquely instructive 21st century mode of mathematical learning fails fundamentally (as it seems to me) when it is applied to the propositions of even the very greatest textbooks in complexity theory (taking Sanjeev Arora and Boaz Barak’s wonderful Computational Complexity: a Modern Approach as a concrete example). This is because a fundamental obstruction to Knuth-style learning of complexity theory as a science is the lack of any natural ordering of the languages in P induced by the TMs that accept them.

    Conclusion Relative to other Millennium Prize problems, P vs NP (as it is presently framed) is in the sense of Knuth a uniquely artistic postulate rather than a scientific postulate.

    Is there a fundamentally artistic obstruction to progress in complexity theory? The world wonders!

  63. Scott Says:

    Gil #54:

      I don’t see the conceptual difference between NP≠P and other undecidable problems in this hypothetical case … Whatever this conceptual difference is, I am sure that you do not need to refer to the term “ill-posed” to explain it.

    I’m fine to use different words, but I do insist that there’s a major conceptual difference between P vs. NP and CH—even if P vs. NP turned out to be independent of ZF, just like CH is.

    The difference, as I see it, has nothing to do with proof techniques, but simply concerns what kind of objects the two questions refer to.

    P vs. NP refers to finite machines manipulating strings of 1’s and 0’s and either accepting or rejecting—about which, I would say, there are definite facts of the matter, to whatever extent there are definite facts of the matter about anything. You can run a program and watch it step by step, or even simulate it with pen and paper, and it will do something, and you don’t need any axioms to tell you that it does. And even if the program hasn’t halted by the end of the universe, there’s still a fact of the matter about whether it would have halted given enough time and memory—in just the same sense that there’s a fact of the matter about (say) whether the 10^10^1000000-th prime ends in a ‘7’. I don’t even understand what it means to deny these statements.
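
    To make “run a program and watch it step by step” concrete, here is a hedged toy sketch (my illustration, not part of the original comment; the choice of the Collatz map and the starting value 27 is arbitrary):

```python
# A toy illustration of "run a program and watch it step by step":
# the Collatz iteration started from 27 does something perfectly
# definite at every step, and no axioms are needed to see what it does.

def collatz_trace(n, max_steps=10000):
    """Return the sequence of values visited, stopping at 1 (or max_steps)."""
    trace = [n]
    while n != 1 and len(trace) <= max_steps:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        trace.append(n)
    return trace

trace = collatz_trace(27)
print(len(trace) - 1)  # 111 steps before the run reaches 1
print(max(trace))      # the trajectory peaks at 9232 along the way
```

Whether the analogous run halts for every starting value is, of course, exactly the open Collatz conjecture.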

    Now, once we admit there’s a fact of the matter about whether programs halt, it’s then just a tiny step to admit that there’s also a fact of the matter about Π2-sentences, Πk-sentences, and so forth, all the way up through the definable ordinal notations (i.e., up to the Church-Kleene ordinal).
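
    For concreteness, one standard way to render P ≠ NP as a Π2-sentence (a sketch only; the encoding of machines and step counts varies between presentations) is:

```latex
% One possible Pi_2 rendering of P != NP (encoding details vary):
\[
  \forall M \,\forall k \;\; \exists \varphi :\;
  \text{after } |\varphi|^{k} + k \text{ steps on input } \varphi,\;
  M \text{ has either not halted or has decided } \mathrm{SAT}(\varphi)
  \text{ incorrectly.}
\]
```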

    By contrast, CH refers to transfinite sets that one can’t even construct, describe, or manipulate, except by implicitly or explicitly using first-order logic as an intermediary. (E.g., one can say “the set of all real numbers,” but one can’t program one’s computer to enumerate that set, not even in the limit that it runs for infinite time.) And indeed, just as there are different groups that all satisfy the group axioms, so it turns out that there are different “universes” of transfinite set theory (e.g., the V=L universe, various large cardinal universes, etc.), with no finitary test that could ever tell us which of those universes is the “true” one. For that reason, I think it’s perfectly reasonable to take the position that there need not be any fact of the matter about the truth or falsehood of statements like CH: there might only be facts about whether these statements are provable from various axiom systems. Or one could go the arch-Platonist route, and maintain there are facts about the matter about all set-theoretic questions, independence from ZF be damned. Or one could adopt any intermediate position.
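
    The impossibility of enumerating the reals can even be demonstrated executably. The following sketch (my illustration, with a made-up toy enumeration) runs Cantor’s diagonal construction against a claimed listing of infinite binary sequences:

```python
# Hedged sketch: given ANY function listed_bit(j, i) purporting to give
# the i-th bit of the j-th enumerated binary sequence (i.e., a real in
# [0,1)), the diagonal sequence below differs from every listed
# sequence, so the enumeration necessarily misses it.

def listed_bit(j, i):
    # Toy stand-in for an enumeration: bit i of the binary expansion of j.
    return (j >> i) & 1

def diagonal_bit(i):
    # Cantor's construction: flip the i-th bit of the i-th sequence.
    return 1 - listed_bit(i, i)

# The diagonal disagrees with sequence j in position j, for every j:
assert all(diagonal_bit(j) != listed_bit(j, j) for j in range(1000))
print("the diagonal sequence escapes the first 1000 listed sequences")
```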

    But for arithmetical questions (e.g., anything that asks about the behavior of a computer program), the situation is very different: there I’d say that we don’t have the option of denying there’s a fact of the matter, unless we’re ready to deny that there’s a fact of the matter about whether 2+2=4, whether there are infinitely many odd numbers, etc., and lapse into total incoherence.

  64. Serge Says:

    Scott #46:
    The tradition of mathematics since Euclid is full of such acts of violence. Recall Gauss, Lobachevsky, Bolyai and Riemann versus Euclid himself. Recall Cantor versus Kronecker. Recall Gödel versus Hilbert. Recall Gödel and Cohen versus Cantor. Recall Matiyasevich versus Hilbert…

    The P!=NP problem is different from most other arithmetical conjectures because, if it were proved false, every other mathematical conjecture would become virtually solvable in our lifetime. That’s why, for me and apparently for some other commenters on your blog, if P!=NP is true, it can’t be true for a merely technical reason. For me, P!=NP has to be unprovable for physical reasons, and the minds of extraterrestrial beings are not known to be exempt from the laws of physics.

  65. Scott Says:

    I should point out that there’s a very interesting analogy with physics here. If one had to extract a single lesson from 20th-century physics, one could do a lot worse than:

      the importance of distinguishing, over and over and over, between the questions whose answers have actual observable consequences, and the questions that sound meaningful, but whose answers change depending on your coordinate system or your gauge or your measurement basis or some other choice, slipping from your grasp again and again until you finally realize that you weren’t asking anything sharply-defined about nature, but only about your own description of nature.

    In different ways, this is the lesson of special relativity, general relativity, quantum mechanics, quantum field theory, and gauge theory.

    My position is just that there’s an equally-important distinction in math, but one that’s not nearly as often stressed. That distinction is between the mathematical questions “whose answers have actual observable consequences”—meaning, the ones that are ultimately about what a suitably-programmed computer will or won’t do—and the ones like the axiom of choice or the continuum hypothesis, which only start to generate “observable consequences” once we tie them to a particular system of axioms.

    The failure to recognize this distinction has led to two complementary mistakes: on the one hand, if people want to be “mathematical realists,” they think it forces them to accept the existence of a fact-of-the-matter about AC and CH, and maybe even the existence of a well-ordering of the reals or a Banach-Tarski bijection. And on the other hand, if people reject those things, they think that entitles them to reject the existence of a fact-of-the-matter about P vs. NP, or even (though many people stop before reaching such implications) whether there are any odd perfect numbers.

    All the while, what I’d regard as the clunkingly-obvious middle position—namely, to accept the definiteness of all arithmetic statements, while not committing oneself to the definiteness of statements about “unobservable” transfinite sets—has gone mostly undiscussed and unadvertised. Hence this blog post.

  66. fred Says:

    Scott #60
    ” (E.g., one can say “the set of all real numbers,” but one can’t program one’s computer to enumerate that set, not even in the limit that it runs for infinite time.) ”

    I guess that’s the power of mathematical abstraction, but one wonders whether the fact that this higher logic sweeps under the rug any sort of “real world” realization (e.g., using a computer to do it) isn’t the reason why what’s provable and what isn’t then becomes blurry and relative. A sort of “there’s no free lunch”. On the other hand, anything that can be proven with a Turing machine is more “absolute”.

  67. Scott Says:

    Serge #62:

      The P!=NP problem is different from most other arithmetical conjectures because, if it were proved false, every other mathematical conjecture would become virtually solvable in our lifetime. That’s why, for me and apparently for some other commenters on your blog, if P!=NP is true, it can’t be true for a merely technical reason. For me, P!=NP has to be unprovable for physical reasons…

    I have a counterexample to your argument. Consider P vs. EXP. It’s also the case that, if P≠EXP were false, then “every other mathematical conjecture would be virtually solvable in our lifetime.” (Indeed, P=EXP implies P=NP.) And yet we do know a proof of P≠EXP—one that someone like you might call “merely technical” (it’s a diagonalization argument, which makes no reference to physics).
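
    For readers curious what “a diagonalization argument” looks like in miniature, here is a cartoon sketch (mine; the stand-in “machines” are invented for illustration, whereas the real proof enumerates all polynomial-time Turing machines and simulates each for exponentially many steps):

```python
# Cartoon of the diagonalization behind P != EXP: stand-ins for an
# enumeration of polynomial-time machines, each a function from an
# input (here an int) to a boolean verdict.
machines = [
    lambda x: True,          # hypothetical machine 0
    lambda x: x % 2 == 0,    # hypothetical machine 1
    lambda x: x < 10,        # hypothetical machine 2
]

def D(i):
    # The diagonal language: on input i, answer the OPPOSITE of what
    # machine i answers on input i.  (In the real proof, D is decidable
    # in exponential time, since that suffices to simulate machine i.)
    return not machines[i](i)

# By construction, D disagrees with every listed machine on at least
# one input, so none of them decides D:
assert all(D(i) != machines[i](i) for i in range(len(machines)))
```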

    More broadly, history abounds with profound conceptual statements that were proved to be true for “merely technical” (that is to say, mathematical) reasons: Gödel’s and Turing’s theorems, von Neumann’s and Nash’s equilibrium theorems, Bell’s theorem, Arrow’s impossibility theorem … if nothing with deep, potentially civilization-changing consequences can be true for a “merely technical” reason, then how was any of this progress possible?

    (To clarify, many of the things I listed above have wonderful and insightful proofs, which I didn’t mean to denigrate with the phrase “merely technical.” I was simply adopting Serge’s strange terminology, according to which, it seems, any piece of pure mathematical reasoning is necessarily “merely technical.”)

  68. Michael Dixon Says:


    I’m sorry this is a bit off topic, but I’ve had this question on my mind for a little while. You posted a list of signs that a claimed P vs. NP proof was wrong a while ago. Do you happen to have the inverse of these? That is, do you have a list of signs/requirements that you would expect a good, possibly correct P vs. NP proof to satisfy?

  69. Scott Says:

    Michael #68: You can simply negate all the signs in my other list. 🙂

    E.g., suppose that a paper doesn’t jump immediately to P≠NP, but starts by proving weaker statements that are still breakthroughs (e.g., that 3SAT requires superlinear circuits) in a clear and comprehensible way. Suppose it cogently discusses the relativization/algebrization and natural proofs barriers and how it gets around them, and why the arguments break down for matching and other problems in P (or for proving a lower bound on 3SAT better than ~1.3^n). Suppose the paper has, not just manipulation of formulas that look like they were lifted from an undergrad complexity textbook, but a striking new piece of mathematical machinery, whose introduction could plausibly allow one to succeed where others had failed.

    In that case, the attempt might still fail for any number of subtler reasons—in fact, it’s probably likelier than not that it will—but the paper is at least past the starting gate that almost no other P≠NP paper gets past. Experts are now intrigued and eager to read further.

  70. Dez Akin Says:

    I’m confused about what P vs NP being independent of set theory means. As far as I can tell, it and its negation can’t both be finitely satisfiable, so if there’s some polynomial-time 3-SAT algorithm it might be different for infinite sets, which I guess is of some mathematical interest, but I don’t think it’s of much practical interest.

    Just looking at Goodstein’s theorem and the non-standard models of arithmetic where it can be considered ‘false’ as an analogy, the notion of an independent P vs NP statement strikes me as more than a little strange… not to mention unlikely in my opinion.

    Has anyone formalized the P vs NP question in first order logic?

  71. Scott Says:

    Dez #70: Once again, all it would mean for P vs. NP to be independent of set theory, is that starting from the axioms of set theory, we couldn’t prove either P=NP or P≠NP. That’s it. The question would still mean exactly the same thing that it means today (no infinite anything), and it would still have a definite answer (either the fast algorithm for SAT would exist or it wouldn’t exist); we just wouldn’t be able to prove it. (Some people will interject that P vs. NP will then have different answers in different “nonstandard models of arithmetic”—but by Gödel’s completeness theorem, that’s not actually telling you anything more than what I said, that the answer is unprovable.) Yes, I agree that this is a very unlikely possibility.

    I’m sure someone has formalized P vs. NP in first-order logic, but I don’t know a reference.

  72. Michael Dixon Says:

    Scott #69:

    Haha, I made a mistake and was too unclear. Instead of “a good, possibly correct proof”, I should have written “a very promising proof”. This changes the question. Sorry about that.

    So you mentioned in the post that:

    Lacks deductive proof structure => Probably Bad PvNP Proof

    It’s not the case that:

    Has deductive proof structure => Promising PvNP Proof

    But it is the case that:

    Has deductive proof structure => Possibly Good PvNP Proof

    I think I wanted to ask about signs that would provide strong evidence that the proof likely does answer P vs. NP. This is opposed to the weaker statement that “it’s still possible because it didn’t fail this check”. If you stretched it, the distinction is loosely analogous to the distinction between RP and coRP.

    I hope I framed this much more clearly this time.

  73. John Sidles Says:

    Dez Akin wonders  “what P vs NP being independent of set theory means.”

    Per comment #13 (above) one interesting consequence is:

    Alice’s Lemma  No algorithm that provably solves 3SAT provably runs in PTIME

    This leaves open the loophole that a 3SAT-solving TM might exist that had polynomial runtime, but not provably so.

    Per the TCS StackExchange questions “Does P contain languages whose existence is independent of PA or ZFC?” and “Are runtime bounds in P decidable? (answer: no)”, concrete TMs whose runtime exponent is undecidable are known. However, essentially nothing is known (by me or anyone) regarding the class of languages that these so-called “cryptic” TMs accept.

    In particular the crucial Hartmanis-class question of whether there exist any languages in P whose most efficient TMs are cryptic is wide open.

    From a Thurston-style “Proof-and-Progress” viewpoint (#39), cryptic 3SAT-solving TMs would work in the real world; yet absent any proof of this fact — even in principle — there would be no straightforward way of understanding *HOW* they worked … at least by the traditional method of creative human proof-contemplation.

    That’s why the existence (or not) of efficient cryptic TMs is yet another problem in complexity theory that is natural, deep, and open.

  74. Scott Says:

    Michael #72: Sorry, I don’t think there are any superficial signs of the kind you ask for. At best, there can be superficial signs that something is worth looking at further, but ultimately a judgment of correctness for something this important can only be made by those who’ve actually worked through the details. (Fortunately, a judgment of incorrectness is sometimes easier. 🙂 )

  75. aviti Says:

    Will read the post ’n’ comments later.

    My shout outs to Subhash K, Maryam M, Arthur A, Manjul B, Martin H.

    Then will go up and read the post and comments.

  76. asdf Says:

    There’s a mathematical/philosophical argument promulgated by the logician Sol Feferman, that the continuum hypothesis (CH) is not a well-posed question. He basically says that a proposition p is well-posed if you can prove “p or not-p” in semi-intuitionistic logic, i.e. without using the law of the excluded middle on statements that have unbounded quantifiers, but allowing the LEM under bounded quantification. He has a paper and some slides about this, which are cited in the Wikipedia article about CH.

    I want to add some more here about a possibly similar situation with N and intuitionistic logic, but have to stop writing for now. Basically, there is no way to tell whether a model of your favorite arithmetic or set theory is standard or nonstandard. So if P=NP is formally independent, then there’s a significant philosophical question of whether it has a truth value.

  77. J Says:

    Can a cs theorist win a Fields Medal?

  78. Scott Says:

    J #77: Witten won the Fields, so the precedent that a “non-mathematician” (in this case, physicist) can win a Fields Medal has already been set. In practice, though, particularly given the parallel awarding of the Nevanlinna Prize for TCS, my guess is that a CS theorist would win the Fields only if there were (a) revolutionary crossover influence on “traditional” parts of math, or (b) major progress on P vs. NP, or something of that kind.

  79. Me Says:

    While I do believe P!=NP should be possible to prove, I can see how one would conclude that P vs NP could be unprovable.
    If you want to show that no possible algorithm can solve an NP-complete problem, and you try to prove it by contradiction for the sake of argument, you have to assume there’s a black box with a magical algorithm inside. The obvious problem is that there seems to be no way to look into the box, while there could be literally anything in it. It’s not unthinkable that there is no method powerful enough to extract any information about the hypothetical algorithm.
    I wouldn’t say such a method can’t exist, but this is indeed a major obstacle to proving P!=NP. The big question is whether there is a way to just walk around the box, rather than trying to take a look inside and gain knowledge about every possible algorithm anyone could ever invent.

    Well, that’s just my two cents to stay on topic. What I would like to share (speaking of P vs NP) is a spectacular statement from Donald Knuth himself that I just discovered. He explains his belief that P = NP in question 17 of

    I think it’s great that at least a few of the greatest computer scientists (like also Richard Lipton) believe P = NP could be true. I don’t believe it, but I would certainly wish it were true 🙂

  80. Scott Says:

    fred #57:

      Do we know of any problems for which we’ve proven the best avg/worst-case lower bound to be exponential (for sure)?
      If so, is it trivial to show why they can’t be transformed in poly time to 3SAT?

    Sorry I missed your question earlier.

    Yes, there are problems for which we know essentially-optimal exponential lower bounds. A standard example would be the problem of simulating a Turing machine for 2^n time steps, which we know takes about 2^n time. 🙂 Or more generally, one could pick any EXP-complete problem, for some of which it’s far from obvious why they’re EXP-complete (example: deciding whether White has the win from a given position in n×n chess). Again, unlike with the NP-complete problems, we know that these problems require exponential time.

    Unfortunately, these EXP-complete problems can’t be reduced to 3SAT unless NP=EXP. And more broadly, the relativization barrier explains why diagonalization (the technique used to prove that P≠EXP) can’t suffice, on its own, for proving P≠NP.

  81. Scott Says:

    Jay #59:

      you mentioned that, when young, you once tried to solve P vs NP, thinking it could be quite easy and straightforward. Have you ever believed the same for other conjectures like Poincare, ERH, Birch-Swinnerton-Dyer, Navier-Stokes smoothness, etc?

      If there’s a reason to single out P vs. NP, maybe it’s that we don’t really understand why the proof is not easy.

    Well, one genuine difference between P vs. NP and the other Clay prize problems, is that P vs. NP is much more accessible. A second, related difference is that I fully understood the statement of P vs. NP when I was 15 years old, whereas I wasn’t even aware of the existence of most of the other Clay problems (with the exceptions of Riemann and Poincare, which I knew rough statements of, but which seemed far more terrifying to me than P vs. NP—for them it was just obvious that you couldn’t get started without mastering centuries of math). So it’s really not surprising that I would try P vs. NP—which is like the white rabbit in Monty Python and the Holy Grail—and not try Riemann or Poincare. (For the same reason, I also tried to prove the Collatz 3x+1 conjecture, with similar success as for P≠NP. 🙂 )

    Having said that, I have to disagree with you that “we don’t understand the reason” why the proof of P≠NP is not easy. In fact, we know a great deal today about why P vs. NP is so hard, starting with the natural proofs and relativization barriers—and the fact that I hadn’t yet learned these things at age 15, doesn’t mean that other people hadn’t. If anything, I’d say that we know more at a formal level about the difficulty of proving P≠NP, than about the difficulty of the other problems you listed.

  82. John Sidles Says:

    asdf remarks “There’s a mathematical/philosophical argument promulgated by the logician Sol Feferman … “

    asdf, your fine comment leads to a wonderful mathematical/philosophical debate that is conducted in two fine articles (which Google finds):

    • Sol Feferman’s “Is the Continuum Hypothesis a definite mathematical problem?”

    and its rebuttal

    • Peter Koellner’s “Feferman on the Indefiniteness of CH”

    One of the questions that these articles consider — a question that in Scott’s phrase is “forehead-bangingly obvious” … yet it had never occurred to me to ask it — is this:

    The Feferman/Koellner Question (CH Version)  For what reasons (if any) is the Continuum Hypothesis Problem unsuited to be a Millennium Prize Problem?

    The reasons that Feferman and Koellner provide can be applied with comparable force (as it seems to me) to

    The Feferman/Koellner Question (P vs NP Version)  For what reasons (if any) is the P vs NP Problem unsuited to be a Millennium Prize Problem?

    Reasonable-but-diverse arguments in regard to both of these questions have been advanced.

    This diversity-of-opinion is unsurprising (as it seems to me) for the common-sense reason that ordering sets by cardinality and ordering TMs by runtime alike are sufficiently tricky undertakings — logically, computationally, and metaphysically — that framing suitable questions is comparably challenging to proving indisputable answers.

  83. Ben Standeven Says:

    asdf #74:

    The problem with a fully intuitionistic approach is that “P = NP is formally independent” is itself formally independent (by the Second Incompleteness Theorem). This problem doesn’t arise in Feferman’s setup because provability can be defined using only bounded quantifiers.

    Dez Akin #68:
    There’s a first-order form of P!=NP at the MathOverflow site that Sidles linked above:

    “For each polynomial p and Turing machine M implementing an algorithm attempting to decide SAT, there is a formula ϕM such that if we look at the computation of M on input ϕM after p(|ϕM|) computation steps, M has either not halted or it has answered incorrectly.”

    The author, Kjos-Hanssen, also argues that the formula ϕM is probably not much bigger than M itself, so this is really a Π1-sentence (like a Gödel sentence, or the Riemann Hypothesis) rather than a Π2 one (like Goodstein’s Theorem).

  84. Jay Says:

    You’re right that young and amateur attempts are likely because, as with the Collatz conjecture, the statement is quite easy to understand.

    I’m however doubtful that the three usual barriers represent a great deal of knowledge. To me it sounds like we can’t manage to fill up our car, and our engineers assure us they know a great deal about why: they just proved we can’t use a hammer, nor a screwdriver, nor a claw. Well…

  85. Scott Says:

    Jay #84: I actually agree with you, but the “three usual barriers” represent only a small distillation of what we know about the problem’s difficulty. I.e., for the past half-century, 90% of the people who tried to fill up the car did try to use a hammer, screwdriver, or claw, because those were the only real tools anyone could find at this gas station, and because each of them had worked on other cars. But even the people who used their own, improvised tools didn’t get very far. Also, and crucially, the engineers did manage to fill the car 1% of the way, and they managed to fill some toy models of the car. Even those tasks were extremely difficult (much harder than filling other kinds of cars), but they’re now understood. This gave the engineers a clear sense that filling the car all the way is certainly going to be at least as hard as the things they already did, and probably much harder.

  86. Serge Says:

    Indeed, researchers are more aware of a bunch of reasons that make proving P≠NP hard than of anything likely to make it easier. IMHO, this situation should suffice to single it out…

    If P and EXPTIME have been successfully separated, that’s possibly because their respective “sizes” are different enough (the same goes for NL versus PSPACE and for PSPACE versus EXPSPACE). I dream of a criterion such as “if two complexity classes are too close to each other, they can’t be distinguished by any proof”, and of one relating the minimum complexity of a separation proof to the “distance” of the classes to be separated from each other. I don’t know if it’s a feasible approach, nor do I have any idea of what notion of distance to use between the classes A and B when A ⊆ B.

    The notion of mathematical reality varies from one mathematician to another. For example, Hugh Woodin and his colleagues who work on the continuum hypothesis have come to discover “natural” axioms that imply the “reality” of one and only one cardinal between the integers and the reals. On the other hand, most computer scientists admit some “reality” for finite collections only. But mathematical reality is not something which can be defined rigorously – it’s more like a philosophical attitude that’s always at risk of becoming incompatible with future findings.

  87. fred Says:

    Me #79:

    Interesting answer from Knuth – first time I read something from him about P!=NP.
    I hope Scott will comment.

    Knuth says
    “Mathematics is full of examples where something is proved to exist, yet the proof tells us nothing about how to find it. Knowledge of the mere existence of an algorithm is completely different from the knowledge of an actual algorithm.”

    Recently I was quite surprised to find that we can easily compute how many unique connected graphs with N nodes and M edges exist (i.e. how many such graphs aren’t isomorphic to each other), yet we can’t tell efficiently whether two particular instances are isomorphic.
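
    As a hedged illustration of that contrast (my sketch, with made-up four-vertex graphs): the naive isomorphism test below tries all n! vertex relabelings, which hints at why no efficient general method is known, even though counting graphs up to isomorphism can be done by closed-form (Polya-style) enumeration rather than by search.

```python
from itertools import permutations

def isomorphic(edges_a, edges_b, n):
    """Brute-force isomorphism test for simple graphs on vertices 0..n-1."""
    a = {frozenset(e) for e in edges_a}
    b = {frozenset(e) for e in edges_b}
    if len(a) != len(b):
        return False
    for perm in permutations(range(n)):   # n! candidate relabelings
        if {frozenset((perm[u], perm[v])) for u, v in a} == b:
            return True
    return False

c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]            # a 4-cycle
c4_relabeled = [(0, 2), (2, 1), (1, 3), (3, 0)]  # the same cycle, relabeled
tri_pendant = [(0, 1), (1, 2), (2, 0), (0, 3)]   # triangle plus pendant edge

print(isomorphic(c4, c4_relabeled, 4))  # True
print(isomorphic(c4, tri_pendant, 4))   # False (degree sequences differ)
```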

  88. Scott Says:

    Serge #86: The trouble is, how could you possibly know a-priori how close is “too close to be separated”? So for example, AC0 and AC0[2] seem extremely close to each other, and yet we do know a separation between them (have since the early 1980s). You might say, “eh, that’s just because AC0 is so weak.” But by the time you’re finished carving out exceptions for all the separations that we already know (monotone circuits, multilinear formulas, etc.), your theory of fundamental inseparability will look like a Swiss cheese. Far better, it seems to me, to say that we know barriers to complexity class separations—i.e., explanations for why it’s such a hard problem—but we also know ways to circumvent each of the barriers, at least individually and in special cases. Thus, while the barriers help us understand why P vs. NP hasn’t been solved yet, they give no real evidence that it couldn’t be solved a century from now.

  89. Jeremy Says:

    Prof. Scott I find your “clunkingly-obvious middle position—namely, to accept the definiteness of all arithmetic statements, while not committing oneself to the definiteness of statements about ‘unobservable’ transfinite sets” extremely intuitively appealing.

    As an algebraic topologist and set-theoretic layman I am hoping to see you expand on it and really flesh it out.

    If U is the weakest formal system which can express Turing machines, I assume that U can express any “definite” statement in your sense. By Gödel’s theorem, not even any recursively enumerable extension V of U could decide all these definite statements. I assume that within V there will be lots of sentences that are definite and lots of sentences that are “artifacts of our particular reference frame,” to use your physics analogy. However, my gut intuition is that there will be natural statements in V for which it is not decidable whether those statements are artifact or definite, and I would love to see examples.

    Assume now that U does not prove any true definite statement false. There is the natural question of what statements we should add to U so as to prove more true definite statements. For example, we may add Con(U) or Con(U+Con(U)). I have some vague understanding that pursuing this leads to the notion of computable ordinals, which I never studied. Presumably, as one increasingly adds statements like Con(U), certain reference-frame decisions have to be made arbitrarily, leading to a combinatorial explosion of artifact questions and “artifact truths.” Is there any way to quantify or even to limit this combinatorial explosion?

    My (possibly false) understanding is that the assumption of even massively large cardinals can have bearing on true definite questions. Is there anything else interesting to say about uncomputable ordinals or reasons to believe in large cardinals in your philosophical framework?

    What is the definiteness of questions about Turing machines with halting oracles and why?

    Anyway, I would be elated to see any new blog post or comment on this matter.

  90. Scott Says:

    fred #87: Knuth was part of the generation of computer scientists that first named and explored the P vs. NP problem in the early 1970s. Back then, it really was reasonable to spend “equal time” on almost every possibility: maybe there’s a practical linear-time algorithm for SAT. Maybe P=NP, but only via an n^1000000 algorithm. Maybe P≠NP via a simple, 10-line proof. Maybe P≠NP but the statement is independent of ZF set theory. Etc.

    Today, though, as a result of nearly a half-century of experience trying to prove circuit lower bounds (and occasionally succeeding), I’d say that more intuition is available about the vastness of the undertaking: it no longer seems the slightest bit surprising to most of us that P≠NP would be so hard to prove, even conditioned on it being both true and provable. So when people ask: “what if there’s an n^10000 algorithm? what if there’s a 10-line proof of P≠NP?,” etc., these no longer seem like interesting maverick suggestions or needed signs of open-mindedness: just well-worn ideas that have been discussed over and over but never really led to productive research.

    Incidentally, the Robertson-Seymour graph minors theory, which Knuth mentions, is a wonderful example of the strange behavior that can occur in parameterized complexity theory. But I don’t think it has any direct bearing on P vs. NP, since SAT is just a single problem, rather than an infinite family like what Robertson and Seymour considered.

    In summary, I don’t begrudge Knuth his opinion, but I feel comfortable parting ways with him on this.

  91. william e emba Says:

    fred #57: Examples of exponentially hard problems include generalized chess, checkers, and Japanese-ko-rule go. That is, the only algorithms guaranteed to produce best moves have exponential running time; essentially, you have to check at least two moves repeatedly.

    Examples of double-exponentially hard problems include the decision problem for Presburger arithmetic. That is, the arithmetic of the integers under addition, but without multiplication. In contrast with the more famous Peano arithmetic, there are algorithms for deciding the truth or falsity of Presburger arithmetic statements, but it turns out they are all extremely slow.
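    To make this concrete, a sample Presburger sentence (asserting that every number is even or odd) is:

    \( \forall x\,\exists y\,(x = y + y \;\vee\; x = y + y + 1) \)

    Such sentences are decidable by quantifier elimination, but Fischer and Rabin proved that any decision procedure must take time at least \(2^{2^{cn}}\) (for some constant c > 0) on some sentences of length n; this is the double-exponential hardness referred to above.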

    Dez #70: As to how something like P=NP or P≠NP could depend on set theory, let me describe two scenarios. Doubtless they are far-fetched (and since the two scenarios contradict each other, one of them is provably rank nonsense), but such reasoning has been used for other finitary statements.

    There are known finite arithmetic statements, even Diophantine equations, whose truth or falsity is equivalent to the consistency of ZF+LC, where “LC” stands for some large cardinal axiom. This isn’t surprising, because assertions of consistency are simply claims about finite combinatorial objects, “all possible proofs in a finite language with a recursive axiom system”. So suppose you had an algorithm which was P-time for most 3SAT problems, but was exponential for a certain clearly defined subclass, which may or may not be empty. And suppose the existence of those subclass members was equivalent to the existence of proofs that LC does not exist. Then, assuming Con(ZF+LC), the subclass is empty and the algorithm is polynomial, because the troublesome case does not exist.

    Or, perhaps, GCT (geometric complexity theory) will ultimately separate P and NP. But the algebraic geometry involved may depend on Grothendieck’s axiom of universes, a large cardinal assertion. His constructions involve heavy use of proper classes, and Grothendieck invoked inaccessible cardinals to bypass ZF’s inability to quantify over classes.

    PS: yes, P vs. NP has been formalized in first order logic of arithmetic. It’s extremely trivial to do so, using Gödel coding to translate “Turing machine” and “polynomial” and “3SAT problem instance” into an integer. The odds are against this being written out in any reference or textbook. It serves no point, and if this kind of coding isn’t obvious to the reader, the material is over his head anyway.
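    For readers who want the shape of such a coding, here is one hedged sketch (the details vary with the encoding conventions chosen, and the predicate names are illustrative, not from any particular reference):

    \( \mathrm{P} \neq \mathrm{NP} \;\equiv\; \forall M\,\forall c\,\exists x\;\; \mathrm{Out}(M, x, |x|^{c} + c) \neq \mathrm{SAT}(x) \)

    where \(\mathrm{Out}(M,x,t)\) denotes the (primitive recursive) output, if any, of Turing machine \(M\) on input \(x\) within \(t\) steps, with “no output yet” counted as a wrong answer, and \(\mathrm{SAT}(x)\) is the decidable predicate “\(x\) encodes a satisfiable 3SAT instance.” Since the matrix is decidable, the whole sentence is \(\Pi_2\).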

  92. Gil Kalai Says:

    Fred #87 quoting Knuth: “Recently I was quite surprised to find that we can compute easily how many unique connected graphs with N nodes and M edges exist (i.e. how many such graphs aren’t isomorphic to each other)”

    Hmm, this is indeed surprising (if “easily” means “in polynomial time”). Does anybody here know how to do it easily??

  93. Serge Says:

    Scott #63:

    But for arithmetical questions (e.g., anything that asks about the behavior of a computer program), the situation is very different: there I’d say that we don’t have the option of denying there’s a fact of the matter, unless we’re ready to deny that there’s a fact of the matter about whether 2+2=4, whether there are infinitely many odd numbers, etc., and lapse into total incoherence.

    I’m ready to accept there’s a fact of the matter about whether 2+2=4, but that still leaves me wondering what it means to study the behavior of a Turing machine whose size would exceed the number of atoms in the Universe, or whose polynomial upper bound would exceed this number. In computer science, as in all finite mathematics, Nature forces us to work within bounded resources. So in this sense, P vs. NP might, indeed, turn out to be an ill-posed problem. What do you think?

  94. fred Says:

    Gil #92
    Sorry. “Easily” is all relative – I didn’t mean it’s polynomial, but that it’s obviously much faster than the alternative of explicit counting.
    E.g. for M=25 nodes and N=134 edges, the number of unique connected graphs is computable and is 1099655845868933516685814726984644933338731897533313269778335664 (I don’t know what’s the current record for M,N).
    But, even if you had a magical fast poly-time algo for graph isomorphism and wanted to use it against all possible graphs (M,N) to count the unique ones by building an explicit list of unique graphs, you probably wouldn’t be able to go beyond M=9 or 10.
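    For the curious, the counting fred describes can be sketched with Burnside’s lemma: the number of non-isomorphic graphs on n vertices equals the average, over all n! vertex permutations, of 2^(number of orbits of vertex pairs). A minimal Python sketch (it counts all unlabeled graphs, not only connected ones; connected counts follow by a further Euler-transform step, and the function name is my own invention):

```python
from itertools import combinations, permutations
from math import factorial

def count_unlabeled_graphs(n):
    """Number of non-isomorphic simple graphs on n vertices, via Burnside."""
    pairs = list(combinations(range(n), 2))
    total = 0
    for perm in permutations(range(n)):
        # A graph is fixed by perm iff each orbit of vertex pairs is
        # uniformly present or absent, giving 2**orbits fixed graphs.
        seen, orbits = set(), 0
        for p in pairs:
            if p in seen:
                continue
            orbits += 1
            q = p
            while q not in seen:
                seen.add(q)
                q = tuple(sorted((perm[q[0]], perm[q[1]])))
        total += 2 ** orbits
    return total // factorial(n)

# Known values (OEIS A000088): 1, 2, 4, 11, 34 for n = 1..5
```

    This avoids isomorphism testing entirely, though iterating over all n! permutations is itself only feasible for small n; the large counts quoted above come from the refined cycle-index formula rather than brute-force enumeration of permutations.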

  95. Pseudonym Says:

    Out of curiosity, have any non-classical logics (such as this Catuṣkoṭi logic you mention) ever been helpful in solving problems in computer science where it was clear our more conventional logic could not have done the job?

    On behalf of the type theorists of the world, the answer is “yes”. I don’t know about this Catuṣkoṭi logic thing, but intuitionistic logic seems far more useful to type theory than classical logic.

    I can’t speak for topos theorists, but I imagine they’d say something similar.

  96. Scott Says:

    Serge #93:

    Are you ready to accept that there’s a fact of the matter about there being no largest prime number?

    What about which digit the 10^100000000th prime number ends in? Or whether or not the 10^100000000th decimal digit of π is a 5?

    If you accept that there’s a fact of the matter about the questions above, then it seems to me that you’ve already conceded the basic point.

    And if you don’t accept that there’s a fact of the matter about these digits, because the obvious program to calculate them wouldn’t halt before the heat death of the universe … OK then, what if someone found a more clever program, which computed the digits quickly? Then would you admit that there had been a fact of the matter all along?

    And even supposing such a program didn’t exist, doesn’t your view have the strange implication that there are facts of the matter about the largest prime number that can fit in the observable universe, but no facts of the matter about the very next prime after that? If the universe had been just slightly larger, would there then have been a fact of the matter about that next larger prime? Or if cosmologists announced tomorrow that they’ve discovered the dark energy is decaying away, so that the observable universe is actually infinite, would you then say there are facts of the matter about all the infinitely many primes? (Assuming you agree that there are, indeed, infinitely many primes?)

  97. fred Says:

    Scott #96
    I see – one should consider the computational cost associated with every pair (N,P), where N is a natural and P is a certain property.
    Some properties of N (like the final digit of the kth prime) depend on the entire “chain” of naturals that precede it – a big number is viewed as the result of a long computation that can’t be compressed: 2->3->5->7->11->…
    For some other properties (like evenness), the chain can be folded/shortened – a big number in this case is really just like going round and round a finite set of elements: 0->1->0->1->0->1->..

    So, even though the 10^10000000th prime does indeed end with a 5 or not, for all intents and purposes, it’s beyond our reach.
    Do mathematical objects exist independently of us discovering (computing) them?

  98. John Sidles Says:

    Here’s a number-theoretic question in the same general class as the questions that Scott asks in #96 … but harder (as it seems to me) … and more obviously complexity-theoretic.

    Definition  We say that a real number \(r\) is \(P\)-real with respect to runtime exponent \(k\) if there exists a Turing Machine \(\text{TM}( r)\) that outputs \(n\) digits of \(r\) in time \(\mathcal{O}(n^k)\).

    Question  Does ZFC suffice to partially order the \(P\)-reals with respect to \(k( r) = \min_{\,\text{TM}( r)}k\)?

    Questions like this are (as far as I know) both open and difficult. They remind us of Klein’s Maxim:

    Klein’s Maxim  Everyone knows what a curve is, until he has studied enough mathematics to become confused through the countless number of possible exceptions.

    If Klein’s Maxim is true of curves, how much more true is it of runtime exponents?

  99. Scott Says:

    fred #97:

      So, even though the 10^10000000th prime does indeed end with a 5 or not, for all intents and purposes, it’s beyond our reach.

    First of all, no prime except 5 ends in a 5 — otherwise 5 would divide it!! 😀 I guess that one wasn’t beyond our reach after all.

    Of course, we could instead ask whether the 10^10000000th prime ends in a 3. Even then, as I pointed out in my comment, no one has ruled out the possibility that there’s a fast algorithm to decide such questions. Maybe there isn’t, but the mere possibility that an algorithm exists should make us hesitant to philosophize about such things being “forever beyond our reach.” (Remember Comte philosophizing about how the chemical composition of the stars was forever beyond humans’ reach, a few years before spectral analysis was first used to resolve starlight into hydrogen and helium.)
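    The point about the digit 5 is mechanically checkable in a few lines (a throwaway sketch; the names are illustrative):

```python
def is_prime(n):
    """Trial division; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Primes below 10**5 whose decimal representation ends in 5:
primes_ending_in_5 = [p for p in range(2, 10**5)
                      if p % 10 == 5 and is_prime(p)]
# -> [5]
```

    Any number ending in 5 is divisible by 5, so only 5 itself survives the filter.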

      Do mathematical objects exist independently of us discovering (computing) them?

    If you want to say that 17 didn’t “exist” before anyone named it, then I personally don’t understand why you shouldn’t go further, and say (for example) that Venus didn’t exist before anyone noticed it in the sky. It’s true that 17 doesn’t physically exist in the same sense Venus does, but just like Venus, it clearly has properties (e.g., being prime) that you can’t change by wishing them away. By my own favorite definition of reality—“that which doesn’t go away when you stop believing in it”—this feature would imply that 17 is real.

  100. fred Says:

    Scott #99

    What I mean is that there is some interesting difference between a number which can be described using certain resources and a number which can’t be described using the same resources. Not whether “17” has ever been described or not, but whether we can indeed describe it (by counting on our fingers, or by using a Cray, etc).
    Seems to me that complexity theory is about studying the resources needed to carry out the math:

    Given a finite amount of resources R (finite amount of matter, time, space, energy, information propagation limit, etc), for a given property one can only describe/reach/compute a finite set of naturals, and all the others are unreachable beyond a sort of computational horizon.

    And isn’t the essence of P!=NP and such questions really about how that finite set grows as a function of the resources?
    You guys also look for shortcuts through the computational horizon, or ask theoretical questions about altering the limitation of the resources (with “oracles”?).
    And the P!=NP quest is itself subjected to the same limitations as any other property of the naturals.
    (this is probably all very obvious, sorry… I’m no mathematician or complexity theorist, just an average engineer).

  101. Scott Says:

    John Sidles #98: I can answer your question, although given past experience, I’m worried that you’re going to invent some after-the-fact reason for not counting my answer as an answer.

    You ask whether ZFC can “partially order” the polynomial-time computable reals, with respect to exact polynomial bounds of the fastest TMs that compute them.

    Well, it’s trivial to “partially order” this or any other set: just output that r is computable faster than s whenever you can prove that fact, and refuse to answer whenever you can’t!

    So to make the question more interesting, let me assume that you meant “totally order.” There’s still an ambiguity in your question (the same ambiguity, I recall, that marred several of your MathOverflow and CS StackExchange questions; I see that the many commenters trying to explain this to you had no effect). Namely, ZFC can only be “asked” about particular real numbers by giving it finite descriptions of them. And ZFC might be able to prove something about a real number r if described in one way, but not be able to prove the same statement about the same real number if r is described differently.

    But, OK, let’s assume you meant: given two polynomial-time computable real numbers r and s, described—let’s say—by Turing machines to compute them, can ZFC always decide whether one of them is computable faster than the other?

    In this case, the answer to your question is no. Proof: let r=0.r_1r_2r_3… be any real number in [0,1] such that computing its first n digits requires Θ(n^2) time (such an r is easily produced using the Time Hierarchy Theorem). Then define s=0.s_1s_2s_3… by setting s_n=r_n if the lexicographically first n ZFC proofs include a proof of 0=1, and s_n=0 otherwise. Then assuming ZFC is consistent, we have s=0 and therefore s is computable in linear time. But since ZFC can’t prove its consistency, for all it knows s agrees with r after some finite point, and therefore requires quadratic time. So ZFC can’t even decide whether s is computable faster than “0” (by which I mean, the real number computed by some Turing machine that manifestly outputs 0.00000….).
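    Spelled out as a displayed definition (a restatement of the prose construction, nothing new):

    \( s_n = \begin{cases} r_n & \text{if a proof of } 0=1 \text{ occurs among the lexicographically first } n \text{ ZFC proofs,} \\ 0 & \text{otherwise.} \end{cases} \)

    Assuming Con(ZFC), every \(s_n = 0\), so \(s\) is computable in linear time; but ZFC itself cannot rule out that \(s\) eventually agrees with \(r\) and hence inherits its \(\Theta(n^2)\) lower bound.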

  102. John Sidles Says:

    Scott observes “It’s trivial to partially order [the polynomial-time computable reals]: just output that r is computable faster than s whenever you can prove that fact, and refuse to answer whenever you can’t!”

    The history of mathematics shows plainly — as emphasized by Grothendieck for example — that “trivial” observations can be exceedingly “interesting” and even “useful”.

    Example  Relative to “wild curves”, smooth curves have many properties that are trivial … and yet smooth curves are exceedingly interesting and useful.

    Inspired by Scott’s construction (#101), let \(P'\)-reals be those reals with provable most-efficient polynomial-time runtime exponents, as contrasted with the (larger?) set of \(P\)-reals that have most-efficient polynomial-time runtime exponents that are not necessarily provable.

    Then ZFC naturally sorts (by runtime) the \(P'\)-reals into well-ordered complexity classes, whereas the \(P\)-reals cannot be thus sorted (per Scott’s example).

    Thus relative to \(P\)-reals, the complexity-class sorting of the \(P'\)-reals is formally trivial … and yet for practical computational purposes that same sorting is of course exceedingly interesting and useful.

    Candidate MathOverflow Question  Can ZFC or any of its extensions exhibit a list of digits of any of the reals in \(P\backslash P'\)? If yes, provide an algorithm that explicitly exhibits the digits of at least one of the reals in \(P\backslash P'\). If no, give reasons why we ever need to speak of the \(P\)-reals at all … because don’t the \(P'\)-reals capture all of the algorithmic structure of the \(P\)-reals that we have reason to care about?

    Suggestions for tuning this question to be as interesting as possible are of course welcome … concrete answers are of course even more welcome.

    Perhaps the answer: “This question is natural, open, and hard” is the best that we can muster in light of present mathematical knowledge … and this answer too is worth appreciating.

  103. Luke G Says:

    John Sidles #102: P-P’ is simply empty. Any p-time algorithm can be made into a provably p-time algorithm, by adding a timer to the Turing machine. (The technical details are a bit lengthy for a blog comment, but hopefully that is intuitive enough.) So P’ already covers all functions that could be computed in p-time.
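    Luke G’s timer construction can be sketched as follows. This is a rough illustration only, with the “machine” modeled as a Python generator that yields once per step; `clocked` and `scan` are names invented for the sketch, not anyone’s actual construction:

```python
def clocked(step_fn, k):
    """Wrap a step-counted computation in an explicit polynomial budget.

    step_fn(x) must be a generator yielding once per step and returning
    its answer.  The wrapper halts within len(x)**k + k steps by
    construction, so its polynomial running time is provable, while it
    agrees with step_fn whenever step_fn finishes within the budget.
    """
    def wrapped(x):
        budget = len(x) ** k + k
        gen = step_fn(x)
        for _ in range(budget):
            try:
                next(gen)
            except StopIteration as done:
                return done.value
        return False  # budget exhausted: fall back to a default answer
    return wrapped

# Toy "machine": does x contain the substring "11"?  One yield per step.
def scan(x):
    for i in range(len(x) - 1):
        yield
        if x[i:i+2] == "11":
            return True
    return False

fast_scan = clocked(scan, 2)  # provably halts within len(x)**2 + 2 steps
```

    For a genuinely polynomial-time step_fn with a large enough k, the wrapped version computes the same function, which is why the set difference P\P' comes out empty.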

  104. John Sidles Says:

    LukeG (#103), your post outlines (what are called) polylimiter and polynomially clocked reductions, which are discussed at length in the TCS StackExchange community wiki “Does P contain languages whose existence is independent of PA or ZFC?”, specifically in comments by Peter Shor, Sasho Nikolov, Luca Trevisan, and Timothy Chow (the wiki provides links).

    The computational intuition is easy to summarize: it’s not computationally natural (as the wiki says) to “mask cryptic computations by overlaying a computationally superfluous epi-computation.”

    The practical intuition is simple too: when a software engineer is given the source code to an algorithm, and that algorithm is claimed to be \(\mathcal{O}(n^2)\), and the engineer is asked to verify and certify the claimed quadratic runtime, it’s poor practice to ignore the algorithmic structure of the source code, and instead wrap the code in a quadratic time-limiter, so as to rigorously certify the claimed runtime. The real-world problems associated to software verification are much harder than that, and complexity-theory formalisms that respect these difficulties are preferred (by engineers if not logicians).

    You are entirely right too, to appreciate that “the technical details are a bit lengthy for a blog comment.”

  105. Itai Says:

    I think you are right that there is no such thing as mathematical reality.
    Mathematics can truly represent reality only if you build into it all the true physical assumptions possible, as in physics.
    The problem with P vs NP is that it claims to have a profound impact on physical reality, while resting on too few true physical assumptions.
    I think it’s no wonder that a proof cannot be constructed from such small assumptions about physical reality, depending only on the rules of mathematical logic.
    Complexity theory describes reality poorly because it abstracts away important physical elements such as physical time and energy, the physical structure of the computer, etc.
    “Time complexity” only counts classical/quantum gates or computation steps, without defining them physically, yet it claims to have a profound connection with physical time.

    If a new physical complexity theory were developed that compared computation models by measuring the minimal required Planck action (= Planck time × Planck energy) for solving a problem (a quantity governed by the principle of least action, which holds for every known physical theory), then problems like P vs NP would be reformulated and would have a realistic solution.

  106. fred Says:

    Itai #105
    “The problem with P vs NP is that it claims to have profound impact on physical reality, with too little true physical assumption.”

    That’s because “Physical Reality” is a meaningless concept in the Academia, except when they have to pick between Hawaii and Copenhagen for their next conference.
    None of that stuff would fly in the private sector where they expect *results*, especially after 30+ years of *work*.
    (I kid, I kid)

  107. Scott Says:

    Jeremy #89: Sorry for the long delay in answering your comment! I’m delighted that you, as a working mathematician, find it appealing to be “a Platonist about everything ultimately expressible in terms of arithmetic and computation and a formalist about everything else,” like I am.

    To answer your question, I claim that statements about Turing machines with halting oracles have definite truth values, just as “clearly” as statements about ordinary TMs do. The argument is simply this: you agree that any given Turing machine, on any given input, either halts or doesn’t halt? Well then, every call to a halting oracle has a definite right answer. Therefore, the operation of a TM with a halting oracle is mathematically well-defined at every point in time. Therefore, by the same intuition you used for ordinary TMs, you should conclude that the oracle TM either halts or doesn’t halt, and that there’s a definite fact of the matter about which it does.

    Continuing recursively in this way gets you to definite truth-values for all Π_k sentences, for any value of k—and actually further, to Π_α sentences, where α is any computably-definable ordinal. But it doesn’t get you beyond that: it stops at the supremum of the computable ordinals, which is called the Church-Kleene ordinal. Beyond that lies the enchanted realm of the Axiom of Choice and the Continuum Hypothesis, for which I don’t think one has to admit the existence of any well-defined truth value (though neither is one prohibited from doing so).

  108. Ben Standeven Says:

    Sidles #104:
    Complexity theory is about designing algorithms, not certifying them. If you want to formalize the certification of algorithms, the field you want is proof theory.

  109. John Sidles Says:

    Standeven #108:
    Your observation #108 amounts to an appreciation that the P vs NP problem — as it presents itself in engineering practice — naturally comprises elements of complexity theory and proof theory. This observation is both common sense and long-appreciated:

    Results about the complexity of algorithms change quite radically if we consider only properties of computations which can be proven formally. … These results suggest very strongly that we need to explore further how our ‘world view’ of the complexity of algorithms has to be changed if we consider only provable properties of algorithms.
      — Juris Hartmanis, Feasible computations and provable complexity properties (1978)

    If we suppose that Lance Fortnow is correct to assert (thirty-one years later)

    “None of us truly understands the P versus NP problem, we have only begun to peel the layers around this increasingly complex question.”
      — Lance Fortnow, The Status of the P Versus NP Problem (2009)

    then we should neither be surprised nor dismayed that peeling the “The P vs NP Onion” is revealing alternating layers of complexity theory and proof theory. Rather we have every reason to be delighted!

  110. Jeremy Says:

    Scott #107: Thank you for the reply! The argument makes perfect sense.

  111. Dez Akin Says:

    Comment #91

    Formal representation of P vs NP isn’t trivial or obvious just because you know Gödel coding exists. That serves the purpose of making informal proofs in metamathematics for going from first-order theories to number theory, which is useful for the incompleteness theorems, but the grunt work of turning Turing machines into number-theoretic first-order statements is a bit more difficult than just asserting it’s possible to do, and certainly not pointless, because we have proof assistants and automated theorem provers, and the completeness theorem to show that if P vs NP is a theorem, we can find the proof.

    I haven’t seen a first-order representation of P vs NP, but I haven’t seen much work on first-order representation of metamathematics either, except for one Coq proof of the Gödel-Rosser incompleteness theorem, suggesting it’s possible to do, but much of the actual work for pushing definitions hasn’t been done that I know of.

    If it has, then I’d love to see it so I could add it to the TPTP library of open problems, along with a number of other open problems in mathematics.

  112. Dez Akin Says:

    Further, with model theory and forcing, a full first order representation of P vs NP would allow us to prove independence, if we lived in such an unlikely universe.

  113. John Sidles Says:

    Dez Akin #111-2
    Your comments raise a number of points that are addressed in-depth in a well-written survey Is P Versus NP Formally Independent? (2003). In particular, this survey sets forth cogent reasons why the Clay Institute’s definition of “The P versus NP Problem” cannot be captured in first-order logic (for reasons along the lines of Bjørn Kjos-Hanssen’s work referenced in #28 above).

    Many people (including me) agree entirely with this survey article’s main conclusion:

    Conclusion  It’s clear that a proof of P≠NP, if it exists at all, would require utterly new techniques. The results of the previous section twist the knife further by showing that, if P≠NP is unprovable in a strong theory such as Peano arithmetic, then utterly new techniques would be required to show that.

    To borrow a phrase of Herman Melville, in appreciating the merits of this even-handed conclusion we can all of us “splice hands”. And it would be fun to learn if this author’s opinions regarding P vs NP have altered substantially since 2003!

    @ARTICLE{ author = {Scott Aaronson},
    title = {Is P versus NP formally independent?},
    journal = {Bulletin of the European
    Association for Theoretical Computer Science},
    year = {2003}, volume = {81},
    pages = {109--136}}

  114. Scott Says:

    John Sidles #113: As I said before, I stand by the accuracy of what I said in the 2003 survey (including in that passage you quoted), but today I would change the emphasis. There are things I left unsaid that today I would say, and many things that I would phrase differently.

    At the time, I had just recently learned this material, and I was so fascinated by understanding what it would mean for P vs. NP to be independent, and unpacking the content of the results that had been proved about that issue, that I didn’t step back and reflect on how likely it was for independence to actually be true, or make historical comparisons to other famous open problems in math.

    I didn’t say anything whatsoever about it “not being possible” to capture P vs. NP using first-order logic—on the contrary, I discuss why P≠NP is straightforwardly a Π2-sentence of first-order logic. Please refrain from attributing to me absurd things that I never said.

  115. John Sidles Says:

    My apologies Scott … in comment #113 the phrase “first-order logic” should have read “\(\Pi_1\) statements” … then (as it seems to me) your 2003 conclusion (as quoted) remains eminently reasonable … particularly if we are mindful of the ubiquitous lesson of history that “utterly new techniques” — in P vs NP as in any branch of mathematics, science, and engineering — have commonly entailed definitional adjustments whose embrace is justified by those self-same techniques.

  116. Scott Says:

    No, I never said that P≠NP is a Π1-sentence. I said then and “reaffirm today” that it’s a Π2-sentence.

  117. John Sidles Says:

    Scott says  “No, I never said that P≠NP is a Π1-sentence. I said then and “reaffirm today” that it’s a Π2-sentence.”

    This is a statement which my (well-rated) MathOverflow question For which Millennium Problems does undecidable -> true? affirms in excruciating detail — see in particular Bjørn Kjos-Hanssen’s answer regarding P vs NP — and which my comment #113 was intended to further affirm.

    Scott, given that your opinions regarding P vs NP have evolved substantially in the past decade, isn’t it nearly certain that further evolution will occur in coming decades? Not only for you, but for everyone?

    Pudd’nhead Wilson’s Lemma  It were not best that we should all think alike; it is difference of opinion that makes horse races.

      —  Mark Twain (1894)

    Conclusion  It is desirable that various folks place varying weights upon differing layers of the P vs NP Onion … it ensures that everyone is not barking up the same wrong tree.

  118. Coda Says:


    No matter what you say, John will always get the last word. 😉

  119. Serge Says:

    Itai #105:
    I couldn’t agree more. If set theory is one way of expanding arithmetic, physics is but another. The purpose of Church’s thesis is precisely to hide the physical meaning of computing, which is not very well known yet.

    Dez Akin #111:
    Contrary to Scott, for me the undecidability of P!=NP seems like a reasonable hypothesis. I’m told it was recently proved that very long statements were more likely to be undecidable than short ones… and P!=NP is, obviously, a very long statement in first order logic.

    Regarding mathematical reality, a lot of progress has been made since Kronecker, Poincaré, Brouwer and friends. These guys wanted to forbid non-constructive mathematics as meaningless, but nowadays their descendants only criticize its low degree of reality… 🙂

  120. Scott Says:

    Serge #119: No, I don’t see that P≠NP is a particularly “long” first-order statement. It says there’s no polynomial-time Turing machine that decides SAT. Of course you need to formalize what you mean by “Turing machine,” “polynomial time,” and “SAT” … but that’s downright easy compared to formalizing the concepts involved in numerous other open math problems, like the Birch and Swinnerton-Dyer Conjecture, the Hodge Conjecture, or the Yang-Mills mass gap. So (repeating this point until I’m hoarse) does that mean, by your logic, that those other questions are even more likely to be undecidable than P vs. NP? Or are the ground rules for thinking about P vs. NP completely different than for any other problem in math?

  121. Serge Says:

    It would be somehow paradoxical for me to list the arithmetic facts which I believe have good reasons to be true for no reason. But yes, in principle, all the conjectures you’ve quoted are candidates for undecidability – though they might as well turn out to be provable. It is known that, in a precise technical sense, the undecidable statements are more “numerous” than the decidable ones. But of course, this fact says nothing about the natural statements which mathematicians try to prove.

    My gut feeling is different for P vs NP, but the intuition is difficult to communicate. First, there’s a structural similarity between P vs NP on the one hand, and countable sets vs uncountable sets on the other hand. Second, I deeply feel that P vs NP is connected to physics, so much so that arithmetic is too weak to settle it. The basic insight is that computer science lies at the intersection of math and physics, making math alone insufficient to prove everything about computer science – at least as long as the relevant principles haven’t been translated into some suitable extension of arithmetic.

  122. Scott Says:

    Serge #121: Even then, something would have to differentiate P vs. NP in your mind from all the other problems in theoretical computer science over the last 60 years, thousands of which turned out to be solvable, and essentially zero of which turned out to be undecidable in Gödel’s sense. (Of course, many computational problems turned out to be uncomputable in Turing’s sense, but that’s something very different, and if anything, underscores our eventual ability to prove things hard.)

    Is NEXP vs. ACC also at the intersection of math and physics? Or, since that complexity class separation problem was solved, did it turn out after the fact that it wasn’t about physics after all? Then why couldn’t something similar happen with P and NP?

    (Of course, if your belief in P vs. NP’s independence is just a “gut feeling,” then there’s nothing to argue about, but it would be nice if the gut feeling made greater contact with what’s known.)

    More broadly, I think any intellectually honest attempt to wrestle with the implications of the Gödel phenomenon has to account for the fact that, by and large, it hasn’t invaded the “ordinary” parts of math, despite now having 80+ years in which to do so. Of the famous open math problems of 1931, which were arithmetical (not set-theoretic) in nature, zero (AFAIK) have since been discovered to be undecidable in Gödel’s sense, whereas many have been solved, including Fermat’s Last Theorem, the Poincaré Conjecture, the Four-Color Theorem, and Kepler’s Conjecture. Of course this situation could someday change, just like communism could start working, or witchcraft could start winning over technology, but it doesn’t seem to be the trend of history.

  123. Serge Says:

    Scott #122:
    Your last two examples are of different natures. For communism to start working, it would require a common good will among humans. For witchcraft to win over technology, it would mean a change in the laws of Nature. Thus both seem very unlikely, though the latter even less so than the former…

    My knowledge of circuit complexity is too weak, and I don’t have the time to review the subject before replying to your comment – although, of course, I can grasp the meaning of NEXP (= it takes at most exponential time to verify the validity of a right answer). All I can say is that there seems to be a trade-off between time and certainty, and that such a basic principle cannot be proved by means of the current mathematical axioms. I think it will have to be incorporated into mathematics sooner or later.

  124. Itai Says:

    Serge #119
    You are right about “The purpose of Church’s thesis is precisely to hide the physical meaning of computing, which is not very well known yet”.
    If you are interested in thinking about the true physical meaning of computation, see Prof. Tom Toffoli’s article here:

    Especially once we understand that the computation model matters (because of QC), we see that the strong Church–Turing Thesis is holding us back, and holds researchers back from defining something completely different in this field. It is the assumption that makes us believe “Time Complexity” is the only measure of computational complexity (and that, in the TM time-complexity sense, “P vs NP” is the real physical question and no other), although it does not count physical time, and its physical meaning changes from one model to another, and from one computer design to another.
    “Time Complexity” is a very good measure of computational complexity only for a specific computation model and specific design (like the TM).

    However, I think that “Time Complexity” and physical-action measures are not that far from each other (in the arithmetic sense: for a specific computer, the Time Complexity measure will be linear in the physical-action measure).

  125. Raoul Ohio Says:

    John Sidles #52: “After all, without the Axiom of Choice, the non-orderability of the reals would have greatly impoverished 20th century mathematics.”

    Like, how? Is there a good example?

    I haven’t thought about this stuff for several decades, but my recollection is that AC has little or zero connection with anything concrete. For example: AC implies the existence of a non-measurable set. RO sez: BFD! Measure theory is only there so you can define an integral that works in any situation where you need one. So, when was the last time you calculated an integral where the answer depended on the existence of a non-measurable set? Me neither.

    The reals have an existence independent of humans, in that they are the obvious and correct completion of the rationals. Cantor’s theorem shows they are uncountable, a truly interesting and fundamental fact. As for the fact that you can (in principle) well-order them by AC: so what?

  126. Raoul Ohio Says:

    Scott #96: I can answer “What about which digit the 10^100000000th prime number ends in?”

    It ends in a binary 1.
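
    The joke checks out: every prime greater than 2 is odd, so its binary representation must end in 1. A quick sanity check (the helper below is a toy written for this comment, not from the thread):

```python
# Every prime > 2 is odd (an even number > 2 is divisible by 2),
# so its last binary digit is always 1.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [p for p in range(2, 100) if is_prime(p)]
# 2 is the only even prime; all others end in a binary '1'.
assert all(bin(p).endswith("1") for p in primes if p > 2)
print(bin(97))  # → 0b1100001
```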

  127. John Sidles Says:

    Can the modifier “intellectually honest” (in #122) be omitted without loss of substantive content? Consider for example the natural isomorphism:

    Scott asserts “Any intellectually honest attempt to wrestle with the implications of  the Gödel phenomenon  [alt. P vs NP intractability] has to account for the fact that, by and large, it hasn’t invaded the “ordinary” parts of math, despite now having  80+ years  [alt. 40+ years] in which to do so.”

    Tim Gowers’ report from last week’s ICM provides evidence for this alternative reading:

    It was time for Sanjeev Arora to talk [at the ICM] about the work of the Nevanlinna prize winner Subhash Khot.

    It was also the time that a significant proportion of the audience decided that enough was enough and left the room.

    The same thing happened in Hyderabad four years ago, and on both occasions I was fairly shocked: I think it shows a striking disrespect […] for theoretical computer science in general.

    It seems to say, “Right, that’s the interesting prizes over — now we’re on to the ones that don’t really matter.”

    Summary  It is reasonable to regard the shared slow progress and shared mainstream disdain in regard to the multilayered onion of complexity theory/proof theory as evidence of shared foundational obstructions (per #109).

    Conclusion  It is  intellectually honest  [better: mathematically natural] to wonder whether (to borrow George Marshall’s phrase): “Neither of these problems can be solved alone. They are directly related, one to the other.”

    Observation  Criteria of “mathematical naturality” are more nearly objective than criteria of “intellectual honesty” (and traditionally have served mathematical discourse better).

  128. John Sidles Says:

    Raoul Ohio #125: without pretending to offer a definitive answer to the questions you raise, Colin McLarty’s survey “What does it take to prove Fermat’s last theorem? Grothendieck and the logic of number theory” (2013) is commended to Shtetl Optimized readers who are interested in these questions:

    Does the proof of Fermat’s Last Theorem (FLT) go beyond Zermelo Fraenkel set theory (ZFC)?

    Or does it merely use Peano Arithmetic (PA) or some weaker fragment of that?

    The answers depend on what is meant by “proof” and “use,” and are not entirely known.

    This paper surveys the current state of these questions.

    Google finds non-paywalled versions.

    Happy reading!

  129. Raoul Ohio Says:

    Extension of #125 about AC:

    We can probably all agree that calculus (+ advanced calc, real ana, complex ana, functional ana, numerical ana) is a big part of the underpinnings of all science. From the time of Cauchy on, mathematicians have been working on getting the foundations “right”, and proved from first principles, which has come to mean ZF+AC.

    As is well known, about 100 years ago Landau and then Titchmarsh actually organized and numbered the theorems of the above subjects, proving everything with the least possible machinery. In particular, they avoided AC as long as they could. Thus, it is well known where AC is first actually needed: the existence of a non-(Lebesgue-)measurable set. But is this a key part of calculus, or a curiosity?

    Likewise, you can prove lots of other esoteric things from AC. But, how close do any of these reflect anything in the real world?

  130. Scott Says:

    John Sidles #127: Probably the closest complexity-theoretic analogue to natural arithmetical questions being proved independent of the ZF axioms, would be natural computational problems that arise in math being proved to be NP-hard. And unlike the former, the latter is something that’s happened in spades. (Some) mathematicians might not care much about it, they might even walk out of talks about it, but I don’t think anyone would deny that it’s happened. By contrast, my claim isn’t that natural arithmetical questions (questions people cared about for separate, non-metamathematical reasons) are being proved independent of ZF and mathematicians aren’t paying attention; rather, my claim is that it’s not happening at all. This situation might change in the future, and I’m very excited by serious attempts (like Harvey Friedman’s) to change it, but nevertheless, it is the situation.

  131. Douglas Knight Says:

    Scott #130: isn’t the point to distinguish independence from undecidability? Why should NP-hardness be seen as the analogue of the first rather than the second?

  132. Scott Says:

    Douglas #131: Well, yes. But Sidles was the one who insisted on mixing up these different concepts, not me! 🙂 I was trying to show how his analogy breaks down, even if we do our charitable best to interpret it on its own terms.

  133. Dez Akin Says:

    Douglas #131:

    How is independence different from undecidability? If a statement is independent of a theory, it isn’t a theorem of it. If a statement is undecidable, neither it nor its negation is a theorem. Is there some subtle definition in between the two that you guys are implying that no one let me in on?

    Raoul #129

    You don’t need choice for most of ‘real world’ mathematics. It’s simply convenient for some general proofs in analysis, and necessary for some of the proofs about sets larger than the naturals. For countable sets, you can use countable choice instead (and rarely, dependent choice), but even this is rather strong and is usually unnecessary. Sure, non-measurable sets require (some form of) choice, but we don’t usually actually operate on non-measurable sets, except to get around some of the problems created by choice and operating on such large sets as the reals in the first place. This branch of analysis studies general functions over large sets, but the real world so far as we know doesn’t actually use such large sets, and specific analogous results in constructive and computable analysis are sufficient to operate without choice in the world of countable or even finite sets.

    The notion that somehow FLT requires large cardinal axioms always struck me as people fundamentally misunderstanding set theory and bad reporting. If FLT was proven to be dependent on some LCA, that would be big, surprising news. Wiles proof tangentially using a theory that references it… not so much.

  134. John Sidles Says:

    Scott says: But Sidles was the one who insisted on mixing up these different concepts [of logical independence in its single sense, versus undecidability in both of its senses], not me!

    It’s a considerable comfort — to mortals like me — that at least some professional mathematicians regard these issues as challenging!

    Specifically, the Mathematics StackExchange question Does an undecidable decision problem have a ZFC-independent instance? — as asked by Jim Belk (reputation >20K) — vividly illuminates some of the subtler implications of Wikipedia’s definition-page “Independence (mathematical logic)”

    A sentence \(\sigma\) is independent of a given first-order theory T if T neither proves nor refutes \(\sigma\); that is, it is impossible to prove \(\sigma\) from T, and it is also impossible to prove from T that \(\sigma\) is false.

    Sometimes, \(\sigma\) is said (synonymously) to be undecidable from T; this is not the same meaning of “decidability” as in a decision problem.

    Example  Given a (finitely-specified) Turing Machine \(\mathsf{\text{TM}}_{\mathsf{\text{f-s}}}\), the first two assertions are equivalent, and each is strictly stronger (or is it?) than the third:

    \(\quad\mathsf{\text{TM}}_{\mathsf{\text{f-s}}}\in\mathsf{\text{P}}\ \left\{\begin{array}{rl} \text{is independent} &\text{(of the ZFC axioms)} \\ \text{is undecidable} &\text{(by any ZFC proof)} \\ \text{is undecidable} &\text{(by any decision}\ \mathsf{\text{TM}}\text{)} \end{array}\right. \)

    Hmmm … if the first two assertions are true, then what well-posed statements (if any) can we deduce regarding the third assertion?

    The world wonders! Or at least, Mathematics StackExchange readers wonder (as do I).

    Douglas Knight and Scott Aaronson, perhaps you might consider contributing more complete answers to Jim Belk’s Mathematics StackExchange question, particularly regarding the relative strength and/or mathematical insights associated to these varieties of independence/decidability?

    Open Problems  Per the references cited in comment #104, the membership in \(\mathsf{\text{P}}\) (or not) of languages that are recognized most efficiently by \(\mathsf{\text{TM}}_{\mathsf{\text{f-s}}}\)’s whose runtime exponent is effectively unknowable — in the well-posed ZFC sense — poses problems in complexity theory/proof theory that are natural, open, and hard.

    Engineering implications  These issues are key for engineers who (like me) seek to translate the logical language of independence/decidability into the engineering language of validation/verification:

      • Validation  Have we chosen the right postulates?

      • Verification  Have we deduced rightly from the postulates?

    Conclusion  It is a truth universally acknowledged that mathematicians, scientists, and engineers in possession of curiosity must be in want of definitional clarity.

    And yet the concluding paragraphs of Colin McLarty’s fine survey (cited in #128) emphasize — rightly as it seems to me — that it is neither necessary, nor feasible, nor even desirable that everyone agree on the best postulates.

    Committed Platonists may of course feel differently.

    Which is good!

  135. Serge Says:

    Arithmetic statements are almost surely undecidable, yet it’s hard to exhibit a specific undecidable one. Real numbers are almost surely normal in every base, yet it’s hard to exhibit a specific one with this property. Is there any connection between these two facts? Maybe that’s just because the proofs are not constructive…

  136. Ben Standeven Says:

    @Sidles 134:

    Given what I think you mean by your three statements, the first two statements are strictly weaker than the third statement. They are equivalent to “The statements “TM_fs ∈ P” are undecidable (by any decision TM that always backs up its decisions by proofs in ZFC).” And this is obviously strictly weaker than “The statements “TM_fs ∈ P” are undecidable (by any decision TM).”

    And if you have set things up so that “TM_fs ∈ P” is a Gödel sentence for ZFC, then it isn’t “effectively unknowable”; ZFC is consistent, so all of its Gödel sentences are true.

  137. John Sidles Says:

    Ben Standeven says (#136)   “Given what I think you mean [by the three statements in the example of #134], the first two statements are strictly weaker than the third statement.”

    Ben Standeven, thank you for your comment … and yet the opposite view also is entirely reasonable (as it seems to me) … depending upon “what we mean” by these three alternative statements.

    Each alternative hinges upon precise definitions (which of course are tedious). And yet Juris Hartmanis’ essay “Observations about the development of theoretical computer science” (Annals of the History of Computing, 1981) reminds us that the motivations behind these definitional choices are crucial, subtle, and diverse:

    I believe that research in computer science will require extensive exploratory theorizing until we isolate the right conceptualizations and discard many others which do not capture the reality we want to understand. […]

    In particular in theoretical computer science we have been guilty of behaving too much like pure mathematicians; the mathematician’s compass has not always guided us well in exploring computer science. Time and again we have valued the difficulty of proofs over the insights the proven results give us about computing, we have been hypnotized by mathematical elegance and pursued abstraction for its own sake. […]

    Frequently we have practiced “intellectual counterpunching” by staying with small, previously defined (and possibly) irrelevant problems instead of searching for new formulations and development of theories more directly related to computing. […]

    I have no doubts that computer science can mature to a deep and influential science.

    I expect that there will be some profound and surprising insights.

    I am convinced that we will discover, just like we found in other sciences as they matured, that the laws which we will discover about information processing do not feel any obligation to conform to our current naive ideas, derived from very limited experience with small machines and simple problems (or our dim perception how animals process information).

    I believe that as we explore information processing further there will be startling surprises and that our current ideas about computing will have to be modified substantially.

    Let’s apply Hartmanis’ advice in fleshing-out the definitions associated to the third alternative (from comment #134):

    \(\quad\mathsf{\text{TM}}_{\mathsf{\text{f-s}}}\in\mathsf{\text{P}}\ \left\{\begin{array}{rl} \text{is independent} &\text{(of the ZFC axioms)} \\ \text{is undecidable} &\text{(by any ZFC proof)} \\ \text{is undecidable} &\text{(by any decision}\ \mathsf{\text{TM}}\text{)} \end{array}\right. \)

    As an exercise in definitional tuning, let’s consider what might best be meant in imagining that we have been given “a finitely-specified \(\mathsf{\text{TM}}_{\mathsf{\text{f-s}}}\)”, and that the natural question \(\mathsf{\text{TM}}_{\mathsf{\text{f-s}}}\overset{\scriptscriptstyle?}{\scriptstyle\in}\mathsf{\text{P}}\) has been “decided by a \(\mathsf{\text{TM}}\)”.

    As a commonplace real-world example, \(\mathsf{\text{TM}}_{\mathsf{\text{f-s}}}\) might be a simulation algorithm whose run-time we wish to verify and validate; in particular it is natural to wonder whether the runtime of \(\mathsf{\text{TM}}_{\mathsf{\text{f-s}}}\) is strictly within \(\mathsf{\text{PTIME}}\). Then which is the stronger statement: “\(\mathsf{\text{TM}}_{\mathsf{\text{f-s}}}\) runs in \(\mathsf{\text{PTIME}}\) is undecidable by ZFC” versus “\(\mathsf{\text{TM}}_{\mathsf{\text{f-s}}}\) runs in \(\mathsf{\text{PTIME}}\) is undecidable by any decision \(\mathsf{\text{TM}}\)”?

    Let’s follow Hartmanis’ recommendation of avoiding trivializing definitions — however much they may shorten proofs — on the pragmatic grounds that trivializing definitions are of little utility to practicing engineers.

    In the practical world of computer science, algorithmically deciding “\(\mathsf{\text{TM}}_{\mathsf{\text{f-s}}}\) runs in \(\mathsf{\text{PTIME}}\)” is of course a long-desired capability that (as we know) cannot be decided by any given general-purpose \(\mathsf{\text{TM}}\) that is guaranteed to terminate.

    Thus no general-purpose finitely-specified guaranteed-to-terminate decision \(\mathsf{\text{TM}}\) can ever match the power of an oracular warranty “\(\mathsf{\text{TM}}_{\mathsf{\text{f-s}}}\) runs in \(\mathsf{\text{PTIME}}\) is undecidable by ZFC”. It is in this sense that the third statement is weaker than the first two.
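
    The undecidability claim above follows a standard reduction from the halting problem, which can be sketched in code (a toy illustration: the generator model of “machines” and all helper names here are assumptions for the example, not from the comment):

```python
# Sketch: if we could decide "M' runs in polynomial time", we could
# decide whether an arbitrary machine M halts on an input x.
# Machines are modeled as Python generator functions that yield once
# per computation step.

def simulate_steps(machine, x, n):
    """Run the step-generator machine(x) for at most n steps;
    return True iff it halts within n steps."""
    gen = machine(x)
    for _ in range(n):
        try:
            next(gen)
        except StopIteration:
            return True
    return False

def build_m_prime(machine, x):
    """Given M and x, build M' whose runtime is super-polynomial
    iff M halts on x. A decider for "M' runs in PTIME" would
    therefore decide whether M halts on x."""
    def m_prime(w):
        n = len(w)
        if simulate_steps(machine, x, n):
            # M halted within n steps: burn ~2^n steps
            # (capped here so the demo terminates)
            for _ in range(2 ** min(n, 20)):
                pass
        return True  # otherwise M' answers in O(n) time
    return m_prime

def halts(x):          # a machine that halts immediately
    return iter(())

def loops_forever(x):  # a machine that never halts
    while True:
        yield

fast = build_m_prime(loops_forever, "input")  # always linear time
slow = build_m_prime(halts, "input")          # exponential for long inputs
print(fast("abcde"), slow("abcde"))  # → True True
```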

    For further definitional considerations — accompanied by questions that are natural, open, and hard (and whose definitional details are necessarily tedious) — see the MathOverflow wiki “Does P contain languages whose existence is independent of PA or ZFC?”.

    Conclusion  Juris Hartmanis’ 1981 essay cautions us — reasonably as it seems to me — that the foundational definitions of the Complexity Zoo are as yet insufficiently explored and/or adapted and/or diversified to entirely meet our needs as a community of mathematicians, scientists, and engineers.

    This is good news for young researchers!

  138. Dez Akin Says:

    Independent of the ZFC axioms means undecidable by ZFC proofs. It’s possible to use model theory and forcing to prove undecidability (independence), but this isn’t guaranteed to work, because a decision algorithm for first-order satisfiability would violate Church’s theorem. Membership in P is easy to “prove” by any decision procedure… just take it as an axiom. Not exactly helpful…

    Consistency is the harder problem, which is why we want to restrict a proof of membership to some relation of set theory at the very least.

  139. [STEM discussion] Is P=NP? Discuss. - Says:

    […] Relevant post […]

  140. Nikos M. Says:

    Hello, let me share a couple of lines on this issue.

    First, I would disagree that P vs NP is definitely well-posed, and that arithmetic (and its dual form, logic) has a more “fundamental” status than geometry or analysis or set theory or category theory. In fact one can build “mathematics” (and this has been demonstrated repeatedly) upon any one of the previous “angles” (pun intended).

    1. Why are there such “barriers” to P vs NP proof attempts?

    For example the relativisation barrier (one can have it either way, depending on what is taken as reference, to put it simply), the algebrisation barrier, the natural-proofs barrier.

    I could mention a couple more (e.g. the important issue of representation: in unary, everything is polynomial in a “fundamental” sense if you like, the sense of “counting”), though this is not usually taken to mean that.

    2. What exactly does it mean for a set of problem instances to be polynomially solvable?

    Pay attention here. P is the set of problems solvable in polynomial time, while NP is the set of problems verifiable in polynomial time. In a sense, for P “verification equals solution”. But should one fixed polynomial algorithm be used for ALL instances of a problem? NO!

    What if some instances of a problem, e.g. 3-SAT, are polynomially solvable with algorithm A, while other instances of 3-SAT are solvable with algorithm B, and so on, until all instances of 3-SAT are polynomially solvable? And be careful: I am not talking about non-uniform (circuit) computation! That is related to a different algorithm per input size; instead I’m talking about something more general: a different algorithm per instance values (NOT necessarily size).
    Where is this formalised exactly? Nowhere! Is 3-SAT, or is it not, solvable in polynomial time if all of its instances are indeed solvable in polynomial time, but not by one fixed algorithm? Hmm, sounds like “non-deterministic polynomial” after all (pun intended seriously).
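
    The per-instance scenario can be sketched in code (a toy illustration: the dispatch rule and the two “algorithms” are invented for the example). Note that, under the standard definition of P, the dispatcher itself counts as the single fixed algorithm, so the scenario only escapes P if the selection rule is not itself computable in polynomial time:

```python
# Toy "different algorithm per instance" dispatcher. The algorithms and
# the dispatch rule are invented for illustration, not from the comment.
def algorithm_a(instance):
    # hypothetical fast procedure for one family of instances
    return sum(instance) % 2 == 0

def algorithm_b(instance):
    # hypothetical fast procedure for the remaining instances
    return max(instance) > 10

def dispatch(instance):
    # The dispatcher is itself one fixed algorithm: if this selection
    # rule runs in polynomial time, the whole construction is an
    # ordinary polynomial-time algorithm under the usual definition of P.
    if len(instance) % 2 == 0:
        return algorithm_a(instance)
    return algorithm_b(instance)

print(dispatch([1, 2, 3, 4]))  # → True  (sum 10 is even)
print(dispatch([1, 2, 12]))    # → True  (max 12 > 10)
```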

    What if the “verification” part in NP is actually related to the “key” to the polynomial algorithm to use for this subset of instances? Yet EVERY instance INDEED has a polynomial algorithm (hmm, sounds like Algorithmica and Intuitionistica at the same time, doesn’t it?).

    Additionally, this answers another question: why, given that so many people (some may want to add “smarter than the rest”, but I won’t) are searching for (one fixed) algorithm, has no one found ONE FIXED algorithm, yet P=NP?

    So under this light of investigation, summarily but eloquently, one can see that P vs NP is not well-posed after all. But once it IS well-posed, there is a “fundamental way” in which P=NP (and that would be the answer).

  141. Nikos M. Says:

    3. The NPC class of NP-complete problems, and their set of transformations into each other, further supports what I stated in my previous comment. Yes, every NPC problem instance can be transformed polynomially into many other instances of other problems. So what? The major point here, which provides an additional hint, is that instances of problems have different degrees of “computational hardness”: some are easier, some are more difficult, yet ALL are polynomially transformable to and fro (with, by simple statistical reasoning, significant overlap between “easy” and “hard”).

    ps. When posting a similar question in a specific technical forum, the post-doc students (slash moderators) there could not even think about such an approach, and what it would entail for the whole subject, if it was not in the rigid “box” they were taught in school. This was not actually a problem of ability, but of a “wrong sense of expertise” (certificate-based expertise, that is); maybe some felt “threatened” (those wishing, in a self-congratulatory manner, to refer to themselves as “smarter than the rest”).

    ps2. I would like to fix a couple of insignificant typos in my previous comment, but there is no edit option.