P vs. NP for Dummies

A reader named Darren commented on my last post:

I have this feeling that this whole P and NP thing is not only a profound problem that needs solving, but something that can be infinitely curious to try and wrap your mind around…

Thing is- there’s a whole world of great minded, genius hackers out here that can’t understand one iota of what anyone is talking about. We’re not your traditional code-savvy hackers; we’re your inventors, life hackers, researchers, scientists… and I think I can speak for most of us when I say: We would love to take the time to really dive into this thread, but we ask that someone (you) write a blog that breaks this whole thing down into a rest-of-the-world-friendly P/NP for dummies… or at least explain it to us like we’re stupid as hell… at this point I’m really okay with even that.

I’m of course the stupid one here, for forgetting the folks like Darren who were enticed by L’Affaire Deolalikar into entering our little P/NP tent, and who now want to know what it is we’re hawking.

The short answer is: the biggest unsolved problem of theoretical computer science, and one of the deepest questions ever asked by human beings!  Here are four informal interpretations of the P vs. NP problem that people give, and which I can endorse as capturing the spirit of what’s being asked:

  • Are there situations where brute-force search—that is, trying an exponential number of possibilities one-by-one, until we find a solution that satisfies all the stated constraints—is essentially the best algorithm possible?  (For a concrete illustration of brute-force search versus mere checking, see the code sketch after this list.)
  • Is there a fast algorithm to solve the NP-complete problems—a huge class of combinatorial problems that includes scheduling airline flights, laying out microchips, optimally folding proteins, coloring maps, packing boxes as densely as possible, finding short proofs of theorems, and thousands of other things that people in fields ranging from AI to chemistry to economics to manufacturing would like to solve?  (While it’s not obvious a priori, it’s known that these problems are all “re-encodings” of each other.  So in particular, a fast algorithm for any one of the problems would imply fast algorithms for the rest; conversely, if any one of them is hard then they all are.)
  • Is it harder to solve a math problem yourself than to check a solution by someone else? [[This is where you insert a comment about the delicious irony, that P vs. NP itself is a perfect example of a monstrously-hard problem for which we could nevertheless recognize a solution if we saw one—and hence, part of the explanation for why it’s so hard to prove P≠NP is that P≠NP…]]
  • In the 1930s, Gödel and Turing taught us that not only are certain mathematical statements undecidable (within the standard axiom systems for set theory and even arithmetic), but there’s not even an algorithm to tell which statements have a proof or disproof and which don’t.  Sure, you can try checking every possible proof, one by one—but if you haven’t yet found a proof, then there’s no general way to tell whether that’s because there is no proof, or whether you simply haven’t searched far enough.  On the other hand, if you restrict your attention to, say, proofs consisting of 1,000,000 symbols or less, then enumerating every proof does become possible.  However, it only becomes “possible” in an extremely Platonic sense: if there are 2^1,000,000 proofs to check, then the sun will have gone cold and the universe degenerated into black holes and radiation long before your computer’s made a dent.  So, the question arises of whether Gödel and Turing’s discoveries have a “finitary” analogue: are there classes of mathematical statements that have short proofs, but for which the proofs can’t be found in any reasonable amount of time?
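
As a small illustration of the first and third bullets, here is a minimal Python sketch (the clause encoding is just an ad hoc convention, not anything standard): a brute-force solver for 3SAT that tries every truth assignment, next to the polynomial-time check of any single proposed assignment.

    from itertools import product

    # A 3SAT formula as a list of clauses; each clause is a list of literals.
    # Literal k means "variable k is true"; -k means "variable k is false".
    # Example: (x1 or not-x2 or x3) and (not-x1 or x2 or x3)
    formula = [[1, -2, 3], [-1, 2, 3]]

    def verify(formula, assignment):
        """Check a proposed assignment in time polynomial in the formula size."""
        return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
                   for clause in formula)

    def brute_force(formula, num_vars):
        """Try all 2^num_vars assignments -- exponential time in the worst case."""
        for bits in product([False, True], repeat=num_vars):
            assignment = {i + 1: bits[i] for i in range(num_vars)}
            if verify(formula, assignment):
                return assignment
        return None  # no satisfying assignment exists

    print(brute_force(formula, 3))

P vs. NP asks, roughly, whether the exponential loop in brute_force can ever be replaced by something polynomial for problems like this one, or whether brute-force search is essentially unavoidable.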

Basically, P vs. NP is the mathematical problem that you’re inevitably led to if you try to formalize any of the four questions above.

Admittedly, in order to state the problem formally, we need to make a choice: we interpret the phrase “fast algorithm” to mean “deterministic Turing machine that uses a number of steps bounded by a polynomial in the size of the input, and which always outputs the correct answer (yes, there is a solution satisfying the stated constraints, or no, there isn’t one).”  There are other natural ways to interpret “fast algorithm” (probabilistic algorithms? quantum algorithms? linear time? linear time with a small constant? subexponential time? algorithms that only work on most inputs?), and many are better depending on the application.  A key point, however, is that whichever choices we made, we’d get a problem that’s staggeringly hard, and for essentially the same reasons as P vs. NP is hard!  And therefore, out of a combination of mathematical convenience and tradition, computer scientists like to take P vs. NP as our “flagship example” of a huge class of questions about what is and isn’t feasible for computers, none of which we know how to answer.

So, those of you who just wandered into the tent: care to know more?  The good news is that lots of excellent resources already exist.   I suggest starting with the Wikipedia article on P vs. NP, which is quite good.  From there, you can move on to Avi Wigderson’s 2006 survey P, NP and mathematics – a computational complexity perspective, or Mike Sipser’s The History and Status of the P vs. NP Question (1992) for a more historical perspective (and a translation of a now-famous 1956 letter from Gödel to von Neumann, which first asked what we’d recognize today as the P vs. NP question).

After you’ve finished the above … well, the number of P vs. NP resources available to you increases exponentially with the length of the URL.  For example, without even leaving the scottaaronson.com domain, you can find the following:

Feel free to use the comments section to suggest other resources, or to ask and answer basic questions about the P vs. NP problem, why it’s hard, why it’s important, how it relates to other problems, why Deolalikar’s attempt apparently failed, etc.  Me, I think I’ll be taking a break from this stuff.

105 Responses to “P vs. NP for Dummies”

  1. Anonymous Says:

    What’s absolutely sad is that any P/NP aspirant dummy who has lived through the past few days may be forever turned off from the problem …
    But any innocent rebel who comes after, whose mind is not frozen in terror of thoughts of public humiliation and permanent exile to Canada…
    May then proceed to solve the problem and later choose to ignore everybody, throw away a million dollars and continue to live with their mother!

    Oh dear …

  2. Dave Says:

    Anon#1, what the heck are you talking about?

  3. Anonymous Says:

    Hello Scott,

    I have a suggestion that may or may not be suitable to you.

    There are like 5 more CMI problems (source: Wikipedia):

    # 1 P versus NP (aaaargh! get away from me you crazy monster!)
    # 2 The Hodge conjecture
    # 3 The Poincaré conjecture (proven)
    # 4 The Riemann hypothesis
    # 5 Yang–Mills existence and mass gap
    # 6 Navier–Stokes existence and smoothness
    # 7 The Birch and Swinnerton-Dyer conjecture

    Can you run a guest blogger to introduce another problem to dummies, that will take their mind off this crazy monster PNP?

    Thanks!

  4. Alejandro Weinstein Says:

    Dick Lipton has a good explanation:

    http://rjlipton.wordpress.com/2010/06/26/stating-pnp-without-turing-machines/

    In his own words:

    “…the approach I have taken to explain P=NP may seem a bit strange. I hope even if you are well versed in complexity theory you will still enjoy the approach I have taken here. “

  5. Scott Says:

    There are like 5 more CMI problems … Can you run a guest blogger to introduce another problem to dummies, that will take their mind off this crazy monster PNP?

    Sorry, the other ones just seem too trivial by comparison. 😉

  6. John Sidles Says:

    Raoul Ohio asked for an engineering perspective, for which I tried to supply some historical context.

  7. Derek R Says:

    I’m not a mathematician, but the easiest problem to wrap my mind around wrt P/NP is the subset sum problem. The 3SAT problems took me a while to figure out. Now I understand them (I mean, I know what the inputs and outputs are and what’s going on in between, not a deep understanding). But for a while I was confounded by all this talk about conjunctions, disjunctions, and repeated references to some Horny Claus fellow.

    Subset sum is far easier to explain and think about, for the amateur at least.

  8. Anonymous Says:

    #5 There are like 5 more CMI problems … Can you run a guest blogger to introduce another problem to dummies, that will take their mind off this crazy monster PNP?

    Sorry, the other ones just seem too trivial by comparison.

    Ha ha never mind, keep blogging!

  9. Ken Jackson Says:

    Scott, your P vs NP progress PPT deck appears to be corrupted? Or am I the only one getting a corruption error when trying to open the PPT file?

  10. Alex ten Brink Says:

    In your PowerPoint presentation about the progress on the P vs NP question, you wonder ‘Can P vs. NP Be Solved By A “Fool’s Mate?” (A 5-line observation that everyone somehow missed?)’. While I’m not sure about P vs NP, I can think of a 6-line observation about the P vs PSPACE problem:

    Let A be some P-complete problem under LSPACE reductions, say linear programming. This is defined as saying that P = LSPACE^A. Since A is in P, we have PSPACE = PSPACE^A. Now assume P=PSPACE. We then have LSPACE^A=P=PSPACE=PSPACE^A, which contradicts the space hierarchy theorem. Hence P ≠ PSPACE.

    To be completely precise: P-completeness under LSPACE reductions is defined as follows: every problem in P is solvable in logarithmic space with access to an oracle for the P-complete problem. This implies that P is a subset of LSPACE^A. LSPACE^A is a subset of P^A, and since P is low for itself and A is in P (http://en.wikipedia.org/wiki/Low_%28complexity%29) we have that P^A is a subset of P and therefore LSPACE^A is a subset of P and therefore P=LSPACE^A.

    PSPACE is also low for itself, and since A is in P and thereby in PSPACE we have that PSPACE=PSPACE^A.

    The space hierarchy theorem remains true in the presence of any oracle, so for any oracle B we have that LSPACE^B ≠ PSPACE^B. In particular we have that LSPACE^A ≠ PSPACE^A, but if P=PSPACE then we end up concluding LSPACE^A=PSPACE^A.

    So, where is my mistake? The proof is just too simple to settle a question like P vs PSPACE…

  11. JBirch Says:

    @Dave: Anon1 speaks of Grigori Perelman, who solved the Poincaré conjecture. The Poincaré conjecture was one of the Millennium Prize Problems, carrying a reward of one million dollars for a successful proof. The P/NP problem is also one of the Millennium Prize Problems.

    Perelman lives with his mother. On one occasion where he was asked why he turned down the reward, he said “You are disturbing me. I am picking mushrooms.”

  12. Anonymous Says:

    Scott, while I don’t know about the other 5, I wouldn’t say the Riemann Hypothesis is trivial compared to PNP. I bet the number of great minds who have pondered the former is far larger than the number of those who have thought about the latter.

  13. Scott Says:

    Anonymous: I’d definitely agree that the Riemann Hypothesis is a strong second behind P vs. NP. Sure, RH only asks about a particular number-theoretic sequence—rather than something more natural like the set of all polynomial-time algorithms. But we shouldn’t ignore that RH would have some nice consequences for derandomization of number-theoretic algorithms. 😉

  14. lylebot Says:

    Somehow I think Dave was asking about “public humiliation” and “exile to Canada”, not the Perelman reference. Isn’t it humiliating when some of the smartest people in the world drop everything they’re doing to study your paper? And if one could be exiled for writing up a proof with a fatal error in it, there’d be no theoreticians left in the United States.

  15. Bill Kaminsky Says:

    I’m still wondering about your #2 issue from your last post, namely, your skepticism of Deolalikar’s general strategy since it doesn’t directly “know” about some of our most impressive polynomial-time algorithms, e.g., the rigorously polynomial-time algorithms for linear programming.

    Were you saying…

    … 1) merely that you’re skeptical of any P vs NP approach that tries to find a general way to assess the power of all possible polynomial-time algorithms, instead of getting into the proverbial weeds of looking at some particular nontrivial subset of polynomial-time algorithms and seeing what specific problem structure they exploit in order to succeed in polynomial time,

    Or

    … 2) were you saying you’re really iffy on the notion that the Immerman-Vardi Theorem P = FO(LFP) plus some presently known facts about first-order logic could ever be such a general approach?

    Thanks.

  16. Scott Says:

    Bill, I don’t think I was saying either! I was making a basic and (I hope) noncontroversial point: any proof of P≠NP would, in particular, be a proof that

    – No simple reduction to linear programming solves NP-complete problems in polynomial time
    – No simple reduction to matching solves NP-complete problems in polynomial time
    – No simple reduction to determinant solves NP-complete problems in polynomial time

    And so on. Thus, given any P≠NP proof, one could in principle extract a proof for any of these implications. Now, that doesn’t logically imply that the P≠NP proof has to talk explicitly about linear programming, matching, determinants, etc. But to be honest, I’d be shocked if it didn’t talk about all the things I just mentioned (in addition to plenty of other things that we can’t even imagine now).

    Here an obvious question arises: namely, aren’t there an infinite number of nontrivial polynomial-time algorithms? And therefore, if my intuition is right, then wouldn’t it suggest that P≠NP isn’t provable at all (since any proof would need to talk separately about an infinity of fundamentally different algorithmic techniques)?

    Well, I’m hopeful that the answer is no! For example, maybe eventually we’ll understand enough about polynomial-time algorithms to classify all the “nontrivially different techniques” for putting a problem in P into a small number of families (like the “matching family” and the “linear programming family”).

  17. Cranky McCrank Says:

    In case anybody wants to see a video introduction to the P vs. NP problem, with a box of popcorn in your hands rather than a book :-), I find these talks very informative and entertaining:

    http://claymath.msri.org/sipser2006.mov
    http://claymath.msri.org/pversusnp.mov
    http://video.ias.edu/stream&ref=201

    But as great as these talks are, there are uncountably many great science documentaries about physics and only a handful of great math documentaries.

    Hopefully one day there will be a great documentary about P vs. NP in a BBC Horizon style. If there are documentaries about string theory, it shouldn’t be a problem that P vs. NP isn’t proven yet 😉

  18. John Sidles Says:

    Scott asks: Here an obvious question arises: namely, aren’t there an infinite number of nontrivial polynomial-time algorithms?

    Not in the ingeniarius universe … and this restriction is the complexity-theoretic point of the ingeniarius universe (as engineers conceive it).

  19. Job Says:

    If P != NP then it should be possible to invent a problem that requires more than polynomial time to solve but for which solutions can be verified in polynomial time.

    This seems easier than showing that an existing NP-Complete problem can’t be solved in polynomial time, because we’re able to adjust the problem definition to suit our needs.

    It’s a more engineering-oriented approach to proving P!=NP. Can you build a problem with these properties, possibly by assembling existing problems? It’s like asking whether we can build a door that’s easily opened but not easily broken into. This is in contrast to showing that an existing black-box door, whose internal mechanisms we don’t completely understand, has the same properties.

  20. Gus Says:

    @Job:
    “If P != NP then it should be possible to invent a problem that requires more than polynomial time to solve but for which solutions can be verified in polynomial time.”

    NP is the class of problems for which an answer can be verified in polynomial time.

  21. Job Says:

    Gus, I’m not following; my point was that if we can create a problem L that, as is suspected of graph isomorphism and factoring, requires more than polynomial time to solve, but for which solutions can be verified in polynomial time, then L is in NP but outside P. This seems sufficient to separate P and NP.

    I’m afraid I may be missing something obvious.

    Otherwise, I wonder if there’s any evidence that a problem assembled from known problems can’t have the necessary properties to separate P and NP.

  22. Carl Lumma Says:

    I’m having difficulty with

    “P vs. NP itself is a perfect example of a monstrously-hard problem for which we could nevertheless recognize a solution if we saw one”

    Could you provide a little more detail on this?

  23. Job Says:

    Carl, since Scott is on vacation, here’s my opinion – if you take P vs NP to be an instance of a problem in NP, then a solution to P vs NP should be easy to verify, despite probably (and apparently) being difficult to solve.

    But is P vs NP an instance of a problem in NP? The generic problem, of which P vs NP seems to be an instance, is that of determining whether two complexity classes are equivalent.

    If we assume that mathematical proofs can be efficiently verified despite being challenging to solve, that is, that the generic problem of proving a mathematical statement is in NP, then it should follow that P != NP is an instance of an NP problem, given that it is a mathematical statement.

    Personally, I’m also not convinced that a P vs NP solution, or a proof of a mathematical statement, can be efficiently verified. What if P vs NP and mathematical statements are actually in coNP? Then we may be able to efficiently verify that a given false solution is in fact false, but unable to efficiently validate a true solution.

  24. Quantum Tennis Racket Says:

    This talk by Sipser is in the same class of goodness as Wigderson’s talk linked above:

    http://claymath.msri.org/sipser2006.mov

  25. Cranky McCrank Says:

    @Job

    ” The generic problem, of which P vs NP seems to be an instance, is that of determining whether two complexity classes are equivalent ”

    I’m sorry if I misunderstand the “instance of an NP problem” part. I’m not really an expert on the matter, but as far as I understand it, it should be:

    ” Is there a proof of P != NP which is at most n symbols long? ” is an instance of the NP problem ” Is there a proof of P != NP? ”

  26. Quantum Tennis Racket Says:

    @Job:

    For any languages A,B, A!=B may not have a short proof as a function of the statement of the conjecture, so your complexity class equivalence problem is likely not in NP.

  27. Trishul of Shiva Says:

    Since your theoretical computer science fraternity has not yet convincingly proved/disproved Deolalikar’s paper with the resources of 1000s of great minds, his proof could be an example of a problem that takes as much effort to solve as to verify. However, we know a priori that mathematical proofs of this type belong to NP. Now, we have an NP problem that takes as much time to solve as to verify. Thus, P=NP 🙂 Just kidding.

    From a layman’s perspective, I’m curious about what happens when problems with 2 dimensions that are solved by NL & other P-time algorithms become NP when the 3rd “dimension” is added. Like 2SAT is P, while 3SAT is NP. Bipartite matching is P, but 3D matching is NP. 2-Clique is P, but 3-Clique is NP. Graph coloring with 2 colors is P, but 3 is NP. The knapsack problem with only 2 constraints is trivial, while with 3 constraints it becomes NP. What happens when this extra dimension is added? Does it have anything to do with limitations of current machines, which are perhaps limited in dimensions?

    I’m not a researcher and my only exposure to this problem is spending a semester in grad school with an advanced algorithms course, and when I asked my prof this question, he just handwaved. I dunno, maybe the answer to this is “Duh, that is the definition”. But, any smart geek here care to explain it in detail?

  28. Abe Says:

    I found the following link @acm very informative when I came across it last year.

    http://cacm.acm.org/magazines/2009/9/38904-the-status-of-the-p-versus-np-problem/fulltext

    (Scott, off the topic: I got to your blog from the Big Numbers link. Not sure if it’s been updated in a long time. Is there any update, or does that still remain the most relevant?)

  29. Peter Drubetskoy Says:

    Scott, you like to mention automation of human creativity when talking about P vs NP. I have an issue with that. Namely, if we subscribe to Church-Turing thesis, then minds of geniuses are still equivalent to a Turing machine. Thus it is not the case that a genius is efficiently solving hard problems in NP, while the rest of us mortals are only able to efficiently verify the solutions. Rather, a genius is using advanced heuristics to solve or nearly solve hard problems. That is, the genius’ advantage is not the access to efficient algorithms for solving NP-hard problems but access to efficient heuristics for (approximately) solving those problems.
    Now, I also seem to remember that hard instances of NP-complete problems are actually hard to come by – aren’t there many more instances of SAT that are actually solvable in polynomial time? Maybe geniuses are actually picking those instances that are particularly amenable to their advanced heuristics?
    And, by the way, I am not sure the jigsaw analogy in the BBC piece is a good one – isn’t solving a jigsaw actually in P (O(n^2) to be precise)?

  30. Greg Kuperberg Says:

    And, by the way, I am not sure the jigsaw analogy in the BBC piece is a good one – isn’t solving a jigsaw actually in P (O(n^2) to be precise)?

    Solving a jigsaw is NP-complete if the pieces fit together in more than one way, but not freely. For instance, the evil jigsaw-type puzzles in which you have to match colored lines can be made NP-complete. There is a very direct proof of this, at least if boundary conditions are allowed: You can easily simulate a non-deterministic Turing machine and its tape, row by row.

    If each piece has unique neighbors, then you’re correct that there is a polynomial-time algorithm.

  31. Jeremy H Says:

    @Alex ten Brink:

    P != L^A. Basically, a log-space machine can only make logarithmically-sized queries to A, whereas solving an arbitrary language in P requires making polynomially-large queries.

  32. Bill Kaminsky Says:

    @Jeremy H (Comment #31)

    I think you’re almost there in resolving Alex ten Brink’s paradoxical “proof”* of P ≠ PSPACE (Comment #10), but you need one more fact.

    While it’s easy to show** LSPACE ⊆ P, it’s still open whether the inclusion is strict LSPACE ⊂ P or whether LSPACE = P.

    If the inclusion is strict LSPACE ⊂ P, then your argument, Jeremy, goes through unchanged and Alex’s paradoxical “proof” of P ≠ PSPACE falls apart on its assumption that an oracle X exists for which LSPACEX = P .

    But if LSPACE = P, then Alex’s paradoxical “proof” can work — heck, Alex doesn’t even need to introduce an oracle anymore and instead can just quote the space hierarchy theorem.

    So, in my opinion (and of course please correct if I’m wrong) what Alex’s paradoxical proof really accomplishes is just a reminder that solving the open problem “Does LSPACE = P?” would solve the problem “Does P = PSPACE?”

    =============
    FOOTNOTES:

    ** Proof that LSPACE ⊆ P: By definition, a logarithmic-space limited Turing machine has only exp[k log(n)] = n^k configurations for some constant k, and thus it should take O(n^k) time to go through them.

    * My summary of Alex ten Brink’s paradoxical “proof” that P ⊂ PSPACE (from Comment #10)

    1) Space Hierarchy Theorem —> LSPACE ⊂ PSPACE

    2) Space Hierarchy Theorem is provable by diagonalization, and it therefore relativizes —> For any oracle A, LSPACEA ⊂ PSPACEA

    3) Assume there exists an oracle X such that LSPACEX = P. Plugging this into (2), means P = LSPACEX ⊂ PSPACEX

    4) Now recall PSPACE is low for itself PSPACEPSPACE = PSPACE. Thus, if our assumed oracle X is no stronger than a recognizer of a PSPACE-complete language, then PSPACEX = PSPACE.

    And given that we know already P ⊆ PSPACE, it seems X shouldn’t need to be any stronger than a recognizer of a PSPACE-complete language in order to ensure LSPACEX = P.

    Therefore, P = LSPACEX ⊂ PSPACEX = PSPACE, and P ⊂ PSPACE.

  33. Raoul Ohio Says:

    Derek R.

    Excellent point. Subset Sum is not only easy to understand, but it is fun to dream up your own approximation schemes for it.

    For those who haven’t taken up actual programming, I want to mention that there is a deep sense of accomplishment and perhaps joy when you get a moderate-size program working. I occasionally bore friends and students who stay up all night playing modern versions of D&D, etc., with the suggestion to take up programming, because it is more fun, and getting paid is a good thing.

    Once you understand loops and arrays, you can try out your own schemes for Subset Sum. (Probably a good idea to try out some recursive functions first.) If you know GUIs, you can work on some cool representation of results.
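
    In that spirit, here is a minimal Python sketch (an illustrative one, not Raoul’s) of the naive scheme for Subset Sum, together with the easy polynomial-time check of a claimed answer; smarter schemes (dynamic programming, branch-and-bound, approximations) are the fun part left to the reader.

        from itertools import combinations

        def verify(numbers, target, subset):
            """Checking a claimed subset is easy: just add it up."""
            return sum(subset) == target

        def naive_subset_sum(numbers, target):
            """Try every subset -- there are 2^n of them in the worst case."""
            for size in range(len(numbers) + 1):
                for subset in combinations(numbers, size):
                    if verify(numbers, target, subset):
                        return subset
            return None  # no subset sums to the target

        print(naive_subset_sum([3, 34, 4, 12, 5, 2], 9))  # e.g. (4, 5)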

  34. Bill Kaminsky Says:

    Oh darn! I see html superscript tags don’t work. In my last comment (#32), “LSPACEX” = LSPACE^X and “PSPACEX” = PSPACE^X, etc.

    I’m happy that at least the relation symbols ≠, ⊂, and ⊆ were rendered correctly.

    Darn you, WordPress!! First you banish many of my comments into moderation, and then you destroy my superscripts!! 🙂

  35. Daniel Says:

    @Job:

    “if we can create a problem …that requires more than polynomial time to solve, but for which solutions can be verified in polynomial time, then [it] is in NP but outside P. This seems sufficient to separate P and NP.”

    Yes, but the sticky part of the matter is rigorously proving both properties (particularly the “requires more than polynomial time to solve” part).

    For example, to many people and by conventional wisdom, 3SAT intuitively feels like such a problem. But separating hand-wavy intuition from provable fact leaves you with the P vs NP problem.

    After all, maybe it’s the case that 3SAT has a polynomial time algorithm (as some would willingly, and just as intuitively(!), claim). And it’s simply the fact that we haven’t been clever enough to discover it yet; maybe the fault lies with us.

    And on the other hand, given a problem that *obviously* requires exponential time to solve, it seems verifying solutions tends to also require exponential time. For example, evaluating a position on an N-by-N checkers board (i.e. deciding if a move exists that forces a win) is EXPTIME-complete and requires enumerating every combination of moves (which is exponential). However, given a “solution,” i.e. a purported “guaranteed-to-win move,” we can only verify the claim by enumerating every possible move all over again!

  36. Daniel Says:

    The above is patently false. 🙂 Let me correct myself:

    N-by-N checkers is EXPTIME-complete because solutions can be exponentially long in the size of the board. Verification doesn’t come into play.

  37. Raoul Ohio Says:

    1. Everyone’s guess is that the Riemann Hypothesis is probably true and that P = NP is almost certainly false. Either one being decided either way will have huge theoretical impact.

    2. I agree with Scott that the RH is likely to be much easier to prove than P vs. NP. For one thing, I could explain RH to anyone who knows calculus and complex numbers in one minute (using a whiteboard and a couple colors of marker). How much background would you request before trying to explain P vs. NP in an hour?

    3. The RH may well have “practical and/or engineering” significance. Does anyone know any?

    4. What about “PA/OE” significance for P vs. NP? Popular commentators are always yammering about “proving cryptography secure” or something like that. Not!

    5. I am evidently in the minority in my view on the practical significance of P vs. NP. I acknowledge that I am not an expert in TCS. But here is my view:

    I attended a talk on P vs. NP as a grad student in the early 1970s. A key point was that at this time, a paper was being published about every 45 minutes about another famous CS problem that was shown to be NP-complete. This means that all these problems are exactly the same problem.

    I debated the speaker in the Q&A session, which I vaguely recall went like this:

    RO: “These problems are all over the gym and obviously NOT equivalent, so the logical conclusion is that the P, NP, NP-C toolset is too blunt to distinguish Scat from Shinola because it makes all these reductions that could never be done in practice.”

    Speaker: “You don’t get it”, or, maybe, “But we’re proving theorems all over the place”.

    Actually, I have no memory of the reply, so I made those two up, and my version suffers the usual fate of anything you rethink over the years.

    After close to 40 years, I don’t see any weakness in my snap judgment. P vs. NP is a fascinating question, but so is the minimum number of moves that will solve any Rubik’s Cube.

  38. Anonymous Says:

    @lylebot

    Stephen Cook and Lee Smolin have been doing their thing from Canada lately.

  39. trolly Says:

    Cranky McCrank
    Haiku #17

    PBS is known
    to flick entire
    documentaries
    from BBC
    edit and censor
    then show them as
    their own

  40. Mysterio Says:

    Trishul: It’s actually a mysterious fact of nature that some problems become hard when you add in the extra dimension. There’s no general reason that we know of why 2SAT should be easy and 3SAT should be hard; it just follows from the structure of these problems. I’ve thought a little and haven’t been able to find a more meaningful explanation.

    Why is the volume of the unit d-ball maximal at 5 dimensions and not 6? It’s just a fact of nature. The same with 2/3SAT.

    A better answer is that there’s hopefully nothing special about the boundary between 2 and 3. I believe some NP-complete problems are hard even in the k=2 case. Perhaps someone can chime in with an example?
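
    For the 2SAT side of that contrast, here is a minimal Python sketch (an illustrative one, not Mysterio’s) of the classic polynomial-time algorithm: build the implication graph of the formula, and declare it unsatisfiable exactly when some variable and its negation can each reach the other.

        from collections import defaultdict, deque

        def two_sat(num_vars, clauses):
            """Decide a 2SAT instance in polynomial time.

            Literals: +k means variable k, -k means its negation. Each clause
            (a or b) contributes the implications (not a -> b) and (not b -> a).
            The instance is unsatisfiable iff, for some variable x, the
            implication graph lets you walk from x to not-x AND from not-x to x.
            """
            graph = defaultdict(list)
            for a, b in clauses:
                graph[-a].append(b)
                graph[-b].append(a)

            def reaches(src, dst):
                """Plain breadth-first search -- polynomial, nothing clever."""
                seen, queue = {src}, deque([src])
                while queue:
                    u = queue.popleft()
                    if u == dst:
                        return True
                    for v in graph[u]:
                        if v not in seen:
                            seen.add(v)
                            queue.append(v)
                return False

            return not any(reaches(x, -x) and reaches(-x, x)
                           for x in range(1, num_vars + 1))

        # (x1 or x2) and (not-x1 or x2) and (not-x2 or x1): satisfiable (x1 = x2 = True)
        print(two_sat(2, [(1, 2), (-1, 2), (-2, 1)]))
        # (x1 or x1) and (not-x1 or not-x1): unsatisfiable
        print(two_sat(1, [(1, 1), (-1, -1)]))

    The implication-graph trick has no analogue for clauses with three literals, which is one concrete way of seeing where the 2SAT/3SAT boundary bites, even if (as Mysterio says) nobody has a deep explanation of why it sits exactly there.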

  41. Bram Cohen Says:

    If I may be pedantic a bit, isn’t your statement #1 significantly stronger than P!=NP? It states that NP-complete problems can require exponential time, while P!=NP merely stipulates superpolynomial.

  42. asdf Says:

    Let’s see, the best lower bound we have for problems in NP is linear time.

    We don’t know if P=PSPACE. And we don’t know if P=NP. Therefore we don’t know if NP=PSPACE.

    Any problem outside PSPACE cannot be PTIME. Therefore any (decision) problem that can be solved in quadratic time is in PSPACE. And since the best known lower bound for NP is linear time, the same goes for PSPACE.

    Conclusion: we don’t know if there are any decision problems (whether or not they are in NP) that need quadratic time.

    Is that really right? Does it say that decision problems are somehow an artificially weak class? Should we care more about counting problems?

  43. Alex ten Brink Says:

    @Bill Kaminsky

    Thanks for reading my comment; your comment sums up my proof very well (better than I did myself).

    The only ‘gap’ left is to prove there exists some oracle X such that P=LSPACE^X (with the restriction that X is in PSPACE).

    My solution is that any P-complete problem under LSPACE reductions will do for X. Recall that a problem A is called P-complete under LSPACE reductions iff every problem in P can be reduced to it in logarithmic space, in other words every problem in P can be solved in logarithmic space if given access to an oracle for A.

    The argument then goes as follows:

    P ⊆ LSPACE^A: this is equivalent to saying that for all p in P, p is in LSPACE^A, which follows by definition of P-completeness.

    LSPACE^A ⊆ P^A: proven by the same argument that LSPACE ⊆ P.

    P^A ⊆ P: since A is P-complete, it is also in P, so this follows from P being low for itself.

    P = LSPACE^A: follows by putting the above three points together.

    Note that my proof isn’t meant to be ‘paradoxical’: I really think it proves P ⊂ PSPACE since I haven’t found any holes in it yet. The proof is so simple however that I can’t imagine it being right…

  44. Scott Says:

    Peter Drubetskoy (#29):

    if we subscribe to Church-Turing thesis, then minds of geniuses are still equivalent to a Turing machine. Thus it is not the case that a genius is efficiently solving hard problems in NP, while the rest of us mortals are only able to efficiently verify the solutions. Rather, a genius is using advanced heuristics to solve or nearly solve hard problems. That is, the genius’ advantage is not the access to efficient algorithms for solving NP-hard problems but access to efficient heuristics for (approximately) solving those problems.

    You’re completely right, of course, and nothing I wrote should be interpreted to suggest otherwise.

    Even assuming P≠NP, eventually (come the Singularity) humans could perfectly well be replaced by computers that beat us in every respect, including mathematical creativity. Our one consolation would be that even those computers would have trouble with worst-case instances of NP-complete problems, as we do.

    In other words: while P=NP would have some dramatic implications for AI, the P vs. NP question is very different from the “human-level AI question”; either could in principle be answered without the other.

  45. Scott Says:

    Bram Cohen (#41):

    If I may be pedantic a bit, isn’t your statement #1 significantly stronger than P!=NP?

    If I may be equally pedantic, yes!

  46. Olivier Lalonde-Larue Says:

    Here is my proof:

    Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.

    However, it is easy to check whether a program has halted or not.

    Therefore, P != NP.

    Where’s my million dollar? lol 😀

  47. Rahul Says:

    @Alex: Clever. What you’ve actually done is expose a subtlety in how space-bounded machines access an oracle. The problem is that a polynomial-space bounded oracle TM could make exponentially long queries on its oracle tape, and hence in this oracle model it’s not true that PSPACE = PSPACE^A even if A is a set in P.

  48. Reinhard Neuwirth Says:

    I may be mistaken, but I associate Prof. Walter Lewin of MIT with bravely facing the pendulum, not Richard Dawkins nor Richard Feynman, … and I am so excited to have been able to make a contribution to the debate, against all odds!

  49. Tim Says:

    Just wondering, why does the “fast algorithm” have to be deterministic? If a random algorithm has a fixed probability (independent of the size of the input) of solving the problem correctly in polynomial time, this probability can then be made arbitrarily close to 1 by simply repeating the algorithm many times (a number of repetitions independent of the input size, thus keeping it in polynomial time): I can’t see how that would be fundamentally different from a deterministic “fast algorithm”. Would such a random algorithm not solve P vs NP? (at least practically) Since we are talking about worst case runtime, it is my hunch that stochastic algorithms are our only hope of ever achieving a polynomial time solution. Have a look at game theory: unless the game is very simple, minimizing the worst possible loss almost always requires you to play a stochastic strategy.
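
    A minimal Python sketch of the amplification Tim describes (the “algorithm” here is just an illustrative stand-in that answers correctly with probability 2/3): repeat the randomized algorithm and take a majority vote, and the error probability drops exponentially in the number of repetitions while the running time grows only by that constant factor.

        import random
        from collections import Counter

        def amplify(randomized_decider, instance, repetitions=101):
            """Run a bounded-error randomized algorithm many times and return
            the majority answer. If each run is correct with probability >= 2/3,
            the majority is wrong with probability exponentially small in the
            number of repetitions (Chernoff bound), while the total running time
            grows only by the constant factor `repetitions`."""
            votes = Counter(randomized_decider(instance) for _ in range(repetitions))
            return votes.most_common(1)[0][0]

        # A stand-in "algorithm" (purely illustrative): returns the correct answer,
        # True, with probability 2/3 on every input.
        def noisy_decider(instance):
            return random.random() < 2 / 3

        print(amplify(noisy_decider, "some input"))  # almost always True

    This is also why the particular constant 2/3 in the definitions of bounded-error classes is not special: any success probability that is a constant above 1/2 can be amplified this way.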

  50. Todd Bandrowsky Says:

    I think it will shock the world when someone discovers that, in fact, P=NP.

  51. dkp Says:

    @Olivier Lalonde-Larue:
    It is *not* easy to decide whether a program halts or not. It’s only easy if it does halt, but what do you do otherwise?

    @Tim: such an algorithm would not resolve P vs. NP by itself, but it would show that NP ⊆ BPP, which would be a tremendous achievement as well. In fact, results on derandomization and hardness vs. randomness tradeoffs have led many researchers to believe that P=BPP, meaning in particular that if the algorithm you refer to did exist, it could be derandomized (i.e. turned into a deterministic one).

  52. dkp Says:

    @Olivier: even for programs that do halt it’s not easy to verify this statement unless an upper bound for their running time is known a priori.

  53. Fábio Negrão Balby Says:

    As a dummy, I would say you could have started by telling us what the frack P and NP stand for… Or maybe I’m just too much of a dummy, who knows…

  54. Rafee Kamouna Says:

    P for political, and NP for non-political

  55. Peter Drubetskoy Says:

    @ Trishul of Shiva (#27):

    And what to make of the fact that a^n+b^n=c^n has infinitely many solutions in positive integers for n=2 and none for n>2? :))

  56. Cody Says:

    I found this helpful in explaining the problem to my friends & family: The Algorithm: Idiom of Modern Science, by Bernard Chazelle. (Though it’s about more than just P vs. NP.) In particular the example of a 1,000-song playlist being impractical to divide into two equal-length playlists is easier to wrap your head around.

  57. Mike Says:

    @Anonymous #1: Liquid lunch was it, mate?

  58. miforbes Says:

    @ Rahul, Alex: If I’m not mistaken, PSPACE^P=EXPTIME.

    (That is, one inclusion is via padding: to solve an EXPTIME problem, we use the PSPACE machine to pad 2^(n^k) bits onto the oracle tape, and then ask a P machine to simulate the EXPTIME machine, which it can do via the padding trick. The other inclusion is via the observation that the PSPACE machine can only make exponentially large oracle queries to a P-machine, which can be answered in EXPTIME, and then we can simulate PSPACE itself in EXPTIME also.)

    So @ Rahul: if the above is correct (and I know space-bounded oracles are tricky, so perhaps not), then A in P implies PSPACE^A is contained in EXPTIME. So how can PSPACE^A for A in P be provably different from PSPACE, given that we can’t separate PSPACE from EXPTIME?

  59. Jeremy H Says:

    @Bill (#32):

    Resolving LSPACE=P in the negative (which is the expected outcome) would say nothing about P=PSPACE.

    @miforbes (#58):

    The oracle tape counts towards the space complexity, so a PSPACE machine can only make polynomially-sized queries. So padding doesn’t work.

    @Alex (#43):
    As you say, the only gap is to prove there exists some oracle X such that P=LSPACE^X (with the restriction that X is in PSPACE). However, a P-complete problem does not satisfy this requirement.

    To expand on this, we have to clarify how a space-bounded oracle TM works. Specifically, does the query tape count towards the space quota?

    Suppose it does count (this is the standard definition). The reduction, while computable in LSPACE, results in a final query which is polynomially-sized. Therefore the LSPACE machine cannot write down the query on its work/query tape, and can therefore not ask the oracle. If you could find a LSPACE-reduction in which the resulting output did fit into LSPACE, then your proof would go through.

    Suppose, on the other hand, that the query tape does not count towards the space quota. Then, as miforbes (#58) showed, PSPACE^X = EXP. Your proof then becomes P=LSPACE^X != PSPACE^X = EXP, which we already know from the time-hierarchy theorem.

    I hope that helps.

  60. Alex ten Brink Says:

    @Jeremy H

    That does help: I see now that space-bounded oracle TMs are quite a bit trickier than the time-bounded versions.

    Then, if I understand it right, PSPACE is not low for itself (i.e., PSPACE != PSPACE^PSPACE) and Wikipedia is wrong (http://en.wikipedia.org/wiki/Low_%28complexity%29, http://en.wikipedia.org/wiki/PSPACE), since if PSPACE != PSPACE^X, then we’d definitely have PSPACE != PSPACE^PSPACE… Does Wikipedia need correcting or did I fall into another space-bounded trap?

  61. Bill Kaminsky Says:

    A correction and a bleg:

    ******************
    First, the correction:
    @Jeremy H (#59): Quite right. I was quite sloppy in how I concluded my comment (#32). Heck, my whole comment should’ve been shortened to: “At the risk of being pedantic, it is still open whether LSPACE = P. If this is the case (which everyone thinks unlikely), then P ≠ PSPACE follows immediately from the space hierarchy theorem.”

    [Heck, this comment here maybe should’ve been shortened just to: “Quite right.” Oh! If brevity is the soul of wit, then I’m afraid that I’m a twit. 😉 ]

    ******
    And now, the bleg…

    As we’re pondering P vs. NP and all that, can anybody here summarize the status of Michael Freedman’s proposal that maybe #P ⊆ BQP because maybe one could use a topological quantum computer to measure the values at rth roots of unity of a Jones Polynomial associated with some knot encoding a #P-hard problem?

    To be specific, Freedman’s musings on topological quantum computing and the #P ?⊆? BQP problem started in his 1998 Proc. Nat. Acad. Sci. paper “P/NP, and the quantum field computer”, which is available for free on his website: http://stationq.cnsi.ucsb.edu/~freedman/Publications/65.pdf

    In this paper, Freedman sketches out a feasibility analysis that concludes:

    Thus the number of reliable bits in each observation required to error correct to the exact values of J(\xi^a) [that is, the exact values of the Jones polynomial at integer powers a of the rth root of unity \xi = e^{2 \pi i / r}] is linear in the size of the problem #K [that is, the number of crossings in the knot K to which this Jones polynomial J applies] … In other contexts, such as quantum computing, this scaling of accuracy requirements has been considered reasonable although a better goal for accuracy of individual measurements is poly(log K) bits or even a small constant number of bits. Because r ≈ 4#K observations are required to solve the linear system, a total of quadratically-many bits must be collected before all observational errors can be corrected.

    Now, I think I understand why we quantum computing people — rigorous and honorable scientists that we are 😉 — aren’t screaming from the rooftops “Hey, give us money! We know how to solve #P problems efficiently!!” Namely, so long as one doesn’t have a true topological quantum computer and instead is simulating one on a standard quantum computer, then this requirement of a linear number of bits is still too stringent. This is because, as best anyone can tell, the best standard quantum algorithms for approximating the values of Jones polynomials have runtimes of O[poly(1/error)].

    But is it still an open question of whether or not one could construct a topological quantum computer and simply directly measure the needed linear number of bits?

    Is this particular idea for #P ⊆ BQP essentially dead?

    Thanks.

  62. Carl Lumma Says:

    @Job

    If we assume that mathematical proofs can be efficiently verified despite being challenging to solve

    Ah yes, thanks. Viewing mathematics as a search problem. I never got this. Let’s say for the sake of argument there’s exactly one P!=NP proof shorter than 30 bits. I will personally check them all if you tell me how to identify it.

  63. Jeremy H Says:

    @Alex:
    The standard definition is that the oracle tape counts towards the space complexity. Under this definition, PSPACE is low for itself, and Wikipedia is correct.

    As Scott is fond of saying, “oracles are subtle”.

  64. At your arrogance Says:

    Hi Dummy,

    How did you get a job offer and then get tenure at MIT?

  65. another poor person breathes again Says:

    @64: by solving lots of hard problems, and thus gaining the respect of the TCS community.

  66. @Scott post 16 Says:

    If the eventual proof ended up relying on proving an infinite number of categories of algorithm, I’d go out on a limb and say that we’re fundamentally asking the wrong question and need to get back to the drawing board in how we’re dividing classes of problem.

  67. Kevin Roche Says:

    Hi,

    I noticed a few people asked for a “Millennium problems for Dummies” type blog. Well, I don’t know of such a thing, but I have read a really excellent book which comes as close as you are going to get. The book is “The Millennium Problems” by Keith Devlin. It was written shortly before the proof of the Poincaré conjecture (by a guy who turned down the $1 million prize, by the way).
    Here’s the Amazon link: http://www.amazon.co.uk/Millennium-Problems-Greatest-Unsolved-Mathematical/dp/1862077355/ref=cm_cr-mr-title

  68. miforbes Says:

    @Jeremy H: what are your reasons for saying the standard definition of oracles in space-bounded computation counts the oracle tape toward the space of the machine?

    From my limited understanding of the literature, there isn’t entirely a consensus on what is “right”.

  69. VectorSpace Says:

    #29 & #44 So, can we come up with some metric of “mathematical elegance”, based on say, the ratio between the complexity of proving a mathematical result and the complexity of verifying it. Provided of course we come up with some way of quantifying the above two quantities(which in itself might lie in NP 🙂 )

    So, this raises another question, and I’d be very interested in hearing an example: can you quantify how much harder it is to prove one statement when compared to another?
    For example: How much harder is it to come up with the closed-form for the solutions of a quadratic equation when compared to proving say, (x+y)^2=x^2+2xy+y^2.

  70. Rahul Says:

    Scott (#16): Not sure I agree with you there. The question of proving NP != P seems somewhat orthogonal to that of making a taxonomy of polynomial-time algorithms. In order to show that NP != P, one needs to find an inherent limitation of polynomial time, while the taxonomy question is instead about our ingenuity in working within that limitation. As an analogy, take the P vs AC^0 question. We have lower bounds there, but I’m not sure how close we are to a taxonomy of AC^0. AC^0 still seems to surprise us from time to time, like defying GLN for example 🙂

  71. Raoul Ohio Says:

    Rahul (#70): Not sure I agree with you there. “making a taxonomy of polynomial-time algorithms” should be doable and useful, just a lot of hard work. On the other hand, “P vs. NP” is likely to never be decided, has little if any practical use, and is absorbing huge numbers of smart people hours.

    The TCS community should beware of the example of Point Set Topology. Several decades ago PST was one of the most important areas in math. They have veered off into recreational math areas such as “topology of transfinite cardinals”. It might be the case that PST has fully explored everything worth exploring, and this is the best they can do. Actually, a lot of PST people switched to TCS, which probably explains a lot.

  72. Rahul Says:

    Raoul: I was not making a judgment about which question is harder or more interesting, just saying that they might be somewhat orthogonal.

  73. Peter Drubetskoy Says:

    Anybody able to throw some light on the assertion (such as voiced by Wigderson here and elsewhere) that pseudorandom generators are used to derandomize BPP algorithms? I understand how they can take the second P out of BPP (i.e., by fooling the algorithm with pseudorandom numbers you get a result that is no longer dependent on truly random numbers) but how do you get rid of the B, i.e., the fact that the resulting algorithm, while deterministic, is still bounded-error? Maybe for all practical intents and purposes there is no difference between a polynomial algo that returns the correct result with prob 100% and one that does so with prob 100%-epsilon, but theoretically there certainly is.
    And how does one see that the AKS algo for PRIMES is an example of derandomization of this type? (As Wigderson says in the above paper: “Indeed, careful analysis of some important probabilistic algorithms, and the way they use their randomness, has enabled making them deterministic via tailor-made generators. These success stories (of which the most dramatic is the recent deterministic primality test of [3]) actually suggest the route of probabilistic algorithms and then derandomization as a paradigm for deterministic algorithm design.”)

  74. Peter Morgan Says:

    I note the following session at ICM2010:

    24 Aug, Tue 19:40 – 19:55 Room No. G01
    B. Wen Tianjin University of Commerce
    Answer to question P/NP is P not= NP [sc] (Section 15)

    http://e-program.icm2010.in/e-guide.php?activity_type=sc&sortby=section

  75. @Peter Morgan Says:

    Lol, what the hell?

  76. Scott Says:

    Bill Kaminsky #61:

    Is this particular idea for #P ⊆ BQP essentially dead?

    Yes.

    If there were a “live” idea for #P ⊆ BQP, I’d drop everything to pursue it! But everything we know suggests that BQP doesn’t contain NP (or even “smaller” classes such as SZK), let alone #P.

  77. Scott Says:

    Peter Drubetskoy #73:

    The way Nisan-Wigderson, Impagliazzo-Wigderson, and the other derandomizers get rid of the “B” in BPP is simple. Assuming you have a good enough pseudorandom generator (which stretches O(log n) bits to n bits), you just loop over all possible outputs of the PRG (of which there’s only a polynomial number), and count the fraction p' of outputs that cause the algorithm to accept. By assumption, p' must be close to the fraction p of “truly” random strings that cause the algorithm to accept. So if p≤1/3 then p'<1/2, while if p≥2/3 then p'>1/2, and hence x∈L if and only if p'>1/2.
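
    Written out as code, that recipe looks roughly like the following Python sketch (prg and randomized_decider are placeholders standing in for the real objects being described, not actual constructions):

        from itertools import product

        def derandomize(randomized_decider, instance, prg, seed_length):
            """Enumerate every seed of the pseudorandom generator (only
            polynomially many, since seeds are O(log n) bits), run the
            algorithm on each pseudorandom string, and accept iff more
            than half of the runs accept."""
            accepts, total = 0, 0
            for seed in product([0, 1], repeat=seed_length):
                pseudorandom_bits = prg(seed)   # the PRG stretches the seed to n bits
                accepts += bool(randomized_decider(instance, pseudorandom_bits))
                total += 1
            return accepts / total > 1 / 2

        # Toy usage (illustrative only): a fake "PRG" that just repeats the seed,
        # and a "randomized" decider that accepts iff the first bit is 1.
        toy_prg = lambda seed: seed * 4
        toy_decider = lambda instance, bits: bits[0] == 1
        print(derandomize(toy_decider, "some input", toy_prg, seed_length=3))

    The entire difficulty, of course, is in building a pseudorandom generator good enough that p' really does track p; that is where the hardness assumptions in Nisan-Wigderson and Impagliazzo-Wigderson come in.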

  78. Scott Says:

    Tim #49: Just wondering, why does the “fast algorithm” have to be deterministic?

    It doesn’t! Indeed, complexity theorists are generally perfectly happy to replace the P vs. NP question by the “more natural” NP vs. BPP question. (Though we have extremely strong evidence nowadays that P=BPP, and hence both questions have the same answer anyway.)

  79. Peter Drubetskoy Says:

    Thanks, Scott! Got it. So, is there an intuitive reason why AKS may be seen as a “tailor-made” derandomization of this kind? Any reference would be appreciated.

  80. Scott Says:

    Peter: The original AKS paper actually explains it pretty well!

  81. Bill Kaminsky Says:

    Scott #76: Thanks!

    And now if I may pester you (and the rest of those reading the comment thread) with a follow-up…

    Is there any sense that there’s some specific and mathematically deeply interesting structure underlying how quantum approximation algorithms for the Jones polynomial fail to be precise enough to put #P into BQP?

    In particular, similarly to how Mulmuley hopes that the fancy-schmancy, high-falutin’ connection between algebraic geometry and calculating the permanent could shed light on classical computational complexity, might we hope that the fancy-schmancy, high-falutin’ connection between knot theory & topological quantum field theory and calculating the Jones polynomial could shed light on quantum computational complexity?

    Or does the failure of adequate precision stem from a much more mundane cause?

    [More details should anyone care: My impression of where things must run afoul is the following.
    1) The basic idea is that if one could either directly construct or computationally simulate a quantum system obeying a certain topological quantum field theory, then one could measure certain expectation values of observables for the system and, in so doing, read off values of the Jones polynomial.
    2) Now, going off what I quoted from Michael Freedman’s 1998 Proc Nat Acad Sci paper (see comment #61), in order to get the answer to some #P-hard problem from these expectation values, you’d need to measure them with a # of bits of accuracy linear in the size of the problem (which is of course another way of saying exponential accuracy in the size of the problem).
    3) Needing such exponential accuracy in and of itself isn’t disqualifying. For example, in quantum phase estimation, you can measure the phase to n bits of accuracy with just O(n) qubits and O(n^2) time. Thus, for intractability, there must be something more: for example, the variances of these topological quantum observables being too big to get good statistics for their means with a reasonable number of observations.
    So, my question is really whether this “something more” that cinches the intractability is associated with high-falutin’ mathematics of deep significance and power, or whether it’s something boring.]

  82. Jeremy H Says:

    @miforbes (#68):
    Primarily that PSPACE is generally considered low for itself. This is certainly the convention on Wikipedia, and I recall this being considered true in Sipser’s and Madhu’s courses. However, I don’t have a formal reference.

    As this discussion has proved, however, it’s clearly not a universal convention. 🙂

  83. some person Says:

    I thought NP problems were those where, if “N” denotes the input data set size, it is provable that no algorithm exists which can find the solution, for any input data set, in polynomial time. Where by “polynomial time” we mean that the number of steps required grows with “N” faster than ANY polynomial.

    I don’t think that’s all that deep or mysterious. Construct a binary tree with “N” levels, and randomly place a marker at a node of that tree. No algorithm can find the location of that node in polynomial time. That’s an “NP” problem. So what?

  84. Cranky McCrank Says:

    @some person

    Polynomial time relates to the number of steps a Turing machine takes to solve a problem, in relation to the input size of the problem. If you construct such a binary tree, you have to use it as an input for the Turing machine. As you have to include the location of the marker in your input, it is very easy for the Turing machine to find out at which location the marker is set.

  85. asdf Says:

    some person, that problem is p-time because the description of the tree and the randomly placed marker has to be in the input. If you want to make the algorithm search every node, you need input size exponential in the depth, and therefore the search is p-time in the input size.

  86. rajneesh Says:

    It is really confusing but interesting at the same time.

  87. Quantum Tennis Receiver Says:

    P vs NP for cuties! (Lance Fortnow and some cute kids!)

  88. Cute and Informative :) Says:

    #87

  89. My name Says:

    I for one thought your bet was an interesting way to express your level of confidence in the proof being wrong.

    The whole affair reminded me of an old Usenet bet. A guy called Mike Goldman had a long-standing offer of $5000 (somewhat wimpy – he didn’t really bet his house!) to anyone who could create a lossless compression program for compressing general files. A person taking up the challenge would receive a file of Goldman’s choice (presumably random) and would simply have to send back a compressed file along with a decompressor, the total sum of which should be smaller (even just 1 byte smaller) than the original file.

    http://en.wikipedia.org/wiki/Lossless_data_compression

    *Spoiler*
    As an interesting twist to the story, one guy managed to complete the challenge – sort of. This guy talked Mike into allowing him to submit his compressed file as several files, as long as the total sum of all files and the decompressor was less than the original file.

    It turned out that in allowing this, Mike had been rather careless. The challenger simply split the original file into several files, splitting each time at a specific byte value. The byte at the splitting point was wholly omitted. This way his decompressor could simply paste the files together, inserting the specific byte value at the merge points, thereby reconstructing the original file. By having enough splitting points, the savings from omitting the byte totalled more than the size of the decompressor. Of course the “savings” originated from the information implicitly present in the splitting of the files; essentially the data was moved into the file system structures.
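
    For the curious, here is a rough Python reconstruction of the trick as described above (a guess at the mechanics, not the challenger’s actual code): split at every occurrence of a chosen byte, drop that byte, and let the “decompressor” re-insert it at every join.

        SPLIT_BYTE = b"*"  # any byte value that occurs in the original file

        def split_trick(data):
            """'Compress' by splitting at SPLIT_BYTE and dropping it: the stored
            bytes shrink by one per occurrence, but the information quietly
            moves into the file boundaries kept by the file system."""
            return data.split(SPLIT_BYTE)

        def decompress(pieces):
            """The 'decompressor': glue the pieces back together, re-inserting
            the dropped byte at every join."""
            return SPLIT_BYTE.join(pieces)

        original = b"some*data*with*several*split*bytes"
        pieces = split_trick(original)
        assert decompress(pieces) == original
        print(len(original), sum(len(p) for p in pieces))  # 34 vs 29: five bytes "saved"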

    Unfortunately, Mike Goldman never acknowledged the claim of the prize. But this story shows that even safe bets can sometimes take unexpected turns. With regards to the P!=NP bet, I guess there’s much more uncertainty than in Mike’s bet.

    By the way, does the bet apply to ANY proof that is submitted until 2019? Or only to proofs by this author? Perhaps only to variants of his specific proofs?

  90. My name Says:

    1) You wrote “Any bet”, does that mean “Any author”?

    I was unsure because I thought your bet was actually about expressing your concern about this particular proof (after your brief initial review raised some flags)? However, now your bet really seems to be about the field’s ability to prove this at all (until 2019).

    2) By the way, there used to be a betting market on http://www.ideosphere.com/fx-bin/Claim?claim=P!NP.

    It would have been interesting to see whether the recent claim would have caused huge price swings. I guess one way you could have expressed your disbelief in the proof would have been simply to short this market massively (or whatever the equivalent is). That way you would drive down the price and would be able to make a profit in case you were right. If the price was low to begin with (and hence not really affected by the claim of a proof) you might not be able to move the market much this way, but on the other hand that would imply that your observations didn’t have much “surprise value” to the market as a whole (I wonder “who” the market really is in this case – mathematicians in general, complexity theorists, students, hedge funds, …?).

  91. Scott Says:

    My name: I thought I said pretty specifically that the offer only applies to Deolalikar’s proof!

  92. Anonymous Says:

    Lance and Bill are whining like little girls. Scott has a “Don’t ask, Don’t tell” policy … just deletes the offending comments without a word 😉

    http://blog.computationalcomplexity.org/2010/08/comments.html

  93. Raoul Ohio Says:

    Originally, the bet seemed like a funny idea.

    Unfortunately, it appears to be a moron attractor.

  94. Scott Says:

    Raoul: Alas, that could describe a large fraction of what I do… 🙂

  95. Steven Stadnicki Says:

    Since a lot has been said about the 2SAT/XORSAT vs. 3SAT delineation, and a little has been said about the issues of relativization, I’m curious: has there been much study about the specifics of the oracles A for which P^A != NP^A with respect to how they ‘amplify’ these gaps? In other words, does looking at *why* 2SAT^A is different from 3SAT^A say anything interesting? IIRC, the oracles I’ve seen for distinguishing P from NP are fairly degenerate so I imagine there’s not much useful data to be gleaned here, but it seems like at least a conceptually interesting angle…

  96. AlanMorgan Says:

    I remember P/NP from my classes years ago. What I can’t recall is why we focused on polynomial time. Is there such a thing as NE (exponential) or NE-Complete? NL (linear)? I know that other complexity classes are studied, but I can’t recall ever seeing these.

  97. Scott Says:

    AlanMorgan: Yes, those all exist, and hundreds more! See the Complexity Zoo. (Polynomial time isn’t sacrosanct, but is convenient for many reasons—and switching to a different notion of “efficiency” doesn’t seem to make the (P vs. NP)-like questions any easier.)

  98. Cranky McCrank Says:

    I remember that I heard somewhere that problems seem to be NP-complete if there’s not much underlying mathematical structure to the problem.

    It seems reasonable that a problem turns out to be in P if there is a lot of underlying structure which could be used to design a shortcut algorithm.

    The main difficulty in proving lower bounds seems to be that it is very hard to prove statements about all possible algorithms one could ever invent. Instead of looking at the possible algorithms for a problem, one might look at the underlying structure of the problem.

    It is a very easy problem to decide whether a number is a multiple of 10. Primality testing is in P as well, but it takes more steps. Couldn’t one argue that the set of multiples of 10 has more underlying structure than the prime numbers, because the prime numbers look more randomly distributed within the natural numbers?
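    A toy illustration of the contrast in Python (trial division is used here just for concreteness – it is polynomial in the number n itself but not in its digit length; genuinely polynomial-time primality testing, e.g. AKS, is far more involved):

        # Two decision problems with very different amounts of exploitable structure.

        def is_multiple_of_10(n: int) -> bool:
            # One arithmetic operation, thanks to the rigid structure of the set.
            return n % 10 == 0

        def is_prime_trial_division(n: int) -> bool:
            # Simple but slow: checks divisors up to sqrt(n). There is no single
            # obvious "pattern" of the primes to exploit the way there is for
            # multiples of 10.
            if n < 2:
                return False
            d = 2
            while d * d <= n:
                if n % d == 0:
                    return False
                d += 1
            return True

        print(is_multiple_of_10(130), is_prime_trial_division(131))  # True True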

    I wonder: if one could find a way to order all possible graphs, might it be possible to show that the set of graphs which have a Hamiltonian cycle looks very randomly distributed within all graphs?

    If this doesn’t work, I’m still curious whether it would make sense to try to find a way to measure the underlying mathematical structure of a problem 🙂

  99. Random Says:

    Dear Scott: I really love that you insisted on the fact that P/NP is but the flagship problem in a whole class of natural questions, with varying definitions of efficiency.

    The natural question (though pure speculation): would solving one such instance also solve the other instances? Or in other words: is this class of problems essentially one problem? Or still a whole class of loosely related problems?

  101. Raghu Raghavan Says:

    “Are there situations where brute-force search—that is, trying an exponential number of possibilities one-by-one, until we find a solution that satisfies all the stated constraints—is essentially the best algorithm possible?”

    Certainly there must be provably exponentially hard problems, no? Why is the above question relevant to the P vs NP problem?

  102. Raghu Raghavan Says:

    “Is it harder to solve a math problem yourself than to check a solution by someone else? ”

    I would like to suggest another question: can you construct a machine model of creativity? I think that question is badly posed. For example, take a standard example of creativity: Kepler finding the ellipses. If your computer’s search space included ellipses in the first place, the problem would be trivial. If it did not, and its program did not contain suggestions like “if you’ve got too many circles in your fit, start looking at other conic sections”, then it would be impossible. And the problem is, once you know circles won’t do, there are too many other places to look. So I would say: yes, it _is_ harder to discover a solution (the search space is too large) than to check it.

  103. LifeIsChill Says:

    It seems that a central problem in theoretical CS is proving converses. Say a problem $X$ is hard to solve in polynomial time. Suppose there is a method $M$ that solves it in exponential time, and suppose we also know a constraint $C$ whose failure prevents $X$ from being solved in polynomial time using method $M$. Then the corresponding statement is:

    If I use $M$ and $C$ is not satisfied, then I cannot solve $X$ in polynomial time.

    The converse – if I solve $X$ in polynomial time, then $C$ is satisfied in method $M$ – may or may not be true.

    The fundamental problem in proofs in theoretical CS seems to be finding such a method $M$ and constraint $C$ for which the converse is also true, so that a polynomial-time algorithm for $X$ would contradict the first statement – and hence $X$ cannot be solved in polynomial time.
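    One way to lay out the pattern this is pointing at, in LaTeX (the notation $\mathrm{Poly}_M(X)$, meaning “method $M$ solves $X$ in polynomial time”, is introduced here only for this sketch and is not standard):

        % The first statement above, about the specific method M:
        \neg C \;\Longrightarrow\; \neg\,\mathrm{Poly}_M(X)
        % Its contrapositive holds automatically: if M runs in polynomial time, C must hold.
        \mathrm{Poly}_M(X) \;\Longrightarrow\; C
        % The desired "converse" is the stronger, method-independent statement:
        X \in \mathrm{P} \;\Longrightarrow\; C
        % Combined with a proof that C fails, it would yield the lower bound:
        \neg C \;\Longrightarrow\; X \notin \mathrm{P}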

  104. jonas Says:

    Let me ask a question about the basics. This might not be the right post for it, but I haven’t found a better one. Is it proven that there exists a universal quantum algorithm (a quantum analog of a universal Turing machine)? By this, I mean a quantum algorithm that gets as its input a quantum algorithm and an input for it, simulates the run of that quantum algorithm, and outputs whatever it would output, and the run time is polynomial (or even quasi linear) in how long the simulated machine would run.

    Until such a machine is constructed, it seems quite possible that any one quantum algorithm can be simulated in polynomial time on a classical computer, yet access to a quantum computer whose program you can change gives you more computational power than a classical computer.

  105. jonas Says:

    To Random: yes, the problems are all equivalent (ignoring some errors in the phrasing, like the one Raghu Raghavan points out, for example).