Two P vs. NP updates (neither of them technical)


“Meme” courtesy of my brother David

First news item: it’s come to my attention that yesterday, an MIT professor abused his power over students for a cruel April Fools’ Day prank involving the P vs. NP problem.  His email to the students is below.

I assume most of you already heard the news that a Caltech grad student, April Felsen, announced a 400-page proof of P≠NP last week.  While I haven’t yet completely digested the argument, it’s already clear that Felsen (who I actually knew back when she was an MIT undergrad) has changed theoretical computer science forever, bringing in new tools from K-theory to higher topos theory to solve the biggest problem there was.

Alas, Felsen’s proof has the “short-term” effect of making the existing 6.045 seem badly outdated.  So, after long reflection, I’ve made a decision that not all of you are going to like, but that I believe is the right one intellectually.  I’ve decided to reorient the entire course to focus on Felsen’s result, starting with tomorrow’s lecture.

And further, I decided to rewrite Thursday’s midterm to focus almost entirely on this new material.  That means that, yes, you’re going to have THREE DAYS to learn at least the basics of algebraic topology and operator algebras, as used in Felsen’s proof.  To do that, you might need to drop everything else (including sleep, unfortunately), and this might prove to be the most strenuous and intense thing you’ve ever done.  But it will also be an experience that will enrich your minds and ennoble your souls, and that you’ll be proud to tell your grandchildren about.  And of course we’ll be there to help out.  So let’s get started!

All the best,
Scott


Second news item: many of you have probably heard that Lance Fortnow’s The Golden Ticket—the first popular book about the P vs. NP problem—is now out.  (The title refers to Roald Dahl’s Charlie and the Chocolate Factory, which involved a few chocolate bars that had coveted golden tickets inside the wrappers, along with millions of chocolate bars that didn’t.)  I read it last week, and I think it’s excellent: a book I’ll happily recommend to family and friends who want the gentlest introduction to complexity theory that exists.

Some context: for more than a decade, people have been telling me that I should write a popular book about P vs. NP, and I never did, and now Lance has.  So I’m delighted to say that reading Lance’s book quickly cured me of any regrets I might have felt.  For not only is The Golden Ticket a great book, but better yet, it’s not a book that I ever could’ve written.

Here’s why: every time I would have succumbed to the temptation to explain something too complicated for the world’s journalists, literary humanists, and pointy-haired bosses—something like relativization, or natural proofs, or arithmetization, or Shannon’s counting argument, or Ladner’s Theorem, or coNP, or the reasons to focus on polynomial time—every time, Lance somehow manages to resist the temptation, and to stick to cute stories, anecdotes, and practical applications.  This is really, truly a popular book: as Lance points out himself, in 162 pages of discussing the P vs. NP question, he never even formally defines P and NP!

But it goes beyond that: in the world of The Golden Ticket, P vs. NP is important because, if P=NP, then people could design more effective cancer therapies, solve more crimes, and better predict which baseball games would be closely-matched and exciting (yes, really).  P vs. NP is also important because it provides a unifying framework for understanding current technological trends, like massively-parallel computing, cloud computing, big data, and the Internet of things.  Meanwhile, quantum computing might or might not be possible in principle, but either way, it’s probably not that relevant because it won’t be practical for a long time.

In short, Lance has written precisely the book about P vs. NP that the interested layperson or IT professional wants and needs, and precisely the book that I couldn’t have written.  I would’ve lost patience by around page 20, and exclaimed:

“You want me to justify the P vs. NP problem by its relevance to baseball??  Why shouldn’t baseball have to justify itself by its relevance to P vs. NP?  Pshaw!  Begone from the house of study, you cretinous fools, and never return!”

My favorite aspect of The Golden Ticket was its carefully-researched treatment of the history of the P vs. NP problem in the 50s, 60s, and 70s, both in the West and in the Soviet Union (where it was called the “perebor” problem).  Even complexity theorists will learn countless tidbits—like how Leonid Levin was “discovered” at age 15, and how the powerful Sergey Yablonsky stalled Soviet perebor research by claiming to have solved the problem when he’d done nothing of the kind.  The historical chapter (Chapter 5) is alone worth the price of the book.

I have two quibbles.  First, throughout the book, Lance refers to a hypothetical world where P=NP as the “Beautiful World.”  I would’ve called that world the “Hideous World”!  For it’s a world where technical creativity is mostly worthless, and where the mathematical universe is boring, flat, and incomprehensibly comprehensible.  Here’s an analogy: suppose a video game turned out to have a bug that let you accumulate unlimited points just by holding down a certain button.  Would anyone call that game the “Beautiful Game”?

My second disagreement concerns quantum computing.  Overall, Lance gives an admirably-accurate summary, and I was happy to see him throw cold water on breathless predictions about QC and other quantum-information technologies finding practical applications in the near future.  However, I think he goes beyond the truth when he writes:

[W]e do not know how to create a significant amount of entanglement in more than a handful of quantum bits.  It might be some fundamental rule of nature that prevents significant entanglement for any reasonable length of time.  Or it could just be a tricky engineering problem.  We’ll have to let the physicists sort that out.

The thing is, physicists do know how to create entanglement among many thousands or even millions of qubits—for example, in condensed-matter systems like spin lattices, and in superconducting Josephson junctions.  The problem is “merely” that they don’t know how to control the entanglement in the precise ways needed for quantum computing.  But as with much quantum computing skepticism, the passage above doesn’t seem to grapple with just how hard it is to kill off scalable QC.  How do you cook up a theory that can account for the massively-entangled states that have already been demonstrated, but that doesn’t give you all of BQP?

But let me not harp on these minor points, since The Golden Ticket has so many pleasant features.  One of them is its corny humor: even in Lance’s fantasy world where a proof of P=NP has led to a cure for cancer, it still hasn’t led to a cure for the common cold.  Another nice feature is the book’s refreshing matter-of-factness: Lance makes it clear that he believes that

(a) P≠NP,
(b) the conjecture is provable but won’t be proven in the near future, and
(c) if we ever meet an advanced extraterrestrial civilization, they’ll also have asked the P vs. NP question or something similar to it.

Of course we can’t currently prove any of the above statements, just like we can’t prove the nonexistence of Bigfoot.  But Lance refuses to patronize his readers by pretending to harbor doubts that he quite reasonably doesn’t.

In summary, if you’re the sort of person who stops me in elevators to say that you like my blog even though you never actually understand anything in it, then stop reading Shtetl-Optimized right now and go read Lance’s book.  You’ll understand it and you’ll enjoy it.

And now it’s off to class, to apologize for my April Fools’ prank and to teach the Cook-Levin Theorem.

108 Responses to “Two P vs. NP updates (neither of them technical)”

  1. wolfgang Says:

    >> I assume most of you already heard the news

    Does this prank work in the interwebs age, when everybody can immediately google the story, check Twitter, and find nothing?

  2. Scott Says:

    wolfgang: A couple people fell for it (but only briefly). I didn’t want anyone to fall for it for an extended period, anyway. 🙂

  3. Rahul Says:

    Wasn’t the story that von Neumann reoriented his entire ongoing course when he heard of Gödel’s breakthrough?

  4. Anonymous Says:

    While I otherwise approve of this post, I think you’re doing quite a disservice to the complexity theorist grandmothers of the world.

  5. Kris Says:

    An April 1st story:

    A college math teacher gave us a quiz on April 1st. I didn’t know how to do the first question, so I skipped to the second. I didn’t know how to do it, either. I was concerned.

    I skipped to the third, saw it was to prove that for n > 2, there are no positive integers a, b, and c such that a^n + b^n = c^n.

    Then I looked up and noticed the professor was trying very hard to hold back his laughter.

  6. Scott Says:

      I think you’re doing quite a disservice to the complexity theorist grandmothers of the world.

    Anonymous #4: Fine, I took “grandmothers” out! Off the top of my head, I can think of only one complexity theorist grandmother (Lenore Blum), but I’m sure there are others. (I’m not counting Nancy Lynch, who’s a distributed systems theorist.)

  7. Ryan Says:

    I don’t know for sure, but there’s probably a high chance that Sheila Greibach is a grandmother.

  8. Anonymous Says:

    To be fair, one need not be a well-known complexity theorist to be able to converse about or understand complexity theory in the context in which you mentioned it (probably many computer scientist and/or mathematician grandmothers could hold their own in such a conversation). Anyway, thanks 🙂

    -anonymous #4

  9. GASARCH Says:

    My favorite chapter of Lance’s book was the one that says what would happen if P=NP. He points out, quite rightly, that while crypto would have to be redone, this is a very minor part of the profound changes that would happen. Since he (and around 80% of all theorists) thinks P≠NP, this chapter is fiction – and Lance seems to have a talent for fiction!

  10. Eliezer Yudkowsky Says:

    If there’s a universal solution that takes O(N^2000) time, then technically P=NP but we might not need to redo a lot of cryptography right away. Not saying I think this is likely, it’s just that “polynomial time” doesn’t necessarily equal “small polynomial time” or “cheap”. (Unless there’s a result I don’t know about in this area.)

  11. Evan Says:

    @Eliezer

    I am just an armchair complexity theorist, but I would find P=NP with an asymptotic running time O(N^2000) more unlikely (not to mention unsatisfying) than either P!=NP or P=NP with a more sane power law.

  12. Scott Says:

    Eliezer #10: My standard answer is that, if it turned out that P=NP but the fastest algorithm took n^2000 time, then we’d simply need to change the question to the informal one that we actually cared about in the first place: namely, is there a practical algorithm that quickly and reliably solves NP-complete problems on reasonable-sized instances?

    However, the probability that the fastest algorithm would take anything like n^2000 time—even conditioned on P=NP!—seems sufficiently tiny that, in the absence of evidence to the contrary, we often collapse the informal question with the formal one for convenience.

  13. Dániel Says:

    “However, the probability that the fastest algorithm would take anything like n^2000 time—even conditioned on P=NP!—seems sufficiently tiny […]”

    How would you evaluate such a weird conditional probability, even very informally? Personally, I would guess the opposite, especially after Robertson-Seymour.

  14. Scott Says:

    Dániel: For one thing, the probability that the complexity is n^c must go to 0 as c→∞, simply because the probabilities have to sum to 1! And if they have to go to 0 eventually, then by Occam’s Razor, why not start with the first possibilities, like 1, 2, 3?

    For another, I think Robertson-Seymour (or other results in parameterized complexity) are actually poor examples here. Those results do give you n^c algorithms where c is enormous, but the catch is that c itself is a function of some “arbitrary” parameter of your problem (for example, which minor you’re trying to exclude). With P=NP, by contrast, we’d just be talking about a single integer (characterizing the complexity of CircuitSAT, let’s say), not a function of anything else.

    And of course, our overwhelmingly more common experience in algorithms and complexity has been that, when a problem is solvable in polynomial time, it’s also solvable in (say) n^2 or n^3 time. The exceptions are interesting to us precisely because they’re exceptions.
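
    (To spell out the first point, under one simplifying assumption: suppose we assign a prior probability p_c to “the true exponent for CircuitSAT is c” for each integer c ≥ 1. Since the p_c are nonnegative and sum to at most 1, they must tend to 0 as c grows; and if, in the Occam spirit, the p_c are nonincreasing in c, then 2000·p_2000 ≤ p_1 + … + p_2000 ≤ 1, so p_2000 ≤ 1/2000. The nonincreasing prior is an assumption, not a theorem—but it’s what’s doing the work in “sufficiently tiny” above.)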

  15. Neel Krishnaswami Says:

    It’s not a probability; it’s a figure of speech. If you accept classical logic, the probability of any purely mathematical statement is either 1 or 0, depending on whether it’s true or false. Anything else, and you run afoul of Cox’s theorem. Given the undecidability of first-order arithmetic, this means we’re not Bayesian reasoners, and shouldn’t try to be, for rather good reasons of computability. Turning Scott’s figure of speech into a formal statement points at rather a lot of interesting mathematics yet to be done….

    Unfortunately, I must admit I found Scott’s April Fools’ joke kind of meh, though. The joke relies on the unlikeliness of higher topos theory and K-theory having applications to complexity. But: I, personally, have used techniques from topos theory to establish complexity bounds, and it wouldn’t be shocking if K-theory had applications to algorithmic complexity, given the deep connections between operator algebras and linear logic.

  16. Scott Says:

    Neel #15: Because we’re at best boundedly-rational agents rather than perfectly rational ones (in other words: we don’t know all mathematical truths, to put it very mildly), I don’t see any philosophical difficulty in talking about the “probability” of a famous unsolved mathematical conjecture being true. Indeed, a Bayesian would even say that we could “elicit” probabilities from you, by observing which bets you will or won’t accept. Would you accept an offer of $500,000 from me if the Riemann Hypothesis is ever proved, with you only having to pay me $5 if it’s disproved? I’m not offering that 🙂 , but if you’d accept it, it tells me something about what probabilities you assign to RH versus not(RH).
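
    (To make the elicitation concrete, under the idealized assumption that you care only about expected dollars: accepting that bet is favorable exactly when p·$500,000 > q·$5, where p and q are the probabilities you assign to RH eventually being proved versus disproved—that is, when p/q > 1/100,000. So accepting would reveal only a sliver of belief in RH; it’s declining the bet that would say something striking.)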

    Regarding the April Fools joke, I respectfully submit that you misunderstood it, by seeing it from your perspective as a programming-language theorist rather than from the perspective of an undergrad. From an undergrad’s perspective, “topos theory” and “K-theory” are just two unbelievably abstruse-sounding things that only a very cruel and deluded professor would imagine he could force his students to learn in 3 days. I could’ve just as well used algebraic geometry and representation theory.

  17. Rahul Says:

    Scott #16:

    For the most part, “in 3 days” was redundant in that comment.

  18. lylebot Says:

      it’s a world where technical creativity is mostly worthless

    But if P=NP, mathematicians would have a lot more brain cycles to devote to solving problems even harder than NP, wouldn’t they?

      if we ever meet an advanced extraterrestrial civilization, they’ll also have asked the P vs. NP question or something similar to it.

    I feel some kind of quasi-anthropic argument coming on. Is that because they couldn’t become advanced enough to get here without asking it?

    For us humans on Earth it took a really long time to produce the sort of intellectual environment that would allow for the hyper-abstract thinking that led to being able to state the P vs. NP question (I mean, Gödel’s and Turing’s influential works are meta-meta-mathematics, aren’t they?). I wonder if by the time a civilization has reached that point, they’re also already well on the way to dooming themselves with climate change or nuclear armageddon or what-have-you, and have little to no chance of ever meeting another advanced civilization.

  19. Bjoern Says:

    I’ve got a question concerning the “P = NP, but the fastest algorithm takes n^2000 time” discussion: Suppose that it turns out that P = NP and there’s an algorithm for SAT that runs in time n^2. So, we’re not in the discussed case, but in the one that seems more reasonable.

    By the time hierarchy theorem, we know that DTIME(n^c) != DTIME(n^(c+1)) for every c. Doesn’t this imply that, even though we’re in the “more reasonable” case, there are problems which cannot be efficiently solved (in a practical sense)? We know there must be problems which cannot be solved in polynomial time faster than, say, n^2000 (by the time hierarchy theorem), and the efficient SAT algorithm doesn’t really help here: although the SAT algorithm only takes time n^2, the reduction from such a problem to SAT would have to take time somewhere around n^2000 to not violate the lower bound of the problem.

    So, isn’t it the case that if P = NP, we’re more or less in the “unreasonable” case discussed earlier, even when we find an efficient algorithm for an NP-complete problem?

    I suppose I have a mistake somewhere in my argument, since I’ve never seen it anywhere before/it never comes up, but I just can’t seem to find it.

  20. Dániel Marx Says:

    (I’m different from Dániel above.)

    1. In the minor-testing algorithm of Robertson and Seymour, the exponent is always 3, it is “just” the constant factor that is an enormous function of the minor.

    2. Suppose that exactly the situation in your prank letter happened: there is a very complicated P≠NP proof, and discussing it is clearly beyond the scope of your course. What would you change, if anything, in your course? “I ask this nonrhetorically.”

  21. LZ Says:

    @Bjoern: the length of a SAT formula that encodes that a TM M accepts an input in n^c time (that is, a SAT formula to decide the problem from the time hierarchy theorem) is probably a low-degree polynomial in n^c. So the exponent of the time of the reduction from that problem to SAT is not “astronomical”; it only depends on the parameter c (again).
    In other words, to me, the THT problem seems simply a fixed-parameter version of an undecidable problem (the halting problem). And as Scott said, it’s pretty inevitable and not strange that the complexity of these problems depends on the parameter.
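
    (Rough numbers, under the assumption that the standard Cook–Levin tableau encoding is used: a machine running in time n^c yields a SAT instance of size roughly n^(2c), up to polylog factors, and feeding that into a hypothetical n^2-time SAT algorithm costs about (n^(2c))^2 = n^(4c) steps overall. So the exponent of the end-to-end algorithm does blow up—but only because c itself does, exactly as the time hierarchy theorem requires; there’s no “astronomical” cost hidden in the reduction beyond the parameter c.)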

  22. Bjoern Says:

    @LZ: Yes, I see your point. The reduction might always be a low polynomial in n^c (I don’t know the exact runtime of the generic reduction to SAT right now), as you said, and therefore its running time is not astronomical when measured this way, but it might be astronomical (in practice) when c itself is already astronomical. I agree that this isn’t really surprising when c itself is already huge.

    The argument “if P = NP, then we can probably find an algorithm for an NPC problem which runs in time n^2 or n^3” is often used to argue that if P = NP, then “every problem we care about” can be solved efficiently. Especially since our experience tells us that we can usually improve the run time a lot once we’ve found a poly-time algorithm. But this is limited by the THT. Of course, this is only really interesting for practical purposes if one of the “practical problems” actually is only solvable in astronomical polynomial time by itself.

    Conversely, problems which can only be solved in time n^20000 (or something like that) would probably be interesting for cryptography if P=NP, as a means of dealing with this situation. The THT guarantees that such problems, which can only be solved in astronomical time, exist even when we find efficient algorithms for some NPC problems.

    The only thing I find interesting/odd about this is that it’s usually not mentioned at all when this type of discussion arises, although it shows interesting facts about both possible outcomes when P=NP (which in turn made me wonder whether there’s a flaw in my argumentation).

  23. Serge Says:

    Scott #12 : “Is there a practical algorithm that quickly and reliably solves NP-complete problems on reasonable-sized instances?”

    The intuitive answer is no, but this isn’t a mathematical question anymore. The right definitions of “practical” and “reasonable” would be required to state a hypothesis, and while we’re at it, NP-completeness might have to be replaced by something more general. My opinion is that exponential time is only one of the many ways Nature uses to prevent us from solving hard problems easily.

  24. Scott Says:

    Dániel Marx #20:

      In the minor-testing algorithm of Robertson and Seymour, the exponent is always 3, it is “just” the constant factor that is an enormous function of the minor.

    Thanks for the correction! Shows you how “deeply” I’ve studied parameterized complexity… 🙂

      Suppose that exactly the situation in your prank letter happened: there is a very complicated P≠NP proof, and discussing it is clearly beyond the scope of your course. What would you change, if anything, in your course? “I ask this nonrhetorically.”

    It’s hard to predict, but my guess is: I would add 3 or 4 lectures discussing the broad ideas behind the P≠NP proof and take away some other stuff to compensate—and most of the rest of the course would remain unchanged.

    After all, people would still need to prove problems NP-complete via reduction. They’d still need to understand Gödel’s Theorem and the unsolvability of the halting problem and Shannon’s counting argument and the other stuff in the course (at least, as much as they need to today…).

    I suspect the biggest difference would be that the course would be tremendously more exciting for a few years afterward—as people scrambled to prove PH infinite, cryptosystems secure, etc. etc.—and thereafter would become a bit less exciting. Eventually, the course would simply become like linear algebra or multivariable calculus (or for that matter, most other undergrad courses): that is, a course that stays in the curriculum, not going anywhere, but whose central questions have already been answered.

  25. Scott Says:

    Lylebot #18:

      For us humans on Earth it took a really long time to produce the sort of intellectual environment that would allow for the hyper-abstract thinking that led to being able to state the P vs. NP question (I mean, Gödel’s and Turing’s influential works are meta-meta-mathematics, aren’t they?). I wonder if by the time a civilization has reached that point, they’re also already well on the way to dooming themselves with climate change or nuclear armageddon or what-have-you, and have little to no chance of ever meeting another advanced civilization.

    Alas, I can produce no argument whatsoever against that possibility. 🙁 Note that I did say if we ever meet another advanced civilization; I didn’t say anything about how likely that is to happen…

  26. Geoff Says:

    Scott #16:

    The idea of a “boundedly rational Bayesian” is actually a really interesting one, and *extremely* difficult to analyze, as far as I can tell. The problem is that Bayes’ rule only tells us what to do if we feed it a valid prior — all sorts of badness can happen if we give it a prior that doesn’t integrate to 1, for example.

    There’s a long and honored tradition, of course, of feeding invalid priors to Bayes’ rule (for example, this thread). The problem with this practice is that, if we assume a contradiction, anything follows, so logic becomes worthless. This is in contrast to the usual situation with bounded rationality, in which anything we are able to prove will actually be a theorem, but we don’t happen to be able to prove all theorems.

    So, somehow the situation with “fake Bayesian reasoning” seems a lot worse than the ordinary “bounded rationality” situation — reasoning with logic when we know that logic is worthless seems like an odd thing to do! AFAICT, when we do “fake Bayesian reasoning”, we’re basically hoping that we get lucky and don’t happen to “really use” the contradiction in our reasoning, which seems rather optimistic.

    Hence my statement above that this is an interesting problem: it would be really cool if someone were to come up with a usable “boundedly rational Bayesian” reasoning strategy that guaranteed results that were only boundedly bad, instead of arbitrarily bad.

  27. vznuri Says:

    lance’s book looks way cool & am planning to get a copy forthwith. thx for review.

      But as with much quantum computing skepticism, the passage above doesn’t seem to grapple with just how hard it is to kill off scalable QC. How do you cook up a theory that can account for the massively-entangled states that have already been demonstrated, but that doesn’t give you all of BQP?

    lance is right about this characterization of quantum computing. you move the goalposts with your off-base counterargument. he is pointing out in the book that at the minimum it’s HARD TO CONTROL QUBITS. i.e. it’s [at least] a HARD ENGINEERING PROBLEM. history and research prove this. this is not a theory that rejects QM computing but it doesn’t have to be for QM computing to be effectively impossible for a long time. there is much history about physical systems that are HARD TO CONTROL and therefore not in the realm of feasibility. a good example I give on my blog is FUSION ENERGY systems.
    re P vs NP, see also an outline for an NP vs P/poly proof based on monotone circuits, hypergraphs, factoring, and slice functions.

  28. Scott Says:

    vznuri #27: Every sensible person agrees that quantum computing might be “effectively impossible for a long time” (in other words: hard). That’s not the question under discussion here. In the passage I quoted, Lance speculated that “It might be some fundamental rule of nature that prevents significant entanglement for any reasonable length of time.” I was trying to point out the severe scientific difficulties that confront anyone who believes that. Your comment is a perfect illustration of how many people still fail to understand those difficulties.

  29. vznuri Says:

    lance writes in your quoted passage:

    …Or it could just be a tricky engineering problem.

    scott, that clearly IS the question under discussion here. or rather there are clearly TWO questions that we don’t really have clear answers to, and lance mixed them up in the same passage, but clearly discriminates them with separate sentences:

    (1) is QM computing *theoretically* impossible
    (2) could it be theoretically possible but “effectively impossible” due to limitations in perfecting technology. for (another great) example consider the Babbage engine, which was not built for over 1.5 centuries due to technical limitations of the machinery-making of the time. [“clockwork” engineering].

    now with your $100K bet, [which by the way has gotten some really great media attn, big congratulations on that, you are very effective with that], you are really focused on (1). it’s a very deep/valid question. but your bet appears to have nothing to do with (2). moreover you seem to have an ulterior, at times extreme/fanatic, near axe-grinding agenda of blurring the distinctions of (1) and (2). which imho, while highly entertaining [and thx for the entertainment!], borders on shrill, even [yes, I know these are strong words] unscientific and unprofessional at times! lance has no such agenda and it’s clear in his writeup above….. [however, as stated, huge disclaimer, have not read the book yet]

  30. Scott Says:

    vznuri: How on earth could you accuse me of blurring the distinction between (1) and (2), when my previous comment was all about that exact distinction?! My point was simply that the idea that there’s some fundamental law of nature that prevents hundreds of particles from being entangled, is already in conflict with the experimental evidence. So if QC is fundamentally impossible, then it has to be for some more subtle reason. Many of the QC skeptics don’t want to hear that — but for my part, I have trouble taking the skeptics seriously if they can’t even acknowledge where the experiments already are.

  31. Attila Szasz Says:

    @vznuri

    with all due respect, Scott introduced the concept of a Sure/Shor separator, HOW ON EARTH could he be the one who tries to blur these distinctions?

    I’m sure Scott has addressed this type of question and misunderstanding a zillion times here very properly, but just as a starter I’d recommend this great discussion from earlier this year, involving lots of professionals from various positions along the qc-skepticism spectrum: https://scottaaronson-production.mystagingwebsite.com/?p=1211#comments

  32. vzn Says:

    [sigh] obviously hundreds of entangled particles, only loosely controlled in a global/macroscopic rather than a local way, is a *long* way from what is req’d for QM computing! you’re evading the glaring issue of *control*. as fortnow points out, at the core it may be an *engineering* problem. and even he is a bit glib for my tastes by calling it possibly a “tricky engineering problem”…. building a babbage engine in ~1850 was also largely a “tricky engineering problem”…. which took 1.5 CENTURIES to overcome….

    hey, I wanna see scalable QM computing in my lifetime too! but after significant international scientific effort invested so far, some results are in, and they are not pretty!

    your $100K bet is rather meaningless because there’s no time limit on the opposing side. look scott, you’ve got that hubris associated with young people not yet into middle age, but you should seriously consider the possibility that scalable qm computers do not materialize in your lifetime. does the $100K go somewhere in that case?

  33. John Sidles Says:

    Scott says: “Lance [Fortnow] speculated that ‘It might be some fundamental rule of nature that prevents significant entanglement for any reasonable length of time.’ I was trying to point out the severe scientific difficulties [struck out and replaced with:] the considerable mathematical and scientific opportunities that confront anyone who believes that.”

    Scott, I have lightly edited your remark to include a link to Fernando Brandao and Michal Horodecki’s preprint “Exponential decay of correlations implies area law” (arXiv:1206.2947), which concludes with six open problems in QIT that are motivated by Brandao and Horodecki’s remark:

    The overwhelming majority of quantum states [are] quantum data hiding states due to their use in hiding correlations (and information) from local measurements. Developing a classification of entanglement in such states is one of the outstanding challenges in our understanding of quantum correlations.

    Thus Lance’s “fundamental rule of nature” might arise by a mechanism as physically (and mathematically) natural as “the vacuum interactions of relativistic gauge field theories generically act to quench the quantum data hiding states of Brandao and Horodecki.”

    It is (of course) mathematically consistent — and perhaps even physically reasonable! — to foresee that Lance’s insights, and your counter-insights, and Brandao/Horodecki’s synthesis of these insights, *ALL* will be proved correct in coming decades.

    Particularly for young QIT researchers, this outcome (as it seems to me) would be an eminently happy one! 🙂

  34. Scott Says:

    vzn: Go back and read my comments, or any of my previous posts. Read them carefully (if you’re capable of that).

    Did I ever express confidence that scalable QCs would materialize in my lifetime? No? Why do you suppose I didn’t? Did I ever say experiments have shown that hundreds of entangled particles can be controlled in arbitrary ways? Also no?

    Is it possible that you keep attacking me for positions that I don’t actually hold, and that I’ve made it as clear as day that I don’t hold? That your accusation of “hubris” relies entirely on twisting things that I said (and am confident about) into things that I didn’t say (and am not confident about)?

    Regarding Babbage, what an irony! I myself have made the comparison between Babbage’s analytical engine and the current state of QC research countless times. Babbage was indeed 150 years ahead of the curve, but he was … what’s the word? Oh, yes: right! Eventually the technology did catch up with the theory, and Babbage was about as vindicated by history as it’s possible for a human being to be. So comparing QC research to Babbage as a way of criticizing QC research seems laughable.

    I find you guilty of aggravated reading incomprehension and general trollishness. The penalty: you’re banned from this blog for one year.

  35. Rahul Says:

    Scott #34:

    If, anachronistically, the Babbage analogy had gone the QC-research evolution route, Knuth ought to have published his masterpiece tome on algorithms and programming by 1850. 🙂

  36. Rahul Says:

    I was re-reading Scott’s 2011 article in NYT and came across this bit:

    But the biggest payoff [of QC research] so far may have been an improvement in the way quantum mechanics itself is taught and understood.

    Pedagogically, is this true at the typical university? Has QC impacted QMech or QChem pedagogy significantly? I ask because I took my basic QM course circa 2005 and the professor did not even mention QC. Maybe his age (~75) contributed to his approach.

    http://www.nytimes.com/2011/12/06/science/scott-aaronson-quantum-computing-promises-new-insights.html?pagewanted=all&_r=0

  37. Neel Krishnaswami Says:

      Because we’re at best boundedly-rational agents rather than perfectly rational ones (in other words: we don’t know all mathematical truths, to put it very mildly), I don’t see any philosophical difficulty in talking about the “probability” of a famous unsolved mathematical conjecture being true.

    If you put it in quotes (i.e., use it as a figure of speech), I’ve got no problem with it either! 🙂

      Indeed, a Bayesian would even say that we could “elicit” probabilities from you, by observing which bets you will or won’t accept.

    Such a Bayesian would be wrong.

    My beliefs are almost certainly incoherent, and so I expect a sufficiently clever Dutch person could find two sequences of bets which are logically equivalent, but for which I’d give different odds. So my beliefs don’t form a probability distribution. Also, when I have new experiences, I sometimes have new ideas — and that’s also forbidden to an ideal subjective Bayesian, since the only thing they are allowed to do with new experiences is to use Bayes’ rule to compute a posterior from the prior. Worse still, I think a prior can be wrong, and so I’m willing to perform repeated trials and then compare the actual frequency with the frequentist confidence interval to figure out if I misspecified a model (i.e., failed to have the true hypothesis in the prior). All of these “deviations” from “ideal” rationality are philosophically very interesting in their own right, and that usually leads to interesting mathematics (e.g., see Alex Simpson’s work on locale-theoretic formalizations of frequentist probability).

    Viewing these issues as merely bounded approximations to what a logically omniscient reasoner would do is a bit like viewing complexity theory as “bounded recursion theory” — it’s a valid perspective, but if you take it as the whole story then you lose sight of the fact that what resources are, and how they affect computation, are fundamental questions in their own right.

    (P.S. I did completely miss the point of your joke, though.)

  38. Geoff Says:

    > a sufficiently clever Dutch person could find two sequences of bets which are logically equivalent, but for which I’d give different odds

    A good way to do it is just to ask you to bet on all of the assumptions which go into the P vs. NP proof, and then on P vs. NP itself. This sequence of bets could be pretty interesting in its own right!

  39. Scott Says:

    Rahul #36:

      Pedagogically, is this true at the typical university? Has QC impacted QMech or QChem pedagogy significantly?

    Not nearly enough, but it’s starting to! I understand that the undergrad quantum mechanics course at MIT includes a couple lectures on quantum information, and the last chapter of Steven Weinberg’s new quantum mechanics textbook deals with quantum information. I predict that quantum information will infiltrate “conventional” physics more and more in the coming decades (as it already has, e.g. in the discussion of the black-hole information paradoxes). The secret of the immense gain in clarity from this approach can only be kept from the physics world for so long!

  40. Rahul Says:

    Post request: D-Wave again please! Do come out of your retirement on that.

  41. Scott Says:

    Rahul #40: No. Seriously, I’ve done enough D-Wave posts for a lifetime, or even ten lifetimes.

  42. Serge Says:

    My main objection to the feasibility of quantum computing holds in five words – too good to be true. I’m not sure many of his contemporaries made the same objection to Babbage in his time. On the contrary, they probably thought the machine he was trying to build was useless.

  43. Rahul Says:

    @Serge:

    I can think of other “too good to be true” plans for an 1800s observer: Aircraft. Trip to the Moon. Skyscrapers.

    No?

  44. Scott Says:

    Serge #42: When I first learned about quantum computing as a teenager, probably the most important revelation was that it’s really not too good to be true—instead, it has just the limited amount of goodness and complicated pattern of goodness you might expect a true thing to have.

    Even if you had a QC, you’d still be limited in what you could do. You couldn’t do anything classically uncomputable (like the halting problem), and you almost certainly still couldn’t solve NP-complete problems in polynomial time (see the tagline of this blog). More generally, you couldn’t “try all possible solutions in parallel” and instantly pick the best one. (Yes, the popular articles have lied to you.)

    What you could do is cleverly exploit quantum interference to solve certain specific problems—like factoring integers, solving Pell’s equation, and of course simulating quantum physics—exponentially faster than we know how to solve those problems with a classical computer; and also get polynomial improvements over classical computers for a wider range of problems.

    Can you tell me with a straight face that this is “too good to be true”?

  45. Serge Says:

    @Rahul
    Well, skyscrapers aren’t necessarily such a good thing… 🙂
    When Kennedy announced the moon program, the mainstream scientific community already believed it was possible. And the feasibility of aircraft indeed sounded dubious to many, but there hadn’t been such a long-lasting ongoing debate about either of these two issues before they were achieved.
    Historically speaking, the QC story is really more reminiscent of the perpetual motion discussion – to me, at least.

  46. Serge Says:

    @Scott
    I was writing to Rahul at the same time you were answering my comment… 🙂
    With a straight face, probably no. But whenever people say that factoring will eventually be found to be polynomial, I have the same feeling of doubt – though I don’t know why. Only time will tell…

  47. Scott Says:

    Also, Serge, the opinion that quantum computing is fundamentally impossible is decidedly non-mainstream—since it would, after all, require overturning something huge in our current understanding of quantum mechanics, which is often called the most successful theory in science. Out of thousands of physicists and theoretical computer scientists, I know of maybe a dozen who publicly espouse the view that QC is impossible in principle, and a decent fraction of them are commenters on this blog. 🙂 The amount of time I spend arguing with the skeptics has more to do with my own personality quirks (and desire to engage a broad audience and entertain even the most farfetched speculations), than it does with the existence of a “live” debate in the scientific community.

  48. Serge Says:

    @Scott
    I sincerely hope you’ll be luckier than Cantor, who went mad before he could prove the continuum hypothesis. And I hope not to become mad myself trying to show why P=NP is undecidable.
    Quite possibly, you might be riding the right horse this time. 🙂

  49. Rahul Says:

    @Serge:

    I agree with you but in a slightly different sense: Lots of ideas deemed fantastic did later become reality (e.g. moon travel), yet someone suggesting space travel or airplanes in 1800 would be thought of as somewhat eccentric or wacky by most people.

    My concern in this case is the amount of legitimacy a Quantum Computer has had among the media and lay people (or even scientists from non-QC areas). Expectations are absolutely unrealistic: from some talks and articles one might think the revolutionary QC machine’s almost ready to be shipped next month and solve all our problems from protein folding to web search.

    That’s unique. No machine, potentially so far out into the future (if at all) has generated so much hype. And fraudsters like D-Wave hardly help.

  50. Gil Kalai Says:

    “I would’ve called that world the “Hideous World”! For it’s a world where technical creativity is mostly worthless, and where the mathematical universe is boring, flat,…”

    This is a sentiment which I find hard to understand. First, it is not clear at all that in Lance’s beautiful P=NP world technical creativity will become mostly worthless. (Although this is a repeated claim.) Second, if some types of human creativity become obsolete (as has happened before), perhaps this will give opportunity for other types of human creativity, including familiar and new forms of non-technical creativity. Third, given the wonderful benefits in terms of us being able to solve many problems, and to optimize the creation of new tools and medicines for humans and for society, why should we give such priority to creativity anyway?

    ” Here’s an analogy: suppose a video game turned out to have a bug that let you accumulate unlimited points just by holding down a certain button. Would anyone call that game the “Beautiful Game”?”

    (Personally, I like such games.)

    The analogy is not analyzed properly. This can very well be a beautiful game for those “little creatures” in the game itself.

  51. Paul B. Says:

    @Rahul #49:

    I think that I will pass on an opportunity to get into an argument with you about your last (almost libellous!) statement, but I have a (somewhat related) issue with the first one: “someone suggesting space-travel or airplanes in 1800 would be thought of as somewhat eccentric or wacky by most people.”

    If you look at any modern glider, deltaplane (or a paraplane! 🙂 ), there is no basic technology there which has not been available to humankind for at least a couple of millennia! Silk started in China in the 27th century BCE (see Wiki, and yes, I was surprised it goes back that long), and I personally know probably a disproportionate number of people who would hurl themselves off a cliff while attached to nothing but a contraption which could (most probably, please tell me if I am wrong!) be made of good-quality silk alone…

    One can argue that there is lots of math which went into those modern designs, and, I guess, it’s true (especially to make them safer), but (again, to my surprise, I thought it would’ve required full-blown complex-variable calculus) apparently it was almost figured out by Sir George Cayley, who was building his gliders around 1809 (http://www.flyingmachines.org/fmach.html; http://en.wikipedia.org/wiki/Early_flying_machines also has an interesting zoo exhibit of attempts of humans to fly 🙂 ). And, if anything, trial and error (see below), or careful study of birds’ wings (we have seen soaring birds around since we *have* been around!), would have probably made exact understanding of why they work unnecessary…

    Now, one might try to give a more precise definition of what an “airplane” means, which would exclude simple soaring gliders; say, call it a contraption that can controllably and almost reliably transport something from point A to point B without relying on local terrain, today’s winds, etc.; maybe with A and B being anywhere on Earth… Then, I would admit, humanity did not get to enjoy that luxury for another 100 to 150 years after 1809. But, for a person in 1800, it would feel almost natural to *have* to rely on winds if you are attempting to cross the globe, and to try to make the best of them (was it “trial and error”, or was it some math that went into those things so closely related to flying wings, called “sails”? I do not claim to know, probably both!).

    Is it still “crazy” for me to think that early 19th century (or even earlier) humans should have had no intellectual problems with the idea of being able to fly (same as they sailed!) all over the world; or was it their problem that they somehow separated one from the other, making one impossible, and the other one readily available — I do not know.

    Or maybe it was just that the economics was not right, and back then one could not really make any money on going “up” instead of “across”? But I think that a good chunk of *educated* people in 1800 would have considered human flying (“aviation”) possible, maybe after a debate or two, especially with available demonstrations, and, sorry, I cannot really accept an argument referring to just “most people”…

    Just my personal opinion,

    Paul B.

  52. Scott Says:

    Gil #50:

      The analogy is not analyzed properly. This can very well be a beautiful game for those “little creatures” in the game itself.

    That’s a very interesting distinction, so let me respond with a distinction of my own. If it turned out that P=NP, then I’d be as eager as anyone on earth for that improbable bonanza to be exploited as quickly as possible, and used to automate the discovery of new medicines and other improvements in human welfare. Indeed, I’d probably reorient my entire research agenda around that exploitation, as I imagine thousands of other scientists would! (Of course, I’d also worry about how to stop criminals, dictators, etc. from using the breakthrough nefariously—same as with any other technological advance.)

    On the other hand, if you ask me to predict whether P=NP or P≠NP is more likely, then my money is on P≠NP—and if you force me to articulate my reasons, a major one will simply be that I see the alternative as “too arbitrary and ugly to be true.”

    All of this carries over well to my video game analogy. If I’m a product tester for the video game company, or reviewing the game for a magazine, etc., then a bug that makes it easy to rack up unlimited points by pressing a certain button will obviously be a huge negative. On the other hand, if a madman forces me to play the game, and will only let me live if I score enough points, then you better believe I’ll be slamming that button with all the finger-strength I can muster! 🙂

  53. Serge Says:

    Scott #52:
    With P=NP, you get the same kind of world if the polynomial algorithms for hard problems are either impractical – that is, not runnable by any existing computer – or unthinkable – that is, too complex for a living being’s brain.
    So, with P=NP you can play your video game with the same excitement – and that’s why the conjecture remains unsettled.

  54. Serge Says:

    @Rahul
    We agree on most points, though I’m not aware of – nor sensitive to – the QC hype. It’s just that I believe factoring is a hard problem, whatever means you use to tackle it – be it with hardware or software. So wait and see…

  55. Rahul Says:

    #51 Paul B:

    Truth is a good defense against all libel. 🙂

  56. Attila Szasz Says:

    (I really love nontechnical discussions of the issue, so I’m totally buying The Golden Ticket, but for now I’d like to ask about a sort of semi-technical aspect.)

    I’ve run across this fine book by Jan Krajicek recently, and since then I’ve been wondering how much you would agree with this particular P vs. NP comment he makes in the preface:

    Methods (all essentially combinatorial) used for known circuit lower bounds are demonstrably inadequate for the general problem. It is to be expected that a nontrivial combinatorial or algebraic argument will be required for the solution of the P versus NP problem. However, I believe that the close relations of this problem to bounded arithmetic and propositional logic indicate that such a solution should also require a nontrivial insight into logic. For example, recent strong lower bounds for constant-depth proof systems needed a reinterpretation of logical validity.

    It’s interesting to note that the year is 1994, so the natural proofs argument wasn’t very standard yet, but obviously my main concern is the second part, regarding new insights into logic: how standard is this opinion, and what can be speculated in this direction (as opposed to bringing in nonlogical stuff like higher topos theory and K-theory, for example 😉 )?

  57. Scott Says:

    Attila #56: I’d say the opinion that “nontrivial insight into logic” will be needed to solve the P vs. NP problem is very much a nonstandard one these days. Formal logic (or rather, formal illogic…) is certainly a favorite tool of the amateurs who produce wrong P≠NP proofs. But most complexity theorists would say that in the 1970s, the Baker-Gill-Solovay relativization barrier explained why logic (or more broadly, techniques borrowed from computability theory) can’t possibly be powerful enough to resolve P vs. NP on its own, though such techniques could certainly be one component of a proof. “Meatier math” will also be needed. (Or in slogan form: you ask a quantitative question, you better have some quantitative tools to answer it!) Today, Mulmuley’s GCT program is notable for invoking pretty much every branch of math in existence except logic.

  58. Scott Says:

    Serge #54:

      It’s just that I believe factoring is a hard problem, whatever means you use to tackle it – be it with hardware or software.

    While there’s nothing logically contradictory about your position, the idea that someone would place his bare, unargued-for personal hunch that a specific number-theoretic problem should be hard (only factoring? what about discrete log, solving Pell’s equation, etc.?) on one end of the scale, and Shor’s algorithm together with the theory of quantum fault-tolerance together with a century of experiments revealing the actual world to be quantum-mechanical on the other end, and decide that between the two his personal hunch is the one to go for, is something I’ll probably never understand.

  59. Serge Says:

    Scott #58:
    You don’t understand my feeling of strangeness about a situation when all secret messages and bank accounts are suddenly revealed overnight… thanks to a new machine! Regarding the scientific issue I may not be competent, but I have the right to express some doubts about such a sociological novelty.

    P.S. What about my stance on P=NP?

  60. Vadim Says:

    Serge #59:

    Not all secret messages & bank accounts would be revealed if efficient factoring suddenly appeared on the scene. Factoring would only let someone attack public-key ciphers. Symmetric encryption like AES wouldn’t be affected.

    Even if it WERE the case, I’m not sure how that’s any kind of an argument. I’m sure the Germans couldn’t imagine a situation where their Enigma machine would be cracked, all their *military* secret messages suddenly revealed. But oops, it was, and all they could do was accept the consequences of their own design decisions. Likewise, why make RSA out to be some sacred thing? If it’s cracked, it just means it wasn’t as secure as we originally hoped it would be. Cryptographers will pick themselves up, dust themselves off, and move on to something that’s not easily cracked by the availability of efficient factoring.

  61. John Sidles Says:

    Attila Szasz asks: “My main concern is about the second part regarding new insights into logic [as required to resolve P versus NP], how standard is this opinion?”

    Attila, please allow me to commend to your attention Lance Fortnow’s recent Communications of the ACM review The Status of the P Versus NP Problem (2009) whose concluding paragraphs (excerpted) make four points:

    “The P versus NP problem has gone from an interesting problem related to logic to perhaps the most fundamental and important mathematical question of our time …

    Proving P ≠ NP would not be the end of the story …

    Proving P ≠ NP might not be the start of the story either …

    None of us truly understands the P versus NP problem, we have only begun to peel the layers around this increasingly complex question.”

    In light of the vast domain of Lance’s ACM survey, and the still-unexplored range of its implications, to assert with any degree of confidence that advances in mathematical logic plausibly will not be required to resolve P versus NP would require considerable brashness!

  62. Serge Says:

    @Vadim (&Scott)
    OK, you win… but only partially. I predict that the improvements in the size of the inputs managed by quantum computers will take a long time to be achieved, and that they’ll only be asymptotic – no miracle, just a slow evolution from today’s tiny experiments up to the machine of your dreams. You know, it’s like Moore’s law, according to which the number of transistors on integrated circuits doubles approximately every two years. You’ll probably witness a similar trend in the size of the integers you can factor quickly.

    This goes well with my stance on P=NP. There may be theoretical ways of quickly solving the hardest problems, but we might not possess the right computers as of yet – whether biological or electronic – and we probably never will, for that matter. That’s why P?=NP is a silly question. The only thing that matters is how well polynomial time can be approximated – and in how much time in human history.

  63. Scott Says:

    Serge #62:

      There may be theoretical ways of quickly solving the hardest problems, but we might not possess the right computers as of yet – whether biologic or electronic – and we probably never will, for that matter. That’s why P?=NP is a silly question. The only thing that matters is how well polynomial time can be approximated – and in how much time in human history.

    It seems to me that you could generalize your stance to an argument against all of mathematics, and maybe all of science. In a right triangle, is the square of the hypotenuse equal to the sum of the squares of the other two sides? That’s a silly question. The only thing that matters is whether a farmer can save time by cutting diagonally across his field. And if the field is full of wheat that slows him down, then the answer is no, end of discussion.

    With that attitude, I imagine we wouldn’t have advanced much beyond the Babylonians (speaking of human history).

  64. Rahul Says:

    Scott #63:

    I sense you trying to cast this into a pure versus applied debate again? (Apologies if I am wrong!)

    I think another important facet of the QC-research debate ought to be whether all fundamental research projects are of equal import. If not, how does one decide, say, where to spend money and resources? Note, this isn’t so much a pure versus applied question.

    It is in this context that I’m wary of projects such as “QC search engine algorithms” that rest so critically on the feasibility of a scalable QC. Not all fundamental questions may be equally worthy of funding except in a quixotic world of infinite resources.

    Just because (some) theory is cheaper and faster than the key experiments, let’s not spawn too much of it. Maybe there needs to be a self-imposed moratorium on some of the fancier frills until we get the key questions (e.g. scalability, decoherence, error correction, etc.) resolved? Either experimentally or theoretically.

    Would some prioritization help?

  65. Serge Says:

    Scott #63:

    We really have trouble understanding each other! I wouldn’t even generalize my stance to any single one of the five remaining open Millennium Problems – nor to any provably undecidable statement… and I do believe the reals are uncountable! 😉

    Unfortunately, P?=NP happens to be a very special problem in that we’re part of it ourselves, since the brain is a computer. And the tools we’re allowed to use are also part of the problem, since proofs are algorithms. To me, it’s almost obvious – for physical reasons – that a world where P=NP should look exactly the same as one where P!=NP, while you seem to identify the former with a sort of magical land and the latter with the one you’re living in.

    You remind me of David Hilbert saying “we must know and we will know”. I’m rather like Gregory Chaitin who says there are many mathematical facts that are structurally unknowable. It suits me very well.

  66. Pascal Says:

    Is there an epub version of Lance’s book?

  67. Scott Says:

    Serge #64: First, by my count, there are 6 remaining open Millennium Problems. 🙂

    The fact that there are mathematical facts that are “structurally unknowable” (at least within standard formal systems like ZF)—something we’ve known since Gödel—certainly doesn’t imply that P≠NP has to be among those facts. It’s true that P vs. NP has a “metamathematical” character, but that aspect can sometimes be overstated: it also has the character of “just a plain old extremal combinatorics problem” (how many AND/OR/NOT gates do you need to represent the CLIQUE function?).
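
    To be concrete about what that Boolean function even is: CLIQUE takes the bits of a graph’s adjacency matrix (plus a target size k) and outputs 1 exactly when the graph contains a k-clique. As a purely illustrative sketch – brute force, exponential time, standard library only, with the function name `clique` just a placeholder, and of course not the hypothetical small circuit in question:

      from itertools import combinations

      def clique(adj, k):
          """CLIQUE as a Boolean function: given an adjacency matrix
          (list of lists of 0/1 bits) and a target size k, return True
          iff the graph contains a clique on k vertices."""
          n = len(adj)
          return any(
              all(adj[u][v] for u, v in combinations(subset, 2))
              for subset in combinations(range(n), k)
          )

      # A triangle plus an isolated vertex has a 3-clique but no 4-clique.
      adj = [[0, 1, 1, 0],
             [1, 0, 1, 0],
             [1, 1, 0, 0],
             [0, 0, 0, 0]]
      print(clique(adj, 3), clique(adj, 4))  # True False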

    More to the point, if you believe P vs. NP is undecidable, then you need to answer the question: why does whatever intuition tells you that, not also tell you that the P versus EXP, NL versus PSPACE, MAEXP versus P/poly, TC0 versus AC0, and NEXP versus ACC questions are similarly undecidable? (In case you don’t know, those are five pairs of complexity classes that have been proved different from each other, sometimes using very sophisticated ideas.)

  68. Scott Says:

    Pascal #65: I just looked on Amazon and found it here.

  69. Serge Says:

    Scott #66:

    “… to any single one of the five *other* remaining open Millennium Problems”. I knew something was missing but my subconscious was unwilling to put it among the others… 🙂

    My intuition can’t tell me anything about the things I haven’t had the time to study thoroughly. Regarding the less technical question under discussion, the transfer of intuition obviously came initially from the continuum hypothesis. But the analogy stops there, and I don’t think Gödel can be of any help. As I’ve already said, I feel something is missing which prevents us from knowing. And this something is – in my opinion – a principle of physics, a general theory of complexity, and not an axiom of math or an enhancement of ZFC.

  70. Scott Says:

    Serge: I think of the Continuum Hypothesis as extremely different in character from P vs. NP: the former deals with transfinite sets that we never really “get our hands on” anyway (so that in retrospect, maybe it shouldn’t be too surprising that the existence or nonexistence of those sets depends on which axioms we choose). By contrast, the latter deals only with finite machines manipulating finite strings of 0’s and 1’s—and related to that, it has actual “real-world” implications.

    In general, whenever I’m trying to speculate about a profound unanswered question, I always look for the closest related questions to which we do know the answers. And in the case of P vs. NP, the closest related questions are not things like the Continuum Hypothesis, but rather other questions about complexity classes.

  71. Ajit R. Jadhav Says:

    @Scott #69:

    You (and if not you, at least the TCS community) clearly are being dishonest.

    They (if not you) count on the (as yet (and going by the indications, probably never in principle from MIT/Berkleley/Stanford/Cambridge/etc. to be settled)) vagueness of the continuum hypothesis in order to refute, sitting in your chairs in MIT (and all), any and all (including any (partially) valid) hypotheses.

    That’s (a) part of the modern American way of life. That’s all. (This includes Chile and Brazil, of course.)

    Just (as it happened earlier) in the way of the wave-particle duality issue (of QM, in case you were brain-Americanized enough).

    If you don’t believe in me, just go, ask Harry (Binswanger, an MIT/Berkeley/Stanford/etc./American alumnus. Qua an American alumnus, he will help you all justify (in prolonging) all your “answers” (which all are wrong, anyway)).

    If I get a chance to tell so, I will do so, during his Honeywell visit at COEP, Pune, India, to Dave (the Nobel laureate David J.) Gross as well. (I hope to obtain a pass to the “interaction” and lecture sessions—one of which is in my beloved Audi.)

    Goodnight, and goodbye ((and, thereby, probably, a) good riddance (though I have saved this reply of mine (just in case))).

    –Ajit
    [E&OE]

  72. Scott Says:

    Ajit: Hate to say it, but you’re hereby banned from this blog, by reason of insanity.

  73. Scott Says:

    Rahul #63:

      Would some prioritization help?

    Unfortunately, I recently decided to “prioritize” my own activities—and doing research, teaching my students, playing with my daughter, walking in the park, and even scratching my nose all ended up higher-priority than figuring out how to cut the infinitesimal sliver of federal funding ($5 million?) that goes to quantum algorithms and complexity research, and which I believe to be far better spent than countless federal expenditures that cost thousands of times as much.

  74. John Sidles Says:

    Hmmm … a fellow named Scott Aaronson once opined that really good TCS questions/challenges deserved to be posted to TCS StackExchange.

    The question/challenge that Scott himself has posted [in his comment #66] fits his own criterion quite admirably, and so it is now posted on the TCS StackExchange as “Aaronson’s Extended Question: Do natural generalizations of P versus NP exist?”

    Gosh-golly … the Shtetl-Optimized P versus NP question/challenge has received two up-votes in the first two minutes! 🙂

  75. Serge Says:

    Scott #69:

    Thanks for the methodological advice. I’d certainly understand things better with a more solid background in complexity theory, but it’s also true that there’s a nice analogy between checking vs. finding in polynomial time on the one hand, and a set vs. its power set on the other hand.

    I don’t think we can get our hands on the infinite majority of the Turing machines either. The only ones we may visualize are those which are compatible with the hardware/intelligence that’s physically available. That’s why I spoke of a principle of physics when an axiom was needed to settle the CH.

  76. Scott Says:

    Serge #75: Happy to help! Yes, in order to have an informed opinion about the provability or unprovability of P≠NP, you first need to understand things like the Time Hierarchy Theorem and small-depth circuit lower bounds, and why those results are provable with current knowledge even though they look “P≠NP-like.” If you’re interested in delving deeper into these things, then as it happens, I know a brand-new book that could help! 🙂
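
    To give the flavor of the first of those (stated from memory, so treat the exact hypotheses as a sketch rather than gospel): the deterministic Time Hierarchy Theorem says that if f and g are time-constructible and f(n)·log f(n) = o(g(n)), then TIME(f(n)) is strictly contained in TIME(g(n)). In particular TIME(n^k) ≠ TIME(n^(k+1)) for every k, and P ≠ EXP follows. The proof is a diagonalization: build a machine that runs for roughly g(n) steps, simulates the machine encoded in its input on that very input, and outputs the opposite answer, so the language it decides lies in TIME(g) but outside TIME(f).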

  77. John Sidles Says:

    Like most folks, I had long appreciated the immense difficulty of proving P ⊂ NP, but I had *not* appreciated that proving A ⊂ NA is an open problem for (essentially) *all* computational complexity classes.

    In regard to the (still-open) TCS StackExchange question “Aaronson’s Extended Question: Do natural generalizations of P versus NP exist?”, the boundary class A=NTIME(n) provides (apparently) the sole known exception to the general principle:

    Observation: For all complexity classes A ≠ NTIME(n) (even artificial ones?), proving A ⊂ NA is a hard open problem at present.

    Proving P ⊂ NP is thus indeed a special case of a more general open problem in complexity theory. It’s not obvious how to answer the natural question “What trait does every complexity class A possess, except the boundary class A = NTIME(n), that obstructs proofs of A ⊂ NA?” The Dick Lipton/Ken Regan weblog essay We Believe A Lot, But Can Prove Little provides further useful discussion and references. Moreover, as Scott has remarked (#57):

    Mulmuley’s GCT program is notable for invoking pretty much every branch of math in existence except logic.

    The obvious conclusion is that Mulmuley may eventually need to extend his GCT program — perhaps relatively slightly? — to encompass advances in mathematical logic. In doing so, Mulmuley would of course be following in the footsteps of the great Alexander Grothendieck!

  78. Pascal Says:

    Scott #68: the ebook you pointed out seems to be in Kindle format, not in epub (epub is the format used by most e-readers except Kindle).

  79. Anil Says:

    I’m having trouble understanding why the P vs NP question is being conflated with the question of whether every problem in NP has a practical solution. Here I am using the word “practical” in the same way people use it when they describe a P=NP world in which (quoting Fortnow) “We can quickly learn just about everything, and the great mysteries of the world fall quickly, from cures to deadly diseases to the nature of the universe.” In some sense the P vs NP question asks whether we can solve NP problems in a much more clever way than exhaustive search. If the answer turns out to be yes, I don’t see why this implies practical efficiency, and I think it is really misleading to suggest so. “Does every problem in NP have a practically efficient solution?” is really a different question. The P vs NP question and this question are both very important, but they are different (as far as we know). To conflate the two questions one should have some good reasons to do so. I am very curious to learn these reasons because I feel like I am not seeing something that some experts see.

    This leads me to another point. I am also having trouble understanding why there is a strong bias towards P != NP. Since there is such a strong bias, I would expect some strong evidence or argument to support it. There seem to be two arguments made.

    In the first, the P=NP world is equated with some magic world in which every problem becomes practically efficient. This world is in some sense “ugly”, so it probably does not exist. As I said in the previous paragraph, I don’t see why we necessarily end up in a magic world if P=NP, but even if we did, what is ugly or unusual about that? It would change a lot in our lives, that is true, but how is this evidence to support the opposite? How does “the world would change dramatically” become evidence for the truth of something? One can say the world would change dramatically if quantum computers are built; does this make the possibility of quantum computers any less likely?

    The second argument in favor of P != NP you hear is: many smart people have tried to find efficient algorithms for NP-hard problems and they have failed. But then you wonder: so what? Isn’t this a very presumptuous statement? Every animal on this planet has limits to what it can understand; it is not hard to imagine that our brains also have limits. Why do we expect computation to be a territory that we can fully conquer? I feel that computation has proven itself to be extremely complicated. We are barely scratching the surface in understanding it. Who knows what computation can do in time n^10.

    I am genuinely looking for some answers. Because of the strong bias, I am really feeling that I am not seeing something that others like Scott and Lance are seeing.

  80. Scott Says:

    Anil #79: We’ve gone over these issues many times on this blog. On the unlikely possibility of P=NP but all the resulting algorithms being uselessly inefficient in practice, see (for example) the earlier comments on this thread. On the reasons why almost all experts believe P≠NP (and even those who say they don’t are just trying to be contrarian), see this post.

  81. Anil Says:

    Thanks for the link!

    I still don’t understand, though, why it is unlikely that the algorithms are inefficient even if we assume P=NP. Why is it more likely that SAT has an algorithm running in less than n^10 time? We could turn the P!=NP proponents’ argument around and say that the fact that we haven’t been able to find a practically efficient algorithm is evidence that any such algorithm must have a somewhat large exponent in its running time. And is n^10 considered to be practically efficient?
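
    (For scale – a back-of-envelope sketch only, with the operations-per-second figure an arbitrary ballpark guess:

      # Rough wall-clock time for an n^10-step algorithm, assuming
      # ~10^9 elementary operations per second (a ballpark guess).
      SECONDS_PER_YEAR = 3.15e7
      for n in (10, 100, 1000):
          ops = n ** 10
          print(f"n={n}: {ops:.0e} ops ≈ {ops / 1e9 / SECONDS_PER_YEAR:.1e} years")

    which already puts n = 100 in the thousands of years.)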

    I agree that based on the points you make in your “Reasons to believe” post, it is reasonable to conjecture that P is different from NP. I don’t think anyone would say otherwise. But P!=NP is treated more like a certainty that we don’t know how to prove yet. Given that computation is something we don’t understand well (to say the least), it just doesn’t feel right to treat the P vs NP question this way.

  82. dumbman Says:

    wait April foolson?

  83. Serge Says:

    @Anil

    All of a sudden, I’m feeling a bit less lonely… 🙂 The widespread bias toward P!=NP is indeed the consequence of a confusion between “solvable” and “practically solvable”. After all, P=NP isn’t so counter-intuitive when you deny any physical processor the possibility of being large enough to run the algorithms that break NP-completeness in polynomial time!

    But let’s go one step further. How could it be possible to make any distinction between a problem that’s hard because all of its algorithms are slow, and one that’s hard because none of its fast algorithms can be run? Einstein said the speed of light can’t be reached whatever the reference frame… is it too daring to suggest that NP-completeness can’t be broken, whatever the theory – also made of algorithms – you might use to explain this fact?

    I said in comment #69 that I felt something was missing to solve P vs. NP. I’m beginning to wonder if the missing variable isn’t simply the processor that runs the algorithms… Thus, to Scott’s comment #67, I’d reply by asking in turn: “How could these five pairs of complexity classes be proved different from each other without making any mention of the processor that runs their solutions? And why does the P vs. NP question seem to require a physical hypothesis about the processor?” I think that’s because those five statements are theorems, whereas P vs. NP behaves like an axiom – the main novelty being that now, it’s up to the computer to play the role of an axiom!

  84. Vadim Says:

    Serge,

    When you say, “P=NP isn’t so much counter-intuitive when you deny any physical processor the possibility of being large enough to run the algorithms that break NP-completeness in polynomial time!”, what do you mean by a processor being “large enough”? The only definition of large that makes sense to me is “having enough memory”, and since we know that NP is contained in PSPACE, we know that the memory requirements are related to the problem size by no more than a polynomial function. In other words, it’s already known that solving NP complete problems doesn’t take exponential space; even the current, exponential-time algorithms we have run in polynomial space.

    As far as the processor being relevant goes, I’ve always been under the impression that universality means that one model of computation (your processor) can effectively simulate another model. With reasonable models, this simulation only adds polynomial overhead. This means that you can take your pick and prove P vs. NP on any “processor” you choose and your solution will be relevant to all of them. Of course, your processor needs to have infinite memory (with the amount actually used on a particular instance related to the input size by a polynomial function), which is why abstract computational models are used and not, say, the latest Core i7.
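
    To illustrate the point that even today’s exponential-time approaches need only polynomial space, here is a minimal brute-force CNF-satisfiability checker in Python – an illustrative sketch with a made-up function name, not a serious solver. It enumerates the 2^n assignments one at a time, so its memory use stays polynomial in the formula size even though its running time is exponential in the worst case:

      from itertools import product

      def brute_force_sat(num_vars, clauses):
          """clauses: list of clauses, each a list of nonzero ints;
          literal v means variable v is true, -v means it is false."""
          for bits in product([False, True], repeat=num_vars):     # 2^n assignments
              assignment = {i + 1: b for i, b in enumerate(bits)}  # O(n) space at a time
              if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                     for clause in clauses):
                  return assignment
          return None

      # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
      print(brute_force_sat(3, [[1, -2], [2, 3], [-1, -3]]))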

  85. Serge Says:

    Vadim #84:

    > “the memory requirements are related to the problem size by no more than a polynomial function”

    Yes, but the length of the program is a practical constant you must add, and it can be prohibitive – to say nothing of the size of the brain you’d need to design such a program. As far as I know, Turing machines aren’t bounded in size, whereas the Universe is.

  86. Serge Says:

    Vadim,

    You said it yourself: “you can take your pick and prove P vs. NP on any “processor” you choose and your solution will be relevant to all of them. Of course, your processor needs to have infinite memory.”

    Therefore, the reason why we can’t decide between P=NP and P!=NP is because every real-world processor has a finite memory. QED

  87. Vadim Says:

    Serge,

    I can’t tell if you’re being serious.

  88. Serge Says:

    @Vadim

    Only the “QED” wasn’t serious. 🙂 I just lack the technical background to explain my point more fully.

    Of course, I’m well aware that a proof must be of finite length. But as for why both worlds – with or without P=NP – should look alike: IMHO this is due to the fact that we live in a finite world while the world of Turing machines is infinite.

    By the way, I’m particularly proud of my comparison with relativity. I think it explains a lot.

  89. Vadim Says:

    Serge,

    As Scott’s pointed out on a number of occasions, your position – if it were true – would also prevent us from proving separations between other complexity classes where we actually do have provable separations.

    Take your statement, “Therefore, the reason why we can’t decide between P=NP and P!=NP is because every real-world processor has a finite memory.” and replace NP with EXPTIME. See the problem?

  90. Artificial Intelligence Blog · Baseball, P vs NP, April Fools Says:

    […] I think everyone should read any blog post that includes those two quotes.  Especially this one. […]

  91. Serge Says:

    Well, I’m not sure I do. It was proved that there are problems which do require exponential time. OK, but their solutions already took exponential time to check – otherwise P!=NP would’ve been proved as well.

    If I can recognize your face quickly, it doesn’t mean I’ll find you quickly in a crowd – but maybe I have my tricks, after all (who knows?) – and this is the P vs. NP question. But if it were already hard for me to make a distinction between you and your neighbors, then I shouldn’t be expected to find you quickly in a crowd – because then my tricks (provided I wasn’t lying about them) would be useless – and this is P!=EXPTIME.

    An analogy can be made with the CH. Since we don’t know anything about the power sets, they may almost be granted whatever cardinality we want them to have. Similarly, since our proofs must remain finite, we can’t reason about arbitrarily complex algorithms – the unknown tricks – and so we may either claim that these algorithms exist or that they don’t.

  92. Serge Says:

    @Vadim #89

    OK, I see. The solutions are checked in at least exponential time… provided that P!=EXPTIME – otherwise we can’t tell.

    I’ll think it over… 🙂

  93. Serge Says:

    Vadim #89: sorry, replied too soon in #91 – the second story exemplified P!=NEXPTIME instead of P!=EXPTIME.

  94. Anonymous Says:

    @Scott #57

    You generally have a negative view of logic, which is well known. However, many prominent complexity theorists do not share that view, as we can tell from their papers in the post-recursion-theoretic era. Many famous results involved logic in their original form.

    I don’t see much combinatorics or mathematical analysis in GCT either. GCT, as I see it, has been mostly about general perspectives and recasting of known results using algebraic-geometric and representation-theoretic methods. It is too early to say how the program will develop.

  95. Scott Says:

    Anonymous #94: “Negative view of logic”? I think that’s a sweeping overstatement! 🙂

    Gödel’s Theorem, Löb’s Theorem, the Löwenheim-Skolem Theorem, Rösser’s Theorem (OK, I gave Rosser an honorary umlaut) and so on are among my favorite results in the history of humankind. I dwell lovingly on these things in my undergrad computability & complexity course, as much as time will permit. And every few years, I take another stab at understanding Cohen forcing and other more modern topics.

    I do, on the other hand, take a negative view of the ability of logic by itself to resolve non-logical questions, especially complexity lower bound questions. Counterexamples are sought.

  96. Serge Says:

    So, what makes some exponential problems provably not polynomial in this finite world of brains and computers? It’s their underlying data structure – typically a tree – that allows for an easy-to-find proof. By contrast, NP-complete problems aren’t based on such an exponential structure and this is what makes it physically out of reach to prove they’re not polynomial – or alternatively, to find their polynomial algorithms. You *may* indeed take your pick, since the physics – and not the logic – allows you to do so!

  97. Serge Says:

    John #77:
    > “Mulmuley may eventually need to extend his GCT program — perhaps relatively slightly? — to encompass advances in mathematical logic. In doing so, Mulmuley would of course be following in the footsteps of the great Alexander Grothendieck!”

    Well, then he’d even be far ahead of what Grothendieck did – if only such a program were feasible.

  98. John Sidles Says:

    Serge #97, your remark touches upon topics in complexity theory that are deep and (to my knowledge) relatively unexplored.

    Proof-checking systems commonly encompass Tarski-Grothendieck post-ZFC extensions to set theory, either as axioms (e.g., Mizar) or as extensions (e.g., Coq).

    To what extent do the great theorems of complexity theory require post-ZFC extensions? If they do, which extensions are required? How confident should we be that a given (possibly extended) set of axioms is consistent and complete with respect to the great open problems of complexity theory?

    These are tough questions that tend to keep researchers like Dick Lipton and Ken Regan awake at night … albeit there are plenty of researchers who sleep soundly.

    It surely is striking — but should it be concerning? (don’t ask me!) — that lists like 100 famous theorems formalized in Coq include (apparently) no complexity-related theorems.

  99. Serge Says:

    John #98, yes, this is true – and quite striking indeed. Shouldn’t computers be used to solve computing problems? Has anybody ever tried to formalize P vs. NP in Coq?

    Our tools are finite, which means we only have bounded-length proofs at our disposal – whatever the bound. Let’s say, to fix ideas, that all of mankind is actually trying to solve this problem. I conjecture that a proof of P!=NP must be longer than even the most daring biological hypothesis about our collective intelligence would allow. Symmetrically, if P=NP then I conjecture that an efficient algorithm for SAT would require a computing power exceeding anything we can physically build – and again, this limit doesn’t have to be too restrictive.

    Now, let’s say that PP(X) means “X is biologically provable” if X is a theorem and “X is physically runnable” if X is a program. If, whenever we say “X”, we actually mean “PP(X)”, then we must use intuitionistic logic instead of classical logic. In that logic, “X OR !X” is no longer a tautology. Thus, we might have neither P=NP nor P!=NP after all…
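
    (For what it’s worth, the standard fact being leaned on here is that intuitionistic logic does not derive the law of excluded middle X ∨ ¬X in general, although it does derive its double negation. As a small sketch in Lean 4 syntax – core Lean only, no Mathlib, and the theorem name nn_em is just a label – the double-negation half is a one-liner:

      theorem nn_em (p : Prop) : ¬¬(p ∨ ¬p) :=
        fun h => h (Or.inr (fun hp => h (Or.inl hp)))

    Whether any of this transfers to a physical “provable/runnable” operator like PP is, of course, exactly what would have to be argued.)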

    As regards quantum computing, I think it will be just as hard to build machines that can factor larger and larger integers as it already is to improve on the efficiency of the best current classical factorization algorithms. In a way, shifting the problem from the software to the hardware is isomorphic to changing the language. But sometimes a change of language can result in real breakthroughs. After all, our modern operating systems wouldn’t exist without the C language. So, wait and see…

  100. Rahul Says:

    @John

    “It surely is striking — but should it be concerning? (don’t ask me!) — that lists like 100 famous theorems formalized in Coq include (apparently) no complexity-related theorems.”

    Is difficulty the reason, or a lack of interest? Are there attempts in progress? Just curious.

  101. John Sidles Says:

    Scott requests “[I] take a negative view of the ability of logic by itself to resolve non-logical questions, especially complexity lower bound questions. Counterexamples are sought.”

    Philosopher/mathematician Rebecca Goldstein asserts (what seems to me to be) a converse historical example in her Incompleteness: The Proof and Paradox of Kurt Gödel (2005, p. 118):

    Wittgenstein never came to accept that Gödel had, through strict mathematics, achieved a result with metamathematical implications. That there could be a mathematical result with metamathematical implications went against Wittgenstein’s conception of language, knowledge, philosophy, everything.

    The complexity theory literature — notably Juris Hartmanis’ monograph Feasible computations and provable complexity properties (1978) — provides us with inspirations sufficient to construct a converse postulate, which by metamathematics achieves a result with mathematical implications:

    The Goldstein/Hartmanis Postulate: Let P’ and NP’ be the sets of languages in P and NP (respectively) for which ZFC provides certificates of membership. Then the following three inclusions are strict: P’ ⊂ P, NP’ ⊂ NP, and P’ ⊂ NP’.

    Needless to say, a proof of the Goldstein/Hartmanis Postulate would not win the Clay Institute Millennium Prize for “solving” the P versus NP problem … possibly because its implications for mathematics in general would be deep beyond the conceptions (were they too narrow?) of the founders of complexity theory. Juris Hartmanis excepted, of course! 🙂

  102. Bill Kaminsky Says:

    While we’re on these topics of computer-aided proof checking, formalizing theorems so automated theorem provers could address them, et cetera, let me pose a question that’s long been bouncing around in the back of my mind:

    To a rough order of magnitude, what would be the size of the 3-SAT instance whose (un)satisfiability would signify the (non)existence of a proof of some interesting-to-humans theorem* that’d fit within, say, 10 journal pages if the proof were rewritten in the usual, mathematics-for-humans form?

    [*As examples of what I mean by interesting-to-humans theorems, I point to the aforementioned list of Coq-formalized theorems at http://perso.ens-lyon.fr/jeanmarie.madiot/coq100/. I’d certainly count all 100 theorems there as interesting-to-humans. (In case you haven’t looked, the list includes Coq formalizations of such major theorems as The Irrationality of Sqrt(2), The Pythagorean Theorem, The Infinity of Primes, the Fundamental Theorems of Arithmetic, Algebra, and Calculus, and Gödel’s Theorem.)]
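
    One way to get a back-of-envelope answer – with every numeric constant below an explicit guess, and using only the textbook facts that “formal system F has a proof of T of length ≤ N that the checker accepts” is an NP-style statement, that the Cook–Levin tableau reduction yields a formula of size roughly quadratic in the verifier’s running time, and that quasilinear-size reductions are also known – is something like this Python sketch:

      import math

      # Every constant below is a labelled guess, not a measurement.
      pages = 10
      chars_per_page = 2_000        # guess: symbols of human-readable proof per page
      formalization_blowup = 100    # guess: formal proof ~100x longer than the prose
      checker_overhead = 100        # guess: proof-checker steps per formal symbol

      proof_len = pages * chars_per_page * formalization_blowup
      T = proof_len * checker_overhead          # verifier running time, in steps

      print(f"verifier steps      ~ {T:.0e}")
      print(f"Cook-Levin tableau  ~ {T**2:.0e} clauses (the textbook ~T^2 bound)")
      print(f"quasilinear variant ~ {T * math.log2(T):.0e} clauses (O(T log T) reductions)")

    So, under these (very soft) assumptions, somewhere between roughly 10^10 and 10^17 clauses, depending mostly on which reduction you use and how pessimistic the formalization blow-up really is.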

  103. Michael McG Says:

    I applaud you, Scott, for devising my favorite type of April Fools’ – one that openly and initially (but subtly) declares that it is an April Fools’, teasing the mark with its back-handed up-frontness.

    Of course, it helps to know that it is an April Fools’, but one should at least be a bit suspicious of any document containing the word “April” anywhere other than the date itself, promulgated on the first day of said month. ;P

  104. Serge Says:

    Vadim,

    In comment #84 you said:

    > “Universality means that one model of computation (your processor) can effectively simulate another model. With reasonable models, this simulation only adds polynomial overhead. This means that you can take your pick and prove P vs. NP on any “processor” you choose and your solution will be relevant to all of them.”

    A nice analogy can be made between this and the first postulate of Relativity that the physical laws of nature are the same in every inertial frame of reference. Indeed, it seems possible to view the requirement of a polynomial overhead between two computer models as the computing counterpart to the requirement that all inertial frames be in a state of constant, rectilinear motion with respect to one another.

    Thus, a more-than-polynomial overhead between two computing models would be the same thing as an accelerated motion between two frames of reference in mechanics. So, why not try to rewrite General Relativity, where acceleration=gravitation, into a computational framework where exponential overhead=complexity? If this could be done, then it might also explain why P=NP and P!=NP can’t be told apart.

    In fact, Turing machines are by no means physically realistic. In the real world, the tape has a mass which obeys the laws of mechanics – note that it’s actually being accelerated at each instruction of the program it’s processing! Furthermore, over long periods of computation the relativistic effects on its motion might no longer be negligible! Therefore, the time of its process could turn out to run slower than the time of the observer – the computer scientist, that is.

    I wonder if these facts are convertible into a Relativity of computer processes…

  105. Serge Says:

    The P?=NP question seems to depend on the frame of reference, whereas P!=EXPTIME doesn’t. You can never be sure that, whenever you’re trying to solve P?=NP, your own personal computing model – your mind and personal knowledge – isn’t biased with respect to your computing model of reference – Turing machines, lambda calculus, etc., which are neutral with respect to each other… but probably not to your own mind!

    The crucial difference between the two kinds of problems seems to come from the assumptions made on the input data. NP-complete problems deal with unstructured data. In the subset sum problem, the subsets of integers aren’t supposed to possess any structure. Compare with SORTING, where the input data is endowed with an order relation…

  106. Do natural generalizations of P versus NP exist? | Question and Answer Says:

    […] question was motivated by Scott Aaronson’s recent weblog comments (see below), and the complexity-theoretic richness of this question has subsequently been […]

  107. Sam Says:

    Scott:
    I just stumbled on this particular post (I wanted to see what you thought of Fortnow’s book) and got a kick out of the April Fools’ joke. It reminded me of a day back in 1976 as an MIT undergrad sitting in the physics commons room, when someone rushed in holding a copy of Scientific American saying that special relativity had been shown to be “wrong”. It was of course a good prank on the part of Martin Gardner!

    “Gardner has a natural penchant for fun and games. In an April Fools’ piece, he claimed Einstein’s theory of relativity was disproved and that Leonardo da Vinci invented the flush toilet.” (SciAm web site).

    I was a bit startled and asked to see the copy; after reading the article and looking at the date, I had a good laugh. We had just studied SR in one of my classes, so I was able to quickly spot the joke.

    I really enjoyed Mr. Gardner’s sense of humor and approach to keeping us on our toes. Read, stop, think, … repeat. Then maybe draw a conclusion.

    I think these sorts of pranks are good once in a while, since we shouldn’t take everything for granted. My father told me a story from his Tufts medical school days – a professor lectures for an hour and then tells the students that most of what he just told them is “wrong”, and the week’s homework assignment is to correct the lecture! (Not sure I would have liked that, but it sure would have made me think once I cooled off.)

    Also, glad you liked Fortnow’s book, and since I was on the verge of purchasing it, well, I guess it’s off to Amazon (a good excuse to procrastinate on what I really should be doing!).

    Regards,
    — Sam

  108. P, NP, and The Search For The Impossible | Shaun Ling Says:

    […] reading Scott Aaronson’s blog here, I was introduced to a book called The Golden Ticket: P, NP, and the Search for the Impossible by […]