Subhash Khot’s prizewinning research

I already congratulated Subhash Khot in my last post for winning the Nevanlinna Prize, but this really deserves a separate post.  Khot won theoretical computer science’s highest award largely for introducing and exploring the Unique Games Conjecture (UGC), which says (in one sentence) that a large number of the approximation problems that no one has been able to prove NP-hard, really are NP-hard.  In particular, if the UGC is true, then for MAX-CUT and dozens of other important optimization problems, no polynomial-time algorithm can always get you closer to the optimal solution than some semidefinite-programming-based algorithm gets you, unless P=NP.  The UGC might or might not be true—unlike with (say) P≠NP itself, there’s no firm consensus around it—but even if it’s false, the effort to prove or disprove it has by now had a huge impact on theoretical computer science research, leading to connections with geometry, tiling, analysis of Boolean functions, quantum entanglement, and more.
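
To make the MAX-CUT statement concrete: the semidefinite-programming-based algorithm in question is Goemans-Williamson, which solves an SDP relaxation and then rounds the resulting unit vectors with a random hyperplane.  Below is a minimal Python sketch of just the rounding step, assuming the SDP has already been solved; the triangle example at the end supplies its (optimal) SDP vectors by hand rather than calling a solver.

```python
import numpy as np

def round_hyperplane(vectors, edges, trials=100, seed=0):
    """Round MAX-CUT SDP vectors to a cut: pick a random hyperplane through
    the origin and split the vertices according to which side their unit
    vector lands on; keep the best cut found over several random hyperplanes."""
    rng = np.random.default_rng(seed)
    V = np.asarray(vectors)                      # shape (n, d), one row per vertex
    best_size, best_cut = -1, None
    for _ in range(trials):
        r = rng.normal(size=V.shape[1])          # random hyperplane normal
        side = V @ r >= 0                        # which side each vertex falls on
        size = sum(1 for u, v in edges if side[u] != side[v])
        if size > best_size:
            best_size, best_cut = size, side
    return best_size, best_cut

# Toy usage on a triangle, with three unit vectors at 120-degree angles standing
# in for the SDP solution (which, for the triangle, they happen to be):
angles = [0, 2 * np.pi / 3, 4 * np.pi / 3]
vecs = [[np.cos(a), np.sin(a)] for a in angles]
print(round_hyperplane(vecs, edges=[(0, 1), (1, 2), (2, 0)]))   # finds a cut of size 2
```

The 0.878… guarantee comes from the fact that a random hyperplane through the origin separates two unit vectors with probability equal to their angle divided by π.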

There are a few features that make the UGC interesting, compared to most other questions considered in complexity theory.  Firstly, the problem that the UGC asserts is NP-hard—basically, given a list of linear equations in 2 variables each, to satisfy as many of the equations as you can—is a problem with “imperfect completeness.”  This means that, if you just wanted to know whether all the linear equations were simultaneously satisfiable, the question would be trivial to answer, using Gaussian elimination.  So the problem only becomes interesting once you’re told that the equations are not simultaneously satisfiable, but you’d like to know (say) whether it’s possible to satisfy 99% of the equations or only 1%.  A second feature is that, because of the 2010 work of Arora, Barak, and Steurer, we know that there is an algorithm that solves the unique games problem in “subexponential time”: specifically, in time exp(n^poly(δ)), where δ is the completeness error (that is, the fraction of linear equations that are unsatisfiable, in the case that most of them are satisfiable).  This doesn’t mean that the unique games problem can’t be NP-hard: it just means that, if there is an NP-hardness proof, then the reduction will need to blow up the instance sizes by an n^poly(1/δ) factor.
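
Here is a toy Python illustration of the problem (the encoding and the three-equation instance are mine, chosen only for concreteness): equations of the form x_i - x_j = c (mod k).  Deciding whether all of them can hold is easy (simple propagation suffices for these two-variable difference equations, without even needing full Gaussian elimination), while the sketch for the maximum number of satisfiable equations falls back on brute force, which is fine only for toy sizes.

```python
from itertools import product

def max_satisfiable(num_vars, k, equations):
    """Brute force over all assignments (exponential -- toy sizes only):
    count the largest number of equations x_i - x_j = c (mod k) satisfiable at once."""
    best = 0
    for assignment in product(range(k), repeat=num_vars):
        satisfied = sum(1 for (i, j, c) in equations
                        if (assignment[i] - assignment[j]) % k == c % k)
        best = max(best, satisfied)
    return best

def perfectly_satisfiable(num_vars, k, equations):
    """Easy case: decide whether ALL equations can hold, by fixing one variable
    per connected component and propagating the forced values."""
    value = [None] * num_vars
    adj = [[] for _ in range(num_vars)]
    for i, j, c in equations:
        adj[i].append((j, -c % k))   # x_j = x_i - c
        adj[j].append((i, c % k))    # x_i = x_j + c
    for start in range(num_vars):
        if value[start] is None:
            value[start] = 0
            stack = [start]
            while stack:
                u = stack.pop()
                for v, delta in adj[u]:
                    want = (value[u] + delta) % k
                    if value[v] is None:
                        value[v] = want
                        stack.append(v)
                    elif value[v] != want:
                        return False     # contradiction: not all equations can hold
    return True

# Three variables mod 3; the three equations below are cyclically inconsistent,
# but any two of them can be satisfied simultaneously.
eqs = [(0, 1, 1), (1, 2, 1), (2, 0, 0)]
print(perfectly_satisfiable(3, 3, eqs))   # False
print(max_satisfiable(3, 3, eqs))         # 2
```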

To be clear, neither of the above features is unique (har, har) to unique games: we’ve long known NP-complete problems, like MAX-2SAT, that have the imperfect completeness feature, and we also know NP-hardness reductions that blow up the instance size by an n^poly(1/δ) factor for inherent reasons (for example, for the Set Cover problem).  But perhaps nothing points as clearly as UGC at the directions that researchers in hardness of approximation and probabilistically checkable proofs (PCP) would like to be able to go.  A proof of the Unique Games Conjecture would basically be a PCP theorem on steroids.  (Or, since we already have “PCP theorems on steroids,” maybe a PCP theorem on PCP?)

It’s important to understand that, between the UGC being true and the unique games problem being solvable in polynomial time, there’s a wide range of intermediate possibilities, many of which are being actively investigated.  For example, the unique games problem could be “NP-hard,” but via a reduction that itself takes subexponential time (i.e., it could be hard assuming the Exponential-Time Hypothesis).  It could be solvable much faster than Arora-Barak-Steurer but still not in P.  Or, even if the problem weren’t solvable any faster than is currently known, it could be “hard without being NP-hard,” having a similar status to factoring or graph isomorphism.  Much current research into the UGC is focused on a particular algorithm called the Sum-of-Squares algorithm (i.e., the Lasserre hierarchy).  Some researchers suspect that, if any algorithm will solve the unique games problem in polynomial time (or close to that), it will be Sum-of-Squares; conversely, if one could show that Sum-of-Squares failed, one would’ve taken a major step toward proving the UGC.
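
For a sense of what the bottom of that hierarchy looks like: the degree-2 Sum-of-Squares relaxation of MAX-CUT is essentially the basic Goemans-Williamson SDP, which can be written in a few lines.  The sketch below uses the cvxpy library purely for illustration; it is the lowest level only, not the general Lasserre machinery.

```python
import cvxpy as cp
import numpy as np

def maxcut_sdp(n, edges):
    """Degree-2 SoS / basic SDP relaxation of MAX-CUT: find a PSD matrix X with
    unit diagonal maximizing the sum over edges of (1 - X_ij)/2.  Returns the
    relaxation value and unit vectors recovered from X (one row per vertex)."""
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0] + [X[i, i] == 1 for i in range(n)]
    objective = cp.Maximize(sum((1 - X[i, j]) / 2 for i, j in edges))
    value = cp.Problem(objective, constraints).solve()
    # Factor X = V V^T to get the vectors that the rounding step needs.
    w, U = np.linalg.eigh(X.value)
    V = U @ np.diag(np.sqrt(np.clip(w, 0, None)))
    return value, V

# Triangle: the relaxation value is 2.25, while the best actual cut has size 2.
print(maxcut_sdp(3, [(0, 1), (1, 2), (2, 0)])[0])
```

Chaining this with the hyperplane-rounding sketch above recovers the Goemans-Williamson algorithm; the higher levels of the hierarchy add more variables and constraints, and it’s those levels whose power on unique games is the open question.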

For more, I recommend this Quanta magazine article, or Luca Trevisan’s survey, or Subhash’s own survey.  Or those pressed for time can simply check out this video interview with Subhash.  If you’d like to try my wife Dana’s puzzle games inspired by PCP, which Subhash uses 2 minutes into the video to explain what he works on, see here.  Online, interactive versions of these puzzle games are currently under development.  Also, if you have questions about the UGC or Subhash’s work, go ahead and ask: I’ll answer if I can, and otherwise rely on in-house expertise.

Congratulations again to Subhash!

19 Responses to “Subhash Khot’s prizewinning research”

  1. Gil Kalai Says:

    There is much more to say, but let me also mention the Khot-Vishnoi negative solution of the Goemans-Linial conjecture – embeddability of negative-type metrics into ell-1. This is an amazing example of TCS insights applied to basic questions in pure mathematics which do not refer to computational complexity. Congratulations!

  2. Arko Bose Says:

    Scott,
    Completely off-topic, but I was really hoping you would post something to cover your recent participation in the IBM event Quantum Foundations of a Classical Universe.

    Are there videos of the discussions?

  3. Scott Says:

    Arko #2: It’s a-comin’!

  4. rrtucci Says:

    Arko, please be patient with senior citizens.
    http://qbnets.wordpress.com/2014/08/09/new-study-shows-that-quantum-mechanics-virus-affects-3-out-of-every-5-senior-americans/

  5. J Says:

    The Simons Foundation has a new article on a new quantum mechanics at the Planck scale. Your thoughts would be helpful, please? http://www.simonsfoundation.org/quanta/20140818-at-multiverse-impasse-a-new-theory-of-scale/ Do you think this could affect quantum computing?

  6. Scott Says:

    J #5: Thanks for the link! The article was interesting, though I confess it raised more questions for me than it answered. I’m happy if the HEP community is breaking out more from the “supersymmetry vs. multiverse” dichotomy (something that I, though an ignorant outsider, always winced at), and considering other explanations for the apparent fine-tunings. It sounds like the idea of mass and length scales emerging dynamically, and the Planck scale and Standard Model scale emerging from two different processes, is one worthwhile thing for people to work on. (Of course, the article’s opening sentence, about size differences being “illusory,” is hyperbole: even if length scales did arise in some dynamic manner, they’d still exist!)

    Regarding “ghosts”: if a theory predicts negative probabilities for actual measurement outcomes, then the theory is wrong, full stop. So before agravity could be a serious contender, the ghosts would need to be tamed in some way (and it sounds like even agravity’s proponents agree about that).

    So to answer your question: no, I can’t see right now how this would affect quantum computing—since, if the ghosts are tamed (as they’d have to be for any theory involving them to make sense), then the basic principles of quantum mechanics are left completely intact.

  7. J Says:

    @Scott#6 Thank you for your feedback and opinion. So you think there can never be a consistent and satisfactory theory of generalized probability values?

  8. Scott Says:

    J #7: No, that’s not quite what I said. You can construct a theory of just about anything! 🙂

    “Negative probabilities” do show up in QFT, as artifacts needing to be tamed, and they also arise when people try to impose quasi-classical pictures on quantum mechanics (e.g., the Wigner function representation). But the one thing a meaningful physical theory must never do, is predict a negative probability for an actual outcome of an actual measurement. That’s the thing that makes as little sense as it sounds.
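
    (To make the Wigner-function point concrete, here is a toy numerical example for a single qubit.  The phase-point operators below follow one common convention for the discrete Wigner function, and the particular state was chosen only to exhibit negativity; treat it as an illustration, not a canonical definition.)

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def A(a, b):
    """Phase-point operator at point (a, b), one common single-qubit convention."""
    return 0.5 * (I2 + (-1)**a * Z + (-1)**b * X + (-1)**(a + b) * Y)

# A pure state with Bloch vector (1, -1, 1)/sqrt(3):
r = np.array([1.0, -1.0, 1.0]) / np.sqrt(3)
rho = 0.5 * (I2 + r[0] * X + r[1] * Y + r[2] * Z)

W = {(a, b): (0.5 * np.trace(rho @ A(a, b))).real for a in (0, 1) for b in (0, 1)}
print(W)                    # the (1, 1) entry comes out negative (about -0.18)
print(sum(W.values()))      # yet the four entries sum to 1, like probabilities
```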

    I don’t know the situation regarding “agravity” theories—you should really ask an expert (maybe try Physics StackExchange?). But from what I read, it seemed clear that there are two possibilities:

    (i) People won’t manage to “tame” these ideas into meaningful quantum theories—in which case, they’ll simply reject the ideas (which were never more than speculations anyway), rather than the basic principles of QM. (Just like, when your restaurant bill doesn’t add up, you question the waiter, not the rules of arithmetic…)

    (ii) More optimistically, people will manage to tame the theories—in which case, they’ll be perfectly consistent with ordinary QM.

    Either way, unfortunately, it’s not clear that there would be any implications for computational complexity.

    (Anyone who actually understands these “agravity” ideas and can add further clarification, is strongly encouraged to do so.)

  9. D Says:

    A consequence of the UGC, as you have stated it:

    ” In particular, if the UGC is true, then for MAX-CUT and dozens of other (NP Hard) important optimization problems, no polynomial-time algorithm can always get you closer to the optimal solution than some semidefinite-programming-based algorithm gets you, unless P=NP.”

    So, as a consequence, if one gets a polynomial-time algorithm for some NP-hard problem, then would P = NP be true?

    I hope you will enjoy having a look at:

    http://vixra.org/pdf/1204.0019v1.pdf

    DPM

  10. Scott Says:

    D #9: Yes, if you got a polynomial-time algorithm for an NP-hard problem then P=NP would be true. That’s almost what NP-hardness means.

  11. lewikee Says:

    This question is not specific to Subhash or UGC, so I understand if you don’t wish to answer. But I don’t understand how one could possibly prove P!=NP (despite the fact that it is almost surely the case). It seems to me it would entail proving that there could exist no future algorithm, making clever use of as-yet-undiscovered mathematical concepts, that could solve such a problem in polynomial time. How can one do that? Can you give examples of existing problems (obviously not NP-hard ones) where it has already been proven that “it cannot be done quicker, even with the mathematics and clever people of the future”?

  12. Scott Says:

    lewikee #11: Yes, I can give such examples. Most obviously, there’s the halting problem. How could anyone possibly say that the as-yet-undiscovered mathematical concepts of the future could never, ever allow us to design a program that decides whether other programs halt? Well, Turing said it, and his argument remains as valid today as it was in 1936.
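
    (For intuition, here is a toy Python rendering of Turing’s diagonal argument; the function names are mine, and it’s meant to illustrate the self-reference at the heart of the proof, not to substitute for it.)

```python
def diagonal_adversary(claimed_halts):
    """Given ANY candidate function claimed_halts(f) that purports to decide
    whether calling f() halts, build a program on which it must be wrong."""
    def paradox():
        if claimed_halts(paradox):   # the candidate says "paradox halts"...
            while True:              # ...so paradox loops forever
                pass
        # the candidate says "paradox runs forever" -- so paradox halts at once
    return paradox

# Demo with a (necessarily wrong) candidate that always answers "halts":
always_yes = lambda f: True
p = diagonal_adversary(always_yes)
print(always_yes(p))   # True -- yet running p() would loop forever, so it's wrong about p
```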

    Then there’s the Time Hierarchy Theorem, which lets us make lots of interesting unconditional statements, such as:

      There is no polynomial-time algorithm to decide whether White has the win from a given position, in an n×n generalization of chess.

      There is no polynomial-time algorithm to decide whether a statement about integers, involving quantifiers, variables, constants, and addition (but no multiplication), is true or false. (Even though this problem is decidable, in a stack-of-exponentials running time.)

    Here are a few other examples of things that we know:

      There are no polynomial-size circuits of AND and OR gates (and no NOT gates) to solve the CLIQUE or MATCHING problems. (Even though there are such circuits of exponential size.)

      There is no polynomial-size, constant-depth, unbounded-fanin circuit of AND, OR, and NOT gates to compute the parity or majority of an n-bit string.

      There is no algorithm that solves SAT simultaneously in sublinear space and O(n^1.7) time.

    Once again, all of these things have been proved, despite the unlimited number of new mathematical concepts and algorithmic techniques that might be invented in the future—much as Fermat’s Last Theorem has been proved, despite the unlimited number of ideas that might be invented in the future to search for counterexamples to it. (Since the time of Euclid, you might say, the entire appeal of math has been its ability to encompass an infinity of cases in a single, finite argument.)

    Obviously, P≠NP is vastly harder than any of the examples I mentioned—if it weren’t, then it would’ve been solved already. But I think the existence of these known lower bounds (and dozens more that I didn’t mention) should certainly cause one to question the belief that P≠NP must be fundamentally unprovable.

  13. Dave R Says:

    Scott, do you believe the UGC is true?

  14. Scott Says:

    Dave #13: Maybe with ~65% probability. (I had some statement to that effect in the original draft of this post; don’t remember why it got edited out.)

  15. Dave R Says:

    Scott #14: Thanks. I always like open problems where no one really knows for sure which way they’ll go.

    lewikee’s question actually got me thinking – isn’t the core reason we think we can’t solve NP-hard problems in polynomial time that there is just not enough information gained at each step? My intuition is, roughly, that for a problem with an exponential number of possible solutions, each of the polynomially many steps would have to provide an exponential jump in the amount of information we have about the solution.

    Can you provide any info on how this has been explored from an information theoretic perspective?

  16. Scott Says:

    Dave R #15: People have thought along such lines, of course, but the key problem is that it’s extremely hard to formalize what you mean by the “information” an algorithm possesses at a given time. After all, if you approach the question through Shannon’s theory, then you’re forced to say that all the information you want is already “implicitly” there, the instant the algorithm receives its input! 🙂 The trouble, of course, is that while the input might “implicitly” contain the desired information (the prime factors of your number, the satisfiability of your formula, etc.), you want an explicit representation of the answer, a distinction that Shannon’s theory is silent about.

    So, OK then, why not just try to quantify the amount of “explicit knowledge” that the algorithm has about the solution, and show that exponentially many steps are needed before there can be an appreciable amount of such knowledge? Well, to see the difficulty of that strategy, consider any even slightly-interesting polynomial-time algorithm—e.g., for greatest common divisor, or maximum matching, or finding eigenvalues of a matrix. Such algorithms work by doing complicated transformations of the input. If you were just staring at a memory dump, it might not be obvious to you that the algorithm was gaining any knowledge at all about the solution, until everything suddenly came together at the very last step. Or rather: it wouldn’t be obvious to you unless you understood how the algorithm worked, at a higher, “semantic” level.

    (And in fact, almost every amateur “proof” of P≠NP I’ve ever seen could be rejected for the simple reason that nothing it said was specific to NP-complete problems. I.e., if the argument worked to show any algorithm needed exponential time to “gather enough information” to solve a 3SAT instance, then an exactly parallel argument would also have worked for 2SAT. Yet for 2SAT, we know a linear-time algorithm.)
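
    (For reference, here is a compact sketch of that linear-time 2SAT algorithm, via strongly connected components of the implication graph; the recursive DFS and the literal encoding are just one way to set it up, and the recursion would need to be made iterative for truly large instances.)

```python
import sys
sys.setrecursionlimit(100000)

def two_sat(n, clauses):
    """Decide satisfiability of a 2-CNF formula over variables 1..n.
    A literal is +v or -v; clauses is a list of (literal, literal) pairs.
    Returns a satisfying assignment as a dict {v: bool}, or None."""
    N = 2 * n
    def idx(lit):                      # +v -> 2(v-1), -v -> 2(v-1)+1
        return 2 * (abs(lit) - 1) + (lit < 0)

    graph = [[] for _ in range(N)]
    rgraph = [[] for _ in range(N)]
    for a, b in clauses:               # (a or b) == (~a -> b) and (~b -> a)
        graph[idx(-a)].append(idx(b)); rgraph[idx(b)].append(idx(-a))
        graph[idx(-b)].append(idx(a)); rgraph[idx(a)].append(idx(-b))

    # Kosaraju's algorithm: order vertices by DFS finish time...
    order, seen = [], [False] * N
    def dfs1(u):
        seen[u] = True
        for v in graph[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)
    for u in range(N):
        if not seen[u]:
            dfs1(u)

    # ...then label SCCs on the reverse graph, in decreasing finish time.
    comp = [-1] * N
    def dfs2(u, c):
        comp[u] = c
        for v in rgraph[u]:
            if comp[v] == -1:
                dfs2(v, c)
    c = 0
    for u in reversed(order):
        if comp[u] == -1:
            dfs2(u, c)
            c += 1

    assignment = {}
    for v in range(1, n + 1):
        pos, neg = idx(v), idx(-v)
        if comp[pos] == comp[neg]:
            return None                # v and ~v in the same SCC: unsatisfiable
        # Components are numbered in topological order; set v true iff its
        # positive literal's SCC comes after its negation's.
        assignment[v] = comp[pos] > comp[neg]
    return assignment

# Example: (x1 or x2) and (~x1 or x3) and (~x2 or ~x3)
print(two_sat(3, [(1, 2), (-1, 3), (-2, -3)]))   # prints a satisfying assignment
```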

    Worse yet, Razborov and Rudich’s natural proofs barrier shows that, in a certain sense, any approach to proving P≠NP that follows the algorithm step-by-step, and explicitly calculates how “complicated” is the information that the algorithm has produced so far (showing that exponentially many steps are needed before it’s “complicated enough”), is inherently doomed to failure. For, if such an approach worked, we could also use it to efficiently distinguish random functions from pseudorandom functions, which is one of the very problems we were trying to prove was hard.

    This shouldn’t be interpreted to mean that proving P≠NP is impossible: in fact, diagonalization (the thing Turing used to show the unsolvability of the halting problem) is already a technique that evades the barrier pointed out by Razborov and Rudich. (Unfortunately, diagonalization falls prey to a different barrier, called relativization—but this comment is getting too long.) What it means, rather, is that if you’re trying to prove some problem is not in P, then you’d better zero in on something highly specific about that problem, rather than just trying to argue that the solution is “complicated,” and no algorithm could generate anything so “complicated” after a polynomial number of steps.

    (Which is something that one could’ve realized, and that careful thinkers did realize, even before Razborov and Rudich—but the latter made the point sharper and more formal.)

  17. Dave R Says:

    Thanks Scott – and thanks for always making such thoughtful responses even to pedestrian questions :-). Very generous of you!

  18. J Says:

    Scott #7: Will Bardeen, mentioned in the article, seems to be the son of John Bardeen.

  19. Shtetl-Optimized » Blog Archive » Quantum computing news items (by reader request) Says:

    […] problem in quantum polynomial time—thereby disproving the Unique Games Conjecture (which I previously blogged about here), unless NP⊆BQP.  It hasn’t yet succeeded at that goal.  In their earlier work, Farhi et […]