It’s time someone said it.  There exists, among a small but vocal subset of the nerd community, a strange animus against computational complexity theory, which is often rooted in factual misunderstandings, and seems wholly out of proportion to any real shortcomings that complexity theory has.  Granted, every field of science has its backseat-drivers, those extralusionary intellects who feel sure they could best the experts, but haven’t expended any actual effort in solving the problems on which the experts are stuck.  But, perhaps because of the unusual accessibility of its open problems, complexity theory seems (I might be wrong) to attract more such naysayers than other mathematical fields.

It goes without saying that no intellectual pursuit is automatically entitled to respect: it has to earn it by actual accomplishments.  But even after complexity theory delivers spectacular accomplishments—from NP-completeness to PCPs to public-key cryptography to zero-knowledge proofs to quantum computing—the carping continues unabated.  It’s this phenomenon that I’d like to understand better.

Here are the main manifestations of anti-complexitism that I’ve witnessed:

“Monday-morning quarterbacking” of complexity breakthroughs.  For an example, see this thread on Lance and Bill’s blog, which is full of knowing comments seeking to minimize Ryan Williams’s recent NEXP vs. ACC breakthrough.  Some of these comments are strangely off the mark: for example, the new result is taken to task for being “nonconstructive,” relying on the ability to perform diagonalization within the huge set NEXP.  But we’ve known since Natural Proofs that, if you want to make progress on the P vs. NP question, then you’ll need “nonconstructive” techniques that seize on a special property of the function f being lower-bounded—and diagonalization remains one of the few examples of such a technique that we have.  On the other hand, we’ve also known that, if you do use diagonalization, then you’ll need to combine it with some new ingredient to get around both the relativization and algebrization barriers.  Ryan’s proof actually does all of that, which is why many of us are so excited about it.

Refusal to accept the incremental nature of complexity theory (which is shared by every other scientific field known to humankind).  To me, one of the more humorous criticisms of Ryan’s breakthrough is that it “merely” shows NEXP is not in ACC, and not (for example) that the MAJORITY function is not in ACC.  Granted, the statement NEXP⊄ACC is pathetically weak compared to what we believe is true.  But then again, what have you done that’s advanced circuit complexity by 1% as much as this “pathetic” lower bound?

Fervent desire to see complexity theorists get their comeuppance from an “outsider.”  The anonymous blog-commentariat’s pooh-poohing of the ACC breakthrough stands in contrast to the praise that same commentariat heaped on Vinay Deolalikar’s unsuccessful attempt at proving P≠NP a few months ago.  Even though Vinay and Ryan are both academic researchers working at industry labs (HP and IBM respectively), from reading the comments, it appears part of the reason for the differing reactions is that Deolalikar was seen as more of an “outsider.”  Now, I like to root for the underdog as much as the next American, but it’s worth remembering that every scientist starts as an outsider.  It was only a decade ago that Ryan and I were both nerdy kids at Cornell, trying to get our respective crazy ideas taken seriously by professors.  To paraphrase Niels Bohr, is a “scientific insider” anything more than an outsider who’s already made most of the egregious mistakes in some subfield?

Presenting obvious, universally-known limitations of asymptotic analysis as if they were new insights.  Yes, I’m aware that a polynomial-time algorithm can be impractical because of huge constant factors or a whopping exponent.  I’m aware that an NP-complete problem can be easy on those instances that arise in practice.  Even if I have a debilitating brain injury, so that I no longer remember my own name or how to count to 10, I like to think that I’ll still be aware of these facts.  To me, dismissing complexity theory because of its love affair with worst-case, asymptotic analysis is like dismissing physics because of its love affair with frictionless surfaces, point particles, elastic collisions, and ideal springs and resistors.  In both cases, people make the simplifying assumptions not because they’re under any illusions that the world really is that way, but rather because their goal is understanding.  And in both cases, the theory itself gives you the tools to complicate your model—to put legs and hooves on your spherical cow—until you get reasonably-accurate predictions for the practical problems that interest you.  (See: D. E. Knuth, The Art of Computer Programming.)
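To make the point about constants and exponents concrete, here's a toy back-of-the-envelope calculation (purely illustrative numbers of my own choosing): a cubic-time algorithm saddled with a constant factor of a million really does lose to a constant-free exponential-time algorithm at small input sizes, but the asymptotics reassert themselves well before the inputs get astronomically large.

```python
# Purely illustrative: a polynomial-time cost with a huge constant
# factor versus an exponential-time cost with no constant at all.
def poly_cost(n):
    return 10**6 * n**3   # "impractical" polynomial: big constant factor

def exp_cost(n):
    return 2**n           # exponential, constant factor 1

# At n = 20 the exponential algorithm is thousands of times cheaper...
assert exp_cost(20) < poly_cost(20)
# ...but by n = 50 the asymptotics have taken over decisively.
assert exp_cost(50) > 10**3 * poly_cost(50)
```

The crossover point depends entirely on the made-up constants, which is exactly why the theory, not the caricature of it, is what tells you where your problem sits.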

Dismissal of complexity theory as “not real mathematics.”  There’s no denying that complexity theory is young compared to (say) complex analysis, differential equations, or Lie groups.  But if you were choosing an area to work on, why would that be a point against complexity?  It means all the more to do and discover: instead of having the elegant, unifying theory presented to you in yellow books, you get to create it!  Furthermore, we’ve seen that the connections between complexity theory and “traditional” areas of mathematics are as deep as people want to make them (and often deeper): in recent years, metric embeddings, elliptic curves, algebraic geometry, and arithmetic combinatorics have all played starring roles in complexity results.  On the other hand, yes, sometimes you can achieve a complexity breakthrough the way Ryan did, not by using “deep” mathematics, but just by thinking incredibly hard about simple facts like fast matrix multiplication and the Time Hierarchy Theorem.  Again, that’s a negative?

Attacks on complexity theory for over-reliance on conjectures, even though almost every field outside mathematics (and quite a few within mathematics) relies on conjectures as well.  This one really gets me—and is the reason why I often point out that, if we complexity theorists were physicists, we would have declared P≠NP a law of nature decades ago, then looked back with pride on our far-reaching discovery about the workings of the universe.  The problem of rigorously proving the No-SuperSearch Law would have been relegated to the mathematical physicists, much like the problem of proving the consistency of (3+1)-dimensional quantum field theory.  Instead, because we complexity theorists have the custom of trumpeting what we can’t prove from the rooftops, we give our extralusionary friends the ammunition they need to regard us as dismal failures, should they so choose.  “So you landed on the moon?  How adorable.  But tell me, why does that represent even the slightest progress toward the real goal of visiting other galaxies?”  The response, which should be obvious, is that taunting mountain-climbers for being out-of-shape laggards works best when you’re at the peak of the mountain looking down at them, not at the base looking up.  But then again, if you were climbing the mountain too, you’d be less likely to want to taunt other climbers, even those behind you: for then the competitive instincts common to all humans would have to contend with the feeling of being part of a great and difficult shared enterprise.

155 Responses to “Anti-Complexitism”

  1. Paul Carpenter Says:

    For varying degrees of ignorance, even the oldest fields of mathematics are subject to similar opposition. “How has algebra ever helped anyone?” is one that amazingly enough, you can still hear from adults.

  2. Suresh Says:

    I’m even more bewildered now :). I don’t read comment threads regularly, and had no idea that such vituperation was being thrown Ryan’s way!! Thanks for your eloquent defense – I couldn’t agree more, and am stunned at the complexity haters all coming out of the woodwork anonymously.

  3. gowers Says:

    As an admirer of complexity theory, I’m slightly puzzled by this post. Would you say that the critics are mainly cranks who write unpleasant anonymous comments on blogs, or are you hearing these criticisms from people who consider themselves “real” mathematicians and look down on complexity theory?

  4. A. Nonny Mouse Says:

    I love complexity theory!

    But still, the thing is, in other fields, you don’t have every single advance described as a “breakthrough”. I just wish theoretical CS weren’t so self-congratulatory.

  5. Moshe Says:

    Eh, that’s nothing, you guys have it easy :-).

  6. James Says:

    I haven’t read more than a few of the comments, but I don’t think “vituperative” is the right word for them. Some are certainly skeptical, but that is healthy. Scientists should be skeptical. Some go too far.

    Your argument about an over-reliance on conjecture misses the mark. The right comparison is to mathematics, not to physics. Are there subfields of mathematics, of the same relative size as complexity theory within theoretical computer science, that are so heavily reliant on conjectures? (Perhaps somewhere inside number theory? I am not familiar.)

  7. Akhil Mathew Says:

    Complexity theorists can explain their results to outsiders in a way that sounds cool. Zero-knowledge proofs, cryptography, and PCP make great stories with vaguely philosophical implications. Those of us who like algebraic geometry are at a loss to explain to the outsider how it is that “coherence is preserved under proper maps of noetherian schemes” is actually really awesome and not just a random string of letters.

  8. Scott Says:

    gowers: On the whole, my interactions with mathematicians have been great—even the ones who don’t actively follow complexity theory to the extent you do are still curious about it and happy to look for problems of common interest. (I suspect that Avi Wigderson moving to IAS singlehandedly played a nontrivial role in making mathematicians more aware of complexity.) So ironically, I’d guess that these days, most of the complaints about complexity theory “not being real math” come from outside the math community. I’m also (believe me!) well aware of the ability of blog comments to amplify the opinions of a disgruntled few, so that an outsider visiting our blogs would form a completely mistaken impression of what the “expert consensus” is on some subject.

    Having said that, yes, I have had some of the arguments from the post explicitly made to me, not just by anonymous blog-commenters, but also (for example) by several distinguished faculty members while I was doing my job interviews. To be fair, it’s hard to know whether the faculty members in question really believed the arguments, or whether they were just playing devil’s-advocate to see how I reacted under pressure.

  9. Scott Says:

    Moshe: Yeah, I guess the grass is always greener. 🙂 At least almost everyone seems to recognize the fundamental nature of the questions HEP theory is trying to answer, even as they eagerly sling mud at their opponents in the String and/or Multiverse Wars.

  10. Scott Says:


    Those of us who like algebraic geometry are at a loss to explain to the outsider how it is that “coherence is preserved under proper maps of noetherian schemes” is actually really awesome and not just a random string of letters.

    That sounds like a more-than-worthwhile effort—if you ever try your hand at writing an answer, shoot me a link!

  11. Scott Says:

    in other fields, you don’t have every single advance described as a “breakthrough”. I just wish theoretical CS weren’t so self-congratulatory.

    I agree that “breakthrough” is a slippery word: how big does something have to be to qualify? If I interpret it to mean “result that makes me want to drop whatever else I’m doing to learn the details,” then I’d say there are about 2 per year in the areas I follow. Of course, if I described every such result as “the biggest breakthrough in all of complexity in the last 50 years,” then I’d sound pretty silly before long—but do I do that, on this blog for example? 🙂

  12. Geordie Says:

    It is ironic that in a post about anti-complexitism (which I must admit to being infected with) a complexity theorist claims that complexity theorists are responsible for quantum computation.

  13. IndependentThinker Says:

    I think that the best comment so far is “Scientists should be skeptical. Some go too far.”.

    I’d like to add: “Scientists should be respectful and humble. Many go too far!”

  14. Nmm Says:

    Request for Geordie: tell us more about your anti-complexitism infection. What are your issues with the field and how did they arise? I find this fascinating.

  15. Anonymous Says:

    I think you are overreacting to the comments on Lance’s post. Lance’s post asks why others are not excited about Ryan’s result, but in answering why they should be, he only gives the reasons why complexity theorists are excited: he wrote that we have had no progress in proving lower bounds in the last 23 years, so we are excited. That gives readers the wrong impression that there has been no progress in complexity theory (in the very narrow sense of separating complexity classes), and that now, as you put it in your previous post, things are slightly less embarrassing. I know quite well how excited you and Lance are about this result, and that at the same time you want to avoid creating hype similar to that around D’s paper, but famous complexity theorists like you and Lance should be more careful in their posts, as you are representing the whole community.

    As a number of commenters noted under Lance’s post, we have had progress in the last two decades even in this narrow sense, so that is not the right way to explain why Ryan’s result is more interesting than, say, lower bounds for a certain kind of depth-3 circuit (note: I think those results are also interesting, but Ryan’s result is far more interesting).

    If you say we had made no progress and have now made only very small progress (from the outside, going from AC^0[p] to ACC^0 while weakening the hard problem to NEXP does look that way), you cannot expect a much better reaction. Taking a negative attitude toward the past to make the present look better is not the right approach. The right way to explain the importance of Ryan’s result to outsiders is not to dwell on the lack of progress over the last two decades. The positive framing is: not only has Ryan solved a long-open problem, but his *method* (I know others have related work that Ryan builds on, but that is not the point) avoids all the known barriers and is very promising for proving even more; that is why complexity theorists are excited about it (I even suspect it might develop into its own program before long).

    To see that outsiders are not as nasty as you think, I suggest you look at Lance’s post (and your own) last year on Braverman solving a long-open problem. Let me use a metaphor: if a clown announces that his next joke is the funniest of the last two decades, and the joke turns out to be merely the funniest of the last year, he may get tomatoes thrown at him. If you want outsiders to be as excited as we are, you have to make them excited. The attitude of a post goes a long way toward setting the attitude of its comments.

  16. Raoul Ohio Says:

    I am a “half way knowledgeable” fan of TCS and greatly enjoy the blog and appreciate the work it takes. I also plead guilty to some heckling of you guys who are trying to get some work done, and I have some more below.

    Anyone posting their work or opinions in a public forum is going to get some guff. You can’t let it worry you. For example, a decade ago Scott mentioned that he didn’t think much of the music of Leonard Cohen. I am not aware that any of the million-odd musicians who regard LC as the guru lost any sleep. Forty years ago I also didn’t think much of LC. That was before I heard “So Long, Marianne”.

    As a scholar, you soldier on, making incremental contributions to the canon. TCS is a large industry, full of smart people, and also one large elephant called “does any of this actually matter?”. I have no insight one way or the other, but will put on the devil’s advocate hat and state that the issue should not be “swept under the rug”.

    Let’s consider “Presenting obvious, universally-known limitations of asymptotic analysis as if they were new insights.”.

    Is this nitpicking, or a major issue? Presumably everybody has figured this out. But isn’t the obvious implication that P vs. NP has little practical relevance?

    Suppose it turns out that P = NP. So what? It is “universally known that” for large n, the calculation can’t be done either way. So who cares? Would P=NP be as useful as, say, the Fast Fourier Transform, or more like “How many angles can dance on the head of a pin”? You tell me.

    Now that I have everyone ticked off, I also want to throw some cold water on “spectacular accomplishments—from NP-completeness …”. Allow me to summarize NP-Completeness for the uninitiated: NP-C tells us that “most of the hard CS problems that appear to have nothing in common are actually all exactly the same”. The near-universal response was “Oh, wow! Everything is the same! Who da thunk it?”. Surely I am not the only person who thought “NP-C is a really lame tool if it can’t distinguish these problems”. Can anyone name a single program written that makes any use of NP-C?

  17. Scott Says:

    a complexity theorist claims that complexity theorists are responsible for quantum computation.

    Well, Bernstein and Vazirani played a seminal role in getting the whole field started, and Peter Shor is a complexity theorist some of the time. But I’ll admit that physicists were involved as well. 😉

  18. Scott Says:

    Would P=NP be as useful as, say, the Fast Fourier Transform, or more like “How many angles [sic] can dance on the head of a pin”? You tell me.

    OK, I will. 🙂 P=NP would likely be much more useful than the Fast Fourier Transform (which is saying a lot). It would depend on the constants involved, but whatever they were, there would be a huge practical interest in bringing them down as low as possible. And of course, if you succeeded in making the constants small enough, from that point forward you could use your P=NP-enhanced computer to find algorithms with even smaller constants, and so on.

    Can anyone name a single program written that makes any use of NP-C?

    Uhh, the programs that solve chip-design, AI planning, and lots of other applied optimization problems by first reducing them to 3SAT, then running highly-optimized SAT-solvers on them?
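    As a toy sketch of that reduce-then-solve pattern (my illustration here, with a brute-force check standing in for the highly-optimized solver): encode graph 3-colorability as a CNF formula, then simply ask whether the formula is satisfiable.

    ```python
    from itertools import product

    def coloring_to_cnf(vertices, edges, colors=3):
        """Reduce graph coloring to CNF-SAT.
        Variable (v, c) gets a positive integer id meaning "vertex v has color c".
        A clause is a list of signed ids; a negative id means a negated variable."""
        var = {(v, c): i + 1
               for i, (v, c) in enumerate((v, c) for v in vertices
                                          for c in range(colors))}
        # Every vertex gets at least one color...
        clauses = [[var[(v, c)] for c in range(colors)] for v in vertices]
        # ...and adjacent vertices never share a color.
        for (u, w) in edges:
            for c in range(colors):
                clauses.append([-var[(u, c)], -var[(w, c)]])
        return len(var), clauses

    def brute_force_sat(num_vars, clauses):
        """Stand-in for a real SAT solver: try every truth assignment."""
        return any(all(any(bits[abs(l) - 1] == (l > 0) for l in cl)
                       for cl in clauses)
                   for bits in product([False, True], repeat=num_vars))

    triangle = [(0, 1), (1, 2), (0, 2)]
    k4 = [(u, w) for u in range(4) for w in range(u + 1, 4)]
    assert brute_force_sat(*coloring_to_cnf(range(3), triangle))  # 3-colorable
    assert not brute_force_sat(*coloring_to_cnf(range(4), k4))    # K4 is not
    ```

    A real pipeline would emit those clauses in DIMACS format and hand them to an industrial solver; the reduction is the same either way.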

  19. Raoul Ohio Says:

    I must amend my snide implication that NP-C is worthless. It is only worthless for actual computer programs. It is worth plenty for the thousands of people who have published papers about it.

  20. Raoul Ohio Says:

    Kindly rescind my last remark, as I stand corrected. That is what happens when you don’t know the facts and are only guessing.

  21. Tim Converse Says:

    I didn’t really believe in the premise of this blogpost (that anti-complexitarians exist) until I was introduced to Raoul Ohio by his comments above.

  22. Raoul Ohio Says:

    I might wish I had followed my usual rule of “don’t email after midnight”!
    Anyway, further reflection suggests I got a bad impression of NP-C a couple decades ago from sampling some proofs in Garey and Johnson. My recollection is that the reductions tended to result in higher-order polynomial algorithms not likely to be practical. I had the misconception that the subject was about translating one problem you can’t do into another problem you can’t do. A downside of studying stuff on your own is the ease of getting stuck down a wrong path.

    The “reducing them to 3SAT, then running highly-optimized SAT-solvers” example shows that NP-C reductions are plenty useful. This example also removes a general complaint I had about P vs. NP issues: I had failed to realize that since many NP-hard problems have reasonably fast algorithms that often or usually work, a good reduction actually can lead to a useful algorithm.

  23. Friend and Foe Says:

    Any branch of science, and in fact human creation, has been criticized by someone sometime. This is true for relativity theory, quantum mechanics, combinatorics, group theory, Beethoven, Cubism and also complexity theory. So I wouldn’t cast this as something particular to complexity theory.

    And to exemplify how one can attack on principled grounds anything, let me answer Raoul: I reject your “real world application” criterion. I claim that, first, it’s meaningless, and second, when given some meaning, it becomes false, or ideologically biased. Here’s why:

    1. What do you mean by “real world”? There is no consensus, nor even a grasp on this concept. It’s meaningless.

    2. Let’s decide for now that by “real world applications” you mean “what some people use in industry”. So your whole point boils down to a non-scientific criterion: a marketing one, or an economic one.

    3. Moreover, interpretation (2) cannot be a criterion for science, since the purpose of science is to advance neither the economy nor technology, but to understand certain phenomena.

  24. Jack Says:

    If you read any sports blogs, you’ll realize that only ignorant, vituperative, spelling-challenged, think-they-know-it-all no-lifers comment on blogs.

  25. a Says:

    Friend and Foe, I have to disagree with your 3rd item but I would diverge from the topic of this post.

  26. a Says:

    @Jack, thanks for including us all (me, yourself, Scott and others) among those, but I think it is more like the liar’s paradox: you have proven there is at least one. 🙂

  27. Scott Says:

    Raoul Ohio: May your intellectual honesty and willingness to admit mistakes serve as an inspiration and example to blog-commenters everywhere.

  28. Anon Says:

    I agree that anti-complexitism is not good. Perhaps, part of the problem is the attitude of complexity theorists like Lance: . I am sure many algorithms people (my anonymous self included) were peeved by this post by Lance.

  29. Anon Says:

    Oops.. the web address got messed up. Let me try again.

  30. Geordie Says:

    Nmm: Like Raoul I am also a big fan of the work done in TCS, in particular complexity theory. My favorite scientific blog is Dick Lipton’s. Garey and Johnson is one of my favorite CS books. As a purely scientific endeavor, I really like the work that Scott and people with similar interests do.

    My unease with complexity theory has arisen because, after having worked for more than a decade in building systems that have some utility in the world, it’s pretty clear that complexity theory is usually relevant to technology development only in certain obvious ways. It presents a background of knowledge and tools that you can use to build useful things, but as far as importance goes, it’s no more important than a dozen other considerations.

    Now I realize that my own particular professional interest (building machines that do important and interesting things) is not the same as doing basic research. But I don’t think that everyone doing complexity theory research understands this basic point.

    I think the reason that many folks might find complexity theory a little irritating is that people who actually build things see these results as the computer science equivalents to advances in basic physics. When they see proponents say that separating two complexity classes that few people have even heard of is some great advance, they (I would argue justifiably) see these advances as having no more relevance to the development of actual computing systems than (say) the discovery of the Higgs boson would have.

    I agree with Scott that many of the problems complexity theorists are working on are related to fundamental truths about the way nature works, and that advances in these directions are very important — some of them rivaling or surpassing the types of discoveries the LHC could make. However it is not at all obvious that even discoveries on this scale would be actually useful in building useful computing systems.

  31. A Layman's Layman Says:

    Could part of the problem be that complexity theory belongs in math rather than CS departments? With how quickly computing has advanced over the last 60+ years, I think people expect that discoveries in computer science should lead to something new on their desks in short order, while results in complexity theory are more fundamental and philosophical. I’ve read, maybe on this blog but I can’t remember, that the reason complexity theory is part of CS departments is because CS gets more funding than math does. I have no idea if that’s really the reason, but it does seem like computer scientists don’t think that TCS people are doing “real work” while mathematicians aren’t exposed to it as much as they should be.

  32. Mark Says:

    @A Layman,

    It will be a sad day if CS ever stops focusing on computation and all of its intricacies.

    I have yet to meet anyone in the department who would dismiss complexity theory and the like (myself included).

  33. John Sidles Says:

    Scott remarks (in his “Linear Optics” manuscript): “Complexity theorists who are not interested in the “practical side” of noninteracting-boson computation can skip Section 6, while experimentalists who are only interested in the practical side can skip everything else.”

    Scott, the above advice was well-intended … and yet it seems (to me) that the ignorance and rancor that (regrettably and needlessly) are sometimes associated with complexity theory can only be mitigated by encouraging colleagues to follow exactly the opposite strategy?

    It was only by careful reading of ALL the sections of your “Linear Optics” manuscript, that I could begin even to conceive reasonable questions about this (outstandingly original and important IMHO) line of research … much less extract answers to those questions.

    Complexity theory is rapidly maturing into a core discipline that impacts almost every STEM enterprise … and this means that everyone in the STEM community is going to have to work a little bit harder, to both teach and learn the lessons that complexity theory offers.

    No doubt, as the literature of complexity theory finds a wider readership, there will be STEM colleagues for whom “The virtue of a logical proof [in complexity theory] is not that it compels belief but that it suggests doubts.” To be sure, acknowledging and addressing these doubts requires a considerable effort … and yet, this effort surely is not wasted.

    My overall point is that despite the undoubted challenges that it brings, this wider interested readership is good news overall, both for complexity theory and for the global STEM enterprise.

  34. Scott Says:

    John, thanks for the comment, and especially for noticing my favorite sentence in the paper! Our secret hope was that readers would do exactly the opposite of what that sentence suggested they do, sort of like children being told that there’s absolutely no reason for them ever to enter a certain room. 🙂

  35. John Preskill Says:

    Well argued as always, but I think you’re too sensitive. As a fan of complexity theory (with only a superficial understanding of it), my advice is to relish your (many) colleagues who share your passion for the subject and not worry too much about those who don’t.

  36. Scott Says:

    John P.: Yeah, I was worried about coming across as oversensitive. On the one hand, the ideas of complexity theory speak for themselves, and don’t need me or anyone else to do PR for them—but on the other hand, I’ve been blogging for long enough that I know I need to stir the pot once in a while. 😉

  37. rasmus Says:

    Being able to solve SAT for larger input sizes would indeed be like being able to travel to far-away fertile galaxies. Nobody believes it’s possible. Now some of the smartest people around are trying to prove we can’t. Or rather are making little steps toward that goal. One cannot help but wonder whether there is a better thing for those smart people to put their minds to. Even if one realizes that the intricate ideas of complexity theory might be and (I guess) are picked up to produce practical algorithms, one still wonders.

    I mean, humankind is facing real problems..

  38. Scott Says:

    rasmus: On the other hand, when I do write about “real problems” on this blog, people tell me I should stick to complexity, since there I actually know something! 🙂 I assume I’ll oscillate on this question for the rest of my life.

  39. Mike Massa Says:

    Isn’t everything u r doing based on the Foundational Crisis in Math?

    Maybe that’s all people are really saying, that it makes no sense because Math makes no sense

    You ask for respect, which you or anyone does deserve, but I do not see how the inner circle respects the Foundational Crisis in Math.

    The sad thing is, P=NP is easy to understand and resolve, but the Emperors of the inner circle ignore the real problem because its importance is difficult to understand.

    It is the sad and disgusting reason for every foundational crisis we see in every fractured part of society…. it’s Simple Reality and not Complex Theory that should be asking for respect.

    Scott, why is it being ignored and not receiving the respect it deserves?

  40. Cranky McCrank Says:

    When it comes to the importance of complexity theory, I wonder if it has a successful business model :-).
    I also wonder if “Doing real mathematics could be seen as a part of complexity theory” is an acceptable answer to the statement “Complexity theory is not a part of real mathematics”.

  41. Scott Says:

    Isn’t everything u r doing based on the Foundational Crisis in Math?

    i think the “Foundational Crisis” u r talking about was like 100 yrs ago, and was sorted out by the introduction of ZF set theory. srsly.

  42. Vladimir Levin Says:

    I’m sorry, I haven’t read all of the previous responses, so my apologies for redundancy. For me, your Democritus lectures and some posts represent a wonderful synthesis that casts physics in a different light and kind of brings everything together – physics, math, and complexity. In that respect I find complexity fascinating. However, for me what’s problematic is the damn oracles. I respect the fact that the separation of everything between P and NP is a start, but it bothers me that, afaict, all of those complexity classes are provisional and based on the use of “magical” oracles. I think most of the people who have a bad attitude are just trolls though…

  43. Raoul Ohio Says:

    I want to respond to Tim that absolutely anti-complexitarians exist, but thanks a lot for thinking I am one of them!

    BTW, Jack’s suggestion of looking at a sports blog is a good idea to get an example of what the standard for discourse is these days.

    I acknowledge that TCS is not one of my top 10 competencies; when I was in college there were no CS courses, but you could use the IBM 360 if you figured out FORTRAN. My copy of Arora and Barak has a bookmark on page 413. I probably only read about 50 pages and understood 20, but I can play along, and want to learn something about derandomization.

    In addition to being cranky in TCS, there are a couple of items in the accepted canon of numerical analysis and one in solar physics that I regard as highly suspect. I also think that Mathematica gives a suspect integral for cosh^p(x) for noninteger p. I briefly discussed that with the expert at Wolfram Research about 10 years ago. I wasn’t sure I understood the six singularities you can expand the 2F1 function around, so maybe their solution was right, but ugly.

  44. Thore Husfeldt Says:

    For those who actually want to spread the word about how important Ryan’s result is (instead of expressing their opinion about other people’s outreach activities), please join in getting the relevant Wikipedia articles into some kind of shape. (Wikipedia’s high page rank is a fact. That’s where people go to educate themselves. Let’s make them find stuff there.)

    I’d be particularly happy if somebody could write up the connection to solvable monoids (Wikipedia seems to have no page about the interplay between circuit classes and the Barrington–Thérien programme of computing over groups, otherwise it would be easy.) Also, what’s the proper reference for defining ACC? Barrington (1986) defines it only implicitly, but Beigel–Tarui credit him for it. The oldest occurrence of “ACC” I can find is Barrington–Thérien.

  45. csrster Says:

    Now you’ve piqued my interest Raoul! What’s bothering you in Solar Physics?

  46. matt Says:

    Well, I think you are a bit too sensitive here. But, let me take the opportunity to make a bit of a criticism of complexity theory, prefacing this criticism with the statement that I think many of the results from CS are really among the top intellectual achievements of any field anywhere, and also prefacing it with the statement that I am going to deliberately exaggerate certain negative features for the sake of the argument. So, my complaint is that sometimes I feel like complexity theorists think that they are the only field to encounter a really hard problem (hey, I exaggerate!). Now, I will happily admit that P vs NP is the hardest problem anyone has ever invented by a long shot, and that even P vs PSPACE may still be harder than any problem invented by any other field. However, I find it hard to believe that, say, separating ACC^0 from PSPACE (which, despite Ryan’s spectacular result, is still something that you would probably call light years beyond present-day technique) is harder than, say, developing a consistent theory of quantum gravity. So, what happened in physics? There is an old quote of Feynman’s about how going to gravity conferences was a waste of time, because the absence of experiment meant that no one good worked in that field. There still isn’t experiment, but the development of a promising approach (string theory) means that now many of the top physicists do go into that field. So, the field was a backwater until a promising approach appeared. So, are lower bounds a backwater in complexity theory? Maybe they are—most of the amazing results from CS (the PCP theorem I personally find to be the most surprising, but there are a lot of others) seem to have little to do with lower bounds (I’m sure you can make some argument about how it is connected with lower bounds, but still…). So, would it be unfair to regard lower bounds as a backwater? Could I say that oracle results are now boring and been-there-done-that?
Maybe the field of lower bounds will re-awaken now, but if this result doesn’t lead to a series of developments, why should anyone think about this problem except as recreation?

    Separate comment: it’s unfair to take people outside CS to task for boosting Vinay’s result. One of your fellow CS bloggers (Lipton) was the biggest booster. My opinion is that Fortnow/Gasarch clearly didn’t think it was worth looking at, but they hardly made that very clear. You at least came out against it, but personally I think you were way too cautious. I thought it should have been obvious to anyone that there was nothing there.

  47. sweet smile Says:

    There is no anti-complexitism, and as I see it most comments on that blog were harmless – why are you so pissed off? Leave Ryan in peace – he has done a great job, and those who understand his work will admire him – you don’t need to shout.

  48. MItch Miller Says:

    Ironic that one of Scott’s tactics to defend complexity theory from unfair attacks is…. to make unfair attacks on mathematical physics!! Yes, consistency of 3+1 QFT is certainly an open problem, and people have sort of got on with their lives as regards using it, but that is quite reasonable given the experiments. And there are other examples, like the positive energy theorem, which had not just one but two celebrated rigorous proofs (Yau first, then Witten).

    Now as far as I can tell the attacks on complexity theory ARE unfair. I mean, relying on conjectures… really? We all know that the “Fundamental Lemma” for Langlands was just proved and we all know that that is where deep math begins and ends :). Granted Langlands has a huge advantage over complexity theory, in that nobody is going to say anything bad about it since nobody knows what the hell it is. (I’ve already said pretty much everything I know about it, for example)

  49. MItch Miller Says:

    Matt said

    ” You at least came out against it [Vinay Deolalikar’s proof] , but personally I think you were way too cautious.”

    Ummm… didn’t he bet his house on it? Guess he should have offered up self-immolation or something.

  50. Scott Says:

    Hi Matt, a few responses:

    1. Complexity is small enough that I don’t think there exists a “circuit lower bound community” analogous to the 1950s quantum gravity community that Feynman derided for not making any progress. As far as I know, everyone (with the exception of Ketan Mulmuley) who thinks seriously about circuit lower bounds also spends the majority of his or her time thinking about other stuff (algorithms, derandomization, PCPs, computational learning theory…) on which there’s been extremely respectable progress. And even within circuit lower bounds, I think the progress over the last 20 years wasn’t at all “pathetic” if you look at arithmetic rather than Boolean complexity (in which case we had the partial derivatives method, Raz’s multilinear formulas breakthrough, the Mignon-Ressayre permanent lower bound…), and/or you’re willing to count deeper understanding of AC0 (in which case we had LMN, Ben Rossman’s great work, Mark Braverman’s proof of the Linial-Nisan Conjecture…).

    2. Everyone calls me a pessimist, but compared to you, I’m downright optimistic about circuit lower bounds! No, I don’t think separating ACC from PSPACE is “light years beyond present day technique”—maybe it is, but absolutely nothing we know today compels that conclusion. Even NEXP vs. P/poly might be doable within the next few decades. My guess would be that the quantum gravity problem is much harder. (Now, P vs. NP, that’s a different story…)

    3. “Could I say that oracle results are now boring and been-there-done-that?”
    You could say it, but it wouldn’t make it true. 🙂 It all depends on the specific oracle result in question. A lot of them are boring. On the other hand, an oracle separating BQP from PH (for example) would be a great achievement—and not coincidentally, something that would require significant advances in circuit lower bounds.

    4. “You at least came out against [Deolalikar’s result], but personally I think you were way too cautious. I thought it should have been obvious to anyone that there was nothing there.”
    I agree that I could’ve come out stronger than I did. But look, Matt, I put my money (literally) on the table back when almost everyone was saying that this was wonderful, that we should all hold our breaths while the proof is verified, etc. etc.—and was vociferously attacked for doing so. Might I respectfully inquire where you were then? 😉

  51. Scott Says:

    Hi Vlad,

    “it bothers me that, afaict, all of those complexity classes are provisional and based on the use of ‘magical’ oracles.”

    What does it mean for a complexity class to be “provisional”? Sure, many separations between complexity classes require the use of “magical” oracles—but everyone knows that state of affairs is unsatisfactory, which is why we get so excited when a separation like Ryan’s comes along, which is unconditional and not based on oracles at all.

  52. matt Says:

    Since you ask, not being a complexity blogger, I wasn’t making public statements about it (I also didn’t get to make any public statements about the great advances, either!), but to anyone I spoke with in person I said that it wasn’t worth looking at and offered to bet any amount of money at any odds that it was wrong (no takers, alas). I still thought you were too cautious, because you didn’t just come out and say it was a waste of time.

  53. MItch Miller Says:
    Scott didn’t even wait for takers of his bet. He offered 300k at infinite odds. I thought he handled the whole thing about as well as possible. Scott saying it is a waste of time etc. would have made things worse, as the media would play up the “establishment MIT prof arrogantly bullies brilliant outsider” angle. I think there are much better instances to criticize Scott’s blogging than his handling of Deolalikar-gate.

  54. Koray Says:

    I’m not an anti-complexist, but let’s take a look at the ‘spectacular accomplishments’ that you listed as a practitioner of CS:

    * NP-completeness, PCPs: None of my colleagues could recall what PCP even is (Gödel Prize-winning work, eh?). NP-completeness dates back to the 70s and its impact on us has not changed in many decades.
    * Public-key cryptography: This too dates many decades back.
    * Zero-knowledge proofs: I’m sure somebody out there uses these, but admitting not even having heard of it won’t hurt you in a job interview.
    * Quantum computing: We’ll wait until Intel or AMD ship some.

    Nobody claims that the open problems in TCS are easy or irrelevant, yet most of us in the industry can easily afford to be blissfully ignorant of whatever happened in the last two decades.

    We won’t argue with you whether you should call Ryan’s result a breakthrough or not because we haven’t heard of it (I’m sure it made slashdot or something, but wait three months and poll us again with the formal statement alone and see how many remember whether that’s true and how recently it was discovered).

  55. rrtucci Says:

    Scott Aaronson went on to marry and have two children. One was a professional basketball player and the other was a rock star. They both earned 100 times more money than their father. They both loved their father, but looked on him with pity, for being a cretin who wasted his life on something pointless and impractical called “complexity theory”.

  56. spammer Says:

    rrtucci, it’s a pity that you think trying to understand the universe is a less worthy goal than making money.

  57. Scott Says:

    Koray: You’re absolutely right that there are very few industry jobs that require knowledge of PCPs, zero-knowledge proofs, or quantum computing—just like there are very few industry jobs that require knowledge of general relativity or the Higgs mechanism.

    And yet here you are, reading this blog for some reason. Why? Could it be intellectual curiosity? Or just boredom? 🙂

  58. Or Meir Says:

    Koray: In 1905 people could say “well, it’s nice that this Einstein has suggested a way to resolve the problem of why the speed of light doesn’t add up, but really, everyone in the industry can easily afford to be ignorant about it”.

    Scientific discoveries take time to make an impact. It was 40 years from 1905 to 1945. It has been less than 40 years since the notion of NP-completeness came along.

  59. Raoul Ohio Says:

    Check out new info on Uncertainty WRT Quantum Entanglement:

    This seems likely to be relevant for QC. Note it is a QT result from QIT guys, utilizing good old Alice and Bob!

  60. Vladimir Levin Says:

    @Raoul 59, isn’t wave-particle duality basically another way to express entanglement (i.e. particles are entangled as “waves”), and isn’t the uncertainty principle another way to express wave-particle duality (double-slit experiment)? The three seem analogous to me. I’m not sure what these people did, but if it’s genuinely considered to be some kind of breakthrough, I’ll be gobsmacked…

  61. AnonCSProf Says:

    Well, that’s quite a rant.

    If anyone was interested in learning more about why folks might have (legitimate?) concerns about the direction that complexity has taken, I would be happy to share my views on that, and I would find a frank exchange of perspectives interesting. However, I don’t get the impression from your essay that that’s what you’re looking for. To me, this blog post feels like it’s more focused on trying to tar anyone who has anything critical to say about complexity theory as acting in bad faith. So I guess anything critical of complexity theory is unwelcome here. Oh well, too bad.

  62. Tom Says:

    AnonProf — I’d be interested in hearing legitimate concerns about complexity theory — please, share!

  63. daper Says:

    Anti-Xism is a common phrase to label your detractors with, but is unlikely to get much sympathy from outsiders who can’t really tell whether anti-xism is a strange animus or just the same sort of criticism that all things get. Like racism it is a form of otherism which is a universal behaviour of tribal creatures such as ourselves. It’s a dog eat dog world, everyone has to fight their corner, we gotta make the best of it, improvise, adapt to the environment, Darwin, shit happens, I Ching, whatever man, we gotta roll with it.

  64. Scott Says:

    AnonCSProf: Please don’t feel inhibited from sharing your legitimate concerns—no one else here has! 🙂

  65. Greg Kuperberg Says:

    It is completely and irrefutably true that complexity theory is real mathematics. Complexity theory enriches other areas of mathematics, and vice versa, in exactly the same way as any two areas of pure mathematics enrich each other. For instance, the theorem that unknottedness is in NP, is equally a theorem in geometric topology and a theorem in complexity theory. The theorem that it is NP-hard to find (Fq-rational) points on an algebraic variety over a finite field, is equally a theorem in algebraic geometry and complexity theory. And so on.

    It is not even true that complexity theory is so new that you can call it untraditional. Turing’s theorem that the halting problem is not recursive was in 1937; the Hartmanis-Stearns time hierarchy theorem was in 1965. Yes, both of these are more recent than the Riemann mapping theorem. The former is the same age as the definition of Teichmüller space and the definition of cohomology. The latter is about the same age as EGA, Grothendieck’s foundations of algebraic geometry. Nobody in pure math still thinks of these things as too young to be traditional. (While in applied math, there isn’t even a reason to imagine that any part of TCS is in any way different.)

    There is also no fundamental difference in “depth” between complexity theory and other areas of mathematics. Not even some of the time. Ryan Williams proved a great theorem, but the idea that it has some sort of back-to-basics aspect is nonsense. It is a harder and more abstract result than, for example, the definition and proof of invariance of the Jones polynomial. (Certainly Kauffman’s definition and proof of it are amazingly short and accessible.)

    Really the only thing that sets complexity theory apart from other mathematics is that complexity theorists are hired by CS departments. Now, I would not say that complexity theorists should be in math departments “instead”. Math departments should try to hire some of them, but CS departments can hire whoever they want. If CS departments have reasons to hire mathematicians, great. Even so, this is only a bureaucratic dichotomy, not an intellectual one. I think that it’s good when complexity theorists simply act like other mathematicians.

  66. daper Says:

    “I think that it’s good when complexity theorists simply act like other mathematicians.”

    Like starting their own cstheory stackexchange instead of using Mathoverflow. 🙂

    Since emotion isn’t easy to convey in text, let me say that the above remark is meant as a friendly jibe rather than a snide comment.

    Is TCS considered to be math/not-math in a similar way to statistics being math/not-math?

  67. Scott Says:

    Greg: I agree with much of what you say, but in my experience there are also cultural differences between complexity theory and most of mathematics that go beyond the bureaucratic division. (Of course, the differences are much less pronounced if we compare complexity to combinatorics than if we compare complexity to algebraic geometry!)

    Compared to most of mathematics, I’d say complexity theory is characterized by:

    – Less emphasis on generality and abstraction.

    – Greater emphasis on asymptotic scaling (especially polynomial vs. exponential); conversely, less emphasis on the detailed behavior of quantities whose asymptotic scaling is already known.

    – Greater concern with how things play out for finite values of n. (E.g., it’s not enough to know that some distribution converges in the limit to Gaussian; knowing the convergence rate is crucial.)

    – Greater orientation around a relatively-small number of central goals (P vs. NP, derandomization, faster algorithms for some fundamental problems…).

    – Greater perceived need to motivate papers when writing the introductions, either practically or philosophically.

  68. Greg Kuperberg Says:

    Scott — I agree that certain cultural differences exist or are perceived to exist. But as I see it, these differences or a perception of them can only be bad. For instance, I do not see any good reason for complexity theory papers to put less emphasis on generality and abstraction. I do not see that the results are substantively less abstract or less general. Without any difference in substance, a difference in style is surely misleading.

    Besides, aren’t “greater emphasis on asymptotic scaling” and “greater concern with how things play out for finite values” opposite concerns?

    Yes, we’re on the same page concerning combinatorics. I agree that as a matter of geography of areas of mathematics, combinatorics is closer to complexity theory (and TCS generally) than most areas. But what about Hendrik Lenstra? I think that he is at heart a pure mathematician: an algebraic number theorist and an algebraic geometer. But his work is also great TCS.

    Ultimately, any theory that “mathematicians are from Mars, complexity theorists are from Venus” is a rationalization to forego useful scholarly communication.

  69. David Says:

    If you are interested in real problems with current complexity theory research (none of them is something anyone would call critical):

    1. The main publication venue is conference proceedings. One can see the (horrible) effects in the two fanciest TCS conferences. I don’t think it would happen with journals. I’m probably pushing at an open door here.

    2. Complexity theorists have by and large ignored the latest wave of parallel computation. The way Moore’s law works these days is through cores, not clock speed. Most commenters here probably have between 256 and 512 cores on their machines’ GPGPUs. At this rate we will have 10K cores on our desktops in 10 years, and the only useful thing we’ll be able to do out of the box is exactly what we can do now: linear algebra.

    3. As a matter of personal taste, I think that complexity theorists use the phrases “spectacular” and “ground-breaking” more often than they should. Other people have already remarked on this, and it seems you disagree.

  70. Peter Shor Says:

    Hi Scott,

    The anti-ism is well distributed on all sides. I have talked to a number of computer scientists who are utterly dismissive of any math that is continuous or involves infinity.

  71. Scott Says:
    1. No, I won’t defend the conference-proceedings system, though keep in mind that it’s common to all of CS, not specific to complexity theory. However, I happen to think that journals are a terribly inefficient, outdated system as well—to say nothing about the predatory pricing of many journals. I agree with my colleague Michael Nielsen that, if we were starting over, we’d use Web 2.0 tools to design a vastly better system for disseminating and evaluating scientific results. I hope and expect that that will happen within the next couple decades.

    2. Let me assure you that theorists have been climbing on the “multicore” bandwagon with gusto! Here at MIT, hardly a day goes by that I don’t hear that word in the G5-G6 floors of Stata (where the theorists are concentrated)—often in connection with funding opportunities. Purely from a complexity standpoint, though, it’s not clear to me what issues are raised by multicore that weren’t already studied in the 80s, when there was a huge amount of work by complexity theorists on parallel models like PRAM. We have several nice models of parallel computation, lots of interesting problems that are solvable in those models, but also the theory of P-completeness that tells us that we shouldn’t expect to parallelize everything. As for taking advantage of multiple cores in practice, my sense was that it’s mostly an issue of programming language and compiler design rather than complexity theory. However, if anyone has a bona fide complexity issue arising from multicore, I’d love to be proved wrong on this!
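    [Editorial aside, not part of the original comment: the PRAM-style parallelism mentioned above can be sketched as a toy log-depth tree reduction. Within each round, every pairwise sum is independent, so on an idealized PRAM each pair would get its own processor; n values are combined in about log2(n) rounds. P-complete problems, by contrast, are exactly those not believed to admit such polylog-depth schedules. The function name and round-counting are illustrative only.]

```python
# Toy sketch of a PRAM-style tree reduction (illustrative, not from the post).
# Sums n values in ceil(log2 n) "parallel rounds"; within a round, each
# pairwise sum touches disjoint data, so all pairs could run concurrently.
def tree_sum(values):
    vals = list(values)
    rounds = 0
    while len(vals) > 1:
        if len(vals) % 2:
            vals.append(0)  # pad odd-length levels with the additive identity
        # Each pair below is independent: one processor per pair on a PRAM.
        vals = [vals[i] + vals[i + 1] for i in range(0, len(vals), 2)]
        rounds += 1
    return vals[0], rounds

total, depth = tree_sum(range(16))  # -> (120, 4): 16 -> 8 -> 4 -> 2 -> 1
```

    The point of the sketch is only that the sequential O(n) loop collapses to O(log n) depth when the combining operator is associative; for P-complete problems (e.g. circuit value) no analogous depth collapse is expected.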

    3. Regarding the word “spectacular”—sorry, I try not to use it more than twice a year or so! 🙂 But what can I say? Complexity is a field largely composed of problems that seem too hard for anyone to make progress on. So, for a very long time there’s nothing … nothing … more nothing … then a clear and decisive advance, like PRIMES in P or L=SL or the proof of Linial-Nisan or NEXP⊄ACC. I might get overexcited at such moments, since they’re the main reason why I entered the field in the first place.

  72. Friend and Foe Says:

    It is completely and irrefutably true that complexity theory is real mathematics.

    Great comment by Greg Kuperberg.

  73. Anonymous Says:

    I am against drawing lines between theoretical computer science and math (and drawing lines in general, and risking offending geometers by saying it) , but I also think the claim that theoretical computer science is (just) math is not true. There are other aspects to theoretical computer science than just mathematics.

    People don’t claim theoretical physics and economics are just mathematics. Both of them are highly mathematical endeavors: they use and discover/invent high-level mathematics (but not just for its own sake), there are mathematicians in math departments working on them, there are mathematicians employed by physics/economics departments, and mathematicians have won many Nobel Prizes. But that does not make them part of mathematics. Similarly, theoretical computer science is not just mathematics.

    This is a controversial topic and I haven’t seen discussions by people holding different opinions on it reaching agreement, I am just stating a different viewpoint.

  74. Greg Kuperberg Says:

    Comment #73 by anonymous is food for thought. Why would I say that complexity theory (and maybe more of theoretical computer science) is real mathematics, while I agree that theoretical physics and theoretical economics are not the same as mathematics? In my view, the following is a sufficient condition for a field to be real mathematics: A field in which more-or-less everyone wants to see theorems proven. You don’t necessarily have to prove theorems yourself — maybe you have conjectures, definitions, heuristic arguments, etc. But if in a field it would be out of place to claim that theorems are peripheral, then I would call that field real mathematics. (Notice also that I said sufficient, not necessary and sufficient.)

    I think that at least much of theoretical computer science, including complexity theory, meets this criterion. Theoretical physics clearly does not, and neither does theoretical economics. Now, why it turned out that way is a good question, but for now I’ll just say that it seems to be so. As far as I can see, almost everything in the TCS Stack Exchange site is mathematics in this sense.

    Statistics is an interesting split case. Some part of statistics research clearly is probability theory and part of mathematics. Some part is more applied, is not theorem-driven, and can variously be called applied mathematics or separate from mathematics.

    If computer scientists would also like to see something called TCS where proving theorems is peripheral, fine, they can do that, it’s not my issue as a mathematician. As long as proving theorems is fundamental in an area, then there is a pull toward the rest of rigorous mathematics which is (a) inevitable, and (b) valuable. I don’t see any intellectual gain from laying claim to a separate style.

    Peter Shor’s anecdote is a good example. Dismissing continuous mathematics or infinite limits because you work in a CS department is just silly.

  75. Jonathan Vos Post Says:

    Brilliant! I have remarked in various venues on the psychiatric nature of those compelled to murder father-figures, notably Darwin, Cantor, and Einstein. Your analysis is specific to computational complexity theory in very insightful ways, yet overlaps the issues of psychoceramics. Yet we have open minds, and are willing to consider a good idea, regardless of the psychology of the source. Once in a while someone dubbed a “crackpot” by the Establishment does turn out to be original, falsifiable, and provoking new avenues of legitimate inquiry, observation, and calculation. I’d love to see, for example, a Scott Aaronson episode of “Fringe.”

  76. John Sidles Says:

    Greg Kuperberg’s #74 and Anonymous #73 and Peter Shor’s #70 all provide solid food for thought (IMHO).

    When we contemplate the Aaronson-Arkhipov (AA) Linear optics manuscript in the aggregate light of #70, 71, 73, then we are led to a provocative thesis:

    “Twentieth century complexity theory was that branch of mathematics primarily concerned with the computational complexity of logical deduction; 21st century complexity theory is that branch of mathematics primarily concerned with the computational complexity of sampling-and-simulation.”

    Needless to say, plenty of details remain to be worked out … which for us systems engineers is the main attraction of this evolving point-of-view.

  77. Anonymous Says:

    Greg, I agree that rigorous/mathematical proofs are fundamental for theory, but there are also other fundamental aspects that mathematics lacks. I understand your point about theoretical physics (they assume lemmas that they think should be true and don’t like to wait for proofs of those lemmas), but for economics I disagree, since as far as I can see their work seems to be highly rigorous and proof-based. It has been very surprising for me to learn that many Economics Nobel Prize winners have PhDs in pure mathematics, not in economics! I think that theoretical physics could have been as rigorous as mathematics and theoretical computer science, but it seems that mathematics lags behind what they need, so they use these nonrigorous techniques in deriving their results (but still these techniques do not seem to be arbitrary; they are just not mathematically justified yet).

    I don’t like drawing a line and don’t think a line can be drawn at all, but one aspect of theory which is not shared by mathematics is that many theoreticians don’t prove theorems for the sake of proving theorems; that is why most theory papers start with motivation and background sections, whereas mathematics papers start with “Let” ;). Most mathematicians don’t seem to care about providing motivation for their work (in fact they don’t even like doing it, as it is a slippery slope: if they do, the expectation will force them to provide motivation for all of their papers); it is just what it is. I think this is a very important aspect.

  78. Koray Says:

    Scott, the reason I read your blog has almost nothing to do with complexity theory. Similarly, I also read Not Even Wrong and I don’t understand the physics at all, and some biology and medicine blogs. I just like the writers and their personal takes and am interested in insight on how other disciplines function, etc.

    For those who compare CS to hard sciences, or the impact of decades of work in CS to the impact of results, it should be noted that all we have of the real world is a model that is sure to be incomplete, if not plain wrong, in some aspects. All the things that we have been making based on these models (cars, drugs, etc.) have some degree of imperfection or unrealized potential in them. Any result that verifies or corrects a model in use has a positive impact even if it doesn’t directly lead to a killer app.

    Yes, there are few industry jobs that require knowledge of general relativity. Are there any jobs that require anything more than just a passing grade in undergrad complexity theory?

  79. Anonymous Says:

    Regarding Kuperberg’s comment #74: I think one reason that physics and economics don’t concern themselves as much with proving theorems is that they need to make use of a very large corpus of mathematics. Were they to have to spend the time to learn it properly, like “real mathematicians”, it would take them much, much longer. I know of a somewhat well-respected physicist, who is considered quite knowledgeable about Lie groups and Lie algebras (to physicists, anyhow), who nonetheless has a fairly sketchy knowledge of the proofs of some of the deeper theorems in that area; and furthermore, probably couldn’t work out what would qualify as rigorous proofs of even some of the more elementary results. Though he can APPLY what he knows, and though he can toss around jargon with experts, he doesn’t really “know it” like mathematicians know it.

    In some areas of mathematics, of course, properly understanding the definitions is basically the same as “knowing it”, since in those areas the proofs are often “trivial” — all the work goes into setting up the “right definitions”. Even in such areas, however, I would doubt that non-mathematicians would bother to even work out how those “trivial proofs” actually work.

    In summary: if all one cares about is the statements of the theorems, and not their underlying proofs, one can quickly learn a lot of Truth with a modicum of effort.

  80. Thomas Says:

    Criticism of complexity theory is not constructive. Understanding what is and isn’t tractable is indisputably important. If someone can suggest a better way of studying these problems, then please publish it.

  81. KSL Says:

    In my experience, the main difference between mathematicians and complexity theorists is that the former tend to be autistic cyborgs whereas the latter have a well-roundedness akin to their asymptotic analyses.

    In relation to money: I think Scott’s bet was a perfectly calibrated response to Deolalikar’s paper. It forcefully (and courageously) stated how much he disbelieved the proof’s correctness *without* being personal. The resulting over-reaction was ridiculous.

    There is another issue about money that seems to have been less canvassed. I tend to go for my monthly TCS fix here not just because of the quality of the writing (other TCS blogs are just as good) but simply because here there is no advertising. Part of this is the aesthetic/annoyance factor – there is nothing worse than opting for a little intellectual nourishment and instead being mercilessly flogged stuff – but there is also the question of independence.

    One has to ask the question why other blogs have been so expansive towards (and positive about?) Deolalikar’s paper. There has almost been a breathless narrative about will it hold … will it fold? This reached its zenith when a well-known blog linked to a seminar on the proof in which the speaker was quite unequivocal, and perhaps even a little condescending, about Deolalikar’s effort. Not once in the blog post linking to it was the seminar’s dismissal of the proof mentioned!!

    Now there are two possibilities here: either the authors are genuinely less convinced by the proof’s public refutations, or else they are quite happy to foster an ongoing drama for the additional eyeballs. Of course I assume, and am hoping, it is the former, and I realize how controversial it is to suggest the latter; but nonetheless, when there are advertising dollars on the line, the bloggers’ independence is inevitably called into question.

    I should also mention that I have no doubt that these blogs take considerable, and I might say financially unrewarded, effort. In my view the best ones *should* get paid more by CS departments for their fantastic public service; in another vein, I definitely don’t want to begrudge the authors some well-deserved remuneration for their additional labour (perhaps some alternative model is required – I would pay a reasonable subscription fee?). Equally, however, I want to laud those who resist the advertising route: Scott, please keep this resource ad-free – it makes a difference.

    (Kuperberg-Sidles Lovechild)

  82. A Reader Says:

    One ‘problem’ with complexity theory is that it is very likely that P ≠ NP. Since we already operate as though hard problems cannot be solved, proving it can be dismissed as ‘just intellectual finessing’.

    I used quotes because I don’t necessarily agree with this point of view. However, I do think Speilman’s work on smoothed analysis is exciting even to outsiders like me.

  83. A Reader Says:

    Typo: I meant Spielman.

  84. AnonCSProf Says:

    OK, here goes. (You asked for it!)

    I’m not prepared to give a coherent, unified, well-thought-out criticism of complexity theory. I’m sure what I have to say has massive gaps and may reveal my lack of understanding of complexity theory. It may also reflect matters of taste as much as anything else. With those caveats, some specific concerns:

    1) Lack of progress. Complexity theorists sometimes justify their field by pointing to important open problems, like P != NP. Yet there has been little substantive progress on the problem and, as far as I can tell, it’s not clear there is any plausible direction for making progress. If physicists tried to justify their field’s existence by saying “teleportation is a really important problem; if we could solve it, it would have tremendous impact”, they’d get laughed off the campus, because no one has a plausible angle of attack on teleportation. Similarly, as far as I can tell, no one has a plausible angle of attack on proving P != NP. And in any case most researchers aren’t actually working on P != NP, so it is a bit peculiar to justify the importance of your work by pointing to a problem when you’re actually working on something else entirely, which is unlikely to have any bearing on the P != NP question. So I think P != NP can’t be the whole justification, or perhaps even the primary justification, for devoting resources to the study of complexity theory.

    And frankly, to respond to this by asking “what have you done that’s advanced circuit complexity?” is completely missing the point. My point is not that I’m a better complexity theorist than Ryan Williams. My claim is that no one has much of a clue how to make significant progress on the most important questions in complexity theory, and in fact we have seen very little progress in the past few decades, which calls into question whether it is wise to continue pouring so many resources into this area.

    2) “Fundamental understanding”. An argument I often hear is that complexity theory studies questions that are “fundamental” to computer science, and even if the work seems abstruse and disconnected from reality, working on those questions is vital to help us “understand” the nature of computing. Those arguments sound good on the surface (but won’t someone please think of the children?), but I’m not sure they stand up to examination. When I look at what new understanding I’ve gained from complexity theory recently, I’m hard-pressed to identify much.

    Some of the specific questions that get mentioned in this regard feel over-hyped to me. For instance, I sometimes hear people say that it’s a fundamental question to understand the power of randomness in computation: e.g., does access to randomness improve computational power? At first, that sounds like a pretty good justification. But on further reflection, I’m much less persuaded. I’d claim that, for practical purposes, we know pretty much everything we need to know. Sure, in theory, we don’t know of a general way to derandomize all randomized algorithms, with a proof that it works. Sure, it’d be great to have such a proof, if we could get it (cheaply). But for practical purposes, there are wonderful ways to derandomize any algorithm you’re ever likely to run into in real life. Just take a short seed (it can be random, or if for some reason you don’t have *any* randomness, use the digits of pi), and run it through a cryptographic hash function to expand it to a long stream of pseudorandom bits. I have yet to see any realistic randomized algorithm for which this procedure fails. So, yes, I know that the theory folks consider derandomization an open problem, but from my perspective, it is a solved problem for all practical purposes. From my perspective, we already have sufficient understanding of the power of randomness, and it’s hard to justify spending person-decades of research on this question. I find it hard to get excited about this question when for engineering purposes we essentially already have the answer.
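    The recipe in that paragraph can be sketched in a few lines of Python. This is a toy illustration of the commenter’s heuristic (the function name `expand_seed` is mine), not a provable derandomization:

```python
import hashlib

def expand_seed(seed: bytes, nbytes: int) -> bytes:
    """Stretch a short seed into a long pseudorandom stream by
    hashing seed || counter with a cryptographic hash (SHA-256)."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

# "Derandomize" by feeding these bits to a randomized algorithm in
# place of true coin flips; here digits of pi serve as the seed.
stream = expand_seed(b"31415926535897932384", 64)
```

    Any realistic algorithm that succeeded with true coin flips but failed with this stream would, in effect, be distinguishing SHA-256 output from random — which is presumably why the commenter has never seen one.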

    In my personal experience, I’ve gotten more “fundamental understanding” from handwavy heuristic arguments, not from the formalistic arguments that complexity theory tends to focus on.

    3) Over-focus on formalisms. I feel like much work in complexity theory has gone too far down a formalistic road, and sometimes the field gives the impression that it values formalism for its own sake, losing sight of the real goal. Asymptotics are one example: too often, complexity theorists focus on improving the asymptotics as though that in itself were of value, and then build algorithms with ridiculous constant factors. Oftentimes other practical considerations (like, say, differences in access time across levels of the memory hierarchy, or the effects of caching and parallelism) have a much larger effect on practical performance, but are ignored by the theoretical work.

    I know you mentioned the asymptotics criticism, but you didn’t respond to it. All you said was that you were aware of the criticism (you ridiculed critics who seemed to think that this was a new insight). Well, I know that you know of the criticism; I’m not making the mistake of thinking that I have a new insight here (a mistake that you seem to attribute to unspecified other individuals). But the fact that you are aware of the criticism doesn’t make the criticism invalid. I feel like this is an example of where your blog post focuses more on ridicule and knocking down strawmen instead of substance. It’s important to engage with the strongest criticisms of the field, rather than just tearing apart the weakest criticisms.

    4) Over-emphasis on problem difficulty instead of impact. When evaluating work, people tend to weigh at least two factors: how hard was it to solve the problem, and how important is the problem. I feel like the complexity theory community often places too high a value on problem hardness and not enough emphasis on the relevance of the problem. It feels like there are tons of FOCS and STOC papers that can be characterized as well-executed, technically impeccable solutions to a problem that doesn’t matter (where often it requires deep technical insights to solve the problem). But when’s the last time you saw FOCS or STOC accept a paper where the solution is boring and elementary and not very deep, but it solves an important problem that practitioners care about? Comparatively speaking, it’s much rarer. So I feel like complexity theory risks going too far down the road towards becoming a puzzle-solving or weight-lifting contest, where papers are valued based upon how hard it was to do the math rather than the ultimate value of the final solution.

    5) Has missed many of the key advances. When I think about some of the coolest advances in theory, many of them did not come out of the complexity theory community. For instance, in the past decade we have seen a revolution based upon discovery of powerful techniques for solving SAT. The new SAT solvers have had tremendous impact upon a number of other fields. This sounds like exactly the sort of problem that complexity could, and should, have had a hand in. After all, what could be of more fundamental relevance to complexity theory than understanding the practical complexity of SAT, and developing new techniques for solving SAT and then applying it to problems that arise across computer science?

    But in fact, the key advances came not from complexity theorists but from other communities. And in retrospect, this might not be too surprising: it turned out that what was needed, most of all, were not deep mathematical breakthroughs of the kind that complexity theorists seem to value, but rather detailed examination of data structures, clever heuristics that are extremely fast given those data structures, and other such advances. This is the kind of work that I suspect some complexity theorists might turn their noses up at, as “just engineering” or “empirical tweaking”, or at least might not value as highly as the average FOCS/STOC paper, because it doesn’t involve incredibly challenging mathematics and tons of theoretical machinery.

    Those are some of the criticisms that occur to me at the moment. Some of them may well be flawed or poorly thought-out; and I’m sure if I had more time, I could put together a more coherent comment. Please forgive the shortcomings of this note.

    And, after being so critical, I should qualify these comments. I’ve focused on the negatives, but the positives are at least as great. Complexity theory has made wonderful contributions to our field. It is of fundamental importance to computer science and needs our support. Overall, I’m a fan of complexity theory, and I think its positives outweigh the concerns I might have. But I also feel like the field has allowed itself to drift in directions that aren’t always positive, and I think that deserves thoughtful introspection and deliberate analysis.

    P.S. One last question. I know this may have little to do with the general topic of complexity theory and its value to computer science, but I have to ask: Can anyone explain why I should care about Ryan Williams’ result? Who cares? I mean, mod 6 gates? What the heck? Who cares about mod 6 gates? Where did *that* come from? Why is it important to understand what functions can be computed by constant-depth circuits with mod-6 gates?

  85. Greg Kuperberg Says:

    Concerning comment #79: I won’t rehash my old point, but here is a new one. It somehow gets on my nerves when people emphasize that CS theory papers have big introductions with motivation. Of course, most math papers also have introductions, so the question is a matter of degree. I do not know whether CS papers have more introduction on average; of course it’s easier to think that a paper in your own field is accessible and motivated. But let’s say that they do; so what? If the paper has a great result, how much introduction does it need? Sometimes when I read a paper, the introduction comes across as boring or even pompous, like an emcee who warms up the audience for a little too long.

    Concerning comment #84: I agree that it’s a big mistake to over-hype results, even if it’s done with the best intentions of educating people, promoting an area that is indeed important, or complimenting people for great work. See above re too much introduction.

    I get the sense that complexity theory or even all of TCS is on the ropes in many CS departments. I don’t know whether this is a mistake (or maybe just a mistaken impression). Either way, the best theorems in theoretical computer science are also great theorems in mathematics. Mathematics as a profession is or should be a safe haven for good, rigorous research that loses favor in other departments.

  86. Scott Says:

    AnonCSProf: Thanks for your thoughtful comment. Despite your protestations of lack of understanding, I think you did an excellent job of summarizing what one might call the “standard critique” of complexity theory research.

    Rather than rebut your arguments directly, what I want to do instead—since, as a complexity theorist, I like arguing by reduction!—is to show how almost all of your arguments could equally well be applied against theoretical physics research, something that almost every academic seems to agree is valuable.

    Consider: theoretical physicists often justify what they do by pointing to a faraway goal, that of finding a consistent quantum theory of gravity that also unifies the fundamental forces and explains the values of physical constants from first principles. To an outsider, though, it’s not clear how much substantive progress there’s been toward that goal over the last 40 years. String theory hasn’t lived up to the initial hopes, and what other ideas there are (like loop quantum gravity) are considered by most physicists to have even more severe problems. As a result, because the central problems are so hard, most theoretical physicists actually work most of the time on other, more technical problems, which have at most an indirect connection to the original goal of a grand unified theory.

    Meanwhile, what clear advances there have been in fundamental physics since the 1950s have involved such high energies that the advances have no direct bearing on any practical problems.

    Yet, at the same time theoretical physicists were formulating the Standard Model, pondering dark matter, etc., there was a real revolution in consumer electronics, something one might have thought theoretical physics would be perfectly placed to have a hand in. But the theoretical physicists were nowhere to be found, so this revolution had to come from elsewhere.

    To me, at least, the arguments above make an excellent case that more money should be spent on applied physics than on theoretical physics. But that’s exactly how it is, and how it always has been! The total resources that go into theoretical physics are an infinitesimal fraction of the resources that go into various applications of physics (counting, of course, both industry and academia), just like the total resources that go into theoretical computer science are an infinitesimal fraction of the resources that go into information technology as a whole.

    Now, as for whether any resources should go into theoretical physics and theoretical computer science, I think the question boils down to the old one: do we, as a society, value fundamental knowledge or don’t we? (Or as you put it, won’t anyone think of the children?)

    Look, I know there are lots of wonderful areas of CS, from robotics to HCI to applied security (I myself was an AI person before I was a theorist). But complexity theory, it seems to me, is the only thing we have that can go head-to-head against the intellectual coherence and depth of math, physics, and other disciplines that have been around for much longer than we have. So, if we don’t want it, then we should probably drop the S from CS and just be C!

    Incidentally, since you’re an AnonCSProf, I don’t know which area of CS you work in, but if you don’t mind sharing, it might be instructive for comparison purposes to put your area under the microscope too. How interesting is what you do as fundamental science? Or, if the value of your research lies not in fundamental knowledge but rather in applications, then why aren’t you in industry selling those applications to customers, rather than in academia writing papers about them? My guess would be that you have excellent answers to these questions, but hey, it’s easier to play offense than defense! 🙂

    More coming in the next comment…

  87. Scott Says:

    AnonCSProf, two other points, before I go off to prepare my quantum complexity theory class:

    I actually completely agree with you about the danger of conferences turning into technical weightlifting competitions (a danger, however, that seems to me to be common to all CS conferences, not specific to STOC/FOCS). That danger is exactly the reason why, two and a half years ago, a number of us signed a statement about the value of conceptual contributions in theory, and why a new conference (ICS) was founded specifically to address the problem. I think it’s too early to say whether these things will actually change the culture, but it made me optimistic that so many theorists were willing to go on record acknowledging the problems with the STOC/FOCS system.

    Now, about Ryan’s result: no one, I mean no one, cares about constant-depth circuits with mod 6 gates for their own sake. The importance of this class of circuits is purely that they represented the frontier: the place where progress on Boolean circuit lower bounds had been stalled by what looked like fundamental barriers for the last 20 years. If you just had mod p gates where p is prime, then the seminal work of Razborov and Smolensky from the 80s let you prove circuit lower bounds, by approximating the function computed by the circuit by a low-degree polynomial over the finite field Fp. Unfortunately, however, there’s no finite field of order 6. (The importance of 6 here is just that it’s a product of distinct primes: 10, 14, or 15 would have worked just as well.)
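    To make the role of primality concrete: over the field F_p, Fermat’s little theorem turns a mod-p counting gate into an exact low-degree polynomial. With the convention that the gate outputs 1 iff its inputs sum to 0 mod p (a standard identity, stated here in my own notation):

```latex
\mathrm{MOD}_p(x_1,\dots,x_n) \;\equiv\; 1 - \Big(\sum_{i=1}^{n} x_i\Big)^{p-1} \pmod{p}
```

    since the sum raised to the (p−1)st power is 0 when the sum vanishes mod p and 1 otherwise. This degree-(p−1) representation is the starting point that Razborov–Smolensky exploit, and it is exactly what has no analogue for modulus 6, since there is no field of order 6.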

    While that initially seemed like a niggling technical problem, evidence has built up that it’s more than that: today, many of us conjecture that there are actually pseudorandom functions computable using mod 6 gates, whereas we know from Razborov-Smolensky that there are no good pseudorandom functions computable using mod p gates where p is prime. (Certainly there exist such functions, under standard assumptions like the security of RSA, if you go even slightly above ACC to TC0, the class of constant-depth circuits with majority gates.)

    And we know, because of Natural Proofs, that the ability to compute pseudorandom functions really does represent an important “phase transition” in the power of a circuit class.

    Besides Natural Proofs, there were two other known barriers to circuit lower bounds—namely, relativization and algebrization—that Ryan’s result gets around, by using a powerful new approach whose limits seem far from exhausted, and which you can read more about on Lance Fortnow and Dick Lipton’s blogs if you’re interested.

    So, to summarize, there’s still undoubtedly a huge number of barriers to overcome before we can answer questions on the scale of P vs. NP, but Ryan’s circuit lower bound technique is the first to avoid all three of the barriers that had been specifically identified in a fully convincing way. As a result, future work aiming toward P≠NP can start from a higher point than before.

  88. Scott Says:

    Oh, but Greg:

    Sometimes when I read a paper, the introduction comes across as boring or even pompous, like an emcee who warms up the audience for a little too long.

    I think you just made my point for me! Most people would say that, when your main act consists of “denote by Gα(kα) the group of kα-rational points of Gα…,” more warming up of the audience is almost always helpful. The people who don’t feel this way are called “mathematicians.”

  89. matt Says:

    Scott, I think you are confusing “theoretical physics”, a very broad field consisting of many subfields, with “theoretical high energy physics, not including phenomenology”, a much smaller field. It’s a bit like confusing “computer science” with “complexity theory”. In fact, most people in theoretical physics (which includes some subfields that are more mathematically rigorous than high energy) are not working on anything connected with quantum gravity, and don’t point to quantum gravity as a justification for their work.

    Amusing to me that you refer to high energy theory, though. I think they also suffer from a bit of the same tendency to talk up breakthroughs that complexity theorists do… having had several revolutions now and multiple “mini-revolutions” (there’s a term for it, even). However, as with complexity theory, I greatly respect what they do, and I think that they are also aiming at a very hard goal. My opinion (as an outsider to both fields, but one who may know a little more than most physicists) is that they are making much more progress on a consistent quantum gravity than complexity theory is on P vs NP.

  90. matt Says:

    Btw, quick CS question—even if mod 6 gates allow you to compute pseudo-random functions, why is Ryan’s result really evading natural proofs? That is, he is showing that a huge class (NEXP) is strictly bigger…. On the other hand, when we call a function pseudo-random, we might mean that from our merely mortal viewpoint of P it looks random to us. That is, I can tell when a string of bits comes from a pseudo-random generator if I am allowed exponential time: just loop over all possible seeds and check.
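    Matt’s exponential-time check can be written out directly. Here is a toy sketch (both function names and the stand-in PRG are mine), with a distinguisher that simply enumerates all 256^seed_len possible seeds:

```python
import hashlib
from itertools import product

def toy_prg(seed: bytes, nbytes: int) -> bytes:
    """Stand-in pseudorandom generator: hash seed || counter."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + bytes([counter])).digest()
        counter += 1
    return bytes(out[:nbytes])

def in_prg_range(string: bytes, seed_len: int) -> bool:
    """Exponential-time test: does ANY seed of length seed_len
    expand to `string`?  Cost: 256**seed_len PRG evaluations."""
    return any(toy_prg(bytes(seed), len(string)) == string
               for seed in product(range(256), repeat=seed_len))
```

    With one-byte seeds this is a 256-iteration loop; with realistic poly(n)-length seeds the same loop takes 2^s time, which is the catch.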

  91. Greg Kuperberg Says:

    Scott, I don’t know whether your example is real or fictitious. There is a clear bias in this discussion toward thinking that introductions in YOUR area are perfectly comprehensible, because of course YOU are trained to understand them, while introductions in SOMEONE ELSE’S area are all gobbledygook and not really introductions at all. Here is a real example:

    “Derandomization has strong ties with circuit lower bounds. In the setting of randomized decision procedures the existence of pseudorandom generators is equivalent to circuit lower bounds for E = DTIME(2^O(n)). At the high end, pseudorandom generators that yield BPP = P are equivalent to E requiring circuits of linear-exponential size. At the low end, pseudorandom generators that put BPP in deterministic subexponential time are equivalent to E requiring circuits of superpolynomial size.”

    If this is meant as an introduction for outsiders, it doesn’t work. What is “derandomization”? What is the philosophical significance of ties between that and circuit bounds? What is a “randomized decision procedure”? What is a “pseudorandom generator”? What’s the “high end”? What does it mean for a generator to “yield” an equality of … I’ll even grant you that it’s an equality of complexity classes, since they are defined in the Complexity Zoo, albeit not in the paper.

    It’s not that I think that this is a bad introduction — despite certain debate positions of the first author. It’s really fine. I don’t want every paper in complexity theory that I read to begin with 5 pages from a textbook. The paper is meant to be short, as it probably should be. I would rather accept certain practical necessities than complain, especially in other people’s areas of research.

  92. matt Says:

    Greg (one last comment and then I’ll shut up), you are totally right. I’ve heard some CS people (and some physics, and some math, and so on) say “I can’t understand this paper’s organization, this intro, this statement, etc.”; then you go read their own papers and they’re just as incomprehensible unless you put in the work required to read outside your own field. My favorite example: in a math paper I saw “Let … be defined in the natural way.” Excuse me? Wait, this is math, the field that is totally precise about all definitions, and they just say that? Well, the point was that it was in fact a completely precise definition, and very clear to anyone who had read some previous papers; but if you hadn’t read them, you would think that this paper (published in CMP, the “gold standard” of math phys) was unable to even define its terms (but, of course, they had!). CS papers, well, in my opinion their drawback is that they are written as if by people who like to program using libraries for most of the routines… you have to pick through them to find the one paragraph that actually has the key step, because it is buried in everything else.

  93. Greg Kuperberg Says:

    Also, to explain your (possibly hypothetical) example: Let G be a group defined by polynomial equations. For example, the circle in the plane is a group (since you can multiply points if you interpret them as complex numbers) described by the equation x^2 + y^2 = 1. Instead of all of the points in this group, you might be interested in the ones that are rational, meaning that x and y are rational numbers. Let’s call the set of these points G(Q). Miracle of miracles: If the group law and inverse law are also polynomials, as in this example, then the set of rational points G(Q) is also a group. Of course, one might want to generalize this principle and replace Q by some other field k. The points are called k-rational and the resulting group is written G(k).

    Is that so bad? This much is almost undergraduate material. Again, it’s just not practical to repeat it across thousands of papers on algebraic groups.
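    A concrete instance of the miracle Greg describes (the numbers are my own example): multiply two rational points of the circle group using the complex-multiplication law (x_1, y_1)·(x_2, y_2) = (x_1 x_2 − y_1 y_2, x_1 y_2 + x_2 y_1):

```latex
\Big(\tfrac{3}{5},\tfrac{4}{5}\Big)\cdot\Big(\tfrac{5}{13},\tfrac{12}{13}\Big)
= \Big(\tfrac{15-48}{65},\ \tfrac{36+20}{65}\Big)
= \Big(-\tfrac{33}{65},\ \tfrac{56}{65}\Big)
```

    and indeed (−33)^2 + 56^2 = 1089 + 3136 = 4225 = 65^2, so the product is again a rational point on the circle.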

  94. Scott Says:

    Matt, quick answers since I’m on my iPhone.

    I understand the distinction between high-energy theory and theoretical physics broadly construed (though I’d say it’s more like the distinction between complexity and TCS, not complexity and CS as a whole). The problem is that AnonCSProf framed the terms of the debate in such a way that it’s almost impossible to win: if your field isn’t oriented around an incredibly big aspirational problem (like quantum gravity or P vs NP), then it’s probably technical and esoteric; but if it is, then what progress have you made RECENTLY and when should we expect an answer? 🙂 That’s why my response switched between these two different views of what TCS and theoretical physics are about.

  95. Scott Says:

    Also, Matt #90: The sense of “pseudorandom” relevant to the Razborov-Rudich argument is indistinguishable from random in 2^O(n) time, where 2^n is the size of the truth table. By contrast, the seed can have size s=poly(n), and thus trying all possible seeds would take 2^s time, which dominates 2^O(n) for suitably chosen s.
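    To spell out the parameter regime in that reply (the arithmetic is mine, following the standard Razborov–Rudich setup): write N = 2^n for the truth-table size. Then the distinguisher’s budget and the brute-force seed search compare as

```latex
\underbrace{2^{O(n)}}_{\text{distinguisher}} = N^{O(1)}
\qquad\text{vs.}\qquad
\underbrace{2^{s}}_{\text{seed search}} = 2^{n^{c}} = N^{\,n^{c-1}}
```

    so for seed length s = n^c with c > 1, enumerating all seeds is superpolynomial in N — far beyond the 2^{O(n)} budget that “natural” distinguishers are allowed.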

  96. Anony Says:

    The problem with complexity theory is that it comes across as more religion than science. Like any respectable religion, it has the promise of a Messianic moment somewhere in its future – the moment when there is a proof of P=MP. And like any true religion, this moment seems to be delayed till the second coming of NP. Like any true religion, it has evolving and ever more complicated layers of talmudic results and notations that only true believers can follow (did you know that if XFP is contained in MCC then UPD(5) is contained in QACC(7)?). Like any true religion, any sighting or a whiff of a sighting of the PROOF gets the true believers into a trance of excitement. Sometimes, as demonstrated recently, even a clearly fake sighting of a proof gets the (admittedly badly informed) semi-believers into a religious fever, similar only to what happened in history when fake Messiahs appeared.

    Like any true religion, it gets into discussions about things that no person detached from this bizarre glass-bead game would care about (how many angels can fit on the head of a pin? [As we know now, the answer depends on whether you use first fit or best fit.]).

    Not that there is anything wrong with religion. It brought into the world some beautiful concepts, from justice for all, equality, morality, forgiveness, ways to deal with mortality, etc. And so did complexity, to a much lesser extent, from the work on PCP, hardness of approximation, space/time hierarchies, automata theory, etc. And since very smart people work in complexity, sooner or later some useful things come out of it, every once in a while, despite their best efforts.

    And as long as complexity theorists limit themselves to crusades only on blogs, and avoid Jihads against the infidels, I think we can all live in peace. At least until the next blog entry…

  97. Scott Says:

    Anony: I wonder how many worthwhile human endeavors there are, or have ever been, for which a similar comparison to messianic religion couldn’t be made?

    Praise be to NP (whose Name should never again be blasphemously misspelled “MP”), and may It be separated from P efficiently within our days. Amen, amen.

  98. AnonCSProf Says:

    Thanks for the thoughtful and courteous replies! Some reactions:

    Rather than rebut your arguments directly, what I want to do instead—since, as a complexity theorist, I like arguing by reduction!—is to show how almost all of your arguments could equally well be applied against theoretical physics research, something that almost every academic seems to agree is valuable.

    I don’t know anything about theoretical physics research, and I’m not committed to any particular view of theoretical physics research (maybe it sucks, for all I know), so unfortunately this reduction doesn’t help me very much.

    One thing I want to clarify: I’m not arguing that complexity theory has no value.

    I think the question boils down to the old one: do we, as a society, value fundamental knowledge or don’t we?

    See point 2) in my earlier message, where I commented on the defense of complexity theory that it is about “fundamental understanding”, and why that argument doesn’t work as well for me as one might think.

    Also, if we were going to organize a discipline motivated solely by gaining “fundamental understanding”, I’m not sure we’d end up with the same value system that complexity theorists currently have (for instance, I’m not sure we’d value formalisms and deep mathematics for their own sake as much as complexity theory appears to).

    I think some of the best work in complexity can be well-justified as “fundamental understanding” (for instance, smoothed analysis jumps out at me as a wonderful insight). But I also feel like there is a lot of work that is really mathematics trying too hard to justify itself in pragmatic terms, and that instead ought to admit it is mathematics and is/isn’t interesting for the same reasons math is/isn’t interesting (e.g., for the sheer beauty and elegance of it, rather than because it is important to a fundamental understanding of computer science). If you accept that, then a possible corollary is that the latter kind of work ought to receive resources comparable to the resources that mathematics departments receive, but not resources comparable to areas of CS with greater practical impact.

    If you don’t mind sharing, it might be instructive for comparison purposes to put your area under the microscope too.

    That’s a perfectly fair request, though I doubt it will make a big difference in this case. I think we should assume that if we put my own area under a similar microscope, we’d be able to come up with pretty significant criticisms of my area. I work in an applied area, so I think the specific criticisms would be different in nature, but quite plausibly comparable in severity.

    I don’t mean to suggest that complexity theory is the lone black sheep of computer science. I should have stated that explicitly earlier.

    But it does look to me like complexity theory has some issues that deserve examination. The fact that we both have motes in our eyes doesn’t mean that either of us is justified in ignoring the shortcomings of our own field. And trying to compare whose mote is larger is probably not a constructive exercise.

    P.S. Thanks for the comments about Ryan Williams’ work. To see if I understood: I’m not sure whether the argument for studying mod-n gates is (a) that 20 years ago someone studied mod-n gates and hit some barriers (if so, that just pushes the question back 20 years, to why they studied mod-n gates then and why we should continue studying them today; and it raises the question of whether this is an example of weightlifting, where the problem is motivated by the claim that it is hard rather than by any relevance of its own), or (b) that mod-n gates themselves aren’t important, but that this is the first example we have of a lower-bound result that avoids the three major barriers, and that’s the interesting part, not the specific lower bound that was proven (which would make more sense, so I’ll assume this is it). I don’t think it’s important, and I’m happy to drop it, so please don’t feel obliged to comment further on Williams’ work.

  99. Greg Kuperberg Says:

    Scott: I wonder how many worthwhile human endeavors there are, or have ever been, for which a similar comparison to messianic religion couldn’t be made?

    You can make a lame comparison between messianic religion and more or less anything.

  100. harrison Says:

    Warning: Except for the final paragraph, this is a serious post.

    I think there’s perhaps a corollary to “focus on problem difficulty instead of impact,” which may or may not be a contributing factor to anti-complexitism.

    For some background, let me give the following Fake History of Complexity Theory, which only loosely reflects reality but is easy enough to pick up as a narrative:

    From the 1930s through the 1960s, the basics of modern-day theory were developed, starting with Turing’s resolution of the Entscheidungsproblem and continuing through the work of Hartmanis, Edmonds, Blum, etc. Much of this work was in algorithms, and the “paleo-complexity” results, like the hierarchy theorems, were largely imitations of Turing’s diagonalization.

    In 1971, Cook defined the class NP, showed the existence of complete problems, and conjectured that P != NP. At the time everyone expected the conjecture to fall soon, probably to another sufficiently clever diagonalization argument. But this hope was quickly dashed by Baker, Gill, and Solovay’s paper on relativization.

    But by then interest was growing in logical approaches, following Fagin’s characterization of NP. Immerman’s work seemed to give hope that logic might resolve P vs. NP, but it eventually became clear that, if nothing else, the methods of descriptive complexity were too delicate and subtle for most people to really get a good grasp on.

    In addition there were combinatorial steps being taken, via circuit complexity. In the early-mid ’80s Ajtai, Hastad, Sipser and others proved separations for very restricted circuit classes. Everyone hoped this could lead to a solution. In ’85, Razborov exploded onto the scene with a superpolynomial lower bound for CLIQUE in the monotone circuit model. Everyone was sure that P != NP was just a few years away. But after 1990 or so, progress ground to a halt, and eventually Razborov and Rudich showed that the ’80s methods were insufficient to prove P != NP.

    Nevertheless, there was another glimmer of hope. In 1990, Shamir proved IP = PSPACE via non-relativizing methods; these also avoided Razborov-Rudich. Many cautiously hoped that these non-relativizing methods could maybe eventually be used to prove an interesting separation. But in the late 2000s, Aaronson and Wigderson showed that this, too, was a pipe dream.

    What’s the point of all of this? Well, just that “breakthroughs” don’t mean much if they don’t eventually lead to marquee results, stuff like P != NP (or at least P != PSPACE) or inapproximability or derandomization or… etc. And complexity theory doesn’t have a great track record of proving marquee results, especially not marquee separation results.

    It’s like if a Virginia Tech running back rushes 10 yards for a first down from VT’s own 20; sure, it advances the ball, but if they can’t get into at least field-goal range, will it really end up making that much difference? And even if Ryan Williams conquers the ACC, is that so meaningful, especially this season?


  101. Scott Says:

    harrison: Thanks; I enjoyed your Fake History of Complexity Theory!

    While I know astonishingly little about football, I’d like to point out what strikes me as a crucial difference between the two Ryan Williams / ACC situations that we’re comparing. Namely, in science, the ball doesn’t get moved back to the middle of the field at the end of every … whatever you call it. Instead it stays however many yards the last player moved it, ready for the next player to come along and move it further, possibly after years or decades. Furthermore, there’s no opposing team moving the ball in the opposite direction—unless you count, say, Theophilus burning down the Library of Alexandria. But the ancient Alexandrians didn’t have Google Cache.

  102. Scott Says:

    AnonCSProf #98: Two quick responses.

    First, I think one reason complexity theory needs to be mathematical is this: over and over again, what looked at first like “hairsplitting mathematical formalism” turned out, on closer examination, to represent a huge conceptual issue that those who refused to think mathematically simply missed.

    To give one example, I’m constantly meeting scientists from other fields who think that factoring is “basically NP-complete.” I mean, factoring and the Traveling Salesman Problem are both exponentially hard, right? So who cares if we happen to know a polynomial-time reduction from factoring to TSP, but not in the reverse direction?

    However, if you thought of factoring and NP-complete problems as being basically the same thing, then you could never have discovered public-key cryptography. Nor could you have discovered Shor’s quantum factoring algorithm. To me, those represent two of the most surprising discoveries in any area of science in the last 40 years, no matter which way you slice it.
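    To make that asymmetry concrete, here is a toy RSA-style sketch in Python (textbook primes 61 and 53; the helper names are mine, real keys use ~1024-bit primes plus padding). The whole scheme leans on the fact that multiplying p and q is easy while recovering them from n is believed hard; nothing analogous is known for NP-complete problems, which is exactly the distinction that “factoring is basically NP-complete” erases.

    ```python
    def egcd(a, b):
        # Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b).
        if b == 0:
            return a, 1, 0
        g, x, y = egcd(b, a % b)
        return g, y, x - (a // b) * y

    def modinv(a, m):
        # Inverse of a modulo m; requires gcd(a, m) == 1.
        g, x, _ = egcd(a, m)
        assert g == 1, "a and m must be coprime"
        return x % m

    def keygen(p, q, e=17):
        # Public key (n, e) and private exponent d, built from primes p and q.
        n = p * q
        d = modinv(e, (p - 1) * (q - 1))
        return (n, e), d

    def encrypt(pub, m):
        n, e = pub
        return pow(m, e, n)      # easy: anyone holding the public key can do this

    def decrypt(pub, d, c):
        n, _ = pub
        return pow(c, d, n)      # easy only if you know d, i.e. the factors of n

    pub, d = keygen(61, 53)      # n = 3233: a toy modulus, trivially factorable
    assert decrypt(pub, d, encrypt(pub, 42)) == 42
    ```

    The point of the sketch is the one-way structure: going from (p, q) to the public key is a single multiplication, while going back from n to d without the factors is, as far as anyone knows, hard.
    
    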

    Now, as for Ryan’s result: I’m happy to talk more about it, since it provides a perfect illustration for how what seems at first like an arbitrary thing to study (e.g., ACC circuits) has perfectly reasonable motivations, if you’re willing to take the time to understand what they are.

    Ryan’s result is not just “technical weightlifting.” A better (though still-imperfect) metaphor is that we all knew the goal was to walk on the moon (i.e., prove P≠NP and related separations), and what Ryan has done is to build a model rocket that gets a couple hundred feet into the air, whereas all the previous rockets had suffered from long-identified technical limitations that had kept them from getting up more than 50 feet. It’s easy to snicker, “200 feet is still a helluva long way from the moon! and there’s nothing all that interesting to see 200 feet up, anyhow.” But forget about the 200 feet themselves, and think instead about the improvements in rocket design that made getting there possible. It’s entirely plausible that those improvements really are a nontrivial step on the way to the moon.

    So, I suspect what you really want to know is: why do modular arithmetic gates represent “up” in any meaningful sense? There are at least three answers to that question:

    (1) From a CS perspective, modular counting isn’t that weird or unmotivated! I mean, for crying out loud, “mod” is one of the basic operations in every programming language that I know about, right along with +, -, *, /, AND, OR, NOT!

    (2) The ACC model is robust, in the sense of being equivalent (or closely related) to other models that people came up with for different reasons and that seem at first glance to have nothing to do with modular arithmetic. For more about this, see the answers to this StackExchange question by Dana Moshkovitz.

    (3) Give me any other notion of “up” (i.e., any other model of computation that slightly generalizes the ones complexity theorists already understand), and I almost guarantee you that people studied that one as well! I.e. besides better rockets, we’ve also considered building really tall ladders, roping the moon down to earth, you name it! If you know a better way of getting to the moon, do let us know. 🙂

  103. John Sidles Says:

    Suppose that a scholar answered the question: “In the novel Huckleberry Finn, why did Huck and Jim pass the town of Cairo?” with the logically impeccable answer “Because Huck and Jim could not see Cairo in the dark of a foggy night.”

    This literalist answer is unassailably correct with respect to logic and Twain’s text … and yet we would not think highly of any scholarly discipline that insisted that this literal answer was the complete answer.

    This same principle applies broadly across all STEM disciplines, including complexity theory (CT) in particular, and it is mighty fun and interesting to deconstruct the modern practice of CT from the Huck Finn perspective.

    We teach system engineering students to deconstruct STEM texts by asking “Is this text conservative in its theorems, progressive in its postulates, and bold in its enterprises?”

    A particularly useful word to study is consequently and its variants. Does a text use consequently only in a strictly logical context, to link propositions, theorems, and lemmas? Or is consequently associated to a wise choice of starting postulates? Or is consequently associated to the conscious, systematic design of enterprises and narratives?

    Almost every person who launches a great STEM enterprise, or conceives a great STEM narrative, is impelled to write a book about it, and it is mighty interesting to deploy the power of Google Books to study these authors’ use of the word consequently.

    Here are three very different examples, from three authors who all built great enterprises upon strong foundations in information-theoretic mathematics:

    von Neumann:  “We recognize that each conceivable measuring apparatus, as a consequence of the imperfections of human means of observation (which result in the reading of a pointer or of locating the blackening of a photographic plate with only limited accuracy), can furnish this [measurement value] with only a certain (never vanishing) margin of error. This margin of error can, by sufficient refinement of the method of measurement, be made arbitrarily close to zero — but it is never exactly zero.”

    Grove:  “Why would computer executives who had proven themselves to be brilliant and entrepreneurial managers throughout their careers have such a hard time facing the reality of a technology-driven strategic inflection point? Was it because they were sheltered from news from the periphery? Or was it because they had such enormous confidence that the skills that had helped them succeed in the past would help them succeed in the face of whatever the new technology should bring? Or was it because the objectively calculable consequences of facing up to the new world of computing, like the monumental cutbacks in staff that would be necessary, were so painful as to be inconceivable? It’s hard to know, but the reaction is all too common. I think all these factors were important, but the last one—resistance to a painful new world—was the most important.”

    Venter:   “Over the years I have changed from being a superconsumer of fossil fuels who was ignorant of the consequences to someone who has become concerned enough about their effects on the environment to seek alternatives.”

    I think everyone hopes and expects that CT/QIT will help provide solid foundations for the great STEM enterprises of the 21st century. My experience has been, however, that the CT/QIT literature’s use of the word consequently is of uneven quality: laudably robust in respect to theorems, unremarkably flabby in respect to enterprise, and—most regrettably of all—peculiarly specialized in respect to the oracular elements of the standard postulates of CT/QIT.

    In practice, this means that STEM-oriented readers are required to work out enterprise-related implications of CT/QIT on their own … which for most folks (me for sure) is an arduous and error-prone task.

    Thus, by far the most challenging task in grasping the CT/QIT literature nowadays is not grasping the theorems, but rather grasping the postulates underlying those theorems, within a context that simultaneously respects both the logical imperatives of mathematics and the practical (and moral) imperatives of enterprise.

    In summary, if CT/QIT workers worked as hard to describe the consequences of their work with respect to postulates, enterprises, and narratives, as they presently work to describe their propositions, theorems, and lemmas, this might greatly help to keep all of us in the STEM enterprise from “drifting past Cairo in the foggy night.”

  104. Igor Markov Says:

    Here’s my 2 nanocents 🙂

    Yes, sniping at complexity theory research and its results is often misguided (as eloquently articulated in the blog post), and, yes, key people will eventually grow thicker skin, for better or for worse, and continue doing good research.

    Now on to second-order effects!
    Some of the skepticism about complexity theory can be traced to complexity theorists “going too far” in trying to interpret their results in more accessible terms, usually by emphasizing advantages and omitting limitations. “Finding a needle in a haystack” (for quantum search) is one example. Colorful metaphors and flashy generalizations can mislead in the long term and undermine public trust in future claims. This is not unique to complexity theory. Recall the bruising backlash against (unfulfilled) early predictions in the field of AI (AI seems to have recovered just fine). Has anyone heard of backlashes against algebraic geometry or algebraic topology? Mathematicians try hard to avoid misleading others (and sometimes go too far in that, like G. Perelman).

    Theorists sometimes make casual claims about applications, but ought to be more careful. Here’s a (somewhat insignificant) example. Scott writes “the programs that solve chip-design, AI planning, and lots of other applied optimization problems by first reducing them to 3SAT, then running highly-optimized SAT-solvers on them?” All leading SAT solvers (zChaff, MiniSAT, RSAT, etc.) support clauses of arbitrary lengths, and this has been true since the early 1960s (DP, DPLL). Using specifically 3SAT does not seem to buy anything in practice, and none of the software for chip design or AI planning that I am familiar with uses reductions to 3SAT. I would also replace “chip design” with “chip verification” in the list of SAT applications; these applications are much more direct and significant.
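    For what it’s worth, the textbook step such a reduction would add is easy to sketch, and sketching it shows why solvers skip it: a clause of length k becomes k−2 three-literal clauses plus k−3 fresh variables, which is pure bookkeeping that a DPLL/CDCL solver handling long clauses natively gets for free. A minimal sketch (DIMACS-style literals, where −v means “not v”; the function name is mine):

    ```python
    def clause_to_3cnf(clause, next_var):
        """Split one clause (a list of nonzero DIMACS-style literals) into an
        equisatisfiable list of clauses of length <= 3, using fresh auxiliary
        variables starting at next_var. Returns (clauses, first_unused_var)."""
        if len(clause) <= 3:
            return [list(clause)], next_var
        out = [[clause[0], clause[1], next_var]]   # (l1 v l2 v y1)
        y = next_var
        for lit in clause[2:-2]:
            out.append([-y, lit, y + 1])           # (~y_i v l_{i+2} v y_{i+1})
            y += 1
        out.append([-y, clause[-2], clause[-1]])   # (~y_last v l_{k-1} v l_k)
        return out, y + 1

    # (x1 v x2 v x3 v x4 v x5) with fresh variables 6, 7:
    clauses, _ = clause_to_3cnf([1, 2, 3, 4, 5], 6)
    assert clauses == [[1, 2, 6], [-6, 3, 7], [-7, 4, 5]]
    ```

    Nothing about the problem gets easier here; the transformation only pads the instance, which is presumably why practical tools feed long clauses to the solver directly.
    
    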

    Don’t get me wrong: there’s nothing wrong with theorists showing fluency in applications, as long as no one is misled.
    For example, to look for needles in haystacks, the readers of the New York Times should use magnets, or Google, or binary search, or SAT solvers; quantum search would be very far down this list of helpful techniques.

    Once again, I am not terribly worried about second-order effects, but they can be fine-tuned in the future to make everyone happy 🙂 Correct me if I am wrong!

  105. Job Says:

    I can empathize, but this sounds a lot like: My field is better! No, my field is better! No, I’m the intellectual! No, me! Me, myself, and my stuff!

    There will always be someone who sees no value in what you do – if it makes you feel any better, these people are usually in denial, living in some sort of self-indulgent fantasy.

    Here are two ways to get noticed:
    1. lead the way
    2. stand in the way

  106. Job Says:

    “When you’re leading, you generally have your back to the crowd.”

    That’s what I should have said.

  107. Scott Says:

    Igor: Thanks very much for the corrections! So, in my existence proof for computer programs that make use of NP-completeness, please replace 3SAT by kSAT and chip design by chip verification.

    Incidentally, a while back Koray was making fun of my list of “spectacular achievements” of complexity theory for either being several decades old or, if recent, completely impractical:

    Zero-knowledge proofs: I’m sure somebody out there uses these, but admitting not even having heard of it won’t hurt you in a job interview.

    As it happens, just yesterday I was speaking with one of our systems PhD students, Raluca Ada Popa, and she told me about how her group implemented zero-knowledge proofs to protect driver privacy in automatic toll-collection systems such as EZPass:

    Raluca Ada Popa, Hari Balakrishnan, and Andrew J. Blumberg. Protecting Privacy in Location-Based Vehicular Services. In the proceedings of the 18th USENIX Security Symposium, (USENIX SEC’09).

    Give it another decade or two, and I’m guessing that never having heard of this stuff could hurt you in a job interview!

  108. jonas Says:

    You know what strikes me as odd? Algebra is criticized because it’s too abstract and doesn’t help with real-world problems. Combinatorics is criticized because its methods are too ad hoc and lack a unified theory. Apparently complexity theory is criticized for both. If you believe these critiques, would real mathematics have to cover all real-world problems and have a simple theory that can solve any such problem automatically?

  109. John Sidles Says:

    In a short yet immensely influential 1948 essay titled On the Notions of Causality and Complementarity (which was reprinted in Science in 1950, and subsequently anthologized numerous times), Niels Bohr articulated a principle that has served the quantum physics community well for more than 60 years:

    “All well-defined experimental evidence, even if it cannot be analyzed in terms of classical physics, must be expressed in terms of ordinary language making use of common logic.”

    A crucial benefit associated to the widespread embrace of Bohr’s maxim is that, in subsequent decades, discussions in the physics literature of the consequences of quantum dynamical models have generally taken care to reduce their implications to testable consequences, by which the design of quantum experiments can be concretely validated, the integrity of experimental data concretely verified, and physical hypotheses concretely tested.

    Thus for more than 60 years we have been able to read the quantum literature without any risk of feeling cheated, confident that toward the end of most articles, a concrete reduction to “experimental evidence in terms of ordinary language making use of common logic” will be presented.

    It seems to me that the CSE/CT/QIT community would benefit substantially if a similar principle were explicitly formulated and widely adopted, regarding in particular the discussion of oracle-dependent theorems.

    Here the point is that oracular models of computation differ as substantially from physical computation as quantum state-spaces differ from classical state-spaces.

    What is conspicuously missing from many CT/QIT articles—even articles that explicitly link theorems to experiments—is a Bohr-style discussion of the analysis of experimental evidence that expresses its conclusions wholly “in terms of ordinary language making use of common logic” … that is, without any reference to Hilbert spaces *or* computational oracles.

    The discipline imposed by Bohr-reduction would be especially beneficial now that, as the Aaronson-Arkhipov Linear Optics manuscript puts it, “The most exciting challenge [that recent CT/QIT theorems] leave is to do the experiments.”

    Decades of quantum physics articles have conditioned many STEM readers to expect a concluding Bohr-reduction in any article that links theoretical models to concrete experiments … and perhaps this has been a globally beneficial practice that the CT/QIT community would do well to embrace in coming decades.

  110. Terry Locklen Says:

    John: Do you follow these comment threads at all, or just copy and paste stock paragraphs interspersed with the words “STEM” and “QIT” in random blog posts?

  111. asdf Says:

    Harrison #100, in a just world, PCP would be considered a marquee result.

  112. John Sidles Says:

    Terry, any fan of Monty Python appreciates that when British knights (of theorem-proving) encounter French knights (of systems engineering), misunderstandings comedically ensue.

    Over on Dick Lipton’s blog, Vaughan Pratt asked earlier today (Thanksgiving day):

    (1) Is the space that quantum mechanics currently models as Hilbert space really flat (in the sense of Gauss) the way Hilbert space is?

    (2) If not, would that compromise quantum computing in any way? For example is there some curvature, positive or negative, at which the current best error correction methods fail?

    With respect to Pratt’s questions, we can read (for example) the recent Aaronson-Arkhipov (AA) manuscript narrowly, as being primarily concerned with inclusive relations among complexity theory classes, in which case questions like Vaughan Pratt’s questions and the AA theorems have zero mutual relevance. Or we can read AA more broadly, as describing a new class of experiments, having novel elements in complexity theory, that allow new tests of various hypotheses that are very naturally associated to Pratt’s questions. Or we can read AA *maximally* broadly, as providing new mathematical insights that are relevant to the practical question of whether Nature’s state-space is effectively non-Euclidean, for broad classes of experiments, devices, and technologies. It is this latter, broad reading—not only of the AA manuscript but of all of complexity theory—that is most congenial to systems engineers.

    The point here is that the STEM enterprise accommodates many Holy Grails, and it very commonly happens that in seeking one Grail, researchers find a very different one. Good! Thus, regarding the many Grails that Nature so abundantly provides, we need only embrace a broad appreciation and Python-esque sense of humor, to have ample grounds for Thanksgiving, in this holiday season.

  113. Anonymous Says:

    Both mathematicians and engineers seem to be asking theoretical computer science to be what they think it should be (and their objectives are inconsistent with each other), but TCS is simply not what you want it to be. Exact proofs in theory can be considered mathematical work, and applications of theory can be related to engineering, but that does not mean that all of TCS should fall into one of these categories, or any other.

    Greg, the point of motivation is not to define and repeat the basic stuff that every researcher in the area knows; the point is that if you are going to define some *new concept*, you have to try to justify that it is not just another made-up definition. That is completely different from repeating what is known by people in the field. The objective of motivation is not making the work accessible to outsiders but making it clear to insiders that this new object is interesting. I think you know some areas of mathematics that keep inventing new definitions and proving theorems about them where no one knows why any other mathematician should care about any of those results. The motivation doesn’t need to be long; it just needs to convince a reader who is familiar with the subject that the work is interesting. It does not need to go as far as finding a real application, but it needs to justify that the definition is not made up for the pure sake of proving another theorem. Some good mathematicians follow this rule, but in my experience most mathematicians care much less about it.

    TCS is not just complexity theory; crypto, AGT, CG, algorithms, … are also TCS (to name a few areas of TCS with applications). But expecting every area in TCS to be as close to applications as the others is misguided. To solve a problem you just need to come up with an algorithm: if you prove it is a good one, everything is OK; if you can’t, but real-world samples show that it works, then you call it a heuristic. (I think that without complexity theory, algorithms would drift toward becoming numerical analysis; people unfamiliar with NA should read a few NA papers on integration algorithms and how they are compared to each other to get the feeling.) But at its heart, complexity theory is about proving the *nonexistence* of algorithms! That is naturally both more difficult and less applicable. To do something, you just come up with some way of doing it; but how would you show that no one can ever do something? Complexity theory generates results with applications from time to time, but the main part of it is the study of impossibility. It is about limitations, and most people don’t like hearing about what they can’t do; they just want to know what they can do. Without complexity theory, people might have spent considerable resources trying to find nonexistent algorithms. I would say it is about strategy, compared to algorithms, which is about tactics. To get a fair evaluation of complexity theory, one should not focus on its applications but rather on what the consequence of not knowing a complexity result would be. Governments and companies might have been spending huge amounts of money trying to solve particular NP-complete problems if the notion of NP-completeness didn’t exist. Unlike tactical decisions, strategic decisions are not in front of everybody’s eyes, and are valued less even when their contribution is much larger.
    If you find a good algorithm for a problem, you can patent it and make a profit; but proving that there is no such algorithm does not help you much personally, even though it benefits the community to know about it.

    I agree that the comparison of TCS with theoretical physics is a good one. One does not need to use relativity or quantum physics to design rockets or bridges or skyscrapers; Newtonian physics suffices. These theoretical works have applications only in a small number of very influential areas, like nuclear power reactors. Similarly, you don’t need to know much computer science (let alone theoretical computer science) to program at Google or Microsoft. Most software projects only need knowledge of (using) programming languages, databases, networking, … . Most CS/CE bachelor’s graduates working in industry use a very small amount of what is taught in universities. But there are a very small number of projects where understanding the limitations of what can be done efficiently is essential.

    I agree that complexity theory should get a little more interested in how algorithms perform in the real world, but the situation is not that bad: after all, we now know why simplex performs extremely well in practice, and when available SAT solvers fail. In the end, classes like P and NP are abstractions and simplifications, like physical theories, and it is up to engineers to know the limitations of theoretical models in particular applications.

    Another criticism is that complexity theory focuses too much on P vs NP and ignores other models; but the problems we face in separating P from NP are also present in other formulations of efficient computation and efficient verification, so it is not unreasonable to take P vs NP as the flagship for these problems.

    The final criticism seems to be: complexity theory makes too much noise, which people in other areas do not like. I really don’t understand this one. I mean, what is wrong with people in an area being excited about their area? Is that wrong? And where do we make that noise? On complexity blogs! If you don’t like complexity theorists being excited about their area, why are you reading these blogs? The metaphor would be an outsider going to a wedding and asking the people there, in a very serious tone: “Why is everyone happy here?”

  114. Annonymous Says:

    Not many sciences talk about impossibilities. To prove an impossibility result one needs a formal model, and this explains the highly mathematical nature of complexity theory; it is unclear to me how one could prove an impossibility result without considering a mathematical model. But the ultimate goal of complexity theory is proving the impossibility of solving certain problems efficiently in practice, and therefore complexity theory needs to be connected to real-world applications. There have not been many scientific endeavors concerned with impossibility results in practice.

  115. Scott Says:

    Anonymous #113: Amen!

  116. Gil Kalai Says:

    I think the title of Scott’s earlier post on Ryan’s very impressive achievement, “State of circuit lower bounds now slightly less humiliating,” is on the mark. Overall, the most negative views I saw regarding complexity theory in the midst of the enthusiastic reports about the new result (and had I understood it better, I might have posted about it too) essentially reflected what Scott said in his title. So I truly fail to understand this “anti-complexitism” complaint.

  117. John Sidles Says:

    Anonymous #113: Non!

    Actually, anonymous, I agree with Scott that your reply was pretty good. But it seems to me that the starting premise could stand some adjusting:

    Both mathematicians and engineers seem to be asking theoretical computer science to be what they think it should be (and their objectives are inconsistent with each other).

    The phrase “are inconsistent with each other” might with equal justice have been replaced with “mutually support one another” … and this amendment would (IMHO) more accurately describe a 21st century whose harsh realities and global-scale challenges will in coming decades (in von Neumann’s memorable phrase) “Merge each [academic discipline’s] affairs with those of every other, more thoroughly than the threat of a nuclear or any other war would have done.”

    For example, over on Dick Lipton’s weblog, my answer to Vaughan Pratt’s questions regarding the geometry of Hilbert space drew upon the recent Aaronson-Arkhipov Linear Optics manuscript to infer certain consequences … consequences that extend somewhat beyond the boundaries of pure complexity theory.

    Now, it may be that there are some ultra-orthodox complexity theorists who are utterly indifferent as to whether their theorems ever have the kind of practical consequences that are associated to Vaughan Pratt’s questions. For these purists, here is a modest suggestion that will reliably repel the unwanted attentions of engineers and scientists: Omit all use of the word “consequence” and its derivatives from your complexity theory manuscripts. Instead, consider strictly mathematical phrases like “this lemma follows.”

    Conversely, if a complexity theory manuscript does discuss broader consequences—and we engineers applaud all such efforts!—then it is best to accept that “consequences” is associated to a constellation of ideas that are broadly shared among mathematicians, engineers, and scientists … and therefore, it is best to take the trouble to use the word “consequences” not in its specialized mathematical sense, but in accord with this broadly shared usage.

  118. Scott Says:

    Gil #116: What’s the difference between when I say the state of circuit lower bounds is “humiliating,” and when the (in many cases self-described) anti-complexitists say the same thing? I was trying to strike a jocular note of humility about the titanic difficulty of the P vs. NP question, even as we rightly celebrate Ryan’s achievement. They were dismissing the achievements of complexity theory as a whole, and Ryan’s result in particular, and they really meant it! So it basically boils down to the difference between an ethnic joke when told by a member of the ethnic group, vs. by someone else for whom it clearly isn’t a joke at all.

  119. Hopefully Anonymous Says:

    In the spirit of Raoul Ohio, I’m curious what Scott, Gowers, John Sidles, and some of the other bright minds in this thread feel cranky about?

  120. Raoul Ohio Says:

    csrster, re: the stellar physics discrepancy.

    As with complexity theory, my background is limited, so I might be missing something that should be obvious. Also, I should rework the derivation from scratch and see if I still get a discrepancy after a 15-year rest break! Anyway, here goes from memory:

    In stellar physics, an idealized “zeroth-order” model for a star leads to the Lane–Emden (LE) equation. The LE equation can be solved approximately, or exactly for a couple of parameter values, and it serves as a rough check for more realistic models that can be solved only numerically.

    A simple model for a star is a non-rotating sphere of gas, not creating any energy. If you took a PDE course, you should be able to write down the differential equation for it. At any point, the force of gravity on a chunk of star of mass m is given by F = GMm/r^2, where M is the mass of gas in the sphere of radius r. Write this out in radial coordinates in terms of the density rho. If there is no energy flow, pressure is given by the adiabatic gas law, P = P0*rho^gamma, where gamma depends on the gas.

    When I worked this out, I did not get the LE equation. Working through the standard derivation, I located the difference. The issue comes in setting up the force-balance equation: force due to gravity equals force due to the pressure difference. In the LE derivation, the pressure difference has been computed in rectangular coordinates, but everything else in spherical coordinates.

    The force due to pressure difference across a displacement of dr should be:

    4*pi*[P(r+dr)*(r+dr)^2 – P(r)*r^2]

    whereas in the LE derivation, the expression

    4*pi*r^2*[P(r+dr) – P(r)]

    is used. In the resulting differential equation, this results in replacing (r^2P)’=2rP+r^2P’ with r^2P’. As I recall, this makes the DE easier to work with.

    Does it look like I missed the point here?

  121. Raoul Ohio Says:


    Perhaps the key issue is “should rectangular coordinates be used there”? I don’t see why.
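    For readers who want to experiment with the Lane-Emden equation itself, it integrates easily, and its known exact solutions (theta = sin(xi)/xi for n = 1, theta = 1 - xi^2/6 for n = 0) give a check on the numerics. Here is a minimal RK4 sketch in Python; the starting point, step size, and function name are my own choices, not anything from the discussion above.

```python
import math

def lane_emden(n, xi_end, h=1e-4):
    """Integrate the Lane-Emden equation
        theta'' + (2/xi)*theta' + theta**n = 0,  theta(0)=1, theta'(0)=0,
    outward from a small xi0, using the series start theta ~ 1 - xi^2/6
    (which sidesteps the coordinate singularity at xi = 0), with classical RK4.
    Returns theta(xi_end)."""
    xi = 1e-3
    theta = 1.0 - xi**2 / 6.0   # series expansion near the center
    u = -xi / 3.0               # theta'(xi) from the same series

    def f(xi, theta, u):
        t = max(theta, 0.0)     # clamp so theta**n stays real for fractional n
        return u, -t**n - 2.0 * u / xi

    while xi < xi_end:
        s = min(h, xi_end - xi)
        k1 = f(xi, theta, u)
        k2 = f(xi + s/2, theta + s/2 * k1[0], u + s/2 * k1[1])
        k3 = f(xi + s/2, theta + s/2 * k2[0], u + s/2 * k2[1])
        k4 = f(xi + s, theta + s * k3[0], u + s * k3[1])
        theta += s/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        u     += s/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        xi += s
    return theta
```

    For n = 1 this reproduces sin(3)/3 at xi = 3 to well below 1e-6, which at least confirms that the standard form of the equation is internally consistent with its textbook solutions.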

  122. John Sidles Says:

    Hopefully Anonymous Says: I'm curious what [John Sidles & others] feel cranky about?

    Anonymous, the issues that make me feel not cranky, but soberly challenged, are the ones I posted about recently on the Fortnow/GASARCH weblog.

    Pretty much all of my research topics are selected with a conscious long-term view toward creating new resources for meeting these urgent needs … which in the end have to be met on a planetary scale, if they are to be met at all.

    My posts quote often from John von Neumann’s writings, because von Neumann’s mathematical research subsequent to 1937, in game theory, fluid dynamics, and computation, was largely conditioned (in my reading of his work) by a similar long-term view toward the fostering of enterprise.

  123. Gil Kalai Says:

    Scott #118.

    “They were dismissing the achievements of complexity theory as a whole, and Ryan’s result in particular, and they really meant it!”

    This is not a correct characterization of the discussion you linked to, Scott, from Lance and Bill's blog. I could not see anybody there who was dismissing the achievements of CC like NP-completeness, zero-knowledge proofs, and public-key cryptography. Certainly, this was not the tone of the discussion.

    Some people mentioned other advances in lower bounds over the last two decades. Some expressed the idea, which you yourself expressed, that this is "tiny progress" seen in the perspective of the conjectures, or even of earlier steps, while others explained why it is very promising and important. And some people were simply not excited about the mod m gates.

    Personally, I support political correctness (with common sense), but I would be hesitant to expand the scope of PC to include "not being excited by mod m gates" as a terrible no-no.

  124. Hopefully Anonymous Says:

    John Sidles, I think you misunderstand my comment (I meant crank in the sense of distrusting expert consensus in an area outside your expertise).

    Still, I think that in general more mathematical analysis of small-unit operations optimization (not just military, but in all human activity) is probably a good thing. Various work spaces, academic spaces, leisure spaces, etc. What is the math of small-team participation and leadership optimization?

  125. John Sidles Says:

    Hmmm … in my mind the relevant meaning of the word “cranky” is the OED definition:

    Cranky  Out of order, out of gear, working badly; shaky, crazy;

    The origin of the word is nautical:

    Crank  Liable to lean over or capsize: said of a ship when she is built too deep or narrow, or has not sufficient ballast to carry full sail.

    A crank ship lacks a stable dynamical equilibrium, hence is difficult to steer, and is liable to unpredictable capsizing … I think pretty much everyone appreciates that the global STEM enterprise has in significant respects become undesirably “cranky” in recent decades.

  126. asdf Says:

    Anon #113 and others, I just don’t see why complexity theory should have “physics envy” (a condition sometimes ascribed to e.g. social “sciences”). I see it as a branch of mathematical logic, finding the limits of knowledge. Gödel, Turing, Post, Kleene, Tarski, etc. … giants upon whose shoulders any of us should be proud to stand.

  127. Anonymous Says:


    Godel, Turing, … were not ordinary pure mathematicians; they were interested in questions related to the real world. E.g., Turing machines needed a *non-mathematical* justification to be accepted as the correct definition of mechanically computable (see Turing's paper). Almost all of Godel's work is tied to real-world and philosophical questions about the limitations of what can be done in practice (see Godel's collected works). They were not simply interested in arbitrary purely mathematical questions; what I said about complexity theory also applies to their work. The main difference is that complexity theory is interested in not just the effectiveness but also the efficiency of computation, and therefore needs to care even more about real-world and practical issues.

    Anonymous #16,#73,#77,#113,#114

  128. Hopefully Anonymous Says:

    John Sidles,
    Do you think there was a relative golden age of STEM?
    Where do we hit diminishing returns in terms of how much of GDP goes into STEM (basic research)?

    In my recollection, the USA is top tier in how much we put into STEM per GDP, but even the top countries are below 10%.

    It would seem reasonable to me to put 33% of GDP into STEM, but that’s just intuition.

    This gets close to your expertise (and also your passions), so I wonder what you think?

  129. John Sidles Says:

    Anonymous asks: Where do we hit diminishing returns in terms of how much of GDP goes into STEM (basic research)?

    Anonymous, you have asked one of those questions regarding which a rigorous uniformity of opinion is neither feasible nor desirable, and so it is my sincere hope that opinions vary greatly regarding it.

    One reasonable answer begins with *not* identifying STEM solely with basic research. After all, the “TE” in STEM stands for “Technology and Engineering”, neither of which is basic. Moreover, modern systems engineering has evolved so broad a scope that it is effectively a surrogate for “enterprise,” which assuredly is not basic.

    With a view toward balancing the basic and applied elements of STEM, we teach students the maxim: “Be conservative in theorems; progressive in postulates; bold in enterprises.” The point is that progress that is strictly confined to just one of these three fronts is more-or-less futile, while simultaneous progress on all three fronts yields generous returns on almost any level of STEM investment.

    The surprise hit of the Holiday season at our house has been Logicomix … which even now, both of my sons are scheming to filch from me. Good! It seems (to me) that a big part of the appeal of the Logicomix narrative is that its 20th century characters all are “conservative in theorems; progressive in postulates; bold in enterprises.”

    Provided that the 21st century STEM enterprise can sustain a similar creative & dynamic balance among theorems, postulates, and enterprises, then IMHO we don’t have much to worry about … because it’s going to be a terrific century.

  130. Hopefully Anonymous Says:

    Prof. Sidles,
    Well, I’m also interested in balancing all economic activity with the STEM portion of the activity, both basic and applied (I get your point that the applied component can be interpreted quite broadly in terms of % of GDP it currently occupies).
    I’m not an expert on this topic, but for example, how much would be too much to invest in TCS?
    Could Prof. Aaronson and his cohort make good use of a stimulus-sized $1T in spending per year, to pick what I’m guessing would be an extreme amount?
    $100 billion?
    A DOE sized $30 billion annual budget managed by a Secretary of TCS?
    To what extent is our irrationality distorting our research investment?

  131. Lukasz Grabowski Says:

    > the feeling of being part of a great and difficult shared enterprise

    Do you really mean it, or is it just being poetic? If you really mean it, can I ask when you started experiencing this feeling?

    Personally, I like complexity theory precisely because the questions it asks are so ridiculously obvious, yet no one knows how to prove them. Just the other day I went to a talk in number theory where the speaker mentioned that humanity can't prove that there are infinitely many zeros in the decimal expansion of the square root of two! Also ridiculous :-).

  132. John Sidles Says:

    So far, there are eighteen uses of “everyone” in this thread … and yet there is no consensus regarding membership in this class, nor consistency regarding its case-by-case use.

    Certainly there are plenty of folks who are using the word “everyone” in an other-than-common-language sense. As Krusty the Clown (of The Simpsons) often says, “Yikes, that’s not good!”

    Other words and phrases having specialized meanings in complexity theory that conflict with ordinary-language usage include “consequences”, “sample”, and any class for which membership verification requires an “oracle” … many more such examples could be cited.

    And yet, the assignment of specialized meanings to these ordinary-language words is both hallowed by the traditions of complexity theory, and essential to further progress.

    As complexity theory becomes more relevant to practical science and engineering, we see more plainly the wisdom of Niels Bohr's 1948 maxim regarding quantum mechanics:

    “All well-defined experimental evidence, even if it cannot be analyzed in terms of classical physics, must be expressed in terms of ordinary language making use of common logic.”

    Specifically, when complexity theory articles assert a claim to "consequences"—in the same sense that articles on quantum physics claim to have consequences—it is very helpful and beneficial when those consequences are spelled out concretely, in Bohr's ordinary language making use of common logic.

    At present this particular reduction is not common practice in complexity theory, but perhaps eventually it will be.

    If so, good! 🙂

  133. Raoul Ohio Says:

    What a great discussion! I am having trouble keeping up with the reading; I am at about 60% right now.

    For better or worse, Scott has become something of a spokesperson for TCS. It is remarkable that he takes on all comers, learned and crackpot alike, in defending the honor of the guild. Be sure not to take the slings and arrows personally.

    I am a spectator/fan of TCS, but don’t see any problem with tossing out some criticisms. Some are wrong. Some others were more articulately expressed by AnonCSProf.

    Somebody pointed out that TCS sits in a kind of neverland between CS and math. I might put it between CS, applied math, and ultra-pure math. Pure math is susceptible to having the direction of a field hijacked by whatever constructions allow one to prove ever more powerful theorems about …, what?

    As a concrete (and tragic?) example, consider point set topology, the study of continuity, convergence, compactness, etc. PST and TCS have a bit of similarity.

    Fifty or sixty years ago PST was arguably the most important area of math, providing a foundation for functional analysis and lots of applied math. After things that matter were figured out, the PST bandwagon went merrily on its way, exploring the effect of alternative axiom systems on weirder and weirder things, and cranking out theorems. A large fraction of the effort went into studying transfinite cardinals. It is obviously a fun sport to try to figure out what happens when you pile ever bigger kinds of infinity on top of one another, but at what point does it become Dungeons and Dragons?

    It is particularly hard to predict which areas of math or TCS will prove to be useful. As a 23-year-old, I thought number theory was totally useless. Oops! And in another 40 years it might be the case that tools from transfinite cardinals provide the final link in proving P vs NP. Could happen.

  134. Greg Kuperberg Says:

    > The objective of motivation is not making it accessible to outsiders but to make it clear for insiders that this new object is interesting.

    So what’s being said here, that complexity theorists need to keep reminding each other why their work is important, but mathematicians don’t? If there is any real truth to this, then it’s only a victory of style over substance.

    > I think you know some areas of mathematics that keep inventing new definitions and proving theorems about them where no one know why any other mathematician should care about any of those results.

    Yes, it’s sometimes a credible accusation, but it’s not one that I usually like to make, because it’s not a nice thing to say. Besides, are complexity theorists so different? Didn’t they spend the 80’s and 90’s defining more and more complexity classes until there were too many of them?

    > Godel, Turing, … were not ordinary pure mathematicians, they were interested in questions related to real world,

    Even “ordinary” pure mathematicians can be interested in the real world. Godel was very much a pure mathematician. Yes, he was a brilliant man with wide-ranging interests, from constitutional law to general relativity. Yet it was clear that what he really had to offer as a professional researcher was abstract theorems.

  135. John Sidles Says:

    Dick Lipton’s (terrific) weblog Gödel’s Lost Letter and P=NP is running a topic “Notation And Thinking” whose themes and memes have considerable congruity with the points that Scott’s essay raises. Namely, to the extent that the broader STEM community sometimes exhibits confusion in grasping the results of complexity theory (CT), this confusion often is associated to CT notations and nomenclature whose use is localized and/or idiosyncratic to CT (oracles and relativization for example).

    My own views amount to a homage to Edsger Dijkstra that might have been titled On the cruelty of Really Teaching Formal Notation, and perhaps later this week I’ll post a followup—also a homage to Dijkstra—titled Dirac notation considered harmful.

    To adapt the title of another computer science essay (by Fred Brooks) to make the main point, in dealing with issues associated to notations and nomenclature, there is No silver bullet … but fortunately, patience and tolerance can take us far.

  136. Anonymous Says:

    @Greg #134

    > So what’s being said here, that complexity theorists need to keep reminding each other why their work is important, but mathematicians don’t? If there is any real truth to this, then it’s only a victory of style over substance.

    I think I was clear: we need to justify that we are not proving theorems for the pure sake of proving theorems. If someone comes up with a new model of computation and proves results about it, she/he also needs to first justify to other people in the area that this is an interesting model. It is similar to physics: it is not difficult to come up with arbitrary, mathematically interesting but useless physical theories and prove results about them, but without justification for the model being interesting, no one would care; in fact it would be very difficult to publish as a physics paper, though it could be published as a pure math paper. The situation is different in math: no one asks a mathematician why these new objects are interesting (especially in some areas).

    > Yes, it’s sometimes a credible accusation, but it’s not one that I usually like to make, because it’s not a nice thing to say. Besides, are complexity theorists so different? Didn’t they spend the 80’s and 90’s defining more and more complexity classes until there were too many of them?

    I am not saying what is good and what is bad; I am trying to clarify my opinion that there are differences between pure mathematics and complexity theory (though of course I agree they share more than they differ). For sure it has happened and happens in complexity theory, but the point is that complexity theorists are expected to explain why a complexity class is interesting, while mathematicians are not expected to (and in fact don't like being asked). The situation between pure math and complexity is not an all-or-nothing one; there is a continuum, so I am simplifying (and hopefully not oversimplifying) the situation.

    Let me repeat my main point: complexity theorists don't prove theorems for the pure sake of mathematics (nor do theoretical physicists and economists), but mathematicians do. Mathematicians can live in the Cantorian mathematical paradise without caring about the real world; we are expected to, and have to, connect our results to real-world situations. The objectives are different.

    > Even “ordinary” pure mathematicians can be interested in the real world. Godel was very much a pure mathematician. Yes, he was a brilliant man with wide-ranging interests, from constitutional law to general relativity. Yet it was clear that what he really had to offer as a professional researcher was abstract theorems.

    Yes, many great mathematicians were and are interested in such questions, questions which are not purely mathematical but connect to philosophical and real-world issues, and they use their extraordinary intelligence and mathematical knowledge to attack them. The famous results of Godel are all of this kind, from his work on completeness, incompleteness, T, and the Dialectica interpretation, to his works on set theory and relativity. But the norm in mathematics (as far as I see) is that there is no need for such justifications.

    Anonymous #16,#73,#77,#113,#114,#127

  137. Raoul Ohio Says:

    Anonymous #113 and #114:

    Excellent points, elegantly stated.

  138. John Sidles Says:

    One of the (many) virtues of this particular Shtetl Optimized topic is its (re)introduction of the portmanteau word extralusionary (coined by Scott and Dave Bacon).

    From the viewpoint of systems engineering, several reasons are evident why complexity theory itself may be at risk of evolving into the 21st century's extralusionary discipline par excellence … and I have just posted (what amounts to) the case for that conclusion on Dick Lipton's weblog.

    Mainly, though, I was looking for an excuse to praise the Coen brothers' recent small masterpiece of film-making, A Serious Man. Especially in this holiday season, we can all hail the serious and yet hilarious quest for truth of Professor Larry Gopnik! 🙂

  139. Raoul Ohio Says:

    Fractional Exponential Functions.

    I am taking advantage of this lull to revisit the half-exponential function h(x) question that came up a couple of weeks ago.

    Finding a formula for h(x), where h(h(x)) = exp(x) (or 2^x, if you prefer), appears to be a really hard question. To see why, consider the embedding of the reals into the complex numbers, and then the Riemann sphere, which is the complex numbers wrapped onto a sphere with one additional point, infinity, pasted on at the top. To examine any function f(z) as z goes to infinity, you just look at f(1/z) as z goes to zero.

    To understand what is going on with exp(x) for large x, we thus consider exp(1/z). Guess what? This is the poster-child example of an **essential singularity**. A key fact, called the "Big Picard" theorem: every complex value (with at most one exception) occurs infinitely often in any neighborhood of the singularity. I.e., the function is totally wild. Any representation for h(x) would have to reproduce this wildness, but in two steps, which seems extra tough.

    Essential singularities appear to be all or nothing: you are either there, or not. This effect seems to appear no matter how you try to sneak up on the exp(x) function. For example, suppose you try differential equations, with

    y’ = ay^b, with solution

    y = [(1-b)(ax+c)]^(1/(1-b)) if b != 1, and

    y = c * exp(ax) if b = 1.

    As b approaches 1, there is an abrupt jump from an algebraic function to an exponential function, half jumps not welcome.
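    The closed-form solution above is easy to verify symbolically. A minimal sympy check, using the concrete exponent b = 1/2 for illustration (the symbol names are my own):

```python
import sympy as sp

# symbols assumed positive so fractional powers simplify cleanly
x, a, c = sp.symbols('x a c', positive=True)
b = sp.Rational(1, 2)  # a concrete exponent b != 1, for illustration

# the claimed solution of y' = a*y^b
y = ((1 - b) * (a * x + c)) ** (1 / (1 - b))

# residual of the ODE; simplifies to zero if the formula is right
residual = sp.simplify(y.diff(x) - a * y**b)
```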

    It is not obvious that there is any function h such that h(h(x)) = exp(x) on the real line, let alone for all complex numbers. My guess is that there are actually many different ones, at least on the reals. If so, you would want to pick out a preferred one.

    Here is a simpler challenge. Suppose we want to look at a rough graph of h(x), and you get the task of computing h(x) numerically, accurate to 10%, say. How would you proceed? You will discover that this gets ugly.
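    On the computational challenge: a continuous (though not smooth) half-exponential can in fact be computed directly via the folklore piecewise construction: choose an arbitrary increasing seed on an interval [0, A], invert it on [A, 1], and extend upward with the ladder identity h(exp(y)) = exp(h(y)), which follows from h(h(x)) = exp(x). A Python sketch (the linear seed is an arbitrary choice, which is exactly the non-uniqueness guessed at above):

```python
import math

def make_half_exp(A):
    """Return a continuous h on [0, inf) with h(h(x)) = exp(x).

    Seed: a linear increasing bijection [0, A] -> [A, 1] (any increasing
    bijection works, which is why the solution is far from unique).
    Extension: h = exp o seed^{-1} on [A, 1], then the ladder identity
    h(exp(y)) = exp(h(y)) for arguments above 1."""
    assert 0 < A < 1
    slope = (1.0 - A) / A

    def h(x):
        if x < 0:
            raise ValueError("this construction covers x >= 0 only")
        if x <= A:
            return A + slope * x               # the seed itself
        if x <= 1.0:
            return math.exp((x - A) / slope)   # exp of the seed's inverse
        return math.exp(h(math.log(x)))        # ladder step; log(x) < x
    return h
```

    Different seed intervals give genuinely different half-exponentials (e.g. make_half_exp(0.5) and make_half_exp(0.3) disagree at x = 2), so a "preferred" one really does have to be picked out by some extra criterion, such as smoothness.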

  140. John Sidles Says:

    Raoul, here is a holiday gift for everyone who has ever wondered (as I have) whether (smooth) half-exponential functions exist, and if so, what their graphs look like.

    Yep, half-exponential functions exist … and their graphs look pretty much like we might (in retrospect) expect.

    Happy Holidays to everyone! 🙂

  141. John Sidles Says:

    Gosh, on doing a little research into the mathematical nature of half-exponential functions—which I originally constructed purely for fun—I was surprised to find that this question has been posed on MathOverflow (in various guises) at least five times, most recently by Scott and by Gil Kalai (in separate questions).

    One reason these questions keep getting asked (AFAICT) is that no one has posted a picture of the functions in question … so here's a picture to gaze upon.

    But there’s another reason these questions keep getting asked: the answers given on MathOverflow—while perhaps formally correct—sometimes aren’t particularly helpful with respect to the practical task of computing these functions.

    For example, if we combine Scott’s question titled “‘Closed-form’ functions with half-exponential growth” with the highest-rated answer (by Gerald Edgar), we find the MathOverflow consensus is

    No half-exponential function can be expressed by composition of the operations +, -, *, /, exp, and log (together with arbitrary real constants) because all such compositions are transseries, and no transseries (of that type) has this intermediate growth rate.

    Now, how do we reconcile this (very strong) formal conclusion of Aaronson/Edgar with the (also strong) empirical observation that, in practice, rational polynomials (i.e., Padé approximants) and EXP together do a terrific job of computing half-exponential functions?

    The reconciliation is straightforward (AFAICT) … the MathOverflow statement excludes nonuniform algorithms, that is, algorithms that use one expansion for “small” arguments, and a different expansion for “large” arguments.

    If we drop this restriction, then there is (AFAICT) no fundamental obstacle to writing a Mathematica package that computes half-exponential functions with similar accuracy and efficiency to Mathematica’s built-in function HypergeometricPFQ[] (which implements a set of nonuniform algorithms, derived mainly via Kummer relations, that in aggregate span the domain of complex arguments).

    Moreover, when we examine the Padé approximants in the complex plane, we find that the analytic structure of the half-exponential function is sufficiently simple, that it is at least conceivable that this function can be expressed wholly in terms of HypergeometricPFQ[] (and the like). In this respect it would resemble almost all of the “closed-form” functions of mathematical physics … since HypergeometricPFQ[] is the “Swiss Army Knife” of classical functions. We can hope! 🙂
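    For concreteness, here is what the Padé machinery looks like in Python/scipy rather than Mathematica (this is only a generic illustration of Padé approximants, using exp itself, not the half-exponential package described above):

```python
from math import e, factorial
from scipy.interpolate import pade

# Taylor coefficients of exp(x) about 0: 1, 1, 1/2!, 1/3!, ...
coeffs = [1.0 / factorial(k) for k in range(7)]

# [3/3] Pade approximant: p(x)/q(x) matches the series through x^6
p, q = pade(coeffs, 3)

approx = p(1.0) / q(1.0)   # rational approximation to e = 2.71828...
```

    The [3/3] approximant already matches e to a few parts in ten thousand, which is the sense in which "rational polynomials and EXP together" can do heavy lifting.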

    There is a broader lesson here, perhaps, that is relevant to complexity theory and to the topic of this weblog: Postulates—no matter how convenient—must not become articles of faith.

    For example, all that humanity has learned (so far) in mathematics, physics, and complexity theory (CT) is consistent with the postulate of a CT-friendly universe in which Hilbert space ain’t flat, ain’t static, and doesn’t have exponentially many dimensions … a universe in which quantum dynamics is a subset of classical dynamics, not a superset … in which quantum simulations are generically easier than classical simulations … in which proving membership in P is the most common natural problem that is naturally infeasible … a universe in which Nature embraces the ECT to ensure that she has the resources to compute her own past-and-future evolution.

    To assert these postulates is not to express any animus towards complexity theory. Quite the reverse … these postulates invite complexity theorists to embrace a universe that has CT-friendliness blended into its very foundations. In short, these are happy postulates. 🙂

    Cuz’ heck … maybe … just maybe … in this 21st century CT-friendly universe, future releases of Mathematica may possibly include built-in half-exponential functions! 🙂

  142. Raoul Ohio Says:

    Thanks, John, that is great.

    1. I need a couple cups of stiff coffee before reading any Mathematica, so I hope the following question isn’t dumb:

    What is the role of alpha in this version of the half-exponential equation? Is it essential, or just something to trick Mathematica into doing some magic? There is no alpha value that recovers the usual (?) situation, h(h(x)) = exp(x). Is there a way to convert the functions generated into the usual h(x), so I can check and see if it works?

    2. Discussion of expanding functions at more than one place:
    Many functions “SHOULD BE” expanded around more than one point, usually the singularities of the defining differential equation. For example, the 2F1 (hypergeometric) equations have three singularities, at 0, 1, and infinity. The 1F1 (confluent hypergeometric) equations are a limiting case of 2F1, in which a limit merges (“conflues”?) the singularity at 1 with the one at infinity, leaving singularities at 0 and infinity.

    For example, the Bessel function J(a,x) is the best-known 1F1 function. Here the order a is a nonnegative real. You can expand a function at a regular point or at a singular point, which is usually the best place to do it. You generally expand J(a,x) at either 0 or infinity, and the expansion converges until it hits another singularity; so in this case both expansions are usable for all nonnegative reals (strictly speaking, the expansion at infinity is only asymptotic). BUT, they don’t converge very well. The expansion at 0 converges nicely for x less than a, the expansion at infinity works nicely for x greater than a, and you can’t do squat for x near a. For small x, J(a,x) has leading behavior x^a, which can be fleshed out as (x/2)^a / Gamma(a+1); and for large x, J(a,x) is basically x^-0.5 * cos(x – x0), again fleshed out as sqrt(2/(pi*x)) * cos(x – a*pi/2 – pi/4).
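    The two standard leading behaviors here are J(a,x) ≈ (x/2)^a / Gamma(a+1) for small x, and J(a,x) ≈ sqrt(2/(pi*x)) * cos(x – a*pi/2 – pi/4) for large x. Both are easy to spot-check against scipy; the test points and tolerances below are my own choices:

```python
import math
from scipy.special import jv  # Bessel function of the first kind, J_a(x)

a = 2.0

# leading behavior near x = 0:  J_a(x) ~ (x/2)^a / Gamma(a+1)
x_small = 0.1
small_approx = (x_small / 2.0) ** a / math.gamma(a + 1.0)

# leading behavior for large x:  J_a(x) ~ sqrt(2/(pi*x)) * cos(x - a*pi/2 - pi/4)
x_big = 50.0
big_approx = math.sqrt(2.0 / (math.pi * x_big)) * math.cos(
    x_big - a * math.pi / 2.0 - math.pi / 4.0)
```

    The large-x form is only asymptotic, so its error dies off like 1/x rather than converging in the usual sense, which is consistent with the "can't do squat for x near a" experience above.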

    [Aside: I, along with many others including Scott, sometimes make fun of Steve Wolfram for buffoonery. It is well deserved. On the other hand, Wolfram Research puts spectacular mathematical resources online for free, so three cheers for Steve!].

    Now, after the “expansions 101” review, the point is that infinity is the only singularity of the exponential function, so that is the “right” place to expand it, and presumably the right place to expand a half-exponential function as well.

    Admittedly, expanding exp(x) at x=0 is the poster child of a nonterminating expansion, and has pretty nice details. But in the great “Half-Exponential Chase” the main thing is what happens out toward infinity, in the spirit of algorithmic analysis, so I think that “expanding at infinity is essential” is a reasonable guess.

    Expanding exp(x) at infinity is equivalent to expanding exp(1/x) at zero. There are probably 100 theorems about this from the 1800s; but little did those guys know about Mathematica!

    Next question: which is easier to think about, h(h(x)) = exp(1/x) or h(1/h(1/x)) = exp(x)?

  143. Raoul Ohio Says:

    Sometimes you find an excellent discussion on YouTube:

  144. John Sidles Says:

    Raoul, to answer just one of your questions, “What is the role of α in the half-exponential composition equation”:

    f(f(x)) = α exp(x-α)

    The short answer is “a purely conventional normalization,” rather like the conventional normalizations in the defining equation for exp:

    exp'(x) = α exp(x)

    But the long answer is more subtle: it’s that our starting idea “construct the half-exponential function” has a defect in it, namely the word “the” … it turns out that our defining composition relation is obeyed not by one function, but by an uncountable family of half-exponential functions, which (seemingly) bear no trivial functional relation to one another. Thus, normalization issues that are trivial for exponential functions are highly nontrivial for half-exponential functions. There is a classic 1961 paper by Szekeres, titled “Fractional iteration of exponentially growing functions,” that discusses this nonuniqueness problem thoroughly.

    In consequence, our sought-for solution set of half-exponential functions turns out to be—counter-intuitively and undesirably—an intractably large set. The smooth half-exponentials that I constructed are (by analogy) instances of a small group of honest citizens, who reside in a Wild West Town that includes plenty of mighty nasty hombres. 🙂

    The exercise of constructing concrete realizations of half-exponential functions helps us appreciate that the problem of “seemingly simple sets that in practice turn out to be far too big” is generic and ubiquitous in the theory of dynamical systems, in complexity theory, and in quantum information theory.

    For example, the honest citizens of the complexity class P are those who have birth certificates (that is, theorems) proving their citizenship; whereas the infinitely more numerous hombres of P claim citizenship in P by oracle rather than by proof.

    Similarly, the honest citizens of Hilbert space live in a small, smooth state-space of polynomial dimensionality and Kählerian geometry; whereas the hombre-states of Hilbert space reside in an unexplored state-space that is postulated to have exponentially large dimensionality.

    For the very first time in human history, Scott and Alex Arkhipov’s linear optics experiments are proposing to survey a pretty considerable segment of the vast Hilbert jungle. I respect their vision hugely … even while I wonder whether the Hilbert jungle is in fact as exponentially vast as its most enthusiastic explorers claim.

  145. k-jl Says:

    There is the theory of the two ends of a gun, one of them being the wrong one. I think that the problem of open questions has this nature. Recently I compared the theories of complexity and of formal languages. Having questions like the relation between determinism and nondeterminism in mind, I wrote that while in complexity theory most of the important questions are unsolved, in contrast the important questions in formal languages are answered. The fact that I got angry comments from formal-language people, and not from people in complexity, indicates where to look for the wrong end of this gun.

  146. John Sidles Says:

    Raoul, I upgraded my survey of half-exponential functions, and can now pass on some specific advice regarding which numerical methods are likely to “just work” in computing these functions.

    Namely, look for smooth analytic solutions of f(f(z)) = β exp(z) iff 0

  147. John Sidles Says:

    … hmmm … not sure why the remainder was cut-off …


    … look for smooth analytic solutions of f_β(f_β(z)) = β exp(z) iff 0<β≤1/e … because it turns out that, viewed as an analytic function of the parameter β, the half-exponential function f_β(z) has analytic structure associated to the Lambert function W(-β), namely a cut (that by centuries-old convention is placed) on the positive real axis for β ∈ (1/e,∞).
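    The connection between the critical value 1/e and the Lambert function is the standard fact that z ↦ β exp(z) has a real fixed point z* = −W(−β) exactly when β ≤ 1/e; beyond that, the principal branch of W goes complex. A quick scipy check (this illustrates only the fixed-point fact, not the Padé survey itself):

```python
import math
from scipy.special import lambertw

beta = 0.3                        # below the critical value 1/e ~ 0.3679
z_star = -lambertw(-beta).real    # real fixed point of z -> beta*exp(z)

residual = beta * math.exp(z_star) - z_star   # should be ~0
```

    For β above 1/e (try lambertw(-0.5)) the principal branch picks up a nonzero imaginary part, which is one way to see why the real-axis behavior changes character there.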

    For values of β along the Lambert cut, it turns out that analytic half-exponential functions can be constructed, but they exhibit “jumps” on the real axis (accompanied by the usual Gibbs phenomena) as is readily seen in the Padé approximants to f_β(z).

    Thus (AFAICT) pretty much any reasonable approximation scheme will yield a smooth half-exponential function, provided the “exponential gain” β is below the critical value β=1/e; the various pathologies and confusions reported in the literature are thus seen to be largely associated with choosing too large a β.

    We thus conclude—with reasonable confidence but not absolute certainty—that the half-exponential function f_β(z), considered as a doubly analytic function of z and β, is similarly well-defined and well-behaved to any other special function of mathematical physics.

    Perhaps I will write a formal MathOverflow question that challenges the mathematical community to prove these claims in full rigor, with a view toward establishing f_β(z) as a respectable special function of informatic mathematics … that is, a function whose definition is simple and mathematically natural, whose analytic structure is sufficiently well-characterized, and whose computation to any desired numerical accuracy is sufficiently efficient, as to be worthy of inclusion in standard compendia of special functions.
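The β ≤ 1/e threshold discussed above can be sanity-checked numerically without any Padé machinery. The sketch below is my own illustration, not the Padé construction these comments describe, and all function names are made up: for 0 < β ≤ 1/e, g(z) = β exp(z) has a real attracting fixed point z* = −W(−β), and a half-iterate can be approximated by Koenigs linearization — push z toward the fixed point, take a square-root-of-the-multiplier linear step, then pull back out.

```python
import math

# Illustrative sketch (an assumption of this note, not the commenter's
# Pade method).  For 0 < beta <= 1/e, g(z) = beta*exp(z) has a real
# attracting fixed point z* = -W(-beta) with multiplier g'(z*) = z*.
beta = 0.3  # safely below the critical value 1/e ~ 0.3679

def g(z):
    return beta * math.exp(z)

def g_inv(z):
    return math.log(z / beta)

# Find the fixed point by straightforward iteration (attracting here).
zstar = 0.0
for _ in range(500):
    zstar = g(zstar)

lam = zstar  # multiplier: g'(z*) = beta*exp(z*) = z*

def half_exp(z, n=20):
    """Approximate f with f(f(z)) = g(z) via Koenigs linearization:
    iterate g to reach the linear regime near z*, apply the
    sqrt(multiplier) step, then undo the iterations with g_inv."""
    w = z
    for _ in range(n):
        w = g(w)
    w = zstar + math.sqrt(lam) * (w - zstar)
    for _ in range(n):
        w = g_inv(w)
    return w
```

With β = 0.3, composing half_exp with itself reproduces β exp(z) to several digits; pushing β past 1/e destroys the real fixed point, consistent with the Lambert cut along β ∈ (1/e, ∞) described above.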

    As Feynman once wrote:

    “One thing is to prove it by equations; the other is to check it by calculations. I have mathematically proven to myself so many things that aren’t true. I’m lousy at proving things—I always make a mistake. […] So I always have to check with calculations; and I’m very poor at calculations—I always get the wrong answer. So it’s a lot of work […]”

    Rigorously proving everything that concrete calculations are telling us about the analytic structure of half-exponential functions surely would be a lot of work … but it might be fun too. ’Cuz people have been wondering about these functions for a long time! 🙂

  148. John Sidles Says:

    … I extended the Padé survey of the half-exponential function to encompass the general composite relation f(f(z)) = γ β^z, where the mnemonic is that the parameter γ specifies the exponential “gain” and the parameter β specifies the exponential “base” … for base β=2 the resulting functions look very pretty.

    Moreover, when these functions’ Padé approximants are extended to the complex z-plane, the observed Riemann structure is in reasonable accord with our emerging understanding of their analytic properties. Good!

    I wonder, are there any other analytic functions that have largely or wholly originated as natural functions in complexity theory?

    If Shtetl Optimized readers can identify more such “special functions of complexity theory”, please post about them! Because a review article titled The Special Functions of Complexity Theory would be too much fun to pass over! 🙂

    I originally pursued these half-exponential calculations purely for fun … and to illustrate an essay—still in progress!—on the theme of uniformity, naturality, and verification in STEM enterprises.

    But now I am sufficiently encouraged to foresee that these functions might eventually—with much further work—become the subject of an article of similar scope and completeness to the classic 1996 article On the Lambert W Function, by Corless, Gonnet, Hare, Jeffrey, and Knuth. That would be great! 🙂

  149. Raoul Ohio Says:

    The inverses of the common functions of algorithm analysis, such as n^q*(ln n)^p, come up in a natural way, and fortunately they are simple to approximate. BTW, that is a great paper on the Lambert function.
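As one concrete instance of the point above, the inverse of n ln n (the p = q = 1 case) is expressible through the very Lambert W function this thread keeps meeting: substituting n = e^u turns n ln n = y into u e^u = y, so n = y / W(y). A minimal sketch, with a hand-rolled Newton iteration for W (the function names are mine, for illustration only):

```python
import math

def lambert_w(y, tol=1e-12):
    """Principal branch W(y) for y > 0, via Newton's method
    on w*exp(w) - y = 0."""
    w = math.log(1.0 + y)  # rough starting guess
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - y) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def inverse_n_log_n(y):
    """Solve n*ln(n) = y for n > 1: with n = e^u this reads
    u*e^u = y, so u = W(y) and n = e^u = y / W(y)."""
    return y / lambert_w(y)
```

For example, feeding it y = 100 ln 100 recovers n = 100 to high accuracy.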

  150. John Sidles Says:

    BTW, that is a great paper on the Lambert function.

    Yes, it was great fun to read in that article’s historical appendix:

    Lambert had dreams [in the 1760s] of building a machine to do symbolic calculation … Thus his wishes anticipated those of Lady Ada Lovelace by roughly one century, and the actuality of symbolic computation systems by roughly two.

    This was news to me! 🙂

  151. anon Says:

    #113: Wonderful comment. Thank you.

  152. Raoul Ohio Says:

    anon, thanks for prompting me to read #113 again; a very reasonable and elegantly stated view on a large and slightly contentious area of scholarship. It would make an excellent preface for a TSC book. Anonymous is the leading contender for the $1 R-O Prize for “best S-O comment of the decade”. Two and a half weeks to go!

  153. LifeIsChill Says:

    Hi Prof: This is with regard to the algebraic-geometry analogy by Akhil Mathew. Having read about Akhil’s talent in a newspaper, I do feel he has a point. When I ask a university mathematician about the P vs. NP problem, they usually belittle it somewhat as an “end of the world problem”. For a mathematician trained in mature and deep subjects, even the mathematics of biology (for which nothing is known) may seem boring. It is hard to really convey the depth of the P vs. NP problem to someone trained in abstract fields, as the tools one uses to describe it seem childish. Maybe if one of the 20-30 intermediate steps you are thinking of as essential to settling the P vs. NP question has a nice flavor that attracts some good mathematicians (say, the way Weil’s finite-field Riemann hypothesis attracted Grothendieck), then complexity theory may not sound like a child’s game to outsiders who think the only serious problem in CS is P vs. NP and not any of the intermediate ones.

  154. asdf Says:

    Is this interesting? It’s gotten a lot of press attention. I looked at the arxiv preprint and the stuff in it all looks standard, I would have thought.

  155. Walt Says:

    I know this is a reply to a month-old comment, but it’s important to set the record straight. Raoul Ohio’s description of the history of point-set topology is highly misleading, and in some respects the exact opposite of the truth. There was never any point in history at which point-set topology was the most important area in math, or even the most important area in topology. It was a technical subject that clarified the relationships between certain fundamental notions in analysis. As soon as it managed that, the field died out. It died out so completely that it’s often held up as a cautionary tale of what can happen to your field when you run out of interesting questions. Some people who specialized in the field kept working on it, but the interest and employment opportunities dwindled. Halmos in his autobiography tells a story about how he couldn’t help a researcher get a job because he was a point-set topologist, and no one wanted to hire point-set topologists.

    Eventually, after the field was in terminal decline, set theorists went back and looked at the few open questions left, and discovered that they were all independent of the axioms of set theory. (This is the only thing I can think of that fits Raoul Ohio’s statement about the transfinite.) Set theory itself, while still active, is an incredibly unfashionable field, so unfashionable that people frequently use “core mathematics” on Math Overflow to mean “areas other than set theory”.