A not-quite-exponential dilemma

A commenter on my last post writes:

Scott, it’s your blog – but can’t we switch back to some QC or TCS topics?

I confess: after three years of staid, dignified posts about quantum computing and theoretical computer science, I somehow did the unthinkable, and let this once-respected blog become less about elucidating research than procrastinating from it.  Many readers, or so they tell me, rely on Shtetl-Optimized to keep abreast of the latest theoretical insights.  And rather than ask those readers whether they also rely on deep-fried Snickers bars for the Vitamin E in the peanuts, I have a moral obligation to serve them.

Fortunately for the theory-starved among you, a certain je ne sais quoi in the air last week has caused me to refocus my attention on research.  The mysterious force affected not only me, but seemingly my entire floor of the Stata Center—giving rise to a carnival-like crescendo of increasingly-frantic theorizing that ended just as inexplicably as it began, around 6:59PM Thursday night.

So today, I’m proud to post something vaguely related to science once again.  On the suggestion of Wim van Dam, I hereby announce another contest, with no prize, and possibly not even a winner.  Your task is simple:

Come up with a catchy name for growth rates of the form 2^(n^α), where 0 < α < 1.

(For example, the running time of the fastest known classical factoring algorithm has this form, as does that of the fastest known algorithm for graph isomorphism.)

The word “subexponential” is often used, but should not be, since we already use it for growth rates smaller than 2^(n^α) for every α>0.
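
To get a feel for how much daylight there is between these growth rates, here is a purely illustrative back-of-the-envelope comparison in Python; it ignores all constants and the log factors that appear in real bounds like the Number Field Sieve’s, and just reports rough decimal digit counts:

    import math

    def digits(log2_value):
        """Approximate number of decimal digits of 2**log2_value."""
        return math.ceil(log2_value * math.log10(2))

    # Compare a polynomial, a "2^(n^alpha)" growth rate (alpha = 1/3),
    # and a genuine exponential, for a few input sizes n.
    for n in (100, 1000, 10000):
        print(f"n={n:6d}  "
              f"digits(n^3)={len(str(n**3)):4d}  "
              f"digits(2^(n^(1/3)))={digits(n ** (1/3)):6d}  "
              f"digits(2^n)={digits(n):6d}")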

This just in: Friend-of-the-blog Greg Kuperberg, who’s always more fun than a cinder block combined with a reprimand, informs me that 2^(n^α) growth rates already have a name: stretched exponentials.  But

  1. I’ve never heard that term in my life,
  2. I don’t like it: it sounds like something bigger than exponential, not smaller, and
  3. Having called 2^√n “subexponential” in his otherwise-great paper on a quantum algorithm for the Dihedral Hidden Subgroup Problem, for Greg to now lecture others on this issue seems like … stretching it.

So my and Wim’s challenge to the readerariat stands.

95 Responses to “A not-quite-exponential dilemma”

  1. Barath Says:

    The two names that come to mind (the first of which is obvious) are “squished exponentials” and “wilted/wilting exponentials”.

  2. WF Says:

    By analogy with NP, it should be “[N]o really it’s still [E]xponential,” abbreviated NE.

  3. Tracy Hall Says:

    In geometric group theory there is a notion of “intermediate growth” which in practice usually takes this form, although the term can technically refer to any function asymptotically larger than any polynomial but smaller than any exponential function, such as n^log(n), on the low side, or on the high side 2^(n/log(n)) .

    Since growth rates of the form you’re asking about depend on (and are ranked by) the parameter alpha, they could be called alpha-intermediate, as in “Kuperberg found an algorithm of 1/2-intermediate complexity”. I can see that morphing into “alpha-intermediate growth with alpha=1/2”, though; the danger of incorporating variable names into notation is that variables, once knighted, resist being demoted to unnamed numbers.

  4. GB Says:

    semi-exponential

  5. D. Eppstein Says:

    Surd-exponential?

  6. mitchell porter Says:

    Supra-subexponential.

  7. John Moeller Says:

    An absurd suggestion: exporadical

    I’ll second David’s suggestion of “surd-exponential.” It’s much more elegant than “root-exponential,” which is what I was thinking.

  8. Shahab Says:

    Sublinear exponential?

  9. Wai Keen Says:

    How about bounded exponential?

  10. Chris Granade Says:

    Here’s one fraught with danger: nearly exponential.

  11. Antti Rasinen Says:

    This isn’t totally accurate, but sounds cool: root power exponential.

  12. anonymous Says:

    feebly exponential
    barely exponential

    or why not

    not quite exponential

  13. MattF Says:

    wannabe exponential

  14. anon Says:

    lols: wannabe exponential…

  15. JK Says:

    Why can’t we just redefine subexponential? (For what it’s worth, current factoring algorithms are usually described as being subexponential.)

  16. Robin Blume-Kohout Says:

    Okay, I’m gonna play Simplicio here (it comes naturally to physicists). What’s really wrong with “superpolynomial”? Whenever I’ve run into this kind of growth, I’ve been far more interested in the fact that it grows faster than any polynomial than in the fact that it grows slower than exp(n). So, if you’re genuinely interested in the latter rather than the former, I’d love to hear why.

    (N.B. Granted, I do feel like exp(polylog(n)) is substantially different from exp(poly(n)), even though they’re both superpolynomial, so maybe I’m on the wrong track there.)

    Anyway, if that doesn’t float your boat, I nominate the neologism “expolynomial” as a sexier definition for exp(n^alpha). Which doesn’t cover the distinction between alpha<1 and alpha>1, but I’m still not convinced that it’s truly an important one.

  17. Scott Says:

    JK: For me the reason is that, when we use the word “subexponential” in complexity theory (“If NP has subexponential-size circuits,” etc.), we’re generally thinking of “exponential” as meaning 2^(n^c) for an arbitrary c (maybe greater than 1, maybe less). By padding, every polynomial is equivalent to every other one anyway. 🙂 So I’d prefer to reserve “subexponential” to mean “less than an exponential, even under polynomial rescalings of n.”
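
    To spell out the padding step (a quick sketch of the standard trick):

        \[
          T(n) \;=\; 2^{\,n^{c}}
          \quad\Longrightarrow\quad
          T\!\left(n^{k}\right) \;=\; 2^{\,(n^{k})^{c}} \;=\; 2^{\,n^{ck}},
        \]

    so a polynomial re-encoding of the input slides c anywhere in (0, ∞). The only padding-robust reading of “subexponential” is thus a bound of at most 2^(n^c) for every fixed c > 0.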

  18. Scott Says:

    Robin: The people who worked for years to create factoring algorithms that run in ~2^(n^(1/3)) time, or graph isomorphism algorithms that run in ~2^√n time, care for obvious reasons about the difference between such growth rates and 2^n! As do the cryptanalysts who use (say) the Number Field Sieve for breaking RSA.

    Let me bring things closer to home (where home=QuantumLand): if someone developed even a 2^(n^0.999) quantum algorithm for 3SAT, I’d consider it the biggest breakthrough in quantum algorithms since Shor and Grover.

  19. Kurt Says:

    How about “quasi-exponential” (for “not quite exponential”)? It’s vague enough that it could mean almost anything, so if you’re the first to adopt it you get to decide. (Google tells me that this term is already used in econ and some areas of physics, but I haven’t seen it used in CS.)

    I also like “reduced-exponential”, although I suppose you could get some complaints that it would be confused with “exponential reductions”.

  20. Erik Says:

    I like the suggestion “semi-exponential”; it suggests that there is something exponential about the growth, but that it is less exponential than what we might normally call “fully exponential”. In the sense of “semi-” as meaning “to some extent” or “partial”, it’s a good match. And since “semi-” is also sometimes used to mean “half”, it’s connotative of the fact that 2^(n^(1/2)) would count as semi-exponential.

    Personally, I like to reserve “sub-” and “super-” to mean strictly “less than” and “greater than”, without denoting how much less or greater it is. For example, claiming that some algorithm is “subexponential” would indicate to me that it is strictly less than exponential, but wouldn’t rule out polynomial, linear, logarithmic, etc. It might be odd to use “subexponential” when you know it’s linear, but there are many circumstances in which you don’t know exactly which of the subexponential classes some problem belongs to or in which you don’t care.

    The only problem I could imagine is potential conflict with other uses of “semi-exponential”, if there are well-established uses. Has anyone encountered any? I did a quick Google search, but I found no uses for which the meaning became clear to me before I got bored.

  21. Paul Beame Says:

    Impagliazzo, Paturi, and Zane have already solved this problem. They used the term “weakly exponential” and address this question very explicitly in their paper “Which problems have strongly exponential complexity?”, in which they make the distinction robust.

    For many complexity classes (e.g. P, NP, EXP, L, etc.), a polynomial slop in the encoding of the input doesn’t change the class, so any exponential upper bound can be changed to 2^(n^a) for a < 1 by changing the problem encoding. (It does matter for E and NE, but E/NE are atypical.) Therefore people working on lower bounds usually call any lower bound of the form 2^(n^a) “exponential”. IPZ give reasons for making a distinction between the “weakly exponential” a < 1 and “strongly exponential” a ≥ 1. They define sub-exponential reductions that precisely highlight this distinction and introduce an interesting sparsification technique that allows them to prove, for example, that k-SAT is complete for SNP under these reductions: if it has a weakly exponential (or indeed sub-exponential) upper bound, then so does all of SNP.
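
    In symbols (a rough paraphrase of the IPZ distinction as described above, not their exact statement):

        \[
          \text{weakly exponential: } T(n) = 2^{n^{a}} \text{ for some } a < 1,
          \qquad
          \text{strongly exponential: } T(n) = 2^{\Omega(n)},
        \]

    and their sparsification-based reductions show that a weakly exponential (or subexponential) upper bound for k-SAT carries over to every problem in SNP.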

  22. Scott Says:

    Paul: The trouble is, Impagliazzo et al. use “weakly exponential” to mean 2^(n^ε) or greater, which is different from what we’re talking about here (in their terms, “weakly exponential but not strongly exponential”).

  23. Joe Fitzsimons Says:

    Pseudo-exponential would seem to make sense, since they look a lot like exponentials, but aren’t really the genuine article (quite literally falling short).

  24. tgm Says:

    I like some of the above suggestions (e.g., wilted, or wannabe, or barely exponential), but here are a few more suggestions: compressed exponential, shrinked exponential (these are inspired by the stretched exp terminology), inferior exponential.

  25. ejeffrey Says:

    I understand not wanting to contradict previous literature, but it seems silly to me to define weakly exponential as a superset of strongly exponential, rather than disjoint, and confusing to have an unrelated term mean weakly but not strongly.

    So I would say something like “only weakly exponential” to mean what you are looking for, and say “at least weakly exponential” to include strongly exponential. This would be consistent with the existing use while implying that weakly exponential should really only apply to the a<1 case.

  26. Robin Blume-Kohout Says:

    Scott: Okay, I’m provisionally convinced that the distinction is important. But… only provisionally, because it mostly comes down to me believing that you care, and trusting your expertise.

    E.g., the argument “The people who worked for years to create factoring algorithms that run in ~2^(n^(1/3)) time, or graph isomorphism algorithms that run in ~2^√n time, care for obvious reasons,” sounds a tad hollow. Some people have worked passionately for years on hackysack, fashion modeling, intelligent design, and baking the perfect souffle — but their existence isn’t a compelling argument why I should care. Is the implicit argument “They care because those algorithms solve practical problems faster?” (Not a trivial question — AFAIK Coppersmith-Winograd is interesting but not useful).

    Re: 3SAT, I’m totally prepared to be convinced here, but my ignorance is so staggering that I don’t know why an O(2^(n^0.99)) algorithm would be cool. Does it follow from (a) BBBV, and (b) some result showing a [convincing/plausible] classical lower bound of 2^O(n) for 3SAT?

    The “semi-exponential” camp is sounding persuasive to me.

  27. Richard Cleve Says:

    This reminds me of terminology that I heard way back in the 1980s for this: “fractional exponential”, or “frexponential” for short, for growth rates of the form 2^(n^f), where the f

  28. Job Says:

    Partial-Exponential. Faded-Exponential.

  29. Richard Cleve Says:

    My last post got truncated.

    The f is a “fraction”. If memory serves, this came up in the 1980s in the context of monotone lower bounds on the circuit complexity of certain monotone functions. I guess the terminology never caught on.

  30. Uri Says:

    exponential-on-crutches

  31. Job Says:

    I’ve also been affected by this “je ne sais quoi”. I think it’s because the weather has changed but it has also occurred to me that maybe we’re being harvested for computation by aliens.

  32. Job Says:

    Aliens or robots, or alien-robots. They make us compute while they’re living it up.

  33. Ravi Says:

    The term ‘fractional exponential’ is also used in Peter Cameron’s combinatorics text.

  34. matt Says:

    Coincidentally, I’m looking for some language help too. I’m a physicist getting interested in computational complexity, and I’ve noticed that often you can prove things about the complexity of algorithms which have access to certain black-box functions. I’m looking for a good term for such proofs. Author-of-the-blog Scott Aaronson, who’s always more fun than a cinder block combined with a reprimand, informs me that such proofs already have a name: relativizing. But

    1. I’ve never heard that term in my life,
    2. I don’t like it: it sounds like something related to relativity theory, and
    3. Having made up the strange word “algebrizing”, for Scott to now lecture others on this issue seems morally very relative.

    Seriously, Scott, the functions you’re talking about are indeed called stretched exponential. Maybe not in comp sci, but that’s a standard term in physics. It can be a little confusing whether stretched exponential means that the power is less than one or greater than one, for which I find the term “compressed exponential” invented above to be nice, but just think that you are stretching the function on the x-axis and then everything will make sense.

  35. Wim van Dam Says:

    Judging by its Wikipedia entry, “stretched exponentials” are used for decay rates exp(‘something negative’). Is it really common to use the same term for functions that increase?

  36. James Says:

    Here’s another suggestion: fractionally exponential

    (By the way, sub-exponential should mean anything less than exponential. But I guess this is what happens when you let computer scientists use mathematics unsupervised.)

  37. David Says:

    Exponentish, exponentiod, exponentian, exponentical, exponentialish

  38. Scott Says:

    Robin: The Number Field Sieve and the Babai-Luks graph isomorphism algorithm really are among the more impressive accomplishments of algorithms research. They solve central problems of the field, they use fancy techniques, and it’s completely non-obvious that such algorithms should be possible. And the NFS is actually used—at least by people who break current RSA implementations to show they’re not secure (and one assumes, by our friends at NSA as well).

    Yes, a 2^(n^0.999) quantum algorithm for 3SAT would “break the BBBV barrier,” and show that a more-than-polynomial speedup is possible for an NP-complete problem. As far as I know, no one has even ruled out that the adiabatic algorithm solves 3SAT in 2^√n time. So it’s conceivable, but seems like an extremely hard question to answer (even by numerical simulation, since we don’t have a quantum computer handy for the purpose).

  39. Luca Says:

    I suggest we call such functions “Scott Functions,” as in “the running time of the number field sieve factoring algorithm is upper bounded by a Scott function.”

  40. Greg Kuperberg Says:

    Having called 2^√n “subexponential” in his otherwise-great paper on a quantum algorithm for the Dihedral Hidden Subgroup Problem, for Greg to now lecture others on this issue seems like … stretching it.

    Guilty as charged, Scott. Although to be fair, I think that I admitted to this in the e-mail.

  41. af Says:

    Expunyential.

  42. subsubexp Says:

    How about ‘mid-exponential’ ?

  43. Robin Blume-Kohout Says:

    Scott: Okay, as of comment #38, I contentedly concede. I’m convinced and satisfied that semiexponential time is substantially, importantly different from exponential. On a meta-interesting level, I’m not precisely sure why I’m satisfied, but the key sentence seems to be “They solve central problems of the field, they use fancy techniques, and it’s completely non-obvious that such algorithms should be possible”.

    I’m actually a little bothered that the use of fancy techniques is what convinced me… but, then, fancy techniques are often surprisingly effective at revealing hidden structure. And your sentence brought the FFT algorithm to my mind, which “only” gives an n² → n log n speedup, but is clearly a Big Deal (and not just for practical reasons).

  44. Marcello Says:

    Polynomially slowed exponential.

  45. Raoul Ohio Says:

    “Near Exponential” kind of captures the idea, but I am going with “Root Exponential” (RE).

    Here is my reasoning:

    1. Simple sounding and immediately guessable. We refer in grade school to powers of the form 1/n as “roots”. A slight generalization of “form 1/n” to “smaller than 1” is not much of a stretch.

    2. Unlike “semi”, root is not overused for whatever generalization we are thinking of at the moment, such as: semi-linear, semi-simple, semi-crazy (check out Junior Brown song), semi-truck, semi-formal, semi-colon, Semi-nole, Simi-Valley, ….
    The point being that “root” actually has something to do with the issue.

    BTW, how many readers, upon first learning of P, stopped to figure out something in the gap between polynomial and exponential? It was probably obvious to most, but I had to stop listening to the lecture and calc for a couple minutes.

  46. Amit C Says:

    Forget 2^{n^a}… come up with a catchy name for growth of the form O(n log n). It’s ubiquitous in CS and yet I’ve never heard a better name for it than “enlogen”… ick! Unfortunately, quasi-linear is too general.

  47. milkshake Says:

    call it a pygmoid or pygmoidial. (I pray you don’t get hit with an arrow for it)

  48. Raoul Ohio Says:

    From names to notation: I am taking this opportunity to get on my soap box about notation:

    The CS and Discrete Math worlds have been greatly aided by many inspired notations revealed by St. Donald (Knuth, for the uninitiated). I think he generalized the Landau “Big O” to Big Omicron, and introduced Big Omega, Big Theta, and also floor and ceiling. We use them every day.

    I think he also introduced the underbar and overbar on powers for “falling powers” and “rising powers”. These might be hard to write on this blog; but e.g.,

    x^(3 underbar) = x (x-1) (x-2),
    x^(4 overbar) = x(x+1)(x+2)(x+3),
    n^(n underbar) = n!,
    Perm(n,k) = n^(k underbar),

    etc. They elegantly replace the kludgy Pochhammer symbol.

    Rising and falling powers are incredibly useful for calculations in combinatorics and hypergeometric functions, to name a few areas. For a good exercise to see how well they work, put on your power series hat and calc something like the antiderivative of a power times a 2F2 function. You discover it is not so hard, and that Gauss was a clever fellow when he set this stuff up.

    But rising and falling powers don’t seem to be getting any traction in math books. I call on all readers to fight the good fight by using them as called for. The resulting improvement in math notation just might get the world economy back on track!
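
    For the curious, here is a minimal sketch of these factorial powers in Python, checking the identities listed above:

        from math import factorial

        def falling(x, k):
            """Falling power x^(k underbar) = x(x-1)...(x-k+1)."""
            result = 1
            for i in range(k):
                result *= x - i
            return result

        def rising(x, k):
            """Rising power x^(k overbar) = x(x+1)...(x+k-1)."""
            result = 1
            for i in range(k):
                result *= x + i
            return result

        n, k = 7, 3
        assert falling(n, n) == factorial(n)                       # n^(n underbar) = n!
        assert falling(n, k) == factorial(n) // factorial(n - k)   # Perm(n,k) = n^(k underbar)
        assert rising(n, k) == falling(n + k - 1, k)               # overbar written via underbar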

  49. Paul Beame Says:

    The trouble is, Impagliazzo et al. use “weakly exponential” to mean 2^(n^ε) or greater, which is different from what we’re talking about here (in their terms, “weakly exponential but not strongly exponential”).

    Scott: They only defined the term in a parenthetical statement, in a sentence in which they use “weakly exponential” to describe a lower bound that is “merely” 2^(n^Ω(1)), in contrast to one that is “strongly exponential”, which they defined as 2^(Ω(n)). In this context, the “merely” implies that the bound is the former but not the latter, despite the apparent mathematical form of the bound.

    They are focused on lower bounds, which is why they write things this way, since a strong lower bound implies a weak lower bound too. However, there is no reason not to use the terminology to describe the class of functions you want. There would be no confusion with their use of the term.

  50. Ron Says:

    As with the “consequences of the French revolution” it’s too early to say. Perhaps “lowly/mildly exponential” although “fractionally exponential” would get my vote for now. But how can anyone be so sure?

    If this notion turns out to be incredibly important, then “critically exponential” might be appropriate; if its ubiquity stays at current levels, perhaps “fractionally exponential” or “weakly exponential” is the way to go, following precedent (depending on the bounding context; I agree “stretched exponential” seems too unnatural and domain-specific). If the notion turns out to continually provoke wasteful procrastination whenever it is considered, then perhaps the suggested “Scott Function” is the natural label?

    Given all this uncertainty before a future observational collapse, my entry would be “tentatively exponential” – this also captures an earnest attempt at being a “real exponential” – tentatively.

    By the way, does the following quote taken from some other blog page mean that the “Worldview Manager” is not yet operational?

    “… Mermin’s infamous insistence on spelling qubit “Qbit … A language consists of shared, often highly-irrational conventions that cannot be changed unilaterally. …”

    All this does, however, raise the interesting question as to when a convention is sufficiently irrational (obsolete, unworkable, etc.) that it should be changed (either unilaterally or multilaterally). Does every term have a natural PONR (point of no return) where the benefits of changing no longer outweigh the ongoing costs of a lack of historical foresight?
    I’ve always enjoyed TCS’s linguistic overtones; they seem to be a distinguishing characteristic of the field. It seems to be the only remaining hard science where the proofs contain more natural language than dense symbolism. Perhaps this is related to its maturity, although hopefully it is also inherent. The Merlins/Arthurs/Alices/Bobs/Oracles/Witnesses/X-Complete/X-Hard/X-Easy/Reductions/etc. really add to its readability and arguably its growth on so many different levels. Having said this, there are some anomalies, and I often wonder “How optimal is current TCS terminology?” No doubt this has been discussed before, but how about a competition for the most suboptimal/inappropriate/annoying terminology in TCS?
    My vote would go for the complete lack of systematization used in all the circuit complexity classes (AC, ACC, TC, NC etc )

    B.T.W., I think a blog (democratic?) consensus is not the worst way of seriously defining a notation; also, perhaps typesetting technology needs to be tweaked to incorporate placeholders instead of hard-wired notation. These could be used to automatically substitute a handful of varieties chosen by users’ preferences and made “repository modifiable” over time (in this way the PONR is skewed in favour of more frequent improvements).

  51. Thomas Says:

    Re: Amit C, Comment #46:

    A name for Theta(n log n) I’ve heard is “linearithmic”, a portmanteau of linear and logarithmic. Not quite sure I like it, but still. In any case, wikipedia knows about it ;).

    I’ve also heard “almost linear” defined as O(n log n), but this has the same problems as quasi-linear.

  52. JerboaKolinowski Says:

    Maybe “intraponential”, since the exponent is always within some bound? For the case where the exponent is almost n ( n^0.999…) perhaps “usqueponential” would be a useful term? 🙂

  53. Roland Bacher Says:

    What about “log-sublinear” or “log-convex” for (classes of)
    functions with sublinear, respectively convex logarithms?

  54. Paul Carpenter Says:

    Since the power is reduced by (1-α)%, I would call it exponential with (1-α) kryptonite.

  55. Paul Carpenter Says:

    There’s a typo in my previous comment – I do know how to do percentages really.

  56. tomate Says:

    A question:

    Are there problems which go like N^N? Is it substantially faster than exponential time? Does it have a name?

  57. Joe Fitzsimons Says:

    tomate: N^N = exp(N ln N)

  58. Scott Says:

    tomate: Yes, there are algorithms that run in N^N time, and even more that run in N! time (which is about (N/e)^N by Stirling’s formula). However, using some cleverness, it often turns out to be possible to make those algorithms run in exp(N) time instead.

    N^N = 2^(N log N); you can judge for yourself whether that’s “substantially faster” than 2^N. 🙂

    N^N growth rates don’t have a special name I’m aware of (except “N factorial,” which is usually how they arise).
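
    Spelling out the arithmetic (a quick sketch via Stirling’s formula):

        \[
          N^{N} = 2^{\,N \log_2 N},
          \qquad
          N! \;\approx\; \sqrt{2\pi N}\,\left(\tfrac{N}{e}\right)^{N}
              = 2^{\,N\log_2 N \;-\; N\log_2 e \;+\; O(\log N)},
        \]

    so both are 2^(Θ(N log N)): the same factorial-type growth at the level of the exponent’s leading term, and larger than 2^(cN) for any fixed c.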

  59. Greg Kuperberg Says:

    N^N growth rates don’t have a special name I’m aware of (except “N factorial,” which is usually how they arise).

    It’s natural enough to call any growth rate of the form 2^O(n*(log n)) “factorial growth”, or to say that a function “grows factorially” instead of “grows exponentially”. This is somewhat standard terminology in pure math, at least. If you are a complexity theorist it might be a moot distinction, if you call any 2^poly(n) “exponential growth”.

  60. Douglas Knight Says:

    I’ll third “surd-exponential.”

  61. Gil Says:

    I saw “mildly-exponential” being used several times

  62. Gil Says:

    (In fact I already commented about it in the memorable “favorite growth rates” post here: https://scottaaronson-production.mystagingwebsite.com/?p=263#comment-14069
    )

  63. Raoul Ohio Says:

    Order “n log n” is widely called “order n log n”. I don’t see why it is a good idea to come up with a new, confusing name for something that already has a widely used simple name. Has anyone ever been confused upon hearing “n log n”?

  64. Gil Says:

    hmm, surd-exponential is nice… (I did not know about this word but http://en.wikipedia.org/wiki/Nth_root )

  65. Jonathan Vos Post Says:

    I see that you’re attending as an invited workshop guest. Will you keep us informed, on this or another thread?

    John Preskill writes to me [Dave Bacon] about a workshop being quickly organized in response to the release of a report by the US National Science and Technology Council calling for a national initiative in quantum information science. I saw this report a while back and have some half-written blog posts about it that I need to finish off. Anyway, the workshop website is http://www.eas.caltech.edu/qis2009/index.html. Everyone in the quantum information science community is invited to attend, and the registration and deadlines are, like, almost now!

    Here is the blurb from the website:

    In January 2009, the United States National Science and Technology Council issued a report on A Federal Vision for Quantum Information Science. The report proposes that:

    The United States … create a scientific foundation for controlling, manipulating, and exploiting the behavior of quantum matter, and for identifying the physical, mathematical, and computational capabilities and limitations of quantum information processing systems in order to build a knowledge base for this 21st century technology.

    This Workshop on Quantum Information Science (QIS) has been organized in response to the NSTC report. It brings together leading theorists and experimenters drawn from physical science, computer science, mathematics, and engineering who will assess recent progress in QIS and identify major goals and challenges for future research.

    The workshop will include open evening sessions so that all participants can express their views concerning the priorities for a national QIS initiative. The workshop will be followed by a report that will be submitted to the federal agencies that sponsor or perform QIS research.

    Note: Deadline for workshop rate at the Marriott is Wednesday, April 8, 5:00pm EST.

    Posted by Dave Bacon at 12:57 PM

  66. Jonathan Vos Post Says:

    Agree with Gil @ 64: “surd-exponential is nice”

    But expect it to streamline to “sexponential.”

  67. Bram Cohen Says:

    Maybe you’d care for some commentary from Mary Poppins.

  68. KaoriBlue Says:

    I like S-ponential or maybe Sponential. ‘S’ for slow, squished, shoddy, etc. Judging from the previous comments, the ‘S’ could also stand for ‘sassed’.

  69. Ian Durham Says:

    > deep-fried Snickers bars for the Vitamin E in the peanuts

    Hey, now there’s an idea…

  70. Amit C Says:

    @Raoul Ohio #63: It is not about confusion. Almost anyone will prefer saying “linear/quadratic” to “order n/order n-squared”, and not because the latter forms are any less clear. Also, to a native speaker of English, the expression “order enlogen” does not sound like an adjective.

  71. Ghaith Says:

    Hi Scott,

    Anything to say about this

    http://www.math.rutgers.edu/~zeilberg/Opinion98.html

  72. Scott Says:

    Ghaith: It’s a good thing Zeilberger clearly indicated the date (April 1st)—otherwise, I might have read another one of his contrarian mathematical diatribes, and not even considered whether he was being facetious! 🙂

  73. Ghaith Says:

    Scott, I clearly knew he wasn’t serious about the proof. However, I thought he was serious about his opinion.

  74. Ari Stern Says:

    How about “convexponential,” a portmanteau of “convex” and “exponential”? The term “convex” instantly calls to mind a coefficient between 0 and 1, so it might be easier to remember than semi/pseudo/sub/stretched/etc.

  75. Scott Says:

    Ghaith, what I was trying to say is that, April 1st or not, it’s always hard for me to tell when Doron is serious. Like Charlie Rackoff and a few other people I know of, his general strategy seems to be

    (1) Look for the conventional wisdom,
    (2) Advocate the opposite.

    This is often a good strategy for having original ideas (which after all is academics’ job), but I don’t think it’s the strategy that maximizes the probability of being right.

    In the case of P vs. NP, yes, obviously it’s conceivable that P=NP and the algorithm takes n^(10^1000) steps. Anything not yet disproved is “conceivable” in the same sense: maybe Goldbach’s Conjecture has exactly 42 counterexamples, the Riemann Hypothesis is independent of ZFC, and aliens are visiting the earth to mutilate cattle. In all these cases, it’s not a question of what’s been proved; it’s a question of what it’s fruitful to spend time thinking about. What Doron criticizes (facetiously or not) as the “flawed dogma of contemporary computer science,” that polynomial=easy, has led to hundreds of profound insights. By contrast, speculation about n^(10^1000) SUBSET SUM algorithms has led to very little that I’m aware of. I’m willing to be convinced that I’m wrong, but as with anything else, it will take results, not a priori philosophical arguments.

  76. Jon Says:

    I think it is a very good thing that there is *not* an established term for this in computer science. It forces people to define their terms, and reduces unnecessary jargon in the field. So call it whatever you like.

    If we absolutely must have an established term, though, then I would suggest “subexponential” for upper bounds, and “nearly exponential” for lower bounds. The mathematicians would hate it.

  77. Anand Nalya Says:

    Infra-exponential or iExponential?

  78. Pat Cahalan Says:

    @ Kaori

    “Sponential”

    I like it, but in spoken conversation it would be ambiguous.

  79. Douglas Knight Says:

    Jon,
    Using words to indicate whether you’re talking about upper bounds or lower bounds is good, but sometimes you want to be more specific. When you describe the best known algorithm, it’s both an upper bound on the problem and a lower bound on the current state of knowledge. Usually people want to emphasize only one part of that and they should make that emphasis clearer, but sometimes they talk about both or about particular exponents, as in comparing different factoring algorithms.

  80. Jonathan Baxter Says:

    soft-exponential

    Inspired by Java which has WeakReference and SoftReference.
    weak-exponential is already taken, so…

    [analogy is not perfect since a SoftReference is strictly stronger than a WeakReference in Java, but if you define the power of a reference in the negative, i.e. in terms of “unreachability”, then soft references are less (unreachable) than weak references.]

  81. Paul Carpenter Says:

    Jon – nothing you guys do will make us hate you, we are full of love. Love and coffee.

  82. Timothy Chow Says:

    Regarding the problem of telling when Doron is joking: Except for his April 1 messages, Doron’s stated opinions, however unlikely they seem to be, are always at least semi-serious (weakly serious? sub-serious? stretched?).

    Regarding whether the contemplation of O(n^huge) algorithms for SAT is worth anything, I once heard Noga Alon give a lecture (about property testing, I think) in which he said that in the course of proving his main result, he found himself briefly considering the possibility that his methods might prove that P = NP. The reason he considered it possible was that he was getting large exponents. Now you could argue that Noga’s mental digression here was not, in fact, fruitful. Nevertheless, it seems to me that it’s worth keeping the O(n^huge) possibility in the back of one’s mind just in case it does turn out to be true. It would be embarrassing to overlook the proof just because of a psychological blindspot.

    Martin Davis remarked in an interview that part of the reason for his big contribution to Hilbert’s Tenth Problem was his willingness to consider the possibility that high-degree Diophantine equations could capture arbitrary computations. According to Davis, most of his contemporaries had a bit of a blindspot here because their intuition was overly influenced by their experience with low-degree polynomials. For this reason Davis is 50/50 on the P vs. NP question: He feels that we simply don’t have enough intuition about what O(n^huge) algorithms can accomplish. Just because the equation “P = feasible” is useful doesn’t mean that we shouldn’t try to think outside the box occasionally.

  83. Jair Says:

    How about: lessponential.

  84. Jan Says:

    Well, shamelessly “borrowing” (more accurately: consciously misinterpreting) from complex analysis I would suggest: an exponential of order less than one.

    But you complexity theorists seem to already have taken “order” to mean something different… 😉

  85. Jan Says:

    Or one could go the way of the “-oids”: exponentioid.

  86. Christopher H Says:

    Expolynential

  87. Gerke Says:

    dampened exponential?

  88. Greg Kuperberg Says:

    Google Scholar vindicates the term “stretched exponential” in physics and somewhat vindicates “mildly exponential” in computer science. The former term has 13,000 hits, generally condensed matter physics papers but also some networking papers (Jon Kleinberg type stuff). The latter term has 92 hits, which is not a huge number but enough to get by, especially since they are all cs theory papers. Note that both of these numbers overestimate distinct hits by a factor of 3/2 or 2, but may also miss old papers that have not been digitized.

    The bad news from Google Scholar is that subexponential is already widely used in cs theory as well, and very often means 2^O(n^c) for c < 1 (or O-tilde). “Subexponential time” (avec quotes) has 1,600 hits in Google Scholar and “subexponential algorithm” has hundreds more. One notable paper was on subexponential time algorithms for NP-hard problems, which must mean stretched or mildly exponential time, barring a subexponential miracle for 3-SAT.

    I honestly don’t think that these barbarous neologisms help much. Unless of course they are the real point of the discussion, in which case they are all great!

  89. Michael Luvaul Says:

    I’ll echo the thoughts of many previous posters regarding convex being part of the name. Convexponential or something of that sort would do just fine.

  90. Joe Shipman Says:

    We also need a name for functions f such that f(f(x)) has exponential growth, or f(f(f(x))) does, etc.

    I think those functions are MUCH more interesting than functions of the form 2^(n^a) since the latter class is equivalent to exponential if you allow a polynomial amount of “slop” in coding, as has been pointed out earlier on this thread.

  91. Jonathan Baxter Says:

    I don’t know too many portmanteau words that have caught on in a scientific field, so I would vote against things like “convexponential”. (my favorite portmanteau word “travelocity” may have a lot to do with conference travel, but that doesn’t count…)

  92. Greg Kuperberg Says:

    We also need a name for functions f such that f(f(x)) has exponential growth, or f(f(f(x))) does, etc.

    I think those functions are MUCH more interesting than functions of the form 2^(n^a) since the latter class is equivalent to exponential if you allow a polynomial amount of “slop” in coding, as has been pointed out earlier on this thread.

    I agree with you, Joe, that if f is a square root of 2^x under composition, then that is a much more interesting growth rate than 2^sqrt(x). But as it happens, the lion’s share of interesting algorithms that accelerate problems in E, but not all the way down to P, have the latter form. So you’re saying that a kind of Easter egg that is almost never found is much prettier than another kind of Easter egg that is still nice and is found from time to time.

    Polynomial reductions play a different role for hard complexity classes than for easy complexity classes. For instance, let f(x) = 2^Õ(x^(1/3)) be the time complexity of factoring. It is true that f(poly(x)) can be simple exponential or worse. But that just means that it’s not exciting to factor a number whose digits are polynomially expanded from other data. It is still exciting to factor numbers with these algorithms! For one thing, the other kind of composition, poly(f(x)), is always smaller than simple exponential. Without subexponential factoring algorithms (in the sense of stretched/mildly exponential), factoring a number with 100 digits would be unthinkable.

    Of course if f(x) itself is polynomial or better, then f(poly(x)) and poly(f(x)) can both look good.
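
    To see the asymmetry concretely, a toy calculation with f(x) = 2^(x^(1/3)) (dropping the log factors in the real bound):

        \[
          f\!\left(x^{3}\right) = 2^{\,x}
          \qquad\text{(padding the input wipes out the advantage),}
        \]
        \[
          f(x)^{k} = 2^{\,k\,x^{1/3}}
          \qquad\text{(polynomial post-processing does not),}
        \]

    which is why a 2^Õ(x^(1/3)) factoring algorithm is genuinely useful even though the class of such running times is not closed under polynomial re-encodings of the input.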

  93. Scott Says:

    Joe: Yes, those functions have been some of my favorites since I was 12 or 13! And they do arise in complexity theory: for example, in circuit lower bounds for the class MAEXP. But they already have a name, which (at least so far) I’ve found pretty workable in practice: “half-exponential.” We had a whole discussion thread about half-exponentials two summers ago—see here.

  94. Howard Barnum Says:

    I tend to use “near-exponential” for this…

  95. Tal Says:

    “Compressed exponential”,
    polynomially compressed exponential,
    polynomially reduced exponential.