My diavlog with Anthony Aguirre

July 24th, 2010

Bloggingheads has just posted an hour-long diavlog between the cosmologist Anthony Aguirre and your humble blogger.  Topics discussed include: the anthropic principle; how to do quantum mechanics if the universe is so large that there could be multiple copies of you; Nick Bostrom’s “God’s Coin Toss” thought experiment; the cosmological constant; the total amount of computation in the observable universe; whether it’s reasonable to restrict cosmology to our observable region and ignore everything beyond that; whether the universe “is” a computer; whether, when we ask the preceding question, we’re no better than those Renaissance folks who asked whether the universe “is” a clockwork mechanism; and other questions that neither Anthony, myself, nor anyone else is really qualified to address.

There was one point that was sort of implicit in the discussion, but I noticed afterward that I never said explicitly, so let me do it now.  The question of whether the universe “is” a computer, I see as almost too meaningless to deserve discussion.  The reason is that the notion of “computation” is so broad that pretty much any system, following any sort of rules whatsoever (yes, even non-Turing-computable rules), could be regarded as some sort of computation.  So the right question to ask is not whether the universe is a computer, but rather what kind of computer it is.  How many bits can it store?  How many operations can it perform?  What’s the class of problems that it can solve in polynomial time?

My philomath project: Sensitivity versus block-sensitivity

July 13th, 2010

If you like math, and you don’t yet have a Math Overflow account, stop reading this post now (not right now, but by the end of the sentence) and set one up, before returning here to finish reading the post.  Math Overflow is the real deal: something that I’ve missed, dreamed about, and told my friends someone ought to set up for the last fifteen years, and that now finally actually exists.  (It was founded by Berkeley grad students and postdocs Anton Geraschenko, David Brown, and Scott Morrison.)  If you have a research-related math problem you can’t solve, you can post it there and there’s a nontrivial chance someone will solve it (or at least tell you something new), possibly within eleven minutes.  If you’re an ambitious student looking for a problem to solve, you can go there and find one (or a hundred).

To take one example, here’s a terrific complexity question asked by Timothy Gowers, about a notion of “average-case NP-completeness” different from the usual notions (if you think he’s asking about a well-studied topic, read the question more carefully).  I didn’t have a good answer, so I wrote a long, irrelevant non-answer summarizing what’s known about whether there are average-case NP-complete problems in the conventional sense.

But my real topic today is the sensitivity versus block-sensitivity problem, which I recently posted to MO in a disguised (and, dare I say, improved) form.

For non-Boolean-function-nerds, sensitivity vs. block-sensitivity is a frustrating and elusive combinatorial problem, first asked (as far as I know) by Noam Nisan and by Nisan-Szegedy around 1991.  Here’s a lovely paper by Claire Kenyon and Samuel Kutin that gives background and motivation as well as partial results.

Briefly, let f:{0,1}^n→{0,1} be a Boolean function, with n input bits and 1 output bit.  Then given an input x=x_1…x_n to f, the sensitivity of x, or s_x(f), is the number of bits of x that you can flip to change the value of f.  The sensitivity of f is s(f) = max_x s_x(f).  Also, the block-sensitivity of an input x, or bs_x(f), is the maximum number of disjoint sets of bits of x (called “blocks”) such that flipping any one of those sets changes the value of f, and the block-sensitivity of f is bs(f) = max_x bs_x(f).  Clearly 1 ≤ s(f) ≤ bs(f) ≤ n for every non-constant Boolean function f.  (bs(f) is at least s(f), since you could always just take each block to have size 1.)

To give some examples, the n-bit OR function satisfies s(OR)=bs(OR)=n, since the all-zeroes input is sensitive to flipping any of the n input bits.  Likewise s(AND)=bs(AND)=n, since the all-ones input is sensitive to flipping any of the bits.  Indeed, it’s not hard to see that s(f)=bs(f) for every monotone Boolean function f.  For non-monotone Boolean functions, on the other hand, the block-sensitivity can be bigger.  For example, consider the “sortedness function”, a 4-input Boolean function f that outputs 1 if the input is 0000, 0001, 0011, 0111, 1111, 1110, 1100, or 1000, and 0 otherwise.  Then you can check that bs(f) is 3, whereas s(f) is only 2.
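For tiny n, both quantities can be checked by brute force, which is a handy way to test conjectured examples.  Here’s a sketch in Python (the function names and encoding are mine, not from any paper):

```python
from itertools import product, combinations

def sensitivity(f, n):
    # s(f): max over inputs x of the number of single-bit flips that change f(x)
    def s_at(x):
        return sum(f(x[:i] + (1 - x[i],) + x[i+1:]) != f(x) for i in range(n))
    return max(s_at(x) for x in product((0, 1), repeat=n))

def block_sensitivity(f, n):
    # bs(f): max over x of how many disjoint blocks of bits each flip f(x)
    def flip(x, block):
        return tuple(1 - b if i in block else b for i, b in enumerate(x))
    def best_from(x, free):
        # max number of disjoint sensitive blocks drawn from the 'free' bits
        best = 0
        for r in range(1, len(free) + 1):
            for block in combinations(free, r):
                if f(flip(x, block)) != f(x):
                    rest = tuple(i for i in free if i not in block)
                    best = max(best, 1 + best_from(x, rest))
        return best
    return max(best_from(x, tuple(range(n))) for x in product((0, 1), repeat=n))

# the 4-bit sortedness function from the text
SORTED = {(0,0,0,0), (0,0,0,1), (0,0,1,1), (0,1,1,1),
          (1,1,1,1), (1,1,1,0), (1,1,0,0), (1,0,0,0)}
sortedness = lambda x: 1 if x in SORTED else 0
```

For the sortedness function this reports s(f)=2 and bs(f)=3, matching the claim above; the doubly-exponential search is of course only viable for very small n.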

Here’s the question: What’s the largest possible gap between s(f) and bs(f)?  Are they always polynomially related?

What makes this interesting is that block-sensitivity is known to be polynomially related to a huge number of other interesting complexity measures: the decision-tree complexity of f, the certificate complexity of f, the randomized query complexity of f, the quantum query complexity of f, the degree of f as a real polynomial, you name it.  So if, as is conjectured, sensitivity and block-sensitivity are polynomially related, then sensitivity—arguably the most basic of all Boolean function complexity measures—ceases to be an outlier and joins a large and happy flock.

The largest known gap between sensitivity and block-sensitivity is quadratic, and is achieved by “Rubinstein’s function.”  To define this function, assume for simplicity that n is an even perfect square, and arrange the input bits into a √n-by-√n square grid.  Then we’ll set f(x)=1 if and only if there exists a row that has two consecutive 1’s and all of its other entries equal to 0.  You can check that bs(f)=n/2 (for consider the all-zeroes input), whereas s(f) is only 2√n (the worst case is when every row contains exactly one 1).
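Both witnesses are easy to verify numerically.  Here’s a sketch for n=36, under the per-row reading of the definition (all variable names mine); it only checks the two witnesses, since brute-forcing the true maxima would take 2^36 evaluations:

```python
def rubinstein(x, m):
    # Rubinstein's function on an m-by-m grid (n = m*m bits):
    # f(x)=1 iff some row contains exactly two adjacent 1s and 0s elsewhere in that row
    for r in range(m):
        row = x[r*m:(r+1)*m]
        if sum(row) == 2:
            i, j = [k for k, b in enumerate(row) if b]
            if j == i + 1:
                return 1
    return 0

def flip(x, block):
    return tuple(1 - b if i in block else b for i, b in enumerate(x))

m = 6; n = m * m                      # n = 36, sqrt(n) = 6
zeros = (0,) * n

# n/2 disjoint blocks (adjacent pairs), each of which flips f at the all-zeroes input
blocks = [(r*m + 2*j, r*m + 2*j + 1) for r in range(m) for j in range(m // 2)]
assert len(blocks) == n // 2
assert all(rubinstein(flip(zeros, b), m) == 1 for b in blocks)

# an input with one interior 1 per row is sensitive to exactly 2*sqrt(n) bits
y = tuple(1 if i % m == 2 else 0 for i in range(n))
sensitive = sum(rubinstein(flip(y, (i,)), m) != rubinstein(y, m) for i in range(n))
assert sensitive == 2 * m
```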

It’s a reasonable guess that Rubinstein’s function gives pretty much the largest gap possible, and how hard could that possibly be to prove?  Well, how hard could a white rabbit in front of a cave possibly be to kill?

I’ll confess to going on sensitivity versus block-sensitivity binges every couple of years since I first learned about this problem as an undergraduate at Cornell.  The last binge occurred this weekend, triggered by the strange block-sensitivity properties of my counterexample to the GLN Conjecture.  And that’s when it occurred to me to use the hyper-inter-network tools of Web 2.0, together with my power and influence here at Shtetl-Optimized, to unleash a new flood of activity on the problem.  There are at least four factors that make this problem well-suited to a collaborative math project:

  1. The statement can be understood by almost anyone.  I could explain it to my parents.
  2. It seems unlikely (though not impossible) that the solution will require any heavy-duty math.  What seems needed, rather, is lots of creativity to come up with new ideas specific to the problem at hand, as well as diabolical examples of Boolean functions that refute those ideas.
  3. Even though the problem has been around for 20 years, the relevant literature is very small (maybe half a dozen papers); it would take at most a day to learn everything known about the problem.
  4. Despite 1-3, this is a real problem that a significant number of people would care about the answer to.

If you feel like you want a new angle on the problem—something that hasn’t already been explored to death, or even to serious injury—you can try my “geometric variant” of sensitivity vs. block sensitivity described on Math Overflow.

I’m calling this a “philomath project,” a term that pays tribute to the successful polymath projects popularized by (and carried out on) Timothy Gowers’ wonderful blog, but that avoids infringing on a registered trademark of GowersCorp.

So, here are the philomath project rules: do you have an idea about sensitivity vs. block sensitivity?  Or a vague pseudo-idea?  Or a proposal for an easier variant?   Then post it here!  Or go over to Math Overflow and post it there.  Let’s see if a block of us acting in unison can flip this problem.

The Generalized Linial-Nisan Conjecture is false

July 11th, 2010

In a post a year and a half ago, I offered a prize of $200 for proving something called the Generalized Linial-Nisan Conjecture, which basically said that almost k-wise independent distributions fool AC0 circuits.  (Go over to that post if you want to know what that means and why I cared about it.)

Well, I’m pleased to report that that’s a particular $200 I’ll never have to pay.  I just uploaded a new preprint to ECCC, entitled A Counterexample to the Generalized Linial-Nisan Conjecture.  (That’s the great thing about research: no matter what happens, you get a paper out of it.)

A couple friends commented that it was wise to name the ill-fated conjecture after other people rather than myself.  (Then again, who the hell names a conjecture after themselves?)

If you don’t feel like downloading the ECCC preprint, but do feel like scrolling down, here’s the abstract (with a few links inserted):

In earlier work, we gave an oracle separating the relational versions of BQP and the polynomial hierarchy, and showed that an oracle separating the decision versions would follow from what we called the Generalized Linial-Nisan (GLN) Conjecture: that “almost k-wise independent” distributions are indistinguishable from the uniform distribution by constant-depth circuits. The original Linial-Nisan Conjecture was recently proved by Braverman; we offered a $200 prize for the generalized version. In this paper, we save ourselves $200 by showing that the GLN Conjecture is false, at least for circuits of depth 3 and higher.
As a byproduct, our counterexample also implies that Π2^p ⊄ P^NP relative to a random oracle with probability 1. It has been conjectured since the 1980s that PH is infinite relative to a random oracle, but the best previous result was NP≠coNP relative to a random oracle.
Finally, our counterexample implies that the famous results of Linial, Mansour, and Nisan, on the structure of AC0 functions, cannot be improved in several interesting respects.

To dispel any confusion, the $200 prize still stands for the original problem that the GLN Conjecture was meant to solve: namely, giving an oracle relative to which BQP is not in PH.  As I say in the paper, I remain optimistic about the prospects for solving that problem by a different approach, such as an elegant one recently proposed by Bill Fefferman and Chris Umans.  Also, it’s still possible that the GLN Conjecture is true for depth-two AC0 circuits (i.e., DNF formulas).  If so, that would imply the existence of an oracle relative to which BQP is not in AM—already a 17-year-old open problem—and net a respectable $100.

Doing my oracle duty

July 5th, 2010

I promised myself I’d stop blogging about controversial issues whose mere mention could instigate a flamewar and permanently get me in trouble.  Well, today I’m going to violate that rule, by blogging about the difference between relativized and unrelativized complexity classes.

Recently a colleague of mine, who works in the foundations of quantum mechanics, sent me a long list of questions about the seminal 1993 paper of Bernstein and Vazirani that introduced the complexity class BQP (Bounded-Error Quantum Polynomial-Time).  It was clear to me that all of his questions boiled down to a single point: the distinction between the relativized and unrelativized worlds.  This is an absolutely crucial distinction that trips up just about everyone when they’re first learning quantum computing.

So I fired off a response, which my colleague said he found extremely helpful.  It then occurred to me that what one person found helpful, another might as well—and that which makes 30% of my readers’ eyes glaze over with its thoroughgoing duh-obviousness might be the very thing that another 30% of my readers most want to see.  So without further ado, the two worlds of quantum complexity theory…

In the relativized world, we let our algorithms access potentially-powerful oracles, whose internal structure we don’t examine (think of Simon’s algorithm for concreteness).  In that world, we can indeed prove unconditionally that BPP≠BQP—that is, quantum computers can solve certain problems exponentially faster than classical computers, when both computers are given access to the same oracle.

In general, almost every “natural” complexity class has a relativized version associated with it, and the relativized versions tend to be much easier to separate than the unrelativized versions (it’s basically the difference between a master’s or PhD thesis and a Fields Medal!)  So for example, within the relativized world, we can separate not only BPP from BQP, but also P from NP, NP from PSPACE, NP from BQP, etc.

By contrast, in the unrelativized world (where there are no oracles), we can’t separate any complexity classes between P and PSPACE.  Doing so is universally recognized as one of the biggest open problems in mathematics (in my opinion, it’s far-and-away the biggest problem).

Now, Bernstein and Vazirani proved that BQP is “sandwiched” between P and PSPACE.  For that reason, as they write in their paper, one can’t hope to prove P≠BQP in the unrelativized world without also proving P≠PSPACE.

Let’s move on to another major result from Bernstein and Vazirani’s paper, namely their oracle separation between BPP and BQP.  You might wonder: what’s the point of proving such a thing?  Well, the Bernstein-Vazirani oracle separation gave the first formal evidence that BQP “might” be larger than BPP.  For if BPP equaled BQP relative to every oracle, then in particular, they’d have to be equal relative to the empty oracle—that is, in the unrelativized world!

(The converse need not hold: it could be the case that BPP=BQP, despite the existence of an oracle that separates them.  So, again, separating complexity classes relative to an oracle can be thought of as a “baby step” toward separating them in the real world.)
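To make the Bernstein-Vazirani separation concrete, here’s a classical simulation sketch of their quantum algorithm (all code and naming mine): given one query to the phase oracle |x⟩ → (−1)^{s·x}|x⟩, it recovers the hidden string s exactly, whereas any classical algorithm needs n queries.

```python
def hadamard_all(amps, n):
    # apply a Hadamard gate to each of the n qubits (Walsh-Hadamard transform)
    a = list(amps)
    for q in range(n):
        step = 1 << q
        for i in range(0, len(a), 2 * step):
            for j in range(i, i + step):
                u, v = a[j], a[j + step]
                a[j], a[j + step] = (u + v) / 2 ** 0.5, (u - v) / 2 ** 0.5
    return a

def bernstein_vazirani(s_bits):
    # simulate the 1-query algorithm: H^n, phase oracle, H^n, measure
    n = len(s_bits)
    N = 1 << n
    amps = [0.0] * N
    amps[0] = 1.0                      # start in |00...0>
    amps = hadamard_all(amps, n)       # uniform superposition
    s_int = int("".join(map(str, s_bits)), 2)
    # one oracle query: |x> -> (-1)^{s.x} |x>
    amps = [a * (-1) ** bin(x & s_int).count("1") for x, a in enumerate(amps)]
    amps = hadamard_all(amps, n)       # interference concentrates everything on |s>
    outcome = max(range(N), key=lambda x: abs(amps[x]))
    return [(outcome >> (n - 1 - i)) & 1 for i in range(n)]
```

Of course the simulation takes 2^n time on a classical computer; the point is only to see how a single quantum query suffices in the relativized world.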

But an even more important motivation for Bernstein and Vazirani’s oracle separation is that it led shortly afterward to a better oracle separation by Simon, and that, in turn, led to Shor’s factoring algorithm.

In a sense, what Shor did was to “remove the oracle” from Simon’s problem.  In other words, Shor found a concrete problem in the unrelativized world (namely factoring integers), which has a natural function associated with it (namely the modular exponentiation function, f(r) = x^r mod N) that one can usefully treat as an oracle.  Treating f as an oracle, one can then use a quantum algorithm related to Simon’s algorithm to find the period of f, and that in turn lets you factor integers in polynomial time.
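The classical half of that story is easy to demonstrate: once you know the period r of f(r) = x^r mod N, greatest common divisors hand you factors of N.  A toy sketch (with brute-force period-finding standing in for the quantum part, and assuming gcd(x,N)=1):

```python
from math import gcd

def period(x, N):
    # smallest r > 0 with x^r = 1 (mod N): the step a quantum computer does
    # quickly; brute force stands in for it here (assumes gcd(x, N) == 1)
    r, v = 1, x % N
    while v != 1:
        v = (v * x) % N
        r += 1
    return r

def factor_from_period(x, N):
    # Shor's classical post-processing: if r is even and x^(r/2) != -1 (mod N),
    # then gcd(x^(r/2) - 1, N) and gcd(x^(r/2) + 1, N) are nontrivial factors
    r = period(x, N)
    if r % 2 == 0 and pow(x, r // 2, N) != N - 1:
        h = pow(x, r // 2, N)
        return sorted((gcd(h - 1, N), gcd(h + 1, N)))
    return None   # unlucky choice of x; pick another and retry

# factor_from_period(7, 15) -> [3, 5]
```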

Of course, Shor’s algorithm became much more famous than Simon’s algorithm, since the implications for computer science, cryptography, etc. were so much more concrete and dramatic than with an abstract oracle separation.  However, the downside is that the speedup of Shor’s algorithm is no longer unconditional: for all anyone knows today, there might also be a fast classical algorithm to factor integers.  By contrast, the speedup of Simon’s algorithm (and of Bernstein-Vazirani before it) is an unconditional one.

Fighting Hype with Hype

June 6th, 2010

I’ve been depressed all month about the oil spill.  So what better to cheer me up than a flurry of comments and emails asking me to comment on an Ars Technica story by Chris Lee, reporting that it’s now been proven once and for all that quantum computers don’t help with NP-complete problems?

Now, just to really put the screws on any optimists out there, a new paper has shown that adiabatic computers are actually quite bad at hard math problems …

What [the researchers] have shown is that, when adiabatic quantum computers are used to solve NP-complete problems, the energy gap between the lowest energy state and the next state up is not well behaved. Instead, it narrows faster than exponentially, meaning the adiabatic quantum computing cannot, even in principle, solve NP-complete problems faster than a classical computer …

In the end, they conclude that NP-complete problems are just as hard on an adiabatic quantum computer as on a classical computer. And, since earlier work showed the equivalence between different variants of quantum computers, that pretty much shuts down the possibility of any quantum computer helping with NP-complete problems.

I don’t think anyone in the field will be particularly surprised by this. The failure of earlier work to show that quantum computers offered a speed-up on any NP-complete problem indicated that it was likely that it simply was not possible.

I’m heartened by the progress we’ve made these last ten years: from overhyped and misleading claims that quantum computers can solve NP-complete problems in polynomial time, to overhyped and misleading claims that they can’t.

The link to the paper from the article is broken, and the article doesn’t give the names of the researchers involved, but from the context, I’m pretty sure the article’s attempting to talk about this paper by Boris Altshuler, Hari Krovi, and Jeremie Roland, amusingly entitled “Anderson localization casts clouds over adiabatic quantum optimization.”  This paper really is an interesting and important one—but alas, the Ars Technica story grossly misstates and exaggerates what it does.

For what I hope will be the last time, but I’m sure won’t: yes, almost everyone in the field believes it’s true that quantum computers can’t solve NP-complete problems in polynomial time.  But we have no idea at present how to prove anything of the kind.  In fact, we don’t even know how to prove classical computers can’t solve NP-complete problems in polynomial time (that’s called the P vs. NP question; maybe you’ve heard of it!).  Nor do we even know how to prove a conditional statement, like “quantum computers can’t solve NP-complete problems in polynomial time unless classical computers can also.”  Any such result would be the biggest advance in theoretical computer science at least since I was born.

So then what do Altshuler, Krovi, and Roland do?  They consider a specific quantum algorithm—namely, the quantum adiabatic algorithm with linear interpolation—applied to random instances of an NP-complete problem, namely Exact Cover.  They then argue, based on a combination of numerical simulations and perturbation theory approximation, that the spectral gap decreases exponentially (actually, like 1/n!), which would imply that the adiabatic algorithm generally requires exponential time to reach the ground state.
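To illustrate the quantity being estimated: here’s a sketch computing the minimum spectral gap along the linear interpolation H(s) = (1−s)H_B + s·H_P, for a tiny made-up diagonal cost function (not Exact Cover; everything here is an invented toy, and the adiabatic runtime scales roughly like 1/gap²):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def op_on(q, n, op):
    # `op` acting on qubit q out of n, identity elsewhere
    return reduce(np.kron, [op if i == q else I2 for i in range(n)])

def min_gap(H_B, H_P, steps=101):
    # smallest gap between the ground and first excited energies
    # along the linear interpolation H(s) = (1-s) H_B + s H_P
    gaps = []
    for s in np.linspace(0.0, 1.0, steps):
        evals = np.linalg.eigvalsh((1 - s) * H_B + s * H_P)
        gaps.append(evals[1] - evals[0])
    return min(gaps)

n = 3
# transverse-field beginning Hamiltonian: ground state = uniform superposition
H_B = sum((np.eye(2 ** n) - op_on(q, n, X)) / 2 for q in range(n))
# toy diagonal "problem Hamiltonian" with a unique minimum at z = 5
cost = [0.0 if z == 5 else 1.0 + bin(z).count("1") for z in range(2 ** n)]
H_P = np.diag(np.array(cost))
g = min_gap(H_B, H_P)
```

What Altshuler et al. argue is that, for random Exact Cover instances, the analogue of g shrinks like 1/n! as n grows; for this 3-qubit toy it stays comfortably bounded.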

If that sounds pretty interesting, you’re right!  But what’s the fine print?  Well, let’s accept, for the sake of argument, Altshuler et al.’s claim that their conclusions about Exact Cover would likely generalize to 3SAT and other standard NP-complete problems.  Even then, there are three crucial caveats, all of which the Ars Technica story ignores:

  1. Most importantly, the limitation (if it is one) applies only to one specific algorithm: namely the adiabatic optimization algorithm (with a specific interpolation schedule, but let’s ignore that for now).  Now, some people seem to think a limitation on the adiabatic algorithm implies a limitation of quantum computers in general, since “adiabatic is universal”—a buzzphrase that’s caused a lot of confusion.  In reality, what Aharonov et al. proved, in a beautiful paper six years ago, is that the adiabatic model of computation is universal.  But they were talking about something much more general than the adiabatic optimization algorithm. For example, the ground state of Aharonov et al.’s adiabatic process is not the solution to a combinatorial optimization problem, but rather a “history state” that encodes an entire computation itself.
  2. The Altshuler et al. paper talks about random instances of the Exact Cover problem—but the uniform distribution over instances is just one particular distribution.  Even if the adiabatic algorithm doesn’t help there, it’s possible that there are other natural distributions over instances for which it exponentially outperforms (say) classical simulated annealing.
  3. Finally, even given the above two caveats, Altshuler et al. only show that the adiabatic algorithm fails on random Exact Cover instances at a “physics level of rigor.”  In other words, their argument relies on a “perturbative approximation” that seems plausible but isn’t proved.  A cynic might retort that, at a “physics level of rigor,” we also know that P≠NP!  But such a cynic would be unfair.  I don’t want to knock Altshuler et al.’s contribution.  For almost two decades, there’s been a spectacularly fruitful interaction between the physicists and the math/CS folks in the study of random constraint satisfaction problems.  Indeed, many of the conjectures (or, in physics lingo, “results”) in this area that the physicists derived using their hocus-pocus methods, were later rigorously confirmed by the mathematicians, and I don’t know of any that were disconfirmed.  Even so, the distinction between a proof and a “physics proof” is one that seems worth insisting on—especially in theoretical computer science, an area that’s often far removed from conventional “physical intuition.”

In summary, while it feels like swimming through a burning oil slick to say so, I have to side with D-Wave about the Ars Technica piece (though my reasons are mostly different).

So congratulations, Ars Technica!  Like The Economist before you, you’ve successfully cast clouds over yourself when reporting about stuff you don’t understand.

PS. I’m at STOC’2010 right now, in exotic Cambridge, MA.  If you’re interested, here’s the talk I gave this morning, on “BQP and the Polynomial Hierarchy,” and here’s the talk my PhD student Andy Drucker gave, on “A Full Characterization of Quantum Advice.”  And Lance, please stop looking over my shoulder!

The Prince of Nerds has left us

May 24th, 2010

What’s taking so long, Mr. Babbage?

May 22nd, 2010

Recently a journalist asked me why we don’t yet have quantum computers.  Since I get that question, in some form, at least 300 times per day, I thought it might be worthwhile to collect my thoughts about it in one place.  The essay below doesn’t say anything that I and others in the field haven’t said many times before, so hardcore Shtetl-Optimized fans should probably skip it.  (Don’t worry, I’ll let y’all know when I have something new to say and am reckless enough to say it.)


When people ask me why we don’t yet have quantum computers, my first response is to imagine someone asking Charles Babbage in the 1820s: “so, when are we going to get these scalable classical computers?  by 1830? or maybe 1840?”  In that case, we know that it took more than a century for the technology to catch up with the theory (and in particular, for the transistor to be invented).  More generally, we have lots of precedents for a technology being imaginable decades or even centuries before it became technologically feasible—heavier-than-air flight is another example.  So there’s nothing weird or anomalous about our current situation.

The central technological obstacle to building a scalable quantum computer is well-known, and is decoherence, or unwanted interaction between the computer and its external environment.  When information about a quantum state leaks into the outside world—by any means whatsoever, intended or not—the state loses its “quantumness” and reverts to being classical.  So to do a quantum computation, it’s necessary to keep the qubits (atomic nuclei, photons, or whatever else they are) almost fanatically isolated from their environment.  But at the same time, you also need to manipulate the qubits, move them around, etc., in such a way as to carry out the computation.  Those twin requirements are the reasons why the most famous ‘success’ of practical quantum computing to date was factoring 15 into 3×5.

Indeed, the problem of decoherence is so severe that you might even conjecture that building a quantum computer is impossible in principle.  However, by now people have thought pretty hard about that possibility, and have failed to find any fundamental obstacle to quantum computing.  Indeed, the starting assumption that quantum computing “must be impossible” hasn’t led to a research program with any real successes.

On the positive side, by contrast, in the 1990s computer scientists and physicists developed the beautiful theory of quantum fault-tolerance, which shows that, as long as you can get the decoherence rate below a certain finite threshold (which current estimates put at somewhere around a 10^-3 probability of failure per qubit per gate time), you can fix the errors caused by decoherence faster than you introduce new ones, and thereby perform an arbitrarily long quantum computation.  (I like to think of this fault-tolerance threshold as analogous to the critical mass for an atomic bomb!)  So, today there are large research efforts in two directions: on the experimental side, to push down the decoherence rate, and on the theoretical side, to find error-correction schemes that can cope with higher decoherence rates.  Hopefully those efforts will “meet in the middle” someday, and then we’ll have quantum computers!
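A standard back-of-the-envelope way to see why a finite threshold suffices: with concatenated codes, k levels of encoding push the logical error rate down doubly exponentially in k, provided the physical rate p is below the threshold p_th.  A sketch with assumed numbers (the formula is the textbook estimate, not a claim about any particular code):

```python
def logical_error(p, p_th, k):
    # textbook concatenated-code estimate: k levels of encoding give
    # p_L ~ p_th * (p / p_th) ** (2 ** k); meaningful only when p < p_th
    return p_th * (p / p_th) ** (2 ** k)

def levels_needed(p, p_th, target):
    # how many levels of concatenation until the logical rate hits the target
    k = 0
    while logical_error(p, p_th, k) > target:
        k += 1
    return k

# e.g. with physical error rate 1e-4 against a threshold of 1e-3, four
# levels of concatenation already suffice for a 1e-15 logical error rate
```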

However, long before we see general-purpose quantum computers, I predict that we’ll see a proliferation of special-purpose quantum simulators—basically, quantum computers that are tailored to some specific physics, chemistry, or optimization problem.  Indeed, arguably we already have that today, in that we have plenty of many-particle entangled quantum systems (such as high temperature superconductors and quark-gluon plasmas) that we don’t know how to simulate efficiently on a classical computer.  You could think of all these systems as “quantum computers” that compute their own time evolution!  From this perspective, the challenge is “merely” to get a programmable quantum computer, one that can solve a problem of our choosing (like factoring a huge composite number).  But I can easily imagine gradual steps in that direction from what we already have over the next couple of decades.

Finally, maybe it’s worth mentioning that there have already been some significant spinoffs of quantum computing in classical computer science (see here for a beautiful survey of them).  Also, as “ordinary” transistors get closer and closer to the atomic scale, I predict that ideas from quantum information science will be increasingly relevant, if only to suppress the quantum effects that the chip designers don’t want!

Schrödinger’s cash

April 15th, 2010

There’s an article in this week’s New Scientist by Justin Mullins about unforgeable quantum money.  By the standards of “quantum mechanics journalism,” the article is actually really good; I’d encourage you to read it if you want to know what’s going on in this area.  In particular, Mullins correctly emphasizes that the point of studying quantum money is to understand quantum mechanics better, not to mint practical QCash anytime soon (to do the latter, you’d first have to solve the minor problem of the money decohering within microseconds…).

My main quibble is just that I think the article overstates my own role!  In my Complexity’09 paper, the main thing I showed is that secure quantum money that anyone can verify is possible, assuming the counterfeiters only have black-box access to the device for verifying the money.  I also showed that, to get quantum money that anyone can verify, you have to make computational assumptions.  (By contrast, Stephen Wiesner’s scheme from the 1960s, in which only the bank could verify the money, was information-theoretically secure.)  But in terms of coming up with actual candidate quantum money schemes (as well as breaking those schemes!), the other members of the “quantum money club”—Andy Lutomirski, Avinatan Hassidim, David Gosset, Ed Farhi, Peter Shor—have been more active than me.
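Wiesner’s scheme is simple enough to caricature classically: the bank secretly records, for each qubit, a random basis and bit; a counterfeiter who measures each qubit in a random basis and re-prepares what it saw passes verification with probability only (3/4)^n.  A toy simulation of that one naive attack (the model and all names are mine; real security proofs must handle arbitrary attacks):

```python
import random

def make_note(n, rng):
    # the bank's secret description of an n-qubit note:
    # a random basis (0 = computational, 1 = Hadamard) and a random bit per qubit
    return [(rng.randrange(2), rng.randrange(2)) for _ in range(n)]

def measure(state_basis, state_bit, meas_basis, rng):
    # measuring a BB84 state in its own basis returns its bit;
    # in the conjugate basis, a uniformly random bit
    return state_bit if state_basis == meas_basis else rng.randrange(2)

def naive_counterfeit_passes(note, rng):
    # measure-and-re-prepare attack: guess a basis per qubit, copy what you
    # see, and hope the bank's verification (in the true bases) still passes
    for basis, bit in note:
        guess = rng.randrange(2)
        seen = measure(basis, bit, guess, rng)
        if measure(guess, seen, basis, rng) != bit:
            return False
    return True

rng = random.Random(0)
n, trials = 4, 2000
rate = sum(naive_counterfeit_passes(make_note(n, rng), rng)
           for _ in range(trials)) / trials
# rate should come out near (3/4) ** 4 ~ 0.316
```

Each qubit passes with probability 1/2·1 + 1/2·1/2 = 3/4 (right basis guess, or wrong guess followed by a lucky bank measurement), which is where the (3/4)^n comes from.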

Two other quibbles:

(1) Mullins writes: “Then last year, Aaronson proposed a new approach that does away with the banknote and concentrates instead on the stream of information that represents quantum cash.”  In Wiesner’s scheme, too, I think it was pretty clear that the “banknote with qubits stuck to it” was just a fun way to tell the story…

(2) The article does a good job of explaining the distinction between information-theoretic and computational security.  But it doesn’t stress that, with the latter, we can’t actually prove that any of the “hard problems” are hard, without also proving P≠NP!  (I’ll admit that the importance of this point is slightly hard to convey in a popular article, possibly because many people, or so I’m told, go about their lives without proving anything.)  The best we can do is show that, if you could solve this problem, then you could also solve this other problem that people have studied for a long time.  But in the case of quantum money, we don’t even know how to do that—which is what we meant when we wrote in our ICS paper that “it seems possible that public key quantum money intrinsically requires a new mathematical leap of faith.”

Considered as research topics in complexity theory, uncloneable quantum money, copy-protected quantum software, and so on are almost as wide-open today as public-key encryption was in the 1970s.  That is, we don’t have a compelling intuition as to whether these tasks are possible at all: all quantum mechanics does is open up the possibility of them, which wasn’t there in the classical world.  Unfortunately, in the case of quantum money, most of the ideas we’ve had for realizing the possibility have turned out to be insecure—often for non-obvious reasons.  Assuming quantum money is possible, we don’t know what the right protocols are, what types of math to base them on, or how to argue for their security.  So if you’re not impressed by the results we have, why don’t you try your hand at this quantum money business?  Maybe you’ll have better luck than we did.

(Addendum: I also have a PowerPoint presentation on quantum money, which ironically goes into more detail than my Complexity paper.)


The Future of Computer Science, and Why Every Other Major Sucks By Comparison

April 12th, 2010

Does this post finally herald my return to regular blogging after a months-long absence?

I don’t know.  For me, writing a Shtetl-Optimized entry always followed the same process: I’d get an idea and start typing, furiously rocking back and forth in my chair.  Then the voices in my head would pipe up: no, I can’t say that—what will everyone think?—judging from past experience, they’ll probably take offense—I can already see the commenters boiling me alive—maybe if I rephrased it, or, y’know, provided some context—but to explain the real context, I’d need a whole book—and who has the time for that?—better wait till after tenure—meantime, maybe I could blog about something light and uncontroversial instead—but then what’s the point?—we already have one GASARCH—well, I could always put off a decision till later—

Back in the blog’s heyday, I’d win these fights about 40% of the time and the voices would win about 60%.  (In other words: if you’ve ever taken offense at an entry of mine, rest assured that you haven’t even seen the half of my drafts folder.)  But now that I have an actual stake in this shabby world—students to advise and look after, a tenure case to build, conceivably even a family to start—the voices win more like 98% of the time.  And that’s why my blogging fell off.

Occasionally, though, something comes along so uncomplicatedly joyous that I feel no reservations about sharing it with the world.  Such was the case this weekend, when I was somehow called upon to represent MIT’s EECS Department in the annual “Professor Talent Show” at Campus Preview Weekend.  This is an event where six faculty members square off, taking eight minutes each to

(1) explain why their department is the coolest,
(2) crack jokes, and
(3) possibly demonstrate a musical or athletic talent.

Then, using electronic clickers, the several hundred prefrosh in attendance vote for which major carried the day.  Though I had absolutely no talent of any kind to demonstrate, and was up against a banjo-player, violinist, and basketball-spinner among other tough competitors, for some reason EECS won!  You can see my PowerPoint slides here:

The Future of Computer Science, and Why Every Other Major Sucks By Comparison
http://www.scottaaronson.com/talks/futurecs.ppt

(You can read the jokes that go along with each slide in the slide notes at the bottom.)

Update (4/15): I hadn’t realized at all that there’s actually a video of me giving the talk!  (Click on “Part 2.”)

Announcement

February 14th, 2010

I thought the eight people who still read this blog might be interested to know that the FOCS’2010 Call for Papers is now out.