Great Ideas in Theoretical Computer Science Lectures 12-15

May 2nd, 2008

Yeah, yeah, I know. Combination of the end of the semester and family matters. But I’m back with four more GITCS lectures:

  • Lecture 15: Derandomization / Crypto Double Feature

Comments welcome as always.

Open thread

April 11th, 2008

I’ve had a miserable week (only partly because of the headaches and coughing fits that have been keeping me up all night), and feel a need to be of use to some other human being without leaving my apartment. So this thread is for you to ask about whatever’s on your mind — complexity classes, philosophy, grad school advice, anteaters … anything asked in earnest will be responded to, in considerably less than the two years it took me for Lev R.

Update (4/13): Having spent a good part of the weekend answering 57 questions about everything from quantum computing to painting elephants, I think it’s time to call it quits. Thanks to everyone who submitted; it really cheered me up! We’ll do this again sometime.

Great Ideas in Theoretical Computer Science Lectures 8-11

April 7th, 2008

A few more lectures, hot off the Turing machine tape:

Yeah, it’s all standard material, but if you don’t know it … well, this is one more place you can learn.

Apropos of computer science education, some of you may have heard the sad news that the College Board is killing the AP Computer Science AB test [Slashdot] [Pontiff]. The reasons they gave were that (1) not enough students were taking it and (2) of those who were, not enough were underrepresented minorities.

I took the AB test in ’96, despite not having taken the course (my high school didn’t offer it) and not knowing Pascal (the language they still used, before they switched to C++ and then Java). I did read a book to prepare, and that’s where I got my first exposure to O-notation, linked lists, and other crumbs from the feast table of theory. I devoured it all like a hungry dog licking the floor.

It goes without saying that high school students (including underrepresented minorities) won’t take an AP course if it isn’t offered, or if their schools or the colleges they’re applying to don’t take it seriously. Computer science education is a threat to many people’s pet ideas about the world: that CS is not a real subject and is devoid of intellectual content. That everything having to do with computers is mundane and technical, and should (for that reason) be outsourced to China and India. That if we must teach CS, we should at least focus on applications, and get rid of any math or logical thought. That CS is passé, a 20th-century relic that was long ago superseded by the life sciences (haven’t computer scientists gotten the message yet?). Against this backdrop, it’s not surprising that the AB test is being killed: what’s surprising is that it lasted so long.

Blogging will continue to be sporadic, as I have a cold.

Update (4/10): Lenore Blum just gave a talk here at MIT about the future of CS education, in which she echoed several themes of this post, but also argued (among other things) that the cancellation of APCSAB is good news given the course’s obsessive focus on the details of the programming language du jour, and given high schools’ refusal to treat it as a real academic subject, instead grouping it with “electives” like woodshop and metalworking. She held out hope that the US will someday develop a reasonable K-12 computer science curriculum, pointing to successful models from Israel and New Zealand. Needless to say, I have mixed feelings about canceling the mediocre (AB), keeping the truly bad (A), and hoping for the distant good.

Lev R.’s question answered at last; fate of humanity revealed

April 1st, 2008

Almost two years ago, a reader named Lev R. won my Best Anthropicism Contest with the following gem:

why aren’t physicists too interested in computational complexity? because if they were, they’d be computer scientists.

As the champion, Lev won the right to ask any question and have me answer it on this blog. Here was Lev’s question:

I like your “Earth Day, Doomsday, and Chicken Little” post, but you dodged the big question. Will the world end (humans go extinct) anytime soon? Or do you think that despite our best efforts, we’ll somehow end up not destroying ourselves?

In general, I despise being asked to make predictions, even about infinitely less weighty topics — especially when there’s a chance of my being wrong, and people looking back Nelson-Muntz-like and saying “ha ha, Scott was wrong!” That’s one of only several reasons why I could never be a physicist.

An answerable question would be one that asked me to clear up a misconception, or render a moral judgment, or discuss the consequences of a given assumption. (Unanswerable: “When will we see useful quantum computers?” Answerable: “Didn’t that company in Vancouver already build one?”) Questions about relationships between complexity classes or other unsolved math problems are also fine. But as for the universe, how am I supposed to know what it’ll decide to do, among all the things it could do within reasonable bounds of physics and logic? How am I even supposed to have a prior? As a CS theorist, I’m trained to think not about what’s likely to happen, but about the very worst that could happen — within stated assumptions, of course. Among the practical consequences of this attitude, I never gamble and I never play the stock market (and not only because, while there are many things I want, almost none of them can be traded for money). I also don’t worry about being put out of a job by prediction markets. Where the Bayesian stops, and says “every question beyond these is trivial or meaningless,” that’s where I’m just getting started.

But despite everything I’ve said, after years of diligent research into the future of the human race — reading hundreds of trillions of books and articles about climate change, overpopulation, Peak Oil, nuclear proliferation, transhumanism, AI, and every other conceivably relevant topic (what do you think I was doing, writing CS papers?) — I am finally prepared, this somber April 1st, to answer Lev’s question with the seriousness it deserves. Obviously my predictions can only be probabilistic, and obviously I can’t give you the deep reasons behind them — those would take years to explain. I shall therefore present the human future, circa 2100, in the form of a pie chart.

Mistake of the Week: “But even an X says so!”

March 17th, 2008

Consider the following less-than-hypothetical scenarios:

  • Joseph Weizenbaum (who passed away two weeks ago), the MIT computer scientist who created the ELIZA chatbot in the 1960’s, spent the rest of his career decrying the evils of computer science research, holding (perhaps strangely) both that almost everything that’s done with computers could be done just as well without them, and that computers have made possible terrible things like missile guidance systems that now threaten our civilization.
  • Distinguished mathematician Doron Zeilberger argues that mathematicians are wasting their time pursuing chimeras like “beauty” and “elegance,” and that within the near future, mathematics will be entirely the domain of computers.
  • Ayaan Hirsi Ali, who was born into a Muslim family in Somalia, and who escaped from an arranged marriage after being forced to undergo FGM, tells Westerners they’re deluding themselves if they think current Islamic practices are compatible with Enlightenment values.
  • John Browne, the Chief Executive of BP, tells the world that urgent action is needed on global warming.
  • A former atheist stumps for Christianity (or vice versa).

The obvious question in all these cases is: how much extra credence does a person gain by belonging, or having once belonged, to the group he or she is criticizing? From a strict rationalist standpoint, the answer would seem to be zero: surely all that matters is the soundness of the arguments! Who cares if the keynote speaker at the anti-widget rally also happens to be past president of the Widget Club?

I can think of three possible reasons for giving extra credence to attacks from insiders:

  1. The insider might simply know more about the realities of the situation than an outsider, or be less able to ignore those realities.
  2. One assumes the insider is someone who’s at least grappled with the best arguments from her own side before rejecting them. (In FantasyLand, one could assume that anyone making an argument had first grappled with the best arguments from the opposing side, but FantasyLand≠Earth.)
  3. When someone relentlessly attacks a single group of people — seeming to find them behind every perfidy on earth — history says to assume the worst about their motivations, and not to accept the refrain “I’m only criticizing them for their own good!” However, it’s possible that members of the group themselves should merit a pass in this regard. (Though even here there are exceptions: for example, if the person has renounced all ties with the despised group, or, as in the case of Bobby Fischer, refuses to accept the reality of his membership in it.)

On the other hand, I can think of five reasons why not to give extra credence to attacks from insiders:

  1. Given any exotic mixture of beliefs and group affiliations, there’s almost certainly someone on earth who fits the description — and is even available for a fee to speak at your next event. If you want an accomplished scientist who sees science as an expensive sham or tool of the military, you can find one. If you want a former Republican hardliner who’s now a Naderite, you can find one. If you want a Jew who renounces Jews or Israel, you can find a stadium of them. So you can’t conclude anything from the mere existence of such people — at most, you can possibly learn something from their number.
  2. Any group of people — computer scientists, CEO’s, Israelis, African-Americans — will consist (to put it mildly) of multiple factions, some of whom might seek to gain an advantage over the other factions by blasting their group as a whole before the outside world. So one can’t simply accept someone’s presentation of himself as a lone, courageous whistleblower, without first understanding the internal dynamics of the group he comes from and is criticizing.
  3. The very fact that people within a group feel free to criticize it can in some cases speak well about the group’s tolerance for dissent, and thereby undermine some of the critics’ central claims. (Of course, one has to verify that the tolerated dissenters aren’t just a sham maintained by the ruling faction, as in Communist regimes throughout their history.)
  4. Some people simply enjoy dissenting from their peers, as a way of proving their independence or of drawing attention to themselves.
  5. Just as most people like to toot their own group’s horn, a few are masochistically biased toward the opposite extreme. We can all think of people who, for whatever deep psychological reasons, feel a constant need to repent the sins of themselves or their group, in a manner wildly out of proportion to any actual guilt. Granted, anyone can understand the conflict a physicist might feel over having participated in the Manhattan Project. On the other hand, when the invention you’re renouncing is the ELIZA chatbot, the question arises of whether you’ve earned the right to Faust-like penitence over the unspeakable evil you’ve unleashed.

So what’s my verdict? Belonging to the group you’re criticizing can give you one or two free starting chips at the table of argument, entitling you to a hearing where someone else wouldn’t be so entitled. But once you’ve sat down and entered the game, from then on you have to play by the same rules as everyone else.

Great Ideas In Theoretical Computer Science Lectures 2-7

March 10th, 2008

For those who missed it on the sidebar, we now have six more GITCS lecture notes available:

Lecture 2: Logic

Lecture 3: Circuits and Finite Automata

Lecture 4: Turing Machines

Lecture 5: Reducibility and Gödel

Lecture 6: Minds and Machines

Lecture 7: Complexity

More are on the way — compared to the Democritus notes, it’s so much easier with others doing the writing! These notes were prepared almost entirely by the students, with only minor editing from me and Yinmeng. In general, I think the students have been doing a fantastic job. On the other hand, if you rely on these notes to build a Turing-machine-controlled jumbo jet which then crashes in the Himalayas, it’s entirely possible that it wasn’t my fault.

Statement on conceptual contributions in theory

March 7th, 2008

About six months ago, a group of theoretical computer scientists started raising concerns about what they saw as a growing problem in our field. (My parody post “FOCS’36 notification” was one attempt to explain what this problem is.) The group has now put together a statement, which I was happy to sign, and which is meant to serve as a starting point for further discussion at the STOC’08 business meeting. If you support this statement and want to add your name to it, please say so in the comments section! Of course criticism is welcome too. –SA


We, the undersigned, are concerned about two related attitudes that seem to be increasingly prevalent in the TCS community, and in particular, are affecting its program committees and their decisions. The goal of this statement is to recognize these trends and attempt to reverse them. We are happy to note that the STOC’08 PC made a conscious effort to move in the direction of this proposal. The trends that worry us are the following:

  1. Assignment of little weight to “conceptual” considerations, while assigning the dominant weight to technical considerations.
  2. The view that technical simplicity is a drawback, and the failure to realize that simple observations may represent an important mind-switch that can pave the way to significant progress.

Most works offer a mix of conceptual and technical aspects, where by “conceptual” we mean the aspects that can be communicated succinctly, with a minimum of technical notation, and yet whose content reshapes our view or understanding. Conceptual contributions can be thought of as the parts of a work that are most likely to come up in a scientific hallway discussion. They may appear in a work’s “bottom line” or “along the way”.

  • A conceptual “bottom line” may be a result that affects the worldview of researchers outside the community studying the problem, or the introduction of a new problem that may appeal to the wider TCS community.
  • A conceptual aspect “along the way” may be an innovative way of modeling, looking at, or manipulating a known object or problem, including establishing a new connection between known objects/problems.

Needless to say, the above list is not exhaustive.

Once understood, conceptual aspects tend to be viewed as obvious, which actually means that they have become fully incorporated into the worldview of the expert. This positive effect is actually a source of trouble in the evaluation process, because the evaluators forget that these contributions were not obvious at all before being made.

Indeed, our community should be warned against dismissing such contributions by saying “yes, but that’s obvious”; when somebody says such a thing, one should ask “was it obvious to you before reading this article?”

We believe that the community needs to remain vigilant about these issues, and program committees should make a conscious effort to pay attention to conceptual contributions (as apparently done by the STOC’08 PC). This will enable our conferences to continue to be a driving force in the progress of our field.

Scott Aaronson
Allan Borodin
Bernard Chazelle
Oded Goldreich
Shafi Goldwasser
Richard Karp
Michael Kearns
Christos Papadimitriou
Madhu Sudan
Salil Vadhan

Long-dreaded politics post

March 5th, 2008

Until today, I have failed to uphold one of the most sacred responsibilities of the guild of bloggers: that of weighing in on the Democratic primary. This is not because of any desire to keep politics out of this blog: I’ve never succeeded in keeping anything out of this blog. Rather, it’s because I find the question genuinely difficult.

The general election is so damn easy by comparison. There, the only questions I need to ask myself are, “do I prefer the Enlightenment or the Dark Ages that preceded it? Is the Earth 4.6 billion years old or 10,000? Do anti-gay laws spring from a less repugnant part of human nature than Jim Crow laws?” While I look forward to the day when my answers to such questions won’t determine my vote, so far they unfailingly have — thereby eliminating the need for me to adjudicate more complicated social and economic issues that I don’t really understand.

In other words, my view of Democrats and Republicans couldn’t possibly be further from that of (say) Eliezer Yudkowsky, who sees the general US election as a meaningless, Kang vs. Kodos popularity contest. Like Yudkowsky, I can easily imagine two political parties fighting over nothing — but what I see in reality is a clearly-identifiable neo-Union and neo-Confederacy, who every four years re-fight the Civil War. As many others have pointed out, even the geographic boundary between America’s two subcountries has barely changed since the 1860’s; the one real irony is that the “party of Lincoln” now represents the Confederate side. (And yes, if the free-market/libertarian wing of the Republican Party ever broke free of the medieval wing, then this correspondence would break down. I’m only talking about things as they currently stand.)

On the other hand, as Clinton and Obama debated their subtly-different proposals for health insurance, subprime lending reform, etc., I realized that, in a race between Democrats (or a general election in a more normal country), my “go with the Enlightenment” approach can only take me so far. Faced with two non-lunatic candidates, you almost have to, like, know something about policy or economics to make a sensible choice.

So being an ignorant computer scientist, what can I say? Let’s start with the obvious: that after seven years of Bush, to ask whether I’d “prefer” Hillary or Obama is like asking a drowning person surrounded by sharks which of two lifeboats he prefers to be rescued by (and adding, in case it’s helpful, that one lifeboat is rowed by a woman and the other by a half-Kenyan). It’s a shame we can’t elect both of them, and then send one back in time to have been president for the last eight years. As the next best option, I wish the candidates would just agree right now to choose the winner by an Intrade-weighted coin flip, and thereby save money for defeating the religious-right-courting hypocrite McCain.

But of course they won’t do that, and hence the question of whom to prefer. Until recently I had a mild preference for Hillary, my reasons being as follows:

  1. Because she’s been despised for so many years by so many people who I despise (and the worse they say about her, the better she seems).
  2. Because she’s been doing better than Obama in crucial swing states like Florida.
  3. Because with her you get all the advantages of her husband but with considerably less chance of a sex scandal.
  4. Because on one issue that I actually follow — ending the Republicans’ “war on science” — her position paper is full of excellent specifics, whereas (so far as I know) Obama has only said much vaguer things in the same direction.

Recently, though, I’ve been tilting more toward Obama, for five reasons:

  1. Because he’s winning (still, after last night). This, of course, would be an important piece of evidence about his likelihood of winning the general election, even if it weren’t also a prerequisite to winning.
  2. Because unlike Hillary, he’s clearly stated his position on the inefficiency of bubblesort.
  3. Because I’m told that some Americans now supplement their reading of text by the viewing of “YouTubes” and “tele-vision boxes” — and in those settings, Obama clearly does better. His jokes succeed where Hillary’s fail.
  4. Because the 2000 and 2004 elections suggest that experience is now a severe liability: it simply translates into more stuff that an opponent will twist against you.
  5. Because people whose judgment I respect, and who follow politics more closely than I do, seem to prefer Obama by a wide margin. As in Aumann’s Agreement Theorem, the mere fact of these other people’s opinions ought to change my own opinion if I’m a rational agent. Whether for rational reasons or not, it has.

Incidentally, so far as I can tell, the accusations of anti-Semitism against Obama that have filled the right-wing blogosphere are completely baseless. The assumption underlying these accusations is that admiration is a transitive predicate: that is, if x admires y and y admires z (where, say, z=Farrakhan), then x must admire z, even if x claims to “reject and denounce” z. But it’s easy to think of counterexamples: I admire Sakharov who admired Stalin (at least for part of his life), I admire Bertrand Russell who admired all sorts of thugs and poseurs, etc. Of course it’s impossible to know Obama’s heart about these matters, but I don’t think one needs to: it’s enough to know his brain.
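The non-transitivity point above is easy to make concrete. Here’s a minimal sketch (the names and the relation are purely illustrative placeholders, not anything from the post) that checks whether an “admires” relation is transitive, and shows that the guilt-by-association inference fails as soon as one expected pair is missing:

```python
# A toy "admires" relation, given as a set of ordered pairs.
# x admires y, and y admires z -- but x does NOT admire z.
admires = {
    ("x", "y"),
    ("y", "z"),
    # ("x", "z") is deliberately absent
}

def is_transitive(relation):
    """Return True iff (a, b) and (b, c) in the relation always imply (a, c)."""
    return all(
        (a, d) in relation
        for (a, b) in relation
        for (c, d) in relation
        if b == c
    )

print(is_transitive(admires))  # False: x admires y, y admires z, yet not z
```

Adding the pair `("x", "z")` back in makes `is_transitive` return `True`; the accusation only goes through if admiration were guaranteed to compose this way, which, as the counterexamples show, it isn’t.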

Penrose’s Gödel argument in rap

March 2nd, 2008

About as logically sound as the original, and with a better backbeat (link to MP3). From computer science grad student / hip-hop artist MC Plus+.

Sourkatz

February 25th, 2008

A reader named Hernan asked me for my opinion of a well-known rant by Jonathan Katz of Washington University, about why young people shouldn’t go into academic science since there are so few jobs and the jobs that there are stink anyway. I posted my response in the comments section, but since it seems to be of general interest I thought I’d make a proper entry of it.

Katz is correct that opportunities in academic science (at least in the US) are much scarcer than they were during the Cold War; I think government shortsightedness deserves a huge part of the blame for that. On the other hand, countless would-be grad students have already followed the invisible hand and taken Katz’s advice, and are doing quite well on Wall Street, in Silicon Valley, etc. So the ones going to grad school are mostly the ones willing to assume the (by now well-known) risks: if they weren’t, they wouldn’t be there.

My fundamental disagreement with Katz is that I think PhD work is increasingly excellent preparation for industry careers. Of course, in some cases (e.g. a quantum computing PhD going to work for an Internet startup), it’s hard to argue that the PhD provides much beyond general skills like analytical thinking, teamwork, project completion, etc., and that those skills couldn’t just as well be obtained elsewhere. But even in those cases, I think a PhD at least won’t hurt your chances in industry these days (notwithstanding Phil Greenspun’s PhD expunging service). So what the PhD does is to give many people an opportunity to spend six years advancing human knowledge and doing something they enjoy, before switching to something that’s actually rewarded by the economy. (One corollary is that, if you’re not enjoying grad school, then you shouldn’t be there. But this is just an instance of a general rule: don’t choose a career option that causes you years of suffering in the hope that the suffering will end later; it probably won’t.)

Furthermore, if there used to be a stigma attached to leaving grad school for industry, I think that’s basically vanished, and that now many PhD programs even see training students for industry as a fundamental part of their mission.

I can’t comment on the rest of Katz’s complaints (the need for conformity, the burden of writing grant proposals, etc.), except to say that so far, my own experience has been more positive. Maybe the worst is ahead!

Incidentally, my comments apply most clearly to computer science PhD programs, which are what I’m most familiar with, but I believe they also apply to physics and other sciences. As for humanities PhD’s … dude, you’re on your own.