Archive for October, 2005

Hath not a nerd eyes?

Friday, October 21st, 2005

When someone wrote to Richard Feynman to tell him how his bongo-drumming habit “proved that physicists can also be human,” Feynman shot back a scathing reply: “I am human enough to tell you to go fuck yourself.” Why was Feynman so angry? Because for him, the notion that physicists had to “prove” their humanity by having non-scientific interests was an arrogant presumption. Why not point to a guitarist who enjoys doing math on the side, as “proof that musicians can also be human”?

While it’s possible that Feynman overreacted to what was meant as a compliment, a quick glance at American popular culture demonstrates that he had a point. In the minds of many Hollywood writers, there are apparently only two kinds of scientist: (1) the asexual nerd who babbles incomprehensibly before getting killed around scene 3 (unless of course he’s the villain), and (2) the occasional character who’s human “despite” being a scientist, as demonstrated by his or her charm, physical agility, and fashion sense. The idea that one can be both nerdy and sympathetic — indeed, that nerdiness might even have positive aspects — is absent.

This trend is so pervasive that, whenever a movie bucks it even partly, I’m inclined to overlook any other flaws it might have. Thus, for example, I enjoyed both A Beautiful Mind and Enigma, despite those movies’ liberal departures from the true stories on which they were based. But the most unabashed celebration of nerdiness I’ve seen in cinema is a little-known 80’s comedy called Real Genius. I was introduced to this movie by Christine Chung, a friend at Cornell. Then I saw it again with friends at Berkeley. Yesterday I continued the tradition by organizing a screening for friends at Waterloo.

Briefly, Real Genius follows the adventures of Mitch, a 15-year-old who goes to a college obviously based on Caltech, having been recruited by the duplicitous Professor Hathaway to work on powerful lasers. Mitch is sympathetic, not because he defies the stereotype of a 15-year-old at Caltech, but because we’re shown some of the emotions behind that stereotype: the feeling of outsiderness, of taking up space on the planet only at other people’s mercy; the fear of failure, of letting down his parents, Professor Hathaway, and others who “expect great things from him”; but at the same time, the longing for the easy social confidence represented by his roommate Chris (who used to be a teenage prodigy like Mitch, but is now a womanizing slacker). All of this is shown with enough wit and humor that there’s no need for Mitch to make an explicit declaration:

Hath not a nerd eyes? Hath not a nerd hands, organs, dimensions, senses, affections, passions; fed with the same food, hurt with the same weapons, subject to the same diseases, heal’d by the same means, warm’d and cool’d by the same winter and summer, as a quarterback is?

A trivial post

Wednesday, October 19th, 2005

Why do academics feel the need to stuff their papers with “nontrivial” results? After all, if a paper is remembered decades after it was written, it’s almost always for a simple core idea — not for the extensions and applications that fill 90% of the paper’s bulk.

The nontriviality virus can infect even the greats: think of Leonid Levin’s famous paper on universal search. According to legend, the reason Levin was scooped by Cook and Karp is that he spent a year trying to prove Graph Isomorphism was NP-complete! You see, that would’ve been a deep, publication-worthy result, unlike the “obvious” fact that there exist natural NP-complete problems.

Here’s a more recent example. In my opinion, this 43-pager by Barak et al. is one of the sweetest computer science papers of the past decade. But what makes it so sweet is a two-sentence insight (my wording):

There’s no generic, foolproof way to “obfuscate” a computer program. For even if a program looked hopelessly unreadable, you could always feed it its own code as input, which is one thing you couldn’t do if all you had was a “black box” with the same input/output behavior as the program in question.
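
To see the self-feeding trick in action, here’s a deliberately crude Python cartoon of it (my own toy, not the paper’s actual construction; a Python callable stands in for “the code of a program”):

    import secrets

    # A random secret point (alpha, beta), unknown to any would-be attacker.
    alpha = secrets.randbits(128)
    beta = secrets.randbits(128)

    def C(x):
        """A point function: returns the secret beta only on the secret input alpha."""
        return beta if x == alpha else 0

    def D(program):
        """Checks whether `program` behaves like C, by running it on alpha."""
        return 1 if program(alpha) == beta else 0

    # If you hold runnable code for C, however scrambled it looks, you can
    # feed it to D and learn that the two are a matched pair:
    print(D(C))  # 1

    # With only black-box access you could query C and D on inputs of your own
    # choosing, but you would essentially never stumble on alpha:
    print(any(C(secrets.randbits(128)) != 0 for _ in range(10**4)))  # False, almost surely

So a program for C, no matter how “obfuscated,” leaks something through D that no black box would.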

So why did the authors go on for 43 more pages?

One possibility was suggested to me by Robin Hanson, an economist at George Mason who spews interesting ideas out of his nose and ears. Depending on your prejudices, you might see Robin as either a visionary futurist or a walking reductio ad absurdum of mainstream economic theory. Either way, his web page will surprise and provoke you.

When I talked with Robin in August, he speculated that nontrivial results function mainly as “certificates of smartness”: that is, expensive, difficult-to-fake evidence that the author(s) of a paper are smart enough that their simple core idea is likely to be worth taking seriously. Without these certificates, the theory goes, we academics would be deluged by too many promising ideas to entertain them all — since even if the ideas are simple, it usually isn’t simple to ascertain their worth.

Note that this theory differs from a more standard complaint, that academics fill their papers with nontrivial results for the sole purpose of getting them published. On Robin’s account, nontrivial results actually are useful to readers, just not in the way the paper advertises. Think of the author as a groom, the reader as a bride, and the nontrivial result as a wedding ring. The bride doesn’t care about the actual ring, but she does care that the groom was rich and devoted enough to buy one.

One prediction of Robin’s theory would be that, once you’ve established your smartness within the community, you should be able to get papers published even if they contain only simple observations. Another prediction would be that, if you’re very smart but emotionally attached to a simple idea, you should be able to buy exposure for your idea by encrusting it with nontrivialities. (As Robin remarked to me, everything in social science is either obvious or false; the only question is which.)

I don’t have anything deeper to say about Robin’s theory, but I’m enjoying the freedom to blog about it anyway.

Climbing Mount Boredom

Tuesday, October 18th, 2005

Two weeks ago, I argued that scientific papers are basically a waste of time. Today I’d like to generalize the results of that earlier post, by explaining why scientific talks are also a waste of time.

Let me set the scene for you. You arrive at the weekly colloquium eager to learn, like a cargo cult member who’s sure that this time the planes are going to land. But then, about fifteen minutes after the PowerPoint train has left the station, you start to get nervous: “Why are we stopping at all these unfamiliar little hamlets? Are we really headed for the place mentioned in the abstract?” You glance at your fellow passengers: are they as confused as you are? (You’d ask the guy sitting next to you, but he’s sound asleep.) Eventually the announcer comes on and … uh-oh! It seems the train is about to begin its long ascent up Mount Boredom, and you don’t have the prerequisites for this leg of the trip. Can you dodge the ticket collector? Too stressful! You get off, and the train roars past you, never to return.

Such was my experience again and again until three years ago, when I finally gave up on talks as a medium for scientific communication. These days, whenever I have to sit through one, I treat the speaker’s words as background music for my private fantasies and daydreams, unless the speaker chooses to interrupt with a novel idea.

But what about when I have to talk? To be honest, I haven’t intentionally perpetrated a research talk in years. Instead I do a stand-up comedy routine where you have to be a quantum computing expert to get the jokes. It’s like Seinfeld, except not that funny. So why does it work? Simple: because the crowd that expects to be bored is the easiest crowd on Earth.

Now one could argue that, by stuffing my talks with flying pigs and slide-eating black holes, I’ve been setting back the cause of scientific knowledge. But I don’t think so. See, the basic problem with talks is that they have no anti-boredom escape hatch. I mean, if you were chatting with a colleague who droned on for too long, you’d have several options:

  • Change the subject.
  • Say something like “yeah, I get it, but does this actually lead to a new lower bound?”
  • Tap your fingers, study the wall patterns, etc.
  • If all else fails, mention your immense workload, then excuse yourself and go back to reading weblogs.

The key point is that none of these tactics are inherently rude or insulting. All of us use them regularly; if we didn’t, it’d be impossible to tell when we were boring each other. Put differently, these tactics are part of the feedback and dialogue that’s essential to any healthy relationship:

“Was it good for you?”
“Could you maybe go a little faster?”
“Do you like it when I use this notation?”

The seminar speaker, by contrast, is a narcissist who verbally ravages his defenseless audience. Sure, it’s fine to interrupt with things like “Aren’t you missing an absolute value sign?” or “How do you know A is Hermitian?” But have you ever raised your hand to say, “Excuse me, but would you mind skipping the next 20 slides and getting right to the meat?” Or: “This is boring. Would you please talk about a different result?”

(Incidentally, as my adviser Umesh Vazirani pointed out to me, when people get “lost” during a talk they think it means that the speaker is going too fast. But more often, the real problem is that the speaker is going too slow, and thereby letting the audience get mired in trivialities.)

So what’s the solution? (You knew there was going to be one, didn’t you?) My solution is to replace talks by “conversations” whenever possible. Here’s how the Aaronson system works: you get five minutes to tell your audience something unexpected. (Usually this will involve no slides, just a board.) Then, if people have questions, you answer them; if they want details, you provide them. At any time, anyone who’s no longer interested can get up and leave (and maybe come back later), without being considered a jerk. When there are no further questions, you sit down and give someone else a chance to surprise the audience.

If you don’t think this system would work, come visit our quantum algorithms lunch at Waterloo, Tuesdays at 11:30 in the BFG seminar room. Bring a result or open problem.

Einstein the man

Monday, October 17th, 2005

Sorry for the inordinate delay in updating! This weekend I was busy with several things, one of which was “EinsteinFest,” Perimeter Institute’s celebration of the hundred-year anniversary of Einstein’s annus mirabilis. The Fest is a monthlong program of exhibits, talks, etc., aimed at the general public, and covering four topics: “The Science, The Times, The Man, The Legacy.” This weekend’s talks were about “The Man,” which is why I attended.

See, I was worried that the Fest would place too much emphasis on Einstein the sockless symbol of scientific progress, Einstein the secular saint, Einstein the posthumous salesman for Perimeter Institute. And how could it do that without indulging in the very pomposity that Einstein himself detested?

My fears were not assuaged by the many exhibits devoted to Freud, Picasso, the Wright Brothers, the automobile, fashion at the turn of the century, and so on. These exhibits gave visitors the impression of a great band of innovators marching into the future, with Einstein cheerfully in front. The reality, of course, is that Einstein never marched in anyone’s band, and — like his friend Gödel — saw himself as opposed to the main intellectual currents of the time.

It didn’t help either that, to handle the influx of visitors, Perimeter has basically transformed itself into Relativistic Disney World — complete with tickets, long lines, guides wearing uniforms, signs directing traffic, cordoned-off areas, and an outdoor tent for kids called “Physica Fantastica.” To some extent I guess this was unavoidable, although sometimes it resulted in unintended comedy:

[photo]
(Sorry, I just bought a digital camera and couldn’t resist.)

So it was a pleasure to attend the talks on “Einstein the Man” and find that, in spite of everything, they were fantastic. We heard David Rowe on Einstein and politics, Trevor Lipscombe on Einstein and Mileva, and John Dawson on Einstein and Gödel. Partly these speakers won me over with wisecracks (Dawson: “Gödel thought he’d found a flaw in the Constitution, by which the US could legally turn into a dictatorship. In light of recent events, I don’t see why anyone would doubt him”). But mostly they just let the old man speak for himself. We saw Einstein write the following to his then-mistress Elsa:

If you were to recite the most beautiful poem ever so divinely, the joy I would derive from it would not come close to the joy I experienced when I received the mushrooms and goose cracklings you cooked.

And to Mileva, during the months when he was finishing general relativity:

You will see to it: (1) that my clothes and linen are kept in order; (2) that I am served three regular meals a day in my room; (3) that my bedroom and study are always kept in good order and that my desk is not touched by anyone other than me.

We saw Einstein the pacifist urging the Allies to rearm at once against Hitler, and Einstein the secular internationalist supporting the creation of Israel. And eventually we came to understand that this was not an oracle spouting wisdom from God; it was just a guy with a great deal of common sense — as much common sense as anyone’s ever had. Isn’t it strange that, despite deserving to be celebrated, he is?

Reaching agreement with Aumann

Wednesday, October 12th, 2005

This year’s Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel has been awarded to two game theorists: Robert Aumann of Hebrew University, and Thomas Schelling of the University of Maryland.

In 1976, Aumann wrote a famous paper called “Agreeing to Disagree,” which proved the following fact. Suppose you and your friend are perfectly rational Bayesians, who’d both form the same opinions if given the same information. Then provided your opinions are “common knowledge” (meaning you both know them, you both know you both know them, etc.), those opinions must be equal — even if neither of you knows the evidence on which the other’s opinion is based! Loosely speaking, then, you can never “agree to disagree.”

As an example, suppose Alice offers to sell you some stock. Then the mere fact that she’s trying to sell it gives you useful information — namely, that something must’ve convinced her the stock is headed south. So even if you have no idea what that something is, her offer should cause you to decrease your own valuation of the stock. Similarly, if you agree to buy the stock, that should cause Alice to increase her valuation. As observed by Milgrom and Stokey, the end result of all this second-guessing is that you might as well never trade at all! This is assuming three conditions: (i) that you and Alice would believe the same things if given the same information, (ii) that you’re both trying to maximize expected wealth, and (iii) that you both have the same liquidity needs (i.e. neither desperately needs to pay off a mortgage). If you’re still confused, read this delightful survey by Cowen and Hanson.
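
If you’d rather watch the theorem happen than take it on faith, here’s a little Python toy of the announcement-and-update protocol behind it (my own illustration, with made-up numbers, in the style of Geanakoplos and Polemarchakis rather than anything from Aumann’s paper):

    from fractions import Fraction

    def posterior(prior, event, cell):
        """Common-prior probability of `event`, conditioned on an information cell."""
        return sum(prior[w] for w in cell if w in event) / sum(prior[w] for w in cell)

    def refine(partition, announce):
        """Split each cell of a partition by the level sets of a public announcement."""
        new = []
        for cell in partition:
            groups = {}
            for w in cell:
                groups.setdefault(announce[w], set()).add(w)
            new.extend(groups.values())
        return new

    def agree(prior, event, part_a, part_b, true_state, rounds=8):
        """Alice and Bob alternately announce their posteriors for `event`;
        each announcement refines the other's information."""
        parts, history = [part_a, part_b], []
        for r in range(rounds):
            speaker = r % 2
            announce = {w: posterior(prior, event, cell)
                        for cell in parts[speaker] for w in cell}
            history.append(announce[true_state])
            parts[1 - speaker] = refine(parts[1 - speaker], announce)
        return history

    # Nine equally likely states of the world; the event of interest is {3, 4}.
    prior = {w: Fraction(1, 9) for w in range(1, 10)}
    alice = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]     # what Alice can distinguish
    bob   = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]     # what Bob can distinguish
    print(agree(prior, {3, 4}, alice, bob, true_state=3))
    # Announcements: 1/3, 1/2, 1/3, 1/3, ... -- a real disagreement that settles at 1/3.

The particular numbers don’t matter; what matters is that each announcement, even though it’s just a single probability, chips away at the other person’s uncertainty until disagreement is no longer an option.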

A year ago I proved a complexity-theoretic analogue of Aumann’s theorem: that not only will two Bayesians agree “in the limit” of common knowledge, but they’ll also (probably, approximately) agree after a really short conversation. I sent my paper to Aumann just for fun, not expecting any response from the great man. To my surprise, Aumann promptly wrote back with a thoughtful critique — telling me to cut out my philosophical musings and let the math speak for itself. I hated this advice at the time, but eventually came to the grudging realization that it was right.

Interestingly, besides being a world expert on rationality, Aumann is also an Orthodox Jew, who’s written several papers applying game theory to the Talmud. He was born in Germany in 1930 and escaped to the US in 1938.

Congratulations to Aumann and Schelling!

Derandomizing BBQ

Wednesday, October 12th, 2005

An addendum to my last post: a few days ago, I got an email with the subject line “BBQ,…”

“Awesome!” I thought. “Free food! Where?”

But no, the email was from someone who had read one of my papers, and who wanted references for the strange, unfamiliar terms that littered the text — terms like “BBP” and “BBQ.” I wasn’t sure what to tell him, except that the class BBQ contains BYOB and is conjectured to be incomparable with BLT.

Run free, my animal friends!

Tuesday, October 11th, 2005

In August of 2002 I opened the Complexity Zoo: an online bestiary of 196 complexity classes, since expanded to 443. Yesterday I entrusted the Zoo to anyone on Earth who wants to feed the animals or contribute their own. This was possible because of John Stockton, who graciously converted the Zoo to wiki form.

The decision to relinquish control of my best-known work was tinged with regret. But at age 3, my baby is all grown up, and it’s time to send it off to grad school so I can move on to other things.

This seems like a good occasion to ask a potentially heretical question:

Did theoretical computer science take a wrong turn when it introduced complexity classes?

For readers with social lives, I should explain that a “complexity class” is a class of problems solvable by some sort of idealized computer. For example, P (Polynomial-Time) consists of all problems that an ordinary computer could solve in a “reasonable” amount of time, meaning an amount that increases like the problem size raised to a fixed power. To illustrate, a few years ago Agrawal, Kayal, and Saxena made international headlines for showing that “PRIMES is in P.” What this means is that they found a general method to decide if an n-digit number is prime or composite, using only about n^12 steps — much less than you’d need to try all possible divisors. Faster methods were known before, but they had a small chance of not producing an answer.
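
For concreteness, here’s roughly what one of those earlier, randomized tests looks like. This is my own sketch of Miller-Rabin, which runs in polynomial time but, unlike AKS, can (with tiny probability) let a composite slip through rather than giving a guaranteed answer:

    import random

    def is_probably_prime(n, rounds=40):
        """Miller-Rabin: a polynomial-time randomized primality test.
        Primes always pass; a composite passes with probability at most 4**-rounds."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # Write n - 1 = d * 2**s with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False   # this `a` witnesses that n is composite
        return True

    print(is_probably_prime(2**127 - 1))   # True: a famous Mersenne prime
    print(is_probably_prime(2**127 + 1))   # False: divisible by 3

What Agrawal, Kayal, and Saxena showed is that you can keep the polynomial running time while dispensing with the randomness and the error entirely.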

Other complexity classes include PSPACE (Polynomial Space), BQP (Bounded-Error Quantum Polynomial Time), EXP, NP, coNP, BPP, RP, ZPP, PH, Σ2P, P/poly, L, NL, PP, AWPP, LWPP, BQNC, QMA, QCMA, S2P, SZK, NISZK, and many more.

The advantage of this alphabet soup is that it lets us express complicated insights in an incredibly compact way:

  • If NP is in BPP then NP=RP.
  • If NP is in P/poly then PH = Σ2P.
  • PH is in P^#P.
  • NL=coNL.

The disadvantage, of course, is that it makes us sound like the fabled prisoners who tell each other jokes by calling out their code numbers. Again and again, I’ve had trouble getting across to outsiders that complexity theory is not “about” capital letters, any more than chemistry is “about” weird strings like NaCl-KCl-MgCl2-H2O. Why is it so hard to explain that we don’t worry about EXP vs. P/poly because we’re eccentric anal-retentives, but because we want to know whether a never-ending cavalcade of machines, each richer and more complicated than the last, might possibly succeed at a task on which any one machine must inevitably founder — namely, the task of outracing time itself, of simulating cosmic history in an eyeblink, of seeing in the unformed clumps of an embryonic universe the swirl of every galaxy and flight of every hummingbird billions of years hence, like Almighty God Himself?

(Alright, maybe I meant BQEXP vs. BQP/poly.)

In the early 70’s, there was apparently a suggestion that NP be called PET, which could stand for three things: “Probably Exponential Time,” “Provably Exponential Time” (if P!=NP), or “Previously Exponential Time” (if P=NP). If this silly name had stuck, would our field have developed in a different direction?

Down with municipal government

Monday, October 10th, 2005

Forgive me if this post isn’t particularly timely — I just started blogging, so I’m still clearing out my cognitive backlog.

A month ago, the economist Steven Landsburg wrote a Slate column arguing that we shouldn’t help Hurricane Katrina victims too much. His reasoning? The hurricane risk in New Orleans and the surrounding areas was presumably already priced into property values, which were lower than they would have been without that risk. So if the US spends federal tax dollars on hurricane relief, then it’s artificially subsidizing people who choose to live in hurricane-prone areas — thereby

  1. raising taxes for everyone, including those who live in “safe” areas, and
  2. raising property values in the hurricane-prone areas, which limits people’s freedom to select cheap but risky housing over expensive but safer housing.

I’d had some pleasant correspondence with Landsburg in the past, so I emailed him to say that, while I could find no flaw in his logic, I was confused as to why he didn’t take the argument even further. For example, what are fire departments, if not an artificial subsidy for people who choose to live in wooden houses rather than stone ones? And police departments? Clearly a lose-lose proposition. If you have a personal bodyguard, then you’re forced to pay for protection you don’t need. And if you don’t have a bodyguard, then you’re deprived of the freedom to choose lower taxes in exchange for having no one to call if you get stabbed.

See, in my view, if you’re going to be a radical libertarian, then you might as well go all the way. For — just like the denial of relief to hurricane victims — such consistency makes all parties better off than otherwise. Those willing to follow you all the way into Galt’s Gulch get the genuine Ayn Rand experience, with no wussy collectivist compromises. And for others, you’re all the more valuable as a walking, talking reductio ad absurdum.

Carla and me

Sunday, October 9th, 2005

On Friday, I drove to the University of Toronto to give a talk. This was the first time I’d ever driven on a freeway alone. I didn’t drive at all until a year ago, for four reasons:

  1. Global warming. I assuaged my conscience by buying a Prius (though admittedly, given the waiting lists for hybrids, I’m probably increasing CO2 concentrations by preventing someone who drives more than I do from having my car).
  2. Fear of getting lost. The solution to this one was “Carla,” my sultry female computerized travel companion (“Proceed on the current route for 0.3 miles”). I realize that for some guys, Carla would feel like a direct assault on their virility — especially since she’s always right. But I love her, and I predict that in five years’ time, everyone else will want her too.
  3. Lack of any social life that would necessitate a car. I’ve since realized that this was as much a symptom as a cause of my carlessness.
  4. Fear of dying a gruesome death. I haven’t yet licked this one, as became evident on Friday.

To avoid the traffic, I left Waterloo at 5:30am (yes, I’d been up all night). Unfortunately, that’s when all the trucks were out, and trucks on a freeway make me nervous. See, the problem with freeways is that there are no red lights — and therefore, no time to hunt down the neurons firing off about Futurama or BQP/qpoly, and refocus their attention on the road. It’s like having to play Super Mario all the way through without pausing — the differences being that there are no stars or mushrooms, you only get one life, and it’s your actual life. (Also, you can’t stomp on the goombas, since they’re people too.)

So when I finally pulled into the parking garage at U of T, palms white and sweaty on the steering wheel, I started laughing hysterically: “I made it! I’m still alive! At least in this branch of the wavefunction, I’m alive! Joy to the world!” That I hadn’t yet written the talk that I was to give in two hours seemed utterly insignificant.

For the ride home, I asked Carla to find me a route that avoided freeways, and ended up zigzagging through the small towns of southeast Ontario. The stoplights looked as pretty as the setting sun.

Ig-nore this post

Saturday, October 8th, 2005

If you haven’t seen them yet, the 2005 Ig Nobel Prizes have been announced. Reading through the list of previous winners, I learned two things:

  • For weeks, I’d been wondering why the shower curtains in my new apartment billow inwards. At first I thought it was because the hot water created a pressure difference, but then I found that cold water causes the same effect. Now I know why I couldn’t figure it out: the explanation is sufficiently nontrivial as to have earned an Ig Nobel Prize in Physics for its discoverer.
  • Instead of futzing around with Recursive Fourier Sampling, I should’ve been working on more socially-relevant CS problems, like software that detects when a cat is walking across your keyboard.