The neologistas

May 27th, 2006


Ever since I arrived at fellow blogger Dave Bacon’s house on Tuesday, the Pontiff and I have been tossing around ideas for a joint blog initiative. Finally we hit on something: since we’re both neologistas — people who enjoy spending their free time coining new words — we decided to compile a list of the neologisms we’d most like to see adopted by the general population. Without further ado:

shnood: (roughly) an imposter; a person oblivious to just how trivial or wrong his ideas are.

“Were there any interesting speakers at the conference?”
“No, just a bunch of shnoods.”

“The magazine New Scientist loves to feature shnoods on the cover.”

Note: someone who’s utterly contemptible would not be a shnood, but rather a schmuck.

iriterie: a list or compilation of people named Irit.

See the comments on the last post for an example of an iriterie.

extralusionary intelligence: intelligence in one domain that is misapplied in another.

“Bob’s a brilliant physicist — I bet he’s onto something with his condensed-matter approach to P versus NP.”
“No, he’s just suffering from extralusionary intelligence.”

circumpolitical: so far to one end of the political spectrum that one is actually on the other end.

“Professor Zimmerman mounted a circumpolitical defense of hereditary dictatorship, female genital mutilation, and the dragging of murdered homosexuals through the streets, arguing that we have no right to condemn these indigenous practices of non-Western peoples.”

philosonomicon: a philosophical prolegomenon.

Dave’s PhD thesis begins with a philosonomicon, as does mine.

high-hanging fruit: the opposite of low-hanging fruit.

“Do you ever think about the Nonabelian Hidden Subgroup Problem?”
“No, that’s high-hanging fruit. I like to watch other people jump for it.”

napotonin: any substance that makes you want to nap.

“Ohhhh … must’ve been a lot of napotonin in that calzone … can’t work … unnngghhhh”

nontrivia: the opposite of trivia.

“If you’re so smart, how come you’re no good at Trivial Pursuit?”
“Because I prefer to fill my brain with nontrivia.”

In an effort to speed up the adoption of these words by the Oxford English Dictionary, Dave and I hereby ask that every comment on this post correctly use at least one of them. Also, while you’re welcome to crack the obvious jokes (“Scott is a shnood,” “Dave suffers from extralusionary intelligence,” etc.), be aware that we’ve just preempted them.

Nerdify the world, and the women will follow

May 24th, 2006

As delighted as I’ve been with the popular response to this blog, it’s come to my attention that there are still a few readers who haven’t yet been angered or offended by anything I’ve written. That’s why today’s entry will be about women, science, and Larry Summers.

Granted, it feels strange to be blogging about why there aren’t more women in computer science and the other nerdly disciplines, having just come from a conference where Irit Dinur took the Best Paper Award for her combinatorial proof of the PCP Theorem. But the question remains: why aren’t there more Irits?

A hilarious analysis by Philip Greenspun seems like as good a starting point as any for discussing this question. Here are my favorite passages:

A lot more men than women choose to do seemingly irrational things such as become petty criminals, fly homebuilt helicopters, play video games, and keep tropical fish as pets (98 percent of the attendees at the American Cichlid Association convention that I last attended were male). Should we be surprised that it is mostly men who spend 10 years banging their heads against an equation-filled blackboard in hopes of landing a $35,000/year post-doc job?

Having been both a student and teacher at MIT, my personal explanation for men going into science is the following:

1. young men strive to achieve high status among their peer group
2. men tend to lack perspective and are unable to step back and ask the question “is this peer group worth impressing?”

Consider Albert Q. Mathnerd, a math undergrad at MIT (“Course 18” we call it). He works hard and beats his chest to demonstrate that he is the best math nerd at MIT. This is important to Albert because most of his friends are math majors and the rest of his friends are in wimpier departments, impressed that Albert has even taken on such demanding classes. Albert never reflects on the fact that the guy who was the best math undergrad at MIT 20 years ago is now an entry-level public school teacher in Nebraska, having failed to get tenure at a 2nd tier university. When Albert goes to graduate school to get his PhD, his choice will have the same logical foundation as John Hinckley’s attempt to impress Jodie Foster by shooting Ronald Reagan…

What about women? Don’t they want to impress their peers? Yes, but they are more discriminating about choosing those peers. I’ve taught a fair number of women students in electrical engineering and computer science classes over the years. I can give you a list of the ones who had the best heads on their shoulders and were the most thoughtful about planning out the rest of their lives. Their names are on files in my “medical school recommendations” directory…

With Occam’s Razor, we should not need to bring in the FBI to solve the mystery of why there are more men than women who have chosen to stick with the choice that they made at age 18 to be a professor of science or mathematics.

If you don’t recognize any truth in the above, then (almost by definition) you are not a nerd. Yet Greenspun’s argument immediately raises four questions:

  1. Is academic science really such a crappy career choice?
  2. If not, then what else is keeping more women from going into it?
  3. Regardless of underlying causes, should we be trying to entice more women into science?
  4. If so, how?

Let me address these questions in turn.

1. Is science really as depressing as Greenspun makes it out to be?

I can only speak for myself. Unlike most people, I don’t “work” at all, in the sense of doing anything with the conscious goal of making money. All I do is think about what interests me, and discuss the results of that thinking with other people. As long as governments (and philanthropists like Mike Lazaridis) are willing to pay me for my non-work, I’m happy to take their money. If they ever stop paying me, I guess I’ll have to find some other source of income.

Of course my perspective might change once I start a tenure-track job, which is part of the reason why I haven’t been in any hurry to start one. But for now, I can’t complain about my life as a postdoc. Or rather, I can complain, but then I remember the alternatives. Can I even imagine what it would be like to grapple not with the eternal verities of QMA and PSPACE, but with the fickle whims of the stock market? My only reward being a gigantic pile of cash, most of which wouldn’t even fit in my wallet when I went out for Indian buffet?

2. The trouble with ‘because’

So Greenspun’s “Albert Q. Mathnerd” theory strikes me as at best a partial answer to why more women don’t go into the nerdly sciences. But there’s a stronger argument: if Greenspun were right, then we would expect even fewer women in the humanities and social sciences (which are even more cash-strapped than the sciences), and more women trading derivatives and starting software companies.

And that brings us, of course, to the crater-pocked battlefield where hardened university presidents fear to tread. Are there Darwinian reasons to expect males to be more “spatial” and less “verbal” on average, or to have a higher variance in ability (with both more Alan Turings and more George W. Bushes), etc., etc.? If you want to read an interesting discussion of these questions — one that involves, you know, actual facts and evidence — I heartily recommend this debate (both sides of it) between Steven Pinker and Elizabeth Spelke.

But what do I think about the “root cause” of the gender imbalance in science? I’ll tell you exactly what I think: I think the question is ill-posed. When we say that A causes B, we normally mean something like “if A didn’t happen, then B wouldn’t happen either.” Thus: “if John had had the same upbringing but the biological makeup of a woman, he would have become a lawyer instead of a string theorist.” The trouble is, what does that even mean? With a few arguable and presumably unrepresentative exceptions (like hermaphrodites), no one on Earth has the biology of a man but the life experiences of a woman or vice versa.

To put the point differently: suppose (hypothetically) that what repelled women from computer science were all the vending-machine-fueled all-nighters, empty pizza boxes stacked to the ceiling, napping coders drooling on the office futon, etc.; and indeed that men would be repelled by such things as well, were it not for a particular gene on the Y chromosome called PGSTY-8. In that case, would the “cause” of the gender imbalance be genetic or cultural? This is a fascinating question, right up there with whether rocks fall because of gravity or being dropped, and whether 3+5=5+3 because addition is commutative or because they both equal 8.

3. The nerd case for feminism

Greenspun’s central contention is that we’re not doing an ambitious high-school girl any favors by steering her into the impoverished dungeon of academic science. In his words:

If smart American women choose to go to medical, business, and law school instead of doing science, and have fabulous careers, I certainly am not going to discourage them. Imagine if one of those kind souls that Summers was speaking to had taken Condoleezza Rice aside and told her not to waste time with political science because physics was so much more challenging.

Such a soul would deserve our undying gratitude.

But seriously — I draw a different moral than Greenspun does. I think it’s imperative to increase the number of women in science, not for women’s sake, but for science’s sake! Now would be a good place to insert your favorite joke about the computer labs full of horny, Perl-coding “feminists,” eager to cast off the yoke of sexism and open wide the gates of science to every young woman — whether blonde or brunette, single or possibly single, hot or extremely hot.

But there’s no need to be cynical. I’m not ashamed to assert that

  1. most people want to socialize with the opposite sex, and are unhappy (and hence unproductive) if they can’t;
  2. the conscious reasons for wanting to socialize with the opposite sex often have nothing to do with “fluid exchange” (to use the John Nash character’s phrase from A Beautiful Mind);
  3. let he (or she) who is without subconscious Darwinian motivations cast the first stone;
  4. human beings didn’t evolve to live their lives in an 85%-male environment;
  5. by the Pigeonhole Principle, not every straight male will be as lucky as I was to find a girlfriend in the remaining 15%; and
  6. computer science departments could attract and retain better people of both sexes if they felt less like monasteries or pirate ships.

Naturally, kidnapping women in the dead of night and forcing them to take Randomized Algorithms is off the table. But the question remains: how can we make the nerdly sciences more attractive to women?

4. How to seduce women (into scientific careers)

I have two thoughts in this direction.

The first thought is actually a question: assuming our social support systems made it easier to do so, would many women prefer to have kids first, and then go to grad school? That’s not a rhetorical question; it’s a genuine request for enlightenment. I ask it for three reasons:

  1. One of female academics’ most famous complaints is that, by the time they’ve battled their way to tenure, they’re already verging on infertility.
  2. If we consider the most famous female scientists — Marie Curie, Rosalind Franklin, Emmy Noether, Lise Meitner — most were in their 30’s or older when they did their best work. This contrasts with the pattern for male scientists.
  3. From an evolutionary perspective, the age at which women in the developed world start having kids is unbelievably late. That doesn’t mean we should go back to auctioning off 12-year-old girls as brides in exchange for goats and oxen. But it does suggest, to me, that the currently “normal” ways of balancing career and family might not be Pareto-optimal.

So that was my first thought. The second thought is that, when people talk about cultural changes that would entice more women into science, they always mean changes to nerd culture. You know the sort of thing I’m talking about:

Emphasize teamwork and community over intellectual combat.

Eliminate all-nighters.

Discourage questions in seminars that might hurt someone’s feelings.

Festoon the STOC proceedings with hearts, rainbows, and ponies.

The problem with such proposals is not just that they’re patronizing (and indeed deeply sexist in their own way), and not just that successful female scientists tend to be as competitive as anyone else. The real problem is the implicit assumption that, whenever there’s a disparity between nerd culture and popular culture, the fault must lie with nerd culture.

Sure, there are nerds who could stand to shower more often, read more Shakespeare and less Slashdot, etc. But there are also plenty of “normals” who could stand to follow a chain of logic to an inconvenient conclusion, unsheathe their sarcasm swords when confronted with idiocy, and judge people more by the originality of their ideas than by whether their clothes match.

In short, if the reason more women don’t study science is that they’re repelled by nerd culture, then de-nerdifying science is only one solution. The other solution is nerdifying the rest of the world! Admittedly, nerdifying the world might seem like a rather drastic way to increase the number of women in university science departments. But as you might have guessed, I want to nerdify the world for independent reasons as well.

The pee versus in-pee question

May 20th, 2006

Greetings from America’s fourth-best city, Seattle, where I’m attending the STOC’2006 conference. I arrived here yesterday from America’s third-best city, Boston, where I visited MIT for a week and gave a talk about The Learnability of Quantum States. (I’ll leave the best and second-best cities as exercises for the reader.)

Since tomorrow’s my birthday, I’ll consider myself free to blog about whatever I feel like today (as opposed to most days, when I blog about whatever the invisible space antelopes tell me to). So without further ado, here’s a question that bugged me for years: why do we need to urinate on a regular basis?

I mean, I understand solid waste perfectly well, and I also understand the need to get rid of urea and the other waste products in urine. But why constantly excrete water, something that humans and other animals regularly die from not having enough of? Why not store the water in the body until the next time it’s needed? From a Darwinian perspective, a regularly-vacating bladder would seem to make as little sense as a toothless vagina.

And yet, after minutes of diligent Wikipedia research, I’ve pieced together what I believe is a complete solution to this pee versus in-pee puzzle.

The short answer is that conserving water, rather than just pissing it away (so to speak), is exactly what our bodies try to do. But one needs to remember that, while feces comes directly from the digestive tract, urine is collected from waste products in the bloodstream. In particular, the kidneys contain permeable membranes whose job is to let wastes like urea through, while keeping the useful stuff (like red blood cells) out. However, as with any other filtration process, it’s difficult or impossible to keep all the water on one side of the barrier.

So what the body does instead is to let the water through, then slowly absorb it back into the bloodstream as needed. That’s why your urine is darker (more concentrated) if you’re dehydrated than if you aren’t. At some point, though, it presumably becomes infeasible to extract more water from the bladder without also letting the toxic wastes back into the bloodstream.

Now, I know what you’re thinking. You’re thinking, “why isn’t my urine always dark? In other words, why don’t I always absorb as much water as possible back into my bloodstream, whether I’m dehydrated or not? Why not save the water for a (non) rainy day?”

Aha, I’ve got an answer to that one too. Besides excreting wastes, another function of urine is to maintain a homeostatic balance between water and sodium in the blood. If there’s too much water (say, because you just drank six beers), your blood will be too thin, which can cause brain damage (completely apart from the other effects of the beer). Ideally, your body would store the excess water separately from the blood — and again, that’s exactly what it tries to do, but your bladder is only so big.

In summary, if you think through what my “in-pee” solution would actually entail, it turns out to be almost identical to the “pee” solution that Nature actually adopted. One might even say that pee = in-pee.

[Note for harping relatives: now do you understand why I didn’t go to medical school?]

The relativity of originality

May 11th, 2006

An anonymous commenter asked for my opinion of The Free Will Theorem, a much-discussed recent paper by John Conway and Simon Kochen. I’ve been putting it off, but I’ll finally will myself to say something.

I read The Free Will Theorem mostly as an amusing romp through the well-travelled philosophical terrain of quantum mechanics, relativity, and entanglement. I’ve always enjoyed Conway’s writing style, so it was a treat to see his usual jokes and puns out in full force.

Of course, the reason the paper has attracted attention is the Free Will Theorem itself, which I’ll paraphrase as follows:

Suppose that (1) the laws of physics allow something like a Bell or GHZ experiment, (2) the people doing the experiment can set their detectors any way they want (i.e., in a way not determined by the previous history of the universe), and (3) something like Lorentz invariance holds (i.e. there’s one reference frame where experimenter A measures first, and another where experimenter B measures first). Then the results of the experiment are also not determined by the previous history of the universe.

Or as the authors colorfully put it: “if indeed there exist any experimenters with a modicum of free will, then elementary particles must have their own share of this valuable commodity.”

(Note that by “free will,” all Conway and Kochen mean is the property of not being determined by the previous history of the universe. So even events with known probability distributions, like coin flips and quantum measurements, can have “free will” according to their definition.)
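For readers who haven’t seen the underlying mechanism: here is a minimal sketch of the GHZ paradox, the standard three-party argument of the kind alluded to above. (This is textbook material, not Conway and Kochen’s actual proof, which instead goes through a two-party Kochen-Specker argument.)

```latex
% The GHZ state shared by three spacelike-separated experimenters:
\[
  |\mathrm{GHZ}\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|000\rangle + |111\rangle\bigr)
\]
% Quantum mechanics predicts these four outcomes with certainty
% (X_i, Y_i are Pauli measurements on particle i, with outcomes \pm 1):
\[
  X_1 Y_2 Y_3 = -1, \qquad
  Y_1 X_2 Y_3 = -1, \qquad
  Y_1 Y_2 X_3 = -1, \qquad
  X_1 X_2 X_3 = +1.
\]
% If instead each outcome x_i, y_i \in \{-1,+1\} were fixed in advance by
% the prior history of the universe, multiplying the first three equations
% would give
%   (x_1 x_2 x_3)(y_1 y_2 y_3)^2 = x_1 x_2 x_3 = -1,
% contradicting the fourth. Hence, granting the experimenters' freedom to
% choose X or Y and a frame-independent story, the outcomes cannot be
% determined by the past.
```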

My reaction to the Free Will Theorem is threefold:

  • It’s a very important, even if mathematically trivial, consequence of the Bell/GHZ/Kochen-Specker-type theorems.
  • It will be new to many physicists.
  • It was folklore among those who think about entanglement and nonlocality.

I’ll be grateful for any references in support of the last point. Right now, all I can offer is that I gave almost the same argument four years ago, in my review of Stephen Wolfram’s A New Kind of Science (see pages 9-11). My goal there was to show that no deterministic cellular-automaton model of physics, of the sort Wolfram was advocating, could possibly explain the Bell inequality violations while respecting relativistic invariance. I didn’t think I was saying anything terribly new.

Conway and Kochen try to preempt such criticism as follows:

Physicists who feel that they already knew our main result are cautioned that it cannot be proved by arguments involving symbols such as ℏ, Ψ, ⊗, since these presuppose a large and indefinite amount of physical theory.

I find this unpersuasive. For me, the whole point of the Bell, GHZ, and Kochen-Specker type theorems has always been that they don’t presuppose quantum mechanics. Instead they show that any physical theory compatible with certain experimental results has to have certain properties (such as nonlocality or contextuality).

I should admit that the Free Will Theorem improves on the argument in my book review in at least three ways:

  1. It gets rid of probabilities, by going through a two-party version of the Kochen-Specker Theorem instead of through Bell’s inequality. (I mentioned in my review that the argument could be redone using the GHZ paradox, which involves three parties but is deterministic. I didn’t mention that it could also be done using two-party Kochen-Specker.)
  2. It gives a cute, memorable name — “free will” — to something that I referred to only by convoluted phrases like “randomness that’s more fundamental than the sort Wolfram allows” (by which I meant, that’s not reducible to Alice and Bob’s subjective uncertainty about the initial state of the universe).
  3. It makes the assumptions more explicit. For example, I never talked about Alice and Bob’s “free will” in choosing the detector settings, since I thought that was just assumed in talking about Bell’s inequality in the first place! (In other words, if Wolfram denied that Alice and Bob could choose the detector settings independently of each other, then he could have dispensed with Bell’s inequality in a much simpler way than he actually did.)

I should also admit that I like Conway and Kochen’s paper. Indeed, the main question it raises for me is not “how could they possibly pass this off as original?” but rather “do we, as scientists, sometimes put too high a premium on originality?”

In all the reading I’ve done in philosophy, I don’t know that I’ve ever once encountered an original idea — in the sense that, say, general relativity and NP-completeness were original ideas. Indeed, whenever I read about a priority dispute between philosophers (like the infamous one between Saul Kripke and Ruth Barcan Marcus), it strikes me as absurd: all the ideas under dispute seem obvious!

But does it follow that philosophy is a waste of time? No, I don’t think it does. The same “obvious” idea can be expressed clumsily or eloquently, sketched in a sentence or developed into a book, brought out explicitly or left beneath the surface. Now, I’m well aware that that’s not an original sentiment — nor, for that matter, is anything in this post, or probably this entire blog. Yet here I am writing it, and here you are reading it.

You might respond that Wolfram can (and does) mount a similar defense of A New Kind of Science: that sure, lesser mortals might have realized decades ago that simple programs can produce complex behavior, but they didn’t grasp the true, Earth-shattering significance of that fact. Compared to Wolfram, though, I think Conway and Kochen have at least two things going for them: (1) they don’t spend 1,200 pages denigrating the work of other people, and (2) they accept quantum mechanics.

From Ecclesiastes:

All streams run to the sea,
but the sea is not full;
to the place where the streams flow,
there they continue to flow.

All things are wearisome;
more than one can express;
the eye is not satisfied with seeing,
or the ear filled with hearing.

What has been is what will be,
and what has been done is what will be done;
there is nothing new under the sun.

It’s not radiation-poisoned, it’s just sleeping!

May 10th, 2006

Since I hadn’t heard from my friend Mahmoud Ahmadinejad for a while, I figured he must be busy with his new uranium-enrichment hobby. My suspicions weren’t alleviated by this excellent piece in the New York Review of Books. What I hadn’t realized is that Mahmoud is quite the joker! And no, I’m not talking about the obvious gag of funding a peaceful nuclear energy program by oil exports — I’m talking about the following Pythonesque routine, which I’m not making up:

IAEA: Iran, if your nuclear program is for peaceful purposes only, then why did we find traces of 36%-enriched uranium at the Natanz facility, whereas you’d only need 3% enrichment for a reactor?

Iran: Oh, that’s just because the equipment we bought from A. Q. Khan on the black market was contaminated.

Grab bag

May 6th, 2006

Sorry for the long delay; I’m recovering from a cold. Thankfully, nothing like my Canadian-muskox-strength cold in October, but still enough to keep my brain out of service for most of the week. On the positive side, I now have a week’s worth of websurfing to share with you.

What’s as fast-paced as Tetris or Pac-Man, playable for free on the web, and willing to tell you whether you harbor hidden biases against blacks, gays, women, or Jews? Why, the Implicit Association Test, developed by psychologists Mahzarin Banaji, Tony Greenwald, and Brian Nosek. If you haven’t played it yet, do so now — it’s fun! Do you take longer to match African-American faces with words like “peace,” “love,” and “wonderful” and Caucasian faces with words like “bad,” “awful,” and “horrible” than vice versa? Yes, if you’re like 88% of white Americans and — interestingly — 48% of black Americans. (Philip Tetlock, quoted in this Washington Post article, comments that “we’ve come a long way from Selma, Alabama, if we have to calibrate prejudice in milliseconds.”) While I’m ashamed to be part of that 88% statistic, I’m also relieved that, even at an involuntary, subconscious level, I apparently harbor no bias at all against Asian-Americans or gays.

While browsing Wikipedia (Earth’s largest procrastination resource), I came across the following “Freedom House” world map, which labels each country as “free,” “partly free,” or “not free” depending on how it scores on various indices of voting rights, free speech, etc.


I have one beef with this map: I think there should be a little red dot over Berkeley, California.

On an equally important note, while reading the Wikipedia entry for bear (don’t ask), I came across my favorite paragraph in the whole encyclopedia:

In a chance encounter with a bear, the best course of action is usually to back away slowly in the direction that you came. The bear will rarely become aggressive and approach you. In order to protect yourself, some suggest passively lying on the ground and waiting for the bear to lose interest. Another approach is to constantly maintain an obstacle between you and the bear, such as a thick tree or boulder. A person is much more agile and quick than a bear allowing him or her to respond to a bear’s clockwise or counter-clockwise movement around the obstacle and move accordingly. The bear’s frustration will eventually cause disinterest. One can then move away from the bear to a new obstacle and continue this until he or she has created a safe distance from the bear.

Lastly, Reuters reports on an interview in which Bill Gates discusses why he hates being so rich. My mom tells me that, when I visit Microsoft Research a few weeks from now, I should help ease Gates’s burden by demanding immediate reimbursement for my travel expenses.

One down

April 30th, 2006

Last summer I posed Ten Semi-Grand Challenges for Quantum Computing Theory. Today I’m pleased to report that (part of) one of my challenges has been “solved” — where, as always in this business, the word “solved” is defined broadly so as to include “proven to be not really worth working on, since a solution to it would imply a solution to something else that most of us gave up on years ago.”

Challenge 10 involved finding a polynomial-time quantum algorithm to PAC-learn neural networks (that is, the class TC0 of polynomial-size, constant-depth threshold circuits). In a new ECCC preprint, Adam Klivans and Alex Sherstov show that, if there’s a fast quantum algorithm to learn even depth-2 neural nets, then there’s also a fast quantum algorithm for the ~n^1.5-approximate shortest vector problem. Embarrassingly for me, once you have the idea — to use Oded Regev’s lattice-based public key cryptosystems — the quantum hardness of learning (say) depth-4 or depth-5 neural nets is immediate, while getting down to depth 2 takes another page. This is one of those results that hangs in the wonderful balance between “you could’ve thought of that” and “nyah nyah, you didn’t.”

Feel free to post your own challenges in the comments section. But please, no “spouter challenges” like “where does the power of quantum computing come from?” or “is there a deeper theoretical framework for quantum algorithms?” In general, if you’re going to pose a scientific challenge, you should (1) indicate some technical problem whose solution would clearly represent progress, and (2) be willing to place at least 25% odds on such progress being made within five years. Or if you’re not a gambler, pick technical problems that you yourself intend to solve — that’s the approach I took with Semi-Grand Challenges 4 and 7.

Theoretical computer science is often disheartening: there are so many open problems, and a week later they’re all still open, and a week after that, they’re all still open. Wait a year, though, or five years, or twenty, and some grad student will have had the insight that’s eluded everyone else: that the problem can’t be solved with any existing technique, unless Blum integers are factorable in 2^(n^ε) time for all ε>0.

In his country there is problem

April 28th, 2006


So it seems that Borat — the racist, misogynist, khrum-grabbing “reporter” from Da Ali G Show — has become a serious public relations problem for the former Soviet Republic of Kazakhstan. See here for an old New Yorker piece, and here for the latest on this important story.

Respek.

Alright, alright, back to complexity

April 26th, 2006

I’ve learned my lesson, at least for the next day or two.

And speaking of learning — in computational learning theory, there’s an “obvious” algorithm for learning a function from random samples. Here’s the algorithm: output any hypothesis that minimizes the error on those samples.

I’m being intentionally vague about what the learning model is — since as soon as you specify a model, it seems like some version of that algorithm is what you want to do, if you want the best tradeoff between the number of samples and the error of your hypothesis. For example, if you’re trying to learn a Boolean function from a class C, then you want to pick any hypothesis from C that’s consistent with all your observations. If you’re trying to learn a Boolean function based on noisy observations, then you want to pick any hypothesis that minimizes the total number of disagreements. If you’re trying to learn a degree-d real polynomial based on observations subject to Gaussian noise, then you want to pick any degree-d polynomial that minimizes the sum of squared errors, and so on.
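To make the “obvious” algorithm concrete, here’s a toy sketch of empirical risk minimization for the first case above, using a hypothetical hypothesis class C of conjunctions (ANDs of a subset of variables) over n Boolean inputs. The function names are mine, purely illustrative; brute-force enumeration is of course only feasible for tiny n.

```python
import itertools
import random

def conjunction(mask):
    """The hypothesis 'AND of the variables selected by mask'."""
    return lambda x: all(x[i] for i in mask)

def erm(samples, n):
    """Pick any hypothesis in C minimizing the number of disagreements
    with the samples (empirical risk minimization, by brute force)."""
    best_mask, best_err = (), len(samples) + 1
    for r in range(n + 1):
        for mask in itertools.combinations(range(n), r):
            err = sum(conjunction(mask)(x) != y for x, y in samples)
            if err < best_err:
                best_mask, best_err = mask, err
    return conjunction(best_mask), best_err

# Noiseless samples labeled by a target that lies in C, so ERM is
# guaranteed to find a consistent hypothesis (empirical error 0).
random.seed(0)
target = conjunction((0, 2))  # x0 AND x2
samples = [(x, target(x))
           for x in (tuple(random.randint(0, 1) for _ in range(4))
                     for _ in range(30))]
hypothesis, err = erm(samples, 4)
print(err)  # 0
```

In the noisy variants, only the inner loop’s error measure changes (number of disagreements, or sum of squared errors); the minimize-over-C structure is the same.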

Here’s my question: is the “obvious” algorithm always the best one, or is there a case where a different algorithm needs asymptotically fewer samples? That is, do you ever want to prefer a hypothesis that disagrees with more of your observations to one that disagrees with fewer?

While I’m on the subject, have you ever wished you could help Scott Aaronson do his actual research, and even be thanked — by name — in the acknowledgments of one of his papers? Well then, don’t miss this chance! All you have to do is read this seminal paper by Alon, Ben-David, Cesa-Bianchi, and Haussler, and then tell me what upper bound on the sample complexity of p-concept learning follows from their results. (Perversely, all they prove in the paper is that some finite number of samples suffices — must be a mathematician thing.)

Earth Day, Doomsday, and Chicken Little

April 22nd, 2006

It’s Earth Day, so time for a brief break from my laserlike, day-long focus on complexity theory, and for my long-promised post about climate change.

Let me lay my cards on the table. I think that we’re in the same position with climate change today that we were with Hitler in 1938. That position, in case you’re wondering, is on the brink of a shitstorm. And as with the lead-up to that earlier shitstorm, some people are sanely worried, some are in active denial, and the rest are in “passive denial” — accepting the obvious if pressed, but preferring to think about more pleasant things like NP intersect coNP. It’s frustrating even to have to defend the “worried” view explicitly, since it’s so clear which way the debate will have been settled 50 years from now.

At the same time, I can’t ignore that there are thoughtful, humane, intelligent people — just like there were in the 1930’s — who downplay, equivocate over, and rationalize away the shitstorm that (again from my perspective) is gathering over our heads.

After all, isn’t the climate change business more complicated than all that? Do we even know the Earth is getting warmer? Okay, so maybe we do know, but do we really know why? Couldn’t it just be a coincidence that we’re pumping out billions of tons of CO2 and methane each year, and 19th-century physics tells us that will make the temperature rise, and the temperature is in fact rising as predicted? What about feedbacks like cloud cover, ocean absorption, and ice caps? And sure, maybe the feedbacks could at most buy a few decades, and maybe some of them (like melting ice caps darkening the Earth’s surface) are rapidly making things worse rather than better, but even so, wouldn’t the loss of some low-lying countries be more than balanced out by warmer winters in Ontario? And granted, maybe if our goal was to run a massive, irreversible geophysics experiment on an entire planet, it might be smarter to start with (say) Venus or Mars instead of Earth, but still — wouldn’t it be easier to adapt to a climate unlike any the planet has experienced in the last 200 million years than to drive Priuses instead of Cherokees? Isn’t it just a question of how to allocate resources, of how to maximize expected utility? And aren’t there other risks we should be more worried about, like bird flu, or out-of-control nanorobots converting the planet into grey goo?

I’ll tackle some of these questions in future posts or comments — though for most of them, the professionals at RealClimate can do a better job than I can. Today I want to try a different tack: flying over most of this well-worn ground, and aiming immediately for the one place where the climate skeptics invariably end up anyway when all of their other arguments have been exhausted. That place is the Chicken Little Argument.

“Back in the 1970’s, all you academics were screaming about overpopulation, and the oil shortage, and global cooling. That’s right, cooling: the exact opposite of warming! And before that it was radiation poisoning, or an accidental nuclear launch, and before that probably something else. Yet time after time, the doomsayers were wrong. So why should this time be any different? Why should ours be the one time when the so-called crisis is real, when it’s not a figment of a few scientists’ overheated imaginations?”

The first response, of course, is that sometimes the alarmists were right. More than once, our civilization really did face an existential threat, only to escape it by a hair. I already mentioned Hitler, but there’s another example that’s closer to the subject at hand.

In the 1970’s, Mario Molina and F. Sherwood Rowland realized that chlorofluorocarbons, then a common refrigerant, propellant, and cleaning solvent, could be broken down by UV light into compounds that then attacked the ozone molecules in the upper atmosphere. Had the resulting loss of ozone continued for much longer, the increased UV light reaching the Earth’s surface would eventually have decimated populations of plankton and cyanobacteria, which in turn could have destabilized much of the world’s food chain.

As with global warming today, the initial response of the chemical companies was to attack the ivory-tower, tree-hugging, funding-crazed, Cassandra-like messenger. But in 1985, Joseph Farman, Brian Gardiner, and Jonathan Shanklin looked into a weird error in ozone measurements over Antarctica, which seemed to show more than half the ozone there disappearing from September to December. When it turned out not to be an error, even Du Pont decided that planetary suicide wasn’t in its best interest, and CFC’s were phased out in most of the world by 1996. We survived that one.

But there’s a deeper response to the Chicken Little Argument, one that goes straight to the meat of the issue (chicken, I suppose). This is that, when we’re dealing with “indexical” questions — questions of the form “why us? why were we born in this era rather than a different one?” — we can’t apply the same rules of induction that work elsewhere.

To illustrate, consider a hypothetical planet where the population doubles every generation, until it finally depletes the planet’s resources and goes extinct. (Like bacteria in a Petri dish.) Now imagine that in every generation, there are doomsayers preaching that the end is nigh, who are laughed off by folks with more common sense. By assumption, eventually the doomsayers will be right — their having been wrong in the past is just a precondition for there being a debate in the first place. But there’s a further point. If you imagine yourself chosen uniformly at random among all people ever to live on the planet, then with about 99% probability, you’ll belong to one of the last seven generations. The assumption of exponential growth makes it not just possible, but probable, that you’re near the end.
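The 99% figure is just the geometric series doing its thing — a quick check (the number of generations is arbitrary; any reasonably large value gives the same answer):

```python
# Population doubles each generation: generation k has 2**k people.
n = 50                     # total number of generations (any large n works)
pop = [2**k for k in range(n)]
total = sum(pop)           # = 2**n - 1
last_seven = sum(pop[-7:])

# The last 7 generations hold a fraction approaching 1 - 2**-7 = 0.9921875...
print(last_seven / total)
```

In other words, more than 99% of everyone who ever lives on such a planet lives within seven generations of the end.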

That’s one formulation (though not the best one) of the infamous Doomsday Argument, which says (roughly speaking) that the probability of human history continuing for millions of years longer is less than one would naïvely expect, since if it did so continue, then we would occupy an improbable position near the very beginning of that history. Obviously cavemen could have made the same argument, and they would have been wrong. The point is that, if everyone in history makes the Doomsday Argument, then most people who make it (or a suitable version of it) will by definition be right.

On hearing the Doomsday Argument for the first time, almost everyone thinks there must be a fallacy somewhere. But once you accept one key assumption, the Argument is a trivial consequence of Bayes’ Rule. So what is that key assumption? It’s what Nick Bostrom, in one of the only metaphysical page-turners ever written, calls the Self-Sampling Assumption (SSA). The SSA states that, if you consider a possible history of the world to have a prior probability p, and if that history contains N>0 people who you imagine you “could have been,” then you should judge the probability of your being a specific one of those people within that history to be p/N. Sound obvious? Well, you might imagine instead that you need to weight the probability of each history by the number of people in it — so that, if a history has ten times as many people who you “could have been,” then you would be ten times as likely to exist in that history in the first place. Bostrom calls this alternative the Self-Indication Assumption (SIA).

It’s not hard to show that switching from SSA to SIA exactly cancels out the effect of the Doomsday Argument — bringing you back to your “naïve” prior probabilities for each possible history. In short, if you accept SSA then the Doomsday Argument goes through, while if you accept SIA then it doesn’t.

But before you buy that “SIA not SSA” bumper-sticker for your SUV, let me point out the downsides. Firstly, SIA forces you to treat your own existence as a random variable — not as something you can just condition on! Indeed, the image that springs to mind is that of a warehouse full of souls, not all of which will get “picked” to inhabit a body. And secondly, assuming it’s logically possible for there to be a universe with an infinite number of people, SIA implies that we must live in such a universe. Usually, if you reach a definite empirical conclusion starting from pure thought, your best bet is to look around you. You might find yourself in a medieval monastery or an Amsterdam coffeeshop.

On the other hand, as Bostrom observed, the SSA carries some heavy baggage of its own. For example, it suggests the following “algorithm” by which the first people ever to live, call them (I dunno) “Adam” and “Eve,” could solve NP-complete problems in polynomial time. They simply guess a random solution, having formed the firm intention to

  1. have children (leading eventually to an exponential number of descendants) if the solution is wrong, or
  2. have no children if the solution is right.

(For this algorithm, it really does have to be “Adam and Eve, not Adam and Steve.”) Here’s the punchline: the prior probability of Adam and Eve’s choosing a wrong solution is close to 1, but under SSA, the posterior probability is close to 0. For if Adam and Eve guess a wrong solution, then with overwhelming probability they wouldn’t be Adam and Eve to begin with — they would be one of the numerous descendants thereof.
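The punchline is a one-line Bayes computation. Here’s a sketch with made-up numbers (a search problem with one good solution among 2^n, and exponentially many descendants in the “wrong guess” branch):

```python
# Hypothetical numbers: one correct solution among 2**n candidates.
n = 20
p_right = 2.0 ** -n            # prior probability the random guess is correct
p_wrong = 1 - p_right
descendants = 2.0 ** (2 * n)   # exponentially many descendants if the guess is wrong

# SSA: if the guess is wrong, the chance of finding yourself among the
# 2 first people (rather than among the descendants) is 2 / (descendants + 2).
p_first_given_wrong = 2 / (descendants + 2)
p_first_given_right = 1.0      # no descendants, so you're certainly Adam or Eve

# Bayes: posterior probability the guess was wrong, GIVEN that you're Adam or Eve.
posterior_wrong = (p_wrong * p_first_given_wrong) / (
    p_wrong * p_first_given_wrong + p_right * p_first_given_right
)
print(posterior_wrong)  # tiny: conditioned on being Adam, the guess is almost surely right
```

So the overwhelming prior probability of a wrong guess is swamped by the overwhelming improbability, given a wrong guess, of finding yourself at the top of the family tree.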

Indeed, there’s a loony, crackpot paper showing that if Adam and Eve had a quantum computer, then they could even solve PP-complete problems in polynomial time. Every day I’m dreading the Exxon ad: “If the assumptions underlying the Doomsday Argument were valid, it’s not just that Adam and Eve could solve NP-complete problems in polynomial time. Modulo a plausible derandomization assumption, a theorem of S. Aaronson implies they could decide the entire polynomial hierarchy! So go ahead, buy that monster SUV.”

If this discussion seems hopelessly speculative, well, that’s exactly the point. The Doomsday Argument is hopelessly speculative, but not more so than the Chicken Little Argument. Ultimately, both arguments rest on metaphysical assumptions about “why we’re us and not someone else” — about the probability of having been born into one historical epoch rather than another. This is not the sort of question that science gives us the tools to answer.

For me, then, the Doomsday Argument is like an ethereal missile that neutralizes the opposing missile of the Chicken Little Argument — leaving the ground troops below to slog it out based on, you know, actual facts and evidence. So I think the environmentalists’ message to the climate contrarians should be as follows: if you stick to the science, then we will too. But if you fall back on your favorite lazy meta-argument — “why should the task of saving the world have fallen to this generation, and not to some other one?” — then don’t be surprised to find that metareasoning cuts both ways.