Almost two years ago, a reader named Lev R. won my Best Anthropicism Contest with the following gem:
why aren’t physicists too interested in computational complexity? because if they were, they’d be computer scientists.
As the champion, Lev won the right to ask any question and have me answer it on this blog. Here was Lev’s question:
I like your “Earth Day, Doomsday, and Chicken Little” post, but you dodged the big question. Will the world end (humans go extinct) anytime soon? Or do you think that despite our best efforts, we’ll somehow end up not destroying ourselves?
In general, I despise being asked to make predictions, even about infinitely less weighty topics — especially when there’s a chance of my being wrong, and of people looking back Nelson-Muntz-like and saying “ha ha, Scott was wrong!” That’s one of several reasons why I could never be a physicist.
An answerable question would be one that asked me to clear up a misconception, or render a moral judgment, or discuss the consequences of a given assumption. (Unanswerable: “When will we see useful quantum computers?” Answerable: “Didn’t that company in Vancouver already build one?”) Questions about the relationships between complexity classes, or about other unsolved math problems, are also fine. But as for the universe, how am I supposed to know what it’ll decide to do, among all the things it could do within reasonable bounds of physics and logic? How am I even supposed to have a prior?

As a CS theorist, I’m trained to think not about what’s likely to happen, but about the very worst that could happen — within stated assumptions, of course. Among the practical consequences of this attitude: I never gamble, and I never play the stock market (and not only because, while there are many things I want, almost none of them can be traded for money). I also don’t worry about being put out of a job by prediction markets. Where the Bayesian stops and says “every question beyond these is trivial or meaningless,” that’s where I’m just getting started.
But despite everything I’ve said, after years of diligent research into the future of the human race — reading hundreds of trillions of books and articles about climate change, overpopulation, Peak Oil, nuclear proliferation, transhumanism, AI, and every other conceivably relevant topic (what do you think I was doing, writing CS papers?) — I am finally prepared, this somber April 1st, to answer Lev’s question with the seriousness it deserves. Obviously my predictions can only be probabilistic, and obviously I can’t give you the deep reasons behind them — those would take years to explain. I shall therefore present the human future, circa 2100, in the form of a pie chart.
