Archive for the ‘Metaphysical Spouting’ Category

I Had A Dream

Sunday, January 18th, 2026

Alas, the dream that I had last night was not the inspiring, MLK kind of dream, even though tomorrow happens to be the great man’s day.  No, I had the literal kind of dream, where everything seems real but then you wake up and remember only the last fragments.

In my case, those last fragments involved a gray-haired bespectacled woman, a fellow CS professor.  She and I were standing in a dimly lit university building.  And she was grabbing me by the shoulders, shaking me.

“Look, Scott,” she was saying, “we’re both computer scientists.  We were both around in the 90s.  You know as well as I do that, if someone claims to have built an AI, but it turns out they just loaded a bunch of known answers, written by humans, into a lookup table, and then they search the table when a question comes … that’s not AI.  It’s slop.  It’s garbage.”

“But…” I interjected.

“Oh of course,” she continued, “so you make the table bigger.  What do you have now?  More slop!  More garbage!  You load the entire Internet into the table.  Now you have an astronomical-sized piece of garbage!”

“I mean,” I said, “there’s an exponential blowup in the number of possible questions, which can only be handled by…”

“Of course,” she said impatiently, “I understand as well as anyone.  You train a neural net to predict a probability distribution over the next token.  In other words, you slice up and statistically recombine your giant lookup table to disguise what’s really going on.  Now what do you get?  You get the biggest piece of garbage the world has ever seen.  You get a hideous monster that’s destroying and zombifying our entire civilization … and that still understands nothing more than the original lookup table did.”

“I mean, you get a tool that hundreds of millions of people now use every day—to write code, to do literature searches…”

By this point, the professor was screaming at me, albeit with a pleading tone in her voice.  “But no one who you respect uses that garbage! Not a single one!  Go ahead and ask them: scientists, mathematicians, artists, creators…”

“I use it,” I replied quietly.  “Most of my friends use it too.”

The professor stared at me with a new, wordless horror.  And that’s when I woke up.

I think I was next going to say something about how I agreed that generative AI might be taking the world down a terrible, dangerous path, but how dismissing the scientific and philosophical immensity of what’s happened, by calling it “slop,” “garbage,” etc., is a bad way to talk about the danger. If so, I suppose I’ll never know how the professor would’ve replied to that. Though, if she was just an unintegrated part of my own consciousness—or a giant lookup table that I can query on demand!—perhaps I could summon her back.

Mostly, I remember being surprised to have had a dream that was this coherent and topical. Normally my dreams just involve wandering around lost in an airport that then transforms itself into my old high school, or something.

Scott A. on Scott A. on Scott A.

Sunday, January 18th, 2026

Scott Alexander has put up one of his greatest posts ever, a 10,000-word eulogy to Dilbert creator Scott Adams, of which I would’ve been happy to read 40,000 words more. In it, Alexander trains a microscope on Adams’ tragic flaws as a thinker and human being, but he adds:

In case it’s not obvious, I loved Scott Adams.

Partly this is because we’re too similar for me to hate him without hating myself.

And:

Adams was my teacher in a more literal way too. He published several annotated collections, books where he would present comics along with an explanation of exactly what he was doing in each place, why some things were funny and others weren’t, and how you could one day be as funny as him. Ten year old Scott devoured these … objectively my joke posts get the most likes and retweets of anything I write, and I owe much of my skill in the genre to cramming Adams’ advice into a malleable immature brain.

When I first heard the news that Scott Adams had succumbed to cancer, I posted something infinitely more trivial on my Facebook. I simply said:

Scott Adams (who reigned for decades as the #1 Scott A. of the Internet, with Alexander as #2 and me as at most #3) was a hateful asshole, a nihilist, and a crank. And yet, even when reading the obituaries that explain what an asshole, nihilist, and crank he was, I laugh whenever they quote him.

Inspired by Scott Alexander, I’d like now to try again, to say something more substantial. As Scott Alexander points out, Scott Adams’ most fundamental belief—the through-line that runs not only through Dilbert but through all his books and blog posts and podcasts—was that the world is ruled by idiots. The pointy-haired boss always wins, spouting about synergy and the true essence of leadership, and the nerdy Dilberts always lose. Trying to change minds by rational argument is a fool’s errand, as “master persuaders” and skilled hypnotists will forever run rings around you. He, Scott Adams, is cleverer than everyone else, among other things because he realizes all this—but even he is powerless to change it.

Or as Adams put it in The Dilbert Principle:

It’s useless to expect rational behavior from the people you work with, or anybody else for that matter. If you can come to peace with the fact that you’re surrounded by idiots, you’ll realize that resistance is futile, your tension will dissipate, and you can sit back and have a good laugh at the expense of others.

The thing is, if your life philosophy is that the world is ruled by idiots, and that confident charlatans will always beat earnest nerds, you’re … often going to be vindicated by events. Adams was famously vindicated back in 2015, when he predicted Trump’s victory in the 2016 election (since Trump, you see, was a “master persuader”), before any other mainstream commentator thought that Trump even had a serious chance of winning the Republican nomination.

But if you adopt this worldview, you’re also often going to be wrong—as countless of Adams’ other confident predictions were (see Scott Alexander’s post for examples), to say nothing of his scientific or moral views.

My first hint that the creator of Dilbert was not a reliable thinker came when I learned of his smugly dismissive view of science. One of the earliest Shtetl-Optimized posts, way back in 2006, was entitled Scott A., disbeliever in Darwinism. At that time, Adams’ crypto-creationism struck me as just some bizarre, inexplicable deviation. I’m no longer confused about it. Scott Alexander’s eulogy shows just how much deeper the crankishness went: how Adams also gobbled medical misinformation, placed his own cockamamie ideas about gravity on par with general relativity, etc. etc. But Alexander succeeds in reconciling all this with Adams’ achievements: it all follows from the starting axiom that the world is ruled by morons, and that he, Scott Adams, was the only one clever enough to see through it all.


Is my epistemology any different? Do I not also look out on the world, and see idiots and con-men and pointy-haired bosses in every direction? Well, not everywhere. At any rate, I see far fewer of them in the hard sciences.

This seems like a good time to say something that’s been a subtext of Shtetl-Optimized for 20 years, but that Scott Alexander has inspired me to make text.

My whole worldview starts from the observation that science works. Not perfectly, of course—working in academic science for nearly 30 years, I’ve had a close-up view of the flaws—but the motor runs. On a planet full of pointy-haired bosses and imposters and frauds, science nevertheless took us in a few centuries from wretchedness and superstition to walking on the moon and knowing the age of the universe and the code of life.

This is the point where people always say: that’s all well and good, but you can’t derive ought from is, and science, for all its undoubted successes, tells us nothing about what to value or how to live our lives.

To which I reply: that’s true in a narrow sense, but it dramatically understates how far you can get from the “science works” observation.

As one example, you can infer that the people worth listening to are the people who speak and write clearly, who carefully distinguish what they know from what they don’t, who sometimes change their minds when presented with opposing views and at any rate give counterarguments—i.e., who exemplify the values that make science work. The political systems worth following are the ones that test their ideas against experience, that have built-in error-correction mechanisms, that promote people based on ability rather than loyalty—the same things that make scientific institutions work, insofar as they do work. And of course, if the scientists who study X are nearly unanimous in saying that a certain policy toward X would be terrible, then we’d better have a damned good reason to pursue the policy anyway. This still leaves a wide range of moral and political views on the table, but it rules out virtually every kind of populism, authoritarianism, and fundamentalism.

Incidentally, this principle—that one’s whole moral and philosophical worldview should grow out of the seed of science working—is why, from an early age, I’ve reacted to every kind of postmodernism as I would to venomous snakes. Whenever someone tells me that science is just another narrative, a cultural construct, a facade for elite power-seeking, etc., to me they might as well be O’Brien from 1984, in the climactic scene where he tortures Winston Smith into agreeing that 2+2=5, and that the stars are just tiny dots a few miles away if the Party says they are. Once you can believe absurdities, you can justify atrocities.

Scott Adams’ life is interesting to me in that it shows exactly how far it’s possible to get without internalizing this. Yes, you can notice that the pointy-haired boss is full of crap. You can make fun of the boss. If you’re unusually good at making fun of him, you might even become a rich, famous, celebrated cartoonist. But you’re never going to figure out any ways of doing things that are systematically better than the pointy-haired boss’s ways, or even recognize the ways that others have found. You’ll be in error far more often than in doubt. You might even die of prostate cancer earlier than necessary, because you listen to medical crackpots and rely on ivermectin, turning to radiation and other established treatments only after having lost crucial time.


Scott Adams was hardly the first great artist to have tragic moral flaws, or to cause millions of his fans to ask whether they could separate the artist from the art. But I think he provides one of the cleanest examples where the greatness and the flaws sprang from the same source: namely, overgeneralization from the correct observation that “the world is full of idiots,” in a way that leaves basically no room even for Darwin or Einstein, and so inevitably curdles over time into crankishness, bitterness, and arrogance. May we laugh at Scott Adams’ cartoons and may we learn from his errors, both of which are now permanent parts of the world’s heritage.

The Goodness Cluster

Wednesday, January 7th, 2026

The blog-commenters come at me one by one, a seemingly infinite supply of them, like masked henchmen in an action movie throwing karate chops at Jackie Chan.

“Seriously, Scott, do better,” says each henchman when his turn comes, ignoring all the ones before him who said the same. “If you’d have supported American-imposed regime change in Venezuela, like just installing María Machado as the president, then surely you must also support Trump’s cockamamie plan to invade Greenland! For that matter, you logically must also support Putin’s invasion of Ukraine, and China’s probable future invasion of Taiwan!”

“No,” I reply to each henchman, “you’re operating on a wildly mistaken model of me. For starters, I’ve just consistently honored the actual democratic choices of the Venezuelans, the Greenlanders, the Ukrainians, and the Taiwanese, regardless of coalitions and power. Those choices are, respectively, to be rid of Maduro, to stay part of Denmark, and (for both the Ukrainians and the Taiwanese) to be left alone by Russia and China—in all four cases, as it happens, the choices most consistent with liberalism, common sense, and what nearly any 5-year-old would say was right and good.”

“My preference,” I continue, “is simply that the more pro-Enlightenment, pluralist, liberal-democratic side triumph, and that the more repressive, authoritarian side feel the sting of defeat—always, in every conflict, in every corner of the earth.  Sure, if authoritarians win an election fair and square, I might clench my teeth and watch them take power, for the sake of the long-term survival of the ideals those authoritarians seek to destroy. But if authoritarians lose an election and then arrogate power anyway, what’s there even to feel torn about? So, you can correctly predict my reaction to countless international events by predicting this. It’s like predicting what Tit-for-Tat will do on a given move in the Iterated Prisoners’ Dilemma.”

“Even more broadly,” I say, “my rule is simply that I’m in favor of good things, and against bad things.  I’m in favor of truth, and against falsehood. And if anyone says to me: because you supported this country when it did good thing X, you must also support it when it does evil thing Y? (Either as a reductio ad absurdum, or because the person actually wants evil thing Y?) Or if they say: because you agreed with this person when she said this true thing, you must also endorse this false thing she said? I reply: good over evil and truth over lies in every instance—if need be, down to the individual subatomic particles of morality and logic.”

The henchmen snarl, “so now it’s laid bare! Now everyone can see just how naive and simplistic Aaronson’s so-called ‘political philosophy’ really is!  Do us all a favor, Scott, and stick to quantum physics! Stick to computer science! Do you not know that philosophers and political scientists have filled libraries debating these weighty matters? Are you an act-utilitarian? A Kantian? A neocon or neoliberal? An America-First interventionist? Pick some package of values, then answer to us for all the commitments that come with that package!”

I say: “No, I don’t subcontract out my soul to any package of values that I can define via any succinct rule. Instead, given any moral dilemma, I simply query my internal Morality Oracle and follow whatever it tells me to do, unless of course my weakness prevents me. Some would simply call the ‘Morality Oracle’ my conscience. But others would hold that, to whatever extent people’s consciences have given similar answers across vast gulfs of time and space and culture, it’s because they tapped into an underlying logic that humans haven’t fully explained, but that they no more invented than the rules of arithmetic. The world’s prophets and sages have tried again and again over the millennia to articulate that logic, with varying admixtures of error and self-interest and culture-dependent cruft. But just like with math and science, the clearest available statements seem to me to have gotten clearer over time.”

The Jackie Chan henchman smirks at this. “So basically, you know the right answers to moral questions because of a magical, private Morality Oracle—like, you know, the burning bush, or Mount Sinai? And yet you dare to call yourself a scientific rationalist, a foe of obscurantism and mysticism? Do you have any idea how pathetic this all sounds, as an attempted moral theory?”

“But I’m not pretending to articulate a moral theory,” I reply. “I’m merely describing what I do. I mean, I can gesture toward moral theories and ideas that capture more of my conscience’s judgments than others, like liberalism, the Enlightenment, the Golden Rule, or utilitarianism. But if a rule ever appears to disagree with the verdict of my conscience—if someone says, oh, you like utilitarianism, so you must value the lives of these trillion amoebas above this one human child’s, even torture and kill the child to save the amoebas—I will always go with my conscience and damn the rule.”

“So the meaning of goodness is just ‘whatever seems good to you’?” asks the henchman, between swings of his nunchuk. “Do you not see how tautological your criterion is, how worthless?”

“It might be tautological, but I find it far from worthless!” I offer. “If nothing else, my Oracle lets me assess the morality of people, philosophies, institutions, and movements, by simply asking to what extent their words and deeds seem guided by the same Oracle, or one that’s close enough! And if I find a cluster of millions of people whose consciences agree with mine and each others’ in 95% of cases, then I can point to that cluster, and say, here. This cluster’s collective moral judgment is close to what I mean by goodness. Which is probably the best we can do with countless questions of philosophy.”

“Just like, in the famous Wittgenstein riff, we define ‘game’ not by giving an if-and-only-if, but by starting with poker, basketball, Monopoly, and other paradigm-cases and then counting things as ‘games’ to whatever extent they’re similar—so too we can define ‘morality’ by starting with a cluster of Benjamin Franklin, Frederick Douglass, MLK, Vasily Arkhipov, Alan Turing, Katalin Karikó, those who hid Jews during the Holocaust, those who sit in Chinese or Russian or Iranian or Venezuelan torture-prisons for advocating democracy, etc., and then working outward from those paradigm-cases, and whenever in doubt, by seeking reflective equilibrium between that cluster and our own consciences. At any rate, that’s what I do, and it’s what I’ll continue doing even if half the world sneers at me for it, because I don’t know a better approach.”

Applications to the AI alignment problem are left as exercises for the reader.
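As an aside, the Tit-for-Tat comparison above is easy to make concrete: the strategy’s entire behavior follows from a single rule, namely cooperate on the first move and then mirror the opponent’s previous move. A minimal Python sketch, purely illustrative:

```python
def tit_for_tat(opponent_history):
    """Return 'C' (cooperate) or 'D' (defect), given the opponent's past moves."""
    if not opponent_history:
        return "C"               # cooperate on the first move
    return opponent_history[-1]  # then mirror the opponent's last move

# Example: an opponent who defects twice, then cooperates once.
opponent_moves = ["D", "D", "C"]
my_moves = [tit_for_tat(opponent_moves[:i]) for i in range(len(opponent_moves) + 1)]
print(my_moves)  # ['C', 'D', 'D', 'C']
```

Given any opponent history, every move is determined in advance. That total predictability, rather than any deep strategic insight, is the point of the analogy.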


Announcement: I’m currently on my way to Seattle, to speak in the CS department at the University of Washington—a place that I love but haven’t visited, I don’t think, since 2011 (!). If you’re around, come say hi. Meanwhile, feel free to karate-chop this post all you want in the comment section, but I’ll probably be slow in replying!

Understanding vs. impact: the paradox of how to spend my time

Thursday, December 11th, 2025

Not long ago William MacAskill, a co-founder of the Effective Altruism movement, visited Austin, where I got to talk with him in person for the first time. I was a fan of his book What We Owe the Future, and found him as thoughtful and eloquent face-to-face as I did on the page. Talking to Will inspired me to write the following short reflection on how I should spend my time, which I’m now sharing in case it’s of interest to anyone else.


By inclination and temperament, I simply seek the clearest possible understanding of reality.  This has led me to spend time on (for example) the Busy Beaver function and the P versus NP problem and quantum computation and the foundations of quantum mechanics and the black hole information puzzle, and on explaining whatever I’ve understood to others.  It’s why I became a professor.

But the understanding I’ve gained also tells me that I should try to do things that will have huge positive impact, in what looks like a pivotal and even terrifying time for civilization.  It tells me that seeking understanding of the universe, like I’ve been doing, is probably nowhere close to optimizing any values that I could defend.  It’s self-indulgent, a few steps above spending my life learning to solve Rubik’s Cube as quickly as possible, but only a few.  Basically, it’s the most fun way I could make a good living and have a prestigious career, so it’s what I ended up doing.  I should be skeptical that such a course would coincidentally also maximize the good I can do for humanity.

Instead I should plausibly be figuring out how to make billions of dollars, in cryptocurrency or startups or whatever, and then spending it in a way that saves human civilization, for example by making AGI go well.  Or I should be convincing whatever billionaires I know to do the same.  Or executing some other galaxy-brained plan.  Even if I were purely selfish, as I hope I’m not, still there are things other than theoretical computer science research that would bring more hedonistic pleasure.  I’ve basically just followed a path of least resistance.

On the other hand, I don’t know how to make billions of dollars.  I don’t know how to make AGI go well.  I don’t know how to influence Elon Musk or Sam Altman or Peter Thiel or Sergey Brin or Mark Zuckerberg or Marc Andreessen to do good things rather than bad things, even when I have gotten to talk to some of them.  Past attempts in this direction by extremely smart and motivated people—for example, those of Eliezer Yudkowsky and Sam Bankman-Fried—have had, err, uneven results, to put it mildly.  I don’t know why I would succeed where they failed.

Of course, if I had a better understanding of reality, I might know how better to achieve prosocial goals for humanity.  Or I might learn why they were actually the wrong goals, and replace them with better goals.  But then I’m back to the original goal of understanding reality as clearly as possible, with the corresponding danger that I spend my time learning to solve Rubik’s Cube faster.

Podcasts!

Saturday, November 22nd, 2025

A 9-year-old named Kai (“The Quantum Kid”) and his mother interviewed me about closed timelike curves, wormholes, Deutsch’s resolution of the Grandfather Paradox, and the implications of time travel for computational complexity:

This is actually one of my better podcasts (and only 24 minutes long), so check it out!


Here’s a podcast I did a few months ago with “632nm” about P versus NP and my other usual topics:


For those who still can’t get enough, here’s an interview about AI alignment for the “Hidden Layers” podcast that I did a year ago, and that I think I forgot to share on this blog at the time:


What else is in the back-catalog? Ah yes: the BBC interviewed me about quantum computing for a segment on Moore’s Law.


As you may have heard, Steven Pinker recently wrote a fantastic popular book about the concept of common knowledge, entitled When Everyone Knows That Everyone Knows… Steve’s efforts render largely obsolete my 2015 blog post Common Knowledge and Aumann’s Agreement Theorem, one of the most popular posts in this blog’s history. But I’m willing to live with that, not only because Steven Pinker is Steven Pinker, but also because he used my post as a central source for the topic. Indeed, you should watch his podcast with Richard Hanania, where Steve lucidly explains Aumann’s Agreement Theorem, noting how he first learned about it from this blog.

ChatGPT and the Meaning of Life: Guest Post by Harvey Lederman

Monday, August 4th, 2025

Scott Aaronson’s Brief Foreword:

Harvey Lederman is a distinguished analytic philosopher who moved from Princeton to UT Austin a few years ago. Since his arrival, he’s become one of my best friends among the UT professoriate. He’s my favorite kind of philosopher, the kind who sees scientists as partners in discovering the truth, and also has a great sense of humor. He and I are both involved in UT’s new AI and Human Objectives Initiative (AHOI), which is supported by Open Philanthropy.

The other day, Harvey emailed me an eloquent meditation he wrote on what will be the meaning of life if AI doesn’t kill us all, but “merely” does everything we do better than we do it. While the question is of course now extremely familiar to me, Harvey’s erudition—bringing to bear everything from speculative fiction to the history of polar exploration—somehow brought the stakes home for me in a new way.

Harvey mentioned that he’d sent his essay to major magazines but hadn’t had success. So I said, why not a Shtetl-Optimized guest post? Harvey replied—what might be the highest praise this blog has ever received—well, that would be even better than a national magazine, as it would reach more relevant people.

And so without further ado, I present to you…


ChatGPT and the Meaning of Life, by Harvey Lederman

For the last two and a half years, since the release of ChatGPT, I’ve been suffering from fits of dread. It’s not every minute, or even every day, but maybe once a week, I’m hit by it—slackjawed, staring into the middle distance—frozen by the prospect that someday, maybe pretty soon, everyone will lose their job.

At first, I thought these slackjawed fits were just a phase, a passing thing. I’m a philosophy professor; staring into the middle distance isn’t exactly an unknown disease among my kind. But as the years have begun to pass, and the fits have not, I’ve begun to wonder if there’s something deeper to my dread. Does the coming automation of work foretell, as my fits seem to say, an irreparable loss of value in human life?

The titans of artificial intelligence tell us that there’s nothing to fear. Dario Amodei, CEO of Anthropic, the maker of Claude, suggests that “historical hunter-gatherer societies might have imagined that life is meaningless without hunting,” and “that our well-fed technological society is devoid of purpose.” But of course, we don’t see our lives that way. Sam Altman, the CEO of OpenAI, sounds so similar that the text could have been written by ChatGPT. Even if the jobs of the future will look as “fake” to us as ours do to “a subsistence farmer”, Altman has “no doubt they will feel incredibly important and satisfying to the people doing them.”

Alongside these optimists, there are plenty of pessimists who, like me, are filled with dread. Pope Leo XIV has decried the threats AI poses to “human dignity, labor and justice”. Bill Gates has written about his fear that “if we solved big problems like hunger and disease, and the world kept getting more peaceful: What purpose would humans have then?” And Douglas Hofstadter, the cognitive scientist and author of Gödel, Escher, Bach, has spoken eloquently of his terror and depression at “an oncoming tsunami that is going to catch all of humanity off guard.”

Who should we believe? The optimists with their bright visions of a world without work, or the pessimists who fear the end of a key source of meaning in human life?


I was brought up, maybe like you, to value hard work and achievement. In our house, scientists were heroes, and discoveries grand prizes of life. I was a diligent, obedient kid, and eagerly imbibed what I was taught. I came to feel that one way a person’s life could go well was to make a discovery, to figure something out.

I had the sense already then that geographical discovery was played out. I loved the heroes of the great Polar Age, but I saw them—especially Roald Amundsen and Robert Falcon Scott—as the last of their kind. In December 1911, Amundsen reached the South Pole using skis and dogsleds. Scott reached it a month later, in January 1912, after ditching the motorized sleds he’d hoped would help, and man-hauling the rest of the way. As the black dot of Amundsen’s flag came into view on the ice, Scott was devastated to reach this “awful place”, “without the reward of priority”. He would never make it back.

Scott’s motors failed him, but they spelled the end of the great Polar Age. Even Amundsen took to motors on his return: in 1924, he made a failed attempt for the North Pole in a plane, and, in 1926, he successfully flew over it, in a dirigible. Already by then, the skis and dogsleds of the decade before were outdated heroics of a bygone world.

We may be living now in a similar twilight age for human exploration in the realm of ideas. Akshay Venkatesh, whose discoveries earned him the 2018 Fields Medal, mathematics’ highest honor, has written that the “mechanization of our cognitive processes will alter our understanding of what mathematics is”. Terry Tao, a 2006 Fields Medalist, expects that in just two years AI will be a copilot for working mathematicians. He envisions a future where thousands of theorems are proven all at once by mechanized minds.

Now, I don’t know any more than the next person where our current technology is headed, or how fast. The core of my dread isn’t based on the idea that human redundancy will come in two years rather than twenty, or, for that matter, two hundred. It’s a more abstract dread, if that’s a thing, dread about what it would mean for human values, or anyway my values, if automation “succeeds”: if all mathematics—and, indeed all work—is done by motor, not by human hands and brains.

A world like that wouldn’t be good news for my childhood dreams. Venkatesh and Tao, like Amundsen and Scott, live meaningful lives, lives of purpose. But worthwhile discoveries like theirs are a scarce resource. A territory, once seen, can’t be seen first again. If mechanized minds consume all the empty space on the intellectual map, lives dedicated to discovery won’t be lives that humans can lead.

The right kind of pessimist sees here an important argument for dread. If discovery is valuable in its own right, the loss of discovery could be an irreparable loss for humankind.

A part of me would like this to be true. But over these last strange years, I’ve come to think it’s not. What matters, I now think, isn’t being the first to figure something out, but the consequences of the discovery: the joy the discoverer gets, the understanding itself, or the real life problem their knowledge solves. Alexander Fleming discovered penicillin, and through that work saved thousands, perhaps millions of lives. But if it were to emerge, in the annals of an outlandish future, that an alien discovered penicillin thousands of years before Fleming did, we wouldn’t think that Fleming’s life was worse, just because he wasn’t first. He eliminated great suffering from human life; the alien discoverer, if they’re out there, did not. So, I’ve come to see, it’s not discoveries themselves that matter. It’s what they bring about.


But the advance of automation would mean the end of much more than human discovery. It could mean the end of all necessary work. Already in 1920, the Czech playwright Karel Capek asked what a world like that would mean for the values in human life. In the first act of R.U.R.—the play which introduced the modern use of the word “robot”—Capek has Henry Domin, the manager of Rossum’s Universal Robots (the R.U.R. of the title), offer his corporation’s utopian pitch. “In ten years”, he says, their robots will “produce so much corn, so much cloth, so much everything” that “There will be no poverty.” “Everybody will be free from worry and liberated from the degradation of labor.” The company’s engineer, Alquist, isn’t convinced. Alquist (who, incidentally, ten years later, will be the only human living, when the robots have killed the rest) retorts that “There was something good in service and something great in humility”, “some kind of virtue in toil and weariness”.

Service—work that meets others’ significant needs and wants—is, unlike discovery, clearly good in and of itself. However we work—as nurses, doctors, teachers, therapists, ministers, lawyers, bankers, or, really, anything at all—working to meet others’ needs makes our own lives go well. But, as Capek saw, all such work could disappear. In a “post-instrumental” world, where people are comparatively useless and the bots meet all our important needs, there would be no needed work for us to do, no suffering to eliminate, no diseases to cure. Could the end of such work be a better reason for dread?

The hardline pessimists say that it is. They say that the end of all needed work would not only be a loss of some value to humanity, as everyone should agree. For them it would be a loss to humanity on balance, an overall loss that couldn’t be compensated in another way.

I feel a lot of pull to this pessimistic thought. But once again, I’ve come to think it’s wrong. For one thing, pessimists often overlook just how bad most work actually is. In May 2021, Luo Huazhong, a 31-year-old ex-factory worker in Sichuan, wrote a viral post entitled “Lying Flat Is Justice”. Luo had searched at length for a job that, unlike his factory job, would allow him time for himself, but he couldn’t find one. So he quit, biked to Tibet and back, and commenced his lifestyle of lying flat: doing what he pleased, reading philosophy, contemplating the world. The idea struck a chord with overworked young Chinese, who, it emerged, did not find “something great” in their “humility”. The movement inspired memes, selfies taken flat on one’s back, and even an anthem.

That same year, as the Great Resignation took off in the United States, the subreddit r/antiwork played to similar discontent. Started in 2013 under the motto “Unemployment for all, not only the rich!”, the forum went viral in 2021, starting with a screenshot of a quitting worker’s texts to his supervisor (“No thanks. Have a good life”), and culminating in labor actions, first supporting striking workers at Kellogg’s by spamming the company’s job-application site, and then attempting to support a similar strike at McDonald’s. It wasn’t just young Chinese who hated their jobs.

In Automation and Utopia: Human Flourishing in a World without Work, the Irish lawyer and philosopher John Danaher imagines an antiwork techno-utopia, with plenty of room for lying flat. As Danaher puts it: “Work is bad for most people most of the time.” “We should do what we can to hasten the obsolescence of humans in the arena of work.”

The young Karl Marx would have seen both Domin’s and Danaher’s utopias as a catastrophe for human life. In his notebooks from 1844, Marx describes an ornate and almost epic process, where, by meeting the needs of others through production, we come to recognize the other in ourselves, and through that recognition, come at last to self-consciousness, the full actualization of our human nature. The end of needed work, for the Marx of these notes, would be the impossibility of fully realizing our nature, the end, in a way, of humanity itself.

But such pessimistic lamentations have come to seem to me no more than misplaced machismo. Sure, Marx’s and my culture, the ethos of our post-industrial professional class, might make us regret a world without work. But we shouldn’t confuse the way two philosophers were brought up with the fundamental values of human life. What stranger narcissism could there be than bemoaning the end of others’ suffering, disease, and need, just because it deprives you of the chance to be a hero?


The first summer after the release of ChatGPT—the first summer of my fits of dread—I stayed with my in-laws in Val Camonica, a valley in the Italian Alps. The houses in their village, Sellero, are empty and getting emptier; the people on the streets are old and getting older. The kids that are left—my wife’s elementary school class had, even then, a full complement of four—often leave for better lives. But my in-laws are connected to this place, to the houses and streets where they grew up. They see the changes too, of course. On the mountains above, the Adamello, Italy’s largest glacier, is retreating faster every year. But while the shows on Netflix change, the same mushrooms appear in the summer, and the same chestnuts are collected in the fall.

Walking in the mountains of Val Camonica that summer, I tried to find parallels for my sense of impending loss. I thought about William Shanks, a British mathematician who calculated π to 707 digits by hand in 1873 (he made a mistake around digit 528; almost 200 digits were wrong). He later spent years of his life, literally years, on a table of the reciprocals of the primes up to one hundred and ten thousand, calculating by hand in the morning and checking it over in the afternoon. That was his life’s work. Just sixty years after his death, though, already in the 1940s, the table on which his precious mornings were spent, the few mornings he had on this earth, could be made by a machine in a day.
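For a sense of how completely that labor has been automated: here are a few lines of Python (my own sketch, nothing Shanks would recognize) that rebuild a table in the spirit of his, the reciprocal of every prime up to 110,000 to 40 decimal places, in a fraction of a second rather than a lifetime of mornings.

```python
import time

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(n + 1) if sieve[i]]

start = time.time()
# 1/p to 40 decimal digits, as a digit string: the digits of 10**40 // p.
table = {p: str(10 ** 40 // p).zfill(40) for p in primes_up_to(110_000)}
elapsed = time.time() - start

print(len(table), table[7][:6])   # 1/7 = 0.142857...
```

The whole table, over ten thousand entries, materializes faster than Shanks could have dipped his pen.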

I feel sad thinking about Shanks, but I don’t feel grief for the loss of calculation by hand. The invention of the typewriter, and the death of handwritten notes, seemed closer to the loss I imagined we might feel. Handwriting was once a part of your style, a part of who you were. With its decline, some artistry, a deep and personal form of expression, may be lost. When the bots help with everything we write, couldn’t we too lose our style and voice?

But more than anything I thought of what I saw around me: the slow death of the dialects of Val Camonica and the culture they express. Chestnuts were at one time so important for nutrition here that in the village of Paspardo, a street lined with chestnut trees is called “bread street” (“Via del Pane”). The hyper-local dialects of the valley, outgrowths sometimes of a single family’s inside jokes, have words for all the phases of the chestnut. There’s a porridge made from chestnut flour that, in Sellero, goes by ‘skelt’, but is ‘pult’ in Paspardo, a cousin of ‘migole’ in Malonno, just a few villages away. Boiled, chestnuts are tetighe; dried on a grate, biline or bascocc, which, seasoned and boiled, become broalade. The dialects don’t just record what people eat and ate; they recall how they lived, what they saw, and where they went. Behind Sellero, every hundred-yard stretch of the walk up to the cabins where the cows were taken to graze in summer has its own name. Aiva Codaola. Quarsanac. Coran. Spi. Ruc.

But the young people don’t speak the dialect anymore. They go up to the cabins by car, too fast to name the places along the way. They can’t remember a time when the cows were taken up to graze. Some even buy chestnuts in the store.

Grief, you don’t need me to tell you, is a complicated beast. You can grieve for something even when you know that, on balance, it’s good that it’s gone. The death of these dialects, of the stories told on summer nights in the mountains with the cows, is a loss reasonably grieved. But you don’t hear the kids wishing more people would be forced to stay or speak this funny-sounding tongue. You don’t even hear the old folks wishing they could go back fifty years—in those days it wasn’t so easy to be sure of a meal. For many, it’s better this way, not the best it could be, but still better, even as they grieve what they stand to lose and what they’ve already lost.

The grief I feel, imagining a world without needed work, seems closest to this kind of loss. A future without work could be much better than ours, overall. But, living in that world, or watching as our old ways passed away, we might still reasonably grieve the loss of the work that once was part of who we were.


In the last chapter of Edith Wharton’s The Age of Innocence, Newland Archer contemplates a world that has changed dramatically since, thirty years earlier, before these newfangled telephones and five-day trans-Atlantic ships, he renounced the love of his life. Awaiting a meeting that his free-minded son Dallas has arranged with Ellen Olenska, the woman Newland once loved, he wonders whether his son, and this whole new age, can really love the way he did and does. How could their hearts beat like his, when they’re always so sure of getting what they want?

There have always been things to grieve about getting old. But modern technology has given us new ways of coming to be out of date. A generation born in 1910 did their laundry in Sellero’s public fountains. They watched their grandkids grow up with washing machines at home. As kids, my in-laws worked with their families to dry the hay by hand. They now know, abstractly, that it can all be done by machine. Alongside newfound health and ease, these changes brought, as well, a mix of bitterness and grief: grief for the loss of gossip at the fountains or picnics while bringing in the hay; and also bitterness, because the kids these days just have no idea how easy they have it now.

As I look forward to the glories that, if the world doesn’t end, my grandkids might enjoy, I too feel prospective bitterness and prospective grief. There’s grief, in advance, for what we now have that they’ll have lost: the formal manners of my grandparents they’ll never know, the cars they’ll never learn to drive, and the glaciers that will be long gone before they’re born. But I also feel bitter about what we’ve been through that they won’t have to endure: small things like folding the laundry, standing in security lines, or taking out the trash, but big ones too—the diseases that will take our loved ones, and that they’ll know how to cure.

All this is a normal part of getting old in the modern world. But the changes we see could be much faster and grander in scale. Dario Amodei of Anthropic speculates that a century of technological change could be compressed into the next decade, or less. Perhaps it’s just hype, but—what if it’s not? It’s one thing for a person to adjust, over a full life, to the washing machine, the dishwasher, the air-conditioner, one by one. It’s another to experience a century’s worth of progress in five years. Will I see a day when childbirth is a thing of the past? What about sleep? Will our ‘descendants’ have bodies at all?

And this round of automation could also lead to unemployment unlike any our grandparents saw. Worse, those of us working now might be especially vulnerable to this loss. Our culture, or anyway mine—professional America of the early 21st century—has apotheosized work, turning it into a central part of who we are. Where others have a sense of place—their particular mountains and trees—we’ve come to locate ourselves with professional attainment, with particular degrees and jobs. For the ‘workists’ so many of us have become, technological displacement wouldn’t just be the loss of our jobs. It would be the loss of a central way we have of making sense of our lives.

None of this will be a problem for the new generation, for our kids. They’ll know how to live in a world that could be—if things go well—far better overall. But I don’t know if I’d be able to adapt. Intellectual argument, however strong, is weak against the habits of years. I fear they’d look at me, stuck in my old ways, with the same uncomprehending look that Dallas Archer gives his dad, when Newland announces that he won’t go see Ellen Olenska, the love of his life, after all. “Say”, as Newland tries to explain to his dumbfounded son, “that I’m old fashioned, that’s enough.”


And yet, the core of my dread is not about aging out of work before my time. I feel closest to Douglas Hofstadter, the author of Gödel, Escher, Bach. His dread, like mine, isn’t only about the loss of work today, or the possibility that we’ll be killed off by the bots. He fears that even a gentle superintelligence will be “as incomprehensible to us as we are to cockroaches.”

Today, I feel part of our grand human projects—the advancement of knowledge, the creation of art, the effort to make the world a better place. I’m not in any way a star player on the team. My own work is off in a little backwater of human thought. And I can’t understand all the details of the big moves by the real stars. But even so, I understand enough of our collective work to feel, in some small way, part of our joint effort. All that will change. If I were to be transported to the brilliant future of the bots, I wouldn’t understand them or their work enough to feel part of the grand projects of their day. Their work would have become, to me, as alien as ours is to a roach.


But I’m still persuaded that the hardline pessimists are wrong. Work is far from the most important value in our lives. A post-instrumental world could be full of much more important goods—from rich love of family and friends, to new, undreamt-of works of art—which would more than compensate for the loss of value from the loss of our work.

Of course, even the values that do persist may be transformed in almost unrecognizable ways. In Deep Utopia: Life and Meaning in a Solved World, the futurist and philosopher Nick Bostrom imagines how things might look. In one of the most memorable sections of the book—right up there with an epistolary novella about the exploits of Pignolius the pig (no joke!)—Bostrom says that even child-rearing may be something that we, if we love our children, would come to forego. In a truly post-instrumental world, a robot intelligence could do better for your child, not only in teaching the child to read, but also in showing unbreakable patience and care. If you’d snap at your kid when the robot would not, it would only be selfishness for you to get in the way.

It’s a hard question whether Bostrom is right. At least some of the work of care isn’t like eliminating suffering or ending mortal disease. The needs or wants are small-scale stuff, and the value we get from helping each other might well outweigh the fact that we’d do it worse than a robot could.

But even supposing Bostrom is right about his version of things, and we wouldn’t express our love by changing diapers, we could still love each other. And together with our loved ones and friends, we’d have great wonders to enjoy. Wharton has Newland Archer wonder at five-day trans-Atlantic ships. But what about five-day journeys to Mars? These days, it’s a big deal if you see the view from Everest with your own eyes. But Olympus Mons on Mars is more than twice as tall.

And it’s not just geographical tourism that could have a far expanded range. There’d be new journeys of the spirit as well. No humans would be among the great writers or sculptors of the day, but the fabulous works of art a superintelligence could make could help to fill our lives. Really, for almost any aesthetic value you now enjoy—sentimental or austere, minute or magnificent, meaningful or jocular—the bots would do it much better than we have ever done.

Humans could still have meaningful projects, too. In 1976, about a decade before any of Altman, Amodei, or even I were born, the Canadian philosopher Bernard Suits argued that “voluntary attempts to overcome unnecessary obstacles” could give people a sense of purpose in a post-instrumental world. Suits calls these “games”, but the name is misleading; I prefer “artificial projects”. The projects include things we would call games, like chess, checkers, and bridge, but also things we wouldn’t think of as games at all, like Amundsen’s and Scott’s expeditions to the Pole. Whatever we call them, Suits—who’s followed here explicitly by Danaher, the antiwork utopian, and implicitly by Altman and Amodei—is surely right: even as things are now, we get a lot of value from projects we choose, whether or not they meet a need. We learn to play a piece on the piano, train to run a marathon, or even fly to Antarctica to “ski the last degree” to the Pole. Why couldn’t projects like these become the backbone of purpose in our lives?

And we could have one real purpose, beyond the artificial ones, as well. There is at least one job that no machine can take away: the work of self-fashioning, the task of becoming and being ourselves. There’s an aesthetic accomplishment in creating your character, an artistry of choice and chance in making yourself who you are. This personal style includes not just wardrobe or tattoos, not just your choice of silverware or car, but your whole way of being, your brand of patience, modesty, humor, rage, hobbies and tastes. Creating this work of art could give some of us something more to live for.


Would a world like that leave any space for human intellectual achievement, the stuff of my childhood dreams? The Buddhist Pali Canon says that “All conditioned things are impermanent—when one sees this with wisdom, one turns away from suffering.” Apparently, in this text, the intellectual achievement of understanding gives us a path out of suffering. To arrive at this goal, you don’t have to be the first to plant your flag on what you’ve understood; you just have to get there.

A secular version of this idea might hold, more simply, that some knowledge or understanding is good in itself. Maybe understanding the mechanics of penicillin matters mainly because of what it enabled Fleming and others to do. But understanding truths about the nature of our existence, or even mathematics, could be different. That sort of understanding plausibly is good in its own right, even if someone or something has gotten there first.

Akshay Venkatesh, the Fields Medalist, seems to suggest something like this for the future of math. Perhaps we’ll change our understanding of the discipline, so that it’s not about getting the answers, but instead about human understanding: the artistry of it, perhaps, or the miracle of the special kind of certainty that proof provides.

Philosophy, my subject, might seem an even more promising place for this idea. For some, philosophy is a “way of life”. The aim isn’t necessarily an answer, but constant self-examination for its own sake. If that’s the point, then in the new world of lying flat, there could be a lot of philosophy to do.

I don’t myself accept this way of seeing things. For me, philosophy aims at the truth as much as physics does. But I of course agree that there are some truths that it’s good for us to understand, whether or not we get there first. And there could be other parts of philosophy that survive for us, as well. We need to weigh the arguments for ourselves, and make up our own minds, even if the work of finding new arguments comes to belong to a machine.

I’m willing to believe, and even hope, that future people will pursue knowledge and understanding in this way. But I don’t find, here, much consolation for my personal grief. I was trained to produce knowledge, not merely to acquire it. In the hours when I’m not teaching or preparing to teach, my job is to discover the truth. The values I imbibed—and I told you I was an obedient kid—held that the prize goes to whoever gets there first.

Thinking of this world where all we learn is what the bots have discovered first, I feel sympathy with Lee Sedol, the champion Go player who retired in 2019, three years after his defeat by Google DeepMind’s AlphaGo. For him, losing to AI “in a sense, meant my entire world was collapsing”. “Even if I become the number one, there is an entity that cannot be defeated.” Right or wrong, I would feel the same about my work, in a world with an automated philosophical champ.

But Sedol and I are likely just out-of-date models, with values that a future culture will rightly revise. It’s been almost thirty years since Garry Kasparov lost to IBM’s Deep Blue, but chess has never been more popular. And this doesn’t seem some newfangled twist of the internet age. I know of no human who quit the high jump after the invention of mechanical flight. The Greeks sprinted in their Olympics, though they had, long before, domesticated the horse. Maybe we too will come to value the sport of understanding with our own brains.


Frankenstein, Mary Shelley’s 1818 classic of the creations-kill-creator genre, begins with an expedition to the North Pole. Robert Walton hopes to put himself in the annals of science and claim the Pole for England, when he comes upon Victor Frankenstein, floating in the Arctic Sea. It’s only once Frankenstein warms up that we get into the story everyone knows. Victor hopes he can persuade Walton to turn around, by describing how his own quest for knowledge and glory went south.

Frankenstein doesn’t offer Walton an alternative way of life, a guide for living without grand goals. And I doubt Walton would have been any more personally consoled by the glories of a post-instrumental future than I am. I ended up a philosopher, but I was raised by parents who, maybe like yours, hoped for doctors or lawyers. They saw our purpose in answering real needs, in, as they’d say, contributing to society. Lives devoted to families and friends, fantastic art and games could fill a wondrous future, a world far better than it has ever been. But those aren’t lives that Walton or I, or our parents for that matter, would know how to be proud of. It’s just not the way we were brought up.

For the moment, of course, we’re not exactly short on things to do. The world is full of grisly suffering, sickness, starvation, violence, and need. Frankenstein is often remembered with the moral that thirst for knowledge brings ruination, that scientific curiosity killed the cat. But Victor Frankenstein makes a lot of mistakes other than making his monster. His revulsion at his creation persistently prevents him, almost inexplicably, from feeling the love, or just plain empathy, that any father should. On top of all we have to do to help each other, we have a lot of work to do, in engineering as much as in empathy, if we hope to avoid Frankenstein’s fate.

But even with these tasks before us, my fits of dread are here to stay. I know that the post-instrumental world could be a much better place. But its coming means the death of my culture, the end of my way of life. My fear and grief about this loss won’t disappear because of some choice consolatory words. But I know how to relish the twilight too. I feel lucky to live in a time where people have something to do, and the exploits around me seem more poignant, and more beautiful, in the dusk. We may be some of the last to enjoy this brief spell, before all exploration, all discovery, is done by fully automated sleds.

Quantum! AI! Everything but Trump!

Wednesday, April 30th, 2025
  • Grant Sanderson, of 3blue1brown, has put up a phenomenal YouTube video explaining Grover’s algorithm, and dispelling the fundamental misconception about quantum computing: that QC works simply by “trying all the possibilities in parallel.” Let me not futz around: this video explains, in 36 minutes, what I’ve tried to explain over and over on this blog for 20 years … and it does it better. It’s a masterpiece. Yes, I consulted with Grant for this video (he wanted my intuitions for “why is the answer √N?”), and I even have a cameo at the end of it, but I wish I had made the video. Damn you, Grant!
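Since that misconception comes up so often, here’s a tiny classical statevector simulation of Grover’s algorithm (my own illustrative sketch, not anything from Grant’s video): it stores all N amplitudes explicitly, so there’s no speedup to be had, but it shows concretely that roughly (π/4)√N rounds of the oracle-plus-diffusion step drive the marked item’s probability toward 1.

```python
import numpy as np

def grover_success_prob(n_qubits, marked, iterations):
    """Exact statevector simulation of Grover search over N = 2**n_qubits items."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))    # start in the uniform superposition
    for _ in range(iterations):
        state[marked] *= -1               # oracle: flip the sign of the marked item
        state = 2 * state.mean() - state  # diffusion: inversion about the mean
    return state[marked] ** 2             # probability of measuring the marked item

N = 256
k = int(np.pi / 4 * np.sqrt(N))           # ~ (pi/4) * sqrt(N) = 12 iterations
p = grover_success_prob(8, marked=3, iterations=k)
print(k, p)                               # 12 iterations; p exceeds 0.999
```

After k iterations the marked amplitude is sin((2k+1)·arcsin(1/√N)), which is why ~√N, not ~N, steps suffice, and also why you can’t just “read off” the answer after one parallel query.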
  • The incomparably great, and absurdly prolific, blogger Zvi Mowshowitz and yours truly spend 1 hour and 40 minutes discussing AI existential risk, education, blogging, and more. I end up “interviewing” Zvi, who does the majority of the talking, which is fine by me, as he has many important things to say! (Among them: his searing critique of those K-12 educators who see it as their life’s mission to prevent kids from learning too much too fast—I’ve linked his best piece on this from the header of this blog.) Thanks so much to Rick Coyle for arranging this conversation.
  • Progress in quantum complexity theory! In 2000, John Watrous showed that the Group Non-Membership problem is in the complexity class QMA (Quantum Merlin-Arthur). In other words, if some element g is not contained in a given subgroup H of an exponentially large finite group G, which is specified via a black box, then there’s a short quantum proof that g∉H, with only ~log|G| qubits, which can be verified on a quantum computer in time polynomial in log|G|. This soon raised the question of whether Group Non-Membership could be used to separate QMA from QCMA by oracles, where QCMA (Quantum Classical Merlin-Arthur), defined by Aharonov and Naveh in 2002, is the subclass of QMA where the proof needs to be classical, but the verification procedure can still be quantum. In other words, could Group Non-Membership be the first example of a classical problem for which quantum proofs actually help?

    In 2006, alas, Greg Kuperberg and I showed that the answer was probably “no”: Group Non-Membership has “polynomial QCMA query complexity.” This means that there’s a QCMA protocol for the problem where Arthur makes only polylog|G| quantum queries to the group oracle—albeit possibly with a number of additional quantum computation steps exponential in log|G|! To prove our result, Greg and I needed to make mild use of the Classification of Finite Simple Groups, one of the crowning achievements of 20th-century mathematics (its proof is about 15,000 pages long). We conjectured (but couldn’t prove) that someone else, who knew more about the Classification than we did, could show that Group Non-Membership was simply in QCMA outright.

    Now, after almost 20 years, François Le Gall, Harumichi Nishimura, and Dhara Thakkar have finally proven our conjecture—showing that Group Order, and therefore also Group Non-Membership, are indeed in QCMA. They did indeed need to use the Classification, doing one thing for almost all finite groups covered by the Classification, but a different thing for groups of “Ree type” (whatever those are).

    Interestingly, the Group Membership problem had also been a candidate for separating BQP/qpoly, or quantum polynomial time with polynomial-size quantum advice—my personal favorite complexity class—from BQP/poly, or the same thing with polynomial-size classical advice. And it might conceivably still be! The authors explain to me that their protocol doesn’t put Group Membership (with group G and subgroup H depending only on the input length n) into BQP/poly, the reason being that their short classical witnesses for g∉H depend on both g and H, in contrast to Watrous’s quantum witnesses which depended only on H. So there’s still plenty that’s open here! Actually, for that matter, I don’t know of good evidence that the entire Group Membership problem isn’t in BQP—i.e., that quantum computers can’t just solve the whole thing outright, with no Merlins or witnesses in sight!

    Anyway, huge congratulations to Le Gall, Nishimura, and Thakkar for peeling back our ignorance of these matters a bit further! Reeeeeeeee!
  • Potential big progress in quantum algorithms! Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone (GLM) have given what they present as a quantum algorithm to estimate the determinant of an n×n matrix A, exponentially faster in some contexts than we know how to do it classically.

    [Update (May 5): In the comments, Alessandro Luongo shares a paper where he and Changpeng Shao describe what appears to be essentially the same algorithm back in 2020.]

    The algorithm is closely related to the 2008 HHL (Harrow-Hassidim-Lloyd) quantum algorithm for solving systems of linear equations. Which means that anyone who knows the history of this class of quantum algorithms knows to ask immediately: what’s the fine print? A couple weeks ago, when I visited Harvard and MIT, I had a chance to catch up with Seth Lloyd, so I asked him, and he kindly told me. Firstly, we assume the matrix A is Hermitian and positive semidefinite. Next, we assume A is sparse, and not only that, but there’s a QRAM data structure that points to its nonzero entries, so you don’t need to do Grover search or the like to find them, and can query them in coherent superposition. Finally, we assume that all the eigenvalues of A are at least some constant λ>0. The algorithm then estimates det(A), to multiplicative error ε, in time that scales linearly with log(n), and polynomially with 1/λ and 1/ε.

    Now for the challenge I leave for ambitious readers: is there a classical randomized algorithm to estimate the determinant under the same assumptions and with comparable running time? In other words, can the GLM algorithm be “Ewinized”? Seth didn’t know, and I think it’s a wonderfully crisp open question! On the one hand, if Ewinization is possible, it wouldn’t be the first time that publicity on this blog had led to the brutal murder of a tantalizing quantum speedup. On the other hand … well, maybe not! I also consider it possible that the problem solved by GLM—for exponentially-large, implicitly-specified matrices A—is BQP-complete, as for example was the general problem solved by HHL. This would mean, for example, that one could embed Shor’s factoring algorithm into GLM, and that there’s no hope of dequantizing it unless P=BQP. (Even then, though, just like with the HHL algorithm, we’d still face the question of whether the GLM algorithm was “independently useful,” or whether it merely reproduced quantum speedups that were already known.)

    Anyway, quantum algorithms research lives! So does dequantization research! If basic science in the US is able to continue at all—the thing I promised not to talk about in this post—we’ll have plenty to keep us busy over the next few years.

Jacob Barandes and Me

Tuesday, March 4th, 2025

Please enjoy Harvard’s Jacob Barandes and yours truly duking it out for 2.5 hours on YouTube about the interpretation of quantum mechanics, and specifically Jacob’s recent proposal involving “indivisible stochastic dynamics,” with Curt Jaimungal as moderator. As always, I strongly recommend watching with captions turned on and at 2X speed.

To summarize what I learned in one paragraph: just like in Bohmian mechanics, Jacob wants classical trajectories for particles, constructed so as to reproduce the predictions of QM perfectly. But unlike the Bohmians, Jacob doesn’t want to commit to any particular rule for the evolution of those particle trajectories. He merely asserts, metaphysically, that the trajectories exist. My response was basically, “OK fine, you can do that if you want, but what does it buy me?” We basically went around in circles on that question the entire time, though hopefully with many edutaining digressions.

Despite the lack of resolution, I felt pretty good about the conversation afterward: Jacob got an extensive opportunity to explain his ideas to listeners, along with his detailed beefs against both the Many-Worlds and Copenhagen interpretations. Meanwhile, even though I spoke less than Jacob, I did get some opportunities to do my job, pushing back and asking the kinds of questions I imagined most physicists would ask (even though I’m not a physicist, I felt compelled to represent them!). Jacob and I ended the conversation much as we began: disagreeing on extremely friendly terms.

Then, alas, I read the comments on YouTube and got depressed. Apparently, I’m a hidebound academic elitist who’s failed to grasp Jacob’s revolutionary, paradigm-smashing theory, and who kept arrogantly interrupting with snide, impertinent questions (“OK, but what can I do with this theory that I couldn’t do before?”). And, I learned, the ultimate proof of my smug, ivory-tower malice was to be found in my body language, the way I constantly smiled nervously and rocked back and forth. I couldn’t help but wonder: have these people watched any other YouTube videos that I’m in? I don’t get to pick how I look and sound. I came out of the factory this way.

One commenter opined that I must hate Jacob’s theory only because I’ve poured my life into quantum computing, which depends on superposition, the confusing concept that Jacob has now unmasked as a farce. Presumably it’s beyond this person’s comprehension that Jacob makes exactly the same predictions as I make for what a quantum computer will do when built; Jacob just prefers a different way of talking about it.

I was reminded that optimizing for one’s scientific colleagues is wildly different from optimizing for YouTube engagement. In science, it’s obvious to everyone that the burden of proof is on whoever is presenting the new idea—and that this burden is high, especially with anything as well-trodden and skull-strewn as the foundations of quantum mechanics, albeit not infinitely high. The way the game works is: other people try as hard as they can to shoot the new idea down, so we see how it fares under duress. This is not a sign of contempt for new ideas, but of respect for them.

On YouTube, the situation is precisely reversed. There, anyone perceived as the “mainstream establishment” faces a near-insurmountable burden of proof, while anyone perceived as “renegade” wins by default if they identify any hole whatsoever in mainstream understanding. Crucially, the renegade’s own alternative theories are under no particular burden; indeed, the details of their theories are not even that important or relevant. I don’t want to list science YouTubers who’ve learned to exploit that dynamic masterfully, though I’m told one rhymes with “Frabine Schlossenfelder.” Of course this mirrors what’s happened in the wider world, where RFK Jr. now runs American health policy, Tulsi Gabbard runs the intelligence establishment, and other conspiracy theorists have at last fired all the experts and taken control of our civilization, and are eagerly mashing the buttons to see what happens. I’d take Jacob Barandes, or even Sabine, a billion times over the lunatics in power. But I do hope Jacob turns out to be wrong about Many-Worlds, because it would give me solace to know that there are other branches of the wavefunction where things are a little more sane.

Podcasts!

Wednesday, December 4th, 2024

Update (Dec. 9): For those who still haven’t gotten enough, check out a 1-hour Zoom panel discussion about quantum algorithms, featuring yours truly along with my distinguished colleagues Eddie Farhi, Aram Harrow, and Andrew Childs, moderated by Barry Sanders, as part of the QTML’2024 conference held in Melbourne (although, it being Thanksgiving week, none of the four panelists were actually there in person). Part of the panel devolves into a long debate between me and Eddie about how interesting quantum algorithms are if they don’t achieve speedups over classical algorithms, and whether some quantum algorithms papers mislead people by not clearly addressing the speedup question (you get one guess as to which side I took). I resolved going in to keep my comments as civil and polite as possible—you can judge for yourself how well I succeeded! Thanks very much to Barry and the other QTML organizers for making this happen.


Do you like watching me spout about AI alignment, watermarking, my time at OpenAI, the P versus NP problem, quantum computing, consciousness, Penrose’s views on physics and uncomputability, university culture, wokeness, free speech, my academic trajectory, and much more, despite my slightly spastic demeanor and my many verbal infelicities? Then holy crap are you in luck today! Here’s 2.5 hours of me talking to former professional poker players (and now wonderful Austin-based friends) Liv Boeree and her husband Igor Kurganov about all of those topics. (Or 1.25 hours if you watch at 2x speed, as I strongly recommend.)

But that’s not all! Here I am talking to Harvard’s Hrvoje Kukina, in a much shorter (45-minute) podcast focused on quantum computing, cosmological bounds on information processing, and the idea of the universe as a computer:

Last but not least, here I am in an hour-long podcast (this one audio-only) with longtime friend Kelly Weinersmith and her co-host Daniel Whiteson, talking about quantum computing.

Enjoy!

My Nutty, Extremist Beliefs

Sunday, October 13th, 2024

In nearly twenty years of blogging, I’ve unfortunately felt more and more isolated and embattled. It now feels like anything I post earns severe blowback, from ridicule on Twitter, to pseudonymous comment trolls, to scary and aggressive email bullying campaigns. Reflecting on this, though, I came to see that such strong reactions are an understandable response to my extremist stances. When your beliefs smash the Overton Window into tiny shards like mine do, what do you expect? Just consider some of the intransigent, hard-line stances I’ve taken here on Shtetl-Optimized:

(1) US politics. I’m terrified of right-wing authoritarian populists and their threat to the Enlightenment. For that and many other reasons, I vote straight-ticket Democrat, donate to Democratic campaigns, and encourage everyone else to do likewise. But I also wish my fellow Democrats would rein in the woke stuff, stand up more courageously to the world’s autocrats, and study more economics, so they understand why rent control, price caps, and other harebrained interventions will always fail.

(2) Quantum computing. I’m excited about the prospects of QC, so much so that I’ve devoted most of my career to that field. But I also think many of QC’s commercial applications have been wildly oversold to investors, funding agencies, and the press, and I haven’t been afraid to say so.

(3) AI. I think the spectacular progress of AI over the past few years raises scary questions about where we’re headed as a species.  I’m neither in the camp that says “we’ll almost certainly die unless we shut down AI research,” nor the camp that says “the good guys need to race full-speed ahead to get AGI before the bad guys get it.” I’d like us to proceed in AI research with caution and guardrails and the best interests of humanity in mind, rather than the commercial interests of particular companies.

(4) Climate change. I think anthropogenic climate change is 100% real and one of the most urgent problems facing humanity, and those who deny this are being dishonest or willfully obtuse.  But because I think that, I also think it’s way past time to explore technological solutions like modular nuclear reactors, carbon capture, and geoengineering. I think we can’t virtue-signal or kumbaya our way out of the climate crisis.

(5) Feminism and dating. I think the emancipation of women is one of the modern world’s greatest triumphs.  I reserve a special hatred for misogynistic, bullying men. But I also believe, from experience, that many sensitive, nerdy guys severely overcorrected on feminist messaging, to the point that they became terrified of the tiniest bit of assertiveness or initiative in heterosexual courtship. I think this terror has led millions of them to become bitter “incels.”  I want to figure out ways to disrupt the incel pipeline, by teaching shy nerdy guys to have healthy, confident dating lives, without thereby giving asshole guys license to be even bigger assholes.

(6) Israel/Palestine. I’m passionately in favor of Israel’s continued existence as a Jewish state, without which my wife’s family and many of my friends’ and colleagues’ families would have been exterminated. However, I also despise Bibi and the messianic settler movement to which he’s beholden. I pray for a two-state solution where Israelis and Palestinians will coexist in peace, free from their respective extremists.

(7) Platonism. I think that certain mathematical questions, like the Axiom of Choice or the Continuum Hypothesis, might not have any Platonic truth-value, there being no fact of the matter beyond what can be proven from various systems of axioms. But I also think, with Gödel, that statements of elementary arithmetic, like the Goldbach Conjecture or P≠NP, are just Platonically true or false independent of any axiom system.

(8) Science and religion. As a secular rationalist, I’m acutely aware that no ancient religion can be “true,” in the sense believed by either the ancients or modern fundamentalists. Still, the older I’ve gotten, the more I’ve come to see religions as vast storehouses containing (among much else) millennia of accumulated wisdom about how humans can or should live. As in the parable of Chesterton’s Fence, I think this wisdom is often far from obvious and nearly impossible to derive from first principles. So I think that, at the least, secularists will need to figure out their own long-term methods to encourage many of the same things that religion once did—such as stable families, childbirth, self-sacrifice and courage in defending one’s community, and credible game-theoretic commitments to keeping promises and various other behaviors.

(9) Foreign policy and immigration. I’d like the US to stand more courageously against evil regimes, such as those of China, Russia, and Iran. At the same time, I’d like the US to open our gates much wider to students, scientists, and dissidents from those nations who seek freedom in the West. I think our refusal to do enough of this is a world-historic self-own.

(10) Academia vs. industry. I think both have advantages and disadvantages for people in CS and other technical fields. At their best, they complement each other. When advising a student which path to pursue, I try to find out all I can about the student’s goals and personality.

(11) Population ethics. I’m worried about how the earth will support 9 or 10 billion people with first-world living standards, which is part of why I’d like career opportunities for women, girls’ education, contraception, and (early-term) abortion to become widely available everywhere on earth. All the same, I’m not an antinatalist. I think raising one or more children in a loving home should generally be celebrated as a positive contribution to the world.

(12) The mind-body problem. I think it’s possible that there’s something profound we don’t yet understand about consciousness and its relation to the physical world. At the same time, I think the burden is clearly on the mind-body dualists to articulate what that something might be, and how to reconcile it with the known laws of physics. I admire the audacity of Roger Penrose in tackling this question head-on, but I don’t think his solution works.

(13) COVID response. I think the countries that did best tended to be those that had some coherent strategy—whether that was “let the virus rip, keep schools open, quarantine only the old and sick,” or “aggressively quarantine everyone and wait for a vaccine.” I think countries torn between these strategies, like the US, tended to get the worst of all worlds. On the other hand, I think the US did one huge thing right, which was greatly to accelerate (by historical standards) the testing and distribution of the mRNA vaccines. For the sake of the millions who died and the billions who had their lives interrupted, I only wish we’d rushed the vaccines much more. We ought now to be spending trillions on a vaccine pipeline that’s ready to roll within weeks as soon as the next pandemic hits.

(14) P versus NP. From decades of intuition in math and theoretical computer science, I think we can be fairly confident of P≠NP—but I’d “only” give it, say, 97% odds. Here as elsewhere, we should be open to the possibility of world-changing surprises.

(15) Interpretation of QM. I get really annoyed by bad arguments against the Everett interpretation, which (contrary to a popular misconception) I understand to result from scientifically conservative choices. But I’m also not an Everettian diehard. I think that, if you push questions like “but is anyone home in the other branches?” hard enough, you arrive at questions about personal identity and consciousness that were profoundly confusing even before quantum mechanics. I hope we someday learn something new that clarifies the situation.

Anyway, with extremist, uncompromising views like those, is it any surprise that I get pilloried and denounced so often?

All the same, I sometimes ask myself: what was the point of becoming a professor, seeking and earning the hallowed protections of tenure, if I can’t then freely express radical, unbalanced, batshit-crazy convictions like the ones in this post?