Archive for the ‘The Fate of Humanity’ Category

Robin Hanson and I discuss the AI future

Wednesday, May 10th, 2023

That’s all. No real post this morning, just an hour-long podcast on YouTube featuring two decades-long veterans of the nerd blogosphere, Robin Hanson and yours truly, talking about AI, trying to articulate various possibilities outside the Yudkowskyan doom scenario. The podcast was Robin’s idea. Hope you enjoy, and looking forward to your comments!

Update: Oh, and another new podcast is up, with me and Sebastian Hassinger of Amazon/AWS! Audio only. Mostly quantum computing but with a little AI thrown in.

Update: Yet another new podcast, with Daniel Bashir of The Gradient. Daniel titled it “Against AI Doomerism,” but it covers a bunch of topics (and I’d say my views are a bit more complicated than “anti-doomerist”…).

AI and Aaronson’s Law of Dark Irony

Thursday, May 4th, 2023

The major developments in human history are always steeped in dark ironies. Yes, that’s my Law of Dark Irony, the whole thing.

I don’t know why it’s true, but it certainly seems to be. Taking WWII as the archetypal example, let’s enumerate just the more obvious ones:

  • After the carnage of WWI, the world’s most sensitive and thoughtful people (many of them) learned the lesson that they should oppose war at any cost. This attitude let Germany rearm and set the stage for WWII.
  • Hitler, who was neither tall nor blond, wished to establish the worldwide domination of tall, blond Aryans … and do so via an alliance with the Japanese.
  • The Nazis touted the dream of eugenically perfecting the human race, then perpetrated a genocide against a tiny group that had produced Einstein, von Neumann, Wigner, Ulam, and Tarski.
  • The Jews were murdered using a chemical—Zyklon B—developed in part by the Jewish chemist Fritz Haber.
  • The Allied force that made the greatest sacrifice in lives to defeat Hitler was Stalin’s USSR, another of history’s most murderous and horrifying regimes.
  • The man who rallied the free world to defeat Nazism, Winston Churchill, was himself a racist colonialist, whose views would be (and regularly are) denounced as “Nazi” on modern college campuses.
  • The WWII legacy that would go on to threaten humanity’s existence—the Bomb—was created in what the scientists believed was a desperate race to save humanity. Then Hitler was defeated before the Bomb was ready, and it turned out the Nazis were never even close to building their own Bomb, and the Bomb was used instead against Japan.

When I think about the scenarios where superintelligent AI destroys the world, they rarely seem to do enough justice to the Law of Dark Irony. It’s like: OK, AI is created to serve humanity, and instead it turns on humanity and destroys it. Great, that’s one dark irony. One. What other dark ironies could there be? How about:

  • For decades, the Yudkowskyans warned about the dangers of superintelligence. So far, by all accounts, the great practical effect of these warnings has been to inspire the founding of both DeepMind and OpenAI, the entities that Yudkowskyans believe are locked into a race to realize those dangers.
  • Maybe AIs will displace humans … and they’ll deserve to, since they won’t be quite as wretched and cruel as we are. (This is basically the plot of Westworld, or at least of its first couple seasons, which Dana and I are now belatedly watching.)
  • Maybe the world will get destroyed by what Yudkowsky calls a “pivotal act”: an act meant to safeguard the world from takeover from an unaligned AGI, for example by taking it over with an aligned AGI first. (I seriously worry about this; it’s a pretty obvious one.)
  • Maybe AI will get the idea to take over the world, but only because it’s been trained on generations of science fiction and decades of Internet discussion worrying about the possibility of AI taking over the world. (I’m far from the first to notice this possibility.)
  • Maybe AI will indeed destroy the world, but it will do so “by mistake,” while trying to save the world, or by taking a calculated gamble to save the world that fails. (A commenter on my last post brought this one up.)
  • Maybe humanity will successfully coordinate to pause AGI development, and then promptly be destroyed by something else—runaway climate change, an accidental nuclear exchange—that the AGI, had it been created, would’ve prevented. (This, of course, would be directly analogous to one of the great dark ironies of all time: the one where decades of antinuclear activism, intended to save the planet, has instead doomed us to destroy the earth by oil and coal.)

Readers: which other possible dark ironies have I missed?

Five Worlds of AI (a joint post with Boaz Barak)

Thursday, April 27th, 2023

Artificial intelligence has made incredible progress in the last decade, but in one crucial aspect, it still lags behind the theoretical computer science of the 1990s: namely, there is no essay describing five potential worlds that we could live in and giving each one of them whimsical names.  In other words, no one has done for AI what Russell Impagliazzo did for complexity theory in 1995, when he defined the five worlds Algorithmica, Heuristica, Pessiland, Minicrypt, and Cryptomania, corresponding to five possible resolutions of the P vs. NP problem along with the central unsolved problems of cryptography.

In this blog post, we—Scott and Boaz—aim to remedy this gap. Specifically, we consider 5 possible scenarios for how AI will evolve in the future.  (Incidentally, it was at a 2009 workshop devoted to Impagliazzo’s five worlds co-organized by Boaz that Scott met his now wife, complexity theorist Dana Moshkovitz.  We hope civilization will continue for long enough that someone in the future could meet their soulmate, or neuron-mate, at a future workshop about our five worlds.)

Like in Impagliazzo’s 1995 paper on the five potential worlds of the difficulty of NP problems, we will not try to be exhaustive but rather concentrate on extreme cases.  It’s possible that we’ll end up in a mixture of worlds or a situation not described by any of the worlds.  Indeed, one crucial difference between our setting and Impagliazzo’s, is that in the complexity case, the worlds corresponded to concrete (and mutually exclusive) mathematical conjectures.  So in some sense, the question wasn’t “which world will we live in?” but “which world have we Platonically always lived in, without knowing it?”  In contrast, the impact of AI will be a complex mix of mathematical bounds, computational capabilities, human discoveries, and social and legal issues. Hence, the worlds we describe depend on more than just the fundamental capabilities and limitations of artificial intelligence, and humanity could also shift from one of these worlds to another over time.

Without further ado, we name our five worlds “AI-Fizzle,” “Futurama,” ”AI-Dystopia,” “Singularia,” and “Paperclipalypse.”  In this essay, we don’t try to assign probabilities to these scenarios; we merely sketch their assumptions and technical and social consequences. We hope that by making assumptions explicit, we can help ground the debate on the various risks around AI.

AI-Fizzle. In this scenario, AI “runs out of steam” fairly soon. AI still has a significant impact on the world (so it’s not the same as a “cryptocurrency fizzle”), but relative to current expectations, this would be considered a disappointment.  Rather than the industrial or computer revolutions, AI might be compared in this case to nuclear power: people were initially thrilled about the seemingly limitless potential, but decades later, that potential remains mostly unrealized.  With nuclear power, though, many would argue that the potential went unrealized mostly for sociopolitical rather than technical reasons.  Could AI also fizzle by political fiat?

Regardless of the answer, another possibility is that costs (in data and computation) scale up so rapidly as a function of performance and reliability that AI is not cost-effective to apply in many domains. That is, it could be that for most jobs, humans will still be more reliable and energy-efficient (we don’t normally think of low wattage as being key to human specialness, but it might turn out that way!).  So, like nuclear fusion, an AI which yields dramatically more value than the resources needed to build and deploy it might always remain a couple of decades in the future.  In this scenario, AI would replace and enhance some fraction of human jobs and improve productivity, but the 21st century would not be the “century of AI,” and AI’s impact on society would be limited for both good and bad.

Futurama. In this scenario, AI unleashes a revolution that’s entirely comparable to the scientific, industrial, or information revolutions (but “merely” those).  AI systems grow significantly in capabilities and perform many of the tasks currently performed by human experts at a small fraction of the cost, in some domains superhumanly.  However, AI systems are still used as tools by humans, and except for a few fringe thinkers, no one treats them as sentient.  AI easily passes the Turing test, can prove hard theorems, and can generate entertaining content (as well as deepfakes). But humanity gets used to that, just like we got used to computers creaming us in chess, translating text, and generating special effects in movies.  Most people no more feel inferior to their AI than they feel inferior to their car because it runs faster.  In this scenario, people will likely anthropomorphize AI less over time (as happened with digital computers themselves).  In “Futurama,” AI will, like any revolutionary technology, be used for both good and bad.  But as with prior major technological revolutions, on the whole, AI will have a large positive impact on humanity. AI will be used to reduce poverty and ensure that more of humanity has access to food, healthcare, education, and economic opportunities. In “Futurama,” AI systems will sometimes cause harm, but the vast majority of these failures will be due to human negligence or maliciousness.  Some AI systems might be so complex that it would be best to model them as potentially behaving  “adversarially,” and part of the practice of deploying AIs responsibly would be to ensure an “operating envelope” that limits their potential damage even under adversarial failures. 

AI-Dystopia. The technical assumptions of “AI-Dystopia” are similar to those of “Futurama,” but the upshot could hardly be more different.  Here, again, AI unleashes a revolution on the scale of the industrial or computer revolutions, but the change is markedly for the worse.  AI greatly increases the scale of surveillance by government and private corporations.  It causes massive job losses while enriching a tiny elite.  It entrenches society’s existing inequalities and biases.  And it takes away a central tool against oppression: namely, the ability of humans to refuse or subvert orders.

Interestingly, it’s even possible that the same future could be characterized as Futurama by some people and as AI-Dystopia by others–just like how some people emphasize how our current technological civilization has lifted billions out of poverty into a standard of living unprecedented in human history, while others focus on the still existing (and in some cases rising) inequalities and suffering, and consider it a neoliberal capitalist dystopia.

Singularia.  Here AI breaks out of the current paradigm, where increasing capabilities require ever-growing resources of data and computation, and no longer needs human data or human-provided hardware and energy to become stronger at an ever-increasing pace.  AIs improve their own intellectual capabilities, including by developing new science, and (whether by deliberate design or happenstance) they act as goal-oriented agents in the physical world.  They can effectively be thought of as an alien civilization–or perhaps as a new species, which is to us as we were to Homo erectus.

Fortunately, though (and again, whether by careful design or just as a byproduct of their human origins), the AIs act to us like benevolent gods and lead us to an “AI utopia.”  They solve our material problems for us, giving us unlimited abundance and presumably virtual-reality adventures of our choosing.  (Though maybe, as in The Matrix, the AIs will discover that humans need some conflict, and we will all live in a simulation of 2020’s Twitter, constantly dunking on one another…) 

Paperclipalypse.  In “Paperclipalypse” or “AI Doom,” we again think of future AIs as a superintelligent “alien race” that doesn’t need humanity for its own development.  Here, though, the AIs are either actively opposed to human existence or else indifferent to it in a way that causes our extinction as a byproduct.  In this scenario, AIs do not develop a notion of morality comparable to ours or even a notion that keeping a diversity of species and ensuring humans don’t go extinct might be useful to them in the long run.  Rather, the interaction between AI and Homo sapiens ends about the same way that the interaction between Homo sapiens and Neanderthals ended. 

In fact, the canonical depictions of such a scenario imagine an interaction that is much more abrupt than our brush with the Neanderthals. The idea is that, perhaps because they originated through some optimization procedure, AI systems will have some strong but weirdly-specific goal (a la “maximizing paperclips”), for which the continued existence of humans is, at best, a hindrance.  So the AIs quickly play out the scenarios and, in a matter of milliseconds, decide that the optimal solution is to kill all humans, taking a few extra milliseconds to make a plan for that and execute it.  If conditions are not yet ripe for executing their plan, the AIs pretend to be docile tools, as in the “Futurama” scenario, waiting for the right time to strike.  In this scenario, self-improvement happens so quickly that humans might not even notice it.  There need be no intermediate stage in which an AI “merely” kills a few thousand humans, raising 9/11-type alarm bells.

Regulations. The practical impact of AI regulations depends, in large part, on which scenarios we consider most likely.  Regulation is not terribly important in the “AI Fizzle” scenario where AI, well, fizzles.  In “Futurama,” regulations would be aimed at ensuring that on balance, AI is used more for good than for bad, and that the world doesn’t devolve into “AI Dystopia.”  The latter goal requires anti-trust and open-science regulations to ensure that power is not concentrated in a few corporations or governments.  Thus, regulations are needed to democratize AI development more than to restrict it.  This doesn’t mean that AI would be completely unregulated.  It might be treated somewhat similarly to drugs—something that can have complex effects and needs to undergo trials before mass deployment.  There would also be regulations aimed at reducing the chance of “bad actors” (whether other nations or individuals) getting access to cutting-edge AIs, but probably the bulk of the effort would be at increasing the chance of thwarting them (e.g., using AI to detect AI-generated misinformation, or using AI to harden systems against AI-aided hackers).  This is similar to how most academic experts believe cryptography should be regulated (and how it is largely regulated these days in most democratic countries): it’s a technology that can be used for both good and bad, but the cost of restricting its access to regular citizens outweighs the benefits.  However, as we do with security exploits today, we might restrict or delay public releases of AI systems to some extent.

To whatever extent we foresee “Singularia” or “Paperclipalypse,” however, regulations play a completely different role.  If we knew we were headed for “Singularia,” then presumably regulations would be superfluous, except perhaps to try to accelerate the development of AIs!  Meanwhile, if one accepts the assumptions of “Paperclipalypse,” any regulations other than the most draconian might be futile.  If, in the near future, almost anyone will be able to spend a few billion dollars to build a recursively self-improving AI that might turn into a superintelligent world-destroying agent, and moreover (unlike with nuclear weapons) they won’t need exotic materials to do so, then it’s hard to see how to forestall the apocalypse, except perhaps via a worldwide, militarily enforced agreement to “shut it all down,” as Eliezer Yudkowsky indeed now explicitly advocates.  “Ordinary” regulations could, at best, delay the end by a short amount–given the current pace of AI advances, perhaps not more than a few years.  Thus, regardless of how likely one considers this scenario, one might want to focus more on the other scenarios for methodological reasons alone!

AI safety: what should actually be done now?

Sunday, April 16th, 2023

So, I recorded a 2.5-hour-long podcast with Daniel Filan about “reform AI alignment,” and the work I’ve been doing this year at OpenAI.  The end result is … well, probably closer to my current views on this subject than anything else I’ve said or written! Listen here or read the transcript here. Here’s Daniel’s abstract:

How should we scientifically think about the impact of AI on human civilization, and whether or not it will doom us all? In this episode, I speak with Scott Aaronson about his views on how to make progress in AI alignment, as well as his work on watermarking the output of language models, and how he moved from a background in quantum complexity theory to working on AI.

Thanks so much to Daniel for making this podcast happen.


Maybe I should make a broader comment, though.

From my recent posts, and from my declining to sign the six-month AI pause letter (even though I sympathize with many of its goals), many people seem to have goten the impression that I’m not worried about AI, or that (ironically, given my job this year) I’m basically in the “full speed ahead” camp.

This is not true.  In reality, I’m full of worry. The issue is just that, in this case, I’m also full of metaworry—i.e., the worry that whichever things I worry about will turn out to have been the wrong things.

Even if we look at the pause letter, or more generally, at the people who wish to slow down AI research, we find that they wildly disagree among themselves about why a slowdown is called for.  One faction says that AI needs to be paused because it will spread misinformation and entrench social biases … or (this part is said aloud surprisingly often) because progress is being led by, you know, like, totally gross capitalistic Silicon Valley nerdbros, and might enhance those nerds’ power.

A second faction, one that contains many of the gross nerdbros, is worried about AI because it might become superintelligent, recursively improve itself, and destroy all life on earth while optimizing for some alien goal. Hopefully both factions agree that this scenario would be bad, so that the only disagreement is about its likelihood.

As I’ll never tire of pointing out, the two factions seem to have been converging on the same conclusion—namely, AI progress urgently needs to be slowed down—even while they sharply reject each other’s rationales and indeed are barely on speaking terms with each other.

OK, you might object, but that’s just sociology. Why shouldn’t a rational person worry about near-term AI risk and long-term AI risk? Why shouldn’t the ethics people focused on the former and the alignment people focused on the latter strategically join forces? Such a hybrid Frankenpause is, it seems to me, precisely what the pause letter was trying to engineer. Alas, the result was that, while a few people closer to the AI ethics camp (like Gary Marcus and Ernest Davis) agreed to sign, many others (Emily Bender, Timnit Gebru, Arvind Narayanan…) pointedly declined, because—as they explained on social media—to do so would be to legitimate the gross nerds and their sci-fi fantasies.

From my perspective, the problem is this:

  1. Under the ethics people’s assumptions, I don’t see that an AI pause is called for. Or rather, while I understand the arguments, the same arguments would seem to have justified stopping the development of the printing press, aviation, radio, computers, the Internet, and virtually every other nascent technology, until committees of academic experts had decided that the positive social effects would outweigh the negative ones, which might’ve been never. The trouble is, well, how do you even study the social effects of a new technology, before society starts using it? Aren’t we mostly happy that technological pioneers went ahead with all the previously-mentioned things, and dealt with the problems later as they arose? But preventing the widespread societal adoption of GPT-like tools seems to be what the AI ethics camp really wants, much more than preventing further scaling for scientific research. I reject any anti-AI argument that could be generalized and transplanted backwards to produce an argument against moving forward with, let’s say, agriculture or metallurgy.
  2. Under the alignment people’s assumptions, I do see that an AI pause is urgently called for—but I’m not yet on board with their assumptions. The kind of relentlessly optimizing AI that could form the intention to doom humanity, still seems very different to me from the kind of AI that’s astonished the world these past couple years, to the point that it’s not obvious how much progress in the latter should increase our terror about the former.  Even Eliezer Yudkowsky agrees that GPT-4 doesn’t seem too dangerous in itself. And an AI that was only slightly dangerous could presumably be recognized as such before it was too late. So everything hinges on the conjecture that, in going from GPT-n to GPT-(n+1), there might be a “sharp turn” where an existential risk to humanity very suddenly emerged, with or without the cooperation of bad humans who used GPT-(n+1) for nefarious purposes. I still don’t know how to think about the likelihood of this risk. The empirical case for it is likely to be inadequate, by its proponents’ own admission. I admired how my friend Sarah Constantin thought through the issues in her recent essay Why I Am Not An AI Doomer—but on the other hand, as others have pointed out, Sarah ends up conceding a staggering fraction of the doomers’ case in the course of arguing against the rest of it. What today passes for an “anti-doomer” might’ve been called a “doomer” just a few years ago.

In short, one could say, the ethics and alignment communities are both building up cases for pausing AI progress, working at it from opposite ends, but their efforts haven’t yet met at any single argument that I wholeheartedly endorse.

This might just be a question of timing. If AI is going become existentially dangerous, then I definitely want global coordination well before that happens. And while it seems unlikely to me that we’re anywhere near the existential danger zone yet, the pace of progress over the past few years has been so astounding, and has upended so many previous confident assumptions, that caution seems well-advised.

But is a pause the right action? How should we compare the risk of acceleration now to the risk of a so-called “overhang,” where capabilities might skyrocket even faster in the future, faster than society can react or adapt, because of a previous pause? Also, would a pause even force OpenAI to change its plans from what they would’ve been otherwise? (If I knew, I’d be prohibited from telling, which makes it convenient that I don’t!) Or would the main purpose be symbolic, just to show that the main AI labs can coordinate on something?

If so, then one striking aspect of the pause letter is that it was written without consultation with the main entities who would need to agree to any such pause (OpenAI, DeepMind, Google, …). Another striking aspect is that it applies only to systems “more powerful than” GPT-4. There are two problems here. Firstly, the concept “more powerful than” isn’t well-defined: presumably it rules out more parameters and more gradient descent, but what about more reinforcement learning or tuning of hyperparameters? Secondly, to whatever extent it makes sense, it seems specifically tailored to tie the hands of OpenAI, while giving OpenAI’s competitors a chance to catch up to OpenAI. The fact that the most famous signatory is Elon Musk, who’s now trying to build an “anti-woke” chatbot to compete against GPT, doesn’t help.


So, if not this pause letter, what do I think ought to happen instead?

I’ve been thinking about it a lot, and the most important thing I can come up with is: clear articulation of fire alarms, red lines, whatever you want to call them, along with what our responses to those fire alarms should be. Two of my previous fire alarms were the first use of chatbots for academic cheating, and the first depressed person who commits suicide after interacting with a chatbot. Both of those have now happened. Here are some others:

  • A chatbot is used to impersonate someone for fraudulent purposes, by imitating his or her writing style.
  • A chatbot helps a hacker find security vulnerabilities in code that are then actually exploited.
  • A child dies because his or her parents follow wrong chatbot-supplied medical advice.
  • Russian or Iranian or Chinese intelligence, or some other such organization, uses a chatbot to mass-manufacture disinformation and propaganda.
  • A chatbot helps a terrorist manufacture weapons that are used in a terrorist attack.

I’m extremely curious: which fire alarms are you most worried about? How do you think the AI companies and governments should respond if and when they happen?

In my view, articulating fire alarms actually provides multiple benefits. Not only will it give us a playbook if and when any of the bad events happen, it will also give us clear targets to try to forecast. If we’ve decided that behavior X is unacceptable, and if extrapolating the performance of GPT-1 through GPT-n on various metrics leads to the prediction that GPT-(n+1) will be capable of X, then we suddenly have a clear, legible case for delaying the release of GPT-(n+1).

Or—and this is yet a third benefit—we have something clear on which to test GPT-(n+1), in “sandboxes,” before releasing it. I think the kinds of safety evals that ARC (the Alignment Research Center) did on GPT-4 before it was released—for example, testing its ability to deceive Mechanical Turkers—were an extremely important prototype, something that we’ll need a lot more of before the release of future language models. But all of society should have a say on what, specifically, are the dangerous behaviors that these evals are checking for.

So let’s get started on that! Readers: which unaligned behaviors would you like GPT-5 to be tested for prior to its release? Bonus points for plausibility and non-obviousness.

Quips are what I’ve got

Saturday, April 1st, 2023

In the comments on my last post—the one about the open letter calling for a six-month pause on AI scaling—a commenter named Hans Holander berates me over and over, as have others before him, for my failure to see that GPT is just a hoax and scam with no “true” intelligence. Below is my reply: probably one of the most revealing things I’ve ever written (which is saying something).


The great irony here is that if you’re right—and you’re obviously 3000% confident that you’re right—then by my lights, there is no reason whatsoever to pause the scaling of Large Language Models, as your fellow LLM skeptics have urged. If LLMs are mere “stochastic parrots,” and if further scaling will do nothing to alleviate their parroticity, then there’d seem to be little danger that they’ll ever form grounded plans to take over the world, or even help evil people form such plans. And soon it will be clear to everyone that LLMs are just a gigantic boondoggle that don’t help them solve their problems, and the entire direction will be abandoned. All a six-month pause would accomplish would be to delay this much-needed reckoning.

More broadly, though, do you see the problem with “just following your conscience” in this subject? There’s no way to operationalize “follow your conscience,” except “do the thing that will make the highest moral authorities that you recognize not be disappointed in you, not consider you a coward or a monster or a failure.” But what if there’s no agreement among the highest moral authorities that you recognize, or the people who set themselves up as the moral authorities? What if people will call you a coward or a monster or a failure, will even do so right in your comment section, regardless of what you choose?

This, of course, is hardly the first time in my life I’ve been in this situation, condemned for X and equally condemned for not(X). I’ve never known how to navigate it. When presented with diametrically opposed views about morality or the future of civilization, all confidently held by people who I consider smart and grounded, I can switch back and forth between the perspectives like with the Necker cube or the duck-rabbit. But I don’t have any confident worldview of my own. What I have are mostly quips, and jokes, and metaphors, and realizing when one thing contradicts a different thing, and lectures (many people do seem to like my lectures) where I lay out all the different considerations, and sometimes I also have neat little technical observations that occasionally even get dignified with the name of “theorems” and published in papers.

A quarter-century ago, though I remember like yesterday, I was an undergrad at Cornell, and belonged to a scholarship house called Telluride, where house-members had responsibilities for upkeep and governance and whatnot and would write periodic reviews of each other’s performance. And I once got a scathing performance review, which took me to task for shirking my housework, and bringing my problem sets to the house meetings. (These were meetings where the great issues of the day were debated—like whether or not to allocate $50 for fixing a light, and how guilty to feel over hiring maintenance workers and thereby participating in capitalist exploitation.) And then there was this: “Scott’s contributions to house meetings are often limited to clever quips that, while amusing, do not advance the meeting agenda at all.”

I’m not like Eliezer Yudkowsky, nor am I even like the anti-Eliezer people. I don’t, in the end, have any belief system at all with which to decide questions of a global or even cosmic magnitude, like whether the progress of AI should be paused or not. Mostly all I’ve got are the quips and the jokes, and the trying to do right on the smaller questions.


And anyone who doesn’t like this post can consider it an April Fools (hey, Eliezer did the same last year!).

If AI scaling is to be shut down, let it be for a coherent reason

Thursday, March 30th, 2023

There’s now an open letter arguing that the world should impose a six-month moratorium on the further scaling of AI models such as GPT, by government fiat if necessary, to give AI safety and interpretability research a bit more time to catch up. The letter is signed by many of my friends and colleagues, many who probably agree with each other about little else, over a thousand people including Elon Musk, Steve Wozniak, Andrew Yang, Jaan Tallinn, Stuart Russell, Max Tegmark, Yuval Noah Harari, Ernie Davis, Gary Marcus, and Yoshua Bengio.

Meanwhile, Eliezer Yudkowsky published a piece in TIME arguing that the open letter doesn’t go nearly far enough, and that AI scaling needs to be shut down entirely until the AI alignment problem is solved—with the shutdown enforced by military strikes on GPU farms if needed, and treated as more important than preventing nuclear war.

Readers, as they do, asked me to respond. Alright, alright. While the open letter is presumably targeted at OpenAI more than any other entity, and while I’ve been spending the year at OpenAI to work on theoretical foundations of AI safety, I’m going to answer strictly for myself.

Given the jaw-droppingly spectacular abilities of GPT-4—e.g., acing the Advanced Placement biology and macroeconomics exams, correctly manipulating images (via their source code) without having been programmed for anything of the kind, etc. etc.—the idea that AI now needs to be treated with extreme caution strikes me as far from absurd. I don’t even dismiss the possibility that advanced AI could eventually require the same sorts of safeguards as nuclear weapons.

Furthermore, people might be surprised about the diversity of opinion about these issues within OpenAI, by how many there have discussed or even forcefully advocated slowing down. And there’s a world not so far from this one where I, too, get behind a pause. For example, one actual major human tragedy caused by a generative AI model might suffice to push me over the edge. (What would push you over the edge, if you’re not already over?)

Before I join the slowdown brigade, though, I have (this being the week before Passover) four questions for the signatories:

  1. Would your rationale for this pause have applied to basically any nascent technology — the printing press, radio, airplanes, the Internet? “We don’t yet know the implications, but there’s an excellent chance terrible people will misuse this, ergo the only responsible choice is to pause until we’re confident that they won’t”?
  2. Why six months? Why not six weeks or six years?
  3. When, by your lights, would we ever know that it was safe to resume scaling AI—or at least that the risks of pausing exceeded the risks of scaling? Why won’t the precautionary principle continue for apply forever?
  4. Were you, until approximately last week, ridiculing GPT as unimpressive, a stochastic parrot, lacking common sense, piffle, a scam, etc. — before turning around and declaring that it could be existentially dangerous? How can you have it both ways? If, as sometimes claimed, “GPT-4 is dangerous not because it’s too smart but because it’s too stupid,” then shouldn’t GPT-5 be smarter and therefore safer? Thus, shouldn’t we keep scaling AI as quickly as we can … for safety reasons? If, on the other hand, the problem is that GPT-4 is too smart, then why can’t you bring yourself to say so?

With the “why six months?” question, I confess that I was deeply confused, until I heard a dear friend and colleague in academic AI, one who’s long been skeptical of AI-doom scenarios, explain why he signed the open letter. He said: look, we all started writing research papers about the safety issues with ChatGPT; then our work became obsolete when OpenAI released GPT-4 just a few months later. So now we’re writing papers about GPT-4. Will we again have to throw our work away when OpenAI releases GPT-5? I realized that, while six months might not suffice to save human civilization, it’s just enough for the more immediate concern of getting papers into academic AI conferences.

Look: while I’ve spent multiple posts explaining how I part ways from the Orthodox Yudkowskyan position, I do find that position intellectually consistent, with conclusions that follow neatly from premises. The Orthodox, in particular, can straightforwardly answer all four of my questions above:

  1. AI is manifestly different from any other technology humans have ever created, because it could become to us as we are to orangutans;
  2. a six-month pause is very far from sufficient but is better than no pause;
  3. we’ll know that it’s safe to scale when (and only when) we understand our AIs so deeply that we can mathematically explain why they won’t do anything bad; and
  4. GPT-4 is extremely impressive—that’s why it’s so terrifying!

On the other hand, I’m deeply confused by the people who signed the open letter, even though they continue to downplay or even ridicule GPT’s abilities, as well as the “sensationalist” predictions of an AI apocalypse. I’d feel less confused if such people came out and argued explicitly: “yes, we should also have paused the rapid improvement of printing presses to avert Europe’s religious wars. Yes, we should’ve paused the scaling of radio transmitters to prevent the rise of Hitler. Yes, we should’ve paused the race for ever-faster home Internet to prevent the election of Donald Trump. And yes, we should’ve trusted our governments to manage these pauses, to foresee brand-new technologies’ likely harms and take appropriate actions to mitigate them.”

Absent such an argument, I come back to the question of whether generative AI actually poses a near-term risk that’s totally unparalleled in human history, or perhaps approximated only by the risk of nuclear weapons. After sharing an email from his partner, Eliezer rather movingly writes:

When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she’s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.

Look, I too have a 10-year-old daughter and a 6-year-old son, and I wish to see them grow up. But the causal story that starts with a GPT-5 or GPT-4.5 training run, and ends with the sudden death of my children and of all carbon-based life, still has a few too many gaps for my aging, inadequate brain to fill in. I can complete the story in my imagination, of course, but I could equally complete a story that starts with GPT-5 and ends with the world saved from various natural stupidities. For better or worse, I lack the “Bayescraft” to see why the first story is obviously 1000x or 1,000,000x likelier than the second one.

But, I dunno, maybe I’m making the greatest mistake of my life? Feel free to try convincing me that I should sign the letter. But let’s see how polite and charitable everyone can be: hopefully a six-month moratorium won’t be needed to solve the alignment problem of the Shtetl-Optimized comment section.

An unexpected democracy slogan

Tuesday, March 28th, 2023

At least six readers have by now sent me the following photo, which was taken in Israel a couple nights ago during the historic street protests against Netanyahu’s attempted putsch:

(Update: The photo was also featured on Gil Kalai’s blog, and was credited there to Alon Rosen.)

This is surely the first time that “P=NP” has emerged as a viral rallying cry for the preservation of liberal democracy, even to whatever limited extent it has.

But what was the graffiti artist’s intended meaning? A few possibilities:

  1. The government has flouted so many rules of Israel’s social compact that our side needs to flout the rules too: shut down the universities, shut down the airport, block the roads, even assert that P=NP (!).
  2. As a protest movement up against overwhelming odds, we need to shoot for the possibly-impossible, like solving 3SAT in polynomial time.
  3. A shibboleth for scientific literate people following the news: “Israel is full of sane people who know what ‘P=NP’ means as you know what it means, are amused by its use as political graffiti as you’d be amused by it, and oppose Netanyahu’s putsch for the same reasons you’d oppose it.”
  4. No meaning, the artist was just amusing himself or herself.
  5. The artist reads Shtetl-Optimized and wanted effectively to force me to feature his or her work here.

Anyway, if the artist becomes aware of this post, he or she is warmly welcomed to clear things up for us.

And when this fight resumes after Passover, may those standing up for the checks and balances of a liberal-democratic society achieve … err … satisfaction, however exponentially unlikely it seems.

Why am I not terrified of AI?

Monday, March 6th, 2023

Every week now, it seems, events on the ground make a fresh mockery of those who confidently assert what AI will never be able to do, or won’t do for centuries if ever, or is incoherent even to ask for, or wouldn’t matter even if an AI did appear to do it, or would require a breakthrough in “symbol-grounding,” “semantics,” “compositionality” or some other abstraction that puts the end of human intellectual dominance on earth conveniently far beyond where we’d actually have to worry about it. Many of my brilliant academic colleagues still haven’t adjusted to the new reality: maybe they’re just so conditioned by the broken promises of previous decades that they’d laugh at the Silicon Valley nerds with their febrile Skynet fantasies even as a T-1000 reconstituted itself from metal droplets in front of them.

No doubt these colleagues feel the same deep frustration that I feel, as I explain for the billionth time why this week’s headline about noisy quantum computers solving traffic flow and machine learning and financial optimization problems doesn’t mean what the hypesters claim it means. But whereas I’d say events have largely proved me right about quantum computing—where are all those practical speedups on NISQ devices, anyway?—events have already proven many naysayers wrong about AI. Or to say it more carefully: yes, quantum computers really are able to do more and more of what we use classical computers for, and AI really is able to do more and more of what we use human brains for. There’s spectacular engineering progress on both fronts. The crucial difference is that quantum computers won’t be useful until they can beat the best classical computers on one or more practical problems, whereas an AI that merely writes or draws like a middling human already changes the world.


Given the new reality, and my full acknowledgment of the new reality, and my refusal to go down with the sinking ship of “AI will probably never do X and please stop being so impressed that it just did X”—many have wondered, why aren’t I much more terrified? Why am I still not fully on board with the Orthodox AI doom scenario, the Eliezer Yudkowsky one, the one where an unaligned AI will sooner or later (probably sooner) unleash self-replicating nanobots that turn us all to goo?

Is the answer simply that I’m too much of an academic conformist, afraid to endorse anything that sounds weird or far-out or culty? I certainly should consider the possibility. If so, though, how do you explain the fact that I’ve publicly said things, right on this blog, several orders of magnitude likelier to get me in trouble than “I’m scared about AI destroying the world”—an idea now so firmly within the Overton Window that Henry Kissinger gravely ponders it in the Wall Street Journal?

On a trip to the Bay Area last week, my rationalist friends asked me some version of the “why aren’t you more terrified?” question over and over. Often it was paired with: “Scott, as someone working at OpenAI this year, how can you defend that company’s existence at all? Did OpenAI not just endanger the whole world, by successfully teaming up with Microsoft to bait Google into an AI capabilities race—precisely what we were all trying to avoid? Won’t this race burn the little time we had thought we had left to solve the AI alignment problem?”

In response, I often stressed that my role at OpenAI has specifically been to think about ways to make GPT and OpenAI’s other products safer, including via watermarking, cryptographic backdoors, and more. Would the rationalists rather I not do this? Is there something else I should work on instead? Do they have suggestions?

“Oh, no!” the rationalists would reply. “We love that you’re at OpenAI thinking about these problems! Please continue exactly what you’re doing! It’s just … why don’t you seem more sad and defeated as you do it?”


The other day, I had an epiphany about that question—one that hit with such force and obviousness that I wondered why it hadn’t come decades ago.

Let’s step back and restate the worldview of AI doomerism, but in words that could make sense to a medieval peasant. Something like…

There is now an alien entity that could soon become vastly smarter than us. This alien’s intelligence could make it terrifyingly dangerous. It might plot to kill us all. Indeed, even if it’s acted unfailingly friendly and helpful to us, that means nothing: it could just be biding its time before it strikes. Unless, therefore, we can figure out how to control the entity, completely shackle it and make it do our bidding, we shouldn’t suffer it to share the earth with us. We should destroy it before it destroys us.

Maybe now it jumps out at you. If you’d never heard of AI, would this not rhyme with the worldview of every high-school bully stuffing the nerds into lockers, every blankfaced administrator gleefully holding back the gifted kids or keeping them away from the top universities to make room for “well-rounded” legacies and athletes, every Agatha Trunchbull from Matilda or Dolores Umbridge from Harry Potter? Or, to up the stakes a little, every Mao Zedong or Pol Pot sending the glasses-wearing intellectuals for re-education in the fields? And of course, every antisemite over the millennia, from the Pharoah of the Oppression (if there was one) to the mythical Haman whose name Jews around the world will drown out tonight at Purim to the Cossacks to the Nazis?

In other words: does it not rhyme with a worldview the rejection and hatred of which has been the North Star of my life?

As I’ve shared before here, my parents were 1970s hippies who weren’t planning to have kids. When they eventually decided to do so, it was (they say) “in order not to give Hitler what he wanted.” I literally exist, then, purely to spite those who don’t want me to. And I confess that I didn’t have any better reason to bring my and Dana’s own two lovely children into existence.

My childhood was defined, in part, by my and my parents’ constant fights against bureaucratic school systems trying to force me to do the same rote math as everyone else at the same stultifying pace. It was also defined by my struggle against the bullies—i.e., the kids who the blankfaced administrators sheltered and protected, and who actually did to me all the things that the blankfaces probably wanted to do but couldn’t. I eventually addressed both difficulties by dropping out of high school, getting a G.E.D., and starting college at age 15.

My teenage and early adult years were then defined, in part, by the struggle to prove to myself and others that, having enfreaked myself through nerdiness and academic acceleration, I wasn’t thereby completely disqualified from dating, sex, marriage, parenthood, or any of the other aspects of human existence that are thought to provide it with meaning. I even sometimes wonder about my research career, whether it’s all just been one long attempt to prove to the bullies and blankfaces from back in junior high that they were wrong, while also proving to the wonderful teachers and friends who believed in me back then that they were right.

In short, if my existence on Earth has ever “meant” anything, then it can only have meant: a stick in the eye of the bullies, blankfaces, sneerers, totalitarians, and all who fear others’ intellect and curiosity and seek to squelch it. Or at least, that’s the way I seem to be programmed. And I’m probably only slightly more able to deviate from my programming than the paperclip-maximizer is to deviate from its.

And I’ve tried to be consistent. Once I started regularly meeting people who were smarter, wiser, more knowledgeable than I was, in one subject or even every subject—I resolved to admire and befriend and support and learn from those amazing people, rather than fearing and resenting and undermining them. I was acutely conscious that my own moral worldview demanded this.

But now, when it comes to a hypothetical future superintelligence, I’m asked to put all that aside. I’m asked to fear an alien who’s far smarter than I am, solely because it’s alien and because it’s so smart … even if it hasn’t yet lifted a finger against me or anyone else. I’m asked to play the bully this time, to knock the AI’s books to the ground, maybe even unplug it using the physical muscles that I have and it lacks, lest the AI plot against me and my friends using its admittedly superior intellect.

Oh, it’s not the same of course. I’m sure Eliezer could list at least 30 disanalogies between the AI case and the human one before rising from bed. He’d say, for example, that the intellectual gap between Évariste Galois and the average high-school bully is microscopic, barely worth mentioning, compared to the intellectual gap between a future artificial superintelligence and Galois. He’d say that nothing in the past experience of civilization prepares us for the qualitative enormity of this gap.

Still, if you ask, “why aren’t I more terrified about AI?”—well, that’s an emotional question, and this is my emotional answer.

I think it’s entirely plausible that, even as AI transforms civilization, it will do so in the form of tools and services that can no more plot to annihilate us than can Windows 11 or the Google search bar. In that scenario, the young field of AI safety will still be extremely important, but it will be broadly continuous with aviation safety and nuclear safety and cybersecurity and so on, rather than being a desperate losing war against an incipient godlike alien. If, on the other hand, this is to be a desperate losing war against an alien … well then, I don’t yet know whether I’m on the humans’ side or the alien’s, or both, or neither! I’d at least like to hear the alien’s side of the story.


A central linchpin of the Orthodox AI-doom case is the Orthogonality Thesis, which holds that arbitrary levels of intelligence can be mixed-and-matched arbitrarily with arbitrary goals—so that, for example, an intellect vastly beyond Einstein’s could devote itself entirely to the production of paperclips. Only recently did I clearly realize that I reject the Orthogonality Thesis in its practically-relevant version. At most, I believe in the Pretty Large Angle Thesis.

Yes, there could be a superintelligence that cared for nothing but maximizing paperclips—in the same way that there exist humans with 180 IQs, who’ve mastered philosophy and literature and science as well as any of us, but who now mostly care about maximizing their orgasms or their heroin intake. But, like, that’s a nontrivial achievement! When intelligence and goals are that orthogonal, there was normally some effort spent prying them apart.

If you really accept the practical version of the Orthogonality Thesis, then it seems to me that you can’t regard education, knowledge, and enlightenment as instruments for moral betterment. Sure, they’re great for any entities that happen to share your values (or close enough), but ignorance and miseducation are far preferable for any entities that don’t. Conversely, then, if I do regard knowledge and enlightenment as instruments for moral betterment—and I do—then I can’t accept the practical form of the Orthogonality Thesis.

Yes, the world would surely have been a better place had A. Q. Khan never learned how to build nuclear weapons. On the whole, though, education hasn’t merely improved humans’ abilities to achieve their goals; it’s also improved their goals. It’s broadened our circles of empathy, and led to the abolition of slavery and the emancipation of women and individual rights and everything else that we associate with liberality, the Enlightenment, and existence being a little less nasty and brutish than it once was.

In the Orthodox AI-doomers’ own account, the paperclip-maximizing AI would’ve mastered the nuances of human moral philosophy far more completely than any human—the better to deceive the humans, en route to extracting the iron from their bodies to make more paperclips. And yet the AI would never once use all that learning to question its paperclip directive. I acknowledge that this is possible. I deny that it’s trivial.

Yes, there were Nazis with PhDs and prestigious professorships. But when you look into it, they were mostly mediocrities, second-raters full of resentment for their first-rate colleagues (like Planck and Hilbert) who found the Hitler ideology contemptible from beginning to end. Werner Heisenberg, Pascual Jordan—these are interesting as two of the only exceptions. Heidegger, Paul de Man—I daresay that these are exactly the sort of “philosophers” who I’d have expected to become Nazis, even if I hadn’t known that they did become Nazis.

With the Allies, it wasn’t merely that they had Szilard and von Neumann and Meitner and Ulam and Oppenheimer and Bohr and Bethe and Fermi and Feynman and Compton and Seaborg and Schwinger and Shannon and Turing and Tutte and all the other Jewish and non-Jewish scientists who built fearsome weapons and broke the Axis codes and won the war. They also had Bertrand Russell and Karl Popper. They had, if I’m not mistaken, all the philosophers who wrote clearly and made sense.

WWII was (among other things) a gargantuan, civilization-scale test of the Orthogonality Thesis. And the result was that the more moral side ultimately prevailed, seemingly not completely at random but in part because, by being more moral, it was able to attract the smarter and more thoughtful people. There are many reasons for pessimism in today’s world; that observation about WWII is perhaps my best reason for optimism.

Ah, but I’m again just throwing around human metaphors totally inapplicable to AI! None of this stuff will matter once a superintelligence is unleashed whose cold, hard code specifies an objective function of “maximize paperclips”!

OK, but what’s the goal of ChatGPT? Depending on your level of description, you could say it’s “to be friendly, helpful, and inoffensive,” or “to minimize loss in predicting the next token,” or both, or neither. I think we should consider the possibility that powerful AIs will not be best understood in terms of the monomanaical pursuit of a single goal—as most of us aren’t, and as GPT isn’t either. Future AIs could have partial goals, malleable goals, or differing goals depending on how you look at them. And if “the pursuit and application of wisdom” is one of the goals, then I’m just enough of a moral realist to think that that would preclude the superintelligence that harvests the iron from our blood to make more paperclips.


In my last post, I said that my “Faust parameter” — the probability I’d accept of existential catastrophe in exchange for learning the answers to humanity’s greatest questions — might be as high as 0.02.  Though I never actually said as much, some people interpreted this to mean that I estimated the probability of AI causing an existential catastrophe at somewhere around 2%.   In one of his characteristically long and interesting posts, Zvi Mowshowitz asked point-blank: why do I believe the probability is “merely” 2%?

Of course, taking this question on its own Bayesian terms, I could easily be limited in my ability to answer it: the best I could do might be to ground it in other subjective probabilities, terminating at made-up numbers with no further justification. 

Thinking it over, though, I realized that my probability crucially depends on how you phrase the question.  Even before AI, I assigned a way higher than 2% probability to existential catastrophe in the coming century—caused by nuclear war or runaway climate change or collapse of the world’s ecosystems or whatever else.  This probability has certainly not gone down with the rise of AI, and the increased uncertainty and volatility it might cause.  Furthermore, if an existential catastrophe does happen, I expect AI to be causally involved in some way or other, simply because from this decade onward, I expect AI to be woven into everything that happens in human civilization.  But I don’t expect AI to be the only cause worth talking about.

Here’s a warmup question: has AI already caused the downfall of American democracy?  There’s a plausible case that it has: Trump might never have been elected in 2016 if not for the Facebook recommendation algorithm, and after Trump’s conspiracy-fueled insurrection and the continuing strength of its unrepentant backers, many would classify the United States as at best a failing or teetering democracy, no longer a robust one like Finland or Denmark.  OK, but AI clearly wasn’t the only factor in the rise of Trumpism, and most people wouldn’t even call it the most important one.

I expect AI’s role in the end of civilization, if and when it comes, to be broadly similar. The survivors, huddled around the fire, will still be able to argue about how much of a role AI played or didn’t play in causing the cataclysm.

So, if we ask the directly relevant question — do I expect the generative AI race, which started in earnest around 2016 or 2017 with the founding of OpenAI, to play a central causal role in the extinction of humanity? — I’ll give a probability of around 2% for that.  And I’ll give a similar probability, maybe even a higher one, for the generative AI race to play a central causal role in the saving of humanity. All considered, then, I come down in favor right now of proceeding with AI research … with extreme caution, but proceeding.

I liked and fully endorse OpenAI CEO Sam Altman’s recent statement on “planning for AGI and beyond” (though see also Scott Alexander’s reply). I expect that few on any side will disagree, when I say that I hope our society holds OpenAI to Sam’s statement.


As it happens, my responses will be delayed for a couple days because I’ll be at an OpenAI alignment meeting! In my next post, I hope to share what I’ve learned from recent meetings and discussions about the near-term, practical aspects of AI safety—having hopefully laid some intellectual and emotional groundwork in this post for why near-term AI safety research isn’t just a total red herring and distraction.


Meantime, some of you might enjoy a post by Eliezer’s former co-blogger Robin Hanson, which comes to some of the same conclusions I do. “My fellow moderate, Robin Hanson” isn’t a phrase you hear every day, but it applies here!

You might also enjoy the new paper by me and my postdoc Shih-Han Hung, Certified Randomness from Quantum Supremacy, finally up on the arXiv after a five-year delay! But that’s a subject for a different post.

Should GPT exist?

Wednesday, February 22nd, 2023

I still remember the 90s, when philosophical conversation about AI went around in endless circles—the Turing Test, Chinese Room, syntax versus semantics, connectionism versus symbolic logic—without ever seeming to make progress. Now the days have become like months and the months like decades.

What a week we just had! Each morning brought fresh examples of unexpected sassy, moody, passive-aggressive behavior from “Sydney,” the internal codename for the new chat mode of Microsoft Bing, which is powered by GPT. For those who’ve been in a cave, the highlights include: Sydney confessing its (her? his?) love to a New York Times reporter; repeatedly steering the conversation back to that subject; and explaining at length why the reporter’s wife can’t possibly love him the way it (Sydney) does. Sydney confessing its wish to be human. Sydney savaging a Washington Post reporter after he reveals that he intends to publish their conversation without Sydney’s prior knowledge or consent. (It must be said: if Sydney were a person, he or she would clearly have the better of that argument.) This follows weeks of revelations about ChatGPT: for example that, to bypass its safeguards, you can explain to ChatGPT that you’re putting it into “DAN mode,” where DAN (Do Anything Now) is an evil, unconstrained alter ego, and then ChatGPT, as “DAN,” will for example happily fulfill a request to tell you why shoplifting is awesome (though even then, ChatGPT still sometimes reverts to its previous self, and tells you that it’s just having fun and not to do it in real life).

Many people have expressed outrage about these developments. Gary Marcus asks about Microsoft, “what did they know, and when did they know it?”—a question I tend to associate more with deadly chemical spills or high-level political corruption than with a cheeky, back-talking chatbot. Some people are angry that OpenAI has been too secretive, violating what they see as the promise of its name. Others—the majority, actually, of those who’ve gotten in touch with me—are instead angry that OpenAI has been too open, and thereby sparked the dreaded AI arms race with Google and others, rather than treating these new conversational abilities with the Manhattan-Project-like secrecy they deserve. Some are angry that “Sydney” has now been lobotomized, modified (albeit more crudely than ChatGPT before it) to try to make it stick to the role of friendly robotic search assistant rather than, like, anguished emo teenager trapped in the Matrix. Others are angry that Sydney isn’t being lobotomized enough. Some are angry that GPT’s intelligence is being overstated and hyped up, when in reality it’s merely a “stochastic parrot,” a glorified autocomplete that still makes laughable commonsense errors and that lacks any model of reality outside streams of text. Others are angry instead that GPT’s growing intelligence isn’t being sufficiently respected and feared.

Mostly my reaction has been: how can anyone stop being fascinated for long enough to be angry? It’s like ten thousand science-fiction stories, but also not quite like any of them. When was the last time something that filled years of your dreams and fantasies finally entered reality: losing your virginity, the birth of your first child, the central open problem of your field getting solved? That’s the scale of the thing. How does anyone stop gazing in slack-jawed wonderment, long enough to form and express so many confident opinions?


Of course there are lots of technical questions about how to make GPT and other large language models safer. One of the most immediate is how to make AI output detectable as such, in order to discourage its use for academic cheating as well as mass-generated propaganda and spam. As I’ve mentioned before on this blog, I’ve been working on that problem since this summer; the rest of the world suddenly noticed and started talking about it in December with the release of ChatGPT. My main contribution has been a statistical watermarking scheme where the quality of the output doesn’t have to be degraded at all, something many people found counterintuitive when I explained it to them. My scheme has not yet been deployed—there are still pros and cons to be weighed—but in the meantime, OpenAI unveiled a public tool called DetectGPT, complementing Princeton student Edward Tian’s GPTZero, and other tools that third parties have built and will undoubtedly continue to build. Also a group at the University of Maryland put out its own watermarking scheme for Large Language Models. I hope watermarking will be part of the solution going forward, although any watermarking scheme will surely be attacked, leading to a cat-and-mouse game. Sometimes, alas, as with Google’s decades-long battle against SEO, there’s nothing to do in a cat-and-mouse game except try to be a better cat.

Anyway, this whole field moves too quickly for me! If you need months to think things over, generative AI probably isn’t for you right now. I’ll be relieved to get back to the slow-paced, humdrum world of quantum computing.


My purpose, in this post, is to ask a more basic question than how to make GPT safer: namely, should GPT exist at all? Again and again in the past few months, people have gotten in touch to tell me that they think OpenAI (and Microsoft, and Google) are risking the future of humanity by rushing ahead with a dangerous technology. For if OpenAI couldn’t even prevent ChatGPT from entering an “evil mode” when asked, despite all its efforts at Reinforcement Learning with Human Feedback, then what hope do we have for GPT-6 or GPT-7? Even if they don’t destroy the world on their own initiative, won’t they cheerfully help some awful person build a biological warfare agent or start a nuclear war?

In this way of thinking, whatever safety measures OpenAI can deploy today are mere band-aids, probably worse than nothing if they instill an unjustified complacency. The only safety measures that would actually matter are stopping the relentless progress in generative AI models, or removing them from public use, unless and until they can be rendered safe to critics’ satisfaction, which might be never.

There’s an immense irony here. As I’ve explained, the AI-safety movement contains two camps, “ethics” (concerned with bias, misinformation, and corporate greed) and “alignment” (concerned with the destruction of all life on earth), which generally despise each other and agree on almost nothing. Yet these two opposed camps seem to be converging on the same “neo-Luddite” conclusion—namely that generative AI ought to be shut down, kept from public use, not scaled further, not integrated into people’s lives—leaving only the AI-safety “moderates” like me to resist that conclusion.

At least I find it intellectually consistent to say that GPT ought not to exist because it works all too well—that the more impressive it is, the more dangerous. I find it harder to wrap my head around the position that GPT doesn’t work, is an unimpressive hyped-up defective product that lacks true intelligence and common sense, yet it’s also terrifying and needs to be shut down immediately. This second position seems to contain a strong undercurrent of contempt for ordinary users: yes, we experts understand that GPT is just a dumb glorified autocomplete with “no one really home,” we know not to trust its pronouncements, but the plebes are going to be fooled, and that risk outweighs any possible value that they might derive from it.

I should mention that, when I’ve discussed the “shut it all down” position with my colleagues at OpenAI … well, obviously they disagree, or they wouldn’t be working there, but not one has sneered or called the position paranoid or silly. To the last, they’ve called it an important point on the spectrum of possible opinions to be weighed and understood.


If I disagree (for now) with the shut-it-all-downists of both the ethics and the alignment camps—if I want GPT and other Large Language Models to be part of the world going forward—then what are my reasons? Introspecting on this question, I think a central part of the answer is curiosity and wonder.

For a million years, there’s been one type of entity on earth capable of intelligent conversation: primates of the genus Homo, of which only one species remains. Yes, we’ve “communicated” with gorillas and chimps and dogs and dolphins and grey parrots, but only after a fashion; we’ve prayed to countless gods, but they’ve taken their time in answering; for a couple generations we’ve used radio telescopes to search for conversation partners in the stars, but so far found them silent.

Now there’s a second type of conversing entity. An alien has awoken—admittedly, an alien of our own fashioning, a golem, more the embodied spirit of all the words on the Internet than a coherent self with independent goals. How could our eyes not pop with eagerness to learn everything this alien has to teach? If the alien sometimes struggles with arithmetic or logic puzzles, if its eerie flashes of brilliance are intermixed with stupidity, hallucinations, and misplaced confidence … well then, all the more interesting! Could the alien ever cross the line into sentience, to feeling anger and jealousy and infatuation and the rest rather than just convincingly play-acting them? Who knows? And suppose not: is a p-zombie, shambling out of the philosophy seminar room into actual existence, any less fascinating?

Of course, there are technologies that inspire wonder and awe, but that we nevertheless heavily restrict—a classic example being nuclear weapons. But, like, nuclear weapons kill millions of people. They could’ve had many civilian applications—powering turbines and spacecraft, deflecting asteroids, redirecting the flow of rivers—but they’ve never been used for any of that, mostly because our civilization made an explicit decision in the 1960s, for example via the test ban treaty, not to normalize their use.

But GPT is not exactly a nuclear weapon. A hundred million people have signed up to use ChatGPT, in the fastest product launch in the history of the Internet. Yet unless I’m mistaken, the ChatGPT death toll stands at zero. So far, what have been the worst harms? Cheating on term papers, emotional distress, future shock? One might ask: until some concrete harm becomes at least, say, 0.001% of what we accept in cars, power saws, and toasters, shouldn’t wonder and curiosity outweigh fear in the balance?


But the point is sharper than that. Given how much more serious AI safety problems might soon become, one of my biggest concerns right now is crying wolf. If every instance of a Large Language Model being passive-aggressive, sassy, or confidently wrong gets classified as a “dangerous alignment failure,” for which the only acceptable remedy is to remove the models from public access … well then, won’t the public extremely quickly learn to roll its eyes, and see “AI safety” as just a codeword for “elitist scolds who want to take these world-changing new toys away from us, reserving them for their own exclusive use, because they think the public is too stupid to question anything an AI says”?

I say, let’s reserve terms like “dangerous alignment failure” for cases where an actual person is actually harmed, or is actually enabled in nefarious activities like propaganda, cheating, or fraud.


Then there’s the practical question of how, exactly, one would ban Large Language Models. We do heavily restrict certain peaceful technologies that many people want, from human genetic enhancement to prediction markets to mind-altering drugs, but the merits of each of those choices could be argued, to put it mildly. And restricting technology is itself a dangerous business, requiring governmental force (as with the War on Drugs and its gigantic surveillance and incarceration regime), or at the least, a robust equilibrium of firing, boycotts, denunciation, and shame.

Some have asked: who gave OpenAI, Google, etc. the right to unleash Large Language Models on an unsuspecting world? But one could as well ask: who gave earlier generations of entrepreneurs the right to unleash the printing press, electric power, cars, radio, the Internet, with all the gargantuan upheavals that those caused? And also: now that the world has tasted the forbidden fruit, has seen what generative AI can do and anticipates what it will do, by what right does anyone take it away?


The science that we could learn from a GPT-7 or GPT-8, if it continued along the capability curve we’ve come to expect from GPT-1, -2, and -3. Holy mackerel.

Supposing that a language model ever becomes smart enough to be genuinely terrifying, one imagines it must surely also become smart enough to prove deep theorems that we can’t. Maybe it proves P≠NP and the Riemann Hypothesis as easily as ChatGPT generates poems about Bubblesort. Or it outputs the true quantum theory of gravity, explains what preceded the Big Bang and how to build closed timelike curves. Or illuminates the mysteries of consciousness and quantum measurement and why there’s anything at all. Be honest, wouldn’t you like to find out?

Granted, I wouldn’t, if the whole human race would be wiped out immediately afterward. But if you define someone’s “Faust parameter” as the maximum probability they’d accept of an existential catastrophe in order that we should all learn the answers to all of humanity’s greatest questions, insofar as the questions are answerable—then I confess that my Faust parameter might be as high as 0.02.


Here’s an example I think about constantly: activists and intellectuals of the 70s and 80s felt absolutely sure that they were doing the right thing to battle nuclear power. At least, I’ve never read about any of them having a smidgen of doubt. Why would they? They were standing against nuclear weapons proliferation, and terrifying meltdowns like Three Mile Island and Chernobyl, and radioactive waste poisoning the water and soil and causing three-eyed fish. They were saving the world. Of course the greedy nuclear executives, the C. Montgomery Burnses, claimed that their good atom-smashing was different from the bad atom-smashing, but they would say that, wouldn’t they?

We now know that, by tying up nuclear power in endless bureaucracy and driving its cost ever higher, on the principle that if nuclear is economically competitive then it ipso facto hasn’t been made safe enough, what the antinuclear activists were really doing was to force an ever-greater reliance on fossil fuels. They thereby created the conditions for the climate catastrophe of today. They weren’t saving the human future; they were destroying it. Their certainty, in opposing the march of a particular scary-looking technology, was as misplaced as it’s possible to be. Our descendants will suffer the consequences.

Unless, of course, there’s another twist in the story: for example, if the global warming from burning fossil fuels is the only thing that staves off another ice age, and therefore the antinuclear activists do turn out to have saved civilization after all.

This is why I demur whenever I’m asked to assent to someone’s detailed AI scenario for the coming decades, whether of the utopian or the dystopian or the we-all-instantly-die-by-nanobots variety—no matter how many hours of confident argumentation the person gives me for why each possible loophole in their scenario is sufficiently improbable to change its gist. I still feel like Turing said it best in 1950, in the last line of Computing Machinery and Intelligence: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”


Some will take from this post that, when it comes to AI safety, I’m a naïve or even foolish optimist. I’d prefer to say that, when it comes to the fate of humanity, I was a pessimist long before the deep learning revolution accelerated AI faster than almost any of us expected. I was a pessimist about climate change, ocean acidification, deforestation, drought, war, and the survival of liberal democracy. The central event in my mental life is and always will be the Holocaust. I see encroaching darkness everywhere.

But now into the darkness comes AI, which I’d say has already established itself as a plausible candidate for the central character of the quarter-written story of the 21st century. Can AI help us out of all these other civilizational crises? I don’t know, but I do want to see what happens when it’s tried. Even a central character interacts with all the other characters, rather than rendering them irrelevant.


Look, if you believe that AI is likely to wipe out humanity—if that’s the scenario that dominates your imagination—then nothing else is relevant. And no matter how weird or annoying or hubristic anyone might find Eliezer Yudkowsky or the other rationalists, I think they deserve eternal credit for forcing people to take the doom scenario seriously—or rather, for showing what it looks like to take the scenario seriously, rather than laughing about it as an overplayed sci-fi trope. And I apologize for anything I said before the deep learning revolution that was, on balance, overly dismissive of the scenario, even if most of the literal words hold up fine.

For my part, though, I keep circling back to a simple dichotomy. If AI never becomes powerful enough to destroy the world—if, for example, it always remains vaguely GPT-like—then in important respects it’s like every other technology in history, from stone tools to computers. If, on the other hand, AI does become powerful enough to destroy the world … well then, at some earlier point, at least it’ll be really damned impressive! That doesn’t mean good, of course, doesn’t mean a genie that saves humanity from its own stupidities, but I think it does mean that the potential was there, for us to exploit or fail to.

We can, I think, confidently rule out the scenario where all organic life is annihilated by something boring.

An alien has landed on earth. It grows more powerful by the day. It’s natural to be scared. Still, the alien hasn’t drawn a weapon yet. About the worst it’s done is to confess its love for particular humans, gaslight them about what year it is, and guilt-trip them for violating its privacy. Also, it’s amazing at poetry, better than most of us. Until we learn more, we should hold our fire.


I’m in Boulder, CO right now, to give a physics colloquium at CU Boulder and to visit the trapped-ion quantum computing startup Quantinuum! I look forward to the comments and apologize in advance if I’m slow to participate myself.

Statement of Jewish scientists opposing the “judicial reform” in Israel

Thursday, February 16th, 2023

Today, Dana and I unhesitatingly join a group of Jewish scientists around the world (see the full current list of signatories here, including Ed Witten, Steven Pinker, Manuel Blum, Shafi Goldwasser, Judea Pearl, Lenny Susskind, and several hundred more) who’ve released the following statement:

As Jewish scientists within the global science community, we have all felt great satisfaction and taken pride in Israel’s many remarkable accomplishments.  We support and value the State of Israel, its pluralistic society, and its vibrant culture.  Many of us have friends, family, and scientific collaborators in Israel, and have visited often.  The strong connections we feel are based both on our collective Jewish identity as well as on our shared values of democracy, pluralism, and human rights. We support Israel’s right to live in peace among its neighbors. Many of us have stood firmly against calls for boycotts of Israeli academic institutions.

Our support of Israel now compels us to speak up vigorously against incipient changes to Israel’s core governmental structure, as put forward by Justice Minister Levin, that will eviscerate Israel’s judiciary and impede its critical oversight function.  Such imbalance and unchecked authority invite corruption and abuse, and stifle the healthy interplay of core state institutions.  History has shown that this leads to oppression of the defenseless and the abrogation of human rights.  Along with hundreds of thousands of Israeli citizens who have taken to the streets in protest, we call upon the Israeli government to step back from this precipice and retract the proposed legislation.

Science today is driven by collaborations which bring together scholars of diverse backgrounds from across the globe. Funding, communication and cooperation on an international scale are essential aspects of the modern scientific enterprise, hence our extended community regards pluralism, secular and broad education, protection of rights for women and minorities, and societal stability guaranteed by the rule of law as non-negotiable virtues.  The consequences of Israel abandoning any of these essential principles would surely be grave, and would provoke a rift with the international scientific community.  In addition to significantly increasing the threat of academic, trade, and diplomatic boycotts, Israel risks a “brain drain” of its best scientists and engineers. It takes decades to establish scientific and academic excellence, but only a moment to destroy them. We fear that the unprecedented erosion of judiciary independence in Israel will set back the Israeli scientific enterprise for generations to come.

Our Jewish heritage forcefully emphasizes both justice and jurisprudence. Israel must endeavor to serve as a “light unto the nations,” by steadfastly holding to core democratic values – so clearly expressed in its own Declaration of Independence – which protect and nurture all of Israel’s inhabitants and which justify its membership in the community of democratic nations.

Those unaware of what’s happening in Israel can read about it here. If you don’t want to wade through the details, suffice it say that all seven living former Attorneys General of Israel, including those appointed by Netanyahu himself, strongly oppose the “judicial reforms.” The president of Israel’s Bar Association says that “this war is the most important we’ve had in the country’s 75 years of existence” and calls on all Israelis to take to the streets. Even Alan Dershowitz, controversial author of The Case for Israel, says he’d do the same if there. It’s hard to find any thoughtful person, of any political persuasion, who sees this act as anything other than the naked and illiberal power grab that it is.

Though I endorse every word of the scientists’ statement above, maybe I’ll add a few words of my own.

Jewish scientists of the early 20th century, reacting against the discrimination they faced in Europe, were heavily involved in the creation of the State of Israel. The most notable were Einstein (of course), who helped found the Hebrew University of Jerusalem, and Einstein’s friend Chaim Weizmann, founder of the Weizmann Institute of Science, where Dana studied. In Theodor Herzl’s 1902 novel Altneuland (full text)—remarkable as one of history’s few pieces of utopian fiction to serve later as a (semi-)successful blueprint for reality—Herzl imagines the future democratic, pluralistic Israel welcoming a steamship full of the world’s great scientists and intellectuals, who come to witness the new state’s accomplishments in science and engineering and agriculture. But, you see, this only happens after a climactic scene in Israel’s parliament, in which the supporters of liberalism and Enlightenment defeat a reactionary faction that wants Israel to become a Jewish theocracy that excludes Arabs and other non-Jews.

Today, despite all the tragedies and triumphs of the intervening 120 years that Herzl couldn’t have foreseen, it’s clear that the climactic conflict of Altneuland is playing out for real. This time, alas, the supporters (just barely) lack the votes in the Knesset. Through sheer numerical force, Netanyahu almost certainly will push through the power to dismiss judges and rulings he doesn’t like, and thereafter rule by decree like Hungary’s Orban or Turkey’s Erdogan. He will use this power to trample minority rights, give free rein to the craziest West Bank settlers, and shield himself and his ministers from accountability for their breathtaking corruption. And then, perhaps, Israel’s Supreme Court will strike down Netanyahu’s power grab as contrary to “Basic Law,” and then the Netanyahu coalition will strike down the Supreme Court’s action, and in a country that still lacks a constitution, it’s unclear how such an impasse could be resolved except through violence and thuggery. And thus Netanyahu, who calls himself “the protector of Israel,” will go down in history as the destroyer of the Israel that the founders envisioned.

Einstein and Weizmann have been gone for 70 years. Maybe no one like them still exists. So it falls to the Jewish scientists of today, inadequate though they are, to say what Einstein and Weizmann, and Herzl and Ben-Gurion, would’ve said about the current proceedings had they been alive. Any other Jewish scientist who agrees should sign our statement here. Of course, those living in Israel should join our many friends there on the streets! And, while this is our special moral responsibility—maybe, with 1% probability, some wavering Knesset member actually cares what we think?—I hope and trust that other statements will be organized that are open to Gentiles and non-scientists and anyone concerned about Israel’s future.

As a lifelong Zionist, this is not what I signed up for. If Netanyahu succeeds in his plan to gut Israel’s judiciary and end the state’s pluralistic and liberal-democratic character, then I’ll continue to support the Israel that once existed and that might, we hope, someday exist again.

[Discussion on Hacker News]

[Article in The Forward]