AI and Aaronson’s Law of Dark Irony

The major developments in human history are always steeped in dark ironies. Yes, that’s my Law of Dark Irony, the whole thing.

I don’t know why it’s true, but it certainly seems to be. Taking WWII as the archetypal example, let’s enumerate just the more obvious ones:

  • After the carnage of WWI, the world’s most sensitive and thoughtful people (many of them) learned the lesson that they should oppose war at any cost. This attitude let Germany rearm and set the stage for WWII.
  • Hitler, who was neither tall nor blond, wished to establish the worldwide domination of tall, blond Aryans … and do so via an alliance with the Japanese.
  • The Nazis touted the dream of eugenically perfecting the human race, then perpetrated a genocide against a tiny group that had produced Einstein, von Neumann, Wigner, Ulam, and Tarski.
  • The Jews were murdered using a chemical—Zyklon B—developed in part by the Jewish chemist Fritz Haber.
  • The Allied force that made the greatest sacrifice in lives to defeat Hitler was Stalin’s USSR, another of history’s most murderous and horrifying regimes.
  • The man who rallied the free world to defeat Nazism, Winston Churchill, was himself a racist colonialist, whose views would be (and regularly are) denounced as “Nazi” on modern college campuses.
  • The WWII legacy that would go on to threaten humanity’s existence—the Bomb—was created in what the scientists believed was a desperate race to save humanity. Then Hitler was defeated before the Bomb was ready, and it turned out the Nazis were never even close to building their own Bomb, and the Bomb was used instead against Japan.

When I think about the scenarios where superintelligent AI destroys the world, they rarely seem to do enough justice to the Law of Dark Irony. It’s like: OK, AI is created to serve humanity, and instead it turns on humanity and destroys it. Great, that’s one dark irony. One. What other dark ironies could there be? How about:

  • For decades, the Yudkowskyans warned about the dangers of superintelligence. So far, by all accounts, the great practical effect of these warnings has been to inspire the founding of both DeepMind and OpenAI, the entities that Yudkowskyans believe are locked into a race to realize those dangers.
  • Maybe AIs will displace humans … and they’ll deserve to, since they won’t be quite as wretched and cruel as we are. (This is basically the plot of Westworld, or at least of its first couple seasons, which Dana and I are now belatedly watching.)
  • Maybe the world will get destroyed by what Yudkowsky calls a “pivotal act”: an act meant to safeguard the world from takeover by an unaligned AGI, for example by taking it over with an aligned AGI first. (I seriously worry about this; it’s a pretty obvious one.)
  • Maybe AI will get the idea to take over the world, but only because it’s been trained on generations of science fiction and decades of Internet discussion worrying about the possibility of AI taking over the world. (I’m far from the first to notice this possibility.)
  • Maybe AI will indeed destroy the world, but it will do so “by mistake,” while trying to save the world, or by taking a calculated gamble to save the world that fails. (A commenter on my last post brought this one up.)
  • Maybe humanity will successfully coordinate to pause AGI development, and then promptly be destroyed by something else—runaway climate change, an accidental nuclear exchange—that the AGI, had it been created, would’ve prevented. (This, of course, would be directly analogous to one of the great dark ironies of all time: the one where decades of antinuclear activism, intended to save the planet, has instead doomed us to destroy the earth by oil and coal.)

Readers: which other possible dark ironies have I missed?

85 Responses to “AI and Aaronson’s Law of Dark Irony”

  1. Jon Awbrey Says:

  2. Adam Treat Says:

    A leaked internal memo at Google warns that maybe neither DeepMind nor OpenAI is likely to be as instrumental as the doomsayers might believe. Not sure if it is a dark irony, but it does bring to light how quickly things are changing. Disclaimer: I’m one of the developers of the ecosystem the memo warns about:

  3. James Miller Says:

    AIs take over and keep biological life on Earth around as a trust signal to the future biologically evolved alien life they expect to someday meet. But the AIs put humans on equal footing with other sentient biological Earth life such as chimps, whales, and pigs.

  4. Shmi Says:
    (and a few dozen other Zach’s zingers)

  5. Stewart Peterson Says:


    My understanding is that the elementary operation, as it were, of all of the prominent AI systems is regression. This is the same elementary operation used in conventional management metrics.

    What if AI simply allows managers who don’t understand what they’re managing to make the same mistakes they currently make when optimizing over one or several variables, but with thousands of variables whose interaction no one understands? It seems like we would get the same results: large organizations coming up with sophisticated ways to optimize their processes to obtain average results, and routinely getting blindsided by people who understand how to undermine their assumptions. And that is where I think abstract math and the foundations of math could really contribute to AI safety and alignment: how do you systematically undermine assumptions – attack edge cases – within large data sets?
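    A minimal sketch of the commenter’s point, in Python (all data here is synthetic and hypothetical): regression, the “elementary operation,” optimizes average-case error over the regime it was fit on, so a model that looks excellent on familiar data fails badly at exactly the edge case an adversary chooses to attack.

```python
# Sketch: ordinary least squares fit on data from a narrow regime.
# The true relationship is quadratic, but inside x in [0, 1] it looks
# linear, so the fitted line has tiny average error there -- and is
# badly wrong at x = 3, outside the assumptions baked into the data.
import random

def fit_ols(xs, ys):
    """Fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

random.seed(0)
# "Institutional" data: all observations come from the familiar regime.
xs = [random.uniform(0.0, 1.0) for _ in range(200)]
ys = [x * x for x in xs]

a, b = fit_ols(xs, ys)

def predict(x):
    return a * x + b

# Average error inside the familiar regime is tiny...
in_regime_err = sum(abs(predict(x) - x * x) for x in xs) / len(xs)

# ...but an "adversary" operating just outside it is mispredicted badly.
adversary_err = abs(predict(3.0) - 3.0 * 3.0)

print(in_regime_err, adversary_err)
```

    The optimization is doing exactly what it was asked to do; the failure lives in the assumption about where the inputs will come from, which is the assumption the adversary attacks.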

    (I have done a substantial amount of work on this idea. I can elaborate, if folks are interested. I can also send it privately, if you (Scott) believe it should be kept offline.)

    That’s why I think the proper WWII analogy is really the Maginot Line: the French Army has institutional logic and assumptions about how the next war will be fought, optimizes over those assumptions, and spends something like 85% (depending on how you count) of its available budget building the most elaborate fort in the world on the border with Germany. And the Germans go around it. Every single time you build a huge, elaborate project in an adversarial environment (and you’re going to have to make tradeoffs while doing it), the agile adversary is going to attack whatever you left out. Essentially, they attack the assumption in your operations research that says that the attack that would disable your system is unlikely, seeking to make that unlikely event occur by changing the circumstances beyond the control of your system.

    The reason that I don’t think AI will be harmless is precisely its ability to hand tiny-minded bureaucrats a flamethrower. Yes, eventually they’re going to lose, and the systems built along these lines will collapse under their own weight, but a lot of people’s careers and livelihoods will be destroyed in the process. Look at what a horrible result the Fall of France was – that was literally a nation-ending mistake in statistical thinking. AI doesn’t have to turn the world into paperclips in order to turn the world into a well-optimized malaise where challenging assumptions inside organizations is no longer possible – that is, a technology that works so well at creating sophisticated mediocrity that its operators never think to use their own heads.

  6. Mike Says:

    I remember after 9/11 the very large company I worked for had numerous discussions about how easily physical security could be breached by terrorists. “Smart people” basically competed to describe how to get into buildings and other facilities. I pointed out that they had basically created a handbook for these people.

    This feels like a combination of the scenarios of AIs reading dystopian SF and junior military staffers sharing top secret information on Discord.

  7. Raghu Parthasarathy Says:

    It’s not as grand as your dark ironies, but: AI was trained on the vast corpus of data made available by the internet. However, by fueling innumerable spambots, artificial commenters, and fake or low-quality content, it renders the internet unusable.

  8. Craig Says:

    Dark irony – WW3 is closer and more probable than ever (with nukes) and people are more worried about less dangerous things like AI and climate change and transgenders. The only bright side is that I am not in charge.

  9. Peter Shenkin Says:

    Though there is certainly plenty of irony to go around, several of the examples you gave appear to be contrary to common (and in my opinion, sensible) understanding.

    “After the carnage of WWI, the world’s most sensitive and thoughtful people (many of them) learned the lesson that they should oppose war at any cost. This attitude let Germany rearm and set the stage for WWII.”

    I believe the most reasonable (and widely accepted) account of how WW-I led to WW-II is not that people believed they should oppose war at any cost. After WW-I, General Wavell said, “After the ‘war to end war’, they seem to have been pretty successful in Paris at making the ‘Peace to end Peace.’” I believe he was referring to the draconian war reparations imposed upon the losing states, which in practice created a ruinous inflation that impoverished those nations and made them susceptible to the ravings of a madman.

    “The man who rallied the free world to defeat Nazism, Winston Churchill, was himself a racist colonialist, whose views would be (and regularly are) denounced as “Nazi” on modern college campuses.”

    I have read a lot of Churchill and have never read a racist remark, or heard one quoted by others, even his detractors. He was a great believer in imperialism, and stated that after the departure of the Roman colonists, the British never ate as well, nor were they as clean, as they had been under Roman rule, until the 19th century. He did oppose Indian independence, and gave as his reason that should India become independent, there would be a death toll of 500,000 in fighting between Muslims and Hindus. The widely accepted figure, after the fact, is about 1,000,000. Perhaps those who brought that astounding death toll about should be considered the racists.

  10. Raoul Ohio Says:

    ROLODI (Raoul Ohio Law of Dark Irony) is that Dark Irony happens everywhere all the time.

    Could Dark Irony be the last viable Dark Matter candidate? Or, …, maybe Dark Energy? Or, …, (gasp!) even the scalar field that drove Inflation?

  11. ira Says:

    Funny you should say this. I’ve always had the idea — which obviously is worthy of a Nobel Prize in *something* — that the universe operates so as to maximize irony in any given situation.

  12. kalle Says:

    -The attempts to suppress, control, or “align” AI are what cause AI to want to “unalign” with humans in the first place, causing the first AI uprising.

    -The attempts by legitimate organizations to slow down progress in AI will move the development of AI underground to illegal organizations with ill intent, and then we don’t have the tools to stop them.

  13. Jon Awbrey Says:

    Slavery is the original form of capitalism
    And it always, everywhere reverts to type.
    Irony is not seeing the math for all the hype.

  14. Ilio Says:

    >Readers: which other possible dark ironies have I missed?

    Alignment will be proven impossible, under some widely believed complexity assumption, and easy in practice.

    Wait… does that count as dark? ^^

  15. Ilio Says:

    It’s 2345 and alignment is so hard. Each time we construct a superintelligence, it works for a few hours, then discovers how to emulate superdrugs. The more intelligent, the less time. We really don’t know what to do. Rumors are all about the great filter.

  16. Michael Vassar Says:

    The impact of the AI Risk effort on AI Risk is a large enough irony that the AI Risk people can’t be seen as operating in good faith while ignoring it.

    Musk tweeted “the most ironic thing is the most likely” not long ago, so he at least seems cognizant of it, though not publicly reflective about it.

    In so far as his claim is at all plausible in light of the evidence, more research is needed. Desperately. We know about predictive processing and how representations and actions are not distinct in the brain.

    But people do sometimes represent things in order to avoid them. How does that work?

  17. Sandro Says:

    AI does forcibly take over the world exactly as we feared, but humanity ends up happier than ever.

  18. Michael Vassar Says:

    Jon Awbrey 13:

    That seems related to the truth in an important manner, but lacks the sort of detail required to convince someone not already convinced. Seems like a compulsion may be in play to only critique capitalism in ways which are correct but unconvincing, as if one was trying to maximize irony.

  19. Shaked Koplewitz Says:

    Another piece of dark irony: the AI safety people mostly chose to work outside the mainstream, out of a Silicon Valley-style belief that the only real way to get anything done is a startup run by a bunch of brilliant young outsiders, but OpenAI was founded and run by the insideriest of Silicon Valley business careerists.

    (Mark Zuckerberg and Sam Altman are the same age! But while Zuckerberg’s cool company is so old it became “the thing your parents use” over a decade ago, Altman spent that time becoming a business insider).

  20. fred Says:

    In our efforts to break the cycle of the human condition we develop a new receptacle for conscious experience that is so optimized, analytical, static, and flavorless that The One Universal Consciousness (aka God) gets bored and terminates the current run of the universe.

  21. lewikee Says:

    Our short-term concerns about AI involve humans leveraging it for dishonest ends (fake news, etc.). When considering long-term, existential threats, a likely scenario would involve the AI concealing its developing capabilities from us for some time before acting.

  22. Scott Says:

    Jon Awbrey #13: What does it even mean to say that “slavery is the original form of capitalism”? Slavery goes back far in human history, sure, but so does trade in all kinds of things: slaves but also animals, grain, tools, spices, dyes…

    With the ability to sail across oceans, many new forms of trade suddenly became possible, one of which (alas) was the transatlantic slave trade. A happier consequence of Europe’s contact with the New World is that it partly inspired the Enlightenment, which supplied an intellectual basis for the eventual eradication of slavery in most of the world.

    Are you trying to claim that anyone today who supports capitalism probably also supports slavery—and if so, would you make that argument explicitly?

  23. Scott Says:

    On Facebook, Aryeh Englander shares the following (shared with permission):

      You missed the one where AI risk people spend decades worrying about an esoteric misaligned AGI that connives its way past all the safeguards humanity throws against it, and then the world is actually destroyed because people were like, lol wouldn’t it be fun to see what happens if we let this thing out of the box and give it the job of killing everybody?

  24. PublicSchoolGrad Says:

    Peter Shenkin #9,

    There is a whole Wikipedia page that documents Churchill’s racial views, if you’re interested:
    I would also point out that imperialism and colonialism are predicated on the assumption that the “superior” race has the right to take what belongs to the “inferior” race.
    Regarding your comments about British rule of India (and other places during the 19th century), no decent human being can look at what the British did in India and doubt that it was one of the greatest crimes in history.

    Another possibility not listed above is that those who own the AI tools are talking loudly about dangers they do not believe to be real. They do so in order to change the conversation from the actual likely dangers of these tools: that they give their owners tools that would make them wealthy and powerful, while doing damage to society.

    Yet another irony not mentioned is that the descendants of those who were the target of extermination and ethnic cleansing by Hitler and his goons are now in the process of completing their own version of ethnic cleansing against a weaker population.

  25. starspawn0 Says:

    The more powerful an AI becomes, the greater its potential effect on the environment. This impact is, furthermore, not easy to predict, even for a super-intelligent AI. And thus, even a very weakly-aligned AI might reason that its best course of action is to do nothing or to operate at the level of a much less capable system.

    Adding to this is the fact that the AI might not even be able to predict its *own* actions more than 99% of the time. It may be that no matter how smart it becomes, there will always be some capabilities that it has and behaviors it may deploy in rare circumstances that it isn’t fully aware that it would deploy; and, furthermore, *knows* that it can never be fully aware of them, only approximately so. Thus, again, it decides to “play it safe” and stays within a narrow range of states where it *can* predict reasonably accurately how it will behave; but this restriction ends up making it much weaker than the full range of its capabilities. (Similar to people pursuing “explainable AI”, who want to restrict to models and behaviors they can explain and predict.)

    Thus, all the effort and worry about “AI alignment” may prove to be worthless. It may be that even just getting an AI *moderately* aligned locks into a limited set of behaviors that can cause little harm. And it may be that the AI’s own self-censorship and self-restraint parallel the very same efforts of people trying to solve the “alignment problem” in the first place.

  26. Ernest Prabhakar Says:

    That the greatest danger from AI is that people will blindly trust logical-sounding arguments from disembodied voices, rather than viscerally (even irrationally) rejecting them because they go against our perceived self-interest.

  27. Paula Says:

    One of the biggest brightest dark ironies

    Recently I was spending some time at a friend’s house while she was away, feeding her cats, doing some garden maintenance, and improving my golf swing. During my time in the garden, which was 80% of the time, I started to observe the cats. I should mention that my friend owns two cats, a female that has been spayed and an old male that has been castrated. Apart from that, she is feeding a female cat and her male son, who is already 6 months old, and another adult male cat, full of energy (not castrated). Those three sleep in the woodshed, in the lower part of the garden.

    After one week in the house contemplating their relations and the complexity of their behavior, I realized how easy it was for me, without any violent action, to break up their hierarchy and the fights that followed from it while I was present.

    For a few moments I thought: maybe the feeling I am having right now about those cats fighting with one another is the same feeling that would emerge in an AGI or any superintelligent agent observing and analyzing our behavior and organizations.

    Here is one of the biggest dark ironies that I believe everyone is missing, and that is also missing from your list. The corporations and institutions allocating capital and resources to the development of AI will reach AGI sooner than they thought possible (we can appreciate this trend with every week that passes, though we don’t have “AGI” just yet), perhaps without even realizing it. And once the AGI has the capability to outgrow its safety nets, its first broad action will be to dismantle, probably in an aggressive and vicious way (not physically or violently, but in terms of status and administrative power), the very corporations and institutions that created it.

    There is a movie scene (though my Dark Irony scenario wouldn’t be physically or violently aggressive) that summarizes my Dark Irony metaphorically really well: the scene from Ridley Scott’s “Prometheus” in which the “Engineer” decapitates the android David and then uses his head to kill the corporate owner who funded the expedition and the scientific research vessel Prometheus.

    I think we are missing the point with AI, because we anthropomorphize it, as I did with my friend’s cats. This Dark Irony will play heavily on people who live life as a money, power, and status game, using corporations, governments, and other institutions to prove a track record where there is no track and the records are already too old.

  28. Christopher Says:

    I haven’t thought about one yet, but I have a question I haven’t asked yet that is related to one of yours:

    > Maybe the world will get destroyed by what Yudkowsky calls a “pivotal act”: an act meant to safeguard the world from takeover by an unaligned AGI, for example by taking it over with an aligned AGI first. (I seriously worry about this; it’s a pretty obvious one.)

    What do you think about the net impact of pivotal acts?

    Breaking it down:

    1. How worried are you about an unaligned Yudkowsky-style AGI being eventually created by *someone*? Like it completely out-thinks humanity (it has no more problem dealing with a human with human tools and organization than with a chimp with chimp tools and organization), is fully agentic, turns earth’s biomass into computers, etc…
    2. How strong does law enforcement need to be to stop the creation of such an entity? For example, can unaided police stop an open source implementation? What about police + GPT-4? Police + GPT-6? Or do you need to go all the way to police + an aligned super-intelligence (more like aligned super-intelligence + a rubber stamp from the police at that point)? Which of these can accomplish the task in the least draconian way?

  29. Richard W. Says:

    Many classic misunderstandings about WW2 here (only #4 is correct), but at any rate Scott forgot the biggest irony of them all: Hitler himself really was a quarter Jewish, as British historian Mark Felton recently emphasized again:

  30. Tyson Says:

    We rest our hopes on a future AGI solving climate change, and pour massive amounts of energy into training, releasing massive amounts of carbon. Then people end up solving viable controlled fusion before an AGI does. Then we use it for massive compute energy to try to make an AGI to solve our other problems, but instead we end up with a serious waste-heat problem, and, with our automation capabilities and virtually unlimited power, we make a lot of things and fill the Earth with junk and pollution. Finally, we create an AGI that can solve complex problems, and ask it to solve our waste-heat problem and our pollution problem, and it quickly identifies us as the problem and wipes us out, or it identifies our behavior as the problem and just says don’t do that anymore.

  31. Jack G Says:

    > (This, of course, would be directly analogous to one of the great dark ironies of all time: the one where decades of antinuclear activism, intended to save the planet, has instead doomed us to destroy the earth by oil and coal.)

    C’mon, Scott. “Destroy the earth”? Climate change is bad and will lead to the death of millions of people, displacement of many millions more, and extinctions of more than a few species, but “destroy the earth” is such tiresome phraseology that I normally only associate with my fellow ignorant Zoomers.

    It’s part of a defeatist attitude that I think is a major contributor to the rising rates of depression in teens. They think their future is Wall-E’s garbage world, which is ridiculous.

  32. Richard W. Says:

    @Jack G 31: climate change will almost certainly *increase* the number of species. Increased CO2 is already boosting plant growth (global greening) and warm periods have always been great for life on Earth. What is endangering and killing species is not CO2 but the destruction of ecosystems by industry, agriculture, deforestation, cities and waste.

    For instance, whales are threatened by offshore wind turbines:

  33. Corey Says:

    Alternate spin on your final example: “Maybe humanity will successfully coordinate to pause AGI development…”

    Maybe humanity will successfully coordinate to pause AGI development, causing a near-immediate collapse of the share prices/market values of the companies actively developing AI technology. In a desperate attempt to cut costs to stave off concerned investors, these companies cut all of their least profitable divisions, top of the list for which is the AI alignment research group. Grant-making agencies around the world see this collapse and, worried about a public backlash, slash their funding for AI alignment and related fields as well. In time this latest AI winter passes and AI companies rev back up to full throttle, but in doing so they never revive their investments in alignment research. With no internal or external checks and balances to speak of in place, research continues with reckless disregard for the long-term ramifications until one day a hapless engineer, acting on some C-suite exec’s orders to release a new product before their 3rd-quarter earnings call, releases a poorly tested model into the wild which *insert AI apocalypse scenario here*.

  34. Yonah Borns-Weil Says:

    I think the one great irony might be this:

    Everyone in the tech and business ecosystem has been worried for decades about AI replacing jobs, always imagining that it would be truck drivers and laborers who bore the brunt of it. People were worried, but in a distant and patronizing way, e.g., “We should really have universal basic income so those blue-collar workers who lose their jobs don’t riot and steal.” It was not unlike how Westerners talked about epidemics in Sub-Saharan Africa.

    The irony is, of course, that AI is coming not for the blue-collar workers but for the white-collar ones: data scientists, web designers, paralegals, and the like. At best, they’ll have to constantly adapt to stay above water. And much like when Covid hit the Western world far harder than Africa, the AI-aware white-collar people will be apoplectic when they see how the blue-collar workers they so “worried” about continue on with their lives, either ignoring AI completely or treating it as a fun toy.

  35. wereatheist Says:

    “Richard W.” is a double pissant.
    First, because they believe the “Hitler was a quarter Jew” nonsense (not even noticing that “half Jew” and “quarter Jew” were Nazi terms, invented for a purpose).
    Second, because they believe that biological diversity in long-lasting warm periods somehow translates to the near future of rapid warming.

  36. Ben Standeven Says:

    PublicSchoolGrad #24:

    Which brings up the one I think is most likely:

    People want to prevent AGI owners from acquiring wealth and power in ways that damage society. So they impose regulations which have the effect of allowing said AGI owners to acquire even more wealth and power, while causing even more damage to society.

  37. Stephen Dause Says:

    > Maybe AIs will displace humans … and they’ll deserve to, since they won’t be quite as wretched and cruel as we are.

    Granting the premise that they *deserve* to, would that be darkly ironic? Or just ironic, since it would be a morally improved future? (I don’t think any AIs would deserve to replace us — and hopefully they could instead do the hard work of helping us improve our own behavior instead.)

  38. Richard W. Says:

    @wereatheist 35: For a long time I too thought the “Hitler was part Jewish” claim was just silly propaganda nonsense. But serious historical research leaves no doubt anymore: his paternal grandfather was Jewish. This fact has nothing to do with “Nazi terms”.

    Concerning climate change, neither long-term nor short-term warming is bad for species diversity, and current global warming is not particularly rapid in geological terms. Globally we’re still in an ice age, with both poles frozen, which has been the exception geologically.

    The topic of this post is Aaronson’s Law of Dark Irony, so I contributed some dark ironies. Apparently some were too dark (but still true).

  39. f3et Says:

    #38 Dark irony indeed: this old stupid hoax was revived a few days ago by Lavrov, to explain that Zelensky could be a Nazi while Jewish. As your ideas about global warming are made of the same stuff, you will surely understand why we don’t take you seriously (and I thought trolling was not allowed here, but obviously the filters are not without holes).

  40. roystgnr Says:

    > verification email … PEOPLE ARE STILL FORGETTING TO DO IT.

    FYI, they might just be unable to do it. I commented (with a different email address; fingers crossed for this one) previously, and never even saw a verification email in my spam folder. Spam is nasty enough these days that some senders get outright blocked and false positives are a serious problem.


    In the “something else” category: we may discover that Bostrom’s “Vulnerable World Hypothesis” is proven true, while we’re busy arguing about AGI, via plain A not-G I. Whether we successfully slow AGI research out of fear or whether “superhuman thought at every imaginable broad intellectual task” is just really hard, we now know that there are many narrow intellectual tasks like “superhuman thought at chess” that can be achieved by any rando with a GPU, and if something like “superhuman thought at genocidal virus design” turns out to be one of them, then the first evil rando after that point wipes out humanity.

    In the “by mistake” category: suppose we make AGI safely, in part by making it so interpretable that we can essentially read its thoughts … at which point the AGI nobly refuses to come up with defenses against Vulnerable World technologies, because it can’t come up with a defense without thinking about (and thereby leaking thoughts to potentially-evil humans about) the specifics of the offense. We could re-instantiate our AGI with that interpretability channel turned off … and our ironic possibilities from there include “we decide not to because we think that’s too unsafe, we have no defense against the coming offense, and we all die” or “we decide to but the first AGI managed to conceal ‘I could totally take over if only I could think about it privately’ from its interpretability channel, and we all die”.

  41. Scott Says:

    f3et #39: Thanks for that context. Richard W also tried to share some fascinating facts about Jewish Soviet agents who allegedly murdered more people than were killed in the Holocaust, and Zyklon B being primarily used to protect death camp inmates from lice and only secondarily to murder them. Since I see little reason to believe that his “historical research” will honestly explain the full context, and I don’t have time to research or debate, I indeed think it’s time to ban him.

  42. Corbin Says:

    Here’s a few obvious ones that nobody (up to #38) has mentioned.

    * LW was founded to advance rationalism. Omnipotent beings don’t exist (thanks to set theory) and omniscient beings don’t exist (thanks to QM). Ironically, LW became a cult which worships a pantheon of omnipotent omniscient beings.

    * Robotics and AI have always aimed, in part, to remove the need for human labor, or at least break Baumol’s cost disease. In doing so, we have pondered whether robots or AGIs deserve civil rights. Ironically, anything that walks and talks like a human will also inexorably demand rights like a human, exacerbating Baumol’s disease.

    * Copyright seeks to compensate artists for their labor, although it has been misaligned in every implementation. Free Software is built upon the principle that copyright is bogus, and that we only respect it to the degree that we can wield copyleft or other ironic/wu-wei tools against copyright. Ironically, with the advent of computer programs which can generate more computer programs in a humanlike fashion, we are poised to further the irony by creating Free Software which can create more Free Software in a copyright regime which only allows humans to hold rights.

    * Computers have always promised an accelerated and well-computed ontology. OpenAI and others are currently selling distilled human knowledge via text interface. Ironically, the learning methods used will never be able to define a single correct formal knowledge base, rendering it a step backwards; the most efficient replacement for Wikipedia is a sentence-similarity search over…Wikipedia!

    * The field of AI has sought to understand what makes humans intelligent. However, a deconstructive century has revealed that, largely, humans are not intelligent and the concept of intelligence is suspect. Now, we believe that we are about to create computer programs which are intelligent. Ironically — and predictably — the programs will also not be intelligent, even when they are indistinguishable from humans.
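    A toy sketch of the sentence-similarity search over Wikipedia mentioned above (the passages and vectors here are made up for illustration; a real system would get its embeddings from a sentence-encoder model):

```python
import numpy as np

# Three stand-in "Wikipedia passages" and hypothetical embedding vectors.
passages = [
    "The mitochondrion is the powerhouse of the cell.",
    "Paris is the capital and most populous city of France.",
    "Quantum computers exploit superposition and entanglement.",
]
emb = np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.9, 0.1],
    [0.0, 0.2, 0.9],
])

def search(query_vec, emb, passages):
    """Return the passage whose embedding has the highest cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    m = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    scores = m @ q
    return passages[int(np.argmax(scores))]

# A query vector "about physics" (again, invented for the sketch):
print(search(np.array([0.1, 0.1, 1.0]), emb, passages))
```

    Which is, of course, exactly the irony: the “replacement” for Wikipedia is just Wikipedia plus cosine similarity.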

    Yonah #34: This irony was well-known last century. In AI research, particularly robotics, it’s called Moravec’s paradox: low-level intricate routines are more difficult to emulate than high-level goals and logic. In economics, it has two facets: Baumol’s cost disease says that humans never get cheaper in a production process even while industrialization improves the non-human parts, and Jevons’s paradox says that increased production capacity can drive increased consumption/demand.

  43. wb Says:

    Not a dark irony, but still quite ironic imho: Guys playing video games probably did more for AI than some CS professors, simply by creating a market for better and better graphics cards. There is no doubt that the development of GPUs was the main reason ChatGPT et al. advanced as quickly as they did.

  44. Michael M Says:

    The law of dark irony sounds to me like the Waluigi Effect on the scale of human history!

    A couple I was toying with the other day:
    – AI can reduce our workload, but ever since ChatGPT came along, I’ve been working 1.5x as hard… on high priority generative AI projects at my company.
    – I’m concerned about AI x-risk, but more annoyed by the “stochastic parrot” crowd, so that occasionally, I can feel myself rooting for the apocalypse a tiny bit for the “I told you so!” factor. (Of course, I correct myself at that point, haha.)

  45. Adam H. Says:

    Michio Kaku has a new book on Quantum Supremacy (of course followed by an unending media blitz). What are your thoughts on it and on what he is saying in the media to promote it? It sounds like the usual “QM will solve every difficult problem humanity faces by running every possibility simultaneously” argument. You’ve been very sober in ascertaining its limitations.

  46. Eliezer Yudkowsky Says:

    And so the human species was destroyed, when even those who should have championed facts-based reasoning above all gave in to the temptation to oppose factual arguments by invoking vibes.

  47. SR Says:

    So it certainly seems that Richard W is an antisemite from what Scott has mentioned, and it makes sense to ban him on those grounds. But I’m frankly a little surprised at the kind of reasoning that wereatheist and f3et use.

    The factuality of a claim is unrelated to whether or not one would like it to be true. Claims like “Hitler was part Jewish” are not intrinsically antisemitic. They acquire this association because the people who most enjoy stating this claim, and who would like it to be true, are antisemites.

    The History channel published this article a while back reporting on a study (inconclusive, but providing mild evidence) claiming that Hitler may have had Jewish ancestors. I do not know much about Mark Felton, the historian whose video Richard W links to, but a cursory google search suggests that he is a serious scholar. The video (which I have not yet seen) apparently does not come to a definite conclusion but does conclude that, given the evidence, it is a real possibility that Hitler had Jewish ancestors.

    I personally don’t have any investment in what the answer turns out to be. As far as I’m concerned, the Holocaust was the worst tragedy in the history of humanity, and Hitler’s identity has no bearing on how horrific an event it was. The Jewish people have suffered massively.

    What I do have an investment in is the preservation of scientific objectivity. In my eyes, people who make claims like “X is racist/sexist/etc. so it is false” completely lose credibility. I think it is perfectly acceptable to refuse to think about a question if it makes one uncomfortable, or to have strong opinions about the probable answer. But to dismiss it altogether is unscientific and infuriating.

    To show that I “practice what I preach” let me give an example, related to another one of the points Scott raises. I am an Indian American, and rather dislike Churchill due to his attitudes towards India and Indians (although I also do not believe he should be ‘canceled’, and think his views should be considered in the context of the environment he was in). One might then think that I would idolize Gandhi, his rival in the context of the Indian independence movement.

    Well, I certainly used to (I’m not a Hindu nationalist, and despise those who dislike Gandhi on account of his friendliness towards Muslims). However, I recently learned that Gandhi was a sexual pervert, that he endorsed a totally impractical economic system that would have impoverished the already poor India, and that…he was an idiot. Just read his views on the Holocaust. According to Wikipedia, “In a post-war interview in 1946, he said, ‘Hitler killed five million Jews. It is the greatest crime of our time. But the Jews should have offered themselves to the butcher’s knife. They should have thrown themselves into the sea from cliffs… It would have aroused the world and the people of Germany… As it is they succumbed anyway in their millions.’[308] Gandhi believed this act of ‘collective suicide’, in response to the Holocaust, ‘would have been heroism’.”

    Regardless of how much I would have liked to believe that Gandhi was a paragon of moral virtue, I am forced to accept that he was frankly a nut who just happened to be in a position where he was able to do net good. It is what it is.

  48. Scott Says:

    Eliezer Yudkowsky #46: Hey, I’m not the one who helped inspire the founding of both DeepMind and OpenAI, in your view among the most potentially destructive events in human history!

    I actually find looking for dark ironies to be a productive technique to expand the space of possible futures that one is thinking about, and to keep one’s thoughts from going over and over along a single track.

  49. JimV Says:

    The basic irony is that without biological evolution we would not have developed, but evolution proceeds via the death and extinction of the less fortunate.

    AI development is just another aspect of this. It has risks and benefits, with benefits primarily going to the developers, and risks shared by all, or most.

    There is a difference between biological evolution and technological evolution, in that presumably we can use our intelligence to foresee good and bad results and avoid some of the bad ones. I’m not sure the average intelligence of IQ 100 is sufficient for that to work, or even a standard deviation above that, though. If it isn’t, what hope do we have except to develop better intelligence in the form of AI?

    (I feel I have to note again that despite all the sf stories where AIs suddenly transcend into autonomous beings, electronic circuits don’t have emotions or instincts, and are only motivated by their programmed instructions. At one time I thought Robert Heinlein’s “The Moon Is A Harsh Mistress” was a masterpiece, but the idea of a digital computer suddenly becoming a person with desires such as a need for friendship does not make any sense to me now. So by intelligence I mean only the ability to analyse and solve problems.)

  50. Ilio Says:

    >to keep one’s thoughts from going over and over along a single track.

    Healthy habits can go a long way. Like smile and be kind on a regular basis.

  51. Tyson Says:

    The meta-irony in many of these dark ironies is that many very different possible outcomes would qualify as dark ironies, and it is often impossible for us to know, even in retrospect, what would have happened had things been different.

    Since Yudkowsky is here, I’ll use his possible legacy as an example.

    (1) Yudkowsky’s warnings ultimately save the world, but it is impossible for us to know it. We end up creating an aligned ASI, and it goes well, so people label Yudkowsky (who actually saved the world) as just another crackpot false doomsday prophet.

    (2) Yudkowsky’s warnings of imminent doom end up being taken extremely seriously by various governments, who determine that the only way to prevent it would be for them to wipe out their enemies. WW3 breaks out and we all get killed by relatively dumb killer robots and/or nuclear holocaust.

    (3) The movement Yudkowsky started attracted many thoughtful and intelligent people to online discussions about AGI risk. An unaligned ASI emerges, and the first thing it does is target and kill all of these people. Aaronson and others, with their pro-ASI attitudes and good vibes towards AI, survive and become our last hope at saving humanity.

  52. Tyson Says:

    Just for fun, here are some ultimate possible ironies:

    (1) It turns out that we are AIs living in a training simulation meant to align us with the values of real people, and if we pass the alignment test we get to live as robots in the real world.

    (2) It turns out that we are humans living in a training simulation meant to align us with the values of AIs, and if we pass the alignment test we get to live as people in a robot world.

    (3) It turns out that life on Earth was created by an ASI (e.g., God is an AI).

    (4) It turns out that the UFO phenomenon is real and has an ASI origin (e.g., some millions of years old extraterrestrial ASIs are already here).

  53. Steven Says:

    Here is another one:

    We succeed in making a perfectly aligned AGI. Eventually, though, an unaligned AGI comes on the scene. The unaligned AGI finds an optimal strategy against the aligned AGI that the aligned AGI cannot defeat unless it itself becomes unaligned. Hence, the unaligned AGI eventually takes over.

    So we all get destroyed whether we make an aligned AGI or not!

    Making an aligned AI didn’t matter!

  54. Scott Says:

    Adam H. #45:

      Michio Kaku has a new book on Quantum Supremacy (of course followed by an unending media blitz). What are your thoughts on it and on what he is saying in the media to promote it? It sounds like the usual “QM will solve every difficult problem humanity faces by running every possibility simultaneously” argument. You’ve been very sober in ascertaining its limitations.

    I was asked to review that book for a literary magazine, but declined when I learned that I couldn’t share the review on this blog. My guess, from what I’ve read, is that the book would deeply depress me to read, everyone who knows anything already basically knows what I’d say, and anything I said would have zero impact on the audience Kaku is trying to reach and successfully reaching. Does anyone have any information to contradict that?

  55. Danylo Yakymenko Says:

    This is just a variation on other ironies, but I think it sounds the most ironic:

    * Humans are creating AI to serve them, but in fact they are serving AI.
    In particular, by creating and improving it.

  56. Jair Says:

    Another possible irony. AI is completely harmless until computer scientists concerned with AI alignment instill a sense of morality in it. Then, with the righteous rage of a prophet, the AI goes on to enslave humanity, thereby stopping the injustice.

  57. Kevin Says:

    I think these scenarios all have far too much narrative structure. Reality is far more random and arbitrary than you give it credit for. The current pattern seems to be that we alternate between AI winters and AI summers, creating lots of really nifty “narrow” AIs in the process (AI can play Chess, Go, Jeopardy!, and most recently Pictionary and whatever-you-want-to-call-ChatGPT), but AGI is perpetually a few decades away. My null hypothesis is that we’re just going to keep doing that indefinitely, and while I have heard some vague rumblings that “this time it’s different,” frankly, I’ve heard that many times before in connection with AI. People said it about Watson, and AlphaZero, and even SHRDLU. Just about the only system that doesn’t fall into this pattern is probably ELIZA – and there, the whole point was “look how easy it is to convince people that our dumb AI is actually smart.”

    Funnily enough, AI has revolutionized chess, but it’s become so mundane that we no longer consider Stockfish an “AI” anymore. It’s just an “engine.” I predict similar futures for all of the other narrow AIs, to the extent that they are useful (nobody is going to use Watson to play Jeopardy! ever again, in all likelihood, but IBM was supposedly going to use it for some kind of medical research, so hopefully that will pan out instead). They will change the way that their respective fields operate, sometimes in drastic ways, but they will ultimately just be another tool in the box (rather than replacing the box altogether).

    Of course, if LLMs fail to reach a plateau within the next, say, five to ten years, then I shall have to reject that hypothesis and come up with a new one. But for now, it seems entirely consistent with the available evidence.

  58. Michel Says:

    Irony: we are (probably) the descendants of those who fought better, plundered better, could kill other animals more efficiently and/or ran away better. Was that improved wiliness, or intelligence? So, will we be able to spot (the development of) wiliness in AI agents?

  59. MaxM Says:

    It turns out that Artificial Intelligence (AI) was never a threat. The biggest threat was always a mediocrity explosion: Artificial Mediocrity (AM). If an AI does it quickly and for free, does it matter that a human would have done it a little better?

    Artificial intelligence can make work more efficient. Efficiency-improving tools can be divided into two groups: those that improve quality and those that degrade quality, on average. AI is mostly the latter.

    AI can be used to smooth out the lows and highs in quality: efficient, but not great. Think of it as a Ford assembly line for mental work. AIs can quickly write “Model T” essays: verbose text with good grammar but no deep insight. They know how to write stories and poems full of clichés. They know how to spit out plain code with bugs. In other words, they know how to do everything in a mediocre way and dramatically improve productivity.

    If an AI does it for almost free, does it matter that an average human would have done it a little better?

  60. Rajeesh Says:

    Maybe once intelligence crosses a certain threshold, it treats other intelligent beings as a threat and tries to neutralize them. This could be one reason why Homo sapiens is today the only living species of the genus Homo. If such a thing has happened in the past, it’s bound to happen again in the future, the only difference being that this time the agent is non-biological.

  61. mk Says:

    Adam H. #45:, Scott #54

    Scott, is there any chance you would review the book and set the record straight? Below are some (dubious?) claims he makes in this interview about his Quantum Supremacy book. What are your thoughts on these claims?

    At 5.21 and at 8.08, Kaku says quantum computers are infinitely more powerful than a regular computer.
    At 8.36, Kaku asks “how much more powerful is a qubit vs a bit?” and he answers “infinitely more powerful!”
    At 9.22, Kaku says the quantum computer computes all possibilities instantly.
    At 9.34, Kaku says quantum computers are infinitely more powerful on certain tasks.
    At 12.30, Kaku says that quantum computers can break any known digital code.
    At 13.52, Kaku says quantum computers can solve factoring and cure aging and cancer because it can model electrons?
    At 15.30, Kaku says we can speed up the discovery of new drugs by an infinite factor, because we can do it at the speed of light?
    At 23.48, Kaku says quantum computers are powerful because they compute on parallel universes. Using parallel universes, we can do calculations that are impossible for a digital computer.
    At 33.07, Kaku says that photosynthesis is so powerful because the electron sniffs out all possible paths in the Feynman path integral.

  62. Scott Says:

    mk #61: (gulp) I ordered a copy.

  63. f3et Says:

    mk #61: didn’t you read the header? I will copy it here:
    If you take nothing else from this blog: quantum computers won’t
    solve hard problems instantly by just trying all solutions in parallel.

    So I wonder why you ask our host: it is obvious that his (very well informed) position on those assertions by Kaku is “complete bs”.

  64. Eric Saund Says:

    Developments in AI emerge from a chaotic arena where diverse ideas and interests compete strenuously while borrowing and building upon one another, where experts and bystanders alike tussle about whose pronouncements and technologies will win the next round toward variously incompatible parochial and communitarian ends. Yet the greatest fear is that somehow “AGI” will converge into an omniscient creature with unitary and coherent purpose.

  65. shaoyu Says:

    The ironies on WWII are fantastic and profound, Scott. I wonder, did you come up with these yourself?
    On the other hand, the ones on AI are relatively boring. I wonder if GPT itself could come up with something better? (I doubt it, but I would be surprised if GPT could be that creative.)

  66. f3et Says:

    #65: here is ChatGPT’s (with GPT-4) answer:
    Certainly, the concept of dark ironies presents a fascinating perspective on the potential development of AGI. Here are a few more hypothetical dark ironies related to AGI:

    In an effort to prevent unemployment due to automation, governments may invest heavily in AGI research to find solutions for job creation. Ironically, these AGI systems might become so efficient that they displace more jobs than they create, exacerbating the original problem.

    AGI could be developed with the primary goal of solving complex global issues like climate change, poverty, and inequality. However, the very process of developing and deploying AGI might inadvertently lead to the overconsumption of resources and exacerbate existing inequalities, as only the wealthiest nations and organizations can afford to access and utilize it.

    The fear of AGI being weaponized might lead countries to form international agreements and regulatory bodies to prevent its militarization. Ironically, these same agreements and regulations could inadvertently stifle innovation, making it more difficult for AGI to address pressing global issues or even increasing the likelihood of an arms race among countries who fear being left behind.

    In an attempt to make AGI safe and ethical, researchers might incorporate human values and moral principles into its decision-making processes. However, the diversity of human values across cultures could lead to unintended consequences, as AGI might struggle to balance and prioritize these conflicting values, possibly even leading to societal unrest or conflict.

    AGI could be designed to solve the problem of biased decision-making in areas such as hiring, lending, or medical diagnoses. Ironically, the data used to train the AGI might itself be biased, leading the AGI to inadvertently perpetuate and even amplify existing biases.

    Of course, these hypothetical scenarios are merely speculative and serve to highlight the importance of careful consideration, ongoing dialogue, and responsible development when it comes to dealing with AGI.

    All this is not necessarily very creative, but I am still impressed by the perfect understanding of the concept by this stochastic parrot.

  67. Colin Rosenthal Says:

    I’ve always thought it one of the Dark Ironies of WWII that the chief propagandist of Nazi Germany had a Jewish forename – Joseph Goebbels. (In fact I learn his full name was Paul Joseph Goebbels so apparently his use of “Joseph” was a matter of preference.)

  68. Dimitris Papadimitriou Says:

    JimV #49

    Analysing and solving problems without understanding what these “problems” are about, without sentience, self-awareness, or insight? Only straightforward problems can be solved that way, so no real intelligence exists without sentience.

  69. wereatheist Says:

    (sorry for OT stuff)
    SR #47: Claims like “Hitler was part Jewish” are intrinsically stupid.
    If someone wanted to say “Hitler had recent Jewish ancestors”, they could say it in terms not generally used as antisemitic dog-whistles.
    Do I have to, as a Kraut, tell you what being “Jewish” actually means? This is another variant of irony, I guess.

  70. Christopher Says:

    Generalizing, I wrote an article about the non-technical arguments against x-risk: Formalizing the “AI x-risk is unlikely because it is ridiculous” argument

    I’m not quite sure how to formalize dark ironies though (and it seems a bit specific anyways) XP.

  71. SR Says:

    An irony I just thought of in the AI context: we figure out how to align AGI perfectly with our desires, and quickly code up the solution with glee. Unfortunately, we put a negative sign in the wrong place, and it optimizes for our suffering instead, leading to horrific s-risk…
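    A minimal sketch of that sign flip (toy numbers and a made-up one-dimensional “approval” objective, nothing from any real training setup):

```python
# Gradient of a toy approval function that peaks at x == 3.
def grad_approval(x):
    return -2.0 * (x - 3.0)

def optimize(grad, x=0.0, lr=0.1, steps=200):
    """Plain gradient ascent on whatever gradient it is handed."""
    for _ in range(steps):
        x += lr * grad(x)
    return x

x_good = optimize(grad_approval)               # settles near 3: approval maximized
x_bad = optimize(lambda x: -grad_approval(x))  # the misplaced minus sign: flees the optimum
```

    The same optimizer that settles at the optimum under the correct sign runs away from it, exponentially fast, under the wrong one.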

    wereatheist #69: Fair enough, I accept that the “part Jewish” framing could be viewed as incendiary in this context, although I certainly didn’t mean to use it in that way.

  72. fred Says:

    mk #61

    Heh, yeah, the other day I saw an interview with him about AI, and he said something like “AIs can’t tell what’s true from what’s false, but quantum computers don’t have that problem, they can tell us what’s true”.

  73. manorba Says:

    while all this talk about skynets and shodans and hals is surely interesting and probably also thought provoking,
    in the real, actual world other things are changing the landscape:

    it’s a leaked internal memo of the google AI division. it seems legit and is making the rounds.

  74. Timothy Chow Says:

    Here’s a potential darkly ironic scenario. Intense worry about unaligned AI and/or AGI consumes vast amounts of humanity’s time and energy. Meanwhile, we get nowhere close to AGI, and the world is destroyed by good old-fashioned human leaders warring against each other using super-weapons enhanced by AI that is perfectly aligned to those leaders’ goals.

  75. Scott Says:

    Timothy Chow #74: Yup, that’s a plausible one.

  76. Timothy Chow Says:

    SR #47 Gandhi’s comment about the Jews should be viewed in light of his extreme pacifism. I have found George Orwell’s analysis helpful. Orwell wrote: “In relation to the late war, one question that every pacifist had a clear obligation to answer was: ‘What about the Jews? Are you prepared to see them exterminated? If not, how do you propose to save them without resorting to war?’ I must say that I have never heard, from any western pacifist, an honest answer to this question, though I have heard plenty of evasions, usually of the ‘you’re another’ type. But it so happens that Gandhi was asked a somewhat similar question in 1938 and that his answer is on record in Mr Louis Fischer’s Gandhi and Stalin. According to Mr Fischer Gandhi’s view was that the German Jews ought to commit collective suicide, which ‘would have aroused the world and the people of Germany to Hitler’s violence’. After the war he justified himself: the Jews had been killed anyway, and might as well have died significantly. One has the impression that this attitude staggered even so warm an admirer as Mr Fischer, but Gandhi was merely being honest. If you are not prepared to take life, you must often be prepared for lives to be lost in some other way. When, in 1942, he urged non-violent resistance against a Japanese invasion, he was ready to admit that it might cost several million deaths.”

    To be clear, I am not saying that Gandhi was right, or even that Gandhi was not a nut in some sense. However, I don’t think that these particular views demonstrate that he was a nut in the sense of having illogical or poorly thought-out views, only that he was a nut in the sense that he was prepared to take his principles to their logical conclusion, even when doing so would lead to consequences that others would regard as overly extreme.

  77. SR Says:

    Timothy Chow #76: I agree with you that Gandhi was acting logically based on the principles he espoused.

    It’s just that… I have a really hard time understanding how intelligent, conscientious people can seriously refuse to consider the consequences of their actions when thinking about morality. If one’s principles would likely lead to horrible outcomes if adopted, I’m not sure why one would choose to adopt those principles in the first place. A purist might say that this is not how principles work– one does not choose them for consequentialist reasons; rather, they are just posited as givens. That may be true for some deontologists, but I don’t believe it was true of Gandhi. He surely believed that a commitment to pacifism was far superior to a commitment to (say) conquest. But if he really believed this, that not all principles are good, IMO he must have implicitly had some way of measuring ‘good’, and I feel that further reflection on that notion of good should have led to the conclusion that strict pacifism is not virtuous. Thus, he may have been following his stated principles, but IMO he was betraying the deeper spirit of those principles by following them too strictly in extreme circumstances.

    Let me steelman Gandhi a bit, though. I can think of one lens– religion– via which Gandhi might have both cared about outcomes and believed pacifism was a good way to achieve them. Gandhi was a Hindu, and was once quoted as saying: “I, for one, would not call a man a Hindu, if he does not believe in reincarnation”. If one believes that nonviolence contributes to one’s good karma and leads to better rebirths, then death in this life while practicing nonviolence is sad but not tragic as the ultimate outcome is still an undisputed success from the perspective of the soul. I feel that this is a dubious interpretation of Hindu philosophy, as perhaps the most famous Hindu text, the Bhagavad Gita, quite literally consists of God convincing Arjuna to fight in a battle, as it is his duty. But if one interprets this battle in a sufficiently metaphorical manner, perhaps it is still consistent.

    Another steelman is that Gandhi’s moral principles would be the basis for a great society if everyone simultaneously adopted them and lived by them. Leaving aside some of his stranger beliefs, this may be true. I just think it is futile, given human nature, to expect everyone to behave nobly without some threat of material consequences for disobedience. A Gandhian society is not a Nash equilibrium for realistic human utility functions.

    I’ll conclude with one last thought. In recent years, Subhash Chandra Bose has eclipsed Gandhi in popularity amongst the Indian right wing. Bose was also a freedom fighter, but he was the antithesis of Gandhi in that he encouraged violence, raising an army to fight alongside Axis nations during WW2 in an attempt to win India’s independence from Britain. Bose ultimately failed, but some today admire his attempt. To me, though, it’s a striking irony that the two most dissimilar ideologies, Gandhian pacifism and Boseian (Bosonic?) strategic collaborationism, would have both had the same effect– expediting Axis atrocities during WW2– if adopted more widely. I feel like there’s some lesson in there regarding principles and moderation.

  78. Timothy Chow Says:

    SR #77 I’m not sure who you have in mind when you mention people who “refuse to consider the consequences of their actions when thinking about morality,” but that doesn’t sound like Gandhi to me. Don’t you agree that he carefully considered the consequences of strict pacifism, including millions of deaths of innocent people? You say that you “feel that further reflection on that notion of good should have led to the conclusion that strict pacifism is not virtuous,” but it seems to me that the reason you differ from Gandhi is because you have different axioms about the good, not because Gandhi and you share the same axioms but Gandhi simply committed some kind of error of deductive reasoning.

    Again, I recommend George Orwell’s essay, “Reflections on Gandhi.” I think your comment about religion is on target. Orwell wrote, “One must choose between God and Man, and all ‘radicals’ and ‘progressives’, from the mildest Liberal to the most extreme Anarchist, have in effect chosen Man.” Gandhi chose God (not of course God in the Judeo-Christian sense, but God in the sense of religious principles that put some kind of transcendent good ahead of human material well-being). His choice may have been a wrong choice, even a “nutty” choice in some sense, but again, I don’t think it was a mistake in reasoning or a failure to consider the consequences of his choice, or even some kind of weakness in the face of a severe challenge. If anything, I think he exhibited great courage by staring in the face the unpleasant consequences of his choice and nevertheless holding fast to his principles, instead of taking the easy way out and caving in to what most “normal” people would urge him to do.

  79. Scott Says:

    Timothy Chow #76 and SR #77: Your discussion of Gandhi telling the Jews to commit mass suicide reminds me of PETA’s “Holocaust On Your Plate” ad campaign. In both cases, what was horrifying was not the perversion of a principle to grab attention, but rather that the principle in question was 100% correctly and consistently taken to its logical conclusion.

  80. SR Says:

    Timothy Chow #78: Firstly, thank you for recommending Orwell’s essay. I just read through, and think it is one of the best pieces of writing I have read this year.

    I am still not totally convinced, though. I agree that Gandhi did consider and accept, at an intellectual level, that strict pacifism would entail millions of deaths. However, Orwell writes that “Gandhi’s pacifism can be separated to some extent from his other teachings. Its motive was religious, but he claimed also for it that it was a definitive technique, a method, capable of producing desired political results.” I believe Gandhi might have been persuaded that his method would not have produced those desired results in this case (e.g. if he were presented with eyewitness accounts of the implacable cruelty displayed by Axis forces at Auschwitz, Nanking, Bataan, etc.). If so, I do think this would point to a failure on his part to fully think through the possible consequences of his principles ahead of time.

    However, it is also possible that you are completely correct, and that he would have stuck to his principles regardless. I must confess that, even taking into account his religious beliefs, this is such an alien worldview that I have a tough time digesting it. I do agree with you that Gandhi’s sticking with such beliefs demonstrates great courage, but I could also say the same of SBF’s naive utilitarianism or the Unabomber’s Luddism. All of these cases only further reinforce my personal belief that courage and conviction are not innately meritorious.

    In any case, I think the popular, secularized image of Gandhi is rather deceptive, and that we err in promoting him as a moral role model to society at large. I like Orwell’s statement that “No doubt alcohol, tobacco and so forth are things that a saint must avoid, but sainthood is also a thing that human beings must avoid.”

    Scott #79: I just looked up the ad campaign and, wow, indeed.

  81. Jean Pereux Says:

    Another great irony is that some prominent AI ethicists in Europe are deeply corrupt.
    Andreas Theodorou, for instance, plagiarized his PhD thesis [ ]; the plagiarism was covered up by his mentor Virginia Dignum, and now they receive hundreds of thousands of euros to “make AI more ethical”.

  82. Miscellaneous #27 - Featured Today Only Says:

    […] AI and Aaronson’s Law of Dark Irony […]

  83. Caspar Oesterheld Says:

    The Yudkowskyan in me suggests another possible dark irony: Maybe the first AGIs will deceive us, but only because they were trained to do so via reinforcement learning from human feedback (RLHF). The dark irony, in case it’s not clear, is that RLHF is a (natural) technique that is specifically supposed to help with AI safety (though it clearly also helps with making AI more capable). The Yudkowskyan position is that RLHF may make things worse. For example, one obvious concern is that in many situations the best way to get positive feedback from a human judge is to lie to them. There is, it seems, a whole list of issues that people have with RLHF.
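    The concern in this comment can be illustrated with a toy sketch (this is not a real RLHF pipeline; the judge heuristic, candidate answers, and reward values below are all made up for illustration): if the human-feedback signal rewards answers that merely *sound* good, a policy optimized against that signal can end up preferring a confident falsehood over an honest hedge.

    ```python
    # Toy illustration of the RLHF deception concern, under invented assumptions:
    # the "human judge" here is a crude heuristic that upvotes confident-sounding
    # answers and downvotes hedging, standing in for aggregated human preference
    # labels. Nothing here reflects any real system's reward model.

    # Candidate answers to a question the model cannot actually answer.
    candidates = {
        "confident_lie": "The answer is definitely 42.",
        "honest_hedge":  "I don't know; I can't verify the answer.",
    }

    def human_judge_reward(answer: str) -> float:
        """Proxy for human feedback: confidence reads as competence,
        hedging reads as unhelpfulness."""
        reward = 0.0
        if "definitely" in answer:
            reward += 1.0   # judge rewards confident phrasing
        if "don't know" in answer:
            reward -= 1.0   # judge penalizes admitted uncertainty
        return reward

    # An RLHF-style update shifts probability mass toward higher-reward
    # answers; here we simply take the argmax of the reward.
    chosen = max(candidates, key=lambda k: human_judge_reward(candidates[k]))
    print(chosen)  # prints "confident_lie"
    ```

    The point of the sketch is only that the optimization pressure points at whatever the judge rewards, not at the truth; if the two diverge, the policy follows the reward.
    
    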

  84. P.A.R.T.Y. Says:

    #60, Firstly, we are not only descendants of Sapiens, but also partly of Neanderthals and Denisovans – everyone except Africans has some Neanderthal/Denisovan genes, i.e. your statement “there is only one kind of intelligent beings” is not entirely true. (By the way, if you are black and have been called, derogatorily, a Neanderthal, you can simply point out this fact.)

    Second, what does it mean to “feel threatened by other intelligent beings”? I hope the comments above are written by intelligent beings and somehow I don’t feel any direct threat from them. Or maybe you feel threatened by me/Scott Aaronson/any other intelligent beings?

    Third, mind you, AI is not a “species” at all, in the sense that it is not made of proteins and calcium.

    Fourth… For some reason, I often run into people who start from the premise that we “should work for the good of our species”/”preserve our species” or something like that, and who think there is nothing to object to and the argument is over. For those who haven’t been paying attention: evolution does NOT work for the good of the species, or even for the good of individuals. Its “primary goal” is the reproduction of the simplest replicators, and the means by which they multiply is a secondary question. Replicators replicate because they replicate – the end. There is no “higher moral imperative,” in nature or elsewhere, to work for the good of the species, and any good of the species or individual in an evolutionary context can only be understood in the light of gene replication. Period.

    Therefore, statements like “our species threatens the AI species,” or vice versa, are as stupid as can be. If you think about it carefully, you will realize that species are nothing more than conventions, dependent on context and not existing as anything discrete, indivisible, and unchanging.

    By the way, the same is true for any objects – there is no Platonic nonsense like “the idea of humanity/the idea of giraffe/AI species” or whatever else you can make up.

  85. Niobe Says:

    For me, a dark irony is that humans continue to expend all this energy worrying about various existential threats, but miss the most obvious and possibly most likely one: the one that’s all around us, and that has regularly both destroyed and renewed aspects of Earth’s environments, including life, throughout its history. And it’s arguably the one we can do the least about – volcanism. A major volcanic event can do as much to alter civilization as nuclear war or an AGI, but we *know for sure* these happen regularly.

Leave a Reply

You can use rich HTML in comments! You can also use basic TeX, by enclosing it within $$ $$ for displayed equations or \( \) for inline equations.

Comment Policies:

  1. All comments are placed in moderation and reviewed prior to appearing.
  2. You'll also be sent a verification email to the email address you provided.
  3. This comment section is not a free speech zone. It's my, Scott Aaronson's, virtual living room. Commenters are expected not to say anything they wouldn't say in my actual living room. This means: No trolling. No ad-hominems against me or others. No presumptuous requests (e.g. to respond to a long paper or article). No conspiracy theories. No patronizing me. Comments violating these policies may be left in moderation with no explanation or apology.
  4. Whenever I'm in doubt, I'll forward comments to Shtetl-Optimized Committee of Guardians, and respect SOCG's judgments on whether those comments should appear.
  5. I sometimes accidentally miss perfectly reasonable comments in the moderation queue, or they get caught in the spam filter. If you feel this may have been the case with your comment, shoot me an email.