{"id":7174,"date":"2023-03-30T01:23:41","date_gmt":"2023-03-30T06:23:41","guid":{"rendered":"https:\/\/scottaaronson.blog\/?p=7174"},"modified":"2023-05-01T08:42:28","modified_gmt":"2023-05-01T13:42:28","slug":"if-ai-scaling-is-to-be-shut-down-let-it-be-for-a-coherent-reason","status":"publish","type":"post","link":"https:\/\/scottaaronson.blog\/?p=7174","title":{"rendered":"If AI scaling is to be shut down, let it be for a coherent reason"},"content":{"rendered":"\n<p>There&#8217;s now an <a href=\"https:\/\/futureoflife.org\/open-letter\/pause-giant-ai-experiments\/\">open letter<\/a> arguing that the world should impose a six-month moratorium on the further scaling of AI models such as GPT, by government fiat if necessary, to give AI safety and interpretability research a bit more time to catch up.  The letter is signed by many of my friends and colleagues, many who probably agree with each other about little else, over a thousand people including Elon Musk, Steve Wozniak, Andrew Yang, Jaan Tallinn, Stuart Russell, Max Tegmark, Yuval Noah Harari, Ernie Davis, Gary Marcus, and Yoshua Bengio. <\/p>\n\n\n\n<p>Meanwhile, Eliezer Yudkowsky <a href=\"https:\/\/time.com\/6266923\/ai-eliezer-yudkowsky-open-letter-not-enough\/\">published a piece in <em>TIME<\/em><\/a> arguing that the open letter doesn&#8217;t go nearly far enough, and that AI scaling needs to be shut down entirely until the AI alignment problem is solved&#8212;with the shutdown enforced by military strikes on GPU farms if needed, and treated as more important than preventing nuclear war.<\/p>\n\n\n\n<p>Readers, as they do, asked me to respond.  Alright, alright.  
While the open letter is presumably targeted at OpenAI more than any other entity, and while I\u2019ve been <a href=\"https:\/\/scottaaronson.blog\/?p=6484\">spending the year at OpenAI<\/a> to work on theoretical foundations of AI safety, I\u2019m going to answer strictly for myself.<\/p>\n\n\n\n<p>Given the jaw-droppingly spectacular abilities of GPT-4&#8212;e.g., <a href=\"https:\/\/www.semafor.com\/article\/03\/15\/2023\/how-gpt-4-performed-in-academic-exams\">acing<\/a> the Advanced Placement biology and macroeconomics exams, correctly <a href=\"https:\/\/arxiv.org\/abs\/2303.12712\">manipulating images<\/a> (via their source code) without having been programmed for anything of the kind, etc. etc.&#8212;the idea that AI now needs to be treated with extreme caution strikes me as far from absurd.  I don&#8217;t even dismiss the possibility that advanced AI could eventually require the same sorts of safeguards as nuclear weapons.<\/p>\n\n\n\n<p>Furthermore, people might be surprised about the diversity of opinion about these issues <em>within<\/em> OpenAI, by how many there have discussed or even forcefully advocated slowing down.  And there&#8217;s a world not so far from this one where I, too, get behind a pause.  For example, one <em>actual<\/em> major human tragedy caused by a generative AI model might suffice to push me over the edge.  (What would push <em>you<\/em> over the edge, if you&#8217;re not already over?)<\/p>\n\n\n\n<p>Before I join the slowdown brigade, though, I have (this being the week before Passover) four questions for the signatories:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Would your rationale for this pause have applied to basically <em>any<\/em> nascent technology \u2014 the printing press, radio, airplanes, the Internet?  
\u201cWe don\u2019t yet know the implications, but there\u2019s an excellent chance terrible people will misuse this, ergo the only responsible choice is to pause until we&#8217;re confident that they won\u2019t\u201d?<\/li>\n\n\n\n<li>Why six months?  Why not six weeks or six years?<\/li>\n\n\n\n<li>When, by your lights, would we ever know that it was safe to <em>resume<\/em> scaling AI&#8212;or at least that the risks of pausing exceeded the risks of scaling?  Why won&#8217;t the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Precautionary_principle\">precautionary principle<\/a> continue to apply forever?<\/li>\n\n\n\n<li>Were you, until approximately last week, ridiculing GPT as unimpressive, a stochastic parrot, lacking common sense, piffle, a scam, etc. \u2014 before turning around and declaring that it could be existentially dangerous? How can you have it both ways?  If, as sometimes claimed, \u201cGPT-4 is dangerous not because it\u2019s too smart but because it\u2019s too stupid,\u201d then shouldn&#8217;t GPT-5 be smarter <em>and therefore<\/em> <em>safer<\/em>?  Thus, shouldn&#8217;t we keep scaling AI as quickly as we can &#8230; <em>for safety reasons<\/em>?  If, on the other hand, the problem is that GPT-4 is too smart, then why can\u2019t you bring yourself to say so?<\/li>\n<\/ol>\n\n\n\n<p>With the \u201cwhy six months?\u201d question, I confess that I was deeply confused, until I heard a dear friend and colleague in academic AI, one who&#8217;s long been skeptical of AI-doom scenarios, explain why <em>he<\/em> signed the open letter.  He said: look, we all started writing research papers about the safety issues with ChatGPT; then our work became obsolete when OpenAI released GPT-4 just a few months later.  So now we&#8217;re writing papers about GPT-4.  Will we <em>again<\/em> have to throw our work away when OpenAI releases GPT-5?  
I realized that, while six months might not suffice to save human civilization, it&#8217;s just enough for the more immediate concern of getting papers into academic AI conferences.<\/p>\n\n\n\n<p>Look: while I&#8217;ve spent <a href=\"https:\/\/scottaaronson.blog\/?p=6821\">multiple<\/a> <a href=\"https:\/\/scottaaronson.blog\/?p=6823\">posts<\/a> explaining how I part ways from the Orthodox Yudkowskyan position, I do find that position intellectually consistent, with conclusions that follow neatly from premises.  The Orthodox, in particular, can straightforwardly answer all four of my questions above:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>AI is manifestly different from any other technology humans have ever created, because it could become to us as we are to orangutans;<\/li>\n\n\n\n<li>a six-month pause is very far from sufficient but is better than no pause;<\/li>\n\n\n\n<li>we&#8217;ll know that it&#8217;s safe to scale when (and only when) we understand our AIs so deeply that we can <em>mathematically explain<\/em> why they won&#8217;t do anything bad; and<\/li>\n\n\n\n<li>GPT-4 is <em>extremely<\/em> impressive&#8212;that&#8217;s why it&#8217;s so terrifying!<\/li>\n<\/ol>\n\n\n\n<p>On the other hand, I&#8217;m deeply confused by the people who signed the open letter, <em>even though they continue to downplay or even ridicule GPT&#8217;s abilities, as well as the &#8220;sensationalist&#8221; predictions of an AI apocalypse.<\/em>  I&#8217;d feel less confused if such people came out and argued explicitly: &#8220;yes, we should also have paused the rapid improvement of printing presses to avert Europe&#8217;s religious wars.  Yes, we should&#8217;ve paused the scaling of radio transmitters to prevent the rise of Hitler.  Yes, we should&#8217;ve paused the race for ever-faster home Internet to prevent the election of Donald Trump.  
And yes, we should&#8217;ve trusted our governments to manage these pauses, to foresee brand-new technologies&#8217; likely harms and take appropriate actions to mitigate them.&#8221;<\/p>\n\n\n\n<p>Absent such an argument, I come back to the question of whether generative AI <em>actually<\/em> poses a near-term risk that&#8217;s totally unparalleled in human history, or perhaps approximated only by the risk of nuclear weapons.  After sharing an email from his partner, Eliezer rather movingly writes:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she\u2019s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.<\/p>\n<\/blockquote>\n\n\n\n<p>Look, I too have a 10-year-old daughter and a 6-year-old son, and I wish to see them grow up.  But the causal story that starts with a GPT-5 or GPT-4.5 training run, and ends with the sudden death of my children and of all carbon-based life, still has a few too many gaps for my aging, inadequate brain to fill in.  I can complete the story in my imagination, of course, but I could equally complete a story that starts with GPT-5 and ends with the world saved from various natural stupidities.  For better or worse, I lack the &#8220;Bayescraft&#8221; to see why the first story is <em>obviously<\/em> 1000x or 1,000,000x likelier than the second one.<\/p>\n\n\n\n<p>But, I dunno, maybe I&#8217;m making the greatest mistake of my life?  Feel free to try convincing me that I should sign the letter.  
But let&#8217;s see how polite and charitable everyone can be: hopefully a six-month moratorium won&#8217;t be needed to solve the alignment problem of the <em>Shtetl-Optimized<\/em> comment section.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>There&#8217;s now an open letter arguing that the world should impose a six-month moratorium on the further scaling of AI models such as GPT, by government fiat if necessary, to give AI safety and interpretability research a bit more time to catch up. The letter is signed by many of my friends and colleagues, many [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"advanced_seo_description":"","jetpack_seo_html_title":"","jetpack_seo_noindex":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2},"_wpas_customize_per_network":false},"categories":[8],"tags":[],"class_list":["post-7174","post","type-post","status-publish","format-standard","hentry","category-the-fate-of-humanity"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts\/7174","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7174"}],"version-history":[
{"count":7,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts\/7174\/revisions"}],"predecessor-version":[{"id":7275,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=\/wp\/v2\/posts\/7174\/revisions\/7275"}],"wp:attachment":[{"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7174"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7174"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scottaaronson.blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7174"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}