<h1>Five Worlds of AI (a joint post with Boaz Barak)</h1>

<p><em>Posted April 27, 2023</em></p>

<div class="wp-block-image">
<figure class="aligncenter size-large"><img decoding="async" src="https://www.scottaaronson.com/fiveworlds.jpg" alt=""/></figure>
</div>

<p>Artificial intelligence has made incredible progress in the last decade, but in one crucial aspect, it still lags behind the theoretical computer science of the 1990s: namely, there is no <a href="https://www.quantamagazine.org/which-computational-universe-do-we-live-in-20220418/">essay describing five potential worlds that we could live in and giving each one of them whimsical names</a>. In other words, no one has done for AI what Russell Impagliazzo did for complexity theory in 1995, when he defined the five worlds Algorithmica, Heuristica, Pessiland, Minicrypt, and Cryptomania, corresponding to five possible resolutions of the P vs. NP problem along with the central unsolved problems of cryptography.</p>

<p>In this blog post, we&#8212;Scott and Boaz&#8212;aim to remedy this gap.
Specifically, we consider five possible scenarios for how AI will evolve in the future. (Incidentally, it was at a <a href="http://dimacs.rutgers.edu/archive/Workshops/Cryptography/program.html">2009 workshop</a> devoted to Impagliazzo’s five worlds, co-organized by Boaz, that Scott met his now-wife, complexity theorist <a href="https://www.cs.utexas.edu/~danama/">Dana Moshkovitz</a>. We hope civilization will continue for long enough that someone in the future could meet their soulmate, or neuron-mate, at a future workshop about <em>our</em> five worlds.)</p>

<p>As in <a href="https://www.karlin.mff.cuni.cz/~krajicek/ri5svetu.pdf">Impagliazzo’s 1995 paper</a> on the five potential worlds of the difficulty of NP problems, we will not try to be exhaustive but rather concentrate on extreme cases. It’s possible that we’ll end up in a mixture of these worlds, or in a situation not described by any of them. Indeed, one crucial difference between our setting and Impagliazzo’s is that in the complexity case, the worlds corresponded to concrete (and mutually exclusive) mathematical conjectures. So in some sense, the question wasn’t “which world <em>will</em> we live in?” but “which world have we Platonically <em>always</em> lived in, without knowing it?” In contrast, the impact of AI will be a complex mix of mathematical bounds, computational capabilities, human discoveries, and social and legal issues.
Hence, the worlds we describe depend on more than just the fundamental capabilities and limitations of artificial intelligence, and humanity could also shift from one of these worlds to another over time.</p>

<p>Without further ado, we name our five worlds <strong>“AI-Fizzle,”</strong> <strong>“Futurama,”</strong> <strong>“AI-Dystopia,”</strong> <strong>“Singularia,”</strong> and <strong>“Paperclipalypse.”</strong> In this essay, we don’t try to assign probabilities to these scenarios; we merely sketch their assumptions and their technical and social consequences. We hope that by making the assumptions explicit, we can help ground the debate on the various risks around AI.</p>

<p><strong>AI-Fizzle.</strong> In this scenario, AI “runs out of steam” fairly soon. AI still has a significant impact on the world (so it’s not the same as a “cryptocurrency fizzle”), but relative to current expectations, it would be considered a disappointment. Rather than to the industrial or computer revolutions, AI might in this case be compared to nuclear power: people were initially thrilled about the seemingly limitless potential, but decades later, that potential remains mostly unrealized. With nuclear power, though, many would argue that the potential went unrealized mostly for sociopolitical rather than technical reasons. Could AI also fizzle by political fiat?</p>

<p>Regardless of the answer, another possibility is that costs (in data and computation) scale up so rapidly as a function of performance and reliability that AI is not cost-effective to apply in many domains.
That is, it could be that for most jobs, humans will still be more reliable and energy-efficient (we don’t normally think of <em>low wattage</em> as being key to human specialness, but it might turn out that way!). So, like nuclear fusion, an AI that yields dramatically more value than the resources needed to build and deploy it might always remain a couple of decades in the future. In this scenario, AI would replace and enhance some fraction of human jobs and improve productivity, but the 21st century would not be the “century of AI,” and AI’s impact on society would be limited for both good and bad.</p>

<p><strong>Futurama.</strong> In this scenario, AI unleashes a revolution that’s entirely comparable to the scientific, industrial, or information revolutions (but “merely” those). AI systems grow significantly in capabilities and perform many of the tasks currently performed by human experts at a small fraction of the cost, in some domains <em>superhumanly</em>. However, AI systems are still used as <em>tools</em> by humans, and except for a few fringe thinkers, no one treats them as sentient. AI easily passes the Turing test, can prove hard theorems, and can generate entertaining content (as well as deepfakes). But humanity gets used to that, just as we got used to computers creaming us in chess, translating text, and generating special effects in movies. Most people no more feel inferior to their AI than they feel inferior to their car because it runs faster. In this scenario, people will likely anthropomorphize AI <em>less</em> over time (as happened with digital computers themselves). In <strong>“Futurama,”</strong> AI will, like any revolutionary technology, be used for both good and bad. But as with prior major technological revolutions, on the whole, AI will have a large positive impact on humanity.
AI will be used to reduce poverty and ensure that more of humanity has access to food, healthcare, education, and economic opportunities. In <strong>“Futurama,”</strong> AI systems will sometimes cause harm, but the vast majority of these failures will be due to human negligence or maliciousness. Some AI systems might be so complex that it would be best to model them as potentially behaving “adversarially,” and part of the practice of deploying AIs responsibly would be to ensure an “operating envelope” that limits their potential damage even under adversarial failures.</p>

<p><strong>AI-Dystopia.</strong> The technical assumptions of <strong>“AI-Dystopia”</strong> are similar to those of <strong>“Futurama,”</strong> but the upshot could hardly be more different. Here, again, AI unleashes a revolution on the scale of the industrial or computer revolutions, but the change is markedly for the worse. AI greatly increases the scale of surveillance by governments and private corporations. It causes massive job losses while enriching a tiny elite. It entrenches society’s existing inequalities and biases. And it takes away a central tool against oppression: namely, the ability of humans to refuse or subvert orders.</p>

<p>Interestingly, it’s even possible that <em>the same future</em> could be characterized as <strong>Futurama</strong> by some people and as <strong>AI-Dystopia</strong> by others, just as some people emphasize how our <em>current</em> technological civilization has lifted billions out of poverty into a standard of living unprecedented in human history, while others focus on the still-existing (and in some cases rising) inequalities and suffering, and consider it a neoliberal capitalist dystopia.</p>

<p><strong>Singularia.</strong> Here AI breaks out of the current paradigm, where increasing capabilities
require ever-growing resources of data and computation, and no longer needs human data or human-provided hardware and energy to become stronger at an ever-increasing pace. AIs improve their own intellectual capabilities, including by developing new science, and (whether by deliberate design or happenstance) they act as goal-oriented agents in the physical world. They can effectively be thought of as an alien civilization, or perhaps as a new species that is to us as we were to <em>Homo erectus</em>.</p>

<p>Fortunately, though (and again, whether by careful design or just as a byproduct of their human origins), the AIs act toward us like benevolent gods and lead us to an “AI utopia.” They solve our material problems for us, giving us unlimited abundance and presumably virtual-reality adventures of our choosing. (Though maybe, as in <em>The Matrix</em>, the AIs will discover that humans need some conflict, and we will all live in a simulation of 2020s Twitter, constantly dunking on one another…)</p>

<p><strong>Paperclipalypse.</strong> In <strong>“Paperclipalypse”</strong> or “AI Doom,” we again think of future AIs as a superintelligent “alien race” that doesn’t need humanity for its own development. Here, though, the AIs are either actively opposed to human existence or else indifferent to it in a way that causes our extinction as a byproduct. In this scenario, AIs do not develop a notion of morality comparable to ours, or even a notion that keeping a diversity of species and ensuring humans don’t go extinct might be useful to them in the long run. Rather, the interaction between AI and <em>Homo sapiens</em> ends about the same way that the interaction between <em>Homo sapiens</em> and Neanderthals ended.</p>

<p>In fact, the canonical depictions of such a scenario imagine an interaction that is much more abrupt than our brush with the
Neanderthals. The idea is that, perhaps because they originated through some optimization procedure, AI systems will have some strong but weirdly specific goal (à la “maximizing paperclips”), for which the continued existence of humans is, at best, a hindrance. So the AIs quickly play out the scenarios and, in a matter of milliseconds, decide that the optimal solution is to kill all humans, taking a few extra milliseconds to plan that and execute it. If conditions are not yet ripe for executing their plan, the AIs pretend to be docile tools, as in the <strong>“Futurama”</strong> scenario, waiting for the right time to strike. In this scenario, self-improvement happens so quickly that humans might not even notice it. There need be no intermediate stage in which an AI “merely” kills a few thousand humans, raising 9/11-type alarm bells.</p>

<p><strong>Regulations.</strong> The practical impact of AI regulations depends, in large part, on which scenarios we consider most likely. Regulation is not terribly important in the <strong>“AI-Fizzle”</strong> scenario, where AI, well, fizzles. In <strong>“Futurama,”</strong> regulations would be aimed at ensuring that, on balance, AI is used more for good than for bad, and that the world doesn’t devolve into <strong>“AI-Dystopia.”</strong> The latter goal requires antitrust and open-science regulations to ensure that power is not concentrated in a few corporations or governments. Thus, regulations are needed to <em>democratize</em> AI development more than to <em>restrict</em> it. This doesn’t mean that AI would be completely unregulated. It might be treated somewhat similarly to drugs: something that can have complex effects and needs to undergo trials before mass deployment. There would also be regulations aimed at reducing the chance of “bad actors”
(whether other nations or individuals) getting access to cutting-edge AIs, but probably the bulk of the effort would go into increasing the chance of thwarting them (e.g., using AI to detect AI-generated misinformation, or using AI to harden systems against AI-aided hackers). This is similar to how most academic experts believe cryptography should be regulated (and how it <em>is</em> largely regulated these days in most democratic countries): it’s a technology that can be used for both good and bad, but the costs of restricting regular citizens’ access to it outweigh the benefits. However, as we do with security exploits today, we might restrict or delay public releases of AI systems to some extent.</p>

<p>To whatever extent we foresee <strong>“Singularia”</strong> or <strong>“Paperclipalypse,”</strong> however, regulations play a completely different role. If we knew we were headed for <strong>“Singularia,”</strong> then presumably regulations would be superfluous, except perhaps to try to accelerate the development of AIs! Meanwhile, if one accepts the assumptions of <strong>“Paperclipalypse,”</strong> any regulations other than the most draconian might be futile. If, in the near future, almost anyone will be able to spend a few billion dollars to build a recursively self-improving AI that might turn into a superintelligent world-destroying agent, and moreover (unlike with nuclear weapons) they won’t need exotic materials to do so, then it’s hard to see how to forestall the apocalypse, except perhaps via a worldwide, militarily enforced agreement to “<a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/">shut it all down</a>,” as Eliezer Yudkowsky indeed now explicitly advocates. “Ordinary” regulations could, at best, delay the end by a short amount: given the current pace of AI advances, perhaps not more
than a few years. Thus, regardless of how likely one considers this scenario, one might want to focus more on the other scenarios, for methodological reasons alone!</p>