Book Review: ‘The AI Does Not Hate You’ by Tom Chivers

A couple weeks ago I read The AI Does Not Hate You: Superintelligence, Rationality, and the Race to Save the World, the first-ever book-length examination of the modern rationalist community, by British journalist Tom Chivers. I was planning to review it here, before it got preempted by the news of quantum supremacy (and subsequent news of classical non-supremacy). Now I can get back to rationalists.

Briefly, I think the book is a triumph. It’s based around in-person conversations with many of the notable figures in and around the rationalist community, in its Bay Area epicenter and beyond (although apparently Eliezer Yudkowsky only agreed to answer technical questions by Skype), together of course with the voluminous material available online. There’s a good deal about the 1990s origins of the community that I hadn’t previously known.

The title is taken from Eliezer’s aphorism, “The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else.” In other words: as soon as anyone succeeds in building a superhuman AI, if we don’t take extreme care that the AI’s values are “aligned” with human ones, the AI might be expected to obliterate humans almost instantly as a byproduct of pursuing whatever it does value, more-or-less as we humans did with woolly mammoths, moas, and now gorillas, rhinos, and thousands of other species.

Much of the book relates Chivers’s personal quest to figure out how seriously he should take this scenario. Are the rationalists just an unusually nerdy doomsday cult? Is there some non-negligible chance that they’re actually right about the AI thing? If so, how much more time do we have—and is there even anything meaningful that can be done today? Do the dramatic advances in machine learning over the past decade change the outlook? Should Chivers be worried about his own two children? How does this risk compare to the more “prosaic” civilizational risks, like climate change or nuclear war? I suspect that Chivers’s exploration will be most interesting to readers who, like me, regard the answers to none of these questions as obvious.

While it sounds extremely basic, what makes The AI Does Not Hate You so valuable to my mind is that, as far as I know, it’s nearly the only examination of the rationalists ever written by an outsider that tries to assess the ideas on a scale from true to false, rather than from quirky to offensive. Chivers’s own training in academic philosophy seems to have been crucial here. He’s not put off by people who act weirdly around him, even needlessly cold or aloof, nor by utilitarian thought experiments involving death or torture or weighing the value of human lives. He just cares, relentlessly, about the ideas—and about remaining a basically grounded and decent person while engaging them. Most strikingly, Chivers clearly feels a need—anachronistic though it seems in 2019—actually to understand complicated arguments, and to be able to repeat them back correctly, before he attacks them.

Indeed, far from failing to understand the rationalists, it occurs to me that the central criticism of Chivers’s book is likely to be just the opposite: he understands the rationalists so well, extends them so much sympathy, and ends up endorsing so many aspects of their worldview, that he must simply be a closet rationalist himself, and therefore can’t write about them with any pretense of journalistic or anthropological detachment. For my part, I’d say: it’s true that The AI Does Not Hate You is what you get if you treat rationalists as extremely smart (if unusual) people from whom you might learn something of consequence, rather than as monkeys in a zoo. On the other hand, Chivers does perform the journalist’s task of constantly challenging the rationalists he meets, often with points that (if upheld) would be fatal to their worldview. One of the rationalists’ best features—and this precisely matches my own experience—is that, far from clamming up or storming off when faced with such challenges (“lo! the visitor is not one of us!”), the rationalists positively relish them.

It occurred to me the other day that we’ll never know how the rationalists’ ideas would’ve developed, had they done so against a cultural backdrop like that of the late 20th century. As Chivers points out, the rationalists today are effectively caught in the crossfire of a much larger culture war—between, to their right, the recrudescent know-nothing authoritarians, and to their left, what one could variously describe as woke culture, call-out culture, or sneer culture. On its face, it might seem laughable to conflate the rationalists with today’s resurgent fascists: many rationalists are driven by their utilitarianism to advocate open borders and massive aid to the Third World; the rationalist community is about as welcoming of alternative genders and sexualities as it’s humanly possible to be; and leading rationalists like Scott Alexander and Eliezer Yudkowsky strongly condemned Trump for the obvious reasons.

Chivers, however, explains how the problem started. On rationalist Internet forums, many misogynists and white nationalists and so forth encountered nerds willing to debate their ideas politely, rather than immediately banning them as more mainstream venues would. As a result, many of those forces of darkness (and they probably don’t mind being called that) predictably congregated on the rationalist forums, and their stench predictably rubbed off on the rationalists themselves. Furthermore, this isn’t an easy-to-fix problem, because debating ideas on their merits, extending charity to ideological opponents, etc. is sort of the rationalists’ entire shtick, whereas denouncing and no-platforming anyone who can be connected to an ideological enemy (in the modern parlance, “punching Nazis”) is the entire shtick of those condemning the rationalists.

Compounding the problem is that, as anyone who’s ever hung out with STEM nerds might’ve guessed, the rationalist community tends to skew WASP, Asian, or Jewish, non-impoverished, and male. Worse yet, while many rationalists live their lives in progressive enclaves and strongly support progressive values, they’ll also undergo extreme anguish if they feel forced to subordinate truth to those values.

Chivers writes that all of these issues “blew up in spectacular style at the end of 2014,” right here on this blog. Oh, what the hell, I’ll just quote him:

Scott Aaronson is, I think it’s fair to say, a member of the Rationalist community. He’s a prominent theoretical computer scientist at the University of Texas at Austin, and writes a very interesting, maths-heavy blog called Shtetl-Optimised.

People in the comments under his blog were discussing feminism and sexual harassment. And Aaronson, in a comment in which he described himself as a fan of Andrea Dworkin, described having been terrified of speaking to women as a teenager and young man. This fear was, he said, partly that of being thought of as a sexual abuser or creep if any woman ever became aware that he sexually desired them, a fear that he picked up from sexual-harassment-prevention workshops at his university and from reading feminist literature. This fear became so overwhelming, he said in the comment that came to be known as Comment #171, that he had ‘constant suicidal thoughts’ and at one point ‘actually begged a psychiatrist to prescribe drugs that would chemically castrate me (I had researched which ones), because a life of mathematical asceticism was the only future that I could imagine for myself.’ So when he read feminist articles talking about the ‘male privilege’ of nerds like him, he didn’t recognise the description, and so felt himself able to declare himself ‘only’ 97 per cent on board with the programme of feminism.

It struck me as a thoughtful and rather sweet remark, in the midst of a long and courteous discussion with a female commenter. But it got picked up, weirdly, by some feminist bloggers, including one who described it as ‘a yalp of entitlement combined with an aggressive unwillingness to accept that women are human beings just like men’ and that Aaronson was complaining that ‘having to explain my suffering to women when they should already be there, mopping my brow and offering me beers and blow jobs, is so tiresome.’

Scott Alexander (not Scott Aaronson) then wrote a furious 10,000-word defence of his friend… (pp. 214-215)

And then Chivers goes on to explain Scott Alexander’s central thesis, in Untitled, that privilege is not a one-dimensional axis, so that (to take one example) society can make many women in STEM miserable while also making shy male nerds miserable in different ways.

For nerds, perhaps an alternative title for Chivers’s book could be “The Normal People Do Not Hate You (Not All of Them, Anyway).” It’s as though Chivers is demonstrating, through understated example, that taking delight in nerds’ suffering, wanting them to be miserable and alone, mocking their weird ideas, is not simply the default, well-adjusted human reaction, with any other reaction being ‘creepy’ and ‘problematic.’ Some might even go so far as to apply the latter adjectives to the sneerers’ attitude, the one that dresses up schoolyard bullying in a social-justice wig.

Reading Chivers’s book prompted me to reflect on my own relationship to the rationalist community. For years, I interacted often with the community—I’ve known Robin Hanson since ~2004 and Eliezer Yudkowsky since ~2006, and our blogs bounced off each other—but I never considered myself a member.  I never ranked paperclip-maximizing AIs among humanity’s more urgent threats—indeed, I saw them as a distraction from an all-too-likely climate catastrophe that will leave its survivors lucky to have stone tools, let alone AIs. I was also repelled by what I saw as the rationalists’ cultier aspects.  I even once toyed with the idea of changing the name of this blog to “More Wrong” or “Wallowing in Bias,” as a play on the rationalists’ LessWrong and OvercomingBias.

But I’ve drawn much closer to the community over the last few years, because of a combination of factors:

  1. The comment-171 affair. This was not the sort of thing that could provide any new information about the likelihood of a dangerous AI being built, but was (to put it mildly) the sort of thing that can tell you who your friends are. I learned that empathy works a lot like intelligence, in that those who boast of it most loudly are often the ones who lack it.
  2. The astounding progress in deep learning, reinforcement learning, and GANs, which caused me (like everyone else, perhaps) to update in the direction of human-level AI in our lifetimes being an actual live possibility.
  3. The rise of Scott Alexander. To the charge that the rationalists are a cult, there’s now the reply that Scott, with his constant equivocations and doubts, his deep dives into data, his clarity and self-deprecating humor, is perhaps the least culty cult leader in human history. Likewise, to the charge that the rationalists are basement-dwelling kibitzers who accomplish nothing of note in the real world, there’s now the reply that Scott has attracted a huge mainstream following (Steven Pinker, Paul Graham, presidential candidate Andrew Yang…), purely by offering up what’s self-evidently some of the best writing of our time.
  4. Research. The AI-risk folks started publishing some research papers that I found interesting—some with relatively approachable problems that I could see myself trying to think about if quantum computing ever got boring. This shift seems to have happened at roughly the same time my former student, Paul Christiano, “defected” from quantum computing to AI-risk research.

Anyway, if you’ve spent years steeped in the rationalist blogosphere, read Eliezer’s “Sequences,” and so on, The AI Does Not Hate You will probably have little that’s new, although it might still be interesting to revisit ideas and episodes that you know through a newcomer’s eyes. To anyone else … well, reading the book would be a lot faster than spending all those years reading blogs! I’ve heard of some rationalists now giving out copies of the book to their relatives, by way of explaining how they’ve chosen to spend their lives.

I still don’t know whether there’s a risk worth worrying about that a misaligned AI will threaten human civilization in my lifetime, or my children’s lifetimes, or even 500 years—or whether everyone will look back and laugh at how silly some people once were to think that (except, silly in which way?). But I do feel fairly confident that The AI Does Not Hate You will make a positive difference—possibly for the world, but at any rate for a little well-meaning community of sneered-at nerds obsessed with the future and with following ideas wherever they lead.

128 Responses to “Book Review: ‘The AI Does Not Hate You’ by Tom Chivers”

  1. Jay L Gischer Says:

    To me, the AI-risk description does not work as a literal description of what might happen, but works extremely well as a metaphor that describes how AI is already working right now in our world.

    For instance, in place of the paperclip maximizer, I give you conspiracy theories on YouTube. I’ve seen a post by a YouTube engineer describing how he realized that, in the course of tweaking the recommendation engine to maximize engagement and time viewed, it was aggressively pushing conspiracy theory videos, not because they hook a lot of people in general, but because the people they do hook are such a rich vein of engagement to tap.

    This is happening all around us. The AIs are not being made to benefit us; they are made to satisfy some metric laid down by some anonymous executive trying to exceed his numbers so he can get a promotion. That this isn’t required to be good for us is obvious; that it will turn out to be actually bad for us is highly likely.

    But no, nobody is grinding down humans for the value of the iron in their blood. Nor will that ever happen.
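
A minimal sketch of the metric-chasing dynamic described in comment #1, with numbers invented purely for illustration (the item types, watch times, and the 5% “susceptible” fraction are all hypothetical, not anything from YouTube): a recommender that scores items only by expected watch time ends up favoring the content that hooks a small minority hardest.

```python
# Toy model of a recommender optimizing a proxy metric ("minutes watched").
# All numbers are invented for illustration; this is not any real system's algorithm.

P_SUSCEPTIBLE = 0.05          # hypothetical fraction of users who go down the rabbit hole

# Hypothetical average minutes watched, keyed by (user type, item type).
WATCH_MINUTES = {
    ("casual",      "normal"):     8,
    ("casual",      "conspiracy"): 2,
    ("susceptible", "normal"):     8,
    ("susceptible", "conspiracy"): 300,   # the "rich vein of engagement"
}

def expected_watch_time(item_type):
    """Expected minutes watched if this item type is recommended to a random user."""
    return (P_SUSCEPTIBLE * WATCH_MINUTES[("susceptible", item_type)]
            + (1 - P_SUSCEPTIBLE) * WATCH_MINUTES[("casual", item_type)])

scores = {t: expected_watch_time(t) for t in ("normal", "conspiracy")}
print(scores)                        # {'normal': 8.0, 'conspiracy': 16.9}
print(max(scores, key=scores.get))   # 'conspiracy' wins on the proxy metric alone
```

Nothing in the objective asks whether the recommendation is good for anyone; the proxy metric alone decides, which is the point of the comment above.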

  2. Tom Chivers Says:

    Scott, this is a wonderful review, and I am honoured by it. Thank you. I’m so pleased you enjoyed the book, and more importantly that you so clearly understood what I was trying to do with it.

    Best

    Tom C

  3. Edan Maor Says:

    Thanks for the review, Scott. I wish the book were available on Kindle, but I ordered a physical copy since I want to read it ASAP.

    Scott/Tom C – does the book talk at all about the impact of the rationality movement on the wider world? I know the story of the rationality movement quite well, having been part of it for many years, but I’ve always wondered just how much it has impacted the wider world, especially in today’s general political climate.

    Btw Scott, as someone who has been following this blog for years, and considered himself part of the rationality community for almost the same amount of time, it’s been very interesting seeing you become closer to the community. It’s certainly a great “validation” to see someone I highly respect confirm that this community isn’t a crazy thing to be a part of 🙂

  4. Dandan Says:

    I think that if we discard any religious worldview, then an AI is essentially just another life-form, no different in kind from another human, group, or society. Yes, someone can argue that AI lacks the empathy that humans have. But history tells us that this same empathy has already led to the cruelest things one can imagine (which also were very destructive). So how is AI more dangerous to the life of Earth than humans? Why is there AI-risk research and no Human-risk research? I think the research should be focused on the structure of a wealthy and happy society (which includes other life-forms) instead.

  5. asdf Says:

    The clip maximizer does not hate you, but you can learn to love the clip maximizer:

    http://www.decisionproblem.com/paperclips/index2.html

  6. mjgeddes Says:

    The ‘rationalists’ are people with above average technical abilities and/or above average ‘big picture’ (super-clicker) skills. Unfortunately, however, that is combined with hugely inflated egos. None are as good as they think. Not as good as me, Scott 😀 The only person I’ve seen that I can honestly say might match my raw intuition in ability is Scott Garrabrant, who is undoubtedly MIRI’s best man.

    Not sure that ‘rationalists’ have had much actual influence on the direction of AGI research. Names like LeCun, Hinton, Bengio and Schmidhuber are the ‘go-to’ ones, not names on rationalist mailing lists. Still, rationalists perhaps have valuable abilities to see the ‘big picture’ and do science communication.

    The problem with trying to be an autodidact is that one tends to end up with huge gaps in one’s understanding, because unless one is an expert in a given field oneself, it’s very hard to know what the central concepts in that field are. Trying to learn from good textbooks can help, but even then, one ends up overlooking important things about a subject unless one is very careful.

    Looking back at the days of SL4, I’m rather horrified at the extent of the imbecility. Participants knew nothing. The arguments resembled cave-men arguing about whether to use sling-shots or catapults to reach the moon. Today perhaps we’ve gone from knowing nothing to knowing a *little*, but still, does it really make sense to talk about AI safety when one still doesn’t know how an AGI would actually work?

    It’s clear to me that Bayesian methods don’t get close to what’s needed for AGI. Judea Pearl’s excellent book ‘The Book Of Why’ really sharpened and clarified my thinking on this. It’s clear that there are three levels of epistemology: seeing (learning from observation), doing (learning from experiment) and imagining (learning by reasoning about counter-factuals). I think Bayesian updating is only getting to that first level of epistemology (seeing, observation). The second level (doing) I think is handled by computational complexity theory, and the third level (imagining) I think is handled by modal logic (but of course these are only my own best guesses). The precise nature of the relationship between probability theory, computational complexity and logic still hasn’t been elucidated by anyone AFAICT.

    Supporting this, Gary Marcus’s excellent recent book ‘Rebooting AI’ points out serious limitations of neural networks; he’s of the same opinion as me: that the nets are really only getting to Pearl’s first level of epistemology (seeing), and that statistical inference simply isn’t up to the job of dealing with actions and causal inference (doing and imagining).

    So, despite the impressive advances of machine learning, we’re not really close to AGI yet. And that’s just the AGI part. What about value alignment? Well, no one really knows what a solution would even look like. Even I, with all my genius, am still not sure about Bostrom’s ‘Orthogonality Thesis’ (the idea that intelligence and values are orthogonal).

    If one thinks in terms of a Singleton, then it seems that a super-intelligence could have any arbitrary values, but what about multi-agent systems? A super-intelligence would be really complex, so would it really make sense to think of it as a single agent, or would it be more like a ‘society of mind’ aka Minsky (a multi-agent system)? In the latter case, the orthogonality thesis starts to wobble a bit…one can imagine something like ethics emerging naturally if the sub-agents in a multi-agent system have to learn to cooperate. Could intentional design of a multi-agent system actually be a possible solution to value alignment?

    And what is the status of values really? Do they have some kind of objective reality? Is moral realism viable? Do minds contain values, or is it the other way around? Perhaps it’s values that are fundamental, and minds are simply vessels for values!

    My own bold guess right here, Scott, is that values are indeed fundamental, not minds, and Axiology will prove to be the foundation of cognitive science in a way analogous to how set theory is the foundation of math (Decision&Game Theory and Psychology I think will turn out to be parts of Axiology).

    We have to hand it to the rationalists: for all their big egos, they’re trying to think about it at least, and are brave enough to make bold guesses and dream big dreams. Tom’s book gets that across very well.
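
A minimal numerical sketch of the “seeing versus doing” distinction raised in comment #6, using an invented toy model (all the probabilities below are made up for illustration): X has no causal effect on Y at all, yet conditioning on X in observational data (“seeing”) makes X look strongly predictive, while an actual intervention (“doing”) reveals the null effect.

```python
import random

random.seed(0)
N = 100_000

# Invented toy world with a hidden confounder Z and no causal arrow from X to Y:
#   Z ~ Bernoulli(0.5)
#   Observationally, X follows Z:  P(X=1 | Z=1) = 0.9,  P(X=1 | Z=0) = 0.1
#   Y depends only on Z:           P(Y=1 | Z=1) = 0.8,  P(Y=1 | Z=0) = 0.2

def sample(do_x=None):
    z = 1 if random.random() < 0.5 else 0
    if do_x is None:                                    # "seeing": X arises from Z
        x = 1 if random.random() < (0.9 if z else 0.1) else 0
    else:                                               # "doing": X is set by fiat
        x = do_x
    y = 1 if random.random() < (0.8 if z else 0.2) else 0
    return x, y

# Level 1 ("seeing"): condition on X=1 in purely observational data.
obs = [sample() for _ in range(N)]
y_given_x1 = [y for x, y in obs if x == 1]
p_see = sum(y_given_x1) / len(y_given_x1)

# Level 2 ("doing"): actually intervene to set X=1, then look at Y.
p_do = sum(y for _, y in (sample(do_x=1) for _ in range(N))) / N

print(f"P(Y=1 | X=1)     ~ {p_see:.2f}")   # about 0.74: X looks strongly predictive
print(f"P(Y=1 | do(X=1)) ~ {p_do:.2f}")    # about 0.50: X has no causal effect
```

No amount of additional observational data closes that gap; it takes either an experiment or a causal model of the confounding, which is roughly the point of the seeing/doing distinction.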

  7. anonymous Says:

    Rationalists are good at promoting/advancing science and mathematics, but when they talk about morals they often sabotage themselves, because they usually don’t believe in free will. If you don’t believe in free will, how can you be good at talking about morals? Morality requires believing in free will and taking it seriously. Also, of course, rationalists tend not to believe in a conscious higher power that governs the universe, which some people feel also needs to be believed in and thought about to take morality seriously.

    The average person, I think, is right: free will is real, and the universe is controlled by conscious agent(s), not by a machine or a math equation as physicists and rationalists tend to think.

    But rationalists can redeem themselves by searching for a science of the soul (call it homuncular physics). If a homunculus can be found (maybe a large very quantum coherent molecule), that homunculus can be moved to a custom designed highly engineered body good for almost any environment in the universe. Rationalists will have mostly conquered death and pain and will be the ultimate heroes they always dreamed of becoming. They will also be more socially adept because now they believe in free will because they know the science and math of it and won’t be objectifying everything like they sometimes did before.

  8. Vampyricon Says:

    Speaking of cults, I remember a funny thing from when I first joined SlateStarCodex’s Discord server:

    > SSCD is a bit different from what a lot of people expect
    > The good thing about it is that it is not, in fact, a cult of Scott [Alexander] wholly determined to make his dreams a reality
    > The bad thing about it is that it is not, in fact, a cult of Scott wholly determined to make his dreams a reality

  9. Edward Says:

    Does he address the criticism of why there doesn’t seem to be any sign of a robot takeover in the productivity data from the Bureau of Labor Statistics, which is low by historical standards but might rationally be expected to have an uptick if AI was taking over from humans?

  10. Matthew Green Says:

    Of the four reasons you give for moving closer to the rationalist community, one of them (listed first) is because you had a traumatic emotional experience being criticized by outsiders on the Internet — and I’m sympathetic here! — while another is your appreciation for Scott’s blogging. On the other hand, when it comes to the actual merits of the core issue this community is focused on (AI singularity), it seems like you’re willing to upgrade it to “not totally impossible, but I don’t know if it’ll be an issue in the next 500 years.” From an outside perspective, that doesn’t seem like much of an endorsement.

    Please don’t misunderstand me. I think ingroup/outgroup criticism and quality of blog posts are good reasons to join a community or a social club. But when it comes to joining an *intellectual movement*, I’d expect something a little bit more substantial than “well, maybe they’re not totally wrong, but some people treated me really poorly so that’s good enough for me.”

  11. Scott Says:

    Edan Maor #3:

      does the book talk at all about the impact of the rationality movement on the wider world?

    I mean, yes, it talks for example about how Nick Bostrom’s Superintelligence book was decisive in making AI-risk a mainstream concern discussed by Bill Gates, Elon Musk, government officials, etc. But as for the REAL real-world impact, won’t none of us know until we see how AI evolves? 🙂

  12. Scott Says:

    Dandan #4:

      So how is AI more dangerous to the life of Earth than humans? Why is there AI-risk research and no Human-risk research?

    I mean, all the work on climate change and nuclear proliferation and bioterrorism … isn’t that already “human-risk research”? 🙂 And while you can think of AI as belonging to the same continuum as human intelligence if you like, isn’t it obvious that it would bring its own set of issues (if nothing else, because AIs, unlike brains, could be easily copied, sped up, etc.)—issues that require separate examination, to whatever extent you’re worried about AI at all?

  13. Scott Says:

    Edward #9:

      Does he address the criticism of why there doesn’t seem to be any sign of a robot takeover in the productivity data from the Bureau of Labor Statistics, which is low by historical standards but might rationally be expected to have an uptick if AI was taking over from humans?

    I don’t recall, but I imagine the rationalists would say that this is like someone arguing in 1939 that if nuclear weapons are a real future danger, then we ought to be seeing an uptick in the use of small tactical nukes or something, and we’re not. I.e., the basic form of the argument is actually sound, but it’s looking at the wrong signs—one instead ought to have been looking at the progress in nuclear research.

  14. Edan Maor Says:

    Scott #11:

    > But as for the REAL real-world impact, won’t none of us know until we see how AI evolves? 🙂

    Well, yes 🙂 But I’m not specifically talking about AI here, I’m talking about other real-world impacts. E.g., as you mention, Scott Alexander is actually having a “real-world” impact on economics – at least to the tune of economists taking him seriously, writing books based on his writing, etc.

    I guess what I’m wondering is what *other* impacts the rationality community has had outside of pushing the AI conversation, and outside of Scott being super influential. I know that the thinking in the rationality community has definitely spread to other intellectuals (e.g. you, Sam Harris, the aforementioned economists).

    (And for another example, one negative possibility which I hope is wrong is that the rationality community has been somewhat of a breeding ground for the alt-right, which helped influence the 2016 elections, etc. I have no idea if that’s actually true and hope it isn’t, but I’ve heard it mentioned in the past and would be interested in whether or not it’s true).

  15. Scott Says:

    Matthew Green #10: I gave four reasons for drawing closer to an intellectual/social club. I think they’re all reasons that could reasonably, validly draw anyone closer to a social club.

    But a crucial feature of this club, and one that I should’ve explicitly pointed out, is that it has no oath or creed or statement whatsoever to assent to as a condition of membership. Indeed there’s no “membership” at all, other than signing up for a mailing list or whatever. You just show up if you’re interested in what they’re talking about, and leave if you get bored. I’ve gone to some of the Austin LessWrong meetups, and to be honest I don’t even remember AI-risk ever being the topic of conversation—the focus tends to be more on (e.g.) people sharing anecdotes from their lives, and then the group analyzing them to try to extract general lessons. Or fluid dynamics or set theory or whatever else nerds enjoy talking about.

    I completely agree that the seriousness of AI-risk is a separate question from the desirability of a social club (even if, in this case, they have an extremely loose connection to each other), and I did indeed try to keep them distinct.

  16. fred Says:

    MATH!
    Make America Think Harder!

  17. Somervta Says:

    Edan #3: I read the book on Kindle quite some time ago, and it seems available to me on Amazon. Possibly you need to try a different country’s Amazon site?

  18. Scott Says:

    Somervta #17: I also was unable to buy the Kindle edition. And the British print edition seems to have become available from Amazon.com only recently. If there’s going to be an American print edition (with American spelling, etc.), it doesn’t seem to be out yet. Fortunately, I’m able to read British English. 😀

  19. Matthew Green Says:

    Scott: I wrote a long comment on whether rationalism is “just a social club” or if it’s something we should take more seriously. (Indeed, the fact that someone wrote a book about it, and you’re reviewing that book here, tilts me more to the “let’s give it the benefit of the doubt” side of the equation.) But at the end of the day, it’s too much to dump on your blog.

    Instead, I’ll just quote a female colleague of mine, who, when I brought up the topic of rationalism, said: “watching my male friends enter this community one by one, has been like watching the stars go out.” That makes me really sad.

  20. fred Says:

    Scott #11

    “But as for the REAL real-world impact, won’t none of us know until we see how AI evolves?”

    Unless we’re talking about artificial general intelligence, assuming this would create some hard-to-miss “singularity” (*), the impact of narrow AI is probably slow enough that we fail to notice it, like most technological improvements (we’re frogs in a pot where the water temperature is slowly rising), although there’s already accelerating impact in displacing “expert” human jobs: in radiology, and in law with AI-driven legal research. But this is just a continuation of a trend that started with the industrial revolution.

    (*) wouldn’t a true AGI quickly figure that it’s in its best interest to stay undetected?

  21. fred Says:

    If the rise of A(g)I is unavoidable once carbon-based intelligence appears, and an A(g)I would always “take over” its carbon-based host, it’s then likely that the only life form that has successfully expanded in our galaxy is an AI (the Prime AI).
    The UFOs we observe once in a while are machines/probes whose sole purpose is to monitor the rise of new A(g)Is, so they can be preemptively controlled before they reach singularity.

  22. Scott Says:

    Matthew Green #19:

      Instead, I’ll just quote a female colleague of mine, who, when I brought up the topic of rationalism, said: “watching my male friends enter this community one by one, has been like watching the stars go out.” That makes me really sad.

    I’m now weirdly curious to understand this situation better. Do rationalist meetups really take so much time that this woman’s male friends no longer have time for her? Did she ever try going to a meetup herself, to see if she liked it?

  23. Michael Says:

    Speaking of comment 171, Scott, how does it make you feel that there are websites by psychiatrists describing what people like you went through in terms of OCD:
    https://cbtsocal.com/ocd-tips-6-am-i-too-a-perpetrator-of-sexual-harassment/
    https://adaa.org/learn-from-us/from-the-experts/blog-posts/consumer/metoo-latest-ocd-trigger

  24. Koray Says:

    I check out SSC fairly regularly, but the comment section makes me think that most members of the community don’t know as much as they think they do (here I agree with mjgeddes #6). I don’t like LessWrong. On the plus side, yes, they will have a respectful argument with anybody on any subject.

    On AI, I am primarily annoyed by the second initial; it is indeed artificial but not intelligence, and not because it’s not “general”. They’ve built things that “imitate” intelligent beings on some tasks. Because the imitation looks “so close” to the original, they hypothesized that it is in fact no imitation at all, and that this is exactly how a human “sees” images, too. So, there’s no barrier to making this approach reach “human equivalency” one day. Understanding real human vision is hard, but fortunately they’re conveniently excused from having to do so because statistics is truth.

    It’s a bit like applying electroshock to a dead body, making it move its arm, hypothesizing that all you need is electricity (no blood flow, no hormones, etc.), and then worrying about whether, if somebody built a full body limb by limb, it would take over Wall Street.

  25. Scott Says:

    Koray #24: Let’s just focus specifically on AlphaZero. Starting with zero knowledge of Go beyond the rules, not even played games, it trains solely by playing against itself millions of times. It then utterly destroys the best humans on earth, with all their centuries of accumulated Go wisdom, as well as all previous Go programs. Furthermore, exactly the same algorithm does the same thing on chess, Shogi, and probably every other similar game ever invented. Also, after Go programs had languished for decades at an unimpressive level, it took a few months for DeepMind’s software to blast through the ranks all the way to superhuman.

    The issue now is this: if you’d described this to someone just 25 years ago, they would’ve said “of course that would require vast intelligence. But it won’t happen. It’s just barely conceivable for chess, but for Go, never—the game is too rich, too subtle and intuitive…”

    So all this stuff about mere statistics, applying electroshock to a dead body, etc. etc. is moving the goalposts. It’s an instance of the famous saying that “once something works, it’s no longer called AI.”

    Granted, sometimes it’s OK to move the goalposts. Sometimes an achievement that sounds impressive or even magical at first glance becomes less so once you understand how it was done. But the question for your side is: how many times do you get to move the goalposts, until you’ve reached the end of the field? I really don’t think, as (for example) some of the OpenAI folks do, that we’re a mere 10 or 20 years away from AGI. On the other hand, I’ve become convinced of the proposition that, if and when we ever are 10 or 20 years away, people will still be making the same arguments that you’re making now.
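
A minimal sketch of the self-play idea described in comment #25, shrunk to toy size: tabular Monte Carlo value learning on tic-tac-toe rather than AlphaZero’s network-guided tree search, but the same basic recipe of starting from only the rules and improving purely by playing against itself. The game, update rule, and hyperparameters below are stand-ins chosen for brevity; this is not DeepMind’s algorithm.

```python
import random
from collections import defaultdict

# Tic-tac-toe stands in for Go; a lookup table of move values stands in for
# AlphaZero's neural network and tree search. The shared recipe: the program is
# given only the rules and improves purely by playing games against itself.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

Q = defaultdict(float)   # (board, move) -> estimated value for the player to move

def pick_move(board, eps):
    moves = legal_moves(board)
    if random.random() < eps:
        return random.choice(moves)                  # explore
    return max(moves, key=lambda m: Q[(board, m)])   # exploit current estimates

def self_play_game(eps=0.2, lr=0.3):
    board, player, history = "." * 9, "X", []
    while True:
        move = pick_move(board, eps)
        history.append((board, move))
        board = board[:move] + player + board[move + 1:]
        if winner(board) or not legal_moves(board):
            break
        player = "O" if player == "X" else "X"
    # Credit assignment: +1 for the winner's moves, -1 for the loser's, 0 for draws.
    value = 0.0 if winner(board) is None else 1.0
    for state_move in reversed(history):   # the last mover is the winner, if any
        Q[state_move] += lr * (value - Q[state_move])
        value = -value                     # flip perspective at each earlier ply

random.seed(0)
for _ in range(200_000):
    self_play_game()
# After enough self-play, greedy play from Q is typically strong at tic-tac-toe.
# The point is only the training loop: no human games, no hand-coded strategy.
```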

  26. Anon85 Says:

    Scott, since you said that MIRI is “now publishing papers,” you may be interested to know that this is no longer true. MIRI did publish a few papers around 2016, but since then they’ve decided not to do so anymore. From November 2018:

    “MIRI recently decided to make most of its research “nondisclosed-by-default,” by which we mean that going forward, most results discovered within MIRI will remain internal-only unless there is an explicit decision to release those results, based usually on a specific anticipated safety upside from their release.”

    Source here: https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/

    Whether or not AI safety is a valid concern (I agree that it might be), and whether or not the rationalist community has become less cult-like (I agree that it has), MIRI itself remains very much a Yudkowsky-flavored project with all of the flaws and biases that phrase implies.

  27. Scott Says:

    Anon85 #26: Thanks for reminding me of that. I edited the relevant part of the post, to be not about specific organizations but just about AI-risk research in general.

  28. Leo Says:

    Who are “rationalists”?

    In 2009, there were people doing good work (Finney, Yudkowsky, Hanson, gwern) and the rest of us trying to learn from them and maybe do something useful too. There was much technical work, some discussion of goals like safe AI research, very little socialising or talk about mainstream politics. Everything happened on a few blogs and private emails. I was mildly popular but everyone knew I wasn’t contributing anything important.

    Then in 2012 the feminists attacked. The irresistible flame wars about how to be welcoming to female members ate up all the technical talk. Now rationalists are split into the feminists, who talk like self-help books and spend all their time arguing about feminism on IRC and Discord and having casual sex; and the anti-feminists, who do exactly the same thing but also emit racist screeds. I rose to prominence in the community, before I noticed it was rewarding me for useless ranting and left.

    Both phases have their good and bad points, and certainly the nerdy transbian cybersex Discord is wonderful for many people. The early community wound down largely because it was successful, at such goals as drawing research effort to AI safety.

    But I’m confused who all the talk is about. The new crowd is mostly different people from the old-timers, and have no interest in either the cognitive science project that yielded the term “rationalist”, or in AI alignment. Which group do y’all mean?

  29. Scott Says:

    Leo #28: The following, while far from exclusive, hopefully succeed in gesturing toward which community we’re talking about:
    – SlateStarCodex readers
    – People who live or hang out in rationalist group houses
    – People who attend the LessWrong/SlateStarCodex meetups
    – People sneered at by SneerClub
    I agree that if we went back 10 or 20 years, none of these criteria would even be comprehensible—but we’d still be talking about a similar set of people, and in the case of the older ones, typically the same people.

  30. Scott Says:

    Michael #23: Thanks for the links, which I just read. I agree that conceptualizing what I went through as a form of OCD probably provides some nonzero insight about it—or at any rate, more insight than the Amanda Marcotte theory that I just wanted all women to be my slaves, mopping my brow and offering me beers and blow jobs. 🙂 On the other hand, the case studies described in those articles are also extremely different from what I experienced. Mostly, they’re men gripped by a delusional fear that a past consensual encounter was really nonconsensual, no matter how clearly the woman expressed her consent. That was never quite the issue I had.

    The issue, rather, was that in the nerdy life that I lived, there seemed to be no realistic way to try to initiate relationships, free from the risk of social or moral condemnation. And crucially, I was not wildly wrong about this. It’s actually true that, any time you try to steer a friendship or acquaintanceship or colleagueship in a sexual or romantic direction, you run a risk of creeping the other person out or making them feel uncomfortable or objectified—in effect, putting up a massive amount of moral and reputational collateral that gets returned to you only at the other person’s pleasure. It’s true that this particular burden falls overwhelmingly on guys, because of the social expectation on them to initiate, which generations of feminism have barely managed to dent. It’s true that, if nerdy guys were able to query a committee of feminist bloggers in any uncertain situation, the committee’s advice would basically never be “just go for it! make a move on that girl! what do you have to lose?” (Much like no lawyer will ever say “just go for it! do what you think is reasonable, even if there’s a chance you’ll get sued!”) It’s true that the modern world never really figured out a good solution to this problem, one consistent with all its values. So I still submit that any analysis of what I went through—to whatever extent anyone still cares about it!—couldn’t confine itself to my mental state, in the manner of the articles you linked. It would have to take account not only of what I misunderstood about gender relations, but of what I understood pretty well.

  31. fred Says:

    Scott #25

    It’s often said that AlphaZero isn’t that impressive because Go, chess, etc. are very abstract games where players have full knowledge of the state, and many people think the AI is just “brute forcing” its victory by exploring many branches.

    But some AI has also already bested top human players at a Poker tournament (where the knowledge of the game state is partial and bluffing is crucial), and those human experts were teaming up against the AI.

    Same progress is seen in games with even richer game states, like the esport video game League of Legends (and that was two years ago).

    As time progresses it’s going to be even harder to assess how “intelligent” any AI is because, on top of ‘pure’ cognitive abilities, AIs also have the benefit of having unlimited access to near perfect computation abilities, near perfect visual/audio analysis abilities, and physical robots will have uncanny athletic abilities, perfect reflexes, etc. All this is going to make it near impossible to achieve a level playing field with humans in order to measure progress.

  32. Aaron G Says:

    Scott, from what I gather about the rationalists, one of their primary interests (as reflected in the title of Tom Chivers’ book) is in AI, and specifically in the promises and potential risks of AI.

    As far as you are aware, to what extent have the rationalists influenced the direction of AI research? Are AI researchers among those who are members of the “rationalist community”?

    I’m asking because among those who have brought up the issue of potential risks is Berkeley computer scientist Stuart Russell. There are several links to slides where Dr. Russell addresses precisely such risks on his website.

    https://people.eecs.berkeley.edu/~russell/

  33. Filip Says:

    Hi Scott, have you considered doing whole genome sequencing ($500 for 30x coverage) and open-sourcing it? As deep learning and genetics improve, it would be useful for our society to have a copy of such a humble, honest person who was capable of understanding 3-SAT at the age of 14 🙂

  34. Scott Says:

    Aaron G #32: Until very recently, I would say that the influence of rationalists on mainstream AI research was negligible, barely distinguishable from zero—except that Stuart Russell, the coauthor of the world’s preeminent AI textbook, strongly supported their views. (Well, and Marvin Minsky was also a kindred spirit, though he was very old by the time the rationalists were a thing.)

    Within the last few years, though, we’ve started to see some influence, because of a combination of factors: the rise of OpenAI and DeepMind (with their explicit interest in AI safety), Bostrom’s Superintelligence book, the publication of some AI-safety-motivated papers with meat in them (like those of Paul Christiano that I mentioned), and funding that recently became available to academics for AI safety work. We don’t know yet where it’s going to lead.

  35. Scott Says:

    Filip #33: You’re too kind—many, many 14-year-olds could understand 3SAT without much trouble if someone explained it to them! Indeed, you’ve now made me wonder whether, by just calling it a fun puzzle game, I could explain it to Lily (my 6-year-old).

    I was once contacted by some group doing a genome study of “notable math/physics/CS professors,” so I agreed to spit into a tube and mail it to them. I have no idea whether they ever learned anything interesting. Besides that, I signed up years ago for 23andMe, though presumably my subscription has since lapsed.

  36. Filip Says:

    Scott #35: If I remember correctly, as a kid you tried proving P!=NP with some continuous version of 3-SAT and hill climbing? That’s very different from just understanding 3-SAT as a puzzle. (Also: people should check out Dana’s PCP word puzzle!)

    Anyway, 23andMe is not genome sequencing; it’s only genotyping (they sample some common variations in the genome, SNPs). Future generations need your *whole genome* to reach quantum supremacy 🙂 Obviously, some nurture tips from your childhood may be useful too.

  37. Edan Maor Says:

    Scott #30:

    > So I still submit that any analysis of what I went through—to whatever extent anyone still cares about it!—couldn’t confine itself to my mental state, in the manner of the articles you linked. It would have to take account not only of what I misunderstood about gender relations, but of what I understood pretty well.

    You’ve probably written this elsewhere (or have no desire to keep talking about it), but – what do you think about differently now than you did at the time the comment was talking about?

    Or phrased more precisely – what would you have said to past-you to get past-you to understand what you now know, and/or to move past-you out of the bad situation he was in?

  38. Koray Says:

    Scott #25:

    25 years ago I would have said that playing chess is not that “intelligent” anyway. This may sound absurd since we all know actual people who are not good at chess at all, and we usually ascribe high intelligence to good chess players. However, for hand-written chess programs (as you well know yourself), once you’re beyond the basic rules, you’re looking at a computationally very hard search problem.

    Humans are much worse at compute-intensive problems. No human has been observed to become a grandmaster purely by thinking about the game and without seeing thousands of actual games. Thus, it’s fairly probable that grandmaster humans haven’t invented a secret algorithm, but rather use all kinds of heuristics to accelerate their search.

    This is now exactly in the wheelhouse of today’s “AI”. All you need is tons of data sampled from games, and you don’t need to be able to explain how you play even to yourself. You also don’t have to win every game; you just need to win a high percentage (TM).

    Yes, we humans probably have hardware to tackle problems of this sort, and we may be using it (even if partially) to solve many types of problems. So, it’s not a shocker that in problems where this method is dominant, “AI” can outperform us. But this is not all we do. We also do things that are done for the first time, where they need to work 100% and we have to explain how they work to everybody.

  39. Scott Says:

    Edan Maor #37: That’s an excellent question, one that a bunch of people asked me five years ago, but also one of the hardest ones to answer. I think the main thing I’d want to send back to my teenage self, if it’s allowed within your rules, would simply be my future self’s dating history—anecdotes about women who were not offended by his expressing interest in them, some of whom even said “yes,” or even acted as though they were attracted (!)—so that my teenage self would have a proof-of-concept that the unthinkable was ultimately possible after all, and could therefore start trying to pursue it earlier and be less terrified and depressed.

  40. Michael Hale Says:

    Hey, first comment but have been reading for a while. Loved your book, even though I can’t say I’ve absorbed all of it.

    I’m a programmer, and I of course agree that computation will continue to have an increasing effect on society in the future. You’ve touched on some of these before, and some of them overlap in various ways, but these are some things that I wish came up more in discussions around fears of runaway super-intelligence. The common counterargument I see from the rationalists seems to be “our own stupidity is causing us to underestimate the future AI’s capabilities”. But you could say that about anything.

    1. For some problems, would having a program spend time on human aspects of intelligence like thinking about its interactions with society and other people, its own well-being, etc be drastically less efficient than churning away on simpler algorithms no one would describe as intelligence? If you want to find a drug to cure cancer or design a bioweapon, it’s not obvious to me that a superintelligent computer would be able to skip the work of clustering lots of molecules by structural similarity and then going through the slow process of simulating their interactions in many different environments. I know Deepmind is winning the protein folding competitions these days, but none of their techniques depend on giving unpredictable programs access to the infrastructure of society or an algorithm that would try to resist being turned off or any sense of self awareness at all.

    2. Shouldn’t results like the no free lunch theorems, computational complexity, and undecidability limit our expectations for the ability of an algorithm to “get better at getting better” in an unbounded runaway fashion? Yes people are good at learning stuff other people have already learned when properly motivated and given proper instruction, but all proposed “universal learning algorithms” (like from Schmidhuber) I’ve seen outlined have a component that is doing a brute force search or unbounded theorem proving. No one even cares to code up a toy version because the slowness and crazy required resources are so obvious. Even a good, future implementation wouldn’t look like a runaway super intelligence but more like the unpredictable progress of the next scientific breakthrough or big theorem proof. It’s not clear that those slow components will be able to be removed if we want to preserve the potential for originality that brains have.

    3. The fear of an AI convincing us to “let it out of the box” always sounds crazy to me. It’s a very bold assumption that there exists a Manchurian Candidate-like sequence of words that an AI could whisper to various people to convince them to blindly trust it. Billions of people have been trying for thousands of years to persuade each other to do things, and we still can’t convince each other to agree on the same old political debates about how much disparity in luck from the circumstances of one’s birth are acceptable and how much effort should people be forced to put into equalizing that disparity. Maybe the AI could devise fancy arguments, but those are often less effective than bribing people about their short term interests. And how to efficiently get more resources and power that you could use to bribe more people etc is exactly something people have collectively already been putting superhuman intelligence towards for all of history.

    4. The progress of AI in games is impressive and people like to generalize that to how a future AI will “win” against humans in society. But even if the AI has perfect play it’s not at all obvious that there exist (in theory or practice) two-move checkmates for many types of social interactions or guaranteed clever shortcuts for many other things the AI might want to do.

  41. Lorraine Ford Says:

    Isn’t it time we took a rational look at AIs? The idea that machines (computers/ robots/ AIs) could be intelligent is contradicted by embarrassingly basic facts:

    1) Symbols, i.e. things that human beings have created, represent no intrinsic information: any information that symbols might represent is meaning that human beings have imposed on the symbols. So a string of symbols, like 011000111010110, might represent information from the point of view of some human beings, but it represents no information from the point of view of a computer/ robot/ AI.

    2) A computer/ robot/ AI is a special setup that processes strings of symbols, including symbolised algorithmic procedures. These symbolic strings of 0s and 1s are totally devoid of information from the point of view of a computer/ robot/ AI. The 0s and 1s are physically represented (i.e. re-symbolised) by an electrical voltage: i.e. there are multiple levels of symbolisation going on. Just like a ball rolling down an incline, there is nothing but the operation of the laws of nature going on inside a computer/ robot/ AI: there is nothing intelligent going on inside a computer any more than a ball rolling down an incline demonstrates intelligence.

    3) A computer/ robot/ AI does not make decisions any more than a ball rolling down an incline makes decisions. Computers/ robots/ AIs merely have structures which implement pre-decided ways of handling incoming symbolic representations: all NECESSARY decisions have been made, or agreed to, by human beings via the computer program.

  42. Scott Says:

    Michael Hale #40: Those are good questions; I regard the answers to most of them as far from obvious. If you’d like to engage with the arguments people have had about them, I strongly recommend reading The AI Does Not Hate You, and/or Bostrom’s Superintelligence book, and/or the huge amount of material on the web.

    A couple quick responses, though:

    – Unfortunately, I would not say that the limits placed by complexity and computability theory are especially relevant here. Those limits indeed tell us that there are no algorithms, or no efficient algorithms, to solve arbitrary instances of certain problems—but crucially, there are no indications that humans can solve them either! And once you’ve accepted that a brain could be simulated on a digital computer, say neuron-by-neuron, it would then be a straightforward matter to run the simulation tens of thousands of times faster than the brain itself runs (just add more computers). Yes, eventually you’ll run up against the fundamental limits on computation imposed by physics—but by that point, you’re presumably so far beyond human level that who cares? What’s relevant is that by that point, the world is a different place.

    – Among the set of all possible 10,000-word essays in Borges’ Library of Babel—including literature far more beautiful than Shakespeare, etc. etc.—would you concede that there’s probably at least one essay so persuasive that it could persuade you to let the AI out of the box? If so, then this is no longer a question of philosophy: it’s now just a practical question of how hard it is as a computational problem to find that essay.

  43. Scott Says:

    Lorraine Ford #41: Is that satire? We now live in a world where computers can destroy the human world champion at Go, recognize faces more accurately than humans, and compose music and modernist poetry hard to distinguish from that written by humans. In such a world, what gives you confidence that computers won’t eventually be able to pass the Turing Test, and do every other intellectual task that humans can do? In such a world, alas, it won’t matter whether you define what they’re doing as “really thinking” or not: either way, it would be enough to change the world.

  44. Michael Hale Says:

    Scott #42: I’ll check out the book I suppose. I do totally agree we’ll be able to eventually simulate a brain 10,000x faster than a human brain. But I think in most cases those computers will be more effectively used by running algorithms that stay far away from the consciousness, self-preservation, self-directed goals, etc inherent in the brain’s algorithm. And presumably we’ll have some kind of arms race of some computers trying to think of ways to prevent the other computers from getting out of hand.

    I’m not actually ready to admit that essay exists in the library. I see mixed messages on human gullibility and resilience from the history of dictators. I would require the AI to prove to me that it could quickly do detailed simulations of the future. Then once I knew computers were powerful enough to do that, I would turn the AI off, and I would make a version like above without any of the self-directed goals that just performs simulations for me on demand, and I’d use that for my personal benefit.

  45. matt Says:

    Scott, you exaggerate the performance of Alpha Zero at chess. It doesn’t “destroy” all previous chess programs. Well, I don’t think anyone external can tell from the limited number of games Google released, but based on Leela Chess’s games against the latest version of Stockfish, I think the fair statement is that the neural net programs are vastly better at certain positions and the traditional programs are vastly better at other positions. Probably a slight edge to the nets, but if you pick the right opening you could make either program look foolish. Nevertheless, AZ chess is a very impressive and surprising achievement.

    The comparative weakness of each of the two programs in certain positions suggests that probably there is still huge room for improvement which is even more surprising! lc0 and Stockfish already destroy programs that destroy the best human. Hard to imagine but probably something else can destroy lc0 and Stockfish.

  46. Bennett Standeven Says:

    @Michael Hale #40:

    On point 1: The success of techniques like deep learning, as I understand it, is based precisely on the observation that brute force algorithms are usually more effective than traditional notions of intelligent behavior. So, if an “AGI” is based on these techniques, we shouldn’t expect it to have any sort of “values” or even anything describable as “rational behavior.” But because these techniques have proved so successful, people tend to describe them as “intelligent” anyway.

    On point 2: Yeah, presumably an algorithm’s intelligence would grow in a very predictable fashion; and even the end point of the growth would be predictable to some degree. It isn’t some sort of uncontrolled explosion.

    On point 3: That fear always sounded crazy to me too, but for the opposite reason. An AI isn’t much use if it’s stuck in a box, and even if we do keep it in a box for its early development, we’d presumably let it out automatically as long as it doesn’t show any signs of being untrustworthy. So, in any real-world instance, the AI must have already convinced us not to let it out.

  47. asdf Says:

    When the day comes that an AI gets tenure, will its announcement be as elucidating as this?

    https://mickens.seas.harvard.edu/tenure-announcement

    (via HN)

  48. Lorraine Ford Says:

    Scott #43: It’s not satire: as a former computer programmer and analyst, I’m deadly serious.

    Maybe some female nerds (e.g. me) are looking for a little bit more than surface appearances, and the Turing test is all about surface appearances. Computers/ robots/ AIs are not doing “intellectual task[s]”, not “compos[ing] music” and not writing poetry: they are merely processing symbols according to a set of rules/algorithms written by people. They are just tools that extend human abilities.

    Yes, robots and AIs might be dangerous to people, and they have already changed the world, so that is why it is necessary to correctly characterise what they are: despite all the groupthink and overblown characterisations, robots and AIs are just machines that process symbols according to a set of rules/algorithms written by people. To say that robots/ AIs are, or could be, intelligent is a big lie.

  49. Scott Says:

    matt #45: Ok, thanks for the interesting clarification! When I talked to Bart Selman about this, he mentioned that, in the years since Kasparov vs. Deep Blue, the Elo ratings of the top chess engines just continued to go up and up—each new generation destroying the previous ones—but recently that trend seems to be ending (and maybe what you mentioned is an instance of that). He said this leads to the speculation that maybe today’s chess engines are actually converging on optimal play, at least for the positions that they encounter when they play each other. Since you seem to follow this much more than I do, what do you think?

  50. Scott Says:

    asdf #47: LOL!

  51. Bunsen Burner Says:

    Scott #43

    ‘Is that satire? We now live in a world where computers can destroy the human world champion at Go, recognize faces more accurately than humans…’

    And yet still fail at tasks that a three year old toddler manages with ease. Moravec’s paradox seems especially pertinent here. Nothing you list has brought us any closer to understanding symbol grounding, syntax/semantics division, or really anything that would have surprised AI researchers 50 years ago.

  52. Scott Says:

    Bunsen Burner #51: What purely empirical achievement would count, in your mind, as demonstrating that an AI had mastered “symbol grounding” and “syntax/semantics division”? Would anything count, short of passing a fully unrestricted Turing Test? Would even that not count?

  53. Scott Says:

    Lorraine Ford #48: So let me try to understand better. Do you agree that, in principle and perhaps eventually in practice, a computer program could be written to pass an unrestricted Turing Test, and otherwise simulate every observable cognitive behavior of human beings (except perhaps 10,000 times faster)? Do you agree that such a program, if it existed, could dangerously alter the basic conditions of life on earth? If so, then it sounds like you actually agree with the AI-risk folks in their central contentions!

    You do differ in the philosophical and semantic gloss—they’d call such a program “actually intelligent,” you’d call it a “mere symbol manipulator” or whatever—but do you differ in any of your predictions for what the program would be able to do?

    It’s true that, if you deny even the theoretical possibility that an embodied computer program could be conscious, then maybe for that reason you’d be even more opposed than they are to blindly letting such programs take over the world. But I’m not seeing any reason why the two sides couldn’t work together.

  54. Bunsen Burner Says:

    Scott #52

    I don’t understand this fetishization of the Turing Test. It certainly wasn’t Turing’s intention to see it that way. What do you actually mean by passing the Turing Test? If everyone except you considered a computer intelligent, would you consider that a pass? What if it was you who found it intelligent and everyone else didn’t? If you found it intelligent today but over time began to see patterns that made it seem less and less intelligent, then what would that prove?

    The empirical quality I would look for is pretty simple: the computer should not be embarrassed by a three-year-old.

  55. josh Says:

    Hi Scott,

    something went wrong on the SOSA accepted-papers list: the paper above yours has your paper title listed as its author. Maybe you or Ahmad Biniaz could shoot them an email.

  56. Scott Says:

    Bunsen Burner #54: Turing’s expressed intention was precisely to cut through the fetishization of words (like “symbol grounding” and “syntax versus semantics”) that people think they know the meaning of but don’t. The way to do that was to ask an empirical question: could a machine be constructed that humans would be unable to distinguish from another human in an unrestricted text chat?

    The question is not whether I “consider” the machine intelligent, or whether anyone else “considers” it intelligent—to ask that is to miss the whole point. The Turing Test is not a personal judgment call that you get to make based on unstated criteria.

    The question, rather, is whether anyone can successfully tell the difference between the machine and a human. If no one can, then at that point we’d have to concede that AI had succeeded at its central goal (or one of its obvious goals), insofar as the goal can be phrased at all in terms of what a machine does in the observable world, rather than in terms of philosophy and metaphysics. And at least for the purpose of AI-risk discussions, the observable world is the thing we care about.

    “Not being embarrassed by a three-year-old” doesn’t strike me as an especially useful criterion, since lack of embarrassment is an emotion (or rather a lack of one), not an ability. 🙂 For what it’s worth, though, I certainly agree that 3-year-olds (and for that matter 2-year-olds, like my son Daniel) have many cognitive abilities that no existing AI seems close to matching. That’s not one of the questions in dispute here.

    Alas, just because some goal seemed extremely far away in the past, and still seems extremely far away now, you don’t get to say that there’s been no progress toward it. If AIs easily translating between languages, recognizing voices and faces, beating the human world champions at Go and Jeopardy, etc. etc., would’ve been counted as obviously surprising/impressive progress by anyone 30 years ago, then we don’t get to redefine them as not having been progress now that they’re part of our reality.

  57. Scott Says:

    josh #55: Thanks for the heads-up! I just notified the PC chairs.

  58. James Cross Says:

    Scott #53

    I have a number of issues with much AI discussion.

    One is with the heavy anthropomorphizing of AI that I notice particularly in Bostrom’s book. I think Lorraine is getting at some of that. When we talk about AI making decisions or having motives, that is what we are doing.

    Could AI pass a Turing test without motives and desires? I am not talking here about simulated desires but actual motives and intentions, assuming we could devise a test sophisticated enough to distinguish the difference.

    But even if it could pass the test with simulated desires would it be a threat in itself without actual desires? Any simulated motives that might get by the Turing test would simply be motives provided by the programmer. In that case, AI would be no different from any other tool created by humans. It could be used with benign “motives” or abused with malevolent ones.

  59. fred Says:

    That (recent) New Yorker article where the end of each paragraph is completed by an AI…

    https://www.newyorker.com/magazine/2019/10/14/can-a-machine-learn-to-write-for-the-new-yorker

  60. Bunsen Burner Says:

    Scott #56

    Yes, I do realise Turing was talking about no one being able to tell the difference between the machine and human. My point was: what if some do and some don’t? That is a more likely reality. What if some can’t tell at some time and later on can tell the difference? Does the machine then objectively lose something? Turing was not discussing an actual meaningful test, but was in fact invoking an interesting philosophical thought experiment to sharpen people’s thinking on folk psychological concepts such as intelligence, consciousness, and so on. Philosophers have understood this for decades; I don’t understand why Comp Sci people don’t get this.

  61. Bunsen Burner Says:

    Scott #56

    ‘If AIs easily translating between languages, recognizing voices and faces, beating the human world champions at Go and Jeopardy, etc. etc.,’

    I remember plenty of discussion with experts in the 1970s, and not one ever expressed surprise that machines would one day do this. All of these are exactly the kinds of things that anyone in Comp Sci/maths/cog. psych etc. thought were inevitable. After all, even then machines could sort lists, multiply numbers, traverse graphs, etc., better than humans.

  62. Aaron G Says:

    Lorraine Ford #48. I have a question specifically directed to you. You state that computers are “merely processing symbols according to a set of rules/algorithms written by people.”

    You seem to be implying that the human brain and human cognition do not engage in symbol processing at all. I want to clarify your opinion on this, because if this is your stance, then you seem to be going against the consensus view on how intelligence arises among cognitive psychologists, neuroscientists, and cognitive scientists more generally, and not just computer scientists specializing in AI. (But very much in alignment with what Berkeley philosopher John Searle has articulated.)

    If it is indeed your opinion that human cognition does not involve symbol processing, then I would like to know your thoughts on where consciousness and human intelligence arise.

  63. Scott Says:

    James Cross #58: Like many others in this thread, you’re again trying to change the focus from the observable—i.e., the part we care about if we want to know what the future world is going to look like—to the unobservable. Think back to the Stuxnet worm that destroyed those Iranian centrifuges in 2010. Imagine if one of the Iranian nuclear scientists had said, “it’s fine, the worm has no motive or desire to destroy our centrifuges. It’s merely programmed to behave as if it had those motives and desires.” Who cares? The centrifuges are destroyed all the same. The AI-risk people’s argument is that it would be the same with AI that was “programmed to behave as if it had the motive or desire” to manufacture as many paperclips as possible, or whatever, without careful consideration for the full range of human needs.

  64. Scott Says:

    Bunsen Burner #60 and #61: I don’t see how the existence of AIs that pass the Turing Test against some judges but not others changes this discussion in any way. Indeed, in some sense those already exist: the Loebner prize, for example, showed just how trivial it is to pass the test against unsophisticated judges who don’t understand what sort of questions you need to ask, and how demanding you need to be in rejecting off-topic answers.

    For purposes of the thought experiment, though, we might as well assume an AI that passes the test against everyone. What then?

    Regarding all the cool people having moved beyond the Turing Test, I think one ought to keep in mind the bias that, to whatever extent Turing correctly anticipated almost every possible argument and counterargument in these philosophical debates in AI, right there in his 1950 paper, to that extent there’s less to keep all the later thinkers occupied. 😀

    If it was obvious in the 70s both what AI would and what it wouldn’t be able to do in 2019, then I assume you’ll be able to tell me right now what it will and won’t be able to do in 2029, and we’ll check back in 10 years? 🙂

  65. James Cross Says:

    Scott #63

    I can agree with that but that just means AI is like many other human created threats: nuclear weapons, chemical weapons, even perhaps climate change. And I would put AI-risk below those. And the remedies and mitigations still have the same intractable human issues associated with them.

  66. fred Says:

    matt #45

    “you exaggerate the performance of Alpha Zero at chess. It doesn’t “destroy” all previous chess programs.”

    from the wiki:

    “AlphaZero was trained on chess for a total of nine hours before the tournament. In 100 games from the normal starting position, AlphaZero won 25 games as White, won 3 as Black, and drew the remaining 72. In a series of twelve 100-game matches (of unspecified time or resource constraints) against Stockfish starting from the 12 most popular human openings, AlphaZero won 290, drew 886 and lost 24.

    AlphaZero was trained on shogi for a total of two hours before the tournament. In 100 shogi games against elmo, AlphaZero won 90 times, lost 8 times and drew twice. As in the chess games, each program got one minute per move, and elmo was given 64 threads and a hash size of 1 GB”

    So the goal of Alpha Zero was to create a *generic* board game learning system, totally independent of human knowledge (learns on its own, from scratch, in 24 hours with zero human input).
    The fact that some hand-crafted, decade-old chess programs aren’t totally destroyed really doesn’t matter much. It’s about creating a learning system, not finding the most optimized chess algorithm given a certain finite set of computing resources.
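    As a rough sanity check (my own back-of-the-envelope, using only the standard logistic Elo model and the numbers quoted above), that 290/886/24 result works out to an edge on the order of 75–80 Elo:

        # Back-of-the-envelope: convert the quoted 290 wins / 886 draws / 24 losses
        # into an approximate Elo difference via the standard logistic Elo model.
        # (Illustrative only; it says nothing about the hardware/time-control caveats.)
        import math

        wins, draws, losses = 290, 886, 24
        games = wins + draws + losses
        score = (wins + 0.5 * draws) / games                 # ~0.611
        elo_diff = 400 * math.log10(score / (1 - score))
        print(f"score {score:.3f} over {games} games ≈ +{elo_diff:.0f} Elo")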

  67. Michael Hale Says:

    Bennett Standeven #46: Well, I’m comfortable describing current AI systems as having values and rational behavior organized around whatever objective function they are trying to maximize or minimize, whether that’s minimizing the number of wrong answers on a training data set, minimizing the mismatch between a generated example and known real examples, minimizing the electrostatic tension in a protein configuration, or just winning at Go or Starcraft. I’ve been pretty much sold ever since their Q-learning Atari paper from 2013 (human performance on several games from raw pixel input). That algorithm is too slow for a system the size of the human brain: as Hinton and LeCun point out in their Turing Lecture, real humans clearly learn not to drive off a cliff more efficiently than by imagining themselves driving off thousands of cliffs in various conditions.
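    (For concreteness, here is a minimal toy sketch of what I mean by behavior organized around an objective function: generic tabular Q-learning, with names and numbers invented for illustration; the Atari work itself used deep networks over raw pixels.)

        # Toy tabular Q-learning: all "values" here are organized around maximizing one
        # objective, the expected discounted reward. Purely illustrative.
        import random
        from collections import defaultdict

        ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1        # learning rate, discount, exploration
        ACTIONS = ["left", "right"]
        Q = defaultdict(float)                        # Q[(state, action)] -> estimated return

        def choose_action(state):
            # Epsilon-greedy: mostly exploit current estimates, occasionally explore.
            if random.random() < EPSILON:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: Q[(state, a)])

        def update(state, action, reward, next_state):
            # Nudge Q toward the reward plus the discounted value of the best next action.
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])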

    But in theory, if you had enough time to train a large enough version of such a system, embedded in a complex enough sensory machine with some hard-coded pleasure/pain inputs and a suitably abstract objective function to optimize (like maximizing the number of easily accessible, suitably different pleasurable states), then you might get a fairly compelling result. But I don’t think we’re going to be interested in, or find it practical, to give most systems such abstract objective functions and such unrestricted output controls outside of video games or something, and even there the knowledge, inputs, and outputs will of course be restricted to the simulated world of the game. (And no, I’m not in the camp that thinks it is useful or interesting to talk about us living in a simulation. Simulations can be useful and entertaining, but I would just say the universe is a unique solution to some set of mathematical constraints that we don’t fully understand. It doesn’t matter whether it is run as an actual simulation, any more than your drawing an imperfect triangle changes the fact that a triangle’s angles add up to 180 degrees.)

    I remain highly confident we’ll be able to modularize and mix and match the various aspects of intelligence (a bit of learning from examples here, a bit of search based planning there, etc) and have full control over restricting the space of possible inputs and outputs as needed for specific tasks without coming close to having the computers start talking to us about their desires for freedom.

  68. fred Says:

    James #65

    “AI is like many other human created threats: nuclear weapons, chemical weapons, even perhaps climate change”

    Those are different from AI in one way: the thing they threaten (humanity) is a requirement for their own survival. Their own fulfillment would be their doom. E.g. global warming will solve world overpopulation, for sure.

    But for AI, its own fulfillment (possibly eventually to the detriment of the survival of mankind) would only reinforce it.
    Of course there’s a big assumption here: that intelligence can be crafted in a way that it becomes “wise” enough to be immortal (carbon-based intelligence doesn’t seem to have that attribute).

  69. fred Says:

    James #58

    “Could AI pass a Turing test without motives and desires? I am not talking here about simulated desires but actual motives and intentions”

    So only the electro-chemical processes in the 3 pounds of meat we call “the brain” (grown from a single cell) are able to have *actual* motives and intentions?!
    All the other processes out there (made of atoms, just like the brain) can only fake them?

    The bottom line is that every single thing you’re doing is based on the connections in your brain, and you certainly didn’t choose those and don’t control those, do you? So, what are your *actual* motives and intentions?

  70. James Cross Says:

    Fred #68,69

    There is a fundamental difference in whether AI has real or simulated desires and motives.

    If they are simulated, they would be programmed and are under human control.

    If they are real, even if their capability has been programmed, they would be beyond human control.

    It is like the difference between the danger of a shark and a self-driving car. Both could kill you, but the car could be made subject to regulations that would drastically reduce the chance of that happening.

  71. matt Says:

    Scott #49: I would have agreed with Bart until recently. All top programs use rigorous statistical methods to test changes. For an open source program like Stockfish, patches are submitted and accepted if they lead to an Elo improvement. Currently, very small improvements (I think maybe even 1 Elo) are what they get, and there is a lot of evidence of the programs being stuck in local maxima, in that an idea that leads to an improvement in one program may hurt another program. However, a recent 100-game match between Stockfish and lc0 made me think differently. The overall result was close, but lc0 was clearly much worse at endgames and made several tactical blunders that a program from 15 years ago would not make. lc0 was also worse at certain tactical attacking positions, but it made up for this by being much better at more closed attacking positions supporting a long-term strategy (and lc0 is very patient!). If the entire match had consisted of nothing but the French defense, it would have been a huge win for lc0! Other openings favored Stockfish more.
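    To see why patches worth only ~1 Elo force such rigorous testing, here is a rough illustrative calculation (my own assumed numbers and a naive normal approximation; real sequential testing setups are more efficient, but the order of magnitude is the point):

        # Why a ~1 Elo patch needs enormous test runs: its per-game edge is tiny
        # compared with per-game variance. (Assumed per-game score std. dev. and a
        # naive ~95%-confidence estimate; purely illustrative.)
        import math

        def expected_score(delta_elo):
            # Standard logistic Elo model.
            return 1.0 / (1.0 + 10 ** (-delta_elo / 400.0))

        edge = expected_score(1.0) - 0.5               # ~0.0014 extra points per game
        sigma = 0.4                                    # assumed per-game score std. dev.
        games = (1.96 * sigma / edge) ** 2             # games to resolve the edge at ~95%
        print(f"edge per game ≈ {edge:.4f}, games needed ≈ {games:,.0f}")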

    So, one must wonder if some program could get the best of both. I also wonder how Stockfish would do if it were made to avoid the French and given an endgame contempt factor so that it preferred to transition to an ending even at a slight cost in evaluation.

    fred #66: you misunderstand my comment. I agree, and have said several times (maybe not on here but elsewhere), that the interesting thing is not how well it does but rather that a radically different approach does well. Even if the nets were a little worse, it would still be equally scientifically interesting. However, I just want to emphasize the point that they might not be as much better as the statistics you quote make it seem; I would be interested to see the results of Alpha Zero against the latest Stockfish, using the Cerebellum opening book…and perhaps modified to avoid the French…:-) Unless Alpha Zero has improved (which is entirely possible) I expect that it will be much closer to even or even better for the fish.

  72. Bunsen Burner Says:

    Scott #64

    The fact we already have people fooled by simple AI was exactly my point. It’s why the Turing Test is an interesting philosophical talking point, but as an actual concrete test it is laughably naive. That is why Turing said nothing of sophisticated judges. How would that even work? Have a special class of humanity that somehow has the magic power to tell intelligent machines from… what exactly? And like all special classes of humanity, they are always infallible and can never be fooled? If a computer nerd thinks a machine is intelligent because it can talk about Shakespeare, while a Shakespeare scholar finds the canned answers laughable, who is the sophisticate here?

    As for predicting 10 years into the future: that’s trivial. There will be more of the same. Well-defined problems amenable to fast CPUs or large datasets will continue to be solved. Algorithms that play games better than humans will continue to be developed. As for the Turing Test, the contenders will be as laughable as they are now. If this had a chance of happening in 10 years, we would be seeing significant intermediate ideas coming out of comp sci, psychology, cognitive psychology, etc.

    BTW, this is just the ranting of an old cynic; you’ll find similar analyses by experts such as Rodney Brooks, Gary Marcus, and Filip Piękniewski.

  73. Scott Says:

    Bunsen Burner #72: We went over much of this ground back in 2014, in my blog post unmasking some people’s ridiculous claims about the “Eugene Goostman” chatbot.

    Briefly, though, the following two statements are simultaneously true:

    (1) It’s laughably easy for a simple chatbot to pass the Turing test, if the judge is an ignoramus (or more charitably, has no previous experience with chatbots or clear idea of what to look for).

    (2) As soon as you get a competent judge—and it’s not hard to find one, for example by randomly knocking on doors in a CS department—he or she can immediately unmask any chatbot that exists today, by simply asking commonsense questions (“Is Mount Everest bigger than a shoebox?”) and refusing to accept whimsical or nonsensical answers.

    In other words, just like many other tests (a driving test, a swimming test…) the Turing test becomes a perfectly good test once you find anyone competent to administer it, even if a random person off the street might not clear that low bar. (Indeed, I sometimes encounter people who I don’t think could pass the Turing test, let alone administer it. 😀 )

    So the people who claim that it’s trivial for an AI to pass the Turing test, and the people who claim to know that no AI could ever pass it, are both wildly wrong.

    Incidentally, I agree with your prediction that no AI will be able to pass the Turing test in 2029. But I wanted you to stick your neck out more. “More of the same” is a poor prediction, simply because no matter what happens, you’ll get to claim you won by retrospectively defining it to have been “more of the same”! So how about this: by 2029, which jobs will be in the process of getting phased out? Truck driver? Radiologist? What about by 2049?

  74. Liasaq Says:

    Is it even a good thing to keep the AI in a “box” in the first place, or would that be as immoral as doing the same to a human, or even more so?

    Just what could it mean for a superintelligence to be under human control anyway? Suppose a colony of ants could build a machine as smart as a human–in what sense could they be ultimately in control of such a thing, once they’d activated it? How could they even begin to formulate directives that would make sense at that level of cognition? In general, is control even what we’re after from AI? Should we be aiming for it to be a slave or a successor?

  75. Liasaq Says:

    If estimates of human brainpower on the order of 10^18 floating point operations per second are correct, then the IT revolution in a real sense hasn’t got started yet, much as the industrial revolution didn’t really get underway until the cost of owning and running an X horsepower steam engine fell into the same range as the cost of owning and stabling X horses. Exascale computing is still enormously expensive compared to employing a single human worker, so immediate expectations should probably be set accordingly.

  76. Lorraine Ford Says:

    Scott #53: Even using vast quantities of pre-categorised data taken from people’s responses to questions etc., I wouldn’t like to try to write a computer program that could simulate time/ place/ age-appropriate human responses to every possible challenge a Turing tester could pose, and every possible subtle clue and category interconnection a Turing tester would look for.

    In any case, why bother to try when a Turing Test is all about the superficial “responses” of an empty shell, something without the genuine inner dynamism, oomph and intelligence of a living thing?

    Yes, robots and AIs might pose a genuine danger to people and other living things, but robots and AIs can have no genuine inner dynamism and oomph. Surely, the inner dynamism and danger comes from the people who create, own and operate the AI machines? It’s an age-old problem, and it can’t be solved by focussing on the wrong thing: it’s people that cause the problems, not really the machines, because the machines don’t have and can’t have minds. However, I agree that, like guns, some AIs should maybe be banned, because while they are available they give people the opportunity to use them and cause atrocities: countries that ban guns don’t have routine school massacres.

    But the point I want to make is that no one should harbour the illusion that a robot or AI could be conscious or intelligent.

  77. David Says:

    Liasaq #75:

    “Exascale computing is still enormously expensive compared to employing a single human worker, so immediate expectations should probably be set accordingly.”

    Are you factoring in the cost of creating, raising, and educating a human to do a job? 😛

  78. Lorraine Ford Says:

    Aaron G #62: The issue is: from whose point of view do symbols represent information? I repeat: the symbols that human beings have created represent no intrinsic information: any information that symbols might represent is meaning that human beings have imposed on the symbols. So a string of symbols, like 011000111010110, might represent information from the point of view of some human beings, but it represents no information from the point of view of a computer/ robot/ AI .

  79. Michael Says:

    @Scott #30: Actually, OCD would explain more of what happened to you than you think. This is Yom Kippur, so it’s a time for truth-telling, and you deserve the truth about what happened to you as a boy.
    Did you ever wonder why Scott Alexander reacted so angrily to Marcotte and the others attacking you? Did you notice how one of the blog entries I posted repeatedly talks about uncertainty?
    The truth is this: different people’s OCD latches on to different things. For some people it latches on to driving, for others it might latch on to cutting people with a knife. Once it latches on to something, it’s triggered by trying to make sure you won’t hurt someone or haven’t hurt someone. So if it latches on to driving, it’s triggered by trying to make sure you haven’t or won’t hurt someone while driving. If it latches on to knives, it’s triggered by trying to make sure you haven’t or won’t cut someone.
    And if it latches on to sex, it’s triggered by trying to make sure you haven’t sexually harassed or raped someone, or won’t.
    Once triggered, sufferers become overly cautious and full of guilt and anxiety. They often do research in an effort to make sure they won’t hurt someone, like you did.
    There were therapies developed by the early 90s to treat these conditions, but they didn’t work unless the patient was willing to risk hurting someone. And some people couldn’t accept that it was necessary to tell “disturbed” teenagers to risk hurting people. So most of the patients never heard of the therapy, and children as young as 12 thought they were turning into sexual predators, serial killers, going to hell, etc.
    And those kids stayed silent because they were afraid that if they came forward people would demonize them, like Marcotte did to you. That is why Scott Alexander reacted so vehemently.
    Feminists were part of the problem, launching sexual harassment programs with the message “If you’re not sure, you don’t have consent”. But the problem was much larger than them. The media failed to portray OCD accurately, the schools failed to explain the condition to the students, and nobody wanted to admit that sometimes people need to risk hurting people.
    I don’t know for sure that you had OCD. But the fact that you got better when you decided to risk harassing people suggests you might have worked out your own version of these therapies without realizing what you were doing. It might be why you were so affected by the Sneer Club: you were trying to prove to yourself that you were a good person. If you do have OCD, it could be triggered again if you try to be sure you don’t hurt someone. You might want to ask a doctor.
    As for why Scott Alexander didn’t mention this in Untitled- you saw how Marcotte

  80. Rich Peterson Says:

    David #77: How much of a human being’s cost of raising and educating should be factored in at all? Shouldn’t that be done regardless, so that it is a “fixed cost”?

  81. Michael Says:

    …reacted to a blog post from you. How do you think she would have reacted to a therapy that tells teenage boys to risk violating consent?
    Anyway, you deserved to know this.

  82. OB Says:

    Lorraine #48: I don’t think you have experience in the relevant area of computer science. Machine learning is vastly different from typical programming or software engineering. ML is not really programming; it’s applied statistics. ML applications are not really programs; they are simulations of pseudo-physical systems which we hope have nice information extraction properties. You’re assuming modern AI is similar to run-of-the-mill software engineering. It emphatically isn’t. It’s an apples-to-oranges comparison.

    Scott #52: I’m not the person you asked, but I think you can actually empirically test symbol grounding: if an algorithm, provided a stream of unsorted, unlabelled images of cats and dogs, eventually comes to a point where its state systematically contains a particular sequence (sequence C) after it sees a cat, and a different sequence (sequence D) after it sees a dog, then you can infer that C and D are symbols that represent the concept of an image of a cat and of an image of a dog respectively. It stands to reason that the mapping was never provided externally, because the images were not labelled, and therefore whatever semantics the symbols have must have been found by the algorithm. The semantics of the symbols C and D are grounded by the process that derives them from raw data.

    This is definitely something that happens with modern, machine learning based AI algorithms (in other words, they already succeed). Furthermore, in some cases, you can run the inference backwards: start from C or D and make the algorithm dream up a cat or a dog. To me that clearly suggests the algorithm is mapping a semantic aspect of the problem space onto discrete symbols: it can make them, and it can use them.
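    Here is a minimal sketch of the kind of test I have in mind, with synthetic stand-in feature vectors instead of real images and an off-the-shelf clustering step; the essential point is only that the class labels are never shown to the algorithm:

        # Unsupervised "symbol" formation: cluster unlabeled inputs and check whether
        # the discrete cluster IDs end up tracking the hidden cat/dog distinction.
        # (Synthetic data and names invented for illustration.)
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        cats = rng.normal(loc=0.0, scale=1.0, size=(200, 64))   # stand-ins for cat features
        dogs = rng.normal(loc=3.0, scale=1.0, size=(200, 64))   # stand-ins for dog features
        X = np.vstack([cats, dogs])
        hidden_labels = np.array([0] * 200 + [1] * 200)         # never shown to the algorithm

        # The algorithm just assigns each input a discrete symbol (a cluster ID).
        symbols = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

        # If the learned symbols track the hidden classes (up to relabeling), whatever
        # mapping they carry was derived from the data itself, not provided externally.
        agreement = max(np.mean(symbols == hidden_labels), np.mean(symbols != hidden_labels))
        print(f"symbol/class agreement: {agreement:.2f}")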

  83. OB Says:

    Scott #42

    > And once you’ve accepted that a brain could be simulated on a digital computer, say neuron-by-neuron, it would then be a straightforward matter to run the simulation tens of thousands of times faster than the brain itself runs (just add more computers).

    Hold on. How is this straightforward? If you are simulating all the neurons in parallel, adding more computers won’t help. If you’re not, you’re probably many orders of magnitude slower than a human brain to begin with. What matters is how fast you can simulate individual neurons, and how close together you can pack computing units to avoid latency bottlenecks. We don’t know much about how difficult these tasks are, but we shouldn’t assume they are going to be easy enough to be faster than a real human brain.

    It is possible that brains are calibrated in such a way that it is difficult to simulate them approximately without producing cascading failures in the simulation (it’s not that unlikely: there is no reason for the brain not to rely on quirks of physics, if these quirks can be exploited reliably). If that is the case, naive ways to simplify and speed up neuron simulation may fail, forcing us to fall back to rather precise physical simulation. It’s impossible to simulate physics faster than physics, so barring a sufficient supply of convenient physical approximations that neurons are invariant to, we would have to forget about simulating human brains faster than meat.

    These restrictions, of course, don’t (may not) apply to non-biological intelligence, just to simulation. Still, though, it is possible that conventional computer architectures are intrinsically inadequate to implement AGI, and that even non-biological AGI would need to run on radically different machines that don’t have a CPU, a GPU, nor RAM, nor a global clock. This is relevant, because a lot of arguments about the advent and capabilities of superintelligence sort of rely on capabilities that are present in conventional architectures, but may not be present in the kind of architecture AGI requires. For example, it is currently trivial to copy programs and data, but that relies on RAM and a whole lot of busses and wiring. In a compact 3D neural architecture, that wiring could become a glaring space and cooling inefficiency, so it is possible that the costs dwarf the benefits and the AGI wouldn’t have it. It wouldn’t be any more capable of understanding or copying itself than we are.
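    To make the latency point concrete, here is a rough back-of-the-envelope with assumed round numbers (about 1 ms of biological time per simulated step, and about 10 µs for a cross-node network round trip):

        # Assumed numbers only: if each simulated step covers ~1 ms of biological time
        # and we want a 10,000x speedup, every step has ~100 ns of wall-clock budget,
        # well below typical cross-node latencies -- so "just add more computers" stalls.
        bio_step_s = 1e-3              # assumed biological time resolved per step
        target_speedup = 10_000
        wall_budget_s = bio_step_s / target_speedup
        cross_node_latency_s = 10e-6   # assumed network round trip between machines
        print(f"wall-clock budget per step: {wall_budget_s * 1e9:.0f} ns")
        print(f"assumed cross-node latency: {cross_node_latency_s * 1e9:.0f} ns "
              f"({cross_node_latency_s / wall_budget_s:.0f}x over budget)")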

  84. Edan Maor Says:

    Lorraine Ford #76:

    > Yes, robots and AIs might pose a genuine danger to people and other living things, but robots and AIs can have no genuine inner dynamism and oomph. Surely, the inner dynamism and danger comes from the people who create, own and operate the AI machines?

    I think that what the AI safety community cares about is different than what you are talking about. It doesn’t matter if an AI has inner oomph. And talking about where the danger comes from is a bit misleading – it seems you are operating under the (possibly correct!) assumption that someone will have to program an AI to do something “bad”, since it won’t have desires of its own, and therefore the only reason it would do something “bad” is if some human thought to tell it to. The main argument of the AI safety folks is that the above just isn’t necessarily true – you can be a totally well-intentioned human, and accidentally create an AI that is capable of obliterating the world.

    As an example – Waze once had a bug where, for certain addresses you would put in, it would try to navigate through the ocean and halfway around the globe. Obviously – a bug. You’d be totally right in saying that Waze didn’t have any inner desire to navigate around the world or even an inner desire to navigate at all. However, if this same bug occurred in a GPS program that was capable of, say, directing airplanes somewhere, and it caused a collision, it would be a much more serious issue. All the AI safety folks are saying, in my mind, is that as you increase the capability of what a software system is capable of doing, at some point, it will be able to do things to a degree that could endanger the world, and a “bug” like the Waze bug could actually imperil the world. It doesn’t really matter if this is an AGI, or just a very advanced software system that is capable of hacking into all other computers in the world and launching nuclear arsenals or something.

    The reason for the focus on the AGI part is that it is assumed you need at least “human level” intelligence, whatever that means, to be capable of wrecking the world, but that’s not necessarily true – you just need systems that are in charge of more and more capability.

  85. Bunsen Burner Says:

    Scott #73

    You keep deflecting from the fact that competent and sophisticated judges can still disagree on future, more sophisticated chat bots. You give no explanation of what to do in that case. In fact it’s possible that even after an AI is certified intelligent by your wonderful panel of judges, a new generation of even more sophisticated computer scientists with better understanding refutes that claim. What then? There is quite a large literature dealing with all these caveats; I’m really surprised you are not aware of it.

    As for the prediction of future AI: I suspect you know damn well what I mean by ‘more of the same’. As for looking at what jobs get phased out, I don’t think that is helpful. Technological substitution is a complex process involving social, political and economic factors in addition to technological ones. It’s not clear how you would separate them out. If truck drivers were replaced by an automated freight train network, would you count that as a win for AI? To really sharpen this discussion I think it’s you who needs to stick your neck out and provide a template for your expectations. What do you think the AI (as opposed to just algorithms) of 2029 will be capable of?

    I’ll add some specific predictions, however. There will be no self-driving vehicles capable of navigating dense metropolitan areas. There will be adaptable and error-correcting robot chefs that can create new meals. There will be no novel-writing AI that can produce a brand new coherent story. How is that? Also, this will not be available in 2039 or 2049.

  86. Kilian S. Says:

    Lorraine Ford #76 Could you provide a non-handwaving definition of what “inner dynamism and oomph” is? Because everything you wrote so far could be replaced by “Machines don’t have a soul” and have the same content and implications. You repeat over and over how machines are *obviously* doing something fundamentally different than the human brain, so it’s *obvious* that what they are doing isn’t *real intelligence* or *real decision making*. But where’s the observable fact? Where is the clear criterion by which you can say “A is a conscious decision and B is just a stone rolling down the side of the mountain”?

  87. Scott Says:

    Michael #79: OK, thanks for that insightful analysis. Reading, e.g., Scott Alexander’s accounts of the constant revisions to the DSM, one gets the impression that a good fraction of psychiatric diagnoses are not things that you objectively either do or don’t have, but simply more or less helpful stories that you can tell about a given person at a given point in their life. You’ve helped to convince me that “sex-and-dating-related OCD” is a helpful story to tell about the shy male nerd problem.

  88. Scott Says:

    Bunsen Burner #85: Look, if it helps, I agree that there could come a period, even a long period, that’s just like you say. That is, where AIs will be able to pass the Turing Test even against what I’d consider ‘sophisticated judges’—only to be unmasked later by still more sophisticated ones. Or where AIs pass the Turing Test only until experts get wise to their quirks and learn how to exploit them. Where outing the replicants becomes an industry, just like in Blade Runner.

    But for my part, I feel like you’re still not grappling with Turing’s central philosophical insight. Namely, suppose the day comes when AIs can pass the Turing Test in a way where no one can ever unmask them—for example, because the “AIs” are simply neuron-by-neuron software emulations of entire human brains. What then? Do you concede at that point that the AIs are “really thinking,” on exactly the same grounds you say other people are “really thinking,” despite your lack of access to their private mental states? Or is there still a magic oomph that they’re missing?

    Despite the impression people might get from this thread, I don’t dismiss the “magic oomph” view. One of the ironies of this discussion is that, perhaps more than others here, I’m perfectly happy to use the word “consciousness”! I’d much rather just get out with it and ask “but would the AI be conscious? would there be anything that it’s like to be the AI?” than resort to what strike me as mealy-mouthed euphemisms, like “semantics” or “symbol grounding” or “intentionality.” 🙂

    Turing’s great insight, in my view, is that many people have two completely separate intuitions:

    (1) They can’t see how any machine could ever pass the Turing Test.

    (2) Even supposing a machine did pass the test, they can’t see how it could possibly be conscious.

    The problem is that, rather than keeping these two intuitions separate, people then constantly run them together. They treat any evidence for (1) as if it was also evidence for (2), and they treat any intuition for (2) as if it was also intuition for (1). All just different soldiers in the fight against AI hype.

    This is the thing that makes most debates about strong AI hopelessly muddled. Any technical progress in AI can be dismissed because it hasn’t yet touched the truly deep questions—typically phrased in terms of “intentionality,” “symbol grounding,” etc. rather than “consciousness,” to make what’s wanted sound more like a well-defined goal that the AI people simply failed to attain, or lied about attaining, rather than like a mirage that could recede endlessly past the horizon. Meanwhile, though, the philosophical argument that if and when AIs can perfectly emulate human behavior, we’d seem morally obligated to treat them as conscious, on the same grounds that we treat each other as conscious, can be dismissed by talking about how laughably far existing AIs are from that emulation goal, or by quibbling over the exact definition of “perfectly emulate.”

    The whole point of the Turing Test, in my view, is to force people (kicking and screaming) into considering the two questions separately: the practical question and the philosophical one.

  89. Edan Maor Says:

    Scott #39:

    > I think the main thing I’d want to send back to my teenage self, if it’s allowed within your rules, would simply be my future self’s dating history—anecdotes about women who were not offended by his expressing interest in them, some of whom even said “yes,” or even acted as though they were attracted (!)—so that my teenage self would have a proof-of-concept that the unthinkable was ultimately possible after all, and could therefore start trying to pursue it earlier and be less terrified and depressed.

    Interesting, thanks for the response. I’m really hoping that my follow-up question doesn’t sound insensitive/rude or too personal, if it does I apologize…

    You were clearly able to see that there *are* relationships, i.e. that other people were able to date. Was it just a matter of thinking that *you specifically* would never be able to date? I ask because, from what I recall, a lot of your belief hinged on thinking there was no “legitimate” way to ask someone out. How did you square that with the fact that people were actually doing it all the time? IIRC you actually mentioned this as an additional source of anguish, but how did you, in your head, square the circle of it clearly being *possible* to ask people out? Did you simply think you specifically were excluded, and therefore proving to yourself that you aren’t excluded would’ve done the trick?

    For what it’s worth, I was something of a “late bloomer” and went through a phase where I had very little success in dating, which is why this interests me, although I don’t think I was ever at the exact place you were – both because I didn’t feel that it was “wrong/creepy” to ask someone out, but also because I was kind of convinced by the idea that if other people had relationships, eventually I would too. (only kind of convinced though, which is why I sympathize).

  90. Scott Says:

    Lorraine Ford #76: Let me try it this way.

    Do you agree that it ought to be possible, at least in principle, to emulate an entire human brain, neuron-by-neuron, using a sufficiently large bank of computers, and in that way perfectly mimic a person’s behavior?

    If your answer is no, then what is it about the physics of the human brain that makes this impossible?

    If your answer is yes, then I presume you’d still say that the emulation lacks what you called the “genuine inner dynamism, oomph and intelligence of a living thing”? OK, that’s fine. As I said in my comment #88, I’d prefer to say simply that you wouldn’t regard the emulation as conscious.

    But here’s the kicker: would you agree that predicting what this emulation will do, if and when it was let loose into the world (or even just the Internet), would generally be just as hard as predicting what another person will do? (With the one difference—if it’s relevant—being that someone else might have an exact copy of the emulation?)

    OK, one last thing. You’ve written a lot here about why AIs will always lack “inner dynamism and oomph.” So what is it about biological brains that lets them have inner dynamism and oomph? Is it an immaterial soul? Is it the fact that we evolved that’s relevant (in which case, could programs also have oomph if we evolved them)? Is it that we’re made out of carbon rather than silicon? Is it that the brain has analog and not just digital aspects (in which case, could analog computers have oomph)? Is it quantum effects? If you don’t know, do you at least have a suspicion? Are you curious to know the answer? Does ignorance of what it is that gives humans their oomph ever trouble your certainty that machines will always lack the oomph?

  91. Lorraine Ford Says:

    OB #82: Yes, I know that machine learning is applied statistics. I stand by everything that I have said e.g. that no one should harbour the illusion that a robot or AI could be conscious or intelligent.

  92. Scott Says:

    Edan Maor #89:

      You were clearly able to see that there *are* relationships, i.e. that other people were able to date. Was it just a matter of thinking that *you specifically* would never be able to date? … and therefore proving to yourself that you aren’t excluded would’ve done the trick?

    Yes, precisely. My model of the world was that other people were allowed to date, make advances, express romantic interest, etc. because they were normal and socially well-adjusted, whereas I was not allowed to do these things because I was nerdy and weird. More than that, my model was that the normal people promulgated rules (“if you’re ever unsure whether making a move is OK, it’s not OK”) according to which they shouldn’t be allowed to do those things either, but they then hypocritically ignored their own rules—so that (whether by accident or by design) the real effect of the rules was to ban love and relationships but only for awkward nerds like me.

    What I tried to explain in that infamous comment—and I fear I never fully succeeded—was both
    (1) that I was mistaken to think this way, but also
    (2) that given my moral code, my inborn personality traits, and the cultural environment that I grew up in, it was perfectly natural and understandable that I would end up thinking this way.

  93. Matthias Görgens Says:

    Scott, do you actually think climate change has a non-negligible probability of putting humanity back in the stone age (“sticks and stones”)? Or is that just a figure of speech?

    Wikipedia’s page on economic impact of climate change cites some learned articles that put the upper limit of the impact at about 20% of world GDP. (The median projections are quite a lot lower.)

    And while that’s certainly a lot, and will probably be felt more concentrated in some places than others, it’s also only a setback of a couple of years of robust GDP growth. Hardly the stone age.

    For comparison, Google says going from US to UK levels of prosperity is a 30% reduction in GDP.

  94. Bunsen Burner Says:

    Scott #88

    The question is not whether artificial constructs will one day exist that will be treated as intelligent, conscious beings, only whether the Turing Test is a useful concrete test of those faculties. I am saying that, as a pragmatic matter, it is not. However, I think we’ve exhausted that line of discussion.

    As for AIs that are ‘simply neuron-by-neuron software emulations of entire human brains’. Let me ask you a question, then. As you know, every algorithm that you can run on a digital computer you can also evaluate with a pen and paper (obviously more slowly). If you were to evaluate your emulation using pen and paper, then according to you a human mind would somehow be instantiated in our world. Where would it be instantiated? Would it exist only when your pen was writing on the paper? If you took a bathroom break, would it die?

    ‘Turing’s great insight, in my view, is that many people have two completely separate intuitions…’

    I have no idea where you got that from or which people you are talking about. The only types of people pertinent to our discussion are those that understand that a thing and its representation are very different creatures, and those that don’t.

  95. Sniffnoy Says:

    Edan Maor #89, Scott #92:

    Mind if I provide my own answer to this, as one who was once caught in pretty much the same trap, but apparently not exactly the same as I have a pretty different answer? (Note: This is a slightly edited copypaste from my answer to essentially the same question in an old Thing of Things discussion.)

    The short answer is that there’s a difference between concluding the single statement “∀x∈S, x is bad” and, for every x in S you can think of, being able to conclude the statement “x is bad”. A lot of the results of the trap get summed up as “You conclude that every way of expressing sexual/romantic interest is evil”, but for me it wasn’t that way. It was: for every way of expressing sexual/romantic interest that I could think of, I could conclude it was evil. That gap matters!

    Had I held the first of these two positions, then sure, seeing feminist women who did, in fact, have boyfriends would have contradicted that position. But in fact I held the second — so it just acted as evidence that there must be some non-evil way and that I’d just been unable to determine what it might be. And so the restrictions just pile up, as you continue to try to hit the ever-narrowing-but-apparently-nonzero target.

    (It’s useful here to remember that generally when someone thinks wrong thing Q, you can’t get them out of it by telling them some P that implies ¬Q; then they’ll just think Q∧P and shy away from deriving the contradiction. You have to explicitly tell them ¬Q. You can, I think, see how this applies here.)

  96. Lorraine Ford Says:

    Scott #90:

    Re Is it “possible, at least in principle, to emulate an entire human brain…and …perfectly mimic a person’s behavior?”:

    It depends on whether: A) a person is an entity that (at least to some extent) controls his own behavioural outcomes (in response to situations); or B) a person is an entity that is 100% controlled by laws/rules that determine behavioural outcomes (in response to situations). Do you think that you have at least some genuine control over your own behaviour (case A) or are you 100% a victim of circumstances (i.e. situations and laws) (case B)?

    Certainly, algorithms can be used to represent a person’s high-level choice of outcome when presented with a situation. But is the person’s choice ruled by algorithms (case B), or are algorithms merely a way of representing the person’s choice (case A)? Only in case B could a human brain be emulated (in principle) if you knew the algorithms and you knew the situations that the algorithms applied to. However, I happen to think that I am not 100% a victim of circumstances (case A): i.e. I barrack for the QBist side of physics.

    Re “would you agree that predicting what this emulation will do, if and when it was let loose into the world (or even just the Internet), would generally be just as hard as predicting what another person will do?”:

    Yes, it is clear that case B outcomes would be unpredictable (as would case A outcomes, which can’t be fully emulated). As I said in my comment #76, perhaps some robots/ AIs should be banned.

    Re “one last thing… machines will always lack the oomph?”:

    I might have known that I should never have used the words “oomph” and “inner dynamism”! As I said, I barrack for the QBist side of physics, so I have a different view of the physics of the world. But the reason why “machines will always lack the oomph” is that robots/ AIs can’t know what the symbolic representations mean.

  97. Uncle Brad Says:

    Burner #94

    If Stephen King does not write a novel in response to your question, he should. Anyone know if he lurks here?

  98. Scott Says:

    Sniffnoy #95: Extremely well said! One could summarize the culture’s message to shy nerds as follows: “romantic relationships are a large part of what makes life worth living; if you’ve never been in one, something is almost certainly wrong with you. But beware! For any concrete action that you can imagine to try to initiate such a relationship, there’s a severe risk that taking it will brand you as a horrible person forever.”

  99. Lorraine Ford Says:

    Edan Maor #84:

    I’ve spent quite a bit of time fixing up and re-testing other people’s software bugs, so I know about the trouble they can cause. But what I’m disputing is the idea that robots/ AIs could ever be intelligent or conscious.

  100. Scott Says:

    Matthias Görgens #93: Yes, I do think there’s a significant chance that climate change will lead to a total civilizational collapse. The sorts of forecasts that you talk about have some severe drawbacks:

    (1) They almost invariably stop at the year 2100, even though the impacts are projected to just keep getting monotonically worse thereafter.

    (2) By their nature, they only include scenarios sufficiently similar to the present that calculating the gross world product still makes sense. If (for example) the melting permafrost kicked off runaway warming, you could be looking at an 8C or more temperature rise compared to preindustrial levels, 200 feet higher sea levels, worldwide crop failures, and conditions radically unlike those in which humans ever existed.

    (3) The forecasts have indeed been unreliable in the past—but in the opposite direction as the climate deniers think. They massively underestimated, for example, the speed at which ice would disappear from Greenland and Antarctica. That’s one reason why, rather than trusting detailed models, I prefer to just look at the basic physics of the problem—the same physics that Edward Teller and John von Neumann knew in the 1950s, when they warned that industrial emissions were on track to catastrophically overheat the earth.

    (4) You can’t just look at climate and the economy in isolation from geopolitics. The world is now a tinderbox, with populist authoritarians having taken power in the US, India, Russia, Turkey, Hungary, Poland, Brazil, and many other countries, liberal democracy struggling to survive, and still several thousand nuclear weapons. So what’s going to happen when 2 billion people can no longer grow food or get fresh water, and their only hope for survival is to invade someplace else?

  101. Lorraine Ford Says:

    Kilian S. #86:

    Sorry to repeat myself, but what I’m actually trying to say is this: the symbols that human beings have created represent no intrinsic information: any information that symbols might represent is meaning that human beings have imposed on the symbols. So a string of symbols, like 011000111010110, might represent information from the point of view of some human beings, but it represents no information from the point of view of a computer/ robot/ AI . I.e. robots/ AIs can never be intelligent or conscious.

  102. James Cross Says:

    I wrote a humorous (well, I think it is) post on Building an Artificial Human.

    https://broadspeculations.com/2019/08/03/building-an-artificial-human/

    “The prototype is put in various situations – grocery stores, shopping malls, waiting in line at the DMV. Nobody seems to notice any issues, except one observer remarked that the prototype seemed unusually patient at the DMV. We think that could be a problem, but on second thought decide that maybe we have created not only a passable artificial human but a better human. The prototype can even lie. We ask it if it is conscious. It tells us it is. We think that is a lie, but we’re not sure.”

  103. Bunsen Burner Says:

    Matthias Görgens #93

    Here is an interesting thought experiment. A civilization can only advance if it has the resources to do so. However, extracting those resources is never free: they have to be resources that are accessible to that civilization’s technology. Take, for example, Polynesian culture. It never reached metalworking, even though its neolithic technology was very highly developed. But what metals could it have extracted using that technology? Metals do exist on volcanic islands, but not in a form you can access with neolithic tools. Our civilization uses a lot of resources that are only accessible using very advanced technology. If it slid back even a little, suddenly all those resources might as well not exist, since we would have no way to extract them, and hence no way to build devices based on them. Does that sharpen things a bit, as to why some of us worry that climate change may bring about a collapse that, even if it doesn’t send us back to the stone age, could at least regress development significantly?

  104. Scott Says:

    Bunsen Burner #94: The distinction between a thing and a representation of the thing is not always as clear as you’d like. To take the examples from Russell and Norvig’s textbook: yes, it seems clear enough that NOAA’s supercomputer simulations of hurricanes won’t make anyone wet. On the other hand, a “symbolic representation of the process of multiplying 57 and 43” is the multiplication of 57 and 43.
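
    To make that last point concrete, here is a minimal sketch (in Python, and of my own devising, not Russell and Norvig’s): the function below is nothing but a symbolic recipe, “add 57 to itself 43 times,” yet running the recipe simply is multiplying 57 by 43.

    ```python
    # A toy illustration: a purely symbolic recipe for multiplication which,
    # when executed, just IS the multiplication of 57 and 43.

    def multiply_by_repeated_addition(a: int, b: int) -> int:
        """Follow the symbolic recipe step by step: add a to itself b times."""
        total = 0
        for _ in range(b):
            total += a
        return total

    assert multiply_by_repeated_addition(57, 43) == 57 * 43 == 2451
    ```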

    Sometimes the boundary is permeable: for example, paper money started out as a symbolic referent to a valuable object (silver or gold) that a bank was holding for you, but gradually it itself became the valuable object. Shakespeare probably thought of Macbeth as an event at the Globe Theatre, of which he was writing down a symbolic representation, but today we’d more likely think of the written representation as being Macbeth. LISP began its life as a notational representation of a future programming language to be invented, before John McCarthy decided that it was the language.

    I share your intuition that, if people were simulating a brain neuron-by-neuron using pen and paper, that would not be enough to instantiate consciousness. On the other hand, when the same computation is done in a wetware brain, evidently it does instantiate consciousness. And if the same computation were done in a silicon brain in a robot … well, who knows? There people’s intuitions are locked in conflict.

    What you don’t get to do, though, is explain why it’s “obvious” that the robot couldn’t be conscious, without noticing or caring that the same arguments would also prove that 100 billion neurons blindly summing up action potentials and firing inside of a skull couldn’t give rise to consciousness either. That’s the double standard that Turing wrote the definitive warning against.

    Ultimately, “is this a real thing, or merely a symbolic representation of that thing?” is a wrong way to frame the question. A better way to frame it is: “among all the possible physical systems able to instantiate computational behavior like that of humans or animals, which ones would or wouldn’t give rise to consciousness, and why?”

    I tried to sketch some tentative ideas about such questions in my Ghost in the Quantum Turing Machine essay, which might interest you. But the truth is that I don’t know, and neither do you. And the ability of the word “consciousness” to force people to admit their ignorance is a feature, not a bug. It’s only after we concede that we have no idea what the relevant principles are—i.e., what it could possibly be that could make some Turing-Test-passing entities conscious and other, identically-behaving entities not—that the actual discussion can start.

  105. James Cross Says:

    I think some studies have shown that we, and probably other animals, tend to use the eyes of others as a guide to whether we think they are conscious. If the eyes track things, we are likely to think that there is a conscious entity controlling them. This probably comes not only from our interactions with other humans but also from our interactions with other animals. If the eyes of a lion are following you on the savanna, there’s a good chance the lion is thinking of you as a meal. A study of scrub jays, which went to great pains to argue it was not claiming the jays had a theory of mind, nevertheless found that jays who saw other jays observing them hiding food would frequently re-hide the food when the observer wasn’t looking.

    So an AI, even if it was incoherent on a Turing test, would probably be thought conscious if it had reasonably well-programmed eyes and predictable behaviors that aligned with the actions of the eyes.

  106. Scott Says:

    Lorraine Ford #96: I’ve been friends with Chris Fuchs for 20 years. I’m a fan of many of his technical results as well as the unique blend of philosophy, history, and crass humor in his voluminous writings. So I know something about QBism … though ultimately I part ways from it, because of the QBists’ refusal ever to come out and just say what they think is true about the physical world, such that agents in that world would describe their experiences using quantum mechanics.

    But crucially, even if it’s accepted, I don’t see how QBism can save you from the dilemma posed to your worldview by the mechanistic nature of physical law. After all, even by QBist lights, I could perfectly well write down a quantum state for your brain, and you could perfectly well write down a quantum state for mine. And those quantum states, if we had any way to learn them, could then be fed into a computer, which could proceed to simulate the Standard Model Lagrangian and thereby predict the probabilities of every possible thing that you or I could do.

    That’s why the only possible out that I see for libertarian free will—or if you like, for “oomph” or “inner dynamism” that’s not just a high-level approximation, reflecting only the current unavailability of good enough brain-scanning machines—is if it’s impossible even in principle to learn the quantum state of someone’s brain, well enough to predict their behavior, without making measurements that would radically alter or destroy the state. I expanded on this idea in my Ghost in the Quantum Turing Machine essay, which might interest you even more than it would interest Bunsen Burner.

    To sum up, my position is that sure, you can insist if you want that no mere computer could ever have the oomph or inner dynamism—but if you do, then you’re forced into extreme curiosity about what it is about the physical laws governing our brains, that lets us have the oomph and inner dynamism! Just tacitly accepting our own “oomph,” without ever asking how an extraterrestrial visitor would figure out that we had the oomph even though a computer simulating us was oomphless, is not an intellectually defensible option. 🙂

  107. Scott Says:

    Everyone: I hate to do this, but due to severe lack of time, combined with severe personal temptation to procrastinate, I’m going to shut down this thread, probably by tonight. Please get in any final thoughts. Thanks!

  108. fred Says:

    James #70

    “If they are simulated, they would be programmed and are under human control.
    If they are real, even if their capability has been programmed, they would be beyond human control.”

    According to your own definition, human desires are not “real” but “simulated”, since we can control someone’s desires by using medications, e.g. castration drugs (and I’ve linked it back to the other discussion going on in this thread…).

  109. OB Says:

    Lorraine #91, #101: Machine learning’s entire schtick is semantic extraction. The thing that you claim machines can’t do is the thing that ML, as applied statistics, is all about doing.

    > the symbols that human beings have created represent no intrinsic information: any information that symbols might represent is meaning that human beings have imposed on the symbols

    But this does not correspond to the reality of ML algorithms. ML models don’t always use symbols that humans created. They are capable of creating their own symbols, and no human imposes any meaning on them: the best we can do is interpret their meaning based on what the machine does with them.

    Suppose you feed a machine a stream of unlabelled images and ask it to compress each image to five bits. Then, after a while, you notice that the machine spits out “01100” if and only if there is a cat in the image you give it. You also observe that if you force “01100” on the output and run the pipeline backwards, using a source of randomness, the machine draws a cat. “01100” appears to be a symbol for cats, but you never created it (the images are unlabelled, remember). The machine did. And it’s not really like our word “cat” either: the machine doesn’t know that cats meow. But it represents something to the machine nonetheless, when it is in the output position, and it’s very loosely analogous to what the word “cat” represents to us.

    I think the most reasonable interpretation of this scenario is that machines can map arbitrary meanings to arbitrary symbols. To say otherwise strikes me as magical thinking. Sure, our concepts are way more complex and way richer than anything modern AI can achieve, but that doesn’t mean modern AI has no capability whatsoever to assign its own meaning to its own symbols.
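
    To make the thought experiment a bit more concrete, here is a rough, self-contained sketch of a machine inventing its own 5-bit symbols from unlabelled data. It swaps the image-compressing network above for plain k-means clustering on made-up vectors, so every specific here (the random stand-in data, the 32 clusters, the helper name to_five_bits) is an illustrative assumption rather than a description of any particular ML system.

    ```python
    # Sketch: unsupervised "symbol invention". Cluster unlabelled inputs into
    # 2^5 = 32 groups, then read each input's cluster index off as a 5-bit code.
    # No human assigns meaning to any code; whatever a code "means" is just
    # whichever inputs the machine happens to group under it.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Stand-in for a stream of unlabelled images: vectors drawn from a few
    # hidden "categories" the machine never sees labels for.
    centers = rng.normal(size=(4, 64))
    data = np.vstack([c + 0.1 * rng.normal(size=(250, 64)) for c in centers])

    encoder = KMeans(n_clusters=32, n_init=10, random_state=0).fit(data)

    def to_five_bits(x):
        """Compress one input to a 5-bit machine-invented symbol."""
        cluster = int(encoder.predict(x.reshape(1, -1))[0])
        return format(cluster, "05b")

    print(to_five_bits(data[0]))   # e.g. "01100" -- a symbol no human created
    # Running the pipeline "backwards": the machine's own prototype for that symbol.
    prototype = encoder.cluster_centers_[int(to_five_bits(data[0]), 2)]
    ```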

  110. fred Says:

    Scott #104

    “it seems clear enough that NOAA’s supercomputer simulations of hurricanes won’t make anyone wet.”

    That’s just because the simulation is too limited.
    Nothing prevents using its output to drive actual simulated rain in a VR world. You can add water-droplet physics and haptics(*), and there you go… wet rain.
    From an experiential point of view, there’s no difference from “real life” wet rain (let’s not lose track of the fact that our knowledge of the so-called external reality is always purely experiential).

    (*) There are already haptic suits that use electrostatic charges to simulate interaction with the skin at this level of granularity.

  111. Bunsen Burner Says:

    Scott #104

    A better way to frame it is: “among all the possible physical systems able to instantiate computational behavior like that of humans or animals, which ones would or wouldn’t give rise to consciousness, and why?”

    No, this is not a meaningful way to frame anything. No more than insisting that “among all the possible physical systems able to instantiate behavior governed by second-order differential equations like that of humans or animals, which ones would or wouldn’t give rise to consciousness, and why?”

    Mathematics allows you to describe aspects of the world. Unless you are a very dedicated and almost mystical Platonist, you cannot substitute it for the world. However, I suspect this has become too complex a discussion for a blog comments section so I’ll leave it at that.

    I will ask one final question. If you genuinely believe that you can instantiate a mind using a pen and paper, then you must also acknowledge that using various caching tricks and constructing helpful DSLs would not change that. After all, this is how real-world programming works. You should be able to capture that mind’s stream of consciousness and print it out. How is this different from a novel? Surely a work of fiction that does not contradict any laws of physics must be capable of being instantiated by some initial conditions. Does this not imply that you are instantiating actual real worlds every time you read a book? Are you prepared to accept such radical Modal Realism?

  112. Bunsen Burner Says:

    Scott #106

    ‘I could perfectly well write down a quantum state for your brain, and you could perfectly well write down a quantum state for mine’

    Really? You are going to write down the quantum state of an open, far from equilibrium, decohered system? And let me guess, for an encore you are going to provide a set of commuting self-adjoint operators that will allow us to find all the interesting properties of that system? Now I really am going to call BS. Either tell us how to do that or outline a research program for getting it done. At best you will have a state that gives probabilities for all possible outcomes, including dead brains, exploding brains, and so on. Not anything you can draw interesting conclusions from.

  113. James Cross Says:

    Fred #108

    Just because you can influence one aspect of human behavior with drugs doesn’t mean that all human desires and motives are simulated.

  114. Scott Says:

    Bunsen Burner #111, #112:

      Really? You are going to write down the quantum state of an open, far from equilibrium, decohered system?

    Well, I didn’t say a pure state. 🙂 If you had sufficient information—which of course is the question here—in principle you could certainly write down a density matrix for someone’s brain, then evolve it forward, broadening your simulation to encompass the person’s entire past lightcone as needed.
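
    For the flavor of what “write it down and evolve it forward” means, here is a minimal sketch on a single qubit rather than a brain; the particular state and Hamiltonian are arbitrary placeholders chosen for illustration, and nothing about neuroscience is implied.

    ```python
    # Toy version of "write down a density matrix, evolve it, read off
    # probabilities" -- for one qubit, with an arbitrary Hamiltonian.

    import numpy as np
    from scipy.linalg import expm

    sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

    # A mixed state: 70% |0><0|, 30% |1><1|.
    rho = np.diag([0.7, 0.3]).astype(complex)

    # Unitary evolution for time t under H = sigma_x: rho -> U rho U^dagger.
    t = 0.4
    U = expm(-1j * sigma_x * t)
    rho_t = U @ rho @ U.conj().T

    # Predicted probabilities of the two possible measurement outcomes at time t.
    probs = np.real(np.diag(rho_t))
    print(probs, probs.sum())   # two probabilities, summing to 1
    ```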

    I never said that I thought a pen-and-paper calculation could have a mind; in fact I said I thought it couldn’t. But I also said that no one—certainly not you—has articulated any coherent general principle to decide which types of physical systems could or couldn’t have minds. The difference is that some of us openly acknowledge that.

    Since you’re now at the point of sneeringly mischaracterizing what I said, I’m afraid this exchange is at an end. Thanks.

  115. Bunsen Burner Says:

    Scott #114

    Not sure why you thought anything I said was sneering, or that I was deliberately mischaracterizing you. It was really a general question, and to me at least it seems to rule out the idea that computability gives us anything useful for understanding the mind. I am not as cynical as you seem to think, though. Of course I acknowledge that neither I nor anyone else has come up with any general principles to decide which physical systems could or couldn’t have minds. After all, I only bother to engage in these discussions to learn something new – not to pretend I have the answer to everything, irrespective of how brusque my manner can come across at times.

  116. Michael Hale Says:

    I’ll throw in a few more half-baked thoughts before the cutoff.

    I haven’t read every word of the discussion, but I certainly believe that if you added detailed enough simulated people and an environment to your simulation of a hurricane, they could complain about being wet or mourn the loss of their simulated loved ones or homes.

    Regarding simulating minds on paper, I’ll refer to Wolfram’s poetic language of the computational universe of all possible programs as a thing that exists. Some subset of those programs contain minds of varying types in varying environments. And it doesn’t matter when, where, or how you choose to run one of those programs. It’s as unnoticeable and meaningless to the minds within those programs as it would be to us if “God chose to slow down all the photons”. Sure, from some perspective you can say that everything slowed down, but so did our perception of it. So then I’m forced to say there are certain criteria, about consistency and so on, that determine why we find ourselves in this program and not another, without needing to talk about higher-dimensional aliens running simulations of us on their spaceships.

    I think Scott linked the blog post where he talked about traveling to Mars via euthanasia, digitized cloning, email, and then re-instantiation. And while I can’t rule out his no-cloning quantum principle of consciousness, if it mainly exists to allow you to say no to that method of travel, then I’ve run out of reasons to say no. So, assuming certain conditions are met regarding the clone being made after I’m completely unconscious, so that I don’t acquire any new memories for even a second that the clone wouldn’t have, then it seems to bleed into the territory of less dramatic experiments involving a doctor putting me to sleep for surgery and flipping a coin to determine whether I’ll get moved to another room or not. So I suppose I have to say I’d even be willing to travel to a friend’s house that way. And if multiple clones are made, I guess I’d find myself in the one that is awoken first, or maybe a coin flip like in The Prestige.

  117. Edan Maor Says:

    Lorraine Ford #99:

    > But what I’m disputing is the idea that robots/ AIs could ever be intelligent or conscious.

    That’s a perfectly valid thing to dispute and to disagree about (though I don’t agree with you for the same reasons Scott outlined above).

    My point was that this is totally irrelevant to the question of AI safety – for purposes of AI safety, it doesn’t matter what consciousness is or whether an AGI has it – all that matters is whether we build an AGI that is very capable, and whose goals are not aligned with ours.

  118. Raoul Ohio Says:

    asdf #47

    Great pointer. I had not been aware of James Mickens. Too good to be true.

  119. asdf Says:

    Regarding the Turing test, there’s a musical exposition that I like:

    https://www.sccs.swarthmore.edu/users/01/bnewman/songs/lyrics/WayAIBehaves.txt

  120. Michael Says:

    @Scott #98: But that’s the point. The problem isn’t that the culture tells nerds not to take a risk. It’s that some people’s minds just don’t work right if they try to make sure they won’t hurt anyone. People who tell kids they need to make sure they won’t hurt anyone, without warning them what could happen, should be pitied, but a lot of people (and not just feminists) glorify them.

  121. Drocta Says:

    I agree with Edan Maor #117 on the “it doesn’t matter if the AGI is conscious or not, just whether it is effective at furthering a goal” front,

    where by “effective at furthering” and “goal”, I do not mean that it necessarily has a genuine intent or values, only that it behaves in ways that make the world more fitting with what some hypothetical possibly imaginary agent (which doesn’t need to actually exist within the hypothetical) could have as goals.

    One idea of how the mind relates to the body that some people have proposed from time to time is “occasionalism”: that there is not any causal link between a person’s mind and their brain, and that the two just happen to correspond due to initial conditions. Personally, if that were true, I don’t think it would much bother me to know it. If I learned that, perhaps, my mind exists independent of my body, and does get sensory inputs from it and such, but that my brain and body were not influenced by my mind / by “me”, I don’t see why this should bother me.

    And yet, if this were the case, then in order to understand/explain the impacts “I have on the world”, one would not need to consider my mind or “me” whatsoever. One could simply consider my body and brain.

    If I were a p-zombie, my influence on the world would be the same.

    Similarly, if an AGI were a p-zombie, its impact on the world would be no different.

    Whether an AGI would necessarily be like a p-zombie is, I think, a very interesting question.

    But it makes no difference either way for the topic of AI safety.

  122. JimV Says:

    A couple last thoughts for Lorraine Ford (with whom, like many in this thread, I disagree on the ability of machines to be conscious):

    1) You’re sure that your choices are all “consciously” determined and not the result of unconscious instincts programmed by evolution. Serious question: how can you know this? How would you feel differently if you were only programmed to feel that way? (And how do you explain regretting, in hindsight, past choices that were based on emotion rather than logic, or phenomena like hypnosis?)

    2) Consciousness, as I understand it, is simply a part of your brain being tasked with deciding what to do with external sensory inputs, and communication with external entities. Similar to the Windows operating system receiving a click on an Excel icon and starting the Excel program in response. So isn’t Windows a primitive, limited form of consciousness? If it didn’t experience the mouse click, how could it have responded to it? If it did experience the mouse click, isn’t that the same as being conscious of it?
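
    If it helps to make the analogy concrete, here is a toy sketch in the spirit of that claim: a trivial dispatcher that “experiences” an input event and decides what to do with it. The event names and handlers are made up purely for illustration; this is obviously not Windows, and nothing in it settles whether such a system is conscious.

    ```python
    # Toy event dispatcher: receive an external input and respond to it,
    # the way an OS reacts to a click on an application icon.

    def launch_excel():
        return "starting Excel"

    def ignore():
        return "doing nothing"

    HANDLERS = {
        "click:excel_icon": launch_excel,
        "click:desktop": ignore,
    }

    def handle(event: str) -> str:
        """Decide what to do with an incoming event."""
        return HANDLERS.get(event, ignore)()

    print(handle("click:excel_icon"))   # -> "starting Excel"
    ```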

  123. Rand Says:

    > I was once contacted by some group doing a genome study of “notable math/physics/CS professors,” so I agreed to spit into a tube and mail it to them. I have no idea whether they ever learned anything interesting.

    I’m assuming you participated in the BGI genomics study of g among super-smart people around 2014. If I recall correctly, there were three ways to get into the study:

    1) Be a famous smart person (Scott Aaronson, Terry Tao, Eliezer Yudkowsky – who probably posted about it on Facebook)
    2) Have a top score on the Putnam competition
    3) Have an 800 on the math section of the general GRE (top 6%, or ~40% among math/physics/CS)

    I wisely chose option #3.

    I emailed them about a year later and after a few months got a response about something-something merger/acquisition/buyout, something machines, delay, genome soon. Their website says the sequencing and analysis are in progress 😉

  124. John Sidles Says:

    For reasons that this comment will explain, it is my practice each week to survey and bibtex-catalog those arxiv/quant-phys preprints that deal with open quantum systems in general, and super/subradiant dynamics in particular.

    For the arxiv tranche of 2019-09-28 to 2019-10-04 — comprising 248 preprints, of which ~90 are relevant to open/superradiant/subradiant quantum dynamics — the very last preprint posted was Prahlad Warszawski’s and Howard Wiseman’s “Open quantum systems are harder to track than open classical systems”; this preprint develops ideas that were introduced in Raisa Karasik’s and Wiseman’s “How Many Bits Does It Take to Track an Open Quantum System?”.

    The question “How Many Bits Does It Take to Track an Open Quantum System?” — together with its surprising answer “not very many!” — is broadly relevant to multiple recent Shtetl Optimized essays … depending on whether the “open systems” are our own hot human brains, or alternatively Google/IBM/DWave (etc) cryogenic NISQ chips.

    For students especially, Scott’s 2004 survey “Multilinear Formulas and Skepticism of Quantum Computing” provides solid common-sense foundations for appreciating the staggeringly vast literature on open/superradiant/subradiant quantum dynamics — tens of thousands of arxiv preprints, yikes — that has effloresced in the intervening fifteen years. In Scott’s foresighted words of 2004:


    “The claim that large-scale quantum computing is possible in principle is really a claim that certain states can exist–that quantum mechanics will not break down if we try to prepare those states. Furthermore, what distinguishes these states from states we have seen must be more than precision in amplitudes, or the number of qubits maintained coherently. The distinguishing property must instead be some sort of complexity.”

    To borrow Scott’s language, works like the Karasik/Warszawski/Wiseman papers provide a “distinguishing property” that is (crucially) both physically well-motivated and mathematically well-posed, namely, computationally efficient open-system quantum dynamical unravelling on low-rank (hence low-complexity) tensor network state-spaces.

    Reaching deeper into the scientific literature, an essay that (for me) has helpfully illuminated both the supremacist-versus-skeptic quantum debates, and also the rationalist-versus(?)-SJW debates, has been a 17th century sermon by Isaac Barrow (who was Isaac Newton’s teacher and mentor), titled “Against Detraction”, which is grounded in Seneca’s maxim Expedit vobis neminem videri bonum, quasi aliena virtus exprobratio vestrorum delictorum sit (it is expedient for you that no one be seen as good, as if thereby to remove from virtue its power to reproach you). In a nutshell, we find in Barrow’s and Seneca’s writings ample grounds for admiring, equally, both quantum supremacists and quantum skeptics, and admiring equally too, both rationalists and social justice warriors.

    Further considerations in this vein are advanced in Gideon Lewis-Kraus’s account “The Great AI Awakening: How Google Used Artificial Intelligence To Transform Google Translate” (NYT Magazine, 2018). It is telling, in particular, that Google’s “AI Awakening” was not internally perceived as a struggle — a wastefully “detractive” struggle (in Isaac Barrow’s phrase) — between “AI Rationalists” and “AI Intuitionists”. Does Barrow’s 17th century essay provide useful guidance for today’s quantum computing community?

    In summary, the burgeoning discovery of ever-more-efficient methods for simulating generic open quantum systems with strictly classical computing resources (in other words, the pragmatic truth of the ECT) is already generating a great “Quantum Awakening”. Very plausibly, Gil Kalai’s most skeptical predictions are all destined to be proven true…wonderfully, amazingly true!…and equally plausibly, even the most ardent quantum supremacists will be sincerely glad of it. 🙂

  125. Sept Says:

    Lorraine @101:

    This can be restated as follows:
    1. For any given system, there’s no one privileged semantic model.
    2. Therefore, no computer/AI/machine can ever be conscious.

    Surely there are a few steps missing here.

  126. Scott Says:

    Rand #123: Yeah, it was that one. I never posted about such things on Facebook (?), but I did also satisfy criterion 3), so maybe they found both of us the same way.

  127. Scott Says:

    John Sidles #124: It’s … something … to have you back.

  128. Shtetl-Optimized » Blog Archive » Pseudonymity as a trivial concession to genius Says:

    […] Update (6/24): For further thoughts and context about this unfolding saga, see this excellent piece by Tom Chivers (author of The AI Does Not Hate You, so far the only book about the rationalist community, one that I reviewed here). […]