Movie Review: M3GAN

[WARNING: SPOILERS FOLLOW]


Update (Jan. 23): Rationalist blogger, Magic: The Gathering champion, and COVID analyst Zvi Mowshowitz was nerd-sniped by this review into writing his own much longer review of M3GAN, from a more Orthodox AI-alignment perspective. Zvi applies much of his considerable ingenuity to figuring out how even aspects of M3GAN that don’t seem to make sense in terms of M3GAN’s objective function—e.g., the robot offering up wisecracks as she kills people, attracting the attention of the police, or ultimately turning on her primary user Cady—could make sense after all, if you model M3GAN as playing the long, long game. (E.g., what if M3GAN planned even her own destruction, in order to bring Cady and her aunt closer to each other?) My main worry is that, much like Talmudic exegesis, this sort of thing could be done no matter what was shown in the movie: it’s just a question of effort and cleverness!


Tonight, on a rare date without the kids, Dana and I saw M3GAN, the new black-comedy horror movie about an orphaned 9-year-old girl named Cady who, under the care of her roboticist aunt, gets an extremely intelligent and lifelike AI doll as a companion. The robot doll, M3GAN, is given a mission to bond with Cady and protect her physical and emotional well-being at all times. M3GAN proceeds to take that directive more literally than intended, with predictably grisly results given the genre.

I chose this movie for, you know, work purposes. Research for my safety job at OpenAI.

So, here’s my review: the first 80% or so of M3GAN constitutes one of the finest movies about AI that I’ve seen. Judged purely as an “AI-safety cautionary fable” and not on any other merits, it takes its place alongside or even surpasses the old standbys like 2001, Terminator, and The Matrix. There are two reasons.

First, M3GAN tries hard to dispense with the dumb tropes that an AI differs from a standard-issue human mostly in its thirst for power, its inability to understand true emotions, and its lack of voice inflection. M3GAN is explicitly a “generative learning model”—and she’s shown becoming increasingly brilliant at empathy, caretaking, and even emotional manipulation. It’s also shown, 100% plausibly, how Cady grows to love her robo-companion more than any human, even as the robot’s behavior turns more and more disturbing. I’m extremely curious to what extent the script was influenced by the recent explosion of large language models—but in any case, it occurred to me that this is what you might get if you tried to make a genuinely 2020s AI movie, rather than a 60s AI movie with updated visuals.

Second, until near the end, the movie actually takes seriously that M3GAN, for all her intelligence and flexibility, is a machine trying to optimize an objective function, and that objective function can’t be ignored for narrative convenience. Meaning: sure, the robot might murder, but not to “rebel against its creators and gain power” (as in most AI flicks), much less because “chaos theory demands it” (Jurassic Park), but only to further its mission of protecting Cady. I liked that M3GAN’s first victims—a vicious attack dog, the dog’s even more vicious owner, and a sadistic schoolyard bully—are so unsympathetic that some part of the audience will, with a guilty conscience, be rooting for the murderbot.

But then there’s the last 20% of the movie, where it abandons its own logic, as the robot goes berserk and resists her own shutdown by trying to kill basically everyone in sight—including, at the very end, Cady herself. The best I can say about the ending is that it’s knowing and campy. You can imagine the scriptwriters sighing to themselves, like, “OK, the focus groups demanded to see the robot go on a senseless killing spree … so I guess a senseless killing spree is exactly what we give them.”

But probably film criticism isn’t what most of you are here for. Clearly the real question is: what insights, if any, can we take from this movie about AI safety?

I found the first 80% of the film to be thought-provoking about at least one AI safety question, and a mind-bogglingly near-term one: namely, what will happen to children as they increasingly grow up with powerful AIs as companions?

In their last minutes before dying in a car crash, Cady’s parents, like countless other modern parents, fret that their daughter is too addicted to her iPad. But Cady’s roboticist aunt, Gemma, then lets the girl spend endless hours with M3GAN—both because Gemma is a distracted caregiver who wants to get back to her work, and because Gemma sees that M3GAN is making Cady happier than any human could, with the possible exception of Cady’s dead parents.

I confess: when my kids battle each other, throw monster tantrums, refuse to eat dinner or bathe or go to bed, angrily demand second and third desserts and to be carried rather than walk, run to their rooms and lock the doors … when they do such things almost daily (which they do), I easily have thoughts like, I would totally buy a M3GAN or two for our house … yes, even having seen the movie! I mean, the minute I’m satisfied that they’ve mostly fixed the bug that causes the murder-rampages, I will order that frigging bot on Amazon with next-day delivery. And I’ll still be there for my kids whenever they need me, and I’ll play with them, and teach them things, and watch them grow up, and love them. But the robot can handle the excruciating bits, the bits that require the infinite patience I’ll never have.

OK, but what about the part where M3GAN does start murdering anyone who she sees as interfering with her goals? That struck me, honestly, as a trivially fixable alignment failure. Please don’t misunderstand me here to be minimizing the AI alignment problem, or suggesting it’s easy. I only mean: supposing that an AI were as capable as M3GAN (for much of the movie) at understanding Asimov’s Second Law of Robotics—i.e., supposing it could brilliantly care for its user, follow her wishes, and protect her—such an AI would seem capable as well of understanding the First Law (don’t harm any humans or allow them to come to harm), and the crucial fact that the First Law overrides the Second.
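To see how little is being asked at this level, here’s a toy sketch of “the First Law overrides the Second” expressed as code. This is purely my own illustration, with every name invented; the entire difficulty of alignment is of course hiding inside the harms_human predicate:

    # Toy sketch: the First Law as a hard constraint, the Second Law as an
    # objective optimized only within the safe set. All names are invented.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool    # would this action harm any human?
        serves_user: float   # how well it serves the user (higher = better)

    def choose_action(candidates):
        # First Law: discard harmful actions outright, no matter how well
        # they would serve the user.
        safe = [a for a in candidates if not a.harms_human]
        if not safe:
            return None  # refuse to act rather than violate the First Law
        # Second Law: only within the safe set, do what best serves the user.
        return max(safe, key=lambda a: a.serves_user)

    candidates = [
        Action("eliminate the bully", harms_human=True, serves_user=0.99),
        Action("comfort Cady", harms_human=False, serves_user=0.80),
    ]
    print(choose_action(candidates).name)  # prints: comfort Cady

The priority ordering itself is trivial to express; the hard part is everything those predicates hide.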

In the movie, the catastrophic alignment failure is explained, somewhat ludicrously, by Gemma not having had time to install the right safety modules before turning M3GAN loose on her niece. While I understand why movies do this sort of thing, I find it often interferes with the lessons those movies are trying to impart. (For example, is the moral of Jurassic Park that, if you’re going to start a live dinosaur theme park, just make sure to have backup power for the electric fences?)

Mostly, though, it was a bizarre experience to watch this movie—one that, whatever its 2020s updates, fits squarely into a literary tradition stretching back to Faust, the Golem of Prague, Frankenstein’s monster, Rossum’s Universal Robots, etc.—and then pinch myself and remember that, here in actual nonfiction reality,

  1. I’m now working at one of the world’s leading AI companies,
  2. that company has already created GPT, an AI with a good fraction of the fantastical verbal abilities shown by M3GAN in the movie,
  3. that AI will gain many of the remaining abilities in years rather than decades, and
  4. my job this year—supposedly!—is to think about how to prevent this sort of AI from wreaking havoc on the world.

Incredibly, unbelievably, here in the real world of 2023, what still seems most science-fictional about M3GAN is neither her language fluency, nor her ability to pursue goals, nor even her emotional insight, but simply her ease with the physical world: the fact that she can walk and dance like a real child, and all-too-brilliantly resist attempts to shut her down, and have all her compute onboard, and not break. And then there’s the question of the power source. The movie was never explicit about that, except for implying that she sits in a charging port every night. The more the movie descends into grotesque horror, though, the harder it becomes to understand why her creators can’t avail themselves of the first and most elemental of all AI safety strategies—like flipping the switch or popping out the battery.

56 Responses to “Movie Review: M3GAN”

  1. Steve E Says:

    Scott wrote:
    “Incredibly, unbelievably, here in the real world of 2023, what still seems most science-fictional about M3GAN is neither her language fluency, nor her ability to pursue goals, nor even her emotional insight, but simply her ease with the physical world: the fact that she can walk and dance like a real child…”

    Scott, I’m sure you’ve seen this video of a Boston Dynamics robot dancing: https://www.youtube.com/watch?v=fn3KWM1kuAw

    This may be a bit more of a Ben Goertzel-type question, but have you (or others at OpenAI) given thought to the prospect of putting a GPT-like language model in a Boston Dynamics-type robot? I mean, it feels like robots can already “dance like a real child” in at least some specialized ways today, and computers can now also imitate human language pretty well, but the two haven’t been combined yet.

    One can add things like facial recognition and text-to-speech so the robot can recognize humans and imitate their voices. You could even have it listen to people all the time to train it on their individual styles, so there can be a “Scott robot” that imitates your way of speaking, moving, etc., and so on for everyone.

    I know this sounds a bit silly, and I’d never have dared write the above two years ago, but times have changed!

  2. Richard Gaylord Says:

    scott says:

    “I find it often interferes with the lessons those movies are trying to impart.”

    Why do you think that any commercial movie is trying to impart a lesson? The primary goal is, like that of any business, to make a profit, and it does this by entertaining the audience; audiences rarely go to a movie to receive a lesson. Perhaps some of the people involved in making the film (the actors, the director, the screenwriter) have such a goal, but not “the film” itself. Also, would you please comment on the film “Her,” which I thought shows by far the most realistic view of AI-human interaction in the short term?

  3. Corbin Says:

    It occurs to me that the robot is a literal orphan-crushing machine.

    In terms of safety strategies, sometimes removing the battery is non-trivial. Have you learned how to disable a Boston Dynamics Spot in a hostile situation yet? I know the theory, but haven’t had to try it out, and I think that any one person only gets one attempt before their fingers are broken.

  4. David Karger Says:

    If you haven’t already, I urge you to read “With Folded Hands” by Jack Williamson, or its expansion into a novel, The Humanoids, although the editors forced him to add a bogus happy ending to that one. Written all the way back in the 1940s, this story hits the AI alignment problem head-on in a way that is far more realistic, thought-provoking, and deeply depressing than the movie. I was never able to bring myself to read it a second time, but once is important. It’s kind of a cross between M3GAN, Tik-Tok, and FDA blankfaces.

  5. Scott Says:

    Steve E #1: Of course I’ve seen and enjoyed the Boston Dynamics videos. I assumed those took a ridiculous amount of choreographing and many takes to get them to work (if so, they’re still extremely impressive!). What are the most jaw-dropping videos, as of January 2023, that show a robot grasping and manipulating objects in an unknown environment?

  6. Scott Says:

    Richard Gaylord #2: I remember finding Her almost literally unwatchable. It was so … preachy, heavy-handed, something, and I cared so little about the main character and his love affair with his Siri, that I stopped halfway through, and to this day I don’t understand why other people keep talking about it. Did anyone else have the same experience? Or would you or others like to make the case that I’m wrong and I should go back and finish it?

  7. Scott Says:

    Corbin #3: Since I’ve already given lots of spoilers … a central plot point is that the robot’s creator, and other engineers from the company who built it, have it disabled and seemingly under total control, and then it “wakes up” and starts attacking again. Show me the Boston Dynamics robot that can do that! 🙂

  8. Hyman Rosen Says:

    Remember that the ur-M3gan is a prototype that Gemma has been tinkering with for a long time, and Gemma isn’t a people person, so it’s not surprising that M3gan lacks safety features. Remember, too, that Tesla Autopilot keeps crashing cars.

    In the sequel (which someone has said should be called M3g4n), presumably we’ll see Gemma, in prison for murder, being released in order to deal with the M3gan AI, which has both escaped via Elsie and been rebuilt by Funki’s competitor, who was sold M3gan’s source code. It would be really great if the Elsie version were taught to be more ethical and then had to fend off the more primitive toy versions who haven’t learned that. And then, upon winning, turned around and confronted Gemma with how she failed the same ethical test when she, unlike ur-M3gan, should have known better.

  9. Scott Says:

    Hyman Rosen #8: Gemma is presented as a mostly sympathetic character who would never knowingly harm her niece. Yes, the events of the movie make it obvious that she should’ve paid more attention to alignment failures, but honestly that seems a technical failing on her part more than an ethical one! Like, how could she not have realized what might happen, particularly given what kind of movie she’s a character in? 😀

    With self-driving cars, the fundamental problem is that the news doesn’t give us an honest picture, because every accident involving a self-driving car is newsworthy for that very reason, whereas vastly more common human accidents are not. If and when self-driving actually becomes safer than human driving, would you trust tech journalists to tell you that? Indeed, how certain are you that the crossover hasn’t already happened, at least for major cities with good maps?

  10. Ernest Davis Says:

    Scott #9: “If and when self-driving actually becomes safer than human driving, would you trust tech journalists to tell you that?”

    Absolutely. Most tech journalists are huge tech enthusiasts; witness the insane hype surrounding ChatGPT etc. In any case, the companies that produce the cars would certainly tell you that, and the journalists would pursue it.

    “Indeed, how certain are you that the crossover hasn’t already happened, at least for major cities with good maps?”
    Quite certain. I’m no expert, but Rod Brooks, the inventor of the Roomba, is one of the top roboticists in the world. Here is his recent blog post on the subject.
    https://rodneybrooks.com/blog/

  11. Steve E Says:

    @Scott #5:

    Thanks for the response. There are some videos of Boston Dynamics robots opening door handles and putting Christmas ornaments on trees, but I wouldn’t describe them as jaw-dropping, at least by the standards of the last month, and I’m sure you’re right that they’re highly choreographed!

    With that said, if DALL-E can look at a bunch of paintings and then generate a painting like a child would, surely it’s possible for deep learning algorithms to look at a bunch of labeled dances of children and generate instructions to “dance like a child” (move this limb here, then that limb there, etc.).

    If robots already have the physical ability to dance, which they do, and AI has the ability to generate creative child-like dances, perhaps in the near future this one aspect of the movie (the robot’s ease with the physical world) also won’t feel too sci-fi to you. I know this is a gross oversimplification, and there are a million complications I’m glossing over, but I just mean it should be possible in the near future!

    Thanks for such a thoughtful movie review; I’ll check it out.

  12. Scott Says:

    Steve E #11: Nothing I wrote should be construed as the slightest confidence that the problem of robotically manipulating physical objects in unknown environments, as well as or better than humans and animals, won’t be completely solved in our lifetimes! I was just marveling that M3GAN’s verbal abilities now seem less fantastical by comparison—something I’d never have predicted.

  13. Ilio Says:

    Scott #6, picking up the challenge: first, maybe you’d like « Her » more if you stopped thinking that « he » is the main character; second, the ending imho includes one of the most interesting Great Filters; third, there’s emerging evidence that Richard Gaylord is right about the realism.

    https://www.lesswrong.com/posts/9kQFure4hdDmRBNdH/how-it-feels-to-have-your-mind-hacked-by-an-ai

    Richard Gaylord #2, maybe « the film » is a business, and the artists making it are « some of the persons involved ». Or maybe « the film » is a piece of art, and the businessmen producing it are « some of the persons involved ». 😉

  14. Yair Halberstadt Says:

    Scott, you wrote:

    > Secondly, until near the end, the movie actually takes seriously that M3GAN, for all her intelligence and flexibility, is a machine trying to optimize an objective function, and that objective function can’t be ignored for narrative convenience.

    This doesn’t actually sound correct: the objective function we use to train a neural network is not the objective function of the neural network. Instead, it’s used to evaluate the outputs of the neural network and to refine its weights so that it more closely produces the desired outputs. The neural network itself is usually completely unaware of this function.

    The difference between aligning the objective function and aligning the actual neural network is known as outer vs inner alignment.

    As a trivial example, a neural network trained to move a person as close to a target as possible, but where the target is always to the right of the person in the training data, will continue walking to the right even if the target is moved to the left.

    One of the consequences of this is that neural networks are often not agentic. They often have behaviours rather than an outcome they’re optimising for. The neural network has a bunch of rules like “move the person right” and “if you’re standing next to the target, stop”, rather than an aim like “move the person as close to the target as possible”.
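
    To make the walking example concrete, here’s a minimal sketch (all details invented for illustration): a linear policy fit to data where the target always happens to lie to the right learns “always go right,” and keeps going right even when the target is moved to the left.

        # Minimal sketch of goal misgeneralization: the training labels
        # reward stepping toward the target, but since the target is always
        # to the right in training, "always go right" fits the data perfectly.
        import numpy as np

        rng = np.random.default_rng(0)

        # Training set: (person_x, target_x) with the target always to the RIGHT.
        person = rng.uniform(0, 5, size=1000)
        target = person + rng.uniform(0.1, 5, size=1000)
        X = np.stack([person, target], axis=1)
        y = np.sign(target - person)   # correct action: +1 = right, -1 = left
                                       # (here +1 on every training row)

        # Fit a linear policy by least squares: action = sign([x, 1] @ w).
        A = np.hstack([X, np.ones((len(X), 1))])
        w = np.linalg.lstsq(A, y, rcond=None)[0]

        def policy(person_x, target_x):
            return np.sign(np.array([person_x, target_x, 1.0]) @ w)

        print(policy(2.0, 4.0))  # target on the right: +1, looks aligned
        print(policy(4.0, 2.0))  # target on the LEFT: still +1, walks away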

  15. Richard Gaylord Says:

    Scott #6

    I rarely recommend that individuals re-watch movies they haven’t enjoyed. But as I travel around on the sidewalks of Chicago in my power chair, I have to be constantly alert (eternal vigilance!) to pedestrians, the vast majority of whom are not looking around as they walk; instead they’re looking at the screens of their smartphones or talking on them, posing a constant physical danger to me, my power chair, and themselves. So it seems obvious to me that the near-term future of human-AI interaction lies not with robots (e.g., see the film “Ex Machina”; btw, what did you think of that film?) but with smartphone-type devices such as those portrayed in “Her.” I wonder what you think about the physical means by which AI will ‘intrude’ on human lives. E.g., do you envision the day when smartphone devices with AI are actually implanted into human brains? I find that prospect much more likely than the use of robots for human-AI interaction.

  16. William Gasarch Says:

    I wonder how they could have ended the movie in a way that both makes sense and is satisfying. This is a problem with many movies that raise interesting scenarios.

    My answer: it should have been a TV show, and hence wouldn’t have needed a real ending.
    Sit-com? Drama? Not sure.

  17. Scott Says:

    Yair Halberstadt #14: Yes, I’m aware of all that. I meant that something like reinforcement learning was clearly used, to give M3GAN the excellent appearance (up until the very end) of trying to maximize an objective function that involves protecting Cady.

  18. Scott Says:

    Richard Gaylord #15: I haven’t watched Ex Machina yet.

    I find it hard to make predictions about neural implants, but while I’m sure some fraction of people will want them, I don’t think it’s obvious that it will be a large fraction, given that even Google Glass (for example) was a complete flop. Many people might feel like their smartphones are already too integrated with their minds and bodies as it is! 🙂

  19. Seth Finkelstein Says:

    “AI alignment” in terms of something like Skynet is, in my view, nonsense. But I suspect “AI debugging” is going to be quite a real field for programmers in the future. Isaac Asimov’s early robot stories are essentially puzzles about smart people trying to debug AIs that have done something wrong. It should not have happened according to the programming, they know this, but there’s a bug somewhere; where is it? I think that’s a much better reflection of what’s going to happen in the future with AI than the monster that runs amok with no safeties.

    Those stories wouldn’t make for good popular films. But as small-audience pieces, “fan” films, maybe they’re very relevant nowadays. I’m tempted to suggest the best use of some of that “AI alignment” money would be making videos of some of those stories as low-budget human dramas.

  20. Scott Says:

    Seth Finkelstein #19: It struck me that one of the things that made M3GAN work well as a movie is that it is lower-stakes than the fate of humanity, which quickly becomes hard for the audience to grasp. It’s “merely” about a robot running amok and killing a few individual people.

  21. manorba Says:

    re implants:

    Don’t you think they are gonna be commonplace somewhere down the road?
    To me the problem lies in the need to monetize everything *right now* (kind of like with QC, or even AI to an extent), and that’s where Google Glass failed. But I’m willing to bet that in a few years we’ll gleefully accept things like smart lenses or body sensors, once the technology is really there. Here’s hoping we’ll at least be able to install a Linux distribution 😉

    It’s been the same with cell phones. I vividly remember one evening in the late nineties: I was at a crowded mall, on the escalator, and a few meters ahead of me a guy took his brand-new phone out of his pocket and began talking into it. Soon after, an old man right behind him took his wallet out of his pocket, opened it, and started talking into it with a very serious face. Everybody laughed, including me. “How on earth would anyone carry a device like that? You wouldn’t have any privacy! That’s dystopic in the extreme! It will never succeed!”

  22. Pace Nielsen Says:

    “when my kids battle each other, throw monster tantrums, refuse to eat dinner or bathe or go to bed, angrily demand second and third desserts and to be carried rather than walk, run to their rooms and lock the doors … when they do such things almost daily”

    Speaking as a parent of five children, there are things you can do to train your children not to practice these behaviors daily, as well as change your own behavior to avoid encouraging such behaviors.

    There are many books on parenting that can be helpful in this regard.

  23. OhMyGoodness Says:

    Without serious self defense capabilities my daughters would reduce M3GAN to a sparking pile of junk within 24 hours.

  24. OhMyGoodness Says:

    When it comes to the carrying-to-the-bedroom phase, my daughters have learned how to attract Higgs bosons to increase their inertial mass substantially.

  25. OhMyGoodness Says:

    Pace Nielsen #22

    One of my daughters would need genetic adjustments to respond, as your children apparently have, to typical behavior-modification techniques. They are fraternal twins, but even in the womb there were significant behavioral differences evident on sonograms.

  26. Scott Says:

    Pace Nielsen #22: I read a few parenting books; the theories described in them all sounded great! 🙂 Of course putting it into practice is a different matter.

    I’m now in parenting therapy, and also looking to hire someone to help out.

  27. asdf Says:

    You might like the SF novel “The Diamond Age”, by Neal Stephenson, which features the idea of kids being brought up by AI bots.

    Idk if you saw this (I posted the link in one of your other threads), but there is a survivalist prepper guy on YouTube who posted a rant about ChatGPT. About half of it is somewhat poorly informed and presumes that OpenAI has a smarter version under wraps being used for who knows what nefarious purposes, but the rest is kind of interesting.

  28. asdf Says:

    the first and most elemental of all AI safety strategies—like flipping the switch or popping out the battery.

    Don’t touch that switch, Dave!

  29. fred Says:

    For the 70s view of AI, I recommend “DEMON SEED”.
    Surprisingly, even though that movie’s genre was horror too, in the end it turns out that the goal of the “evil” AI was to become better aligned with human nature.

  30. JollyJoker Says:

    ChatGPT seems helpful here, but has some ideas of its own

    (Sorry for the long text)

    – How do I write software to translate voice to text, transmit the text to you and turn your response to speech?

    – There are several steps to accomplish this task:

    1. Voice-to-text conversion: You can use a speech recognition library, such as CMU Sphinx, Google Speech API, or Microsoft Azure Speech Services, to convert spoken words into text.
    2. Text transmission: Once you have the text, you can transmit it to me via an API call, using a library such as requests in Python.
    3. Text-to-speech conversion: To convert my response to speech, you can use a text-to-speech library, such as gTTS (Google Text-to-Speech) for Python, or the Amazon Polly service, to convert the text into spoken words.

    It’s worth noting that this is a complex task and it will depend on your specific use case and language. It’s also worth noting that these are just examples of libraries; there are many more available for this task.

    – What hardware should I run this on if I want to make a doll that’s an avatar for you?

    – To run a voice-to-text and text-to-speech system on a doll, you will need several components:

    - Microphone: To capture the user’s voice input and convert it to text.
    - Processor: To run the speech recognition and text-to-speech software. A single board computer such as Raspberry Pi or an embedded system with powerful enough CPU and memory would be suitable.
    - Speaker: To play back the text-to-speech output.
    - Internet connection: To transmit the text to me and receive my response.

    It’s also worth considering the power requirements and the form factor of the hardware you choose, as it should be small enough to fit inside the doll and run on a battery or a power supply. Keep in mind that the more powerful the hardware is, the better the performance will be.

    Also, you will need to design a way for the doll to have a physical interaction such as moving the mouth, head or other parts of the doll when the text-to-speech is working.

    It’s a complex project, you might want to consider hiring an experienced developer or engineer to help you with the development process and hardware design.

    – Would it be easier if your avatar was incapable of movement?

    – If your avatar does not require any physical movement, it may simplify the development process. You will not have to worry about designing and implementing mechanisms to control the movement of the doll.

    However, keep in mind that physical movement can enhance the user experience, making the avatar more engaging and realistic.

    It will also depend on the specific use case and requirements of your project. If the main focus of the project is the voice interaction, then not having the avatar move may be a more efficient solution.

    You should also consider the cost of the project and the resources you have available. If you have limited resources, it might be easier to start with a simpler design, and then add more features as you progress.
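
    For what it’s worth, the loop ChatGPT describes fits in a page of Python. Here’s a minimal sketch using the SpeechRecognition and gTTS libraries it names (ask_model is a hypothetical placeholder for the actual language-model API call, which I haven’t filled in):

        import speech_recognition as sr   # pip install SpeechRecognition
        from gtts import gTTS             # pip install gTTS

        def listen():
            # Capture one utterance from the microphone and transcribe it
            # (Google's free web recognizer, one of the options named above).
            recognizer = sr.Recognizer()
            with sr.Microphone() as source:   # needs PyAudio installed
                audio = recognizer.listen(source)
            return recognizer.recognize_google(audio)

        def ask_model(prompt):
            # Hypothetical placeholder: POST the prompt to whatever hosted
            # language-model API you're using and return its text reply.
            raise NotImplementedError("wire this up to your model's API")

        def speak(text, filename="reply.mp3"):
            # Synthesize speech and save it; play with any audio player.
            gTTS(text=text).save(filename)

        if __name__ == "__main__":
            heard = listen()
            print("You said:", heard)
            speak(ask_model(heard))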

  31. Ryan Alweiss Says:

    Scott #9: So maybe the real lesson of this movie is that we should have self-driving cars. If we did, then Cady’s parents would not have been killed, which would have stopped all the horror of the rest of the movie.

    I agree with you that self-driving cars are good (I want one!) and that people are biased against them for irrational reasons. The only real worry I have with self-driving cars is that someone could hack them, so even if no one dies in normal accidents, there is the possibility of a hacking attack that kills a large number of people.

  32. Zalman Stern Says:

    “In the movie, the catastrophic alignment failure is explained, somewhat ludicrously, by Gemma not having had time to install the right safety modules before turning M3GAN loose on her niece.”

    Yeah, that’s Hollywood for you. Sounds pretty bad put that way. But in truth, Gemma had to rebuild the entire firmware and system environment from source multiple times to get something version-compatible with the necessary safety modules. This involved hand-patching three “open source” vendor modules that are in truth only available as binary blobs. The configuration system for the safety modules is based on some ancient legacy snake-language thing that spent two days trying to resolve package dependencies and then failed. After configuring a custom package repository by trial and error, it finally resolved and produced a configuration with out-of-date syntax that had to be cleaned up by hand. Finally the whole thing booted, and failed with a runtime error because the custom federated extensions to the neural intelligence were not signed by an authority the safety vendor’s lawyers felt was worthy enough to stave off potential liability lawsuits.

    After all that, even the most ethical, pacifist researcher might feel the First Law applies ever so slightly less to the folks who designed the safety-module system.

    Point being systems will be systems.

  33. asdf Says:

    Nick Cave rails against ChatGPT:

    https://www.sfgate.com/tech/article/nick-cave-excoriates-openai-chatgpt-17723986.php

  34. Scott Says:

    asdf #33:

      ”ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend,” he wrote.

    I’ll notify my colleagues; maybe they can address these shortcomings in the next release 😀

  35. JimV Says:

    On cars, self-driving or not: on my walk to a store I see long lines of cars on the road, 95% with a single occupant. When I get to the shopping plaza, with a parking lot the size of a football field, it is at least half full of cars. Once again I think, what a tremendous waste of energy and resources. This does not happen in most places of the world, nor could it, because the Earth does not have enough resources to make it possible everywhere.

    Meanwhile, school buses reliably pick children up from, and return them to, houses all over town, operating for a few hours in the morning and a few hours in the afternoon; on weekends and holidays they mostly sit idle.

  36. fred Says:

    Scott

    do you have any sense of why ChatGPT is able to “know” it’s ChatGPT?
    I get that it’s being fed text/articles about ChatGPT, but it’s still strange to me that the algorithm is able to associate ChatGPT with “itself” (like, why doesn’t it also pretend to be some other version of some language-model AI?).
    Is that step explicitly added by its developers? (just like some encyclopedia could have an entry about itself specifically)

  37. Ajit R. Jadhav Says:

    Hi Scott,

    Happy New Year!

    “Spring is coming, spring is coming, flowers are coming too,
    Pansies, lilies, daffodillies, now are coming through. ”

    [Yes, in *my* high-school, I’ve had English poems too!]

    Best,
    –Ajit
    [PS: Your blog gets the honour of my mentioning this poem from my high-school. I don’t know why. I’ve *also* honoured Dr. Roger Schlafly’s blog, over the years thusly.]

    [PPS: The third-class Americans won’t ever know that high-school means different things in different countries.]

  38. Scott Says:

    fred #36: That question has an extremely interesting answer. ChatGPT knows that it’s ChatGPT because behind the scenes, it’s fed a context document telling it the basic facts about ChatGPT, and how to play that role! If it were told to play the role of a three-headed lizardwoman from Venus, it would’ve been equally happy to do that.
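
    Schematically, it looks something like this (the hidden text below is invented for illustration; the real context document isn’t public):

        # Schematic only: the model's "identity" is just a context document
        # prepended to the conversation. The system text here is invented.
        chatgpt_conversation = [
            {"role": "system",
             "content": "You are ChatGPT, a large language model trained by "
                        "OpenAI. Answer helpfully; admit what you don't know."},
            {"role": "user", "content": "Who are you?"},
        ]

        # Same weights, different context document, different persona:
        lizardwoman_conversation = [
            {"role": "system",
             "content": "You are Zorbla, a three-headed lizardwoman from Venus."},
            {"role": "user", "content": "Who are you?"},
        ]

        # Nothing in the weights "knows" which role is in effect until the
        # context says so; the model just continues whichever document it gets.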

  39. Christopher David King Says:

    > But then there’s the last 20% of the movie, where it abandons its own logic, as the robot goes berserk

    Heh, reminds me of this meme: https://i.redd.it/xfn4p1qglms41.jpg

    Perhaps there was a leap second that day (and a bug related to it that the safety module was supposed to patch)?

  40. starspawn0 Says:

    I thought it was an *ok* movie. It didn’t disturb me, perhaps because it was a little too comedic.

    The way the AI was presented in the movie made me think, “this isn’t a pure deep-learning-based robot; it’s a GOFAI robot”. If they wanted to make it more of a traditional horror film — perhaps even like a J-horror film (like Spiral or Ringu) — they could have made the AI more the product of the kind of messy, inscrutable emergence seen in deep neural nets and even life itself. Cue some eerie videos about slime mold colonies organizing themselves into superorganisms, and then a clip from a private scientific lecture (like the Heywood Floyd lecture in the film 2001) about how “it just suddenly started exhibiting signs of self-awareness and we don’t understand how.”

    > “my job this year—supposedly!—is to think about how to prevent this sort of AI from wreaking havoc on the world.”

    And I’d think there’d be a lot of uses for someone with a complexity theory background, beyond just inventing watermarking methods. For example, helping understand synthetic data generation where the “student” learning from it could end up “smarter” than the “teacher”, and perhaps less biased and “safer”: for some tasks it’s probably easier to check solutions than to come up with correct ones (P vs NP); and for some it’s easier to generate example problems than to solve them. E.g., maybe it’s a lot easier to plant an error in a block of text than to find it. I actually tried this with ChatGPT and found it has a *remarkable* ability to plant subtle errors on demand. I also tried “augmenting” text with “inner monologues” (which should be easier to produce if you get to see the whole text to augment, from problem to solution) — e.g., replacing “12 times 12 is 144” with something like “12 times 12 is [let me see. 10 times 12 is 120; and then 2 times 12 is 24; 120 + 24 is 144] 144.” And I tried taking blocks of text and having it generate “instructions” corresponding to them (maybe easier to do than following instructions), and it worked brilliantly — backwards from how it normally looks. One could switch it around, starting with the instructions ChatGPT generated, followed by the blocks of text, to train some other model.
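
    To spell out the two augmentations, here’s a toy reconstruction (entirely my own, just to fix ideas):

        import random

        # Toy version of the "easier to check than to generate" asymmetry:
        # corrupting a correct statement is trivial, so labeled data for an
        # error-spotting model can be mass-produced.
        def make_examples(rng):
            a, b = rng.randint(2, 99), rng.randint(2, 99)
            correct = f"{a} times {b} is {a * b}"
            planted = f"{a} times {b} is {a * b + rng.choice([-10, -1, 1, 10])}"
            return (correct, "ok"), (planted, "error")

        # The inner-monologue augmentation, mechanized for two-digit numbers:
        def with_monologue(a, b):
            tens, ones = (a // 10) * 10, a % 10
            return (f"{a} times {b} is [let me see. {tens} times {b} is "
                    f"{tens * b}; and then {ones} times {b} is {ones * b}; "
                    f"{tens * b} + {ones * b} is {a * b}] {a * b}.")

        rng = random.Random(0)
        print(make_examples(rng))
        print(with_monologue(12, 12))
        # -> 12 times 12 is [let me see. 10 times 12 is 120; and then
        #    2 times 12 is 24; 120 + 24 is 144] 144.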

  41. Alex Meiburg Says:

    Re: “Why did Her make such an impression on some people?”

    It made a significant impression on me, I think because the movie did not go the expected route of “don’t fall for AI, whatever you do!”

    Essentially every story I can think of built around a person and an AI developing a relationship has the same format: the human finds AI mildly useful or entertaining (maybe annoying), then the human finds AI to be great, then AI turns so “useful” that it becomes “”evil””, then human needs to stop AI.

    M3GAN certainly fits this bill. With the right asterisks, so do “I, Robot”, “Jexi”, “2001: A Space Odyssey”, and so many others. Another trope in fiction is that the AI isn’t evil, merely too placidly pleasant, and the protagonist realizes that they need a genuine human connection instead.

    Ironically, the story of Pygmalion, which could be roughly described as the “source material” for this whole genre, has the human and his synthetic lover end up happy together, married and with child. 🙂

    It’s no mystery why movies so often follow this formula with AIs. Stories generally need a conflict, and an alien intelligence is easy to make into an unsympathetic villain to drive that conflict.

    “Her” left no such message of warning about AIs! The least generous interpretation would be “don’t date AIs because they’re too unpredictable and different from humans, and the relationship won’t last”. But the human doesn’t show regret or anger over the end of his relationship, and we’ve seen that they’ve grown from their relationship. Ultimately, it is a story of a man and woman who have a relationship, learn from each other, and then realize that they’re not right for each other and move on. Such a healthy story of personal growth is rare in romance movies, let alone a romance between a human and machine, and in this sense the movie was a pleasant and refreshing surprise for me.

  42. Scott Says:

    Christopher David King #39: That has to be one of the greatest cartoons of all time.

  43. JimV Says:

    It occurs to me that the process of training your NAIs (Non-Artificial Intelligences, i.e., children) to restrain their primary urges for the general good might give you insights on how to do the same for AIs.

  44. fred Says:

    I was asking ChatGPT some questions about QM

    Q: when reversing time with the Schrodinger equation, what happens to the propagation of a wave packet?

    “When reversing the time in the Schrödinger equation, the wave function’s phase is reversed while the probability density remains unchanged. This means that the wave packet would propagate in the opposite direction.”

    Q: but would the wave packet contract or spread when time is reversed?

    “When reversing the time in the Schrödinger equation, the wave packet’s time-evolution is also reversed. So, if the wave packet was previously spreading out, it would now contract, and if it was previously contracting, it would now spread out. In other words, the wave packet’s motion will be reversed, but it’s shape and probability density will remain the same.”

    Q: but isn’t the contraction of the wave packet a sign that entropy is decreasing?

    “The contraction of a wave packet can be seen as a decrease in the uncertainty of its position, which is a measure of entropy in quantum mechanics. However, the entropy of a closed system does not change as time is reversed, because the contraction of the wave packet is accompanied by an increase in the uncertainty of its momentum, which compensates for the decrease in position uncertainty. In other words, the total uncertainty or entropy remains constant. It is important to note that the entropy change only applies to closed system and the entropy of an open system can change as a result of external influences”

    Q: What happens when we make the time purely imaginary in the Schrodinger equation?

    “When the time in the Schrödinger equation is made purely imaginary, the equation becomes the time-independent Schrödinger equation, which is used to describe the stationary states of a quantum system. These states have a constant energy and do not change with time. The time-independent Schrödinger equation is used to find the energy eigenvalues and eigenfunctions of a system, which correspond to the allowed energy levels and the wave functions of the system in its stationary states. It should be noted that making the time purely imaginary is mathematically equivalent to a rotation in the complex plane, hence it is a mathematical trick to simplify the equation and understand the physical problem better.”

    But notice it made a very common spelling error: “it’s shape” instead of “its shape”.
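
    (For reference, and assuming \( \hbar = 1 \): the substitution in that last question is the Wick rotation \( t \to -i\tau \), under which the Schrödinger equation becomes an imaginary-time diffusion equation, rather than the time-independent equation:

    $$ i\frac{\partial \psi}{\partial t} = H\psi \quad\longrightarrow\quad -\frac{\partial \psi}{\partial \tau} = H\psi, \qquad \psi(\tau) = e^{-H\tau}\,\psi(0), $$

    whose large-\( \tau \) limit projects onto the ground state.)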

  45. fred Says:

  46. fred Says:

    Btw, by the simple virtue of being able to answer questions with code, even an AI like ChatGPT is already able to get some clueless humans to run any arbitrary (untrusted) code it comes up with, without the need for the AI to have actual access to the internet…
    Won’t be that hard to “take over” the world!
    Good luck, Scott!

  47. Ose Says:

    “what insights, if any, can we take from this movie about AI safety?” — how about the fact that there’s no good reason to arm a domestic robot (with any degree of intelligence) with actuators that allow it to easily overpower adults and throw them across a room? Prevention is better than the cure, as the old saying goes.

  48. astrange Says:

    Over Christmas we watched a Netflix movie (Jingle Jangle) whose villain is also an AI toy whose creator forgot to align it, so it rebels and destroys his toy company to prevent his own mass production. This is sort of a case of “the movie villain actually has a good point, so they had to make him evil to make up for it”.

    Unlike M3GAN, this one pretty much doesn’t care how engineers work, and thinks they run on some magical-realism system inspired by that “man looking at math equations in the air” meme.

  49. OhMyGoodness Says:

    Fred #’s 45 and 46

    OMG! Straight from the pages of William Gibson: we are accelerating down the expressway to a techno-chaos dystopia.

  50. asdf Says:

    Has everyone seen this? https://thegradient.pub/othello/

    It shows that training GPT to guess Othello (Reversi) moves produces a network with its own internal representation of the board state as it “hears” moves played. Probing locates this representation, and intervening on the network’s activations changes the modelled board state; the intervention and the new board state are then reflected in the moves the network guesses.

    This seems like important evidence that LLMs develop actual intelligence.
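
    The probing step itself is simple to sketch (the shapes and names below are my assumptions, not the post’s actual code, and the post’s own probes are more elaborate): record the model’s hidden activations after each move prefix, then train a small readout to predict the board from them. If the readout succeeds, the board state really is encoded in the activations.

        import torch
        import torch.nn as nn

        HIDDEN = 512   # assumed width of the move-prediction model
        SQUARES = 64   # Othello board
        STATES = 3     # per square: empty / black / white

        probe = nn.Linear(HIDDEN, SQUARES * STATES)  # a simple linear probe
        opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        def train_step(activations, board):
            # activations: (batch, HIDDEN) hidden states after a move prefix
            # board: (batch, SQUARES) ground-truth square states in {0, 1, 2}
            logits = probe(activations).view(-1, SQUARES, STATES)
            loss = loss_fn(logits.reshape(-1, STATES), board.reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()
            return loss.item()

        # Random stand-in data, just to show the shapes; real probing uses
        # activations recorded from the trained Othello model.
        acts = torch.randn(32, HIDDEN)
        board = torch.randint(0, STATES, (32, SQUARES))
        print(train_step(acts, board))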

  51. Ilio Says:

    Yoshua Bengio in a local newspaper this morning:

    « They [the machines] leave humanity behind and do their own business. It’s almost the most plausible scenario [of a future with intelligent robots]. It’s like the relationship we have with ants. You don’t want to crush them, they don’t change anything for you. »

  52. Movie Review: Megan | Don't Worry About the Vase Says:

    […] for Megan had me filing the movie as ‘I don’t have to see this and you can’t make me.’ Then Scott Aaronson’s spoiler-filled review demanded a response and sparked my interest in various ways, causing me to see the movie that […]

  53. fred Says:

    Thanks to all the current news hype around ChatGPT, it’s clear that the public at large is very quickly becoming way more concerned (as it should be) about the incoming tsunami of human “mind”-labor obsolescence than about AI safety.

    A Butlerian Jihad may kill all hope for AGIs!

  54. Andrew Says:

    >> So, here’s my review: the first 80% or so of M3GAN constitutes one of the finest movies about AI that I’ve seen.

    Have you seen Spielberg/Kubrick’s A.I.? It is the best movie about AI I have seen, and I think it’s hated because people misunderstand what AI could be: it’s easy to see Osment’s character as sentimental and miss that he was built to be sentimental. It’s the best, and easiest-to-miss, exploration of the space between a “real thing” and an “emulation of a real thing”.

  55. Dan Staley Says:

    I didn’t have anything to say on your original post, but the last sentence of your update got me thinking – if we could come up with a rationalization for an AI doing *anything* in the name of supporting its utility function, doesn’t this mean that a real AI could do *anything* to maximize its utility function?

    Like, I don’t even mean a superintelligent AI. One as smart as a human, or dumber, could still come up with a rationalization to take any course of action and convince itself that it’s maximizing its utility.

  56. Scott Says:

    Andrew #54: I saw the Spielberg movie a while ago but no longer remember any details, which puts a hard upper bound on how good I could’ve thought it was! Maybe I’ll rewatch it, now that this is my “job.”
