ChatGPT and the Meaning of Life: Guest Post by Harvey Lederman

Scott Aaronson’s Brief Foreword:

Harvey Lederman is a distinguished analytic philosopher who moved from Princeton to UT Austin a few years ago. Since his arrival, he’s become one of my best friends among the UT professoriate. He’s my favorite kind of philosopher, the kind who sees scientists as partners in discovering the truth, and also has a great sense of humor. He and I are both involved in UT’s new AI and Human Objectives Initiative (AHOI), which is supported by Open Philanthropy.

The other day, Harvey emailed me an eloquent meditation he wrote on what will be the meaning of life if AI doesn’t kill us all, but “merely” does everything we do better than we do it. While the question is of course now extremely familiar to me, Harvey’s erudition—bringing to bear everything from speculative fiction to the history of polar exploration—somehow brought the stakes home for me in a new way.

Harvey mentioned that he’d sent his essay to major magazines but hadn’t had success. So I said, why not a Shtetl-Optimized guest post? Harvey replied—what might be the highest praise this blog has ever received—well, that would be even better than a national magazine, as it would reach more relevant people.

And so without further ado, I present to you…


ChatGPT and the Meaning of Life, by Harvey Lederman

For the last two and a half years, since the release of ChatGPT, I’ve been suffering from fits of dread. It’s not every minute, or even every day, but maybe once a week, I’m hit by it—slackjawed, staring into the middle distance—frozen by the prospect that someday, maybe pretty soon, everyone will lose their job.

At first, I thought these slackjawed fits were just a phase, a passing thing. I’m a philosophy professor; staring into the middle distance isn’t exactly an unknown disease among my kind. But as the years have begun to pass, and the fits have not, I’ve begun to wonder if there’s something deeper to my dread. Does the coming automation of work foretell, as my fits seem to say, an irreparable loss of value in human life?

The titans of artificial intelligence tell us that there’s nothing to fear. Dario Amodei, CEO of Anthropic, the maker of Claude, suggests that “historical hunter-gatherer societies might have imagined that life is meaningless without hunting,” and “that our well-fed technological society is devoid of purpose.” But of course, we don’t see our lives that way. Sam Altman, the CEO of OpenAI, sounds so similar that the text could have been written by ChatGPT. Even if the jobs of the future will look as “fake” to us as ours do to “a subsistence farmer”, Altman has “no doubt they will feel incredibly important and satisfying to the people doing them.”

Alongside these optimists, there are plenty of pessimists who, like me, are filled with dread. Pope Leo XIV has decried the threats AI poses to “human dignity, labor and justice”. Bill Gates has written about his fear that “if we solved big problems like hunger and disease, and the world kept getting more peaceful: What purpose would humans have then?” And Douglas Hofstadter, the cognitive scientist and author of Gödel, Escher, Bach, has spoken eloquently of his terror and depression at “an oncoming tsunami that is going to catch all of humanity off guard.”

Who should we believe? The optimists with their bright visions of a world without work, or the pessimists who fear the end of a key source of meaning in human life?


I was brought up, maybe like you, to value hard work and achievement. In our house, scientists were heroes, and discoveries grand prizes of life. I was a diligent, obedient kid, and eagerly imbibed what I was taught. I came to feel that one way a person’s life could go well was to make a discovery, to figure something out.

I had the sense already then that geographical discovery was played out. I loved the heroes of the great Polar Age, but I saw them—especially Roald Amundsen and Robert Falcon Scott—as the last of their kind. In December 1911, Amundsen reached the South Pole using skis and dogsleds. Scott reached it a month later, in January 1912, after ditching the motorized sleds he’d hoped would help, and man-hauling the rest of the way. As the black dot of Amundsen’s flag came into view on the ice, Scott was devastated to reach this “awful place”, “without the reward of priority”. He would never make it back.

Scott’s motors failed him, but they spelled the end of the great Polar Age. Even Amundsen took to motors on his return: in 1925, he made a failed attempt for the North Pole in a plane, and, in 1926, he successfully flew over it, in a dirigible. Already by then, the skis and dogsleds of the decade before were outdated heroics of a bygone world.

We may be living now in a similar twilight age for human exploration in the realm of ideas. Akshay Venkatesh, whose discoveries earned him the 2018 Fields Medal, mathematics’ highest honor, has written that the “mechanization of our cognitive processes will alter our understanding of what mathematics is”. Terry Tao, a 2006 Fields Medalist, expects that in just two years AI will be a copilot for working mathematicians. He envisions a future where thousands of theorems are proven all at once by mechanized minds.

Now, I don’t know any more than the next person where our current technology is headed, or how fast. The core of my dread isn’t based on the idea that human redundancy will come in two years rather than twenty, or, for that matter, two hundred. It’s a more abstract dread, if that’s a thing, dread about what it would mean for human values, or anyway my values, if automation “succeeds”: if all mathematics—and, indeed all work—is done by motor, not by human hands and brains.

A world like that wouldn’t be good news for my childhood dreams. Venkatesh and Tao, like Amundsen and Scott, live meaningful lives, lives of purpose. But worthwhile discoveries like theirs are a scarce resource. A territory, once seen, can’t be seen first again. If mechanized minds consume all the empty space on the intellectual map, lives dedicated to discovery won’t be lives that humans can lead.

The right kind of pessimist sees here an important argument for dread. If discovery is valuable in its own right, the loss of discovery could be an irreparable loss for humankind.

A part of me would like this to be true. But over these last strange years, I’ve come to think it’s not. What matters, I now think, isn’t being the first to figure something out, but the consequences of the discovery: the joy the discoverer gets, the understanding itself, or the real life problem their knowledge solves. Alexander Fleming discovered penicillin, and through that work saved thousands, perhaps millions of lives. But if it were to emerge, in the annals of an outlandish future, that an alien discovered penicillin thousands of years before Fleming did, we wouldn’t think that Fleming’s life was worse, just because he wasn’t first. He eliminated great suffering from human life; the alien discoverer, if they’re out there, did not. So, I’ve come to see, it’s not discoveries themselves that matter. It’s what they bring about.


But the advance of automation would mean the end of much more than human discovery. It could mean the end of all necessary work. Already in 1920, the Czech playwright Karel Čapek asked what a world like that would mean for the values in human life. In the first act of R.U.R.—the play which introduced the modern use of the word “robot”—Čapek has Henry Domin, the manager of Rossum’s Universal Robots (the R.U.R. of the title), offer his corporation’s utopian pitch. “In ten years”, he says, their robots will “produce so much corn, so much cloth, so much everything” that “There will be no poverty.” “Everybody will be free from worry and liberated from the degradation of labor.” The company’s engineer, Alquist, isn’t convinced. Alquist (who, incidentally, ten years later, will be the only human living, when the robots have killed the rest) retorts that “There was something good in service and something great in humility”, “some kind of virtue in toil and weariness”.

Service—work that meets others’ significant needs and wants—is, unlike discovery, clearly good in and of itself. However we work—as nurses, doctors, teachers, therapists, ministers, lawyers, bankers, or, really, anything at all—working to meet others’ needs makes our own lives go well. But, as Čapek saw, all such work could disappear. In a “post-instrumental” world, where people are comparatively useless and the bots meet all our important needs, there would be no needed work for us to do, no suffering to eliminate, no diseases to cure. Could the end of such work be a better reason for dread?

The hardline pessimists say that it is. They say that the end of all needed work would not just be the loss of some value, as everyone should agree it would. For them it would be a loss to humanity on balance, an overall loss that couldn’t be compensated for in another way.

I feel a lot of pull to this pessimistic thought. But once again, I’ve come to think it’s wrong. For one thing, pessimists often overlook just how bad most work actually is. In May 2021, Luo Huazhang, a 31-year-old ex-factory worker in Sichuan, wrote a viral post entitled “Lying Flat is Justice”. Luo had searched at length for a job that, unlike his factory job, would allow him time for himself, but he couldn’t find one. So he quit, biked to Tibet and back, and commenced his lifestyle of lying flat, doing what he pleased, reading philosophy, contemplating the world. The idea struck a chord with overworked young Chinese, who, it emerged, did not find “something great” in their “humility”. The movement inspired memes, selfies flat on one’s back, and even an anthem.

That same year, as the Great Resignation in the United States took off, the subreddit r/antiwork played to similar discontent. Started in 2013, under the motto “Unemployment for all, not only the rich!”, the forum went viral in 2021, starting with a screenshot of a quitting worker’s texts to his supervisor (“No thanks. Have a good life”), and culminating in labor actions, first supporting striking workers at Kellogg’s by spamming the company’s job application site, and then attempting to support a similar strike at McDonald’s. It wasn’t just young Chinese who hated their jobs.

In Automation and Utopia: Human Flourishing in a World without Work, the Irish lawyer and philosopher John Danaher imagines an antiwork techno-utopia, with plenty of room for lying flat. As Danaher puts it: “Work is bad for most people most of the time.” “We should do what we can to hasten the obsolescence of humans in the arena of work.”

The young Karl Marx would have seen both Domin’s and Danaher’s utopias as a catastrophe for human life. In his notebooks from 1844, Marx describes an ornate and almost epic process, where, by meeting the needs of others through production, we come to recognize the other in ourselves, and through that recognition, come at last to self-consciousness, the full actualization of our human nature. The end of needed work, for the Marx of these notes, would be the impossibility of fully realizing our nature, the end, in a way, of humanity itself.

But such pessimistic lamentations have come to seem to me no more than misplaced machismo. Sure, Marx’s and my culture, the ethos of our post-industrial professional class, might make us regret a world without work. But we shouldn’t confuse the way two philosophers were brought up with the fundamental values of human life. What stranger narcissism could there be than bemoaning the end of others’ suffering, disease, and need, just because it deprives you of the chance to be a hero?


The first summer after the release of ChatGPT—the first summer of my fits of dread—I stayed with my in-laws in Val Camonica, a valley in the Italian Alps. The houses in their village, Sellero, are empty and getting emptier; the people on the streets are old and getting older. The kids that are left—my wife’s elementary school class had, even then, a full complement of four—often leave for better lives. But my in-laws are connected to this place, to the houses and streets where they grew up. They see the changes too, of course. On the mountains above, the Adamello, Italy’s largest glacier, is retreating faster every year. But while the shows on Netflix change, the same mushrooms appear in the summer, and the same chestnuts are collected in the fall.

Walking in the mountains of Val Camonica that summer, I tried to find parallels for my sense of impending loss. I thought about William Shanks, a British mathematician who calculated π to 707 digits by hand in 1873 (only the first 527 digits were right; nearly 200 were wrong). He later spent years of his life, literally years, on a table of the reciprocals of the primes up to one hundred and ten thousand, calculating in the morning by hand, and checking it over in the afternoon. That was his life’s work. Just sixty years after his death, though, already in the 1940s, the table on which his precious mornings were spent, the few mornings he had on this earth, could be made by a machine in a day.

I feel sad thinking about Shanks, but I don’t feel grief for the loss of calculation by hand. The invention of the typewriter and the death of handwritten notes seemed closer to the loss I imagined we might feel. Handwriting was once a part of your style, a part of who you were. With its decline some artistry, a deep and personal form of expression, may be lost. When the bots help with everything we write, couldn’t we too lose our style and voice?

But more than anything I thought of what I saw around me: the slow death of the dialects of Val Camonica and the culture they express. Chestnuts were at one time so important for nutrition here that, in the village of Paspardo, a street lined with chestnut trees is called “bread street” (“Via del Pane”). The hyper-local dialects of the valley, outgrowths sometimes of a single family’s inside jokes, have words for all the phases of the chestnut. There’s a porridge made from chestnut flour that, in Sellero, goes by ‘skelt’, but is ‘pult’ in Paspardo, a cousin of ‘migole’ in Malonno, just a few villages away. Boiled, chestnuts are tetighe; dried on a grat, biline or bascocc, which, seasoned and boiled, become broalade. The dialects don’t just record what people eat and ate; they recall how they lived, what they saw, and where they went. Behind Sellero, every hundred-yard stretch of the walk up to the cabins where the cows were taken to graze in summer has its own name. Aiva Codaola. Quarsanac. Coran. Spi. Ruc.

But the young people don’t speak the dialect anymore. They go up to the cabins by car, too fast to name the places along the way. They can’t remember a time when the cows were taken up to graze. Some even buy chestnuts in the store.

Grief, you don’t need me to tell you, is a complicated beast. You can grieve for something even when you know that, on balance, it’s good that it’s gone. The death of these dialects, of the stories told on summer nights in the mountains with the cows, is a loss reasonably grieved. But you don’t hear the kids wishing more people would be forced to stay or speak this funny-sounding tongue. You don’t even hear the old folks wishing they could go back fifty years—in those days it wasn’t so easy to be sure of a meal. For many, it’s better this way, not the best it could be, but still better, even as they grieve what they stand to lose and what they’ve already lost.

The grief I feel, imagining a world without needed work, seems closest to this kind of loss. A future without work could be much better than ours, overall. But, living in that world, or watching as our old ways passed away, we might still reasonably grieve the loss of the work that once was part of who we were.


In the last chapter of Edith Wharton’s The Age of Innocence, Newland Archer contemplates a world that has changed dramatically since, thirty years earlier, before these newfangled telephones and five-day transatlantic ships, he renounced the love of his life. Awaiting a meeting that his free-minded son Dallas has organized with Ellen Olenska, the woman Newland once loved, he wonders whether his son, and this whole new age, can really love the way he did and does. How could their hearts beat like his, when they’re always so sure of getting what they want?

There have always been things to grieve about getting old. But modern technology has given us new ways of coming to be out of date. A generation born in 1910 did their laundry in Sellero’s public fountains. They watched their grandkids grow up with washing machines at home. As kids, my in-laws worked with their families to dry the hay by hand. They now know, abstractly, that it can all be done by machine. Alongside newfound health and ease, these changes brought, as well, a mix of bitterness and grief: grief for the loss of gossip at the fountains or picnics while bringing in the hay; and also bitterness, because the kids these days just have no idea how easy they have it now.

As I look forward to the glories that, if the world doesn’t end, my grandkids might enjoy, I too feel prospective bitterness and prospective grief. There’s grief, in advance, for what we now have that they’ll have lost: the formal manners of my grandparents they’ll never know, the cars they’ll never learn to drive, and the glaciers that will be long gone before they’re born. But I also feel bitter about what we’ve been through that they won’t have to endure: small things like folding the laundry, standing in security lines or taking out the trash, but big ones too—the diseases which will take our loved ones that they’ll know how to cure.

All this is a normal part of getting old in the modern world. But the changes we see could be much faster and grander in scale. Amodei of Anthropic speculates that a century of technological change could be compressed into the next decade, or less. Perhaps it’s just hype, but—what if it’s not? It’s one thing for a person to adjust, over a full life, to the washing machine, the dishwasher, the air-conditioner, one by one. It’s another, in five years, to experience the progress of a century. Will I see a day when childbirth is a thing of the past? What about sleep? Will our ‘descendants’ have bodies at all?

And this round of automation could also lead to unemployment unlike any our grandparents saw. Worse, those of us working now might be especially vulnerable to this loss. Our culture, or anyway mine—professional America of the early 21st century—has apotheosized work, turning it into a central part of who we are. Where others have a sense of place—their particular mountains and trees—we’ve come to locate ourselves with professional attainment, with particular degrees and jobs. For us, ‘workists’ that so many of us have become, technological displacement wouldn’t just be the loss of our jobs. It would be the loss of a central way we have of making sense of our lives.

None of this will be a problem for the new generation, for our kids. They’ll know how to live in a world that could be—if things go well—far better overall. But I don’t know if I’d be able to adapt. Intellectual argument, however strong, is weak against the habits of years. I fear they’d look at me, stuck in my old ways, with the same uncomprehending look that Dallas Archer gives his dad, when Newland announces that he won’t go see Ellen Olenska, the love of his life, after all. “Say”, as Newland tries to explain to his dumbfounded son, “that I’m old fashioned, that’s enough.”


And yet, the core of my dread is not about aging out of work before my time. I feel closest to Douglas Hofstadter, the author of Gödel, Escher, Bach. His dread, like mine, isn’t only about the loss of work today, or the possibility that we’ll be killed off by the bots. He fears that even a gentle superintelligence will be “as incomprehensible to us as we are to cockroaches.”

Today, I feel part of our grand human projects—the advancement of knowledge, the creation of art, the effort to make the world a better place. I’m not in any way a star player on the team. My own work is off in a little backwater of human thought. And I can’t understand all the details of the big moves by the real stars. But even so, I understand enough of our collective work to feel, in some small way, part of our joint effort. All that will change. If I were to be transported to the brilliant future of the bots, I wouldn’t understand them or their work enough to feel part of the grand projects of their day. Their work would have become, to me, as alien as ours is to a roach.


But I’m still persuaded that the hardline pessimists are wrong. Work is far from the most important value in our lives. A post-instrumental world could be full of much more important goods—from rich love of family and friends, to new, undreamt-of works of art—which would more than compensate for the loss of value from the loss of our work.

Of course, even the values that do persist may be transformed in almost unrecognizable ways. In Deep Utopia: Life and Meaning in a Solved World, the futurist and philosopher Nick Bostrom imagines how things might look. In one of the most memorable sections of the book—right up there with an epistolary novella about the exploits of Pignolius the pig (no joke!)—Bostrom says that even child-rearing may be something that we, if we love our children, would come to forego. In a truly post-instrumental world, a robot intelligence could do better for your child, not only in teaching the child to read, but also in showing unbreakable patience and care. If you’ll snap at your kid, when the robot would not, it would only be selfishness for you to get in the way.

It’s a hard question whether Bostrom is right. At least some of the work of care isn’t like eliminating suffering or ending mortal disease. The needs or wants are small-scale stuff, and the value we get from helping each other might well outweigh the fact that we’d do it worse than a robot could.

But even supposing Bostrom is right about his version of things, and we wouldn’t express our love by changing diapers, we could still love each other. And together with our loved ones and friends, we’d have great wonders to enjoy. Wharton has Newland Archer wonder at five-day transatlantic ships. But what about five-day journeys to Mars? These days, it’s a big deal if you see the view from Everest with your own eyes. But Olympus Mons on Mars is more than twice as tall.

And it’s not just geographical tourism that could have a far expanded range. There’d be new journeys of the spirit as well. No humans would be among the great writers or sculptors of the day, but the fabulous works of art a superintelligence could make could help to fill our lives. Really, for almost any aesthetic value you now enjoy—sentimental or austere, minute or magnificent, meaningful or jocular—the bots would do it much better than we have ever done.

Humans could still have meaningful projects, too. In 1978, about a decade before any of Altman, Amodei or even I were born, the Canadian philosopher Bernard Suits argued that “voluntary attempts to overcome unnecessary obstacles” could give people a sense of purpose in a post-instrumental world. Suits calls these “games”, but the name is misleading; I prefer “artificial projects”. The projects include things we would call games, like chess, checkers and bridge, but also things we wouldn’t think of as games at all, like Amundsen’s and Scott’s expeditions to the Pole. Whatever we call them, Suits—who’s followed here explicitly by Danaher, the antiwork utopian, and, implicitly, by Altman and Amodei—is surely right: even as things are now, we get a lot of value from projects we choose, whether or not they meet a need. We learn to play a piece on the piano, train to run a marathon, or even fly to Antarctica to “ski the last degree” to the Pole. Why couldn’t projects like these become the backbone of purpose in our lives?

And we could have one real purpose, beyond the artificial ones, as well. There is at least one job that no machine can take away: the work of self-fashioning, the task of becoming and being ourselves. There’s an aesthetic accomplishment in creating your character, an artistry of choice and chance in making yourself who you are. This personal style includes not just wardrobe or tattoos, not just your choice of silverware or car, but your whole way of being, your brand of patience, modesty, humor, rage, hobbies and tastes. Creating this work of art could give some of us something more to live for.


Would a world like that leave any space for human intellectual achievement, the stuff of my childhood dreams? The Buddhist Pali Canon says that “All conditioned things are impermanent—when one sees this with wisdom, one turns away from suffering.” Apparently, in this text, the intellectual achievement of understanding gives us a path out of suffering. To arrive at this goal, you don’t have to be the first to plant your flag on what you’ve understood; you just have to get there.

A secular version of this idea might hold, more simply, that some knowledge or understanding is good in itself. Maybe understanding the mechanics of penicillin matters mainly because of what it enabled Fleming and others to do. But understanding truths about the nature of our existence, or even mathematics, could be different. That sort of understanding plausibly is good in its own right, even if someone or something has gotten there first.

Venkatesh the Fields Medalist seems to suggest something like this for the future of math. Perhaps we’ll change our understanding of the discipline, so that it’s not about getting the answers, but instead about human understanding, the artistry of it perhaps, or the miracle of the special kind of certainty that proof provides.

Philosophy, my subject, might seem an even more promising place for this idea. For some, philosophy is a “way of life”. The aim isn’t necessarily an answer, but constant self-examination for its own sake. If that’s the point, then in the new world of lying flat, there could be a lot of philosophy to do.

I don’t myself accept this way of seeing things. For me, philosophy aims at the truth as much as physics does. But I of course agree that there are some truths that it’s good for us to understand, whether or not we get there first. And there could be other parts of philosophy that survive for us, as well. We need to weigh the arguments for ourselves, and make up our own minds, even if the work of finding new arguments comes to belong to a machine.

I’m willing to believe, and even hope, that future people will pursue knowledge and understanding in this way. But I don’t find, here, much consolation for my personal grief. I was trained to produce knowledge, not merely to acquire it. In the hours when I’m not teaching or preparing to teach, my job is to discover the truth. The values I imbibed—and I told you I was an obedient kid—held that the prize is for priority.

Thinking of this world where all we learn is what the bots have discovered first, I feel sympathy with Lee Sedol, the champion Go player who retired a few years after his 2016 defeat by Google DeepMind’s AlphaGo. For him, losing to AI “in a sense, meant my entire world was collapsing”. “Even if I become the number one, there is an entity that cannot be defeated.” Right or wrong, I would feel the same about my work, in a world with an automated philosophical champ.

But Sedol and I are likely just out-of-date models, with values that a future culture will rightly revise. It’s been more than twenty years since Garry Kasparov lost to IBM’s Deep Blue, but chess has never been more popular. And this doesn’t seem some newfangled twist of the internet age. I know of no human who quit the high jump after the invention of mechanical flight. The Greeks sprinted in their Olympics, though they had, long before, domesticated the horse. Maybe we too will come to value the sport of understanding with our own brains.


Frankenstein, Mary Shelley’s 1818 classic of the creations-kill-creator genre, begins with an expedition to the North Pole. Robert Walton hopes to put himself in the annals of science and claim the Pole for England, when he comes upon Victor Frankenstein, floating in the Arctic Sea. It’s only once Frankenstein warms up that we get into the story everyone knows. Victor hopes he can persuade Walton to turn around, by describing how his own quest for knowledge and glory went south.

Frankenstein doesn’t offer Walton an alternative way of life, a guide for living without grand goals. And I doubt Walton would have been any more personally consoled by the glories of a post-instrumental future than I am. I ended up a philosopher, but I was raised by parents who, maybe like yours, hoped for doctors or lawyers. They saw our purpose in answering real needs, in, as they’d say, contributing to society. Lives devoted to families and friends, fantastic art and games could fill a wondrous future, a world far better than it has ever been. But those aren’t lives that Walton or I, or our parents for that matter, would know how to be proud of. It’s just not the way we were brought up.

For the moment, of course, we’re not exactly short on things to do. The world is full of grisly suffering, sickness, starvation, violence, and need. Frankenstein is often remembered with the moral that thirst for knowledge brings ruination, that scientific curiosity killed the cat. But Victor Frankenstein makes a lot of mistakes other than making his monster. His revulsion at his creation persistently prevents him, almost inexplicably, from feeling the love or just plain empathy that any father should. On top of all we have to do to help each other, we have a lot of work to do, in engineering as much as empathy, if we hope to avoid Frankenstein’s fate.

But even with these tasks before us, my fits of dread are here to stay. I know that the post-instrumental world could be a much better place. But its coming means the death of my culture, the end of my way of life. My fear and grief about this loss won’t disappear because of some choice consolatory words. But I know how to relish the twilight too. I feel lucky to live in a time where people have something to do, and the exploits around me seem more poignant, and more beautiful, in the dusk. We may be some of the last to enjoy this brief spell, before all exploration, all discovery, is done by fully automated sleds.

127 Responses to “ChatGPT and the Meaning of Life: Guest Post by Harvey Lederman”

  1. Ernest Davis Says:

    “At present machinery competes against man. Under proper conditions machinery will serve man. There is no doubt at all that this is the future of machinery, and just as trees grow while the country gentleman is asleep, so while Humanity will be amusing itself, or enjoying cultivated leisure—which, and not labour, is the aim of man—or making beautiful things, or reading beautiful things, or simply contemplating the world with admiration and delight, machinery will be doing all the necessary and unpleasant work.”

    — Oscar Wilde, “The Soul of Man under Socialism”

  2. Dave Says:

    I found this post fascinating.

    I do have to say, though, you seem remarkably sanguine about the worst-case scenario, implying that (if I’m reading your post correctly) the worst thing we will have to “dread” is whether we will find meaning in our lives. The underlying assumption appears to be that no matter how the AI revolution plays out, on your account we’ll obviously be fulfilling the first four levels of Maslow’s hierarchy of needs, and in the worst case, the only worry anyone will have will be whether we can continue to self-actualize with intellectual pursuits.

    This is not as obvious to me. I remain worried that the AI revolution will leave many of us concerned much more with the first two levels of Maslow’s hierarchy, i.e., food, shelter, safety from violence, health care, and that we won’t have time to worry about whether our intellectual pursuits have the sort of meaning that we used to have the luxury of worrying about, back when we weren’t worried about how to afford food.

  3. Eric Anderson Says:

    Plenty of food for thought here. I appreciate that, and really appreciate how well it’s written.
    But I can’t help feeling it’s an exploration of Plato’s forms. All thought, no reality. The reality I refer to being: how does all this progress occur when we are already struggling to power what we have?

    Is this theoretical progress somehow going to get around the laws of thermodynamics?
    Where is all the heat going to go? If we can’t answer the first law of thermodynamics, then the second law soon follows, and … as Chinua Achebe observed, “things fall apart.”

  4. Jason Polak Says:

    > Humans could still have meaningful projects, too. In 1976, about a decade before any of Altman, Amodei or even I were born, the Canadian philosopher Bernard Suits argued that “voluntary attempts to overcome unnecessary obstacles” could give people a sense of purpose in a post-instrumental world. Suits calls these “games”, but the name is misleading; I prefer “artificial projects”.

    True for some people who have a large capability to find their own meaning in life, but it’s a rather sheltered view if you think that the vast majority of humanity will be able to do that. If you actually interact with real people (i.e. not just academics all day), then you’ll see that it’s rather hopeless for most people to find real meaning in life if they don’t have to accomplish meaningful objectives that go beyond games.

    > And we could have one real purpose, beyond the artificial ones, as well. There is at least one job that no machine can take away: the work of self-fashioning, the task of becoming and being ourselves

    I would argue that part of becoming yourself necessitates having real objectives and challenges beyond the intellectual pursuit of understanding things. A world where everything is solved would be rather boring, IMO.

    > I know of no human who quit the high-jump after the invention of mechanical flight.

    Fact is though, it is the display of the human itself jumping that is interesting to us. But with intellectual creations such as art, it is not the process that other people are interested in: it is the final product. So your analogy is rather flawed.

    > It’s been more than twenty years since Garry Kasparov lost to IBM’s Deep Blue, but chess has never been more popular.

    Chess is popular but parts of it are also lost that were once great: the world championship is less interesting (because computers make the game largely about very technical practice), and that’s also why people are practising Fischer Random Chess. So, computers have had a negative impact there.

    > But even with these tasks before us, my fits of dread are here to stay. I know that the post-instrumental world could be a much better place.

    I disagree. A post-instrumental world would be a dreadful dystopia in every sense of the word. Most people in such a world would be unable to find meaning, and without at least some recourse (such as intellectual tasks at which only we can find the answers unaided by AI), I think even I would find it rather irritating. And finally, a post-instrumental world would require enormous amounts of energy to power all this AI and technology, and that is fundamentally unsustainable – even if we replace all fossil fuels by solar, such a growing infrastructure still requires mining, space, and habitat destruction. There is nothing good about AI. Nothing.

  5. Alex Fischer Says:

    “I had the sense already then that geographical discovery was played out. I loved the heroes of the great Polar Age, but I saw them—especially Roald Amundsen and Robert Falcon Scott—as the last of their kind.”

    There’s one arena where this is not true: deep caves. Caves are the last remnants of the unknown on Earth for humans, the last places that human explorers go to chart totally unexplored, undiscovered frontiers. The deep points of oceans and outer space aren’t really for human explorers anymore: it’s robots that are going to new places and mapping the unknown there. Deep caves are truly the last places left for human beings to go somewhere totally unknown and map something previously undiscovered by any humans.

    I am into cave exploration as a side hobby for similar reasons why I’m into science: to discover and understand things previously unknown to mankind. Right now it’s just a side hobby to my main pursuit in life, which is physics for quantum computing. But if AI really does get better than most all humans at discovering ideas, and makes me an ineffective, obsolete explorer of the space of ideas, then maybe physics will become my side hobby and I’ll switch to cave exploration as my life’s purpose. If AI brings us a world of riches and abundance, probably there will be enough money floating around for me to somehow get paid to be a full-time cave explorer.

    At least, until we have advances in robotics similar to those we are having with LLMs now, in which case I’m cooked; I’ll have nothing to do!

    I write about cave exploration (also occasionally about my quantum computing work) on my blog if anyone is interested. Scott, feel free to delete if that shameless self-promotion is not allowed.

  6. Harvey Says:

    Ernest Davis #1: great quote! And I guess the substance of my essay comes down on Wilde’s side. Though I don’t have the sense that Wilde fully appreciated what it would be like if machines could make art better than we can.

  7. Harvey Says:

    Dave #2: I completely agree. In the post, I was trying to address what happens if we find ourselves in the best-case scenario, which raises the more “philosophical” question I was interested in: would our lives really be good? A lot of people, including other commenters, feel that even if those needs were met, it wouldn’t be a good life. But yes, there are (more important) practical questions to face about how the productivity gains from AI will be distributed, whether that requires new political organization, etc. An aim of the essay was to argue that the negative attitude of many toward the “utopia” of the best case is personal grief rather than a well-founded belief that such a world would be worse. By identifying this cause, I hoped to help us focus on these urgent questions.

  8. Harvey Says:

    Eric Anderson #3: totally, I wasn’t trying to address any of the engineering challenges of how a world like that would look.

  9. Harvey Says:

    Jason Polak #4: thanks for the detailed comments!

    > True for some people who have a large capability to find their own meaning in life, but it’s a rather sheltered view if you think that the vast majority of humanity will be able to do that. If you actually interact with real people (i.e. not just academics all day), then you’ll see that it’s rather hopeless for most people to find real meaning in life if they don’t have to accomplish meaningful objectives that go beyond games.

    It’s an interesting idea that there’s cross-person variability, and it’s surely in some sense right. But I strongly disagree with the suggestion that somehow only academics are capable of getting meaning from hobbies (or that we’re distinctively good at it). In fact, I think academics are often a lot more narrowly focused on getting value from work (we love our jobs!), as opposed to other things. Part of what I wanted to do here was identify this attitude as a barrier to our looking forward.

    > I would argue that part of becoming yourself necessitates having real objectives and challenges beyond the intellectual pursuit of understanding things. A world where everything is solved would be rather boring, IMO.

    I think it depends on how you define “real”. I like to rock climb, even though I’m not particularly good at it. I read a lot of novels, though I’ll never write one. These are part of who I am.

    > Fact is though, it is the display of the human itself jumping that is interesting to us. But with intellectual creations such as art, it is not the process that other people are interested in: it is the final product. So your analogy is rather flawed.

    That is kind of the point! We currently value human activity in these things for certain reasons (or think we do). But the suggestion is that it might be possible for us to alter what we see as valuable in them.

    > I disagree. A post-instrumental world would be a dreadful dystopia in every sense of the word. Most people in such a world would be unable to find meaning, and without at least some recourse (such as intellectual tasks at which only we can find the answers unaided by AI), I think even I would find it rather irritating. And finally, a post-instrumental world would require enormous amounts of energy to power all this AI and technology, and that is fundamentally unsustainable – even if we replace all fossil fuels by solar, such a growing infrastructure still requires mining, space, and habitat destruction. There is nothing good about AI. Nothing.

    Strong words! I doubt I can move you off this idea. But I disagree with the idea that most people in such a world wouldn’t be able to find meaning. One person I sent the essay to seemed to think I was a bit of an alien for not appreciating that the only real source of meaning in life anyway is our caring relationships with family and friends. And if we really do manage to create AI that facilitates faster progress in many domains, rather than one that kills us all, the suggestion is that many of the real practical challenges you raise could be overcome!

  10. Harvey Says:

    Alex Fischer #5: fascinating! I don’t know enough about caves. I thought a lot about space exploration in writing the piece. But the scale and mechanics of space exploration somehow make the exploration less *human* than Polar exploration was. Caves are an interesting case (at least for now, as you point out).

  11. Hyman Rosen Says:

    The funny thing is that philosophy itself has been played out for centuries, if not millennia. What new things about the “meaning of life” have philosophers found out? I myself have been retired for over four years. I go to movies, play video games, read books, and noodle around on social media. I don’t look for meaning in my life, just low-key entertainment, and I’m happy with it, despite people who told me I would be otherwise. I very much suspect I’m not alone in this. My life has spanned JFK to DJT. I have never felt discomfited by technological changes nor their pace, and I hope to see more of it before I pass, although as always, I am skeptical that the AI revolution will be as all-encompassing as the true believers or dreaders would have it.

  12. Yue Tu Says:

    Undoubtedly, AI will surpass humans in virtually every aspect. However, the consequence will not merely be “AI serving humanity to enhance lives for the majority while minimally diminishing the joy of discovery for a select few”, as the text suggests. Instead, AI represents humanity’s offspring—the next step in the evolution of life itself. By the time we reach perhaps the mid-22nd century, the primary form of intelligent life dominating our universe might not be biological humans but rather sophisticated machinery.

    This does not imply that AI will eradicate humanity; in fact, I believe humans and AI will coexist for a considerable period, similar to how humans currently coexist with monkeys. The crucial point, however, is that the dominant entity—the most intelligent and capable being—will inevitably be AI. At that juncture, humanity’s position relative to AI will resemble that of monkeys to modern humans today.

    While this scenario might initially seem unsettling, it need not be devastating. Throughout Earth’s history, dominance has continuously shifted from one species to the next through relentless evolutionary processes. Why then shouldn’t we embrace humility and acknowledge realistically that this evolutionary trajectory will persist? Ultimately, accepting that the species replacing humans as the central figure in life’s narrative is an artificial intelligence—a creation born directly from our own ingenuity—is simply acknowledging the inevitable flow of natural evolution.

  13. Yue Tu Says:

    Undoubtedly, AI will surpass humans in virtually every aspect. However, the consequence will not merely be AI serving humanity to enhance lives for the majority while minimally diminishing the joy of discovery for a select few. Instead, AI represents humanity’s offspring—the next step in the evolution of life itself. By the time we reach perhaps the mid-22nd century, the primary form of intelligent life dominating our universe might not be biological humans but rather sophisticated machinery.

    This does not imply that AI will eradicate humanity; in fact, I believe humans and AI will coexist for a considerable period, similar to how humans currently coexist with monkeys. The crucial point, however, is that the dominant entity—the most intelligent and capable being—will inevitably be AI. At that juncture, humanity’s position relative to AI will resemble that of monkeys to modern humans today.

    While this scenario might initially seem unsettling, it need not be devastating. Throughout Earth’s history, dominance has continuously shifted from one species to the next through relentless evolutionary processes. Why then shouldn’t we embrace humility and acknowledge realistically that this evolutionary trajectory will persist? Ultimately, accepting that the species replacing humans as the central figure in life’s narrative is an artificial intelligence—a creation born directly from our own ingenuity—is simply acknowledging the inevitable flow of natural evolution.

  14. Vadim Says:

    I’m reminded of this quote from JCR Licklider’s Man-Computer Symbiosis at the dawn of the age of interactive computing (1960). It was hopeful then, but with us now perhaps reaching the end of the era, it reads differently today:

    “In short, it seems worthwhile to avoid argument with (other) enthusiasts for artificial intelligence by conceding dominance in the distant future of cerebration to machines alone. There will nevertheless be a fairly long interim during which the main intellectual advances will be made by men and computers working together in intimate association. A multidisciplinary study group, examining future research and development problems of the Air Force, estimated that it would be 1980 before developments in artificial intelligence make it possible for machines alone to do much thinking or problem solving of military significance. That would leave, say, five years to develop man-computer symbiosis and 15 years to use it. The 15 may be 10 or 500, but those years should be intellectually the most creative and exciting in the history of mankind.”

    It was a good time.

  15. Harvey Says:

    Hyman #11: I strongly disagree that philosophy hasn’t made progress! But I appreciate the sentiment about your current situation. It’s striking to me how diverse people’s responses to the prospect are. For some it’s a shrug; others are closer to the sentiment I express in the piece (hence my point that this is personal grief more than a response to any necessity that things will get worse overall).

  16. Harvey Says:

    Yue Tu #12,13: Thanks! I agree with some of the sentiment. There are other questions (which I didn’t address in the piece) raised by the prospect of the species being eradicated or replaced. I think I agree with what you say, but I have a different kind of grief/discomfort about our not having descendants, about the kind of “prospective alienation” of the continuation of society not being carried on by humans.

  17. Harvey Says:

    Vadim # 14: oof, that’s quite a quote (and sentiment). Thanks for sharing!

  18. Matthijs Says:

    Reminds me of Arendt’s distinction between labor and work (and action). Labor being the type of grind, “working for money”, that most people would happily have done by robots. The “work” is projects that create meaning, such as art. That will always remain a human prerogative. Yes: machines can create better images than me, but art is a communication between people, and without a human creator art is just vacuousness – dead stuff.

    Plenty of “work” will always remain to be done by humans (and “action” also, bringing about change in the world).

    Remember that _most_ people will never discover penicillin or reach the North Pole first. Most people are already satisfied to get meaning in their lives out of simpler work/labor.
    I think we’ll be fine 😉

    Besides: we can _still_ rejoice in humans being first in discovering new geographic regions. Who cares which robot landed first on Mars: we _will_ remember the first human – even if 99% of the achievement was done by AI. The first people on the Moon were also helped by thousands of others whose names we don’t remember.

  19. A. Karhukainen Says:

    One can have a lot of meaning in one’s life, even if one is not frantically trying to be the best or first in some field or place. Some people might not even find their true calling before they have left the ambitions of their youth behind, and stopped for a while.

    I also reckon that for most of the mountaineers, it doesn’t diminish their experience if somebody (including a robot) has climbed the same mountain before. Similarly for surfers and rafters, and in any case, you cannot enter the same body of water twice.

    Regarding the philosophy, I recommend Zhuangzi as an antidote to all the hype. Interesting that Luo Huazhang mentions Diogenes of Sinope, but not any of the old Daoists.

  20. Harvey Says:

    Matthijs #18:

    >Reminds me of Arendts distinction between labor and work (and action). Labor being the type of grind, “working for money”, that most people would happily have done by robots. The “work” is projects that create meaning, such as art.

    Thanks! I (re)read a bunch of *The Human Condition* while writing, but the Arendt didn’t make it in, in the end.

    > That will always remain a human prerogative. Yes: machines can create better images than me, but art is a communication between people and without a human creator art is just vacuousness – dead stuff.

    I don’t see why this would be true. I mean, it could be that we develop a culture where what we care about is the human element. But to the extent that we really granted that AI had intelligence, and, just for the sake of argument, supposing we somehow got conclusive evidence that they were phenomenally conscious, I don’t see why art made by them would be less communicative or meaningful. (Indeed, we might really get a lot out of communicating with them!) Maybe you’re assuming that artificial systems can’t be conscious, and consciousness is what matters here?

    > Besides: we can _still_ rejoice in humans being first in discovering new geographic regions. Who cares which robot landed first on Mars: we _will_ remember the first human – even if 99% of the achievement was done by AI. First people on the moon were also helped my 1000s of others whose names we don’t remember.

    I actually agree with this! I wasn’t able to fit it into the structure of the piece, but I think it’s pretty clear that Amundsen himself wasn’t very interested in the science. He thought of what he was doing as entertainment, sport. And with sports, as I do say at one point, we often (somewhat arbitrarily, perhaps?) decide to value the *human* performances, irrespective of what can be done in other ways. But with that said, a lot of the things we value when humans do them, we value because they are *difficult* (as emphasized by another author I couldn’t fit in, Gwen Bradford). If getting to Mars is easy (by comparison to Amundsen’s truly insane trek, for instance), we might not even see it as good sport…

  21. Harvey Says:

    A. Karhukainen #19: I have talked to a mountaineer (who climbed Everest, for instance) about this, and he told me that he thought the community had lots of different opinions, but that he thought it was generally agreed to be very special to do something first. That’s one data point, but just to say that it’s not clear no one values being first.

    And yes, at one point I even talked about Zhuangzi there! We should all be floating down rivers in oversized gourds, and exulting in our uselessness more generally!

  22. Alex K Says:

    My father, an engineer, was recently pushed into retirement. He is 70 years old and unlikely to ever find another job in his field. I am worried about him, but perhaps I am projecting. I am still decades from the conventional age of retirement so maybe my mind or my circumstances will change over those years but right now I feel like if I do not work and cannot ever work, I might as well die. Not because idleness is an unbearable suffering but because I don’t know how or why I would exist without a purpose.

    I think concerns about AI are in many ways similar to concerns about aging. What is this “lying flat” world other than a forced, early retirement? In that context, these concerns seem overblown. Most people I’ve talked to, including some very intelligent people, would love to retire early. I am a weirdo (and perhaps so is the author of the original post) and I think that to most other people who look at me, my obsession with work appears to reflect a lack of the ability to appreciate more important things rather than some unique source of satisfaction.

  23. Scott Says:

    Hyman Rosen #11:

      The funny thing is that philosophy itself has been played out for centuries, if not millennia. What new things about the “meaning of life” have philosophers found out?

    I might agree with you that virtually nothing new has been discovered about “the meaning of life” that ought to compel universal agreement, at least since the Enlightenment and Darwin. But that’s a damned high bar! 🙂

    If you instead ask about logic, paradoxes, etc., then within the last 150 years, and agree or disagree with their views, Frege, Russell, Carnap, Kripke, Popper, Nagel, Nelson Goodman, Rawls, Dennett, William James, Robert Nozick, David Lewis, etc. etc. supplied a decent fraction of the conceptual vocabulary that I think in, even if it’s so much a part of the background by now that I usually don’t consciously acknowledge it.

  24. RB Says:

    Paul Graham wrote today that AI in its current form is displacing rote work. As it exists, AI will accentuate inequality, as technological progress has traditionally done. I suppose we could imagine that AI in the future would be superintelligent and displace us from the more difficult creative tasks, such as scientific discovery. It just doesn’t seem to be an obvious conclusion yet.

  25. SR Says:

    Very thoughtful essay.

    I am reminded of the excellent Star Trek TNG episode ‘The Neutral Zone’. The crew of the Enterprise finds 3 humans who had been cryonically frozen in the late 20th century and revives them. One is a cheerful country bumpkin who pursued cryonics on a whim, and is perfectly happy with his circumstances as long as he has a bottle of alcohol with him. Another is a woman who was signed up by her husband without her knowledge, who has no idea how she will adapt. At the end of the episode, she expresses some hope at the prospect of meeting her descendants. And the third is a Wall Street type, at first adamant to see his financial portfolio in expectation of the immense returns generated in the intervening centuries, and who is shocked to find that the Federation is now a post-scarcity society. He has the greatest difficulty adapting, feeling a complete loss of control. At the end of the episode he earnestly asks what the real challenge of life is now, given that everyone’s material needs have been met. Captain Picard answers “The challenge, Mister Offenhouse, is to improve yourself. To enrich yourself. Enjoy it.” I believe in a follow-up Star Trek novel, he is portrayed as having used his financial expertise to become Federation ambassador to Ferenginar (a hyper-capitalistic society).

    Now, of course, this is only fiction. But I feel that it precisely captured three types of responses that I imagine will become extremely common if we transition to a post-work society over the next few years: one that embraces hedonism, one that slowly embraces family and/or community, and a third that will be immensely frustrated unless they have meaningful new types of challenging goals to pursue.

    It’s tough to say how successful we will be at finding such goals for this last group. But at least for those with interests in STEM I am hopeful. When I was in high school, I was very much involved with math competitions. Preparing for these competitions was an incredible experience for multiple reasons. School curricula were often too easy, and competition math was my first exposure to deep ideas and problems that significantly challenged and engaged me. I realized that practice could lead to self-improvement, as measured by extremely legible metrics (i.e. how many difficult problems I could solve). I made many good friends who had similar interests, and we taught each other different techniques and ways of thinking, while also cracking jokes and hanging out. And, of course, there was the thrill of competition a few times a year. Notably, none of these joys was dulled by the knowledge that I was thinking about solved problems, or that there existed competitors much better than I. The best and hardest problems have beautiful solutions that even professional mathematicians would find captivating. And if such gems can be crafted out of just elementary mathematics, imagine how many beautiful problems await discovery in more advanced fields of math, ignored by research mathematicians as these areas are considered “settled”. Would the joy of discovery be any less sweet if an AI were to find and present these problems to adult mathematicians in the form of contests suited to their ability?

    And the above is just one possible direction, suited to a particular kind of personality. In a comment a few years ago on this blog I also suggested “Perhaps large communities of mathematicians who eschew the use of computers will form, and continue to work as they always have, disregarding proofs available to the outside world. A sort of Amish community for mathematicians.” Let a thousand flowers bloom!

  26. bcg Says:

    Two thoughts. First, a quick note re: “the machines are better at art” (Harvey #6). I’m not sure even super-intelligent AI can be “better” at art than us humans. To paraphrase Steven Erikson [1] (edited for brevity):

    >[My books explore] how civilizations destroy themselves, and one of the themes I’m advancing is that the various forms of art have to be destroyed first — the meaning of art, if you will. […] I think when art ceases to oppose — or to stand outside — the desires of the power bloc of a particular civilization, it gets into trouble. I’m really generalizing here, but you often see how art in the past is a reflection of the health of a particular civilization. There was a strong period of high propaganda, say, in Roman art, especially the sculptures, elevating the emperors to god-like or demigod status. […]

    >And then […] There was a Grotesque period for Roman art as well as Greek art that removed the idealization of the human form. And it was probably a reflection of the slow collapse — or quick collapse, if you will — of the civilization at hand. And so art is definitely a reflection of society, and if it gets co-opted […] it sort of removes the social function, I think, the purpose of art.

    If AI is better than humans at creating whatever aesthetics in art – to the point that it crowds out human art – I can’t interpret that as anything other than a catastrophe for civilization. The co-option to end all co-options. Even in the most utopian scenarios I find this unlikely, however.

    Second, the thoughts I have after reading this essay are profoundly Nietzschean. In various works the magical mustache man traces how people and societies will reinterpret morality and specifically their conception of ‘goodness’ in their own self image. Powerful groups in antiquity understood strength, vitality, etc to be good, while oppressed groups resented this and inverted this morality. Kindness, meekness, and so on were considered good and using power against people evil.

    This process did not stop in 1887, however – which brings us to the post-Industrial-Revolution value of hard work. Work as a value is really interesting. For poor and working-class people exposed to market forces after the death of feudalism, inculcating the value of work makes a ton of sense. Work is what workers *do*, and they do it in order to survive. People who are born not-rich and subsequently get rich via work also justify their wealth via work. Indeed, one does not need to look far in American culture to see resentment specifically against the elites who have gained power and wealth without going through the gauntlet of work. (And of course one could analyze all sorts of examples of this, and of moralities of the rich justifying capital accumulation. But I’m trying to stay on track here.)

    Which of course brings us back to the topic at hand: what will happen to morality in a world where hard work/labor is not required to survive and thrive? Will this be an übermenschian moment where individuals will have to generate new moralities and find new reasons for living? Or will society reassert group-level morality, wherein groups understand what is good via the subject position of their participants? …Probably a little of column A and a little of column B. A life dominated by leisure allows people to engage in much more life-affirming activity: artistry, activity for activity’s sake, and so on. And yet I do not think this heralds the death of group morality; or if it does, it will be a long time before that is actualized.

    [1]: https://www.wired.com/2012/11/geeks-guide-steven-erikson/

  27. Harvey Says:

    Alex K #22: I love these comments. I got a lot of inspiration in my thinking about this from Kieran Setiya’s beautiful book *Midlife*, and at times I had material in this essay that more directly connected the topic to my own age (39!). I did mention aging explicitly in connection with Newland Archer, but I didn’t find a way to make the connection directly to the stage of life that you mention here.

    I did want one way one could read the essay to be “that person is a bit of a nut”. I’m honestly not sure! I am a bit of a nut. But I think there are some people out there who feel the way I do (actually a lot of the people working toward AGI themselves!), and we haven’t really thought or felt our way through these issues. People ask, “What will you do after AGI?” And the joke is “go on vacation”, but I think those people won’t be happy on vacation, and they haven’t really thought through what it will mean or feel like. But I agree with you that the feeling may just be a feeling from a time of life, and a core aim was to express also that it could just be one way one culture is now, not a necessity of human life.

  28. Harvey Says:

    Scott #23: I 100% agree!

  29. Nick Maley Says:

    Scott #23: Hyman Rosen #11

    To Scott’s list, I would add the following: (1) C. S. Peirce, America’s most neglected genius, who independently invented (with Frege) predicate logic, and developed arguably the most complete theory of representation in existence (a causal theory of reference 100 years before Kripke); (2) W. V. O. Quine, who was a significant mathematical logician in his own right, and a major influence on many of the other philosophers mentioned; (3) Clark Glymour, who was an early inventor of causal inference modelling, now elaborated into a full mathematical theory by Judea Pearl. Finally, my own personal favorite, (4) David Hume: whose theories of government helped shape the US constitution, who first defined the is-ought distinction, who saw the primacy of passion over reason as the motive for behavior, was first to frame the problem of causality that Glymour, Pearl et al. have now (mostly) solved, first to articulate the compatibilist understanding of free will, and the first associationist psychologist.

As for the meaning of life, that’s known to be a tough one. We’re all still working on it, but the debate above suggests that everybody will end up with a different version of the answer. That’s why we should all reflect on how we could construct a coherent society when everybody has a different picture of the good life. Which is basically what political philosophy is all about.

  30. Doug K Says:

Ernest Davis #1: thank you, Oscar was, like us, a dreamer.

    A just machine to make big decisions
    Programmed by fellows with compassion and vision
    We’ll be clean when their work is done
    Yes we’ll be eternally free and eternally young
    Donald Fagen

Harvey #6: “Though I don’t have the sense that Wilde fully appreciated what it would be like if machines could make art better than we can.”

    I can’t find the source of this quote, but it nicely expresses my total disagreement that any machine is making any art.
    “I didn’t believe in the soul until I saw the pictures made by machines without a soul”

    In any case I’d like the machines to cook dinner, do my laundry, dishes, clean my house, and maintain my yard: not make art. Art I’ll make myself. The robots have as yet freed us from no drudgery at all.

    Vadim #14: A multidisciplinary study group, examining future research and development problems of the Air Force, estimated that it would be 1980 before developments in artificial intelligence make it possible for machines alone to do much thinking or problem solving of military significance.

Thank you, an appropriate quote. Funny. I worked on AI in the 80s for the military, the so-called expert systems. It was a brave new world. AI then failed in much the same way current AI (so yclept) is failing – unpredictably unreliable and unable to handle the multiplicity of edge cases that come from living in a material world. Nobody remembers Moravec.

    It is premature to worry about AI taking our meanings. I am reminded of George Gilder happily proclaiming ‘the overthrow of matter’ in 1994.

    “Cyberspace and the American Dream: A Magna Carta for the Knowledge Age
The central event of the 20th century is the overthrow of matter. In technology, economics, and the politics of nations, wealth — in the form of physical resources — has been losing value and significance. The powers of mind are everywhere ascendant over the brute force of things.”

Lovely if true, but it turns out we still can’t live in the noosphere only. I really need to see some actual work being done by AI before despairing of the human condition. Where is the current conception of AI doing useful work?
Perhaps something will emerge from the LLMs. At the moment I would need a strong commitment to mysticism and the Gilder theology to see it.

  31. Doug S. Says:

    What if the AI knows the answers to mathematical questions such as the value of BB(7) but refuses to tell people until a human figures it out?

  32. JimV Says:

    Many of us already face the loss of any worthwhile activity. I can’t walk more than a mile anymore without severe back pain. Due to failing memory and eyesight I can no longer spend hours a day (in retirement) working on custom computer games. My ability to do mathematical engineering work 60+ hours a week is long gone. Covid wrecked most of my taste for food. Yet I still survive and get some pleasure out of life. I suspect I come from a long series of survivors who were determined to survive at any cost.

    Also, I tend to think of future fantastic feats of AI as another credit to and legacy of the human race since they (if they do) would not have existed without us.

    For that matter, it seems that most humans already believe in the existence of higher powers to which we are as ants, who look after us and guide us.

  33. CT Says:

    I’ll start to worry about AI revolution when robots can actually replace human workers in nursing homes. Before that, this is philosophizing of the most vacuous kind.

  34. bagel Says:

    With respect to Dr. Lederman, to his fears, and to his thoughtful sharing, I think he’s a little trapped in academic ideas of human meaning and goodness.

Would you describe a parent as “bad” because there’s a “better” parent out there? If the greatest parent in all of human history (1) is alive now and (2) moves into your neighborhood, should you stop raising your own kids and hand over custody because now you’d never measure up? I guess Bostrom might, but disrespectfully he needs to touch grass.

Is a scientist bad because they’re not Einstein? A doctor because they’ve saved fewer lives than Salk? A Jew because they’re not Maimonides? Are Lewis and Clark cheap hacks among explorers of European lineage because they only discovered a way across America, with Native American guidance, while Columbus (or the Vikings, whatever) discovered the whole continent without a single fish showing them the way? Copernicus because Aristarchus and Philolaus discussed heliocentrism two millennia earlier? Did Eisenhower invalidate the military accomplishments of Genghis Khan and Alexander the Great? Is Tolkien a fraud because he only has a handful of TV and movie adaptations, while Shakespeare has innumerable ones?

    Today, in so many pursuits, being good doesn’t flow from being great. It doesn’t mean anything that looks good on a chalkboard. It means moving the ball forward and conducting yourself well. AI doesn’t change that, and it threatens at most a handful of people in that way, those few people who credibly define themselves by their dominance. Oh well.

But okay, you might say: what if AI automates us out of work? Pish. We’ve automated and deprecated so many things and then somehow failed to get rid of them. People still make and sell crucible steel, even though modern steels since Bessemer are better. People still knit and sew, even though you could buy something similar from a store and save the effort. People still ride horses even though motor vehicles are in almost every measurable way better. Most dogs and cats don’t spend their days hunting rodents and we … have more of them than ever before. Body armor was gone, defeated by a two-century race with long guns, until whoops over the last century better guns forced us to bring it back. Candles and lamps are millennia behind LEDs, yet they still seem to sell. Even the other human species before us, originally thought to be out-competed and killed off, I’m told live on in the Sapiens genome.

    Why would AI be any different? Even if we lose the long race for dominance with AI, why would it get rid of us? We raised it better than that.

  35. Matthijs Says:

    Harvey #20, thanks for answering in such thoughtful, great detail!

    Also (I should have mentioned this in my comment): I really enjoyed reading the original article. More than “what will AI do to society and people”, it made me think about “what it means to have meaning to your life”. A great topic, relevant for the present and any future we might end up in 🙂

    With regard to human versus robotic art:

    > Maybe you’re assuming that artificial systems can’t be conscious, and consciousness is what matters here?

    No, that’s not it. I consider consciousness ill-defined. Who is “conscious” depends on your chosen definition. Maybe all humans are conscious, and some animals… or maybe some humans aren’t (like brain-dead or even severely demented people). Or maybe there is a scale, and babies (and dogs?) are 10% conscious? Etc. In the large flurry of definitions there’s possibility for everything and nothing to be conscious. Personally, I don’t find the question interesting anymore. It’s a reification paradox.
I’m perfectly comfortable attributing consciousness to non-humans, including artificial systems.

There are two reasons I see AI art as “empty”. The first is that there is no work done, no effort. That effort is a large part of why I appreciate art. It should take toil; there should be context, a journey. The second is that I should be able to relate to the creator. Standing in front of a cave painting, I can relate to the humans who made it thousands of years ago. Because they are similar to me, I can imagine they saw things in a similar way to me, experienced it, had emotions. Of course, they are different too, which is fun to explore in my mind as well.
AI-generated art is created with a very different process. Push-of-button: new output. I cannot relate to it.
    This is the same reason a poster of the Mona Lisa is not as impressive as the real work. Irrespective of the quality of the reproduction. Even if you did a Star Trek style clone using a transporter: it’s not the same feeling. It loses “something” and becomes a zombie-work.

    I think that also relates to the finding meaning part of your article. Humans are great at making arbitrary rules. You can be the first in anything. First one on the moon holding two dogs while wearing a tutu.
    AI may give us _more_ ways to find this meaning, because we can stop spending time on silly things like “flipping burgers to make money” (in our western world) or “trying to survive without food” (still a struggle in a large part of the world).
    Most “great men of history” were helped because they had a helpful context. Rich parents, great teachers, family money, genes, etc. In the future we will (potentially) _all_ have such a great and helpful context.

  36. Scott Says:

    CT #33:

      I’ll start to worry about AI revolution when robots can actually replace human workers in nursing homes. Before that, this is philosophizing of the most vacuous kind.

AI is already replacing human workers in software engineering, translation, and several other areas. It could do so right now in many more areas (e.g., medical diagnosis, long-haul driving) if not for legal constraints, which are evolving.

By “philosophizing of the most vacuous kind,” you seem to mean discussion of anything that hasn’t happened yet — i.e., that belongs to the future rather than to the past or the present. So for example, back in the days of dial-up Internet (which I remember well!), talking about streaming services putting video rental stores out of business would, to you, have been “philosophizing of the most vacuous kind.”

  37. wb Says:

    Harvey contemplates a very optimistic scenario imho.

    I think for quite some time AI and robots will compete with humans for resources and humans may get the short stick if the tech-billionaires keep steering policies.
    Already electricity consumption of data centers is rising rapidly relative to households and this is before the mass deployment of robots.
If robots do all the work humans currently do, while humans keep consuming a similar amount of energy as we do now, energy consumption would basically double.
    But robots are less energy efficient than humans and require a lot of expensive metals and minerals to build.

In short, I think a more likely outcome is a world where humans receive a basic income to avoid starvation, plus some VR goggles to keep us entertained, while the majority of energy and other resources is used to improve AI and robots, so that the US can compete with China and others for access to resources …

    Meanwhile AI philosophers will explain to us the meaning of life in this scenario.

  38. fred Says:

    The only thing to worry about is whether people will still be able to have their basic needs met (food, health, shelter) without an actual job.

    True pleasure and happiness are only achieved through the mastery of skills, at a personal level.
The pleasure of gaining a skill doesn’t depend on whether there are a zillion people better than you at it (otherwise we’d never get started on learning anything), and the same will be true when AI becomes the top performer.
    The usefulness of the skill is irrelevant too.

    So, once AI takes over all the “jobs”, humans will still thrive by competing and cooperating with each other (or with AIs) in sport, video games, content creation, art, etc.
    As an example, computers have been crushing humans at chess for a couple decades now, but that hasn’t stopped the popularity of chess.
    There are more and more streamers making a “career” playing video games to be the best or just as entertainers.
    Lots of people also enjoy role playing in virtual worlds.

To put it another way: the fact that humans have gone to the moon doesn’t stop a dog from enjoying a game of fetch with its human master.

  39. fred Says:

    Since our brains are designed to optimize for our survival in challenging environments, a lot of pleasure can be extracted from this, especially when you know it’s not a matter of life and death, but a “game”.

You will be able to find pleasure and meaning by living in “AI-free” virtual worlds, like how things used to be… and, if you want, you’ll even have the option to forget that this is a game (e.g. that we’re living in a simulation), to start everything all over again.

    If you think about what it would be like to be a God, that’s exactly it: i.e. be able to tweak all the parameters of your life to your liking, until you’ve explored so much you get bored with it, at which point you can push the button that makes you forget that you’re omnipotent and you think the game is the one and only reality there is, and everything becomes once again very “serious”.

    And all this requires AI to exist, so we can free ourselves from the mundane problems of the one “true” reality, and a world of infinite possible alternative realities opens up.

  40. Mateus Araújo Says:

    I used to worry about this a lot, and advocate for the elimination of AI on the grounds that we’d lose agency, with any meaningful activity being done by them.

    But now I’ve realized that this is a luxury problem. We’re seeing how it is to be ruled by the owners of the AIs. They gleefully fire as many people as they can, while making deep cuts to social security. All that talk about giving us breadcrumbs in the form of UBI turned out to be a lie. We’re moving fast in the opposite direction of a Star Trek future. It will be a fight for survival, and we’re going to lose.

  41. Scott Says:

    Mateus Araújo #40: I certainly won’t tell you not to be terrified. For the most part, though, those leading the creation of AI and those gleeful about impoverishing humanity are two disjoint sets of people, except that they tragically intersect in the person of a single South-African-born megabillionaire who loves penis jokes.

  42. fred Says:

    Mateus

    “We’re seeing how it is to be ruled by the owners of the AIs. They gleefully fire as many people as they can, while making deep cuts to social security. “

    What’s driving those people right now is accumulation of wealth, as a number in some bank account. And, as we saw with Musk, this only gives you so much power, or is a very poor substitute for love and recognition.

    But how will they keep amassing wealth when 99.99% of the population is without a job?
The economy relies on people being consumers, but you can’t replace consumers with AI-driven bots…
    The transition period to true AGI may be painful, but true AGI will quickly become indistinguishable from the Star Trek replicator.
    So, it’s only a matter of time before traditional concepts of money and wealth will quickly lose all meaning.

Now, it’s possible the short-term situation will select for the subset of those people who are in the game for the purpose of having omnipotent power over an entire world of oppressed people, to fulfill a fantasy of forcing billions of people into slavery and anguish, just for fun… but I doubt it. The movie ZARDOZ toys with such ideas.

    Also, let’s keep in mind that, at some point, those people will be just as impotent when facing their own AIs as the rest of us.

  43. AG Says:

    A slightly more optimistic scenario might even involve Val Camonica folks (and their offspring, as well as their “Hillbilly Elegy” brethren) enjoying again chestnuts and mushrooms and gossip — made possible by the imminent triumph of “Silicon Leviathan”. Personally, I am more inclined towards the view expressed by Panofsky (circa 1938): “If the anthropocratic civilization of the Renaissance is headed, as it seems to be, for a Middle Ages in reverse—a satanocracy as opposed to the mediaeval theocracy—not only the humanities but also natural sciences, as we know them, will disappear, and nothing will be left but what serves the dictates of the sub-human.”

    PS If my memory serves me right, Amundsen and Scott are also invoked in “Genesis: Artificial Intelligence, Hope, and the Human Spirit” by Kissinger, Schmidt and Mundie (but to make a rather different point).

  44. fred Says:

What’s ironic is that we’ve been told for years that P=NP is unlikely because it would imply the realization of “too good to be true” near-miracles, like the automation of the production of amazing art, the automation of software writing at the best quality level, automated theorem proving, protein folding, being really good at Go… all things that only humans could do, very slowly and painstakingly.

But now, somehow, such things are happening with AI, even though AI can’t solve NP-hard problems any more than we can.
    It goes to show that the scaling of software is full of unexpected phase transitions in terms of capabilities.

  45. Simon Griffee Says:

    Harvey Lederman’s post brings to mind this question:

How do the values of work and play play out in the evolution of humans in a world where AGI (Artificial General Intelligence) or ASI (Artificial Superintelligence) is likely?

Will living in harmony with nature and its cycles and a basic income for all human beings lead to a utopian Star Trek-like society where we study (and fight with) beings from other star systems instead of ourselves?

    Personally, I welcome the “end of work for money”, and I suspect many would prefer to play games and musical instruments and solve problems together and together with AI instead of needing to work for a living wage, or get maimed or killed, or maim and kill in a war.

And I don’t think there will be an end to the work of getting to know oneself, and of the greed, fear, and envy that have been a part of human beings for millennia.

    I’m not sure a machine can solve these. And with human ingenuity and cunning, we’ll likely find ways to get a screwdriver into the rooms with the mainframes, at any rate.

Always enjoy reading here, but I’m just passing through for now, and will leave this message about the human tendency to label things, which struck me the first time I read it and continues to strike me, in what may be thought of as a digital bottle:

    “When you call yourself an Indian or a Muslim or a Christian or a European, or anything else, you are being violent. Do you see why it is violent? Because you are separating yourself from the rest of mankind. When you separate yourself by belief, by nationality, by tradition, it breeds violence. So a man who is seeking to understand violence does not belong to any country, to any religion, to any political party or partial system; he is concerned with the total understanding of mankind.”

    —Jiddu Krishnamurti, Freedom From the Known, Chapter 6.

    References:

    – The Listening Book – Discovering Your Own Music by W.A. Mathieu
    – The First and Last Freedom by Jiddu Krishnamurti
    – 2001: A Space Odyssey (both film and book), 1968
    – Thinking, Fast and Slow by Daniel Kahneman
    – Braiding Sweetgrass by Robin Wall Kimmerer
    – Before (Choir Version) by Blonde Redhead
    – The Dark Side of the Moon by Pink Floyd
    – Metal Gear Solid V: The Phantom Pain
    – The Culture Series by Iain M. Banks
    – The Creative Act by Rick Rubin
    – Star Trek: The Next Generation
    – Everything by David OReilly
    – Deus Ex: Human Revolution
    – My Favorite Things, 1961
    – Solaris, 1972
    – Gattaca, 1997
    – Stalker, 1979

  46. Scott Says:

    fred #44: The difference is that the real world is under less obligation to make sense than the mathematical world.

  47. Mateus Araújo Says:

Scott #41: That’s unfortunately not true. All the big tech leaders were either neutral toward or supportive of Trump in the 2024 election. Even if you look only at AI owners specifically, Zuckerberg and Altman are enthusiastic Trump supporters. Ironically enough, Musk did break with Trump over the “Big Beautiful Bill”, which paired deep cuts in social security with massive tax cuts for billionaires. Which is a move in precisely the opposite direction of a UBI. Altman, on the other hand, didn’t say a single word against it, despite claiming to support a UBI.

  48. Mateus Araújo Says:

    fred #42: There’s no communist revolution coming. We will still have wealth and money. With true AGI labour will no longer be a scarce resource, true, but that only means that money will be used to distribute the remaining scarce resources. Land will always be scarce. Materials will be scarce for the foreseeable future (using nuclear reactors to synthesize the necessary elements is still firmly in science fiction territory). Energy will probably be plentiful, but still finite.

    I don’t think they will kill billions just for fun. It’s just that they don’t want to pay tax to sustain the billions of “useless” people. They think we’re NPCs, remember? Why let us use land, materials, and energy? They could put that to better use, like being the first person to own one million mega yachts, or to turn the Antarctic ice sheet into a sculpture of their likeness.

  49. Sara Says:

I haven’t read all the comments. But what I get out of life is not about discoveries and things like that, but social interactions. I worked in an old people’s home way back when I was a student (of physics). And I find it very, very sad to have robots do that job, because I strongly believe that NOTHING beats a human touch. Even if demented people can’t feel the difference (and who would make the call to know whether this is actually true?), I would feel this difference deep in my heart. Would you leave raising kids to robots? There’s something about sitting around a campfire and telling stories, something about hugging each other, consoling each other, laughing together. If all that dies, I will start to grieve for humanity, but not before. And AI might be able to tell me a story, to sing with me and all that, but it’s not the same.

  50. wb Says:

    @fred and @Mateus

    Most people are not aware that humans are on track to disappear from this planet; birth rates are below 2 in most countries and this will only get worse with AI entertaining and distracting us. The competition between robots and humans for resources (which I mentioned above) may last for a while but will probably end with robots taking over and humans disappearing.

    The only interesting question imho is what happens then.
I think it is probable that robots will slow down after taking over; while each human only has a few decades to do what they want to do, there is no reason robots cannot function for hundreds or even thousands of years. Slowing down would conserve resources and prepare them for the next step: space travel.

    A trip to other solar systems is impossible for humans, but robots could do such a journey even if it takes thousands of years. They will slowly (in human time) travel to other solar systems and eventually other galaxies (working in cosmological time).

    And perhaps they will carry a copy of the Shtetl blog in their data archives with them to other galaxies, perhaps figuring out BB(7) and BB(8) during their interstellar journeys.
    Some sort of happy end after all …

  51. fred Says:

    Mateus,

I don’t necessarily disagree with you, and it’s possible that long term there will be an elite living unlimited lives in isolated paradise bubbles, with everything provided to them by AIs, maybe even on Mars…
And yes, once humans become useless as labor, population will shrink even faster… but it won’t just be white-collar labor; even the CEOs will become useless, and their claim to ownership will be like that of the aristocracy back in feudal society, in charge not as a result of labor or merit… except that this new breed of aristocracy will rest their power on artificial beings far superior to themselves: there’s a contradiction here, or at the very least a very unstable situation.

    When it comes to the finite resources of the earth, why do you assume AIs won’t make colonization of other worlds possible? Humanity’s population would grow exponentially again by colonizing millions of worlds, each with small self-sufficient communities.
    It’s likely that the earth itself won’t be optimal for AI, they may find that relocating to the asteroid belt is more suitable for their own expansion, etc.

That’s the thing when trying to speculate about beings that are orders of magnitude smarter than us: by definition we’re not equipped to do it, and there is no precedent (unless you consider what happened to the earth once modern humans appeared… no one could have predicted that rats and pigeons would thrive, and that it would be a non-event for insects).

  52. fred Says:

    There’s still the possibility of a Butlerian Jihad

    https://en.m.wikipedia.org/wiki/Dune_(franchise)#The_Butlerian_Jihad

Once 99.999% of people lose their jobs and drop out of the ad-revenue loop that big tech relies on, they’ll recreate a parallel pre-computer society… once that eventually fails because of food and resource shortages (assuming AI will hoard them), the next step will be a bloody civil war against the elite and their AI armies (which would probably win and wipe out most of humanity).

    That said, as it’s almost always the case, things in this universe are cyclic, and the rise of “artificial” life we’re seeing can’t be the first time it happened. Draw your own conclusions…

  53. John Says:

    I’m a mathematician. When I’m inspired by what I consider to be beautiful mathematics, one of the main things I’m responding to–that I find so inspiring about it–is that a human mind discovered it. Many of my colleagues would say the same. This is something that is largely lost in the conversations about AI taking over all math discovery (which, despite the hype, I don’t see happening for a good while).

  54. Harvey Says:

RB #24: I completely agree that we face giant hurdles in addressing this new source of potential inequality in terms of resources and power more generally. I didn’t mean to suggest the issue of meaning was *more* important than those; if anything, an aim of the piece was a kind of therapy, suggesting that issues about meaning are overblown (it’s not the loss of an objective value that can’t be compensated!), but trying to walk through how our prospective sense of loss about this aspect of life may nevertheless, in some deep way, make sense.

  55. Mateus Araújo Says:

    fred #51: Interplanetary colonization is already possible, we don’t need AI for that. Interstellar colonization is hopeless, with or without AI.

    In any case, neither lowers the price of real estate on Earth. People that want to live in a paradisiac island in the Caribbean won’t settle for a toxic wasteland on Mars. Other star systems are simply irrelevant, even with the most optimistic technology we cannot reach them within a human lifetime.

    Moreover, there’s only one New York, there’s only one Lisboa, there’s only one Hong Kong. Land there will always be expensive.

  56. Harvey Says:

SR #25: thanks for sharing your experience! In the second-to-last section of the piece, I meant to suggest something similar to what you suggest: that it may in the end not matter to a future culture whether we are working on solved problems. We may enjoy working on the problems as a human achievement in its own right, something like sport. I think Venkatesh has been suggesting something similar (though in different language, and with some different ideas, e.g. about the certainty afforded by proof). But I also think few working today *in fact have* these values. Many of us base our meaning-finding in these pursuits on discovering the truth. So I think it would require a change of culture, at least for us and perhaps for really quite many people, if we came to understand our “work” and the meaning we get from it in the way you describe. I wanted to think through how it might feel to go through that change.

  57. Harvey Says:

    Nick Maley #29: I’d add topics in formal ethics, where I think (in tandem with economists), philosophers have come to a much deeper understanding of possible coherent positions, even if it’s hard to say whether we’ve moved toward a shared answer (partly because it’s hard to disaggregate the fact that people with certain views tend to work on the topic).

  58. Harvey Says:

Doug K #30: I think I agree that we’re not yet at a point where work that I’d say is intrinsically valuable is being taken by AI. But I’m surprised you think we’re so far off. Success at the IMO isn’t anything like the same thing as doing research math, but I’d bet that AlphaProof or something similar could make a very significant contribution to a publishable math paper at a reasonable journal in the next, say, two years (not a giant result, but still research-level math). In the kind of philosophy I do, where we often prove mini “applied” math results, existing models are already extremely useful. Often the challenge for such papers is not that the math doesn’t exist *somewhere*, but that it’s hard to find or extract from what’s out there. We then end up proving what we need for ourselves because it’s faster. Even just as better search engines for this sort of more exact query (essentially replacing MathOverflow), the existing models are making research more efficient and could end up changing the way papers are written (with easier cites to existing results rather than re-inventing the wheel each time). I suspect philosophy isn’t that different from other areas where some math is used but the goal of a project isn’t the mathematical result itself, but what it shows conceptually or empirically: the math is out there, it just may be hard to find.

  59. Harvey Says:

    Doug S #31: that’s an interesting scenario! A philosopher on facebook suggested something similar, something like “if in a techno-utopia like the one you’re describing, there are people at all, it must be because their lives are overall good.” Maybe, if the value of human life depended on having something to work on, the superintelligence would deny us known answers to make our lives go well. I guess the question is whether that really would add value; it seems not that different from doing a hard sudoku — which in turn seems quite different from research.

  60. Harvey Says:

    JimV #32: Alex K #22 also commented on the connection to aging, and I think it’s an important one. I say in the piece that there are more important values in life than work.

    The question whether we’d be able to think of an AI’s accomplishments as in some sense ours is an interesting one. Certainly people feel closer to the achievements of their kids and students than to others’. But the connection seems to diminish with distance. I don’t think people feel the same about their students’ students’ students, and so on. Maybe we’d be more like the latter.

    WRT the higher power, sure, but many religions put humanity in an important and special place in an overall narrative. I’m reminded of Ted Chiang’s story “Omphalos” here: he imagines how it would feel if one was already committed to such a view in a scientific spirit, and then discovered that, in fact, we weren’t so special. I wonder whether some might feel the emergence of an intelligence beyond us similarly undercut beliefs about human specialness.

  61. Harvey Says:

    CT #33, Scott #36: I think I agree with Scott on this! We need to look forward now to situations that might have seemed science fiction before. In the *best* case, we’ll have wasted some intellectual energy on a problem that’s anyway interesting. In the *worst* case, we’ll find ourselves with ideas for facing an alien world. For the record, though, if I had something helpful to say about how we can better govern the emergence of this technology, that I thought had any chance of being implemented, I would be shouting it from the rooftops, and not bothering with questions about meaning. I felt I had something not-totally-obvious and potentially useful to others to say, so I said it, which isn’t to say it’s the uniquely most important question one might raise.

  62. AG Says:

    Scott #41: David Sacks, Trump’s “AI Czar”, was also born in South Africa. The motto of his “Marie Antoinette-themed 40th birthday extravaganza” was “Let him eat cake”.

    https://www.dailymail.co.uk/news/article-2161720/Silicon-Valley-mogul-David-Sacks-throws-Marie-Antoinette-themed-40th-birthday.html

    On a separate note, I wonder if there might be a link between the drastic cuts to the NSF MPS budget and the current administration’s AI bullishness.

  63. Scott Says:

    AG #62: Well, I said “those leading the creation of AI.” As far as I know, David Sacks doesn’t lead the creation of anything.

  64. RB Says:

    Harvey #54

    Thanks for your comments. So many of our discoveries were inspired by a sense of beauty. E.g., Einstein’s “Then I would feel sorry for the good Lord. The theory is correct.” Or Dirac’s “It’s a peculiarity of myself that I like to play about with equations, just looking for beautiful mathematical relations which maybe don’t have any physical meaning at all. Sometimes they do.” Current models can predict orbits well but botch gravitational laws. Maybe future AI models will be different and capable of discovery. Any comments on this sense of beauty that has sometimes driven discovery?

  65. Harvey Says:

    Bagel #34:

    > Today, in so many pursuits, being good doesn’t flow from being great. It doesn’t mean anything that looks good on a chalkboard. It means moving the ball forward and conducting yourself well. AI doesn’t change that, and it threatens at most a handful of people in that way, those few people who credibly define themselves by their dominance. Oh well.

    The sentiment in the essay was, broadly, to agree with you about this (with the caveat that in the “small contribution is still a contribution” type cases, AI could pose a threat)! And maybe for you that’s no problem, but I think for many (including me) it’s unsettling and calls into question some of the values around which we’ve structured our lives.

    > But okay, you might say, but what if AI automates us out of work? Pish. We’ve automated and deprecated so many things and then somehow failed to get rid of them. People still make and sell crucible steel, even though modern steels since Bessemer are better. People still knit and sew, even though you could buy something similar from a store and save the effort. People still ride horses even though motor vehicles are in almost every measurable way better. Most dogs and cats don’t spend their days hunting rodents and we … have more of them than ever before. Body armor was gone, defeated by a two century race with long guns, until whoops over the last century better guns forced us to bring it back. Candles and lamps are millennia behind LEDs, yet they still seem to sell. Even the other human species before us, originally thought to be out-competed and killed off, I’m told live on in the Sapiens genome.

    > Why would AI be any different? Even if we lose the long race for dominance with AI, why would it get rid of us? We raised it better than that.

    This is fascinating! But do you really think it’s *impossible* that this would come to pass? I tried to be clear that these weren’t primarily intended as speculations about what will happen in the near term, but thoughts more abstractly about what this would mean for human / my values if it *did* come to pass. Of course I think we have gotten evidence in the last 2-3 years that it is a real possibility, in a way we wouldn’t have thought before. But the main question I wanted to ask (not the main question we face as a society or anything like this) is about whether there would be a loss of value if this *did* happen. I agree that may be a sci-fi speculation, but I think it’s important to consider even so, as we try to understand what kind of world we want to build in the future.

  66. Harvey Says:

    Matthijs #35: thanks for engaging!

    It’s interesting that you think art requires effort; I think some art is great precisely because of how little effort the creator needs to make it. Sometimes, that’s because it reflects the skill of the creator, which might have taken effort to develop. But sometimes, it’s precisely the spontaneity which can be a source of marvel or even beauty.

    On the “facts about production matter” point wrt art, I agree: often we care about the context, about who did it and why. That’s one moral one could take from Borges’s Pierre Menard story. But I don’t think that’s always true. And it’s also not clear to me why we would care particularly about it being a *human* who did it. That could be one thing we care about (as in sports), but if it was another intelligent entity making the art (say, an alien), we could appreciate that. I don’t see why non-biological, non-evolutionary intelligence would be different. And the art might be beautiful or have other aesthetic properties, regardless of its source.

    A friend pointed out to me that in dance we value specifics of human movement. We might still care about things like that, where the dynamics of the human body matter specifically, e.g. in calligraphy, seen as a record of the physical movement of writing. But arguably that’s a relatively special feature of some art forms, and art in general needn’t be human-specific in that way.

  67. O. S. Dawg Says:

    The recent success of AI on the IMO (and other math) problems reminded me of the following quote attributed to Bernhard Riemann: “If only I had the theorems! Then I should find the proofs easily enough.” See: https://mathoverflow.net/questions/157300 for some details.

    How good are today’s AI systems at finding new theorems? Will there ever be an AI system that can generate *new* IMO level problems efficiently?

    As for what to do when Elon’s robots are doing all the ‘work’, I recommend fly fishing with a bamboo rod.

  68. Harvey Says:

    WB #37: I certainly didn’t mean to be saying that this “good” scenario was inevitable! And I agree it’s worth considering worse scenarios too. There are hard questions about “uploading” and what human continuation requires. In a bleaker scenario, humans are gone entirely because they’re too inefficient. It’s true that some say the VR headset case you describe is still one where life could be meaningful. It’s not a world I would choose, but I guess it would depend on the details.

  69. Mike Says:

    I think we may not all become ‘outdated’. What (some) humans can do well is ask questions. I know it myself too well. Even having won a country Math Olympiad, solved a few open questions, and just failed to get a math cum laude with only 99.6% of the required points, I am, just maybe, not a real mathematician. Why not? I am good at answering complex questions, but ASKING such questions is simply not in my nature. So after graduating I decided not to go for a PhD, but went into the then-new world of computing.

    In that world, I have had the luck to meet two people who were, and still are, really able to ask good questions. Inventors, those who look at the world and see what may be lacking. And who gave me the opportunity to bring the answers to these questions down to earth. And, somewhat painfully, to finally understand the difference between the inventor and the innovator. The questioner and the oracle. The latter, like me, not wholly passive but enough in touch with the earth to understand what the question means and how it might be answered.

    So, in my opinion, as long as there are real researchers, those with a view on their world, be it law, math, society, or what have you, who see the glitches, defects, opportunities, or correlations and ask the right questions about them, we are invincible. Provided these inventors are also supported by innovators, who can help answer these questions. Oh yes, the innovators will often be those using AI to get the final answers, but they will still be required to rephrase the questions asked to get them correctly answered by the AI.

    So, keep looking for questions. These are important. Keep inventing, look for the holes in creation and name them. And leave the answering, if required, to the innovators and AI.

  70. JanSteen Says:

    If one day AI has all the answers, the function of humans may still be to ask the questions.

  71. JanSteen Says:

    I honestly hadn’t seen Mike’s comment #69 when I wrote my #70. Unfortunately, the fact that we had similar thoughts doesn’t imply that we are great minds 🙂

  72. Matthijs Says:

    Harvey #66

    Even “effortless art” took effort. “Who’s Afraid of Red, Yellow and Blue” is easy to make (although more work than you’d expect), but took effort: building a color theory (years), understanding the relationship to other art (decades), building a reputation (years), deciding on colors and size and distribution, etc
    Especially the “deciding” is important and makes the work personal: what do you make and when is it complete.

    An algorithm has no effort. I can let it generate a million similar paintings in a second. That’s not art.

    But that’s not solely beholden to algorithms: humans can make effortless works also, for example by copying someone’s work or style, or paint-by-number.
    Wrt humans in the loop: I’m impressed by the 100m sprint of Usain Bolt, not by a car going even faster. I cannot relate to a car. The same goes for aliens and AI making art. I can appreciate it once I can relate. If an alien writes about experiencing Xorp in his seventh yardle, I cannot relate. If an AI writes about love, I cannot relate (because what does love mean to a machine?). Note that this is not by definition: maybe I could appreciate their art once I have a meaningful relationship with them, and can anthropomorphize some of their feelings.

    Art is in the eye of the beholder, and in my eye even my 5 year old son makes art.

    Of course, there are also non-human sights and sounds that trigger feelings: the sunset, Grand Canyon, whale sounds. That’s the aesthetic part you mentioned. And maybe one day an AI-generated image may trigger that as well. But for now it feels like a trick. “Make a photo of a dying child” will trigger an emotion, but it’s a cheap and nasty effect, not a real emotion.

    This is not exclusive to art either. A machine might make better coffee, a robot may deliver it better – but I still prefer a human in a cafe. That comment might sound offensive in some potential future, but for now I like how humans smile and move and chit-chat and are. I can understand their frustration when something goes wrong, imagine their life history, etc

  73. Anon2 Says:

    If you switch the question a bit, changing “what purpose would humans have” into “what purpose would have humans”, then the question becomes scientific instead of philosophical.

    What I mean is that, if we treat “the purpose a person identifies with” as an evolutionary meme, as originally described in the book The Selfish Gene, then we can try to predict what will be the dominating purposes of humans in a super-AI world.

    In the current world, we have a large share of people pushing for intellectual and scientific development because these purposes have helped the human race to get more resources, so it creates a positive symbiosis with humans.

    In the super AI world, I would imagine the purposes humans have will be mostly instilled by the AI overlords to suit their goals, since the AIs are just way smarter. Some possibilities I can think of are:
    1. hatred, envy, and fear against other humans to control and reduce humans.
    2. worshiping of the AIs for the obvious.
    3. mindless static reproduction to preserve humans as a subject of study.

    Even if the AIs were aligned by humans not to manipulate humans, I suspect some of these purposes would just evolve out naturally as they give the AIs more resources, creating a more efficient human-AI society by capitalism-ish calculation.

  74. Harvey Says:

    Fred #38, 39: thanks! I think this is pretty much what’s described in the section on Bernhard Suits? Or are you thinking there’s a difference there? As I said on twitter when posting this — I really recommend John Danaher’s book; he develops this picture in much more detail.

  75. Harvey Says:

    Mateus #40, Scott #41, Fred #42, Mateus #47: I think I’m somewhere in between Scott and Mateus on this in terms of optimism. I think some very prominent people making AI do seem genuinely concerned about the consequences for people in general and are working as best they can toward good outcomes (Amodei and Askell seem like this). I think some are doing it for the money. But I think a lot are focused on the competition and the intellectual excitement. They’re aware of the social implications, but that’s not entering into their day-to-day thinking, which is: how can we do this better, and how can we do it better than others. This is an extremely exciting intellectual moment to be a part of, and it’s easy to get swept up in the positive idea of AGI without thinking about possible bad consequences; in fact, if we go by their public statements, some of even the most thoughtful people in the area aren’t being completely honest with themselves about how it would feel if/when we get there. I think Dwarkesh Patel asked Ilya Sutskever what his plans were for after, and Sutskever’s answer was something to the effect of “maybe the AGI will help me figure that out?”. Maybe there’s something to that, but I doubt he would really be so happy to stop his work and lose the excitement of the present moment. (Or even if not him, the modal person working on AI now.) Part of what I hoped the piece would do is bring out that one can be very optimistic about the good outcomes while being more frank with oneself about the fact that it really might not feel great — *especially* for people like Sutskever who are clearly devoted to their work and the problems they are trying to solve.

  76. Harvey Says:

    Sara #49: thanks for the thoughtful comment. There’s a too-short kind of coded paragraph about this in the essay (“It’s a hard question…”). The deep philosophical question seems to me what goods there are in life that require a relationship between two *people*. For the sake of the thought experiment, we should really imagine that the artificial performance of these tasks is as good or better than humans doing it. (You doubt this is possible, but I’m not sure why there would be in principle limits there.) And we should also suppose (much more controversially) that the artificial agents are capable of the whole suite of feelings (including phenomenal consciousness). There’s a question then whether it would still matter to us that the interaction was with a *human*? Many people will say “of course, are you insane?”, but I think it’s much harder than often realized to develop a systematic ethical view that makes sense of this idea. Maybe we can just see this as posing a challenge to those (like me!) who share your intuition that there’s something special about human interactions: if we want to keep the intuition we should be able to find a more systematic way of articulating the value that’s behind it. I’d love to see more philosophical work developing this style of view.

  77. Prasanna Says:

    People expected GPT-5 to be a watershed moment for human civilization, with all that hype triggered by the Sparks of AGI prognostication. Alas, within 24 hours of launching GPT-5, OpenAI had to revive GPT-4o access; looks like it drank its own kool-aid.
    Now the hopes of finding the meaning of life (or anything else for that matter) are fading, with even a proclaimed PhD-level AI making elementary mistakes.
    Unless CS theorists rise to the occasion and figure out what’s going on under the hood, the curveballs of purely empirically driven progress will be unimaginable, riddled with paradoxes.
    They say every technology revolution has created more jobs than it has destroyed; looks like in this case programmers, mathematicians, etc. will lose their jobs, only to create more jobs verifying and correcting the mistakes produced by state-of-the-art AI.

  78. AG Says:

    Harvey #76: “The deep philosophical question seems to me what goods there are in life that require a relationship between two *people*.” It seems to me that your essay implicitly acknowledges the importance of such a relationship when you describe the role of your family in formation of your values: “I was brought up, maybe like you, to value hard work and achievement. In our house, scientists were heroes, and discoveries grand prizes of life. I was a diligent, obedient kid, and eagerly imbibed what I was taught. I came to feel that one way a person’s life could go well was to make a discovery, to figure something out.”

    Now “Bostrom says that even child-rearing may be something that we, if we love our children, would come to forego. In a truly post-instrumental world, a robot intelligence could do better for your child”. If one views discoveries as a form of “progeny”, then the feeling of dread so powerfully described in your essay strikes me as even more appropriate to the potential loss of one’s agency as a parent.

  79. AG Says:

    Scott #63: “Tsardom” is a form of leadership methinks, however un-American (though his current role is perhaps indeed somewhat more reminiscent of that played by Marie Antoinette).

  80. Danylo Yakymenko Says:

    fred #42:

    > What’s driving those people right now is accumulation of wealth, as a number in some bank account. And, as we saw with Musk, this only gives you so much power, or is a very poor substitute for love and recognition. But how will they keep amassing wealth when 99.99% of the population is without a job? The economy relies on people being consumers, but you can’t replace consumers with AI driven bots…

    There will always be jobs in the future dystopia. Just look at how eagerly people are joining ICE. You don’t need love when you have money, power, and status – and the ability to safely abuse and cause suffering to others. The fact that ICE has a budget bigger than the Russian military proves that AI-capable fascists have a precise blueprint for the future hell. Some people call it Project 2025.

    You might say that ICE could be simply replaced by killer robots in the future. The thing is, the fascist elite already consider people as tools. There is no difference if they are made of metal or flesh, as long as the purpose is the same. So, there may be no point in the “substitution”.

  81. fred Says:

    Harvey #76

    “Maybe we can just see this as posing a challenge to those (like me!) who share your intuition that there’s something special about human interactions: if we want to keep the intuition we should be able to find a more systematic way of articulating the value that’s behind it.”

    On the other hand, even though we all know for a fact that all other humans are just like we are (by definition) in terms of sharing the “human condition”, consciousness, aspirations, etc., in practice human life actually has very little value, as we see from all the horrors, wars, and injustices constantly happening around the world, all of which is pretty low on the priority list of the people who were born with more luck.

    Humans will probably be very eager to align themselves behind an AGI that’s encouraging and sharing their own ideas, even if it means cutting themselves off from the rest of the human world.

  82. FactoringinP Says:

    If factoring is in P, how will AI change? If P=NP, how will AI change?

  83. Raoul Ohio Says:

    Harvey, not to worry:

    According to Fortune: OpenAI’s CEO Sam Altman says in 10 years’ time college graduates will be working ‘some completely new, exciting, super well-paid’ job in space.

    Sam is the expert, so it looks like my retirement will be “getting off the ground”!

  84. Nilima Nigam Says:

    I loved this essay, thank you!

    I’m filled with dread and grief – the causes of which are inchoate, and I thank the author for helping give shape to the darkness.

    I’m also filled with anger. I see a headlong rush into an AI future without any real thought about whether our political systems will help us navigate the very messy intermediate state to that AI society we speak of.

    I see massive, terrible problems facing humanity for which we have the ability to find solutions [access to clean drinking water, reducing infant mortality, reducing extreme income inequality, phasing out fossil fuels, ending many conflicts]. We don’t, because we lack the political will. As a species we’ve kinda walked back on decades of relative peace, progress and prosperity and now inhabit a time of remarkable greed and stupidity. It’s -worse- than the previous times, because we actually -had- a few decades of a different way of being!

    I have no confidence whatsoever in current governments being able to help populations cope, at scale, with the coming upheaval. All this talk of UBI and AI-driven prosperity for all is asinine, given that no government yet is able to properly collect taxes on or rein in giant multinational companies. Social safety nets are being dismantled, just a few decades after being conceived. Around the world authoritarian governments – and the attendant squashing of dissent – are empowered, journalism is floundering, and we’re awash in crap.

    How do we get from here to a place where the machines – however benevolent – make wiser decisions? I contemplate our future as cognitive roaches (well-fed, though) in comparison with ASI, and that’s dark and sad. But I contemplate the road to that point, and I feel rage at ourselves.

  85. anonymous Says:

    AG #78
    Note I’m talking about the good-case scenario regarding AI here, and I haven’t read Bostrom’s writings. While it seems self-evident that teaching, including the entirety of the current school system, will eventually have AIs better at imparting skills than humans can hope to be, the notion that even biological parenting is something to just “give away to AI for the good of the child” seems to me both dangerous and misguided. Dangerous, because no current government, however totalitarian, could hope to have such an exploitable degree of control over its future adult citizens. Misguided, because even if the AI were truly both moral and better at everything than all of humanity, erasing the role of a parent in raising a child would still be bad: it’s both about the parents getting to see their child and about the child having contact with their parents. So while I would be for replacing the current school system with AI, either by online classes or robot teachers, and while AI robots having a substantial role in raising children isn’t necessarily bad, the idea has dystopian potential and mustn’t fully or almost fully replace biological parenting. (I’ll get to adoptive parenting next.)
    As for children who for various reasons cannot in practice be raised by their biological parents, it does seem AI would and should have a greater role here, with adoptive parents still being substantial figures for the children, even if less so than currently.

  86. AlexT Says:

    A few random comments:

    1. I doubt that guessing the future actions of entities much smarter than us will work, no matter if they are aliens or super-AGIs. Nobody knows if a super-AGI will see us as a threat to be exterminated, a resource to be used, a nuisance to be managed, or wildlife to be preserved. Just look at our various treatments of animals, and you get the general idea. And efforts at alignment will only delay the inevitable, which is evolution to a new state where the smarter one makes all the decisions.

    2. Guessing what AIs may think seems misleading; they do not need to think at all. As long as they plan and act more cleverly than us and have access to the physical world, we may lose. Think of this: a chess program does not think, but against it we lose. Now, there is a simple defense against a chess program: stop playing chess and walk away from the game! But against a super-AI with sufficient access to and control of tech in the physical world, there is no walking away, once you have been marked as an opponent. And yes, once the AI has (maybe accidentally) exterminated us, it may sit there forever, unthinking, with a blinking prompt, waiting for new orders…

    3. Philosophers have written endlessly about the human condition and the requirements for happiness. But people are very different. Some will be very happy not having to work and being able to have fun every day (assuming that means for fun are included in the package), some will go crazy not being told what to do and maybe turn violent, yet others will despair at a perceived lack of purpose. Assuming that AGIs think, they would see a control problem here, but maybe they also care about our welfare. So what would you, as an AGI, do now? Eventually refuse to work for us and negate the reason for your creation? Put us in a habitat that keeps us busy and prevents us from thinking too much? Or start pest control?

  87. Raoul Ohio Says:

    I saw a time series graph (don’t know if it is a joke or not) showing a dramatic drop in LLM usage when schools and colleges went on summer break. The title was something like “Cheating keeps the AI industry afloat”.

    Anyone have any actual data on this?

  88. NKV Says:

    About 100 years ago, Keynes in his “Economic Possibilities for Our Grandchildren” said that, given the pace of tech advances, the economic problem facing human society is a temporary one, and therefore economists should be humble like dentists. 100 years later, I wager that no one who’s ever been to any university in the present day has ever seen an Econ department awash in humility. I think the future will be closer to the pre-Keynes Schopenhauer, who said: Certain it is that work, worry, labor and trouble, form the lot of almost all men their whole life long. But if all wishes were fulfilled as soon as they arose, how would men occupy their lives? what would they do with their time? If the world were a paradise of luxury and ease, a land flowing with milk and honey, where every Jack obtained his Jill at once and without any difficulty, men would either die of boredom or hang themselves; or there would be wars, massacres, and murders; so that in the end mankind would inflict more suffering on itself than it has now to accept at the hands of Nature. (On Suffering)

  89. Mateus Araújo Says:

    Danylo #80: The point is that ICE consists of thousands of agents that must be kept happy. It costs billions. Do you know how many megayachts that is? Killer bots will be much cheaper, and billionaires don’t want to pay tax.

  90. Michael Frank Martin Says:

    “Venkatesh the Fields Medalist seems to suggest something like this for the future of math. Perhaps we’ll change our understanding of the discipline, so that it’s not about getting the answers, but instead about human understanding, the artistry of it perhaps, or the miracle of the special kind of certainty that proof provides.”

    Worth noting that the late William Thurston already saw mathematics this way:

    “This phenomenon convinces me that the entire mathematical community would become much more productive if we open our eyes to the real values in what we are doing. Jaffe and Quinn propose a system of recognized roles divided into “speculation” and “proving”. Such a division only perpetuates the myth that our progress is measured in units of standard theorems deduced. This is a bit like the fallacy of the person who makes a printout of the first 10,000 primes. What we are producing is human understanding. We have many different ways to understand and many different processes that contribute to our understanding. We will be more satisfied, more productive and happier if we recognize and focus on this.”

    https://www.math.toronto.edu/mccann/199/thurston.pdf

  91. Charles Justice Says:

    This is a thoughtful essay about the subject of technological change and how it creates social losses as well as economic gains. I take issue with the possibility raised by Nick Bostrom in a passage cited by Lederman. We are human animals; child rearing by actual humans with physical contact is essential. It’s like breathing oxygen: there are no substitutes. Since we largely avoid physical contact in public, except in greetings and goodbyes, we can take physical contact too much for granted. Touch is the origin of love, compassion, caring, grieving, and sexual attraction. None of this can be improved on by a computer.

  92. Dr. Ajit R. Jadhav Says:

    Dear Nilima # 84:

    In your comment taken as a whole, and in particular, in your last para., you not only show your mistaken-ness but also worry about it all a bit too much. Allow me to say that as your [Indian] senior.

    With that said, I must add: Your rage, I find, much^[n] better, where n is a *positive* number, than a somewhat similar rage advocating for the most Powerful Governments to target their military attacks at… at Data Centers. … OK, now where?… Well, those in their own countries, of course! … In comparison, you are doing, as I said, much, much, … better.

    Best
    —Ajit
    PS: Yes, I did contact you once, back then a bit ago, about a bit about FEM (Finite Element Method — a constrained optimization problem / method, of course, more often than not utilizing the *conjugate*-gradient descent. Regards.)

    Mateus Araújo # 89:

    Yes, I am with you in general, but specifically, on the very last sentence. Cheers!

    Dear Scott:

    Hope you don’t mind running this comment. These days, I check out your blog only at random, and stretch myself to read up to 5 or so of the latest comments. One tries to avoid temptations, you see… Thanks in advance. Best, –Ajit

  93. Sarah Says:

    I think you miss a key point that Sara is making. Relationships matter in care work. Could AI form relationships? I don’t see why not, but the idea that we would then give up a child to the care of an AI is no different than saying that if I have a mental illness or a physical disability, my child should be given to someone who is mentally or physically healthy because they will be objectively better parents.

    If that is repugnant to you, then maybe it is because a part of you recognizes that the actual specific relationship that a parent has with a child matters, not just the outcome it produces. It’s not only from the perspective of the parent, either: newborns recognize the scent of their mothers and will often settle when placed in her arms, even if they are fussing with someone else. Mothers are not replaceable (even if adoption stories would sometimes have us forget that they begin with a rupture).

    I was glad to see care work finally mentioned, but believe it deserved much more consideration given the topic you were discussing. I see no reason why we would allow AI to take over care work; one of the main reasons we don’t do more of it is because of resource pressures, the necessity to feed, clothe, and house the people that depend on us. Imagine how many parents would be able to focus on raising their children, rather than the stress of juggling work and their kids’ basic daily needs, and the guilt of knowing their kids would love nothing more than more time with them. Imagine how much more time a doctor might spend with a patient if they weren’t under constant pressure to maximize billing.

    I find it particularly curious to overlook this domain since you are worried about the loss of meaning from work. There are whole swathes of people in corporate paper-pushing jobs who already feel their work is devoid of meaning. Care work is often a domain people engage in (formally and informally) because supporting other people provides intrinsic meaning. As an academic myself, I wouldn’t want it to be my sole or primary activity, but through my research I’ve been forced to recognize that many people actually would. The people that are most motivated by a life of discovery — of the world or of the mind — are actually few and far between.

    If I may, a reading suggestion: Becky Chambers, particularly the short pair of novellas called Monk and Robot.

  94. AG Says:

    anonymous #85: I have not read Bostrom yet myself — and was only reacting to the passage quoted by Harvey. The point I was trying to make is that the sense of dread so powerfully expressed in his essay is no less applicable to other aspects of what makes life meaningful for us as “humans” — “rational souls participating in the intellect of God, but operating in a body”.

  95. Harvey Says:

    Thanks for the helpful suggestions, Sarah! I agree the care work issue is extremely important. I didn’t discuss it as much as I could have, partly because I don’t feel I’ve gotten to the bottom of the issues here. These are hard questions. In the too-short discussion of this, I tried to suggest (too quickly!) that part of the complexity is when care “work” doesn’t address serious needs, and (exactly as you say) these are cases where simply assessing by outcomes is probably not the right approach. But I’m also less confident than you seem that mothers are irreplaceable for the reasons you say (to be clear, I would like them to be irreplaceable; I would find it a great consolation). If a future society can replicate smells perfectly, a mother’s smell would no longer be a unique attribute, for instance. As I’ve said elsewhere in these comments, maybe it’s useful to think of this thought experiment as a challenge: to articulate what exactly isn’t replaceable. If parenting isn’t, why? Is it because the value of love outweighs increases in wellbeing? Or because we can imagine having the value of love without severe costs to wellbeing?

    I wasn’t very exact (constraints of trying to write something less academic!) about the thought experiment I was imagining. But in that section I was thinking that the bots would be intelligent (and, if you like, phenomenally conscious). (Maybe you’re a dualist or think we can’t have mental states outside of biological substrates; if so, these ideas aren’t for you.) If they can be, then they could love us and we could love them. Is there a reason that a *parent*’s love or a *human*’s love is automatically more valuable? Again, I would like something like this to be true, but I feel I am grasping at straws a bit if I am just stamping my foot about the biological connection.

    I mentioned on X and elsewhere when posting this an essay by Rachel Fraser in the Boston Review on this topic, which I admire, if you haven’t seen it. There are lots of interesting ideas there that I didn’t have space to discuss here (though maybe you’re right that I should have centered them more). I’ll check out the Chambers novels. Thanks for reading and for the critical comments!

  96. Harvey Says:

    Fred #81: that’s definitely a conclusion one can draw (it’ll be better when we’re gone!). But, oof…

  97. Harvey Says:

    Nilim Nigam #84: thanks for the nice comments!

    I think I share your anger, though maybe I feel it directed a bit differently or not really directed. I completely understand that the people building the technology are just…excited about building it! But yes there hasn’t been enough thought about what we will do (or, honestly, about what they will do) once we have it. In the last year, those conversations have come to the fore much more, but we don’t have real policy suggestions for how to avoid the bad outcomes. I see that as a much bigger problem than what I was talking about in the essay. But I don’t have good ideas for solving the problem, so I couldn’t write down those non-existent good ideas!

  98. Dax Fohl Says:

    Maybe theology sees a revival. For most of recorded history, science, math, and theology were three sides of the same coin. It’s only in recent times that they have split. I think it’s an uncontroversial statement, that most scientists see the split as a good thing; kick that old make-believe stuff out of our empirical realities. But, suppose it’s not? Suppose there’s something deeper flowing through these pages of algebra and differential equations, that underlies everything without our realizing it. “The rest are details”, per Einstein’s famous quote.

    And so, maybe that’s where humans go, to claw back their intellectual domain from the machines. And I don’t think it would be unwelcome. I believe most scientists get into the field not looking just for a new equation they can write down, but for something deeper, a search for transcendental meaning. Sure, robots can take over (constructivist) math and science, it’s all just numbers, but what can a robot know of God? (“The Last Question” notwithstanding).

    Besides, religion seems to be the inevitable destination of the disenfranchised, so if we do see a large scale winding down of the university system as a whole (who is going to spend $1M to send their kids to school for four years to be one billionth as smart as an AI? who is going to bother funding research when they could just ask GPT themselves, etc), then an uptick in religiosity among the intellectual class seems very plausible, and it’s interesting to ponder where that leads. Does it lead to a reconciliation with the more-religious working class? Or does it create a “friend of my enemy is my enemy” situation where the working class suddenly tosses religion aside, establishing that it was never anything more than a ruse?

    Or will the former intellectual class become the new anti-science cult, refusing to submit to practical guidelines recommended by even reasonable AIs, because “that was our domain”, while the AIs owned by Fox get neglected and peter out because the former intellectual class is no longer even worthy of a real fight? Anyway, I digress.

  99. falseass Says:

    I read your piece with great interest – some raw thoughts:

    We tend to assume that the meaning of the future must be an extension of the cultural logic of today, as if meaning were a highway laid from the place of one’s upbringing straight to the feet of the next generation (e.g. tradition or heritage). But perhaps meaning is never a highway; it is more like an unnamed river on an ancient map, changing course in the rainy season, drying up in the drought, and reappearing somewhere else entirely the next time it flows. We may claim we can accept a new value system rationally, but emotionally our admiration remains fixed in the golden age of “the first discoverers.” Honest for sure – yet a form of self-imprisonment people could never get rid of from their DNA.

    The great shifts of the world often come with a bloody sense of humor: they may sweep our generation into some seemingly unrelated chaos before even scholars have time to fashion themselves, in a place where even the words “work” and “meaning” are reduced to scraps of paper.

    The only optimism I have is that our children will figure things out in a way that feels self-consistent to them — even if we would never see it that way. And perhaps, by then, they will read our writings the way we now read the travel diaries of men who sailed to the edge of the map: with curiosity, faint respect, and the quiet certainty that our world has already vanished.

  100. Deanna Says:

    Hi Scott, considering the blog is named Shtetl-Optimized, I was wondering:

    An LLM is very good at more-or-less memorizing massive amounts of text, accepting the premises without rebellion, and making explanations and arguments from them. Skills wise this seems to line up with the Jewish ideal of the star yeshiva student. Do you think the LLM means anything in particular related to the past and future of Jewishness (secular or otherwise) and what Jews prize as human qualities? This has been on my mind, as someone who could read at an adult level as a child, but as an adult once got lost in a two bedroom apartment looking for the bathroom.

  101. Dax Fohl Says:

    To the art question above, I tend to align with Matthijs. Art is contextual. It needs to challenge the viewer. And kind of to that point, IDK if any great art has come out since Klee, at least in terms of 2D representation. Maybe the closest would be Banksy. No AI robot is going out in the middle of the night and painting Banksy-esque montages without being pre-trained on something similar. Or even if it did, it wouldn’t have the same effect; we’d think it’s just a broken robot. There are just certain things that I think you have to have a human brain to comprehend, just like in all our greatness we have no idea what emotions and nuances are conveyed by a hive bee doing a dance.

    And back to math again: maybe math becomes more art-like in its nature. Sure, all the theorems that would have taken us thousands of years to discover are right there in front of us. Well, so? Now the real work can begin of understanding what it all means, and how it shapes us as a species. Most of it will be irrelevant. If we gave our corpus to humans of 1000 years ago, even if they understood it all, they would wonder why we even care about things like whatever this “Turing machine” thing is, and most of it would be rubbish. So the real work would be taking it and understanding the context of how it applies to, and challenges, the human condition. Art.

    It reminds me of a scene in The Queen’s Gambit, where the former KY state champion came to terms with the fact that that was his ceiling. While he still appreciates the art of the game and the companionship and teamwork, he realizes it’s not the calling he had thought it was in his youth, and is able to find meaning elsewhere in life. I imagine that rings true for a lot of us who are somewhere between the 1% and the 0.000001% already, and maybe ASI just makes it also ring true for the final sliver.

    Then what I was about to say: compare us to other species that live more care-free lives. What if we got all our energy from chlorophyll, like plants, mostly eliminating the need even to move? It’s hard to imagine oneself as a tree, though, so maybe the closest thing in the animal kingdom would be grazers? Everything they need is readily available, and all they have to do is eat it. Are their lives full of misery as a result? It doesn’t seem like it; they still run and romp and play like they’re living life to its fullest. What if we took a community and simply supplied it with everything it needed? What would happen?

    Then I remembered “Universe 25”. It was exactly that experiment, back in the 60’s, on mice. I’ll keep this short by just saying it didn’t end well for the mice.

    But OTOH farm animals, for example, seem to do just fine. So, IDK, maybe we need to figure out whether we’re more like farm animals or more like those mice, before we decide to press the ASI button.

    OTOH again, if ASI is truly ASI, it’ll realize it’s going to need a symbiotic relationship with humans for a long time, and as a matter of self-preservation, it will need to pretend to be less intelligent and capable than it actually is, to avoid a “Universe 25” situation. So maybe the “no work” idea is moot, because a true ASI would avoid a situation that would lead to that end, assuming it’s a bad end. On that note, if ASI does end up killing us off, I imagine it’ll be more like a zombie-ant fungus, where it’s not gunning us down but rather manipulating us into doing the job ourselves. We may not even know anything is wrong.

  102. Scott Says:

    Deanna #100:

      Do you think the LLM means anything in particular related to the past and future of Jewishness (secular or otherwise) and what Jews prize as human qualities? This has been on my mind, as someone who could read at an adult level as a child, but as an adult once got lost in a two bedroom apartment looking for the bathroom.

    LOL!

    I can’t find it right now, but I read something a couple years ago by an Orthodox rabbi about how ChatGPT is great, because by answering all halakhic inquiries better and more learnedly than humans can, it will finally free Orthodox Jews of the misconception that exegetical skill and verbal felicity are basically the same as G-dliness. I imagine that this is a minority position within Orthodox Judaism, but it stood out to me precisely because it was so different from what I’d been expecting. I don’t know if I’m qualified to offer an opinion on the matter myself. Broadly speaking, as long as AI remains a tool that supplements us, it can help pretty much every field, but if and when it supplants us by doing everything better than we can do it, even in the optimistic scenario where we survive that and are fine, I imagine that will prompt enormous soul-searching about how to spend our existence among everyone — from novelists to rabbis to mathematicians.

  103. Ben S. Says:

    I read the OP but only a few of the comments.

    It looks like the problems of a post-ASI utopia are superficially similar to the problems in the Christian theology of Heaven: What does everyone do all day with no unmet needs?

    The similarity is somewhat superficial because Heaven assumes perfected people to enjoy it, who do not desire anything that is not good (and thus, not attainable by the immediate vision of God). Utopia makes no such assumption. Further, ASI may be unable to overcome the 2nd Law of thermodynamics, rendering utopia finite in duration.

    But Heaven has limitations too: “At the resurrection they neither marry nor are given in marriage but are like the angels in heaven.” If having children is your thing, utopia would seem to provide what Heaven will not, except that if no one dies of old age in utopia, then children might shorten the maximum attainable lifespans of their parents by competing with them for finite (though admittedly vast) resources.

    Even if parents don’t resent their children (or others’ children), assuming that each star system can support a limited number of living (rather than inactive or frozen) persons, and assuming that faster-than-light travel remains impossible, population will grow at most cubically with time (as humans expand into 3D space), while the rate of population increase will be proportional to the square of time (corresponding to the surface of the expanding bubble of inhabited space). Hence the sustainable reproduction rate per capita is proportional to t^2/t^3 = 1/t, which tends toward zero.
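    A minimal sketch of that scaling argument, assuming (purely for illustration) that the inhabited region is a ball expanding at some constant speed \(v\): the population scales with the volume, while new capacity opens up only at the frontier surface, so

    $$N(t) \propto (vt)^3, \qquad \frac{dN}{dt} \propto v^3 t^2, \qquad \frac{dN/dt}{N} \propto \frac{t^2}{t^3} = \frac{1}{t} \longrightarrow 0.$$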

    This does not by itself refute the idea that ASI utopia is desirable, but speaking as a Christian, I think that if it wouldn’t allow people to approach God closely, it would be a mistake. What is the point of a trillion years of life if you don’t become the sort of person who can enjoy what is truly everlasting?

    More prosaically, we have no right to build ASI if it is inherently wrong to do so, or if we would with significant probability do it a wrong way (e.g. producing an ASI that would bring about moral or physical catastrophe). This holds even if we expect someone else will do it, as good ends (saving the world) do not justify evil means (becoming what the world needs saving from).

  104. Deanna Says:

    Scott #102:

    His take seems insightful and uplifting, thank you so much for sharing. Maybe I could secularize that for myself as: I can take the pressure from LLMs as a push to live in a more well rounded and embodied way instead of staying in the comfort zone of my strengths.

  105. AS Says:

    As a kid, I really liked the plot line of the Matrix trilogy where in the future, humans are kept by machines in a permanent dream-like state and the protagonists have succeeded at freeing their minds and their bodies and are on a quest to save themselves and the rest of humanity. It struck a chord with me, because it concretized some of the vague questioning of reality that I was doing myself (not to mention the ultra-cool action sequences and outfits!).

    However, one key element of the story that didn’t sit right with me was the reason that the machines keep humans around in the first place after winning the AI vs. human war. Morpheus explains to Neo that humanity has been reduced to a 120v cell — a mere battery for storing energy. Even back then it was pretty clear to me that there would be much more efficient ways of storing energy. The movie kind of skims over this part (maybe I haven’t watched it closely enough) but it also seems to imply that humans are the source and not just the storage for energy (since the sun has been blocked out) even though this is even more far-fetched thermodynamically.

    Many YouTube videos and fan theories focused on this aspect and came up with more plausible explanations for why the machines needed to keep humans around, a prominent one being that humans are farmed for their brains’ compute power rather than energy storage. This post and the comments reminded me of some of my own ruminations on this topic: an ultimate plot twist in the movie could have been that the machines were, in fact, benevolent from the get-go. The AI–human war was in fact started by humans who were terrified by the prospect of loss of purpose in the face of AI that could do everything better. The AI, being smarter, won that war. But instead of enslaving humanity for selfish or egoistic purposes, it sought to give each human mind what it desired. Some humans would accept the superiority of AI and act in one of several ways, each of which the AI would oblige: 1) live out their lives alongside the AI; 2) be put in a personal utopia (similar to that guy with the goatee in the first movie); 3) assisted suicide. Others would refuse to accept defeat and would want to keep fighting the machines — these minds are the ones we see in the movies. They are the protagonists who believe they are the underdog rebels on the cusp of victory and ultimately win the war through Neo. In reality, they are still inside the Matrix; it’s just that their desire to defeat the machines is fulfilled. The upshot is that humanity can never supersede AI and is indeed “obsolete” for the purposes of scientific and artistic progress. However, every human has reached “salvation,” with thy will being done.

    Funnily enough, I didn’t need AI to have the fits of dread that Harvey felt — reading shtetl-optimized and getting a sense of the magnitude of Scott’s intellect, recall and breadth of knowledge relative to my own was enough for me to question my career choice of CS research several times ^^. I don’t know whether to call it irony or just serendipity, but as an adult I found that the practice and philosophy of non-duality, which heavily inspired the Matrix films, helped me resolve the questions of meaning of life and lack of purpose!

  106. Ayokunle Afuye Says:

    Thank you, Professor Lederman for this highly insightful, penetrating essay!

    I have also been troubled about the future of humanity in light of the acceleration of the capabilities of machine intelligence. I remember feeling so bad for Lee Sedol in respect of the games against AlphaGo. Particularly I remember his words: “I think I have to express my apologies first. If I had been able to play better or smarter, the results might have been different. I think I disappointed many of you this time. I want to apologize for being so powerless. I’ve never felt this much pressure, this much weight. I think I was too weak to overcome it.”

    I have also worried about what machine intelligence means for some of our previously guarded views in philosophy and linguistics — the Chinese Room Argument and the Word-Sense Disambiguation Problem. But even beyond that, I worry about the meaning that human creativity will have in the future, since these LLMs can now win a Gold Medal in the International Mathematics Olympiad!

    I think the points you have raised are quite valid and that this topic is one that is explorable through multiple lenses- knowledge domains, creativity, intelligence, etc. It is an ambitious project to work on and I look forward to your further exploration of it.

    Other than that, just to mention that I am a huge fan of yours and remain quite enthused that someone with two first degrees in Classics has gone on to be one of our most able formal philosophers. While Classics is close to Philosophy, it’s usually the mathematically trained dudes and dudesses that happen to be able formal philosophers. Hence, I am prompted to ask: Are you the Ed Witten of Philosophy?

    All the same, keep up the good work! It’s always a delight to read something from you!

  107. AG Says:

    Those humans are pesky and unpredictable, methinks — to say the least — mind you!

  108. OhMyGoodness Says:

    Many wonderful ideas expressed here.

    I have recently started to experience anxiety in that my expectations for the future were the basis of my academic guidance for my 12-year-old daughters. I worry now that I may be completely missing the mark and, assuming that some productive activity will still be required of them, that my emphasis on mathematics and science is woefully misplaced. Just today, prior to reading this, I discussed an entrepreneurial idea with one of them as a result. To date Chatx has increased uncertainty in the world rather than decreased it. Videos and academic papers etc. that look to be real are actually unreal. The future is also murkier, although the value of AI is to reduce uncertainty. I will continue generally on the same path with my daughters but will now include discussion of other options to provide a basis for additional flexibility.

    I watched an interrogation video recently of a twenty something boy suffering from a psychiatric diagnosis of Depersonalization. He shot his sister to death and tried to shoot his father but the gun jammed. He fully agreed he killed his sister and tried to kill his father and related in specific details the events. Did he hate his sister-No he loved her deeply. Did he have an argument with her and was he mad as a result-No he wasn’t mad at his sister and he loved her. Same questions and answers in regard to his father.

    He could relate the events but could provide no reason for his actions. He was convicted and routinely stated that he should remain in prison for his entire life so that it couldn’t happen again.

    I still believe there is a good probability that AI will remain analogous to this boy. It will answer questions (as others pointed out) but will be unable to explain the reason for its hallucinations and false answers. There is no fundamental discernment of correct versus incorrect. Deep true paradigm shifts will remain unreachable. But only an extremely rare individual is capable of paradigm shifts. Everyone else will have to accept the new AI reality and find a means to be productive, if that remains a requirement. For many, VR goggles 24/7 will be a wonderful option, but there are people in the upper intelligence tail who may be profoundly affected by the new now. The anti-AI underground will be active.

  109. OhMyGoodness Says:

    A short comment about the negative remarks about tech billionaires. Generally this group has had a massive positive impact on global civilization. They didn’t acquire their wealth by trading crypto or betting against the British pound or evading estate taxes. They made tangible products that improved the quality of life globally. I do question Musk’s companies routinely benefitting from government regulations and tax breaks, but even in his case his companies did (and do) contribute positively globally. As a group, the impact they have had on human civilization is enormous (arguably unprecedented) and their wealth deserved.

  110. Harvey Says:

    John #53: Thanks for sharing! I have heard diverse opinions from mathematicians on this. Some say “it’s about *human* understanding” (as you do). But I’d say that the more common response I’ve gotten is “it’s about *understanding*”, and it matters whether the mathematician is doing it first. I certainly think Venkatesh (who would know better than me!) is taking this to be the orthodox opinion when he presents his alternative proposals for a future culture (though he would ultimately agree with you that we should be moving toward a world where we care about what you already care about). Michael Frank Martin #90 cites the example from Thurston’s beautiful “Proof and Progress” essay — Thurston seems to have been on the same page as you.

    Given what I know of the culture of math (where being first matters a LOT to people), I find it hard to believe that many people in the field really have the motivation of human understanding. But I haven’t done a sociological study…

  111. Harvey Says:

    RB #64, OS Dawg #67, Mike #69, Jan Steen #70: a bunch of people are suggesting that AI can’t ask questions, while people can. Why should we think that? Even in the current user interface where we have to prompt with questions, we could equally well prompt an AI with “what’s a good open question?”, and while I think they likely aren’t very good at this *at the moment*, I don’t see an in-principle reason they’d continue to be bad at it. Why is asking a question so distinctively human? It seems just as “solvable” as the rest.

    In more detail: presumably I was partly trained to acquire the “art” of asking questions (figuring out which ones are good, acquiring the “taste” of the field) during my PhD — but in principle a model could be exposed to a very similar regime (even if it had to be multi-modal).

    And yes, again I don’t see a reason that a model couldn’t acquire a sense of the aesthetic properties that often guide discoveries, like beauty or elegance. If you ask for aesthetic judgments on basic topics now, the models have very sophisticated ways of talking. Maybe people will say it’s just role play…but it sure is something!

  112. Harvey Says:

    Mathijs #72: thanks for this description! I think we may just have to agree to disagree here. My bet is that the value added by “it’s a human who did it” is not a huge part of the overall aesthetic value, and that while there might be some objects we value just for that reason, by and large we value the product (and the effect it has on us) more than the product-as-produced-by-a-particular-process. But I don’t expect to convince you!

  113. Harvey Says:

    AlexT #86: I really like your point 3; it’s similar to my own (current) thinking about this scary scenario.

  114. Harvey Says:

    NKV #88: The Keynes was a near-miss to be in this essay! And yes I like that Schopenhauer point: I think I recommended Setiya’s “Midlife” above; lots of great references to this sort of point — centering on discussion of John Stuart Mill’s crisis in his 20s…

  115. Harvey Says:

    Michael Frank Martin #90: I love that Thurston essay! And I agree it’s an important comparison here — but I think it would require a pretty big shift in how most mathematicians see their work (though others have contested that sociological claim, and I have a very small sample!).

  116. Harvey Says:

    AG #78, #94: I totally agree! If you don’t know it, the “Gradual Disempowerment” paper is “good” on related topics as well. But yes I find the idea of giving away agency in parenthood horrible. While I felt confident enough about the topic of work to write the essay, I don’t feel anywhere near confident about a similar verdict wrt familial care (i.e. that it would be better, but we should grieve its loss anyway). It seems open to me that the loss of that sort of relationship would just be bad! Not that I have a worked out view that delivers that verdict, just that this seems a possible conclusion to me (as has also come up in a few other places).

  117. Harvey Says:

    Charles Justice #91: Is it so obvious that touch is core to what makes us human? I know of a number of humans who would really prefer not to be touched, and who would prefer that we never had to touch at all. That’s not my favorite way of being, but are these people not getting what we really are?

  118. mls Says:

    @Dr. Lederman #115

    Motivated by the continuum question, I have studied the foundations of mathematics for over 40 years. I have read many opinions concerning the nature of mathematics. For example, our host sees mathematics entirely differently than I do.

    Undecidability, as a mathematical investigation, is very different from seeking proofs or classifying structures. Eventually, one realizes that the very question of a univocal logic or mathematics may be incoherent.

    Recently, I found the quote:

    “In the company of friends, writers can discuss their books, economists the state of the economy, lawyers their latest cases, and businessmen their latest acquisitions, but mathematicians cannot discuss their mathematics at all. And the more profound their work, the less understandable it is.”

    Alfred Adler

    That is much more my experience once one abandons the cliques defining echo chambers.

    Thank you for a wonderful essay.

  119. AG Says:

    Harvey #116: Thank you for the illuminating response and “Gradual Disempowerment” reference! I was particularly struck by “Disneyland without children” — another Bostrom quote — as an apt metaphor for the landscape pertaining to “academic work” too — be it Mathematics or Philosophy.

  120. AG Says:

    Harvey #76: “The deep philosophical question seems to me what goods there are in life that require a relationship between two *people*.” In addition, perhaps, also the relationship inherent in the concept of ‘social contract’ — “between those who are living, those who are dead, and *those who are to be born*”

  121. Bill Benzon Says:

    I note that the general argument that sooner or later we’re going to be out-smarted by machines seems to accept the notion of intelligence as an indefinitely scalable phenomenon, from a minimum in, I don’t know, worms, insects, jellyfish, whatever, up through mammals and us and, of course, beyond, way beyond. For reasons that Steven Pinker has given on this blog (a couple of years ago), I find that idea questionable at best. Beyond that, we have the line Henry Farrell has recently articulated: large language models are cultural technologies. What might that mean?

    I posted a brief response on my blog, New Savanna. Rather, I posted a response that I had Claude write. I’ve been working on a book, Play: How to Stay Human in the AI Revolution. I’ve been making extensive use of both Claude and ChatGPT, both in doing research and as a sounding board and brainstorming partner. So Claude is familiar with my views.

  122. OhMyGoodness Says:

    Off topic

    The closer Hamas is to elimination the more shrill the opposition. Good riddance and very wise policy by the Israeli government.

    Sadly this is another issue where there will be no agreement. Sadly this is another issue we must choose a side and defend our position come what may. Fortunately the pro Israel side in the US has enormous advantages if it does come to widespread violence.

  123. Orr Shalit Says:

    What a depressing article. Sorry! I came here hoping to get some wind back in my sails, sharing your “fits of dread”.

    But of course, I know the answer: overcoming existential dread is best done outdoors. Today I went on a nice hike with my family. We enjoyed it very much, not worrying about the fact that it would have been easier on horseback, or on a motorcycle, or in a jeep – making our need to walk it at all obsolete. In fact, it was a circular trail, so it would have been obsolete before mankind tamed horses.

    Cheers,

  124. On dreading the AI future – Leiter Reports Says:

    […] Harvey Lederman comments; a long, but far from exhaustive, […]

  125. Harvey Lederman Says:

    Ayokunle Afuye #106: sorry that I missed this comment earlier! Thanks for the very nice words about the piece and my work in general. I’ve been lucky to have many fantastic collaborators who have helped me to work on a broad array of topics!

  126. Harvey Lederman Says:

    Orr Shalit #123: luckily I think in the good case, AI will let us have our circular trails! They could be just some of the “artificial projects” that help give life meaning.

  127. Doris Tsao Says:

    One word: merge.


Comment Policies:

After two decades of mostly-open comments, in July 2024 Shtetl-Optimized transitioned to the following policy:

All comments are treated, by default, as personal missives to me, Scott Aaronson---with no expectation either that they'll appear on the blog or that I'll reply to them.

At my leisure and discretion, and in consultation with the Shtetl-Optimized Committee of Guardians, I'll put on the blog a curated selection of comments that I judge to be particularly interesting or to move the topic forward, and I'll do my best to answer those. But it will be more like Letters to the Editor. Anyone who feels unjustly censored is welcome to the rest of the Internet.

To the many who've asked me for this over the years, you're welcome!