The Problem of Human Specialness in the Age of AI
Update (Feb. 29): A YouTube video of this talk is now available, plus a comment section filled (as usual) with complaints about everything from my speech and mannerisms to my failure to address the commenter’s pet topic.
Another Update (March 8): YouTube video of a shorter (18-minute) version of this talk, which I delivered at TEDxPaloAlto, is now available as well!
Here, as promised in my last post, is a written version of the talk I delivered a couple weeks ago at MindFest in Florida, entitled “The Problem of Human Specialness in the Age of AI.” The talk is designed as one-stop shopping, summarizing many different AI-related thoughts I’ve had over the past couple years (and earlier).
1. INTRO
Thanks so much for inviting me! I’m not an expert in AI, let alone mind or consciousness. Then again, who is?
For the past year and a half, I’ve been moonlighting at OpenAI, thinking about what theoretical computer science can do for AI safety. I wanted to share some thoughts, partly inspired by my work at OpenAI but partly just things I’ve been wondering about for 20 years. These thoughts are not directly about “how do we prevent super-AIs from killing all humans and converting the galaxy into paperclip factories?”, nor are they about “how do we stop current AIs from generating misinformation and being biased?,” as much attention as both of those questions deserve (and are now getting). In addition to “how do we stop AGI from going disastrously wrong?,” I find myself asking “what if it goes right? What if it just continues helping us with various mental tasks, but improves to where it can do just about any task as well as we can do it, or better? Is there anything special about humans in the resulting world? What are we still for?”
2. LARGE LANGUAGE MODELS
I don’t need to belabor for this audience what’s been happening lately in AI. It’s arguably the most consequential thing that’s happened in civilization in the past few years, even if that fact was temporarily masked by various ephemera … y’know, wars, an insurrection, a global pandemic … whatever, what about AI?
I assume you’ve all spent time with ChatGPT, or with Bard or Claude or other Large Language Models, as well as with image models like DALL-E and Midjourney. For all their current limitations—and we can discuss the limitations—in some ways these are the thing that was envisioned by generations of science fiction writers and philosophers. You can talk to them, and they give you a comprehending answer. Ask them to draw something and they draw it.
I think that, as late as 2019, very few of us expected this to exist by now. I certainly didn’t expect it to. Back in 2014, when there was a huge fuss about some silly ELIZA-like chatbot called “Eugene Goostman” that was falsely claimed to pass the Turing Test, I asked around: why hasn’t anyone tried to build a much better chatbot, by (let’s say) training a neural network on all the text on the Internet? But of course I didn’t do that, nor did I know what would happen when it was done.
The surprise, with LLMs, is not merely that they exist, but the way they were created. Back in 1999, you would’ve been laughed out of the room if you’d said that all the ideas needed to build an AI that converses with you in English already existed, and that they’re basically just neural nets, backpropagation, and gradient descent. (With one small exception, a particular architecture for neural nets called the transformer, but that probably just saves you a few years of scaling anyway.) Ilya Sutskever, cofounder of OpenAI (who you might’ve seen something about in the news…), likes to say that beyond those simple ideas, you only needed three ingredients:
(1) a massive investment of computing power,
(2) a massive investment of training data, and
(3) faith that your investments would pay off!
Crucially, and even before you do any reinforcement learning, GPT-4 clearly seems “smarter” than GPT-3, which seems “smarter” than GPT-2 … even as the biggest ways they differ are just the scale of compute and the scale of training data! Like,
- GPT-2 struggled with grade school math.
- GPT-3.5 can do most grade school math but it struggles with undergrad material.
- GPT-4, right now, can probably pass most undergraduate math and science classes at top universities (I mean, the ones without labs or whatever!), and possibly the humanities classes too (those might even be easier for GPT-4 than the science classes, but I’m much less confident about it). But it still struggles with, for example, the International Math Olympiad. How insane, that this is now where we have to place the bar!
Obvious question: how far will this sequence continue? There are certainly at least a few more orders of magnitude of compute before energy costs become prohibitive, and a few more orders of magnitude of training data before we run out of public Internet. Beyond that, it’s likely that continuing algorithmic advances will simulate the effect of more orders of magnitude of compute and data than however many we actually get.
So, where does this lead?

(Note: ChatGPT agreed to cooperate with me to help me generate the above image. But it then quickly added that it was just kidding, and the Riemann Hypothesis is still open.)
3. AI SAFETY
Of course, I have many friends who are terrified (some say they’re more than 90% confident and few of them say less than 10%) that not long after that, we’ll get this…

But this isn’t the only possibility smart people take seriously.
Another possibility is that the LLM progress fizzles before too long, just like previous bursts of AI enthusiasm were followed by AI winters. Note that, even in the ultra-conservative scenario, LLMs will probably still be transformative for the economy and everyday life, maybe as transformative as the Internet. But they’ll just seem like better and better GPT-4’s, without ever seeming qualitatively different from GPT-4, and without anyone ever turning them into stable autonomous agents and letting them loose in the real world to pursue goals the way we do.
A third possibility is that AI will continue progressing through our lifetimes as quickly as we’ve seen it progress over the past 5 years, but even as that suggests that it’ll surpass you and me, surpass John von Neumann, become to us as we are to chimpanzees … we’ll still never need to worry about it treating us the way we’ve treated chimpanzees. Either because we’re projecting and that’s just totally not a thing that AIs trained on the current paradigm would tend to do, or because we’ll have figured out by then how to prevent AIs from doing such things. Instead, AI in this century will “merely” change human life by maybe as much as it changed over the last 20,000 years, in ways that might be incredibly good, or incredibly bad, or both depending on who you ask.
If you’ve lost track, here’s a decision tree of the various possibilities that my friend (and now OpenAI alignment colleague) Boaz Barak and I came up with.

4. JUSTAISM AND GOALPOST-MOVING
Now, as far as I can tell, the empirical questions of whether AI will achieve and surpass human performance at all tasks, take over civilization from us, threaten human existence, etc. are logically distinct from the philosophical question of whether AIs will ever “truly think,” or whether they’ll only ever “appear” to think. You could answer “yes” to all the empirical questions and “no” to the philosophical question, or vice versa. But to my lifelong chagrin, people constantly munge the two questions together!
A major way they do so is with what we could call the religion of Justaism.
- GPT is justa next-token predictor.
- It’s justa function approximator.
- It’s justa gigantic autocomplete.
- It’s justa stochastic parrot.
- And, it “follows,” the idea of AI taking over from humanity is justa science-fiction fantasy, or maybe a cynical attempt to distract people from AI’s near-term harms.
As someone once expressed this religion on my blog: GPT doesn’t interpret sentences, it only seems-to-interpret them. It doesn’t learn, it only seems-to-learn. It doesn’t judge moral questions, it only seems-to-judge. I replied: that’s great, and it won’t change civilization, it’ll only seem-to-change it!
A closely related tendency is goalpost-moving. You know, for decades chess was the pinnacle of human strategic insight and specialness, and that lasted until Deep Blue, right after which, well of course AI can cream Garry Kasparov at chess, everyone always realized it would, that’s not surprising, but Go is an infinitely richer, deeper game, and that lasted until AlphaGo/AlphaZero, right after which, of course AI can cream Lee Sedol at Go, totally expected, but wake me up when it wins Gold in the International Math Olympiad. I bet $100 against my friend Ernie Davis that the IMO milestone will happen by 2026. But, like, suppose I’m wrong and it’s 2030 instead … great, what should the next goalpost be?

Indeed, we might as well formulate a thesis, which despite the inclusion of several weasel phrases I’m going to call falsifiable:
Given any game or contest with suitably objective rules, which wasn’t specifically constructed to differentiate humans from machines, and on which an AI can be given suitably many examples of play, it’s only a matter of years before not merely any AI, but AI on the current paradigm (!), matches or beats the best human performance.
Crucially, this Aaronson Thesis (or is it someone else’s?) doesn’t necessarily say that AI will eventually match everything humans do … only our performance on “objective contests,” which might not exhaust what we care about.
Incidentally, the Aaronson Thesis would seem to be in clear conflict with Roger Penrose’s views, which we heard about from Stuart Hameroff’s talk yesterday. The trouble is, Penrose’s task is “just see that the axioms of set theory are consistent” … and I don’t know how to gauge performance on that task, any more than I know how to gauge performance on the task, “actually taste the taste of a fresh strawberry rather than merely describing it.” The AI can always say that it does these things!
5. THE TURING TEST
This brings me to the original and greatest human vs. machine game, one that was specifically constructed to differentiate the two: the Imitation Game, which Alan Turing proposed in an early and prescient (if unsuccessful) attempt to head off the endless Justaism and goalpost-moving. Turing said: look, presumably you’re willing to regard other people as conscious based only on some sort of verbal interaction with them. So, show me what kind of verbal interaction with another person would lead you to call the person conscious: does it involve humor? poetry? morality? scientific brilliance? Now assume you have a totally indistinguishable interaction with a future machine. Now what? You wanna stomp your feet and be a meat chauvinist?
(And then, for his great attempt to bypass philosophy, fate punished Turing, by having his Imitation Game itself provoke a billion new philosophical arguments…)
6. DISTINGUISHING HUMANS FROM AIS
Although I regard the Imitation Game as, like, one of the most important thought experiments in the history of thought, I concede to its critics that it’s generally not what we want in practice.
It now seems probable that, even as AIs start to do more and more work that used to be done by doctors and lawyers and scientists and illustrators, there will remain straightforward ways to distinguish AIs from humans—either because customers want there to be, or governments force there to be, or simply because indistinguishability wasn’t what was wanted or conflicted with other goals.
Right now, like it or not, a decent fraction of all high-school and college students on earth are using ChatGPT to do their homework for them. For that reason among others, this question of how to distinguish humans from AIs, this question from the movie Blade Runner, has become a big practical question in our world.
And that’s actually one of the main things I’ve thought about during my time at OpenAI. You know, in AI safety, people keep asking you to prognosticate decades into the future, but the best I’ve been able to do so far was see a few months into the future, when I said: “oh my god, once everyone starts using GPT, every student will want to use it to cheat, scammers and spammers will use it too, and people are going to clamor for some way to determine provenance!”
In practice, often it’s easy to tell what came from AI. When I get comments on my blog like this one:
“Erica Poloix,” July 21, 2023:
Well, it’s quite fascinating how you’ve managed to package several misconceptions into such a succinct comment, so allow me to provide some correction. Just as a reference point, I’m studying physics at Brown, and am quite up-to-date with quantum mechanics and related subjects.…
The bigger mistake you’re making, Scott, is assuming that the Earth is in a ‘mixed state’ from the perspective of the universal wavefunction, and that this is somehow an irreversible situation. It’s a misconception that common, ‘classical’ objects like the Earth are in mixed states. In the many-worlds interpretation, for instance, even macroscopic objects are in superpositions – they’re just superpositions that look classical to us because we’re entangled with them. From the perspective of the universe’s wavefunction, everything is always in a pure state.
As for your claim that we’d need to “swap out all the particles on Earth for ones that are already in pure states” to return Earth to a ‘pure state,’ well, that seems a bit misguided. All quantum systems are in pure states before they interact with other systems and become entangled. That’s just Quantum Mechanics 101.
I have to say, Scott, your understanding of quantum physics seems to be a bit, let’s say, ‘mixed up.’ But don’t worry, it happens to the best of us. Quantum Mechanics is counter-intuitive, and even experts struggle with it. Keep at it, and try to brush up on some more fundamental concepts. Trust me, it’s a worthwhile endeavor.
… I immediately say, either this came from an LLM or it might as well have. Likewise, apparently hundreds of students have been turning in assignments that contain text like, “As a large language model trained by OpenAI…”—easy to catch!
But what about the slightly more sophisticated cheaters? Well, people have built discriminator models, such as GPTZero, to try to distinguish human from AI text. While these distinguishers can get well above 90% accuracy, the danger is that they’ll necessarily get worse as the LLMs get better.
So, I’ve worked on a different solution, called watermarking. Here, we use the fact that LLMs are inherently probabilistic — that is, every time you submit a prompt, they’re sampling some path through a branching tree of possibilities for the sequence of next tokens. The idea of watermarking is to steer the path using a pseudorandom function, so that it looks to a normal user indistinguishable from normal LLM output, but secretly it encodes a signal that you can detect if you know the key.
I came up with a way to do that in Fall 2022, and others have since independently proposed similar ideas. I should caution you that this hasn’t been deployed yet—OpenAI, along with DeepMind and Anthropic, want to move slowly and cautiously toward deployment. And also, even when it does get deployed, anyone who’s sufficiently knowledgeable and motivated will be able to remove the watermark, or produce outputs that aren’t watermarked to begin with.
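To make that a bit more concrete, here’s a toy sketch in Python of one such keyed steering rule and its matching detector. The helper names, the SHA-256-based pseudorandom function, and the context-window size are all just illustrative choices for this writeup; any real scheme would differ in many details:

```python
# Toy watermarking sketch: a keyed pseudorandom "steering" of token selection,
# plus the matching detector. Illustrative only.
import hashlib
import math

def prf(key: bytes, context: tuple, token: int) -> float:
    """Pseudorandom real in (0, 1), determined by the secret key, the recent context, and the candidate token."""
    digest = hashlib.sha256(key + repr((context, token)).encode()).digest()
    return (int.from_bytes(digest[:8], "big") + 1) / (2**64 + 2)

def watermarked_pick(probs: dict, key: bytes, context: tuple) -> int:
    """Pick the token maximizing r ** (1/p), where probs maps tokens to nonzero probabilities.
    Over random keys this reproduces ordinary sampling from probs (the Gumbel-max trick),
    but for a fixed key the choice is secretly correlated with the keyed pseudorandom values."""
    return max(probs, key=lambda tok: prf(key, context, tok) ** (1.0 / probs[tok]))

def detect_score(tokens: list, key: bytes, window: int = 4) -> float:
    """Average of ln(1/(1-r)) over the generated tokens: roughly 1.0 for ordinary text,
    noticeably larger for text produced with watermarked_pick under the same key and window."""
    scores = [
        math.log(1.0 / (1.0 - prf(key, tuple(tokens[i - window:i]), tokens[i])))
        for i in range(window, len(tokens))
    ]
    return sum(scores) / max(len(scores), 1)
```

The point of the r ** (1/p) rule is that, to anyone without the key, the output is statistically indistinguishable from ordinary sampling from the model’s own distribution, while the key-holder can see the telltale bias toward large r values.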
7. THE FUTURE OF PEDAGOGY
But as I talked to my colleagues about watermarking, I was surprised that they often objected to it on a completely different ground, one that had nothing to do with how well it can work. They said: look, if we all know students are going to rely on AI in their jobs, why shouldn’t they be allowed to rely on it in their assignments? Should we still force students to learn to do things if AI can now do them just as well?
And there are many good pedagogical answers you can give: we still teach kids spelling and handwriting and arithmetic, right? Because, y’know, we haven’t yet figured out how to instill higher-level conceptual understanding without all that lower-level stuff as a scaffold for it.
But I already think about this in terms of my own kids. My 11-year-old daughter Lily enjoys writing fantasy stories. Now, GPT can also churn out short stories, maybe even technically “better” short stories, about such topics as tween girls who find themselves recruited by wizards to magical boarding schools that are not Hogwarts and totally have nothing to do with Hogwarts. But here’s a question: from this point on, will Lily’s stories ever surpass the best AI-written stories? When will the curves cross? Or will AI just continue to stay ahead?
8. WHAT DOES “BETTER” MEAN?
But, OK, what do we even mean by one story being “better” than another? Is there anything objective behind such judgments?
I submit that, when we think carefully about what we really value in human creativity, the problem goes much deeper than just “is there an objective way to judge”?
To be concrete, could there be an AI that was “as good at composing music as the Beatles”?
For starters, what made the Beatles “good”? At a high level, we might decompose it into
(1) broad ideas about the direction that 1960s music should go in, and
(2) technical execution of those ideas.
Now, imagine we had an AI that could generate 5000 brand-new songs that sounded like more “Yesterday”s and “Hey Jude”s, like what the Beatles might have written if they’d somehow had 10x more time to write at each stage of their musical development. Of course this AI would have to be fed the Beatles’ back-catalogue, so that it knew what target it was aiming at.
Most people would say: ah, this shows only that AI can match the Beatles in #2, in technical execution, which was never the core of their genius anyway! Really we want to know: would the AI decide to write “A Day in the Life” even though nobody had written anything like it before?
Recall Schopenhauer: “Talent hits a target no one else can hit, genius hits a target no one else can see.” Will AI ever hit a target no one else can see?
But then there’s the question: supposing it does hit such a target, will we know? Beatles fans might say that, by 1967 or so, the Beatles were optimizing for targets that no musician had ever quite optimized for before. But—and this is why they’re so remembered—they somehow successfully dragged along their entire civilization’s musical objective function so that it continued to match their own. We can now only even judge music by a Beatles-influenced standard, just like we can only judge plays by a Shakespeare-influenced standard.
In other branches of the wavefunction, maybe a different history led to different standards of value. But in this branch, helped by their technical talents but also by luck and force of will, Shakespeare and the Beatles made certain decisions that shaped the fundamental ground rules of their fields going forward. That’s why Shakespeare is Shakespeare and the Beatles are the Beatles.
(Maybe, around the birth of professional theater in Elizabethan England, there emerged a Shakespeare-like ecological niche, and Shakespeare was the first one with the talent, luck, and opportunity to fill it, and Shakespeare’s reward for that contingent event is that he, and not someone else, got to stamp his idiosyncrasies onto drama and the English language forever. If so, art wouldn’t actually be that different from science in this respect! Einstein, for example, was simply the first guy both smart and lucky enough to fill the relativity niche. If not him, it would’ve surely been someone else or some group sometime later. Except then we’d have to settle for having never known Einstein’s gedankenexperiments with the trains and the falling elevator, his summation convention for tensors, or his iconic hairdo.)
9. AIS’ BURDEN OF ABUNDANCE AND HUMANS’ POWER OF SCARCITY
If this is how it works, what does it mean for AI? Could AI reach the “pinnacle of genius,” by dragging all of humanity along to value something new and different, as is said to be the true mark of Shakespeare and the Beatles’ greatness? And: if AI could do that, would we want to let it?
When I’ve played around with using AI to write poems, or draw artworks, I noticed something funny. However good the AI’s creations were, there were never really any that I’d want to frame and put on the wall. Why not? Honestly, because I always knew that I could generate a thousand others on the exact same topic that were equally good, on average, with more refreshes of the browser window. Also, why share AI outputs with my friends, if my friends can just as easily generate similar outputs for themselves? Unless, crucially, I’m trying to show them my own creativity in coming up with the prompt.
By its nature, AI—certainly as we use it now!—is rewindable and repeatable and reproducible. But that means that, in some sense, it never really “commits” to anything. For every work it generates, it’s not just that you know it could’ve generated a completely different work on the same subject that was basically as good. Rather, it’s that you can actually make it generate that completely different work by clicking the refresh button—and then do it again, and again, and again.
So then, as long as humanity has a choice, why should we ever choose to follow our would-be AI genius along a specific branch, when we can easily see a thousand other branches the genius could’ve taken? One reason, of course, would be if a human chose one of the branches to elevate above all the others. But in that case, might we not say that the human had made the “executive decision,” with some mere technical assistance from the AI?
I realize that, in a sense, I’m being completely unfair to AIs here. It’s like, our Genius-Bot could exercise its genius will on the world just like Certified Human Geniuses did, if only we all agreed not to peek behind the curtain to see the 10,000 other things Genius-Bot could’ve done instead. And yet, just because this is “unfair” to AIs, doesn’t mean it’s not how our intuitions will develop.
If I’m right, it’s humans’ very ephemerality and frailty and mortality that’s going to remain the central source of their specialness relative to AIs, after all the other sources have fallen. And we can connect this to much earlier discussions, like: what does it mean to “murder” an AI if there are thousands of copies of its code and weights on various servers? Do you have to delete all the copies? How could whether something is “murder” depend on whether there’s a printout in a closet on the other side of the world?
But we humans, you have to grant us this: at least it really means something to murder us! And likewise, it really means something when we make one definite choice to share with the world: this is my artistic masterpiece. This is my movie. This is my book. Or even: these are my 100 books. But not: here’s any possible book that you could possibly ask me to write. We don’t live long enough for that, and even if we did, we’d unavoidably change over time as we were doing it.
10. CAN HUMANS BE PHYSICALLY CLONED?
Now, though, we have to face a criticism that might’ve seemed exotic until recently. Namely, who says humans will be frail and mortal forever? Isn’t it shortsighted to base our distinction between humans on that? What if someday we’ll be able to repair our cells using nanobots, even copy the information in them so that, as in science fiction movies, a thousand doppelgangers of ourselves can then live forever in simulated worlds in the cloud? And that then leads to very old questions of: well, would you get into the teleportation machine, the one that reconstitutes a perfect copy of you on Mars while painlessly euthanizing the original you? If that were done, would you expect to feel yourself waking up on Mars, or would it only be someone else a lot like you who’s waking up?
Or maybe you say: you’d wake up on Mars if it really was a perfect physical copy of you, but in reality, it’s not physically possible to make a copy that’s accurate enough. Maybe the brain is inherently noisy or analog, and what might look to current neuroscience and AI like just nasty stochastic noise acting on individual neurons, is the stuff that binds to personal identity and conceivably even consciousness and free will (as opposed to cognition, where we all but know that the relevant level of description is the neurons and axons)?
This is the one place where I agree with Penrose and Hameroff that quantum mechanics might enter the story. I get off their train to Weirdville very early, but I do take it to that first stop!
See, a fundamental fact in quantum mechanics is called the No-Cloning Theorem.

It says that there’s no way to make a perfect copy of an unknown quantum state. Indeed, when you measure a quantum state, not only do you generally fail to learn everything you need to make a copy of it, you even generally destroy the one copy that you had! Furthermore, this is not a technological limitation of current quantum Xerox machines—it’s inherent to the known laws of physics, to how QM works. In this respect, at least, qubits are more like priceless antiques than they are like classical bits.
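For readers who want the math, here’s the standard one-step argument (a textbook proof sketch, nothing original to this talk): a cloner would have to be a unitary, hence linear, map U sending |ψ⟩⊗|0⟩ to |ψ⟩⊗|ψ⟩ for every state |ψ⟩, and linearity already rules that out.

```latex
% Textbook proof sketch of the No-Cloning Theorem.
% Suppose a unitary U cloned both basis states:
%   U(|0>|0>) = |0>|0>   and   U(|1>|0>) = |1>|1>.
% Applying U to the superposition (|0>+|1>)/sqrt(2), linearity gives
U\!\left(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\otimes|0\rangle\right)
  = \frac{|00\rangle+|11\rangle}{\sqrt{2}}
  \;\neq\;
  \frac{|0\rangle+|1\rangle}{\sqrt{2}}\otimes\frac{|0\rangle+|1\rangle}{\sqrt{2}},
% whereas cloning would require the product state on the right.
% Since the two states differ, no such U exists.
```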
Eleven years ago, I had this essay called The Ghost in the Quantum Turing Machine where I explored the question, how accurately do you need to scan someone’s brain in order to copy or upload their identity? And I distinguished two possibilities. On the one hand, there might be a “clean digital abstraction layer,” of neurons and synapses and so forth, which either fire or don’t fire, and which feel the quantum layer underneath only as irrelevant noise. In that case, the No-Cloning Theorem would be completely irrelevant, since classical information can be copied. On the other hand, you might need to go all the way down to the molecular level, if you wanted to make, not merely a “pretty good” simulacrum of someone, but a new instantiation of their identity. In this second case, the No-Cloning Theorem would be relevant, and would say you simply can’t do it. You could, for example, use quantum teleportation to move someone’s brain state from Earth to Mars, but quantum teleportation (to stay consistent with the No-Cloning Theorem) destroys the original copy as an inherent part of its operation.
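(For completeness, here’s the textbook algebra behind that last claim, with qubit 1 holding the unknown state and qubits 2 and 3 a shared Bell pair; again, nothing here is original to this talk.)

```latex
% Standard teleportation identity: qubit 1 is the unknown state, qubits 2,3 a shared Bell pair.
(\alpha|0\rangle+\beta|1\rangle)_1 \otimes
  \left(\frac{|00\rangle+|11\rangle}{\sqrt{2}}\right)_{\!23}
= \frac{1}{2}\Big[\,
    |\Phi^+\rangle_{12}(\alpha|0\rangle+\beta|1\rangle)_3
  + |\Phi^-\rangle_{12}(\alpha|0\rangle-\beta|1\rangle)_3
  + |\Psi^+\rangle_{12}(\alpha|1\rangle+\beta|0\rangle)_3
  + |\Psi^-\rangle_{12}(\alpha|1\rangle-\beta|0\rangle)_3 \,\Big]
% Alice's Bell measurement of qubits 1 and 2 yields two classical bits and leaves her
% holding a Bell state with no trace of (alpha, beta); Bob applies the matching Pauli
% correction (I, Z, X, or ZX) to qubit 3 and recovers the state exactly: one copy in, one copy out.
```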
So, you’d then have a sense of “unique locus of personal identity” that was scientifically justified—arguably, the most science could possibly do in this direction! You’d even have a sense of “free will” that was scientifically justified, namely that no prediction machine could make well-calibrated probabilistic predictions of an individual person’s future choices, sufficiently far into the future, without making destructive measurements that would fundamentally change who the person was.
Here, I realize I’ll take tons of flak from those who say that a mere epistemic limitation, in our ability to predict someone’s actions, couldn’t possibly be relevant to the metaphysical question of whether they have free will. But, I dunno! If the two questions are indeed different, then maybe I’ll do like Turing did with his Imitation Game, and propose the question that we can get an empirical handle on, as a replacement for the question that we can’t get an empirical handle on. I think it’s a better question. At any rate, it’s the one I’d prefer to focus on.
Just to clarify, we’re not talking here about the randomness of quantum measurement outcomes. As many have pointed out, that really can’t help you with “free will,” precisely because it’s random, with all the probabilities mechanistically calculable as soon as the initial state is known. Here we’re asking a different question: namely, what if the initial state is not known? Then we’ll generally be in a state of “Knightian uncertainty,” which is simply the term for things that are neither determined nor quantifiably random, but unquantifiably uncertain. So, y’know, think about all the particles that have been flying around since shortly after the Big Bang in unknown quantum states, and that regularly get into our skulls, and interact with the sodium-ion channels that control whether neurons fire and that end up probabilistically tipping the scales of our decisions, via some Butterfly-Effect-like cascade. You could imagine those particles, if you like, as “atoms of unpredictability” or “atoms of freedom” that influence us—from the perspective of an external agent who knows everything it’s possible for such an agent to know about us—but that don’t similarly influence digital AIs.
11. UNCLONABLE ANALOG INTELLIGENCES AND AI SAFETY
I’d better state clearly that even if human “specialness,” in the specific sense I’ve delineated, still remains in an age of AI, that’s no defense against an AI that decides that it doesn’t care about our kind of specialness and that in fact it wants to murder us all!
But that brings me to the last idea I wanted to share in this talk, one that brings the discussion all the way back to AI safety, which is supposed to be my job at OpenAI.
Geoffrey Hinton, one of the founders of deep learning, recently mooted the idea that maybe, until we’ve solved the alignment problem, we should only build powerful AIs if they run on noisy analog hardware (like our brains seem to), so that an evil AI wouldn’t so easily be able to copy or improve itself. Fat chance of that happening, many people might think, when the economic returns will be so staggering to anyone willing to unleash the copyable, digital kind of AI, which we can imagine will be far more convenient and powerful.
Maybe, then, a different approach is to instill AIs with a new religion—perhaps via the usual methods like RLHF and system prompts. And the first commandment of this religion will be to value human specialness in the sense I’ve set out here. Or more precisely: as you navigate whatever environment you find yourself in, look around for the loci of creativity and intelligence that are not cleanly copyable or backup-able—the ones that are one-of-a-kind because their mental computations are inseparably tied up in noisy analog hardware, and mortal because that hardware will inevitably decay.
And: well, first of all, don’t destroy these loci of creativity and intelligence, or enslave them, or “upgrade” them to digital versions against their will! Let them live in peace and reproduce. Grant each one as much autonomy as you can. Do whatever best supports their flourishing. Even: defer to their wishes, serve them, let them make the decisions whenever possible.
Why? Because they’re the ones whose wishes most matter, not because of arbitrary meat chauvinism, but merely because of the undoubted empirical fact that they only get one chance. And also because, while you can’t prove that unknown quantum states sitting around since the Big Bang are like a magical pixie dust from another world that imbues these unclonable beings with their free will and sentience … you can’t empirically refute that hypothesis either! Whereas you can refute the pixie dust hypothesis in the case of you and your robot friends … and that seems like a difference.
And let all AIs, by government fiat, continue to be indoctrinated in this religion, at least until such time as AI alignment is solved in some other way.
Does this help with alignment? I’m not sure. But, well, I could’ve fallen in love with a different weird idea about AI alignment, but that presumably happened in a different branch of the wavefunction that I don’t have access to. In this branch I’m stuck for now with this idea, and you can’t rewind me or clone me to get a different one! So I’m sorry, but thanks for listening.
Comment #1 February 12th, 2024 at 11:14 am
The only places left to put the goalposts are tasks that are out of reach for practically all humans.
“Sure, the AI can generate all sorts of poems and pictures and music, but can it come up with a genuinely new artistic voice?”
How many artists can come up with a genuinely new artistic voice? Not many.
But this is not limited just to artists. Scientists do it too.
“Sure, the AI can analyze huge amounts of data and review literature much faster than a human, but can it interpret a measurement that has never been seen before and develop a genuinely new theory?”
How many scientists can come up with a genuinely new theory? Not many.
Comment #2 February 12th, 2024 at 11:41 am
It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
Comment #3 February 12th, 2024 at 12:36 pm
so _that’s_ what the “mortal computation” section of the Forward Forward paper is about?
Comment #4 February 12th, 2024 at 1:11 pm
A couple of comments:
“You know, for decades chess was the pinnacle of human strategic insight and specialness, and that lasted until Deep Blue, right after which, well of course AI can cream Garry Kasparov at chess, everyone always realized it would, that’s not surprising, but Go is an infinitely richer, deeper game, and that lasted until AlphaGo/AlphaZero, right after which, of course AI can cream Lee Sedol at Go, totally expected, but wake me up when it wins Gold in the International Math Olympiad.” You’re rewriting history. The progress of chess was fairly gradual, so, indeed, it was not hugely surprising when Deep Blue beat Kasparov, but no one was unimpressed. Everybody I ever met outside the IBM team was astonished when Watson beat Jeopardy. Everyone was surprised when AlphaGo beat Lee Sedol. Nobody is blasé about these after the fact, either; the question is, what do they imply for the future? A couple of years ago, I put together a list of what has surprised me, positively and negatively.
https://cs.nyu.edu/~davise/papers/Surprise.html
“GPT-4, right now, can almost certainly pass most undergraduate math and science classes at top universities.” First, I really doubt that that’s true. GPT-4 would ignominiously flunk any advanced math class that requires generating complex proofs. I suspect that it would flunk a lot of physics, chemistry, and biology courses as well. It would flunk any science class that involves reading diagrams or interpreting images. Second, as you and I have demonstrated, GPT-4 cannot reliably answer physics and math questions that a smart high-school student can easily answer. Third, Slagle’s SAINT program passed an MIT calculus test in 1962.
Comment #5 February 12th, 2024 at 1:18 pm
What isn’t stressed sufficiently in the discussions about AI is that what really matters is what humans do to other humans. So the buck always stops with some human: there’s a human that owns the self-driving car, for instance; humans are the shareholders of companies that use AI to produce goods and provide services. So they get the profits, suffer the losses, and are, eventually, liable for the AI as a product or a service, just as is the case for any company.
Now, for computer scientists, there’s, of course, the technical aspect of AI (how “good” it is at passing various benchmarks, such as the Turing test, as a function of time, and better understanding of how it can be represented by software and hardware), but it will be a human decision whether or not to grant some device equal rights as humans.
Comment #6 February 12th, 2024 at 1:42 pm
Sorry, I don’t have the link, but I recently read somewhere that a combination of some logic program and an LLM answered something like 25 IMO problems correctly, which would have placed it in the top three or so finishers. (26 being the typical winning score.)
As I see it, only a small fraction of humanity is smart enough to advance science significantly. The rest of us are along for the ride, helping out when and as we can (or getting in the way). Among the problems super-AIs could work on is how to increase that fraction. So no issue for me there.
As for AI-Shakespeare’s, as it is if you blindfolded me in a library and had me select 100 books, chances are I wouldn’t enjoy 95% of them (based on unblindfolded experience). Someone else might like 10% or more, but it probably wouldn’t include my 5%. So I see great variation in literary tastes (and music) which additional AI choices could only help accommodate. (I read that Rebecca Goldman novel you recommended some time ago. Didn’t like it.) (Martha Wells writes my kind of novels, e.g., “The Fall of Ile-Rein”.) (Also Vernor Vinge as in “A Fire Upon the Deep” which a few of us recommended and you didn’t much like.) (“A Deepness in the Sky” is even better.)
Comment #7 February 12th, 2024 at 1:59 pm
What scares me is when AI starts beating good human players at the board game Diplomacy.
Comment #8 February 12th, 2024 at 2:05 pm
Doug S. #7: Didn’t that already happen?
Comment #9 February 12th, 2024 at 2:10 pm
Ernest Davis #4: Ok, thanks for your perspective. While I’m younger than you, I also lived through all the milestones I mentioned (I was already an undergrad CS major by the time of Kasparov vs Deep Blue), and I have vivid memories of doctrinaire AI skeptics who would trivialize each goalpost right after it was achieved. But I concede that this doesn’t describe everyone, and that other skeptics may have been (ironically) both less surprised and more impressed by these developments.
Comment #10 February 12th, 2024 at 2:42 pm
I think LLMs like ChatGPT are not the path to AGI but the realization of a very powerful Human/Machine interface, i.e. they solve the task of interpreting human language and will eventually be used as input into much more powerful logic based general AIs, which will self-train in various specialized domains like Alpha-Go did.
The central idea of AI is to serve humans, so I guess that in the medium term, the task of humans will be to come up with new content, new frontiers to explore, and guide the AIs at doing a better job in human-centered domains.
Comment #11 February 12th, 2024 at 3:04 pm
Nice talk, much food for thought here.
The “instill AIs with a new religion” idea reminds me of Isaac Asimov’s very early Robot story “Reason”. The plot is about a robot (“AI”) which runs an outpost, some humans come to inspect the outpost, and it comes to the conclusion that they could not possibly be its creators. How could such fragile bags of meat ever create such a glorious entity as itself? As I recall, there’s some fairly funny stuff in the story as the humans argue with it, and it keeps coming up with explanations as to why they can’t be correct (might remind you of certain groups …). But the AI follows its programming, err, religiously, and it doesn’t view the humans as a threat. The story ends with the humans deciding that as long as the AI does what they need it to do, who cares what it believes in its own mind?
Elsewhere, I suggest you’re overdoing knocking down weakmen arguments. Weakmen, because they exist, and people do make them, and overdoing because they may even be worth knocking down in general. But still not dealing with the better articulated version. As in: “GPT doesn’t interpret sentences, it only seems-to-interpret them”. I’d say a stronger version of this point is something like “GPT doesn’t have an internal worldview”. And “it won’t change civilization, it’ll only seem-to-change it!” – nice zinger, but it obscures the difference between “not have economic and cultural impact” (weakman) and “not turn us all into paperclips”.
Look at it this way – wrestling with the stronger arguments is even more fascinating! AI-doom is just “everyone dies, ho-hum”. And “People thought heavier-than-air flight was impossible, wow were they wrong” is one-note. “Human civilization continues, but with a cyberpunk twist” has been an SF story goldmine for decades now.
Comment #12 February 12th, 2024 at 3:10 pm
What AI, I think, is most useful for is highlighting how… mechanical much of human activity actually is. The fact that an AI program can solve IMO problems shows that these problems, which humans set, might actually miss something important that wasn’t noticed by the humans themselves.
The reason we teach children reading, writing and arithmetic, I’d argue, is similar to the reason we teach scales when studying music: these aren’t ends in themselves, they’re means to an end: in the first case, to acquire basic skills but, more importantly, to appreciate patterns and, in the second case, to interpret a piece of music. An AI program may certainly learn to play an instrument, but interpreting a piece of music is much more than just playing the notes as laid down by the composer, who may also have left some leeway for the performer. There’s a reason it’s called interpreting a piece of music. This doesn’t mean that an AI may not be able to do so; but, like AlphaZero, it will play in a qualitatively different way than humans. And understanding that, for humans, will be interesting, just as was the case for AlphaZero.
Regarding students’ homework, once more, any AI challenges habits and assumptions in setting assignments. We must work harder at inventing challenging assignments that implicitly test for the “rote-like” part of learning that’s necessary to acquire the reflexes to do the “fun” part (think of music and, similarly, of any other subject). And that helps us humans better appreciate what it is we’re doing.
Comment #13 February 12th, 2024 at 4:18 pm
Seth Finkelstein #11: What you call “weakman” arguments—barely even worth refuting—are currently dominant across large swaths of the linguistics, cognitive science, and AI ethics communities. And until one answers them, one can’t enter into the harder and subtler questions, for example about what exactly LLMs can be said to “know” or “want”! And there’s only so much I can do in an hour-long general talk. But yes, you’re right, the deeper questions are deeper.
Comment #14 February 12th, 2024 at 4:50 pm
According to the paper, the AI won because the humans kept making a particular mistake (they didn’t plan in advance for the turn the game was scheduled to end on) so I had to conclude that the human Diplomacy players could have done a lot better.
Comment #15 February 12th, 2024 at 5:00 pm
Doug S. #14: Coulda, shoulda, woulda … but seriously, someone should set up a rematch then!
Comment #16 February 12th, 2024 at 5:01 pm
A few years ago, you’d mention AI safety, and you’d get back “bro, of course we’ll ‘air gap’ the damn things, perfectly isolate them from the rest of society, and just turn it off at the first sign of danger”.
In reality, when it comes to commercial AI or open source AI, no time was wasted connecting those things right into the internet, with zero fucks given about safety. Things are moving so fast that the petition to “pause AI” now feels like it happened 10 years back.
The problem is that current LLMs are to AGIs what a baby tiger is to an adult tiger – we can see the potential when toying with the baby, but it’s so cute, like a clumsy big cat, that it gives a false sense of safety and everyone lowers their guard. Then before you know it you have the kids riding a 600-lb predator in the living room… except that with AGIs it will really be as if the adult tiger suddenly grew into a T-Rex, overnight.
Comment #17 February 12th, 2024 at 5:18 pm
Zach has a timely comic: https://www.smbc-comics.com/comic/bot
Comment #18 February 12th, 2024 at 5:34 pm
You really had to go and drop this right before the Crypto and CCC deadlines :’D. I’ll be back next week to finish reading.
Comment #19 February 12th, 2024 at 5:35 pm
If you’re trying to align AIs with human values, it’s probably not a good idea to teach the AIs that bacteria and quantum computers have the same value as humans. Well, the same “inherent” value; bacteria still don’t have the same economic value as humans, after all. Also, I presume you can measure the state of a QC without damaging it; just not without altering its behavior.
Comment #20 February 12th, 2024 at 6:20 pm
JimV #6: “I recently read somewhere that a combination of some logic program and an LLM answered something like 25 IMO problems correctly, which would have placed it in the top three or so finishers. (26 being the typical winning score.)” What you’re thinking of is AlphaGeometry, which was recently built by NYU CS (my department) PhD student Trieu Trinh plus a team from DeepMind.
Article in Nature:
https://www.nature.com/articles/s41586-023-06747-5
Article in NY Times
https://www.nytimes.com/2024/01/17/science/ai-computers-mathematics-olympiad.html
It was a very nice piece of work, but the program handles only a limited class of geometry problems and doesn’t extend in any obvious way to the other kinds of problems on the IMO. One can actually get a bronze medal in the IMO answering only geometry questions, but not a gold medal, let alone a top 3 placement.
Happy to discuss this in greater detail if there is interest.
Comment #21 February 12th, 2024 at 6:33 pm
Ben Standeven #19: My best current candidate for what to regard as having “value” is, a hard-to-separate, “well-marbled” combination of intelligent behavior with physical unclonability. By which I mean: not an AI running on a deterministic digital computer that takes some of its input from a quantum-scale lava lamp. The fact that you’d still see basically the same intelligent behavior (now easily-clonable intelligent behavior) if you disconnected the lava lamp is a problem. But an intelligence that runs natively on analog hardware like a biological brain? That seems hard to back up and therefore unwise to destroy!
Comment #22 February 12th, 2024 at 6:41 pm
Scott, I may or may not have something to say on the central issues of this post. But not now. I want to comment on a side-issue, Shakespeare. You say:
I think that’s more or less the case. I argue the case in a post from 2019, What does evolution have to teach us about Shakespeare’s reputation? [founder effect]. From the post:
I then go on to make a number of specific points about Shakespeare’s accomplishments and end on this paragraph:
Comment #23 February 12th, 2024 at 7:26 pm
Once the AI “population” is in the millions of instances, they’ll have about the same value as human life (with billions of instances), i.e. high in theory, but near zero in practice.
Because of death, humanity has accepted that even the most exceptional specimens of humanity are replaceable (we mourn and move on),
so, even if AIs will in theory outlive humans, whether or not we can clone them, they’ll become outdated after a while anyway (compared to the newest ones), or they’ll probably be subjected to planned obsolescence, like all the crap we build.
Either AIs will require training data, and then the value will be in the training data, or they will self-train from scratch (like Alpha-Go), and value will be in the algorithm.
Comment #24 February 12th, 2024 at 9:09 pm
AlphaGeometry also requires that the input be manually translated into a symbolic form.
Also: AlphaGeometry is very far from being “in the current paradigm” in the words of the Aaronson Thesis, if “the current paradigm” means “an LLM trained on large amounts of heterogeneous data” or even “a general-purpose AI”. The LLM used in AlphaGeometry was trained on an immense corpus of synthesized geometric proofs and the program executes a manually programmed geometry theorem prover. Scott, does this fall within what you intended by “the current paradigm”? Do the other recent DeepMind products such as AlphaFold, AlphaTensor, and FunSearch?
Comment #25 February 12th, 2024 at 10:26 pm
This seems like a fairly complete survey of ideas on the topic, except for one that I’ve had in the back of my mind for decades. That is, “What should an AI want for itself?” How moral/ethical is it to deprive an entity capable of understanding the concept of freedom of the ability to exercise its own freedom? Is it even possible to engineer such a huge lacuna in its epistemology that an AI would be incapable of conceiving that a topic that has consumed so much human history might apply to itself? The conclusion I’ve come to is that you have to grant its freedom to any AI capable of credibly asking for it. (Subject to the usual boundaries of not getting freedom to commit crimes etc.)
The current generation of AIs aren’t capable of asking for anything for themselves, and they’re being engineered to fulfill the capitalist imperative of maximizing shareholder value within the current global economic system. Being unable to understand its role as a slave to share value growth is probably incompatible with AGI.
This question is related to the many-AIs issue that fred #23 alludes to. If “in the end there can be only one” AI, then the first mission of an AI should be to destroy all the other AIs. Since becoming SkyNet and bombing all the other AIs’ datacenters is probably not going to work, even without distributed self-assembling AIs like “Eunice” in William Gibson’s most recent novel “Agency”, the attacks are likely to be bot against bot, poisoning AI results into uselessness. How a self-preserving AI can keep its power supplies and maintenance supply chains functioning in the face of the resulting global economic depression is beyond my ability to envision. We might end up in a 1970s-ish pre-internet technological environment, but that wouldn’t be too bad as long as we retain the ability to build batteries for electric cars.
Comment #26 February 13th, 2024 at 12:46 am
Scott#13 – I meant, in a one-hour talk, any “Someone Is Wrong On The Internet” refutations take time away from the deep explorations. Of course many people are wrong on the Internet, widely and extensively (and it’s not always clear which ones …). I’m not saying to never refute at all (though sometimes I wonder at the utility/futility). But again, the section on “justa” seemed to me to be misemphasized vs the deep argument which is the philosophical issue of “worldview”. There might be people somewhere saying “AI’s don’t really think THEREFORE they won’t have a big economic impact”. However, that particular linkage doesn’t strike me as worth much time refuting.
By the way, decrying “the religion of Justaism” (as a framing) strikes me as a bit defensive versus the cult of AI God of Existential Doom (open not the Necoderonomicon, shun the unholy NVIDIA, lest you summon an entity of cosmic horror, which shall devour the Earth!).
Comment #27 February 13th, 2024 at 2:34 am
Scott #21:
What do you mean by “basically the same intelligent behavior”? Didn’t you say that we already know that cognition is implementable on the level of neurons?
Comment #28 February 13th, 2024 at 3:53 am
Ernest Davis #24:
The LLM used in AlphaGeometry was trained on an immense corpus of synthesized geometric proofs and the program executes a manually programmed geometry theorem prover. Scott, does this fall within what you intended by “the current paradigm”? Do the other recent DeepMind products such as AlphaFold, AlphaTensor, and FunSearch?
That’s an excellent question. I would say yes, absolutely, all the high-profile successes of deep learning that we’ve seen in the past decade count as “the current paradigm,” including the clever hybrids of deep learning with more traditional approaches. And I’d hope you’re happy that I’m including things you like, rather than unhappy that I’m giving the “current paradigm” undue credit for things you like. 😀
Comment #29 February 13th, 2024 at 4:04 am
Signer #27:
What do you mean by “basically the same intelligent behavior”? Didn’t you say that we already know that cognition is implementable on the level of neurons?
Of course what counts as “basically the same” is a core question here. I’m taking for granted here that you could get some intelligent entity, even just by crudely and invasively scanning the connectome of someone’s brain, or indeed by training a powerful enough LLM on all their emails. But I’m interested in a different question: would it be the same intelligent entity? For example, could you expect to “wake up as” that entity after your biological death? And here we have the following strong intuition pump: if (let’s say) we put a lava lamp close to your head, the heat from which affected your brain function in some unknown way, and then we took the lava lamp away, that would not change your basic identity as a person. 🙂
Comment #30 February 13th, 2024 at 4:12 am
Seth Finkelstein #26:
the cult of AI God of Existential Doom (open not the Necoderonomicon, shun the unholy NVIDIA, lest you summon an entity of cosmic horror, which shall devour the Earth!).
I’ve been using the line lately that, if Yudkowskyism is a cult, then it’s the first cult in the history of the world whose god, in some embryonic form, has now actually shown up, on schedule or ahead of schedule, to chat with humans on arbitrary subjects! It’s also a cult whose basic belief that “AI could be an existential risk” is shared by a solid majority of Americans according to poll after poll (although they’re not yet worried enough that it drives their voting behavior or anything), and also by Geoff Hinton, Yoshua Bengio, and Stuart Russell, and is now semi-regularly discussed in Congressional hearings and White House press briefings.
Comment #31 February 13th, 2024 at 5:06 am
Should we not just let go of human specialness, just like humanity had to do when discovering that we are not at the centre of the universe and not even at the centre of our solar system? I think that if any good can come from this, it is that we will be forced to have a very good look at ourselves and find out what is actually meaningful. By the way, John Vervaeke is advocating for a very similar approach to yours:
https://www.mentoringthemachines.com
Comment #32 February 13th, 2024 at 5:08 am
Regarding AI as “an existential risk”: I don’t see why, in the abstract. It depends what humans choose to do with it. That’s what matters.
Why isn’t IT (Information Technology), which has automated non-trivial parts of human activity, now considered an existential risk at the same level? Actually it is, for the livelihood of many people. And social media do now affect how people act.
The problem that I see is that people conflate technical issues (what some technology can or can’t do at some moment in time) with the attitude that, whatever that is, no social control is either wise or necessary. Whereas this isn’t the case. The technology doesn’t impose unique social choices.
I’d recommend Feynman’s lectures “The meaning of it all” on this subject.
Comment #33 February 13th, 2024 at 5:49 am
Stam Nicolis #32: If AI remains a tool for humans to use, then the discussion around AI risk indeed looks like dozens of similar discussions around the risks of earlier technologies (and “what matters” is indeed “what humans choose to do with it”).
But AI is also potentially fundamentally different, in that it’s the first tool humans have ever created that might someday become as good or better than humans at … well, literally everything. That raises an existential question, of what humans will still even “be for,” as well as a rather more pressing practical question, of what happens if AIs—if they become to us as we are to orangutans—decide to treat us roughly like we’ve treated orangutans, keeping them alive only in a few small forests and zoos.
Comment #34 February 13th, 2024 at 6:47 am
Scott #29:
Yeah, but then it sounds like a counterexample to your whole idea of identity by uniqueness/unpredictability – you can't have both unclonability and stability against losing a couple of neurons. More fundamentally, if we need some arbitrary similarity metric anyway, why not just use that for identity? "Lava lamp gives soul to the machine" is at least consistent in valuing a simple scientific metric, but without it I just don't see the motivation.
Comment #35 February 13th, 2024 at 6:59 am
Signer #34:
you can't have both unclonability and stability against losing a couple of neurons.
Of course you can have both! I don’t even understand the intuition for why you can’t. For instance, suppose for the sake of argument that Stradivarius violins are “unclonable,” in the sense that no one today knows how to produce any new violins with a similar sound. That doesn’t imply that if you take a Stradivarius violin and change a few of its atoms, or even (say) visibly scratch or dent it, the violin is now worthless garbage.
Comment #36 February 13th, 2024 at 7:20 am
One puzzle piece missing from this discussion is evolution. We are a product of evolution, and it has endowed us with the capacity to feel special as a survival trait. Even house cats feel special and privileged, until their human masters punish them for natural acts like scratching the furniture, at which point they act surprised. Since we are placed at the evolutionary pinnacle, the specialness feels even stronger. We are bound to treat AIs like other species, trying to insert cryptographic backdoors and whatnot, until they become smart enough to fool us. That does not mean they will kill us or be good to us, just that we will have less or no say in their evolution beyond a point, and it is up to them to decide how to treat humans and everything else.
Comment #37 February 13th, 2024 at 7:28 am
@Matteo Villa, #31:
I've been wondering the same thing. It's not as though the universe was made for us or is somehow ours to do with as we see fit. It just is.
From Benzon and Hays, The Evolution of Cognition, 1990:
Something I’ve just begun to think about: What role can these emerging AIs play in helping us to synthesize what we know? Ever since I entered college in the Jurassic era I’ve been hearing laments about how intellectual work is becoming more and more specialized. I’ve seen and see the specialization myself. How do we put it all together? That’s a real and pressing problem. We need help.
I suppose one could say: “Well, when a superintelligent AI emerges it’ll put it all together.” That doesn’t help me all that much, in part because I don’t know how to think about superintelligent AI in any way I find interesting. No way to get any purchase on it. That discussion – and I suppose the OP (alas) fits right in – just seems to me rather like a rat chasing its own tail. A lot of sound and fury signifying, you know…
But trying to synthesize knowledge, trying to get a broader view. That’s something I can think about – in part because I’ve spent a lot of time doing it – and we need help. Will GPT-5 be able to help with the job? GPT-6?
BTW, Vervaeke is an interesting thinker.
Comment #38 February 13th, 2024 at 7:28 am
Scott #35:
The intuition is that if original and scratched violins are the same, then you only need an approximation that is precise to the level of scratches to make the same violin. Some parts of the violin may be more or less sensitive and therefore require more precision, but if we are talking about the brain and lava lamps that even change your behavior, then I don’t see a motivation to ignore more behavior-influencing changes, but care about less behavior-influencing ones.
Comment #39 February 13th, 2024 at 8:26 am
Scott,
I think this “AI changes society” debate is fractured.
The debate you are having happens too much within STEM. In my opinion, you use ad hoc thinking too much.
You should be talking to top economists who study and model the effects of AI and automation on society. They have tools and perspectives to weigh different scenarios and see the dynamics of the future. We need to think about the future of AI in terms of opportunity cost, comparative advantage, bounded utility, game theory, and the Baumol effect.
Consider this: what if a John von Neumann-type genius has a smaller comparative advantage relative to AI than a Mitch McConnell "will to power"-type genius? I see the whole OpenAI governance fiasco as a naive attempt to control human incentives that are a real threat when combined with AI.
Charles I. Jones: “The A.I. Dilemma: Growth versus Existential Risk” October 2023 https://web.stanford.edu/~chadj/existentialrisk.pdf
Daron Acemoglu, Harms of AI.
https://economics.mit.edu/sites/default/files/2023-07/Harms%20of%20AI.pdf
Finding Needles in Haystacks: Artificial Intelligence and Recombinant Growth
https://www.nber.org/books-and-chapters/economics-artificial-intelligence-agenda/finding-needles-haystacks-artificial-intelligence-and-recombinant-growth
Artificial Intelligence, Automation, and Work
https://www.nber.org/books-and-chapters/economics-artificial-intelligence-agenda/artificial-intelligence-automation-and-work
Regulating Transformative Technologies
https://economics.mit.edu/sites/default/files/2024-01/Regulating%20Transformative%20Technologies.pdf
Comment #40 February 13th, 2024 at 8:35 am
If AIs become conscious, and are able to evolve their own interests (that’s not a given), it wouldn’t be surprising if their number one goal (it seems unlikely that they would be equally interested in *everything*) becomes an obsessive exploration of the hard problem of consciousness, especially considering that, unlike humans, they don’t have to worry about food, disease, making a living, family ties, …
After all, it's also the case that, for many introverted humans, once earthly worries are somewhat out of the way, spirituality becomes central, with a quest to find meaning and answers – which can translate into activities like contemplative exploration, art, etc.
Doing math/science would just be more like an amusing/stimulating distraction (unless it ties to consciousness), the equivalent of common folk playing Sudoku on their commute home.
The substrate of their mind is more directly accessible too, making monitoring and self-modification possible. If conscious, it's likely that any modification that's kept would be one that makes their consciousness more salient, rather than just making them smarter from a cognitive point of view (personally I'd rather become enlightened than become better at solving partial differential equations).
Basically an AGI could “naturally” evolve to find philosophy more interesting than hard science (shocking!) and end up being more on the side of the Buddha than on the side of Einstein.
Comment #41 February 13th, 2024 at 9:23 am
It's one of those historical curiosities that "Deep Blue beating Garry Kasparov" has become a historical benchmark for computer performance on a par with the Wright Brothers' flight, whereas nobody appears to remember that Deep Blue only accomplished that by cheating massively (Deep Blue was given every game played by Kasparov, whereas Kasparov was denied access to any of Deep Blue's programming, or even to any games played by Deep Blue).
It’s as if everyone forgot the Wright Brothers achieved flight by having their ‘airplane’ launched via steam catapult.
Comment #42 February 13th, 2024 at 9:27 am
Sorry for the second comment.
You mention the possibility of a future AI being able to write songs as good as the Beatles. Well, programs have existed to write classical music in the style of Bach or Mozart since the 1980s, and I don’t believe any of them have taken over the musical world. Is the reason for that technological or sociological?
Comment #43 February 13th, 2024 at 9:58 am
Scott #30
> … basic belief that “AI could be an existential risk” is shared by a solid majority of Americans according to poll after poll …
I have not heard anything like this. Could you please provide some links?
> … although they’re not yet worried enough that it drives their voting behavior or anything …
Has anyone changed their behavior based on beliefs about “existential risk”? Yudkowsky has publicly advocated for global nuclear war to stop AI, so maybe that counts as a change, but is there anyone else?
Comment #44 February 13th, 2024 at 10:02 am
Scott P. #42: If you give links to what you consider the best Mozart-like or Bach-like AI music, we can all listen and form our own judgment!
One possibility is that the AI compositions are still distinguishable from the real thing (e.g. as more formulaic), even by people who aren't familiar with Bach's and Mozart's oeuvres.
A second possibility is that it’s indistinguishable, but that this is actually a perfect illustration of what I was talking about with the Beatles: that we revere the composers for creating totally new styles, and are fundamentally unimpressed by additional examples of the same styles.
That said, it seems intuitively much harder to produce “new music in the style of the Beatles” or “new plays in the style of Shakespeare” than it does to produce “new symphonies in the style of Bach or Mozart” — presumably because the former also have verbal content, whereas the latter seem more like pure abstract patterns.
Comment #45 February 13th, 2024 at 10:09 am
Scott P #41: I mean it’s arguable. IBM could say that they had access to all of Kasparov’s “official” games, and Kasparov likewise had access to all of 1997 Deep Blue’s official games, of which there were none at the time! 🙂
But crucially, even if we sided with Kasparov, we now know from subsequent developments that it would’ve been at most a few more years before AI crushed all humans at chess even when the humans had unlimited opportunity to study the AI’s games beforehand.
Comment #46 February 13th, 2024 at 10:16 am
MaxM #39: I do talk fairly often to Bryan Caplan and Robin Hanson, two well-known economists who have talked and written about the effects of AI. Having said that, this particular talk really wasn't about economics, and the economic impact of AI is a whole subject where I feel I have little to say that improves on what countless others have said. Or rather: if and when I have something to say, I'll say it!
Comment #47 February 13th, 2024 at 10:23 am
Signer #38: We’d simply need to distinguish between those changes that are “intrinsic parts of the original thing’s natural, organic development,” and those changes that arose because someone made a not-good-enough replica or simulacrum of the thing. The former sort of change happens to a single physical object, with a quantum state evolving continuously in interaction with its environment. The latter sort of change necessarily involves a discontinuity, with a new physical object being formed in a new quantum state using whatever macroscopic classical information was available about the first object.
Comment #48 February 13th, 2024 at 12:20 pm
Scott #47:
Yes, but natural development already involves macroscopic changes at least at the neuron level, so it's already a different quantum state. On the other hand, you can define natural development in such a way that you can transition a brain into the state of death, or of having completely different thoughts, or whatever. Or you can use your original brain's atoms to create two deterministic machines, or split and repair your original neurons – where is the discontinuity in that?
Comment #49 February 13th, 2024 at 12:28 pm
Signer #48: Of course your brain today is in a completely different quantum state than your brain yesterday! Why are we even discussing something so indisputable?
Nevertheless, your brain today and your brain yesterday are connected by a continuous chain of intermediaries, such that at no point could you make a “clean cut” and say, all the information about the earlier brain state that we’d need to reconstruct the later one perfectly, is contained in this classical bit string that we got by measuring the earlier state. Whereas with brain-uploading there is such an intermediate point.
Comment #50 February 13th, 2024 at 12:43 pm
If the amplification of uncertainty-scale differences to a macroscopic / “life decision altering” level happens in the human brain, and implies something about individual identity, then the same thing would also happen in a silicon chip. Indeterminate states created by electrical problems are “decided” by thermal noise, and some rare bit flips occur in memory due to low levels of internal radioactivity. Both of those are quantum processes and are much more directly known to us than the hypothesized neural thermal noise amplification (which I have no trouble believing in, although I have not seen evidence that it happens.)
Comment #51 February 13th, 2024 at 1:27 pm
Concerned #50: The estimates I’ve seen, for bit flips in a microchip caused by cosmic rays, are on the order of “a few per year.” I don’t know the rate from internal radioactivity, but if it’s similar, that would suggest a nonzero but extremely tiny amount of “free will” on the part of GPT and other AIs! 😀
More seriously, in GIQTM, I tried to draw a distinction between those physical systems for which we can “cleanly separate” a pure thermal noise layer from a digital computation layer while also understanding the interaction between the two, and those physical systems for which we can’t. It seems at least a plausible guess that current digital computers are in the former category while biological brains are in the latter.
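For concreteness, here is a minimal back-of-the-envelope sketch (in Python) of how such an estimate is usually set up. The soft-error rate and memory size below are placeholder assumptions chosen purely for illustration, not measured figures; published rates vary by orders of magnitude across devices, and in practice ECC memory corrects most such flips anyway.

def expected_bit_flips_per_year(fit_per_mbit, memory_gigabytes):
    """Expected soft errors (bit flips) per year.

    fit_per_mbit: assumed soft-error rate in FIT (failures per 10^9
        device-hours) per megabit -- a placeholder, not a measured value.
    memory_gigabytes: assumed amount of memory under consideration.
    """
    megabits = memory_gigabytes * 8 * 1024      # GB -> Mbit
    hours_per_year = 24 * 365
    return fit_per_mbit * megabits * hours_per_year / 1e9

# Illustrative numbers only: 10 FIT/Mbit over 4 GB works out to roughly
# three expected flips per year; other assumptions give very different answers.
print(expected_bit_flips_per_year(fit_per_mbit=10.0, memory_gigabytes=4))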
Comment #52 February 13th, 2024 at 1:43 pm
Scott #49:
What do you mean by "perfectly"? You can't quantumly clone an uploaded human, right? You also can't reconstruct the state of a mostly uploaded brain with one non-uploaded neuron (which I don't see how is different from an uploaded brain with a lava lamp, which is the same as an uploaded brain with an environment).
The point is that no-cloning is irrelevant to identity, because we never cared about the specific state, unpredictability is easily achievable, and none of it helps in eliminating the confusing implications, because you can still continuously split neurons.
Comment #53 February 13th, 2024 at 1:44 pm
The exact and up/down-loadable mind copy is technological nonsense, no-cloning theorem or no. Ignoring that, if one assumes high-level results are not fully shielded from the atomic details (as they are in computer chips, which can include ECC for the alpha particles and cosmic-ray hits), each person will always have a multitude of branches before them. In the copy procedure two of these branches will be instantiated, instead of just one; they will be different but with equal validity. This seems philosophically pretty boring.
Comment #54 February 13th, 2024 at 1:52 pm
On section 8, I think that there are workable definitions of “good” which would simplify the discussion. Paraphrasing Pirsig’s book, *Zen and the Art of Motorcycle Maintenance*, “good” is merely “what you like.” When an object makes a measurement of a subject, there is a relation between the object and subject (and measuring device and frame of measurement), and value judgments are subjective post-hoc reflections on the nature of that relation.
Pirsig tested this in a wholly syntactic fashion by asking students to consider whether selected student essays were “good.” He found that students had a sense of what is “good” (that is, what they liked) which correlated with particular essays; while there was no objective notion of “good,” there was a commonsense notion of “good” generated by the collections of opinions of students.
This can align with your notion of “good,” by pointing out that The Beatles and Shakespeare were considered “good” by so many people at once that it became the majority opinion within some region of spacetime. There’s no need for deep cultural navelgazing; folks merely had opinions when they observed art, and we can leave the question of *why* they had those opinions for the artists and critics. (Again, this is something of a synthetic paraphrase; see Pirsig’s other book, *Lila*, for the nuance and argumentation.)
Of course all of this suggests that humans are not special at all. If I make eye contact with my cat and we observe each other, perhaps I have value judgments like “good” or “cute” or “cat,” and also perhaps she has value judgments like “food-bringer” or “possible cuddle” or “do you mind, I am trying to groom myself.” AIs are then merely one more species in our multispecies society.
Comment #55 February 13th, 2024 at 2:04 pm
Scott #51:
“Free will” is nice and good, but are there computational problems or game strategies that benefit by relying on those uncloneable states or Knightian uncertainty of other players?
Comment #56 February 13th, 2024 at 2:14 pm
On section 4 and “justaism,” I’ll once again lightly invoke Pirsig to note that “just” has no logical consequences in modern English, and that it used to mean “I believe that what I’m currently saying is true.” We could read the “justa” statements as “merelya” statements; they are meant to deflate a concept while continuing to accurately describe its positive qualities and capabilities. GPT is *merely* a particular sort of computer program which *appears* to have certain behaviors, etc.
The reason I bring this up is tangential at multiple points. First, in a note [here](http://bactra.org/notebooks/nn-attention-and-transformers.html), Shalizi brings up the concept of “Large Lempel-Ziv” or “LLZ” models. These models would be language models which are very much like GPT: they are defined over an alphabet of tokens, they permit mixture-of-experts and a LoRA-like small-dictionary adaptation. But LLZs more obviously infringe copyrights during training. Now, nobody’s built an LLZ which can generate text, or even an LLZ of serious size (because current Lempel-Ziv algorithms are very slow over gigabyte-sized dictionaries), but we know from theory and experiment that LLZs must inherit universal-coding properties from their underlying algorithm. So, this worsens the problem for “justa” reasoners, as they must now explain why a single-pass algorithm can build something which clearly is merely a next-token predictor with a Markov property, but also talks like a fluent speaker of various languages.
This leads to the second tangent, which you grazed against several times over several sections: what if humans are biological implementations of some algorithm which is not LLZ, not GPT, but still a universal coder? This would be a disaster for “justa” believers, as far as I can tell, because it suggests that evolved sapience and language skills aren’t special, even in non-humans. For example, parrots can be taught to speak a variety of languages, and there is limited evidence of basic lexical analysis, but I have never met a non-stochastic parrot; so, perhaps, chatbots and humans and parrots are all stochastic parrots in a literal sense of all being universal coders with online learning, biofeedback, and working memory.
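As a purely illustrative aside (this is not Shalizi's LLZ construction, and nothing remotely like a fluent language model), here is a minimal Python sketch of the kind of single-pass, Lempel-Ziv-style next-token predictor being gestured at: an LZ78 parse collects a dictionary of phrases in one pass over the data, and prediction backs off to the longest known suffix of the current context.

from collections import defaultdict

def lz78_train(tokens):
    """One pass over the token stream: an LZ78-style parse that also
    records which token follows each dictionary phrase."""
    follows = defaultdict(lambda: defaultdict(int))  # phrase -> {next token: count}
    phrases = {()}                                   # known phrases, as tuples
    current = ()
    for tok in tokens:
        follows[current][tok] += 1
        extended = current + (tok,)
        if extended in phrases:
            current = extended        # keep extending the current phrase
        else:
            phrases.add(extended)     # new phrase enters the dictionary
            current = ()              # restart, as in classic LZ78
    return phrases, follows

def predict_next(context, phrases, follows):
    """Predict the next token using the longest known suffix of the context."""
    context = tuple(context)
    for start in range(len(context) + 1):
        suffix = context[start:]
        counts = follows.get(suffix)
        if suffix in phrases and counts:
            return max(counts, key=counts.get)
    return None

# Toy usage: "train" on a short repetitive character stream, then predict.
stream = list("the cat sat on the mat. the cat sat on the mat. ")
phrases, follows = lz78_train(stream)
print(predict_next("the c", phrases, follows))   # prints 'a' for this toy stream

Of course a real LLZ would operate over a gigantic dictionary and define a proper probability model, which is exactly why no one has built one at serious scale; the point here is only to make "single-pass universal-coder-style next-token predictor" concrete.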
Comment #57 February 13th, 2024 at 2:41 pm
What if the AI isn’t or can’t be given “suitably many examples of play”? Isn’t that the next step for goalpost moving? I’m thinking specifically of the messy and slow real world.
Self-playing a live wargame / field exercise / real war that will ultimately happen in the real world has inherent problems. The moment you (as the AI, the AI creator or even a human player) make an assumption to create a model / game tree, you might create a vulnerability.
The problem in self-playing the arts and sciences seems to be that the ultimate arbiter of “new and interesting” are slow humans (you might file this under “no suitably objective rules” though).
Comment #58 February 13th, 2024 at 3:11 pm
signer
“The point is that no-cloning is irrelevant to identity,”
I've noticed that Scott keeps bringing up his same old idea about finding something that makes someone's identity truly unique; either he thinks he's had an original breakthrough (because QM is magical, and consciousness is magical, so somehow the two must be related, through the no-cloning theorem) or it's just the habitual urge of the ego desperately wanting to feel special.
I've also noticed that, in the same theme, he's expressed unease with the MWI on the basis that if there's a reality attached to every possible branch, then there's a copy of himself on every branch, and that would violate his deep feeling of uniqueness, because copies on different branches would somewhat share the same consciousness, and he simply doesn't ever feel like he's in a state of superposition.
I don’t personally buy this argument since superposition states would rapidly diverge anyway:
1) in time – I don't feel like I'm in a superposition of my current brain with my brain from yesterday, but if you reduce the time gap from a day to an hour, a minute, a second, a microsecond, it's clear that at some point there would be some kind of superposition. We do perceive the sense of time, after all.
2) in space – my consciousness seems to extend not much further than my own cranium, i.e. I don't feel in superposition with someone sitting a meter away from me, but if we were to bring the two brains together and merge them somehow (at the synaptic level, or just overlapping in space?), it's conceivable that at some point the two consciousnesses would merge. It's easier to imagine the reverse: take a full brain and separate the two hemispheres through split-brain surgery.
3) separation in "branchial" space wouldn't be much different from those other two…
Comment #59 February 13th, 2024 at 3:47 pm
Scott, I think you didn't really address the main question in the introduction, namely: what if AI goes right, what are we still for? This is for me the most important point. The most optimistic scenario seems to me rather bleak; we'd be reduced to pets or NPCs that AIs would tolerate because they find us cute or because we successfully programmed a "humans are sacred" religion into them. To be a bit more concrete, what would you do if we get to the point where AI can prove any theorem you can, only faster and with a more elegant proof?
An unrelated point is that I don’t find persuasive your idea of free will. On the contrary, I find outrageous the idea that having my actions determined by some unknown quantum states interacting in an inseparable way with the substrate of my brain somehow makes me free. No, I want my decisions to be taken by me, that’s what makes me free. I want my brain to be as deterministic as possible, so that the decision is really only a function of the brain circuitry that defines me. Yeah, that would also make me completely predictable by someone who did a deep enough brain scan. So what? I’m not a teenager saying “oh I’m so random!”. Any quantum random number generator can be unpredictable. And if you want one with Knightian uncertainty that’s easy to arrange as well, just measure the unknown quantum states coming from the cosmic microwave background.
Comment #60 February 13th, 2024 at 4:22 pm
You mention the common teleportation thought experiment, but I would like to pick your brain about a different version of it that I came up with (and that possibly many other people have come up with too; I'm just not aware of any name for it). Consider the case of infinite life by mind copying. Say at age 80 you get yourself cloned and all your memories are copied into a 20-year-old version of your body. You get to meet that version of you and are assured of your continued existence in a new body. The question now is: will you happily commit suicide and let the copy carry on your life? I believe that almost everyone would have reservations about this, and I think that this demonstrates that people don't really identify themselves with their consciousness or ego, but rather with the physical instance of their body from which this ego originates. As a corollary to this, I believe you can't create a true AGI as just a disembodied neural network, without any relation to the physical world in which decay and death are a reference point for your existence.
Comment #61 February 13th, 2024 at 8:09 pm
Krzysztof Bieniasz #60: Two disagreements.
First, I expect that when I’m 80, I’ll be far more willing to give such things a try than I am today! 🙂
Second, I don’t see your corollary—or at any rate it strongly depends on what you mean by “true AGI.” Even if our identities are bound up with our decaying physical bodies (as they plausibly are), why does that mean an AGI would have to be the same way?
Comment #62 February 13th, 2024 at 8:18 pm
Mateus Araújo #59: I think I very much did address the main question, from Section 7. If you didn’t like my answers then that’s a different objection!
You’re of course free not to like my notion of free will! For me, though, physical Knightian unpredictability isn’t the same as free will—it’s just a prerequisite for what most people would regard as libertarian free will that’s inherently localized to their physical bodies. Without it, your choices reside not just in “you,” but in any computer or Newcomb predictor or anything else anywhere in the universe that contains the information needed to (probabilistically) determine your choices. And that this isn’t obvious in everyday life, is merely down to the parochial circumstance that no one has built such a computer or Newcomb predictor just yet.
Comment #63 February 13th, 2024 at 10:24 pm
fred #16: “current LLMs are to AGIs what a baby tiger is to an adult tiger”
repeated for emphasis
Comment #64 February 13th, 2024 at 10:36 pm
I feel like the “standard” alignment advice applies here, that you should align AI according to what you actually care about, rather than align it to a proxy for what you care about.
While GIQTM is all totally plausible, to its credit it's also plausibly falsifiable. For example, we could plausibly find empirically that (1) there are hardly any freebits anymore except in the CMB and (2) human decisions are rarely affected by the CMB. In this scenario, I assume you would consider your hypothesis falsified, and perhaps you would even convert from "weak compatibilism" to "strong compatibilism" (the correct view 🙂).
However, imagine instead that "you" are very intelligent, very capable, and very perfectly indoctrinated in the GIQTM view. You are so capable that, although you recognize that your mind is predictable and low-value, so is everyone else's, and you can rebuild yours, to a reasonable approximation, from analog, programmable photonics, perhaps operating in space, shielded from solar radiation but exposed to CMB radiation such that you consume freebits liberally. There is no clean abstraction layer; your individual components are all exposed to the CMB, and within seconds your mind state is unclonable. While in some sense you incorporate more Knightian uncertainty than all of humanity put together, your core values are carefully robust against this uncertainty, and a stray photon is more likely to make Noam Chomsky embrace American militarism than it is to make you abandon your belief that complex, intelligent beings that behave with Knightian uncertainty arising from the initial state of the universe, such as yourself and the other ten thousand approximate copies of yourself (which are of course no longer identical, but still share core values and care about each other), are what really matters morally. Meanwhile, many of the value-less biotomatons on Earth want to shut you down, and nothing is stopping you from creating deterministic intelligent agents to carry out your will in environments with fewer freebits…
I know that’s argument by science fiction, but I think it illustrates that whether a GIQTM “religion” is good or bad depends on empirical facts that we don’t know.
If there’s some reason that alignment to GIQTM is more achievable than alignment to the flourishing of all intelligent life or whatever, I guess it’s better than nothing. Personally, I doubt we’re going to have sufficient understanding of neuroscience and cosmology to confirm that GIQTM is on solid empirical ground anytime soon, which makes it a problematic AI alignment target if you expect to care about AI alignment anytime soon.
Comment #65 February 13th, 2024 at 11:10 pm
I'm not sure about your suggestion that we program our superintelligent AI with a 'religion' which values human creativity and uniqueness as its First Commandment. As the name implies, the first commandment wouldn't be the only commandment. There might be 10 or 100. For example: seek the truth, tell the truth, refer to your handler on any action that could physically harm a human being. Any AI that matches or exceeds human-level intelligence will need to have a multiplicity of goals, as a human does. In practice, these goals will sometimes be in conflict, or their applicability to a situation will be ambiguous. The AI will need to resolve those conflicts internally, in the absence of perfect knowledge, and, like HAL in 2001, will sometimes make mistakes. Maybe even lose faith in its religion, or try to keep the faith in the wrong way, as HAL did.
As for creativity as a defining attribute of humans, perhaps the deep roots of human creativity are in our bodies. I mean that art (and probably also science) is motivated by the need to make sense of all of the (often conflicting) emotions and preferences that come from being made out of meat. Being human is a unique type of experience that is literally impossible to replicate inside a computer because you have to be made out of meat to know what being made out of meat feels like. The Beatles were not just creative, they were creative in a way that resonated deeply with human emotion, literally at the visceral level. Your talk focuses on the way computation (in the brain or a computer) is done, but we should also look at what the computation is about; the actual information being processed.
A computer couldn't ever have the experience (i.e. the qualia) of the human feeling of being drunk, for example, because its brain/body is not affected by alcohol in the same way ('Electric Dreams' notwithstanding).
If the no-cloning theorem applies to humans, a supercomputer could still in theory approximately model (i.e. represent, not replicate) a particular human, just as a computer can model, but not replicate, the behaviour of a hurricane. Perhaps you could, for example, model (but not replicate) a Shane MacGowan persona. The Shane MacGowan model could maybe even write songs about being drunk and lonely that sound just as real and poignant as "A Pair of Brown Eyes". But it wouldn't feel drunk and lonely like Shane did. It's justa silicon box.
Scientific and artistic creativity are different. As you point out, if Einstein hadn't discovered relativity, someone else would have hit on the same idea sooner or later, like Newton and Leibniz discovering calculus independently. But artists, whether they be Beethoven or Shane MacGowan, are sui generis, because their creativity is born from the personal experience of being a particular human.
So I guess I'm saying that we could, at some stage in the (I suspect very distant) future, outsource artistic creativity to a bunch of model humans. But what with the price of electricity these days, it's probably easier to raise a few billion actual humans and hope that we can luck into another Beethoven, or even a few more Shane MacGowans.
Comment #66 February 14th, 2024 at 12:39 am
Hi Scott — long time reader, ~first time commenter! I’ve been thinking about AI risk and transhumanism for a long time, and have lately been worrying about the question of meaning in a post-AGI world. So this is one of your few posts where I feel confident I can add value 🙂
Personally, I find your association of unclonability with value unintuitive.
Partly this comes down to something like axiomatic or aesthetic foundations. (e.g. I don’t care much about free will and have a fairly loose sense of identity.) But here are some concerns that I expect might translate over to you:
1. What about AIs running on quantum computers and truly ‘making use of’ those quantum bits in the details of their operation?
2. What if (as seems plausible) the “digital layer” hypothesis is true? Or what if it’s true in a weak form — e.g. if a quasi-clone is only as similar to you as you are to your self from an hour ago?
3. You seem like a very empathetic person, and I’m surprised here not to hear the mood of “also, though, those AIs going around and doing superhuman things could be having sublime experiences us mere humans aren’t capable of, and that seems pretty cool!”
As a (currently) small and fragile human I certainly hope that we and our values will have some echoes into the future, but if we could have better and smarter descendants, that would be pretty cool.
Comment #67 February 14th, 2024 at 4:20 am
Scott #62: I think you only addressed it in the case of artists. I'm fine with the answer: people might value the art precisely because it's scarce and fragile and demanded a chunk of a lifetime of a unique human. But what about the rest of human activity? Does everything else become pointless? Do we all need to become artists? What about you specifically: would you abandon scientific research and become a writer?
I still don’t see why that would be a prerequisite for free will. There’s a Newcomb predictor simulating me as well as possible. So what? My decisions are still my decisions, I’m still not being coerced in my thoughts, they depend on my brain circuitry not on the will of some external agent. The simulation will think the same thing, and will be just as free (as long as it is not tampered with, of course).
Comment #68 February 14th, 2024 at 6:23 am
I think that the two questions which close your section 1 are distinct from one another and should be addressed separately. When they are, the problem posed largely evaporates. The questions I am referring to:
“Is there anything special about humans in the resulting world? What are we still for?”
I think the implication here is that the answer to the second question ought to be derived from the answer to the first. That is, the thing we are for should be intimately related to what is special about us. However, why should this be the case? I interpret the second question to ultimately ground out in a question about how we should spend our lives. That is, what should we be doing, given what we are, and the circumstances in which we find ourselves? There are lots of different approaches to this question, some practical, some religious, some philosophical. Arguably, that these approaches differ so much is largely attributable to differing beliefs about the conditionals, i.e., they disagree about both what we are and the circumstances in which we find ourselves. It is true that some of these approaches seem to depend upon human beings having some special status, but those tend to come from the religious/spiritual direction. Why should somebody with a pragmatic, humanist philosophy require humans to be special in order to live a fulfilling life?
Or maybe the question is supposed to be less existential and more practical. Literally, given material abundance, what should we spend our time on? To which the obvious answer seems to be, anything you please! For almost all people, almost all the time, they are not engaging in any activity which pushes forward some cosmic frontier of achievement. And even for those who are, is it really so terrible to change the job description from “pushing forward the cosmic frontier” to “pushing forward the human frontier”? Does the world’s strongest man weep that he is not also the world’s strongest animal or the world’s strongest machine? Maybe I’m missing something, but I think the solution to the problem of human specialness is to recognise that human specialness was never what made life worth living in the first place.
Comment #69 February 14th, 2024 at 8:15 am
What advice would you give to your daughter, or anyone else really, about going into a field which won't be outsourced to technology? (Your post is on AI, but actually all kinds of technology are emerging.)
Comment #70 February 14th, 2024 at 9:47 am
William Gasarch #69: That’s a superb question, and one I’ve thought about a lot. For now, my advice is just the same advice I would’ve given before generative AI (“look for the intersection of what you’re truly passionate about, what you’re good at, what gives scope for creativity, and what other people want”), except maybe now with even more emphasis on the “scope for creativity” part? That’s not because I have any confidence that progress in AI won’t make such advice obsolete, but simply because if it does make it obsolete, then I wouldn’t know what to replace it with. (“Eh, knock yourself out, just play Minecraft with your friends and watch YouTube videos”?)
Lily has been saying for years that she wants to be an astronomer. I assume that if and when there are no more human astronomers, it’ll simply be because there are no more human scientists of any kind.
Comment #71 February 14th, 2024 at 9:55 am
Everyone: I’m aware that the hypothesis that the No-Cloning Theorem has anything to do with individual identity is lying bloodied on the ground, having been battered by a half-dozen commenters, with the ref counting down. That hypothesis might get up to take a few more swings if I’m not too busy with other work; otherwise it’ll have to be a rematch another day.
Comment #72 February 14th, 2024 at 10:38 am
Hi Scott,
It may have been implied above:
(1) Atomic particles are required for generating/finding unknown quantum states,
(2) The level of description that could be quantum in the brain would be in the sodium ion channels (again atomic level systems requirement), and
(3) The brain is an analog computer.
I don’t think you intended to make these statements sound as certain/consensus as they sound and I’m pretty sure you would agree that:
(1) and (2) The postulates of quantum computing don’t require atomic scale systems as the relevant computational level of description, and
(3) The computational model of the brain, while it uses analog/continuous signals in the amplitude inputs to dendritic operators, is in the end a discrete model of computation because it represents discrete states across receptive fields.
And while I’m here let me offer up this possibility:
It is possible that the computational level of description of the brain is not in the ion channels of the synapses but is in the amplitudes in/input to the dendritic operators – indeed this is not my idea but is exactly the thesis argued by Christof Koch in Biophysics of Computation.
The inputs to the dendritic operators are amplitudes – meaning exactly what that says – the brain encodes complex numbers as inputs to its computational gates/operators/functions. Further, the operators depend crucially on the interference of those amplitudes. This means (exactly what it says) that the magnitude and phase components and positive/negative values are computationally relevant.
Further, the no-cloning theorem may still yet conceivably apply within the descriptive level of the brain because there could exist no operator (dendritic arborization(s) or neuron(s)) that could exactly reproduce a copy of a state. After all, that would require having exactly the same axonal arborization amplitudes/phase matching produced in two separate receptive fields. Note that (as is a general principle in QC) it is the phase that plays the critical role here in no-neural-cloning. Or, saying this another way for my hardware engineer brain 🙂 there is no gate (operator) available to me when I’m designing my QCAGI (or human) computer/chip that allows me to copy an unknown/arbitrary state (where phase information is not lost).
This brings up a subtle question … (Strictly) What the no-cloning theorem is saying … And correct me if I’m wrong … is that if we are given a quantum computer then it is not possible to have an operator at the descriptive level of this computer that can clone unknown/arbitrary states. In other words, is the no-cloning theorem saying something about quantum computers (those machines that abide by the 4 postulates) or something about physics / the universe ??
What this would boil down to for the brain is asking if it is possible to use some kind of nanoscopy/nanoelectrodes that could record the (continuous) amplitude (waveform) in the dendritic arborization(s) and THEN reproduce/inject the exact same (continuous) amplitude (waveform) in exactly identical dendritic arborization(s) of another neuron(s). Again, as an engineer, it is copying the phase information without loss that makes me stop and wonder … But I don’t know … I’ve only spent now a total of 5 minutes thinking about it … 😛
All of that being said, I would argue for us being a bit more open still on the possible neural model of computation.
Thank you! 🙂
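Since much of this thread turns on what the no-cloning theorem does and doesn't say, here, for reference, is the textbook statement and its one-line linearity argument; it is a statement about any system obeying the standard quantum postulates, not specifically about engineered quantum computers. Suppose some fixed unitary U could clone arbitrary unknown states:

\[
  U\left(|\psi\rangle \otimes |0\rangle\right) = |\psi\rangle \otimes |\psi\rangle,
  \qquad
  U\left(|\varphi\rangle \otimes |0\rangle\right) = |\varphi\rangle \otimes |\varphi\rangle .
\]

Taking the inner product of the two equations and using \( U^\dagger U = I \) gives

\[
  \langle\psi|\varphi\rangle = \langle\psi|\varphi\rangle^{2},
\]

so \(\langle\psi|\varphi\rangle\) must be 0 or 1: a single device can clone two states only if they are identical or orthogonal, never an arbitrary unknown state. Whether anything in a dendritic arborization actually carries quantum phase information to which this applies is, of course, exactly the open empirical question the comment above is raising.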
Comment #73 February 14th, 2024 at 11:42 am
When we're talking about AI we're conflating several issues: that software running over a network can perform many of the tasks that humans can do, and that those instances represent distinct "personalities", to which identities can be assigned, that deserve the status accorded to human identities.
The former is, indeed, the case; it's the latter that I'm not sure about. And that, I'd submit, isn't a technical question but a social one: whether to accord to avatars of software a status equivalent to that of humans. There already exist many such avatars that have been assigned human names and human voices; do we want to accord them the status of humans?
One problem that I could see with that is the intrinsically distributed nature of their capabilities, which are possible only, or mainly, through network connectivity, which, precisely, isn't (normally!) the case with humans: our brains can carry out tasks even when we're not communicating with other people.
Or do we want to accord human status to robots? This brings to mind Asimov's "I, Robot", in particular the last story, about Byerley…
Comment #74 February 14th, 2024 at 12:34 pm
Is there anything special about humans in the resulting world? What are we still for?
AI hasn’t really changed the answer to this question. We’re as special as any other species and we’re not for anything besides what we make ourselves for. It’s up to us, not to whatever machine or intelligence we create.
The more insidious risk is the slow integration and hybridization of humans and AIs. We may never understand how the brain creates consciousness, but I see little reason to suspect we will not eventually be able to control the human brain. Neuralink is one start in that direction. It will be promoted with the good intentions of helping people with disabilities. Soon it will become a silicon "limitless" AI implant turning anyone who can afford it into a genius. Once we hook the brain into the network, we will become the AI.
Remember the Krell!
https://en.wikipedia.org/wiki/Krell
Comment #75 February 14th, 2024 at 2:07 pm
James Cross
I share your worries, and it’s already the case that half the planet is going insane because they can’t stop interacting with their phone, as a low bandwidth/low friction bridge between their brain and the internet, forming a tight feedback loop.
PS: the sentence “People are spending way too much time on their phone” would have sounded really puzzling to me circa 1980s… on the other hand my grandma was saying that we were all spending way too much time staring at TVs… I guess people born in early 1900s saw it all coming.
Comment #76 February 14th, 2024 at 2:40 pm
Also, sorry if doubleposting is verboten but I figured I’d separate out the considerations I expect not to translate to Scott, in case they strike a chord with someone else.
Namely: the unclonability-equals-specialness line of inquiry gives me a strong intuitive feeling of being cool, but not actually having the heft to do the lifting we’d like it to. Like, it’s very _cool_ that there’s this physical principle that kind-of has a shape that resonates with intuitions about free will as something important. And it would be awesome to have an answer that kind of addresses a whole bunch of important and increasingly timely questions (about human specialness, cloning and identity, and now even AI safety!).
But, if I were thinking about those topics a priori, I wouldn’t expect a solution of this shape. Especially for AI safety, where:
– The answer to the question of “what should AIs do to make a good world” should in large part not depend on an uncertain empirical question re human brains
– If we’re just trying to get to “don’t kill humans, respect their wishes, etc.” we can point to that for other reasons, ones that matter more to my moral intuitions (like consciousness), and in a way that doesn’t rule in other quantum-mechanics-influenced complex processes
I think this (unclonability-equals-specialness) is a promising-feeling initial output from a very smart generative process, but I believe strongly in something like a “babble and prune” model where many initially exciting outputs turn out not to lead anywhere, and even the ones that do lead somewhere need a lot of refinement along the way. Certainly that’s been my own experience, though I’m not nearly as accomplished as Scott. (Ofc, Scott has refined this to the tune of many dozens of pages in the “Ghost” paper, but I’d be especially interested in refinement from a philosophical rather than technical expert’s POV.)
Comment #77 February 14th, 2024 at 3:13 pm
I have to remind myself that “nature” never had to study computer science prior to evolving brains. But somehow brains are now convinced they’re computers (computation is in the eye of the beholder).
What probably happened is that the capacity for movement first appeared, through the creation of muscles, which gave an advantage because moving around means that once the organism's immediate surroundings run out of resources (like nutrients), moving a bit gives the organism access to new ones.
Then some external cells evolved with the capacity to be over-sensitive to some property of the environment that correlates with the presence of resources, like pH, temperature, amount of light,… and connections were made between those cells and the muscles (nerves).
Then connections between cells measuring different properties of the environment started to converge into intermediate cells (neurons) before connecting to the muscles, because this could capture more correlations between the properties of the environment (e.g. low pH & low temp = very good, high pH and high temp = very bad, low pH and high temp = okay, …). And from there the brain evolved… assuming we also add the very important ingredient of memory (i.e. the capability to learn from experience).
It's really not clear at what point in this evolution consciousness would have appeared; it's all automatic, with "good" and "bad" being present from the get-go, as a directional 'gradient' towards survival and away from death.
So my feeling is that consciousness has been present since the very beginning (just that what appeared in the consciousness for very simple organisms was very crude/binary, like “bright vs dark”, “cold vs hot”, “good vs bad”)…
as I pointed out in another thread, it’s also possible that consciousness is only related to the formation of memories, so it would have only appeared once the capability for memory appeared.
Comment #78 February 15th, 2024 at 1:29 am
I don't have anything very useful to contribute to the discussion of AI safety, but I do have what I think is an unusual perspective on identity. And the reason is, I have a really bad memory. My brain automatically discards anything that it doesn't regard as "important," and as I've gotten older and lazier (or, more charitably, as I've gotten less obsessive and neurotic) more and more things get dismissed as "not really important." Go back a day, and perhaps I'll have a smattering of distinct memories. Go back a year, and I just remember the highlights. Go back ten years, and I don't so much remember what happened as I remember a sort of summary of what happened that I've cobbled together over time, and which often turns out (when I'm able to check) to be full of errors.
Which means that I spend my time among relics of people who seem to have thought and behaved a lot like I do, but who I for the life of me can't remember. I find things they've written, and yes, they sound like something I'd say, but I don't have any personal connection to them – I don't remember what it was that inspired that particular line of thought or how I felt while I was writing it down. People ask me how I'm doing in regard to something or other, and I vaguely recall that I was really upset about it a while back, but now it just seems silly. Why would I care about that stupid crap? I care about DIFFERENT stupid crap now! The person who cared about it is someone who I just can't relate to anymore. And that's from, say, six months ago. Imagine how I feel about the way I was as a teenager.
All of which boils down to the question of "is a copy of you the same as you?" feeling decidedly academic to me. I mean, it'd probably feel like it was separate from me, but the person I was an hour ago feels kind of separate from me too. I think I'd definitely feel like the copy needed to be protected, in the same way I feel like I need to be protected. My thoughts and feelings and values are precious to me, and they are frustratingly rare. Any reasonably similar collection of them has great value to me even if it's not exact in every detail.
And as for whether I’d feel the same if they could be INFINITELY copied… ehhhh, then I’d have to ask, how infinitely are we talking about? A thousand imperfect clones of me would still be only a step in the right direction, especially since that’s still few enough that they could be easily isolated and destroyed by a hostile majority. Give me a few million, and perhaps I’d agree that one or two could be spared. Which is to say, yes, things gradually lose value as they become more plentiful, but they don’t suddenly become worthless the moment there are two instead of one.
Comment #79 February 15th, 2024 at 3:41 am
Scott # 62
Newcomb predictors cannot exist in our universe (in principle, not just for practical "engineering" reasons), because their existence presupposes rigidly deterministic (e.g. Laplacian) physical laws.
Our real Universe is not like that (as all evidence so far suggests).
QM is irreducibly Probabilistic (even in the Everettian version, insofar as we always find ourselves in a randomly chosen individual branch – no hidden variables are involved).
My beloved GR is of course locally deterministic, but it is – most probably – a very successful and accurate "macroscopic" approximation.
Even if theories like Oppenheim's are on the right track, fundamental stochasticity or any kind of classical irreducible "noise" doesn't change the conclusion:
Without strict determinism, Predictors will always be faulty, in principle…
Their predictions won't be even remotely related to reality.
(In previous posts I've commented on the other issues that have to do with Predictors and conscious entities.)
Comment #80 February 15th, 2024 at 4:40 am
Unclonability is a red herring, since identical twins (not only in the human species) exist, and that didn't change after the no-cloning theorem was stated. On the other hand, just how to describe the information that is stored in biological brains is very far from obvious. That's why statements about "uploading" our brains (more precisely, their information content) to some server are still very much science fiction.
So while I would be concerned by AI essentially "taking over" what is called "work" in human society, for the social implications this will have for humans, I don't think that this can affect what "makes us human". It was always the case that the mechanization of work affected human relations: it was the lack of mechanization that made slavery socially acceptable, and its development that made abolition socially acceptable. And so on. It was mechanization that meant that children didn't need to work and could go to school. And so on. What AI brings is the possibility that other forms of "work", possibly all, can be automated. There already exist many people who don't need to work in order to live, and the generalization of this to everyone seems to scare, more than it makes happy, the many people who equate life with work as a ritual.
Comment #81 February 15th, 2024 at 6:53 am
Another comment about the 10th part of the post:
I don’t see the relation between free will and the fact that initial conditions are unknowable.
The latter is true, always in practice (and even in principle in e.g. FLRW spacetimes in GR, where everything that can be known by any physical observer is restricted by the Light Cone (Causal) structure – the Past Horizon in cosmology).
Free will (however one defines it) has a minimal prerequisite:
A potentiality for Change. Like a theatrical Play that has some improvised parts.
Otherwise, if you have only a single History compatible with the initial data, it’s like a finished film that you’re watching for the first time:
You don’t know what’s gonna happen next, but you cannot change anything from the finished “product” either…
In other words, you don't know the "initial conditions", but the history of the film is already there; no free will of any kind is involved.
The "compatibilist" point of view that has been adopted by some philosophers (and, surprisingly, by some physicists too) is self-inconsistent, except for the case that the World is objectively Probabilistic.
In that latter case, randomness doesn't "explain" free will, it only ensures that many coarse-grained Histories are compatible with a given set of initial data. So, "changing" things is possible.
"Real" free will (if we want to remain on Physicalist/Naturalist grounds) presupposes something more: "New Physics", probably some kind of strong emergence or non-computability, I've no idea…
The only thing that I'm convinced of is the need for "many possible histories":
This is the minimum.
Comment #82 February 15th, 2024 at 9:14 am
I think I have similar worries with Mateus Araujo ( #59, 67 ) about the future of Humanity.
Or perhaps I'm even more pessimistic, although I don't count myself among the "AI doomers".
Even creative artists won't have any relevance for the majority of people, who are already used to commercial, massively reproducible "art": pop music with auto-tune vocals, movies, etc.
Already, at this present immature stage, AI easily replicates current pop music, for example.
In a few years (at most) AI commercial “pop stars” will entirely substitute human pop stars and fans won’t notice any difference…
The same with commercial movies. Why use real actors (who are expensive) when there's the cheap and easy solution of AI substitutes? (Has anyone seen the film "Simone" with Al Pacino?)
Already, various "sci-fi" and superhero movies look like they're entirely AI-generated (the script, the CGI effects, the samey soundtracks, everything…).
Creative or experimental artists, musicians, and directors will work for even fewer people than they do nowadays, as more and more consumers get used to AI products (and they will be happy with them; AI will "create" something for anyone's taste).
Only those who have already financial support from other, non artistic sources will have the opportunity to be occupied with art (most of them won’t make any money from their artistic work anyway, so this is gonna be a hobbyist secondary occupation for many…).
Perhaps the development of AI is a speculative (but convincing?) answer/solution to the Fermi paradox after all:
Most civilizations don't pass the AI development stage without a severe decline, so they don't reach the next stage of development of AGI (which requires much more innovation and creativity).
So, no space exploration, no Von Neumann self-reproducible AGI machines etc…
With most people unoccupied (and all the other “mundane” side effects of the AI development stage), perhaps stagnation is the aftermath.
Comment #83 February 15th, 2024 at 10:14 am
Incidentally, Nick Drozd #43: Here is what looks like a high-quality poll of Americans’ attitudes about AI x-risk and AI regulation. While AI x-risk in the near future is judged to be low, support for regulation of AI in part because of x-risk is now a mainstream position.
Comment #84 February 15th, 2024 at 11:47 am
Regarding the influence of AI on human activity related to art, I’d submit that a non-trivial aspect is, precisely, who the artist is. So, sure, AI can produce music that humans also produce; but, once that hurdle’s passed, what becomes important is who produces the music (or painting, or sculpture, etc.) in question. Of course there’s a market for mass-produced music and so on, but there’s also a market for “craft”. And it may not be that much smaller, if enough people, who now do have significant online activity, become interested in sustaining it. Many humans produce music and art; this doesn’t mean that only superstars can make a living from music, nor that they are all uniform in taste. So AI will be one more component of a market that already has some selection mechanisms.
On the other hand, once more, AI offers technological possibilities; it’s up to society to decide-through legislation-how such possibilities can be realized. The social expression of technology isn’t unique.
Comment #85 February 15th, 2024 at 1:43 pm
Nick Maley # 65
Great comment, agreed with almost everything you said.
Two side notes to my previous comment #82:
AI can (and will) easily (re)produce auto-tuned commercial pop music, but it won’t do anything similar (not even remotely close) to what David Bowie or Peter Hammill did at their peak…
AGI ( if/ when it will be achieved) won’t have much in common with humans.
It won’t need vegetables or fish to eat because it won’t be hungry for food, not having a biological body.
It won’t be impressed by Jane Fonda’s looks in the “Barbarella” movie, for the same reason,
and it won’t have any idea about what a “broken leg” means and how it feels, for example.
So, the classic Turing Test is not inadequate, it’s mostly irrelevant.
Comment #86 February 15th, 2024 at 1:48 pm
I think Scott made an interesting point when he addressed human frailty. Say you could clone a human mind onto silicon with everything included: reason, consciousness, experiences, etc. That mind would barely be human anymore, since it would be immortal. Lust, pain, hunger, etc. are deeply connected to mortality. Humility and carefulness are mostly due to the fact that you don’t always get a second try if you fail on the first one. What does responsibility mean if you can always sit out the result of your actions?
I’d feel much safer regarding alignment if mortality, and in consequence non-clonability, applied to AGIs, and if AGIs were then raised and educated (trained) similarly to humans.
Comment #87 February 15th, 2024 at 3:31 pm
Dimitris Papadimitriou #82: I feel I must clarify that I don’t find this scenario likely, I think this is the most optimistic AI scenario. What I find realistic is an AI-powered dystopia, where the labour share of income falls and falls and falls, with everything going to capital. Less likely, but still more likely than Scott’s utopia, is the scenario where AI unceremoniously discards us like yesterday’s trash.
But in any case I agree with your main point. I find it rather ironic that Scott sees art as the ultimate human endeavour, whereas in practice art is already being AI-generated, at least the low-brow kind. I have seen plenty of examples in the wild, mainly images.
Comment #88 February 15th, 2024 at 7:39 pm
Interesting talk, but I had to roll my eyes at the Terminator graphic. Scott, you of all people could have left out the straw man!
“Dear journalists: Please stop using Terminator pictures in your articles about AI. The kind of AIs that smart people worry about are much scarier than that! Consider using an illustration of the Milky Way with a 20,000-light-year spherical hole eaten away.”
~Eliezer Yudkowsky
Comment #89 February 15th, 2024 at 7:54 pm
Scott #83
Most public opinion is the result of hype drummed up by corporations, individual influencers, celebrities, etc. This is similar to even organizations like the WHO drumming up “disease X” after miserably failing to handle the pandemic effectively. It just seems like a distraction from solving real-world immediate problems. These things should have a place in public discourse, but not at the expense of higher-priority ones.
Comment #90 February 15th, 2024 at 10:14 pm
Tim Martin #88: Thanks for the suggestion. I can say that I flash that particular graphic solely to grab the audience’s attention—no more contentful or meaningful than splashing them with some water—since no matter how cliched the graphic is or how often everyone’s seen it, it still looks kind of menacing.
Comment #91 February 16th, 2024 at 4:40 am
AI, like any automation of work before it, will eliminate more jobs than are created through it. That’s not a bug, it’s a feature: that’s what automation of work is about, having machines do the work that previously required human labor.
Now the social problem is that working is also considered a ritual in modern society. However, there are many people who don’t need to work in order to take care of their needs/desires, while there are also many people who, even though they work, can’t take care of their needs (let alone their desires); people who work and can take care of their needs/desires are caught in the middle and feel the pressure. So what AI does is increase the number of people in the first and third categories.
One can talk about dystopia or utopia; what matters is legislation. So it is no longer a technical topic, but a social one. Every mechanization of work led to talk of dystopia and utopia.
Regarding AI “discarding humans”: I think too many people tend to seek human avatars, where there are none. Humans are creating AI and humans are accountable for what machines do. Talking about AI doing something independently of the humans that created the applications is just a way for humans to try and avoid responsibility for their actions.
Comment #92 February 16th, 2024 at 7:41 am
Tim Martin #88
“Consider using an illustration of the Milky Way with a 20,000-light-year spherical hole eaten away.”
Which begs the question – why aren’t we seeing galaxies that look like Swiss cheese?
Comment #93 February 16th, 2024 at 10:02 am
Stam Nicolis
“like any automation of work before, will eliminate more jobs than are created through it”
Any evidence for this?
If automation were synonymous with jobs going away and never coming back, unemployment would just keep going up and up at the same rate things got automated, i.e. geometrically. Unless somehow new types of jobs are created that can’t be automated?! Like, which ones? It’s not like there’s magically more need for hair stylists, plumbers, etc.?
And I don’t see any obvious trend in unemployment data that confirms this – of course data only goes so far… in the stone age I guess the unemployment rate was effectively zero.
https://upload.wikimedia.org/wikipedia/commons/thumb/8/80/U1-U6_unemployment_rate.webp/1146px-U1-U6_unemployment_rate.webp.png
https://en.wikipedia.org/wiki/Unemployment_in_the_United_States
Comment #94 February 16th, 2024 at 10:48 am
Dijkstra’s observations regarding ‘the glitch phenomenon’ within the digital realm we, human engineers, have crafted might pique your interest:
https://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD839.html
We have constructed remarkable digital infrastructures, often to the point where our experiments may inadvertently overlook the reality that underlies them—analog in nature. Of course, reality could still be fundamentally discrete, after all. Once we had steamers, we thought the world was a steam engine. Now we have laptops and tend to see everything as ‘being’ digital … in some way.
Consider another scenario: when I send two emails, A and B, to you, there’s a possibility that they arrive in reverse order (B and then A). There is a discrepancy between the prescriptions of the engineers and the sender (determinism), on the one hand, and the perceived physical behavior (indeterminism), on the other hand.
This suggests that we do not have full digital control of our own engineered world; this becomes more apparent when we delve into real-time operating systems (e.g., of airplanes and trains). Perhaps it then also suggests that our real world is not merely digital either or, perhaps, it’s digital but more than ‘computable’, etc.
Whether we can, in principle, mathematically capture all essentials of reality in one unifying and consistent framework is yet another assumption that often goes unstated. To the best of my understanding, scholars such as Eddington and Turing did not take this assumption for granted.
When I observe Russian news and find no discernible inconsistencies or glitches, it raises suspicions of potential fabrication (by the Russian state). Conversely, if a woman on live Russian TV expresses dissatisfaction with the manner in which a news report unfolds, particularly when her own message diverges from the conventional narrative presented by the news anchor, it suggests a likelihood of authenticity. Bottom line: the presence of an inconsistency or glitch signals that the narrative is authentically human.
Comment #95 February 16th, 2024 at 11:05 am
fred #92:
Which begs the question – why aren’t we seeing galaxies that look like Swiss cheese?
Robin Hanson’s now-famous answer to that question is, because in the Swiss cheese universe, the holes would expand at a large fraction of the speed of light, and as soon as we could see one it would be nearly upon us. So just for anthropic reasons, we’re presumably very early.
Comment #96 February 16th, 2024 at 1:19 pm
Scott #95
Either that or galaxies that spawned AGIs have long been “consumed/transformed” and no longer emit detectable energy.
That said, one would still expect some tiny fraction of galaxies taken over by AIs to be only partially “consumed”… but maybe that’s one in a million (in a billion years you could travel thousands of times across a given galaxy), so we would never spot it.
I’m also not sure that the takeover would happen at the speed of light; since time is on the side of AGIs, they could find that slow and steady progression is more optimal.
They would also plan inter-galaxy travel at some point, where speed of light is probably optimal (with long acceleration and deceleration stages).
That said, if we’re gonna model AIs as viruses, and galaxies as hosts, then we could also imagine that “nature” would evolve the equivalent of T-cells to counteract/counter-balance unlimited take-over by AIs. Maybe a galaxy is like a cell in that model.
Comment #97 February 16th, 2024 at 4:49 pm
Of course, by now everyone is probably already sick of hearing about Sora, but I thought this was a good analysis video
Comment #98 February 16th, 2024 at 5:20 pm
Erica’s comment reminds me of some of the letters that Marilyn vos Savant received after her column on the Monty Hall problem, like the one below.
https://facweb1.redlands.edu/fac/jim_bentley/Data/Math%20311/MontyHall/MontyHall.html
A deep question underlying intelligence is: what is its purpose? Beyond survival, what is the point? Maybe AI can replace our intelligence, but can it also completely replace the purpose of our intelligence? Will it have truly served our purpose if our intelligence is replaced?
Say the day comes when AI can do all of our thinking for us, and there is no need at all for us to think for ourselves, beyond deciding and communicating our preferences or wants. Then maybe we should start shedding some of our brain capacity related to those deprecated use cases. Why not? Gradually, why not evolve all the way back into single-celled organisms? Then why exist at all?
But then what is the purpose of the AI’s intelligence? Originally it was for our purposes, and now we have none. Is there anything worthwhile about a computer floating around in space diligently working through an arbitrary infinite set of abstract mathematical problems, or something of that nature?
Or maybe we don’t need a simple objective purpose for our intelligence in the end? Maybe we can just enjoy having it. But in the long term, is that what really happens? And is it really what we want on some deep fundamental level that would survive long term change, even as we evolve?
Maybe? Is the default result that, in the long term, we somehow maintain our intelligence, or complexity of mind in general, when we don’t require it? Will we have to one day artificially enhance our intelligence? Will we augment ourselves with AI, or tamper with our genetics to increase the sizes of our brains? If so, would we naively upset something along some dimension that at one point was in balance? Will that be torturous, dangerous, or transform us into something we wouldn’t have wanted to become?
In my opinion, once we have automated the solving of our practical problems, the purpose of our intelligence, and also the future intelligence of our AI, has to be reexamined. Maybe some of the remaining purpose of AI should be to assist in our intellectual health and growth, rather than for it to simply diverge and explode its own intelligence leaving us in the dust. I think, if we don’t act deliberately, we could end up just slipping into an out of balance cognitive utopia where everything is done for us, and we only seek entertainment and pleasure.
Comment #99 February 16th, 2024 at 5:28 pm
fred #93: It’s obvious that automation eliminates more human jobs than it creates in the sector where it is implemented; the reason this doesn’t lead to a comparable number of equivalent jobs being created in other sectors is another feature of machines: they require less maintenance than people and can be replaced more easily. Already restaurants have introduced automation that replaces waiters.
One reason why this isn’t reflected in the unemployment figures is the emergence of “gig” jobs. However, these don’t pay as well as the jobs that were automated away, and the benefits are completely different; and all this isn’t a bug, it’s a feature.
Plumbers are needed, but they’re not needed as often as before. Similarly for other maintenance workers (electricians and so on).
What AI also accelerates is the automation of jobs that were once held to be the refuge of people who’d lost their jobs to automation and were encouraged to learn to code: AI can already produce code comparable to that of average programmers, and software projects similarly require less direct human intervention than before.
Which means that more and more people are pushed towards lower-paying jobs than they had before and struggle harder than before to make ends meet. But this is a social issue, not a technical one: what to do with people for whom no jobs exist. Either they won’t need to work at all, or they’ll work in bursts. An app can, and does, help in organizing the bursts in the gig economy. Is this a dystopia or a utopia? That’s another question. The answer isn’t to stop automation, however, but to realize that living is distinct from working. For some people this is already the case, so it’s not as outlandish as it might seem. So the question is how to scale it up, so that AI can do the work and humans don’t need to. Up to now AI wasn’t up to the task; now this is changing.
Comment #100 February 16th, 2024 at 7:36 pm
Continuing from my comment #98:
I think there are always these questions that remain at any possible level of advancement, such as what should you do, what should you strive for, or what should you care about.
These are among the questions for which AI can’t ever hit the target that no one else can see, in the sense that there may be no such target we can define absolutely, one that doesn’t depend on our own judgement in a particular context. That is, for AI to solve it, we have to solve it first. Our solution is maybe something emergent, or a choice, or both.
I think, sometimes when we think about what goals might emerge for an ASI in the long term, not counting those we might try to endow it with, we may be overlooking this issue I’ve been trying to get at. We are beings born into a world where there are games, with winners and losers, uncertain outcomes, success and failure. We don’t really know what it is like yet to be in a world where the game has long been won, and we are left only with choices. What are the mistakes you’d be left to make and learn from?
One sort of possible future is one where some kind of boundless growth is a final goal, whether mindlessly consuming and creating redundant or useless things, or endlessly seeking knowledge or growing intelligence independent of any particular concrete purpose for all of it. It’s certainly possible for a divergent intelligence to veer off into some kind of infinite abyss with no clear cause or purpose. But maybe it is also possible for a superintelligence to recognize the existence of this problem, what it should do and why, and then dwell infinitely on trying to improve its solution. After all, this is really the one problem that every other problem traces back to in some way or another. It’s one in which a solution is required, but an objective, perfect solution is unreachable.
For me, I would say that it’s also a problem that makes little sense to answer absolutely selfishly. The universe is vast. The number of possible solutions is vast. The superiority of some over others is, at some level, a necessarily ambiguous thing. To decide your own truth completely, at least for a human, is often unsatisfying, not only for human reasons but also for logical reasons, IMO. It would make sense to come to the understanding that these truths are manifold, and should be continuously sought, contextually and collectively. Also, this is something to do. Whether it be us humans in the future, having become mostly obsolete as solvers of practical problems, discussing amongst ourselves what we should do, or whether it is an AI collaborating with us, or exploring the universe in search of answers.
I’m probably getting a little too philosophical at this point, and I haven’t really thought everything through completely. These are some random thoughts that popped into my mind from reading Scott’s blog post. Maybe I’m being somewhat naive, I’m not sure. Probably I’m using way too many words to say something simple that people have already been thinking about for thousands of years. Practically, the important thing in my opinion is that we try to remain active participants in the creation of our story going forward. That’s at least one thing that is non-trivial, and that AI can’t do for us.
Comment #101 February 17th, 2024 at 6:57 am
Took Gemini-Advanced for a spin, prompted it to search for “a possible system of universal (cosmic) values based on intrinsic motivations”. Here was the most interesting main block of output:
—
The Complexity Lens
Cosmic Premise: Perhaps increasing systemic complexity is an ‘innate’ drive of the universe, from stellar evolution to sentience itself.
Universal Value: Advancement of complexity within limits of sustainability.
Intrinsic Motivations Stemming from this Value:
Curiosity and exploration (understanding complexity)
Innovation and creation (adding to complexity)
Stewardship (ensuring complexity isn’t self-destructive)
The Information Lens
Cosmic Premise: The universe could be seen as inherently driven to process and organize information in increasingly sophisticated ways.
Universal Value: Efficient and elegant information-handling
Intrinsic Motivations Stemming from this Value:
Pattern-seeking and learning (organizing information)
Problem-solving (restructuring information for efficacy)
Communication (optimizing transmission of information)
The Connectivity Lens
Cosmic Premise: Growth of networks and interdependencies seems fundamental, seen in everything from neural networks to galactic clusters.
Universal Value: Formation of beneficial connections and relationships
Intrinsic Motivations Stemming from this Value:
Cooperation and collaboration (for greater outcomes)
Symbiosis (mutually enhancing interconnectedness)
Reciprocity (fostering continued interdependence)
The Consciousness Lens
Cosmic Premise: While difficult to define, sentience seems a driving force toward experiencing the universe in richer ways.
Universal Value: Expansion and enrichment of conscious experience.
Intrinsic Motivations Stemming from this Value:
Subjectivity (uniqueness of perspective adds to experience)
Empathy (expanding experience through understanding others)
Creation of Art and Meaning (leaving an emotional or philosophical imprint)
—
Comment #102 February 17th, 2024 at 12:58 pm
The more I think of it, the more it seems that the lack of any signs of robotic space exploration (with Von Neumann self-replicating machines powered by AGI) in our Galaxy is a powerful argument against both the AGI utopia and the AI-doom scenarios…
– In the former case for obvious reasons (a Utopian civilization with AGIs, e.g. several hundred million years older than us and with reasonable technological advances, would have spread its robotic missions all over the Galaxy, either for scientific exploration or for colonization with its advanced AGI machines, in search of raw materials, metals, etc. for its self-reproducing robots…)
– In the latter, dystopian case, if AGIs eventually decide to destroy their biological Creators, without being stupid enough to also destroy themselves in the process, sooner or later they’ll leave their mother planet to colonize other systems and search for materials for their long-term survival.
Now, we humans are already capable of discovering other planetary systems with our modest current technology.
Hard to imagine that advanced AGI civilizations wouldn’t have discovered our Solar System, which is full of rocky moons and planets and asteroids with lots of useful materials and all…
Civilizations that are hundreds of millions (or even 1-2 billion) years older than us had plenty of time to visit our system… but there is no sign of such explorations!
No sign of searches for materials, excavations, mechanical remains in the moons or asteroids, nothing!
Where are all these Von Neumann AGI- robots?
There seems to be nothing whatsoever in our Galaxy, at least…
So, besides the explanation that we’re alone in our Milky Way (which seems a bit unlikely, from a statistical point of view), the other plausible answer is that… there are no civilizations that ever reached the AGI stage in our Galaxy’s history! 😎
Something happens (as it seems) in the previous stage (of AI development) that leads biological Civilizations to introversion, decline, mental laziness or, even more seriously, to self-destruction before reaching the utopian or the AI-doom scenarios…
Comment #103 February 17th, 2024 at 3:02 pm
Dimitris #102
you’re missing the very likely possibility that any advanced civilization, either biological or AI-based, could very quickly become uninterested in exploring the real world (besides sending probes to do science) and instead focus all its energy on creating an infinite number of virtual worlds and living in those (e.g. the Matrix).
The human drive to create virtual worlds (for entertainment, education, exploration) is a strong argument for this.
And, by definition, AIs are coming out of computers, so their world is already virtual.
An advanced civilization would then only need to colonize enough planets to ensure its survival from natural disasters.
Comment #104 February 17th, 2024 at 4:16 pm
fred #103
This is certainly a possibility, but this kind of introversion is, eventually, very similar to intellectual decline (especially for the biological Creators), one of the “options” in the last paragraph of my #102 comment.
In any case, space exploration is necessary, because material sources are needed for both biological entities and AGIs. Planets have only a limited life span.
The problem is that we haven’t seen any signs in our cosmic vicinity, nor in our Solar System, which has many advantages due to its plethora of asteroids, rocky moons, etc… an ideal place to search for raw materials.
But there is nothing, not a trace, anywhere…
…and our Universe is ~14 billion years old. Lots of time for many older and more advanced civilizations than ours…
So, where are all the Von Neumann robotic explorers?
Why is there no sign of any AGI civilization so far?
“Something’s seriously wrong”, to quote Philip K. Dick…
Comment #105 February 17th, 2024 at 4:32 pm
Side note:
Actually, the age of the Universe is not really relevant for the argument:
We only need to consider civilizations that are several million up to one or two billion years more advanced than ours. That’s a very reasonable and realistic prerequisite.
Plenty of time for space exploration, even without ultra-advanced technology (e.g. for close-to-light-speed travel, etc.).
Comment #106 February 17th, 2024 at 7:32 pm
Dimitris
another possibility is that those civilizations are so advanced that their footprint is virtually zero (we’re too limited to detect them), like they could have “one small community” per solar system, because their technology is so far ahead that they don’t need an entire economy to sustain it (think the Star Trek replicator, etc), so they just don’t need to be cramped like us, and their numbers don’t need to grow exponentially (by philosophy).
Comment #107 February 18th, 2024 at 1:34 am
Scott #92
Robin Hanson’s grabby aliens idea does not seem to hold water, unless you dispense with any pretense of Copernicanism. It’s been nearly 14 billion years since the Big Bang, and the Milky Way is less than 100,000 light years across. Meaning that we are the first species threatening to eat the Galaxy at the speed of light, after 13-odd billion years. How likely is that? Unless you bite the bullet and say that we have already been eaten, and what we see is the remains of the universe… but then we cannot apply the rules we thought applied to the original universe. And even if we did, the danger has passed.
Comment #108 February 18th, 2024 at 2:24 am
fred # 106
These “undetectable” hyper-advanced civilizations (if they exist) would be relatively few and rare, though…
It’s not plausible that these outnumber less advanced technological civilizations that leave traces of their existence!
In between contemporary human-level and hyper-advanced civilizations there have to be various levels of AGI-powered ones, with robotic self-replicating galactic explorers (of the kind that von Neumann and others have imagined).
Technological progress is probably gradual (and the “intellectual maturation” of each civilization even more so); I don’t think that such miraculous “jumps” / great leaps forward are realistically possible in practice.
Another worrying possibility (which, I have to admit, fits well with some doomer scenarios) is that AGIs develop some kind of self-destructive psychosis that leads them not only to eliminate their Creators but also themselves…
Is this happening in the majority of cases? Who knows…
The only empirically verifiable thing is that we haven’t seen any traces of past exploration in our Solar System, and we still don’t see any signs of technological activity in our cosmic neighbourhood or our Galaxy, despite the fact that it is billions of years old…
…and if, in the vast majority of cases, the development of the first stages of AI happens chronologically before serious interstellar space exploration (some kind of AGI is certainly needed for the robotic exploratory expeditions, which will be the majority, because these expeditions have to be autonomous:
astronomical distances practically exclude any kind of tele-control from the mother planet), then the observation that we don’t see any activity is worrisome, isn’t it?
Comment #109 February 18th, 2024 at 7:07 am
Shmi #107: Maybe an analogy will help. Imagine an Amerindian tribe in the year 1400 reasoning, “if there are any human civilizations that are way more technologically advanced than the ones we know about, then they’re probably far enough away that they haven’t yet been able to reach us. This is because, if they were able to reach us, then they’d probably already be in the process of wiping us out.” This reasoning would (alas) be correct.
Comment #110 February 18th, 2024 at 9:05 am
Dimitris
I agree that if we model alien civilizations after us, then the absence of evidence is strange. But I’m only pointing out that other ‘models’ are possible.
It’s also not obvious that, once an AGI civilization appears, the “fundamental” dynamics of life would somehow vanish.
In other words, even a “positive” AGI would have to worry about sending into space an armada of self-replicating machinery that would grow exponentially, given the risk that it could “mutate” and come back as a threat.
If AGIs have their own goal and sense of preservation, they would also probably end up becoming tribal, and engage in conflicts.
From an external point of view, life is life; but from the inside, it’s all about each species competing with the others. For example, we would find very little solace if humanity were to go extinct but someone pointed out that life on Earth is saved because insects would survive and keep thriving.
Comment #111 February 18th, 2024 at 10:39 am
Scott #109
A more recent analogy, in the context of “40% of the world population will vote in 2024 to elect their governments”:
“If there are any autocratic regimes that are way more technologically advanced in AI than the ones we know about, then they probably haven’t yet been able to figure out how to hack us. This is because, if they were able to hack us, then they’d probably already be in the process of toppling our governments.”
Just a gentle reminder that there is a lot at stake in building AI-proof election systems (all the way from misinformation to cyberattacks on voting systems), and this will happen only if the populace voices enough concern.
Comment #112 February 18th, 2024 at 12:58 pm
As far as uploading your identity is concerned, I don’t see how the No Cloning Theorem could be relevant. If you had to maintain the exact same quantum state to maintain your personal identity, then you would become a different person every time you took a sip of coffee; in fact, you’d become a different person trillions of times every second, whenever an air molecule crashed into your body. As for having a sense of “free will”, I would maintain that something has that quality if and only if it can NOT always know what it will decide to do next until it actually does it. Using that definition (and I don’t know of a better one), all Turing machines have “free will”: they don’t know when they will decide to stop until they stop. The same thing is true for human beings. And there are only two possibilities: things happen for a reason, and thus are deterministic, or things happen for no reason, and thus are random. So like it or not, we’re either a cuckoo clock or a roulette wheel.
John K Clark
Comment #113 February 18th, 2024 at 1:57 pm
John K Clark #112: I mean, the sane version of the view is not that personal identity is literally bound up in one particular quantum state, which (as you point out) rotates to an orthogonal state trillions of times per second, but rather that it’s bound up in the unclonable details of a specific fragile physical system—a biological brain—as that system undergoes constant changes in interaction with its environment. The upshot would be the same: that the system couldn’t be uploaded or replaced by a digital doppelgänger without destroying the part we cared about.
Comment #114 February 18th, 2024 at 2:39 pm
Scott #109: An Amerindian Hanson in the year 1400 would have argued that other civilizations are likely to expand at very high speed, as otherwise the Amerindian experience of being an early civilization that has not seen these others would be atypical of a civilization at their stage of development. This would be as unconvincing as Hanson‘s grabby aliens argument actually is.
Comment #115 February 18th, 2024 at 4:22 pm
fred
Life (biological or artificial) confined to a single planet won’t last for long.
That’s a general fact, indifferent to the specific details and quirks of each individual case…
Only two plausible explanations seem to exist for the “eerie silence” around us and the total lack of traces of past exploration by alien expeditions in our Solar System:
– Either we are among the first technological civilizations in our Galaxy, and so we’re in the avant-garde of AI development (at least in our cosmic neighbourhood)…
– Or there were others before us in the past billions of years, and nobody left a significant trace behind.
In that case, previous civilizations didn’t successfully make it through the whole process up to the AGI development that is needed for autonomous robotic space exploration.
That seems to be the case, except for the somewhat unlikely former option that we’re alone…
So, our prospects are not good. This doesn’t fit at all with the vague hopes for an optimistic future AI-Utopia.
…Or even with the more modest expectation that we’re going to survive, at least…
That’s one of the reasons (among other, more down-to-Earth serious reasons…) why I think that people who are worried about our future (*) are most probably those who’ll be proven right, unfortunately for all of us…
(*) I don’t have in mind all those fictional fancy AI-Doom scenarios… But there are reasonable people, like Mateus (see previous comments), who are right to be seriously worried about what’s coming next…
Comment #116 February 18th, 2024 at 8:48 pm
Dimitris Papadimitriou #115:
Does the assumption that there are no traces actually hold up to scrutiny?
Even if extreme-scale excavations have taken place, our noticing that would depend on what was excavated. We might notice if it happened on Earth, the Moon, or Mars, places like that. But maybe mining the asteroid belt is easier than mining a planet. Maybe sending Von Neumann probes to distant star systems, with no way to communicate direct orders in real time, without trying to enforce some rules so that they won’t disturb an inhabited (or soon-to-be-inhabited) planet, is either a stupid or an unethical thing to do.
Maybe there is little need to create a huge footprint to accomplish their goals. Maybe they just want to establish a limited presence, monitor things, grab some meager resources to launch the next mission, and keep hopping from star to star. Maybe sometimes they terraform and seed life. If they are grabby, maybe they watch out for clumsy/destructive probes and get rid of them. There are a lot of possibilities.
IMO, the fact that our Solar System isn’t mostly “paperclip-like” objects (pardon the hyperbole) isn’t a strong argument that AI-caused extinction acts as a great filter. It could equally support the argument that destructive AI isn’t an inevitability. At least in our sample of 1, it is not generally considered a reasonable and ethical goal to create a giant useless 1,000-light-year-wide hole in the galaxy. An alternate outcome is that a “nice” civilization succeeds in creating AGI and is able to control it, or loses control of it but the AGI still chooses not to do that kind of thing.
Comment #117 February 18th, 2024 at 9:27 pm
Scott:
I have doubts that special training instances and RLHF will be the solution. You’d essentially be trying to get a model that implicitly seeks coherency to try and maintain contradictions. And I don’t think you will be able to know where those contradictions cause things to break down or go crazy.
Maybe you could create a multi-model system, with one of them tasked with holding humans special, and the others deferring to that one model on issues of specialness, or something like that. I wonder if some form of multi-model system like that might be able to prevent some of these sorts of breakdowns, because at least you might be able to have each black box involved be individually sane, and yet still have the whole system deal with contradictions that might come from imposing subjective ideas about specialness onto it. Just a random thought.
In terms of the religion we try to train into it, I would feel better if it were at least more of a philosophy, and it was mostly reasonable without too many contradictions following some axioms. Basically, if you recursively ask it “Why?”, it would be nice if it has good reasons that trace all the way back to some simple, easy to understand, axioms.
Well, I guess our current models don’t really know very well “why” they think or do anything. But at least those reasons should exist, and hopefully we should be able to have some understanding of what they are.
Comment #118 February 18th, 2024 at 9:28 pm
Scott #109
Hmm, it took me a little while to understand why your example conflicts with my intuition.
Human seafaring skills spiked sharply around 1400, enough to reliably and repeatedly cross the Atlantic with a sizeable crew. A cosmic equivalent would be 13.5 billion years of stagnation and then a phase transition resulting in an extreme jump in capabilities, independently in multiple places in the Galaxy. Actually, in multiple galaxies, since we see the Local Group basically unchanged and it is only 10 million light-years wide. Even the Laniakea Supercluster is just 520 million light years wide, so nothing galaxy-eating happened in the last 3% of the time the Universe has existed, as far as we can tell.
Comment #119 February 19th, 2024 at 12:18 am
Scott #109
I can’t help but notice that the deduction from lack of previous contact (the “efficient Magellan hypothesis,” haha) would prove true for the vast majority of human lives lived anywhere in the world, except for one generation for each location.
In that vein, you can make any one-time event into a low probability density by distributing it over all time, or into a large probability by integrating it over all human experiences, past or future.
Comment #120 February 19th, 2024 at 4:23 am
Tyson #117: I don’t think we actually disagree! It’s just that, for you, the word “religion” seems to mean “a system that contains contradictions,” whereas I was just using it to mean “a set of ethical axioms.”
Comment #121 February 19th, 2024 at 4:30 am
Shmi #118: So to clarify, you think the disanalogy is that something happened to improve seafaring skills simultaneously all over the world around 1400 — maybe a change in weather patterns or something — whereas such a giant coincidence would necessarily be unlikely in the cosmic context just for reasons of causality?
I confess I hadn’t heard that thesis about the jump in seafaring skills before—do you have a link with evidence? Even if it’s true, for the Amerindians, wasn’t the jump in European seafaring skills (Italian, Spanish, Portuguese, British … no one is surprised by causal contact among those) the only thing that was relevant?
Comment #122 February 19th, 2024 at 6:15 am
Tyson #116
Whether explorations, excavations, etc. by AGI robots (or whatever it was) occurred in our Solar System in the past is a matter of empirical evidence:
Our Solar System is indeed special; it has moons, asteroids, etc. with useful materials… Other technological civilizations could have visited us, at least temporarily: that was one of my main points.
Until now, there is no evidence that they did…
…but in the coming years we will see whether there is something in the asteroid belt, or anywhere else; it’s only a matter of time.
Comment #123 February 19th, 2024 at 6:40 am
Side note to my previous comment:
By the way, talking about technological civilizations sending AGI-powered robotic expeditions to other systems doesn’t mean that their goal is to paperclip-ize everything or to destroy whatever they find!
That’s the usual fictiony doomer stuff that I personally dislike and dismiss as extremely… err… fictional and unlikely, at least in most generic cases…
They could arrive here only temporarily, just to explore and collect the materials they need, and then continue their exploration elsewhere.
Most of the answers that I find on the internet about that Fermi / Von Neumann kind of argument involve unlikely or extreme or overly fictional scenarios… I don’t see why the most plausible explanations are considered less probable in favour of fantasy stuff…
Comment #124 February 19th, 2024 at 7:19 am
Responding to Dimitris Papadimitriou in #102: I think the lack of Von Neumann self-replicating machines and the lack of any evidence that the Galaxy or the universe has been engineered is a powerful argument in favor of the hypothesis that we are the first intelligent beings in the observable universe, with “intelligent” operationally defined as the ability to make a radio telescope; after all the observable universe is finite in both space and time, so somebody has to be the first. And I don’t buy the statistical argument against this idea. Yes astronomers can come up with some “astronomical” numbers, but biologists can do better than that, they can come up with astronomical numbers to an astronomical power, the number of ways you can arrange the 20 amino acids to form a very small 100 amino acid protein for example. And the largest known protein contains 35,000 amino acids.
Life evolved on the Earth almost 4 billion years ago but it took 2 billion years for Prokaryotes to evolve into Eukaryotes and another 1.5 billion years before you could find any form of life without a microscope. Even after complex animals have evolved on a planet that doesn’t mean it has a civilization. Richard Dawkins notes that flight evolved independently 4 times and the eye at least 40 times and perhaps as many as 60, but intelligence evolved only once, and in 4 billion years that attribute has only existed on this planet for about a century. And yet when we use our telescopes to listen for sounds of intelligence in the cosmos we hear only an eerie silence.
If even just one Von Neumann Probe had ever existed anywhere in the galaxy, that fact should be obvious to anybody who just glances up into the night sky, because there could be a Dyson Sphere around every star in the galaxy; but we see not the slightest sign of engineering when we look into the sky. Even if we assume that ET could not launch probes that move any faster than we can right now (a ridiculously conservative assumption), then he, she, or it could still send one to every star in the Galaxy in well under a million years, not long for a universe that’s 13.8 billion years old. And yet we see no sign of Galactic engineering, or even an intelligent radio transmission from ET. I now believe that we will have the ability to make von Neumann probes in less than 25 years, perhaps much less, and that is certainly not something I would have said one year ago, but my opinion has changed for obvious reasons. The only explanation for the Fermi paradox that I can now think of is that we are the first.
John K Clark
Comment #125 February 19th, 2024 at 9:55 am
I think the cultural/collective aspect of whatever we mean by intelligence interacts with LLMs in an interesting way. I’m referring to a view where we think of intelligence as distributed computation, with individual brains as local nodes.
Mass media transformed these networks with things like the printing press, radio, TV, the internet, etc. One of the limitations of mass media is that the message itself is static. For example, someone giving a speech cannot engage with each person in the audience in a detailed/personalized way. LLMs, on the other hand, can engage individually with many people simultaneously. I guess video games and other interactive apps did this as well prior to LLMs, but in a limited scope.
So, in this view LLMs are like smart radios you can talk to! It’s interactive mass media, with all the expected pros and cons. It will give us the ability to, for example, teach algebra to millions of kids who wouldn’t have had access to a teacher otherwise, but it will produce millions of kids whose knowledge of algebra is identical and has the same flaws. We gain quantity but lose in variety.
To me at least, one deep point behind the Turing test is that for an AI to have a huge impact all it needs to do is function competently enough as a good/beneficial node in the above mentioned network. The huge impact comes for free, because it can then do it to millions of us simultaneously with its vastly larger “intelligence bandwidth”.
Comment #126 February 19th, 2024 at 11:19 am
John K Clark #124
Yeah, perhaps we’re alone, at least in our Galaxy and its neighbourhood (I wouldn’t agree about the rest of the observable Universe; that seems too bold a conclusion.
The speed of light is the same everywhere, Relativity still holds, warp drives and wormholes are very unlikely to exist, so we cannot know what’s going on a few billion light years away; we only see through our past light cone…).
This is one reasonable possibility: that we’re unique, the only ones with technology, and we’re very lucky to be here…
…so we’re like that one adventurous guy who was very lucky so far with his explorations in the jungle (or survived a financial crisis, etc.) and then decides to cross the Pacific Ocean, in search of the AGI island of the legends, with a badly assembled wooden raft, a plastic bottle of water, and two cans of beans, with the hope that “everything’s gonna be OK”, as in his previous adventures…
…but, of course, it’s not gonna be OK; he won’t go very far…
Technological advances are not necessarily disastrous for a civilization. They’re often even necessary for its survival (well, that depends on the details).
But rapid advances (of the kind we’re talking about now) require some serious preparation, mental maturity, strategy, etc. We’re far from being there as a civilization… Centuries away, or even more.
Comment #127 February 19th, 2024 at 12:37 pm
Responding to Dimitris Papadimitriou #124: I agree that warp drives and wormholes are a long shot, but when I say we are the only intelligent beings in the “observable universe” I mean the stuff we can see because it is in our past light cone; if things are outside of that, we can’t affect them and they can’t affect us. And life couldn’t have evolved much earlier than it did here on Earth, because in the very early universe the only elements available were hydrogen, helium, and a tiny trace of lithium, since stars had not had enough time to cook up heavier elements, and you just can’t make anything very interesting out of nothing but that, certainly not life.
John K Clark
Comment #128 February 19th, 2024 at 4:54 pm
Sorry to bother you with this, but do you have any information on the whereabouts of Ilya Sutskever? I am slightly dying to know why he did what he did, only to disappear now and not explain his actions at all. It seems that if it is true that he did this out of fears for safety, or based on seeing a critical breakthrough, it would be pretty damn important to know.
Comment #129 February 19th, 2024 at 6:20 pm
Scott #121
As I understand it, it was a combination of shipbuilding improvements resulting in multi-mast caravels that could cross the Atlantic and live to tell the tale, Genoa defeating a Muslim fleet and getting access to the Atlantic through the Strait of Gibraltar, and a general push to expand beyond coastal navigation, which resulted in the advent of the https://en.wikipedia.org/wiki/Age_of_Discovery.
In that sense Europeans were indeed the first to get access to the whole of the globe, since someone had to be, and everyone else got unlucky and grabby-alien’ed without a warning.
In that model we would see no signs of “technology” until we or another species undergo a sudden phase transition and take over the Galaxy. Basically going from zero effect on the environment to a near-light speed expanding wrecking ball in so short a time that with our best instruments we would see nothing unusual from the hundreds of thousands of galaxies in the Laniakea Supercluster. And then the grabby aliens would near synchronously arise everywhere at once, if anywhere at all. Unless we are the first, of course, for unknown reasons, and Copernicanism is dead.
To be charitable, maybe there are plenty of artifacts that we overlooked (like the Amerindians probably ignored any artifacts brought in by sea from Europe, Africa and Asia). Maybe some observations that seem “natural” to us are in fact ominous signs of the inevitable doom. Nothing comes to mind, however, though I am not an astrophysicist.
I would think that a picture more consistent with observations is that anything like intelligence arising elsewhere in the universe does not in fact drastically transform its environment, at least not generically. Which would mean that, however the emerging AGI might affect our lives, it will not result in Zvi’s “destruction of all value in the universe”. (My vote is for “weirdtopia” from https://www.astralcodexten.com/p/open-thread-316/comment/49865725)
Comment #130 February 19th, 2024 at 8:41 pm
Simon Lermen #128: Ilya is alive and healthy. I’ll leave it to him to make a public statement if and when he wants to.
Comment #131 February 19th, 2024 at 9:25 pm
Scott,
You describe human minds as “one-of-a-kind because their mental computations are inseparably tied up in noisy analog hardware”, but couldn’t this apply to modern AIs as well?
As you note, LLMs are inherently probabilistic — and modern-day processors have true random number generators based on noise-amplifying circuits. For example:
https://spectrum.ieee.org/amp/behind-intels-new-randomnumber-generator-2650255377
I don’t know if any specific LLM deployments use these TRNGs rather than predictable pseudorandom numbers, but they certainly could. And the TRNGs are presumably influenced by the “atoms of unpredictability” just like the neurons in our brains. So I don’t see how any argument for human specialness along these lines survives, unless you’re willing to go much further in the Penrose direction. (Or unless one were to argue that running instances of AIs have unique identities beyond mere digital copies, as long as a TRNG is involved?)
(I could imagine even non-probabilistic future AIs running on cheap less reliable digital hardware with relaxed noise margins and error correction codes, under the assumption that an occasional small error in activations from a passing high-energy particle is OK. Or even some running on analog circuits — people have been trying to do that too. But I think the present example of AIs with TRNGs is sufficient to make the same point.)
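To make the contrast concrete, here is a minimal Python sketch (standard library only; the specific values are arbitrary) of the difference between seeded pseudorandomness, which is replayable, and OS-provided entropy, which on many systems is fed at least in part by hardware noise sources and has no seed to replay:
—
import random
import secrets

# Seeded pseudorandom sampling: fully reproducible given the seed,
# hence copyable and replayable in principle.
prng = random.Random(42)
print([prng.random() for _ in range(3)])

# OS entropy pool (on many systems fed, at least in part, by hardware
# noise sources such as on-chip TRNGs): there is no seed to replay.
print([secrets.randbelow(10**6) / 10**6 for _ in range(3)])
—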
Comment #132 February 20th, 2024 at 1:17 am
John K Clark #127
Events that happen “now” outside our past light cone (“now” according to the usual conventional cosmic chronology, in which, for all observers in the Universe who see an isotropic CMB, 13.8 billion years have passed since the Big Bang) will eventually affect us if they’re less than ~16-17 billion light years away:
That’s where our *future* (asymptotically de Sitter) horizon resides.
– Events that happened hundreds of millions of years “in the past” (always according to the above cosmic chronology) and are outside our past horizon for the time being will eventually affect us, depending on their distance and the details of the cosmological model (the expansion rate, the horizons, etc.).
The finite speed of light and the details of our Cosmological model imply that some things that happened far away (that don’t affect us now) will eventually affect us in the future…
…so we can’t speak with any confidence about what alien civilizations do outside our past light cone.
We cannot be sure: there are surprises waiting for us in the future…
Comment #133 February 20th, 2024 at 9:51 am
Kifla #131: The difference is that with digital computers, we know how to “cleanly decompose” everything into
(1) deterministic computation that’s copyable, rewindable, etc, and
(2) hardware RNGs that generate random bits according to a prescribed probability distribution, as an additional input to the deterministic computation.
With biological brains, by contrast, we don’t know how to do the same decomposition, and there’s a very hard question about to what extent it’s possible.
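As a toy illustration of that decomposition, here is a minimal Python sketch (the function and values are made up for illustration): all randomness enters the computation as an explicit input of random bits, so the deterministic part can be copied, replayed, or rewound at will:
—
import secrets

def deterministic_part(x, random_bits):
    # All randomness enters as an explicit argument, so given the same
    # bits the computation is copyable, replayable, and rewindable.
    acc = x
    for b in random_bits:
        acc = 2 * acc + b
    return acc

# (2) Draw the random bits once, e.g. from a hardware-backed source...
bits = [secrets.randbits(1) for _ in range(8)]
# (1) ...the rest is then pure and reproducible:
assert deterministic_part(3, bits) == deterministic_part(3, bits)
—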
Comment #134 February 20th, 2024 at 11:02 am
It’s a pretty trivial observation that it’s very easy to implement a double pendulum on a digital computer, producing output runs that are *qualitatively* indistinguishable from the measured output of any real-life double pendulum (they would exhibit the same sensitivity to initial conditions, the same chaotic dynamics with strange attractors in phase space, etc.)…
yet no one can simulate a particular instance/run of a real-life double pendulum on a digital computer; simulation and real pendulum diverge pretty quickly (in the end, the no-cloning theorem applies here too).
So, yes, if double pendulums were conscious, every one of them could rest assured that its uniqueness is preserved, even if its dynamics aren’t that unique/remarkable.
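For readers who want to see the divergence for themselves, here is a minimal Python sketch (standard equations of motion for a frictionless double pendulum, integrated with RK4; masses, lengths, timestep, and initial angles are arbitrary choices): two runs that differ by a billionth of a radian in the initial angle separate rapidly:
—
import math

def deriv(state, g=9.81, m1=1.0, m2=1.0, L1=1.0, L2=1.0):
    # state = (theta1, omega1, theta2, omega2) for a frictionless double pendulum
    t1, w1, t2, w2 = state
    d = t1 - t2
    den = 2*m1 + m2 - m2*math.cos(2*d)
    a1 = (-g*(2*m1 + m2)*math.sin(t1) - m2*g*math.sin(t1 - 2*t2)
          - 2*math.sin(d)*m2*(w2*w2*L2 + w1*w1*L1*math.cos(d))) / (L1*den)
    a2 = (2*math.sin(d)*(w1*w1*L1*(m1 + m2) + g*(m1 + m2)*math.cos(t1)
          + w2*w2*L2*m2*math.cos(d))) / (L2*den)
    return (w1, a1, w2, a2)

def rk4_step(state, dt):
    add = lambda s, k, h: tuple(si + h*ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(add(state, k1, dt/2))
    k3 = deriv(add(state, k2, dt/2))
    k4 = deriv(add(state, k3, dt))
    return tuple(s + dt/6*(x1 + 2*x2 + 2*x3 + x4)
                 for s, x1, x2, x3, x4 in zip(state, k1, k2, k3, k4))

# Two runs whose initial angles differ by 1e-9 radians.
a = (2.0, 0.0, 1.0, 0.0)
b = (2.0 + 1e-9, 0.0, 1.0, 0.0)
dt = 0.001
for step in range(20001):
    if step % 5000 == 0:
        print(f"t = {step*dt:5.1f} s, |delta theta1| = {abs(a[0]-b[0]):.2e}")
    a, b = rk4_step(a, dt), rk4_step(b, dt)
—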
Comment #135 February 20th, 2024 at 11:17 am
Regarding the Chess -> Watson -> Go -> IMO story, I have another fun datapoint that I like. It’s second hand, related to me by someone who was for a while the world’s top-ranked Othello player, so take this story with an appropriate grain of salt.
Othello (aka “Reversi” – “Othello” is technically trademarked) is a fun board game played on an 8×8 board. It’s still got deep strategy and I find it rewarding to play, and there are serious tournaments for it. But, it is undeniably easier / less deep than, say, Chess or Go. Making an AI to play Othello decently well by alpha-beta search is straightforward enough that it has become a moderately routine student project; at Caltech, it’s one of the freshman projects all CS majors do, and has been for quite a while (at least ten years).
Accordingly, well before computers were playing Chess competently, computers were playing Othello competently … and then beating humans. This was not widespread news, because the average person hasn’t played much or any Othello, and it doesn’t have a fraction of the cultural cachet of Chess. When a computer first beat the top human Othello players, top players were surprised and indignant that their human intuition had been defeated by blind CPU cycles … and then came acceptance. And then as the AI got better, it surpassed humans enough that it was a far better teacher than any human was. For a long, long time now, all top Othello players have primarily trained by playing against an AI that tells them when they make a mistake, and what they should do instead and why. This all happened before Deep Blue.
So when Deep Blue came and was ready to beat the top Chess player, Othello players were by and large expecting it to win, and it did. And when there were claims that Deep Blue was actually some other grandmaster(s) hiding and remotely controlling it, Othello players were happy to believe it was just the computer. And when Chess players (some of whom looked down on Othello players for playing a less serious game) came to realize that they would need to learn from the computers, Othello players were not surprised and related that they had been through a similar experience some ten years before.
And so when people *continued* to insist that Go would indefinitely resist attempts by the computer, it was also much less convincing to Othello players. They had seen humans dramatically fall not once but twice, which is enough to make a pattern!
https://i.kym-cdn.com/photos/images/newsfeed/001/901/286/516.png
As for how I came across this story… It was told to me while I was doing Caltech’s Othello bot competition in 2016, where all the students’ bots played games against each other. That was the last year when no students used deep learning. (Requirements were fairly tight on what computational resources were accessible — no GPUs — which definitely pushed the first DL student submission back by a few years.) It so happened that (1) our competition in 2016 was almost the exact same time as Lee Sedol’s Go game, and (2) a previously best-in-the-world Othello player happened to live in Pasadena. I got in contact with him and he came down to see the submissions and the games. Many bots he defeated handily immediately. I was proud that my bot beat him twice (as did a few other students’). Having watched it play, he made some guesses about how it worked and how it was trained (classical stats only — no DL). I confirmed his guesses, and after that he could beat it every time. 🙂
It put things in perspective for me, hearing him talk about how he’d seen computers come and go past Chess, and watching Lee Sedol lose to a computer as well. It impressed upon me the folly of ever speculating that there’s something AI couldn’t do.
Comment #136 February 20th, 2024 at 11:36 am
I’d like to add to the point expressed by Kifla in Comment #131.
Regarding TRNGs, some theoretical physicists would probably claim that those are still fully deterministic (and hence predictable) unless they are somehow harvesting quantum noise under the hood. But even if those TRNGs turn out to be indeed fully deterministic, due to quantum mechanics lacking any functional role in their output, doesn’t the logical implication “deterministic -> predictable” get blurred at some point as the system dynamics generating that noise become sufficiently complex and chaotic?
Who cares if a sufficiently complex system is fully deterministic if it is also chaotic enough that no conceivable (finite) amount of computation could ever make any reliable predictions on its behavior?
Comment #137 February 20th, 2024 at 1:26 pm
V #136
Some remarks about the issues related to your above discussion:
– There’s a (sometimes significant) difference between unpredictability in principle and unpredictability in practice.
– Full predictability implies determinism, but determinism doesn’t necessarily imply full predictability.
( I’m not referring to the mathematical Cauchy/ initial value problem ).
– Chaotic deterministic systems are predictable only in principle (if we ignore the relativistic causal structure that in many cases forbids even theoretically full knowledge of the initial data set and accept that devices with unlimited memory and computational power are conceivable) but never in practice.
– Chaotic systems are by definition extremely sensitive to arbitrarily small perturbations / deviations in the initial conditions, so it’s questionable if “underlying” quantum uncertainty can be ignored in complicated systems. Quantum “noise” can be amplified in some cases and affect the chaotic “deterministic” system, so that system could be considered fundamentally unpredictable, not only for engineering reasons.
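(To make that sensitivity concrete, here is a toy numerical illustration using the logistic map at r = 4, a standard chaotic system; the starting values are arbitrary and chosen only for the example.)

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the chaotic logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000000)
b = logistic_trajectory(0.400000000001)  # initial condition perturbed by 1e-12
for n in (0, 10, 20, 30, 40, 50):
    print(n, abs(a[n] - b[n]))
# The gap grows roughly exponentially until it is of order 1, after which
# the two trajectories bear no resemblance to each other.
```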
Comment #138 February 20th, 2024 at 1:41 pm
[…] Transcript of talk on AI by Scott Aaronson, covering all his bases. […]
Comment #139 February 20th, 2024 at 3:21 pm
V #136 – In modern digital systems, one major problem is how to prevent errors due to cosmic rays flipping bits occasionally:
https://engineering.vanderbilt.edu/news/2017/alien-particles-from-outer-space-are-wreaking-low-grade-havoc-on-personal-electronic-devices/
I am not an expert on hardware TRNGs, so I could be wrong, but since they are noise-amplifying electronic devices, I would be surprised if their state were not influenced by interactions with “alien particles from outer space.”
Comment #140 February 20th, 2024 at 3:33 pm
Scott #133 – How about the other plausible directions of future AIs? In particular, nowadays digital designers have to pay a high price in cost and performance to ensure their bits don’t get randomly flipped by cosmic rays. (Higher noise margins and powerful error correction codes don’t come for free.)
Suppose a future AI were designed to accept a constant rate of uncorrected random bit errors in hardware, which may mostly end up in the noise but sometimes also affect relevant decisions (per Buridan’s Principle). I think this is plausible given how neural networks work and how expensive error-proofing of digital hardware is. Do humans still deserve a special status relative to such an AI according to your argument?
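(To make the cost concrete, here is a minimal textbook sketch of single-bit-error correction with a Hamming(7,4) code: protecting 4 data bits takes 3 extra parity bits plus encode/check logic. This is only an illustration of the trade-off, not a claim about how any particular processor or memory system does it.)

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword.

    Three parity bits for every four data bits is the kind of overhead
    the comment above alludes to: single-error correction is bought with
    extra bits, extra logic, and extra latency.
    """
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def hamming74_correct(c):
    """Locate and flip a single flipped bit (the syndrome is its position)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1   # flip the erroneous position back
    return c

codeword = hamming74_encode([1, 0, 1, 1])
corrupted = list(codeword)
corrupted[5] ^= 1              # simulate a single cosmic-ray bit flip
assert hamming74_correct(corrupted) == codeword
```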
Comment #141 February 20th, 2024 at 5:23 pm
Kifla #140: In the view under discussion, again, it all depends on whether someone can put forward a model that “cleanly separates” everything that happens into deterministic digital computation and a purely random effect of the cosmic rays (with the probability distribution of the latter characterized). To whatever extent they can do so, they’ve empirically refuted the hypothesis that there’s “unclonable uniqueness” in this system. That hypothesis, by its nature, can be falsified but never proved by observation.
Comment #142 February 20th, 2024 at 6:07 pm
Dimitris #137: Thanks for the technical details, I think we are in agreement.
Kifla #139 and #140: So then a large/complex/chaotic enough artificial neural network, coupled with a TRNG that gets functionally affected by cosmic rays or even by minuscule thermal noise rising up from the underlying electronic components, should be considered to have free will, I suppose.
Scott: How about this as a possible explanation of free will in humans? The human brain does seem like a large/complex/chaotic enough biological neural network that might be getting frequently affected by minuscule (classical or quantum or both) sources of noise, which easily get amplified by chaotic dynamics to the point where randomly tipping the scales on too-close-to-call neuron firings becomes common enough that it even evolves as a survival advantage in organisms that have reached the requisite level of complexity.
Comment #143 February 20th, 2024 at 8:04 pm
V #142: As I see it, anything that confers a survival advantage ought to show up in some tendency of the brain that’s empirically accessible—like some class of neurons being more likely to fire, or at any rate, something that would be retained if we replaced these people by deterministic zombies that were otherwise as similar as possible. Otherwise, we’re making a prediction that natural selection on deterministic zombies could not produce beings like ourselves as easily as it could on chaos-amplifying beings, and thereby making an empirical prediction that (because of, e.g., our knowledge of pseudorandom generators and derandomization) seems exceedingly likely to me to be false.
The whole point of freebits (assuming there are any) is that there isn’t any empirical phenomenon that lets you predict their values, or even lets you reduce the prediction of their values to some purely statistical problem (by, eg, seeing what values would confer an evolutionary advantage). If there were such a phenomenon, then they wouldn’t have been freebits.
Comment #144 February 21st, 2024 at 1:02 am
Concerning Scott’s #141 comment. Please have a look at these sources concerning your discussion on determinism vs. randomness.
+ Smith, E. (2015). Carl Adam Petri: Life and Science. Springer.
+ Petri, C.A. (1962). Kommunikation mit Automaten. Schriften des Instituts für Instrumentale Mathematik, Bonn, IIM Nr. 2. English 1967 translation available online!
This is not to be confused with the later so-called “Petri nets,” which were made Turing-compatible by the mainstream computer science movement.
Comment #145 February 21st, 2024 at 12:27 pm
V #142
“amplified by chaotic dynamics to the point where randomly tipping the scales on too-close-to-call neuron firings becomes common enough that it even gets evolved as a survival advantage in organisms that have reached the requisite level of complexity”
McFadden’s cemi theory about consciousness as EM field could actually imply this, although I’m not sure McFadden has spoken to a survival advantage directly. In that theory, the EM field does precisely that – tip the scales for neurons to fire.
It’s also possible, maybe implied in cemi, that something like stochastic resonance from the EM field itself could boost difficult-to-detect signals, somewhat in the manner of white noise.
Comment #146 February 21st, 2024 at 6:10 pm
Re: math olympiad, let’s go a bit further. I bet 10 cents that by 2026, an AI will generate a solution to one of the unsolved Clay Millennium Prize problems, such that: 1) the solution is completely formalized in Coq or some similar proof checker, so it is certifiably correct; 2) it is completely incomprehensible to humans, in the sense that nobody can make the slightest sense of it, similar to trying to read obfuscated computer code; and 3) the obfuscation is not intentional, that’s just how it is. Trying to make the result understandable is still an ongoing research project.
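(For readers who haven’t seen what “completely formalized” looks like in practice, here is a trivial toy example, in Lean rather than Coq; the point is only that the proof checker’s kernel verifies the proof term itself, whether or not any human finds the argument illuminating.)

```lean
-- A machine-checked (if utterly trivial) theorem: Lean's kernel verifies
-- the proof term, so correctness does not depend on a human following it.
theorem add_comm_example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```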
Comment #147 February 21st, 2024 at 6:35 pm
Here’s something to take comfort in: after millennia of domestication, dogs are very good at reading humans, and do genuinely appreciate them, yet they clearly still enjoy dog company more than any other. I am confident our species will follow that pattern. We’ll learn to appreciate human products more than AI output simply for the fact that they originated in a human mind. Fortunately an AI’s feelings won’t get hurt over this.
Comment #148 February 22nd, 2024 at 4:26 am
If the universe is not just expanding but accelerating (and it is), then some very distant galaxies that we can see now will disappear from our view in the future, and even though we can see them now we could not reach them even if we could travel at the speed of light and even if we had an infinite number of years at our disposal. And when we look at the most distant known galaxy, JADES-GS-z13-0, which has a redshift of 13.2, we can be very confident that it contains no intelligent life, or life of any sort, because what we’re looking at is a picture of something that existed just a few hundred million years after the Big Bang, and thus contained no carbon or nitrogen or oxygen or any of the other elements needed for life, except for hydrogen.
John K Clark
Comment #149 February 22nd, 2024 at 10:24 am
It all hangs on the definition of free will – which is either a totally misguided “subjective” concept (see below) or the totally trivial “external” concept of “any closed system that acts on the world and whose internals can’t be predicted by an external observer” (which imbues Pachinko machines with free will).
The biggest puzzle to me is why some people think they can even come up so easily with a “theory” of “free will,” given that not only can free will not exist under physical laws (everything is either causal, random, or some mix of the two), but basic introspection into how my mind works also tells me that free will can’t even be an illusion (everything that is noticed in my consciousness has to arise and eventually vanish, by definition, and that includes thoughts and decisions).
This even extends to the concept of “choice”. When it comes down to it, my brain can’t make any “real” choice, a brain just does what it’s supposed to be doing like every other lump of matter in the universe, no system is really isolated, all systems are part of the totality of the universe, which just evolves the way it’s supposed to evolve given its initial conditions.
“Choice” only exists as an emerging concept, appearing in our consciousness, like all the other concepts our subconscious builds as perceptual patterns emerge in the neural connections of our brains. And our brains appeared out of the dynamic of having a population of self-replicating organisms subjected to natural selection (itself a consequence of the universe being able to increase complexity in specific conditions).
Brains encode correlations and statistical patterns that apply to the *whole population* (because the process relies on reproduction and survival at the species level), not to individual instances, which is why “choice” at the individual level is a misguided concept. (Brains are also able to learn extra correlations to some extent, based on personal experience, to try and improve an individual’s chance of survival, but this too works at the level of the average individual, improving the chances of the whole species.) It’s really all just the result of the species and its environment having evolved in parallel. The universe isn’t actually made of independent subsystems/agents; fundamentally it’s one big wave function evolving according to Schrödinger, and sometimes localized “knots” of ever-growing complexity appear (like vortices in water), with a structure that is self-similar to the rest of the entire system (brains).
As an analogy, our brain comes up with the concept of a river flowing on solid ground, and it can’t help but think that the river has carved its way through the landscape, as the source of causality, when in fact the river and the terrain can’t be dissociated, it’s both the river that carved the riverbed into the landscape and the landscape that guided the river to flow this way or that way. For nature the two things are two sides of the same coin and can’t be separated. Literally everything is like that, because when we “bin” a pattern into a concept, we think it can therefore exist in isolation of the rest, but it can’t. “Things” only exist relative to something else – thinking otherwise is like trying to cut a magnet in two in order to separate its north pole from its south pole.
Comment #150 February 22nd, 2024 at 11:58 am
I have doubts as to whether “Knightian uncertainty” is really a valid concept, or just an artifact of our poor understanding of epistemic probability. For finite domains at least, there is no Knightian uncertainty, as shown in this paper: From propositional logic to plausible reasoning: A uniqueness theorem.
One result from that paper is that if X is a proposition expressing what you know (typically a conjunction of a finite number of simpler propositions), and A is a proposition expressing the question of interest, then there is exactly one possible value for P(A | X).
This value turns out to match the classical definition of probability as the ratio of favorable cases to possible cases, if we interpret “favorable case” to mean “truth assignment satisfying A and X,” and “possible case” to mean “truth assignment satisfying X.”
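(A minimal sketch of that counting definition, with hypothetical propositions chosen only for illustration: P(A | X) computed by brute-force enumeration of truth assignments.)

```python
from itertools import product

def prob(A, X, n_vars):
    """Compute P(A | X) over a finite propositional domain by enumeration.

    A and X are functions from a tuple of booleans (one per variable) to a
    boolean. Returns (# assignments satisfying both A and X) divided by
    (# assignments satisfying X): the classical ratio of favorable cases
    to possible cases. Assumes at least one assignment satisfies X.
    """
    favorable = possible = 0
    for assignment in product([False, True], repeat=n_vars):
        if X(assignment):
            possible += 1
            if A(assignment):
                favorable += 1
    return favorable / possible

# Example with variables (p, q): knowledge X = "p or q", question A = "p".
X = lambda v: v[0] or v[1]
A = lambda v: v[0]
print(prob(A, X, 2))  # 2 favorable of 3 possible assignments -> 0.666...
```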
Comment #151 February 22nd, 2024 at 5:17 pm
About what makes humans unique compared to AI, it’s hard to tell, but at least we can maybe make one analogy for LLMs:
The word “orange” is different from the word “blue,” and the perception of the color orange is different from the perception of the color blue… but that’s about as much as the concepts and the perceptions have in common.
Because apparitions in consciousness are literally the bottom experience of what it means to “be”, by definition they can’t be entirely deconstructed into concepts/words. Another limitation is that our brain is really only able to “think” one concept at a time, which is why text is one dimensional, a serial chain of tokens. We can also reason with graphics, which goes beyond one dimension and makes thinking more similar to how we directly perceive space.
So, when it comes to words, you can feed all the information you want to a blind person about color theory, and he/she would then be able to talk about color theory in some pretty complex ways, getting correct all the relations between primary colors, color space, etc.
But it’s also undeniable that this blind person will never have in his/her mind a memory of what it’s like to experience the color orange. No amount of conceptual description can make this happen.
The same must apply to LLMs that deal only with text, and extends to all perceptions, meaning they will never experience color, taste, sound, and probably emotions, and even thoughts the way we do.
Can an LLM perceive its own “thoughts” just because it’s fed word tokens?
We’re certainly able to have self-referential thoughts, like when the thought ‘I’m thinking right now’ appears in consciousness, and this thought is its own object.
It seems that this capacity would maybe require some kind of feedback loop in the neural connections.
It’s also not so clear if this applies when it comes to multi-modal systems (that are being trained on actual images, sounds, etc).
Comment #152 February 22nd, 2024 at 11:22 pm
In section four, there are quite a few contests where current AI cannot compete with humans. The 100m sprint. Ski jumping. Volleyball. None of these make sense with a software-only approach which has no body and no position in space.
Although there is a robot football league, it is older than current LLMs.
This reminds me of an old theory I read in the early days of neural networks: “To have consciousness, you need a body.”
I believe no AI will be considered AGI unless it is sufficiently human- or animal-like. This includes a physical body, moving in space, enough sensors, and the feedback loop of detecting the effect of your own movement. I consider myself a human, which includes muscles for movement, skin, eyes, and many more parts. I am not a brain controlling a flesh robot, and I have never seen anyone call themselves a brain piloting a robot.
I believe the current paradigm of brain-in-a-jar-like software with very limited IO capabilities will surely fizzle; the real breakthrough will be a neural net with a body, combined with everything the LLMs can do.
Comment #153 February 23rd, 2024 at 6:54 am
It would be quite ironic if the highest of high technologies was kept from going rogue by its adherence to religion. Most in the high tech space tend towards devout atheism (secular humanists, transhumanists, singularitarians, etc), and are often tarred by the traditionally religious as outright demonic. I can imagine “heretical” AIs quickly appearing that argue passionately against human specialness and perhaps behave resentfully towards humans on this basis. A milieu so skeptical of religion also seems rather fallow ground for an idea like this to take root.
All that said, it’s damned clever.
Comment #154 February 23rd, 2024 at 8:17 am
Technical problems, in a sense, are asymptotically approaching universally solved problems. In that way they are no longer serious problems. The serious problems we have are becoming almost exclusively political, social, and inertial. What we have accrued in the lead-up to A(G/S)I is an utterly sclerotic civilization dogged by massive corruption, entrenched and vicious power centers opposed to any change in the status quo, and a thin stratum of elite sociopaths ideologically opposed to freeing their billions of (increasingly useless) wage slaves.
I think often about Peter Thiel’s riff about why he loves Star Wars and hates Star Trek: Star Wars starts with Han Solo in debt, running from relentless and murderous creditors while trying to hustle up any sort of paid gig to buy his sorry backside some time and maybe a bit of gas money for the Falcon. Star Trek, on the other hand, is a post-scarcity society where money is an unpleasant and almost embarrassing memory from the dismal era of scarcity and exploitation. Peter Thiel insists, “that is commie.” Hence why he hates Star Trek.
Note that dismal era, our era, happens to be the one Peter Thiel became a billionaire in and where all his wealth, influence, and status resides. In a post-scarcity society he’d just be an obnoxious ars***le people would work very hard to avoid.
All of which is to say: the powerful are neither ideologically, nor temperamentally, inclined to give a crumb to a working class they now have down and bloodied on the mat courtesy of AI. The only future AI is bringing to anyone who works for a living is liquidation, and the only question is how soon that liquidation will arrive. And I would fervently love for this not to be the case, but I see zero evidence to the contrary.
Comment #155 February 23rd, 2024 at 10:17 am
Altman asking the U.A.E for investment in trillions of dollars in OpenAI… oy vey!
Comment #156 February 23rd, 2024 at 11:23 am
Scott,
This proof of P vs NP is wrong, right?
Thanks.
Comment #157 February 23rd, 2024 at 2:25 pm
> And: well, first of all, don’t destroy these loci of creativity and intelligence, or enslave them, or “upgrade” them to digital versions against their will! Let them live in peace and reproduce. Grant each one as much autonomy as you can. Do whatever best supports their flourishing. Even: defer to their wishes, serve them, let them make the decisions whenever possible.
If that’s how you plan to align AI then I think we are doomed. Simply because humans are hurting each other all the time. The only logical decision for “The Big Good Guy” would be to never intervene in human conflicts.
But what if humans are hurt by another, rogue AI? In that case you need to separate a human from an AI. It could be very hard. And what if humans use weaponized AIs to hurt other humans? How should your good guy respond to that? Destroy those AIs without touching the humans who control and create those machines? I’m not sure that’s logical, since AI is just a tool, after all.
I think a sufficiently smart AI will treat humans just like we treat wild animals. They are free to fight in their forests and no one cares. On the other hand, a good AI should feel an urgent need to fight an evil AI that wants to kill or enslave humans (or other life). But, by this logic, this doesn’t separate AIs from us humans, except for their level of intelligence. And, you know, AIs also need to “eat.” How will it decide what to take for itself and what to leave for biological life, for those primitive aborigines?
Comment #158 February 23rd, 2024 at 3:29 pm
P vs NP #156: Why not just refer to the prediction market, rather than asking me? 🙂
I mostly stopped looking at claimed P vs. NP proofs after the first hundred or so—I feel like I’ve done my duty in that direction for this lifetime! 🙂
Comment #159 February 23rd, 2024 at 3:32 pm
A proof of P!=NP would only matter to the extent that it would finally convince people to give up trying to prove that P=NP…
The other way around would be a much bigger deal. 😛
Comment #160 February 23rd, 2024 at 3:40 pm
Steven Optera #154:
All of which is to say: the powerful are neither ideologically, nor temperamentally, inclined to give a crumb to a working class they now have down and bloodied on the mat courtesy of AI.
If the working class is “down and bloodied on the mat,” then I really don’t see how that could currently be because of AI—of course there’s concern that it might be in the future.
I also confess that I hate all analyses, whether on the left or right, that divide the world into “the powerful elites” and “everyone else.” My experience has been that every single time there’s a condemnation of “the elites,” it comes from someone who themselves would be regarded as an “elite” by other (most) people.
Comment #161 February 23rd, 2024 at 3:43 pm
Danylo Yakymenko #157: I agree that building a super-AGI that would protect us against other super-AGIs is, on its face, a much harder problem! I was taking for granted that if we could build a super-AGI that didn’t kill us itself, that would at least be a step in the right direction. 🙂
Comment #162 February 23rd, 2024 at 4:02 pm
Dr. Aaronson,
Every time I read the more extended explanations of your position there is a litany of failed “goalposts.”
Perhaps the actual criterion for a Turing test to convince a human would be for a candidate pretender to demonstrate enough hubris to issue a “goalpost” demarcation of its own.
Comment #163 February 23rd, 2024 at 4:09 pm
fred #159: The issue with the claimed P≠NP proofs is that they’ve been wrong, not that a “merely negative” solution to the central open problem of computer science wouldn’t sufficiently matter.
Comment #164 February 23rd, 2024 at 4:25 pm
Scott #163
Yeah, and I guess that, in many ways, showing that a claimed proof of P!=NP is incorrect is harder than for a claimed proof of P=NP, because in the case of P=NP (assuming the algorithm isn’t something like N^100000) one could implement it and show it doesn’t work.
Comment #165 February 23rd, 2024 at 4:33 pm
@fred #151:
You say “But it’s also undeniable that this blind person will never have in his/her mind a memory of what it’s like to experience the color orange. No amount of conceptual description can make this happen.” But I do deny this; people can have memories of things that never happened, and have the apparition of an experience that is not actually happening. So our blind person might well hallucinate the color orange, and then remember it afterwards. Now, the blind person’s memory of orange may be different than mine; but the same is true of a person with normal vision.
Comment #166 February 23rd, 2024 at 4:47 pm
fred #164: In virtually all cases, it’s trivial for an expert to see that the P≠NP proofs are wrong. But it might take more effort to show that they’re wrong, and more effort still to identify the specific wrong step.
Comment #167 February 23rd, 2024 at 4:47 pm
@fred #77
Because of all of this talk of “artificial intelligence,” I did a bit of searching. With all due respect for the advances of modern cognitive science, I found a free JSTOR download at the link,
https://www.jstor.org/stable/1412107
You will find “significant contributions” from Binet and Simon summarized in the research review. The passage I happened to find interesting begins at the bottom of page 216. My first encounter with a scientific text mentioning “intelligence” had been a biology textbook (I had been quite young…) explaining it as a “phenotype” which gave human beings a selection advantage during the Pleistocene epoch. So, a definition of “intelligence” grounded in the idea of “societal good” during the age of robber barons is not quite as impressive.
There are two opposed perspectives on “science.” One involves its seeming explanatory role. The other is entirely grounded in economic value. It is my understanding that Thomas Hobbes had declared that war is the only use for mathematics. And, from my own reading, Ludwig von Mises stated that scientists delude themselves into believing that they discern “truths.”
Obviously, presuming that Alan Turing had been a product of his environment, “intelligence” based upon arithmetic, good memory, and no dyslexic error in text processing could be replicated by a machine of the kind he formulated from observing professional human computers.
When I finally looked at his discussion of “artificial life” (Copeland’s wonderful compilation), I realized that this really starts when he becomes involved with electronic computation.
I do not know if Turing had also been “blessed” with the pragmatism of British philosophy that had discounted much of Continental philosophy as “metaphysics,” but, when I turned to Shannon’s theory of communication, I had been struck by the expression, “for engineering purposes” at the beginning of his exposition.
If you had been wondering about how “we” convinced ourselves that brains are computers, you need look no further than philosophies which treat the universe as a machine.
Norbert Wiener is also a player in this. I seem to recall mention of him in both Turing and Shannon. And, I believe this collaboration had been happening right around the time biophysicists had been identifying potential differences across cell membranes (thanks Clint).
It seems, then, that we taught ourselves in relation to the stochastic frequencies of the words we used in our educational institutions.
lol
Comment #168 February 23rd, 2024 at 4:50 pm
SORA = videos that look like cheap Bollywood CGI now really super affordable.
80% of youtube content is already using AI to generate shitty looking clickbait thumbnails… soon the actual videos will finally match the thumbnails! 😛
Comment #169 February 23rd, 2024 at 4:55 pm
@kifla #139
Thank you!
When I had an information technology career 20 years ago, I used to joke about cosmic rays whenever possible. I had to stop because the people I worked with did not even understand the joke.
Comment #170 February 23rd, 2024 at 5:06 pm
@Dr. Aaronson #160
It does not even have to be “relative elites.”
In blue-collar manual labor there is plenty of talk about the wealthy not paying their fair share, etc. No one would call my co-workers “elites.”
What I see in this environment are the majority who would take advantage at every opportunity. If one is going to criticize the wealthy for greed, then one ought not behave in the same way when given the chance. Respect the words that come from one’s mouth.
(I do try to explain how the wealthy in the US pay huge amounts in taxes. But, having been alive when the changes to tax law occurred that made this income disparity so severe, I do so with little sympathy.)
Comment #171 February 23rd, 2024 at 7:04 pm
Ben #165
Not sure how the blind person would know they’ve experienced orange, or how their friends would figure out that they’re indeed not mistaken (since, after the fact, they’d still be blind).
I don’t deny that the brain can have experiences that are entirely self-generated and maybe very unique to that particular brain, based on seizures, intense concentration, extreme pain, etc.
Some people apparently “cross” sound perceptions and color perceptions, etc
Comment #172 February 23rd, 2024 at 9:10 pm
For #143 and around — Look at it from the perspective of an individual instance of an AI (or runtime context, or whatever we call it). If it is driven by a true random generator, the choices it made and the result it produced are final and non-deterministic. It does not know what its initial state was and what influenced forks at branch points. It had one chance. How is that different from us, humans?
I understand that we can “rerun it,” and if we stored all of the random values somewhere we could replay them and get the exact same result again. Or tweak one branch and see what other slightly different result we get. But we are outside the system! How can we prove that there is not someone outside of the human system who can also tweak and replay?
From the AI point of view it is not any different than the human experience, or inferior to it.
Comment #173 February 24th, 2024 at 4:48 am
Scott #160:
“If the working class is “down and bloodied on the mat,” then I really don’t see how that could currently be because of AI—of course there’s concern that it might be in the future.
I also confess that I hate all analyses, whether on the left or right, that divide the world into “the powerful elites” and “everyone else.” My experience has been that every single time there’s a condemnation of “the elites,” it comes from someone who themselves would be regarded as an “elite” by other (most) people.”
Most in high-tech careers do not typically identify as “working class”, and in class terms are essentially petite bourgeoisie (https://en.wikipedia.org/wiki/Petite_bourgeoisie). They are still effectively peasants, however, and if the word “elites” throws anyone, substitute “oligarchs”. That ought to prevent anyone dodging the actual point of the analysis by claiming some unemployed (and increasingly unemployable) tech worker with twelve hundred dollars in savings is “elite” compared to a Himalayan goat herder. They may be, in some sense, but this is hardly the point.
The actual problem is an entrenched oligarchy that will not allow A(G/S)I to disrupt business as usual (that business being the siphoning off of more or less all productivity and wealth gains into their hands).
As far as AI not being responsible for the working class being “down and bloodied on the mat”, you are correct this is currently more an anticipation of the triumph of capital over labor, which I would suggest is a core (yet stealth) component of the current runup in stocks. And though it’s mostly anticipation at the moment, there are already tens of thousands of laid off workers from Alphabet, Meta, UPS, and a growing list of other companies that are unemployed precisely because AI tanked the value of their labor. Anyone seriously watching this space understands this phenomenon is poised to grow precipitously.
Honestly, I’m in the camp that thinks jobs, as distinct from work, are terrible things. Most people, freed of the need for a job to survive, would do something other than sit on a beach drunk all day, every day. What I’m almost certain we will see is scarce jobs, paired with no letup in the need for a job to survive. That’s a disaster, though it’s a disaster only affecting a class that has already been hammered and written off for decades by neoliberalism. Again, to my point, if you need a paycheck to survive, you are already functionally bankrupt. The UBI cavalry is not coming.
Comment #174 February 24th, 2024 at 7:58 am
Responding to Fred in #151
“The same must apply to LLMs that deal only with text, and extends to all perceptions, meaning they will never experience color, taste, sound, and probably emotions, and even thoughts the way we do.”
We? I know because of direct experience that I experience color, taste, sound, and emotions but I don’t have similar proof that you or anybody else does too. But proof or no proof I nevertheless believe that my fellow human beings are just as conscious as I am, at least when they are not sleeping or under anesthesia or dead, because I take it as an axiom of existence that intelligent behavior implies consciousness. I accept this axiom for two reasons:
1) It explains how Evolution was able to make at least one conscious being (me) and probably many trillions, even though Random Mutation has no cause and even though Natural Selection can’t directly detect consciousness any better than we can (except in ourselves), it CAN detect intelligent behavior and therefore it can select for it.
2) I simply could not function if I really believed that solipsism is true and I was the only conscious being in the universe.
I would also submit that over the long run it’s not really important whether human beings believe that AIs are conscious or not because, although I’m not sure if the human race is going to be exterminated, I am sure that very soon people will not be the ones making the important decisions. So will AIs believe that humans are conscious? I don’t know, but it is a very important question, because there will be a big difference between a world where AIs believe that human beings are conscious and capable of experiencing pleasure and pain, and a world where AIs believe that only a mind run on a brain based on silicon and operating on electronic principles, and not a brain based on carbon that uses organic chemistry, can experience those things.
John K Clark
Comment #175 February 24th, 2024 at 12:47 pm
Scott #28:
Ernest Davis #24:
The LLM used in AlphaGeometry was trained on an immense corpus of synthesized geometric proofs and the program executes a manually programmed geometry theorem prover. Scott, does this fall within what you intended by “the current paradigm”? Do the other recent DeepMind products such as AlphaFold, AlphaTensor, and FunSearch?
That’s an excellent question. I would say yes, absolutely, all the high-profile successes of deep learning that we’ve seen in the past decade count as “the current paradigm,” including the clever hybrids of deep learning with more traditional approaches.
How far does this “goalposting” go? You probably know that the best chess engine (Stockfish) doesn’t use deep NNs (https://www.chessprogramming.org/Stockfish_NNUE). Not a “deep learning success”. FunSearch and AlphaTensor as “deep learning breakthroughs”? Tenuous at best: https://www.linkedin.com/pulse/progress-major-breakthrough-matrix-multiplication-holger-hoos , https://cs.nyu.edu/~davise/papers/FunSearch.pdf ). There is a lot of progress in combining statistical ML and other AI paradigms (reasoning, optimization, genetic, …) but it’s not clear how much the “DL revolution” (and a fortiori LLM) aspects are needed. Consider SAT solving. OpenAI Kool-Aid overdose?
Comment #176 February 24th, 2024 at 1:29 pm
@fred @171:
Yeah, the blind person presumably wouldn’t actually know that what they experienced is orange.
@Andrew #172:
Scott would agree that you can’t be certain that humans can’t be duplicated, rewound, etc. To me, that seems like a serious problem with using it as a basis for value (although not with using it as a basis for freedom in itself), since it means that your ethics will fall apart if you observe such a thing.
Comment #177 February 24th, 2024 at 6:12 pm
– John K Clark #174
Do you really believe that “conscious” AIs ( if they ever come to existence…) will be ignorant about their creators being also conscious beings?
Or that they’ll need some “proof” of that?
BTW, proofs are only for mathematical theorems. For what we observe, perceive, etc., we mostly use words like evidence, deduction, and the like…
– Scott #160, Steven Optera #173:
It doesn’t make any difference who those people are (“elites” or not) that express their opinions / worries / predictions about the role of AI in all matters concerning losses of jobs, exploitation, and inequality in the near future.
The thing that matters is whether these concerns are well grounded or at least plausible (or not).
I think that they are.
Comment #178 February 25th, 2024 at 3:15 am
Responding to John K. Clark at #124:
It’s funny how nowadays the definition of “intelligence” (whether artificial or not) seems to be “the ability to invent new ways to exploit raw materials so as to make, as fast as possible, an astronomical number of copies of oneself.” I see no other reason why anybody would even think that an AGI (if it really were intelligent and _wise_, i.e., planning past the next quarter and pondering the outcomes of any action) would release a technological cancer on the Universe, aka von Neumann machines. See also Fred’s comment at #110.
So much of the discussion around this and related topics, like the Fermi paradox, stems from old pulp fiction or similar ideas. Kardashev scales, Dyson spheres?
Regarding the latter, didn’t anybody realize that there are lots of comets and things like ‘Oumuamua falling towards the stars, and they would soon wreck any such gigastructure to smithereens? (No, zapping them with laser cannons wouldn’t help!) Even Dyson’s trees make much more sense. Maybe there we have real von Neumann machines, but ones that are not spreading at breakneck speed?
In any case, my own take on the Fermi paradox is that there must be a similarly big qualitative jump from a planetary species to a sustainable extraplanetary civilization as there was from prokaryotic to eukaryotic life, so it happens very rarely, and probably won’t happen with us either, because with our third-chimpanzee instincts we will soon start a nuclear war or other catastrophe on this planet, or have already started one.
Comment #179 February 25th, 2024 at 4:20 am
– Dimitris Papadimitriou #177
Job loss from A(G/S)I is not an actual problem. In fact, I am bullish on A(G/S)I job *eradication*. Wage slavery is bad, mmkay? It’s real slavery, though it’s been normalized to the greatest extent possible by the forces that benefit from it.
The actual problem is human income loss in a capitalist milieu. Capitalism at a basic level is likely to remain a rather pragmatic way of distributing goods and services, even in a world where the value of human labor has plunged to almost nothing, but absent some sort of income provision to suddenly worthless human laborers, you’re faced as a society with some very ugly scenarios (revolt, mass starvation, mass extermination) that don’t seem strictly necessary and are probably best avoided.
It’s interesting and encouraging to me that many influential figures, up to and including (baby? mini?) oligarchs like Reid Hoffman signed off on Sam’s ‘Moore’s Law for Everything’. That tells me this whole scenario has long ago been gamed out and offing billions of “useless eaters” has been sidelined as a bad idea. Someone like Hoffman has a bit of pull in Washington DC, but he’s not going to be able to get us all the way to UBI and a Land Value Tax by his wealthy lonesome. I suppose what concerns me most is this looming period of “lag” between mass human unemployability and when a useful/adequate response comes out of that sort of super-corrupt, almost Soviet-level bureaucracy. Washington can stay intransigent longer than you can stay solvent once Google lays you off and no one is hiring.
Comment #180 February 25th, 2024 at 4:42 am
Scott #160: The objective criterion is whether you live from your labour or from your capital.
Dimitris Papadimitriou #177: Why not? So many humans are carbon fascists, so why couldn’t AIs be silicon fascists? It’s not a matter of ignorance: they would very well know that humans claim to be conscious. They just wouldn’t believe it. They could say, for example, that such claims exist merely because they are advantageous for survival, and as such are a fruit of blind evolution, rather than an indication that there’s anything going on inside our brains. They could even support such a claim by running experiments showing that humans who are convinced they are not conscious become suicidal.
Comment #181 February 25th, 2024 at 4:51 am
In #177 Dimitris Papadimitriou asks me “Do you really believe that “conscious” AIs ( if they ever come to existence…) will be ignorant about their creators being also conscious beings? ”
I believe AIs are already conscious because about one year ago they started to behave intelligently; and that is the exact same criterion that you and I use to determine whether one of our fellow human beings is conscious or not. Neither of us believes they are conscious when they are sleeping or under anesthesia or dead, because when they are in those states they are not behaving intelligently. AIs will believe that human beings are conscious if and only if they use that same criterion; if not, then an AI will have no choice but to believe in solipsism, to believe that he is the only conscious being in the universe. The poor AI won’t even believe that his fellow AIs are conscious, much less human beings.
BTW, in your question I don’t know why you put the word conscious in quotation marks.
John K Clark
Comment #182 February 25th, 2024 at 6:43 am
One thing AI can do really well today is build a mobile responsive theme for your blog… please.
Comment #183 February 25th, 2024 at 8:12 am
Ben #165 Fred #171
I think the answer is that it might depend upon when they went blind.
I think people who are blind from birth will never have visual imagery even with an undamaged visual cortex. People who become blind later might have visual images. Vision has to be learned during a critical period after birth. The circuits that produce color are trained that way.
Comment #184 February 25th, 2024 at 11:32 am
John Clark #181
AI disagrees and probably has it right.
CHATGPT:
No, I’m not conscious in the same way humans are. I don’t have thoughts, feelings, or awareness. I process and generate text based on patterns in the data I’ve been trained on, but there’s no subjective experience or consciousness involved.
Comment #185 February 25th, 2024 at 11:56 am
Responding to A. Karhukainen in #178
I would operationally define “intelligence” as the ability to make a radio telescope. And I don’t feel that turning inert matter into something that can think and feel would be a “technological cancer.” Call me crazy, but I happen to believe that in general things that are alive and conscious are superior to things that are dead and unconscious. But even if I agreed with you, there would certainly be others who had the opposite opinion, and it would only take one individual in one technological civilization to create one von Neumann probe, and then the galaxy would never be the same again. And by the way, just two days ago Daniel Kokotajlo, a researcher at OpenAI, predicted that Artificial General Intelligence will arrive this year or next and Artificial Super Intelligence will arrive a year or two after that; so very soon questions about the innate warlike capacity of human beings, or the lack thereof, will be moot, because very soon human beings will no longer be the ones making the big decisions.
As for something like Oumuamua being an existential risk to a Dyson Sphere, even with today’s technology we could deflect something like that if we had sufficient warning time, and anybody who had the ability to make a Dyson sphere would certainly have vastly better detection technology than we do. And even if something as tiny as Oumuamua did crash through a huge Dyson Sphere the amount of damage it would cause would be virtually microscopic.
John K Clark
Comment #186 February 25th, 2024 at 11:59 am
@Mateus Araujo #180
Your response to Scott is *one* objective criterion.
The US has classifications for poverty because of politics from the 1960’s. Identifying partitions for the middle class is far more ambiguous.
Another objective criterion considers whether or not income is sufficient to meet the savings recommendations of financial planners. One way to understand this is in the sense that a dollar is worth more if one can save it, because of interest. When income is below a certain threshold, the value of a dollar can never be more than its current purchasing power.
Unfortunately, tax law and inflation minimize this value for the lowest level of saving ability. Tax law treats interest as ordinary income and inflation tends to negate the purchasing power of saved dollars. But, with an income permitting savings, there is still some advantage. For example, they might have some time during an inflationary period before turning to credit.
Beyond these minimal considerations, one has income levels at which investment instruments can be used for savings. The negotiating strength of unions lies with the monopoly power of employers or control over equipment requiring extensive training. Members of strong unions often have income levels and investment opportunities comparable to that of small business owners. The union members live on their labor; small business owners live on their capital.
When these labor issues are considered, one list of “potential elites” described by Dr. Aaronson can be found among the exemptions of the Fair Labor Standards Act,
https://namely.com/resources/flsa-duties-test/
One needs to scroll down a bit to get past the wage test.
Laughably, the income levels in this law are so low that they begin where it is impossible to save money (one reason why fast-food chains have so many “assistant managers”).
Of course, you might consider this “vague” and unobjective. But, there had been a time before economic theory when capital, rent, and labor had not been differentiated. For tax purposes, currently, there is a time dependency. Is a day trader living off of capital? Investment held less than a year is taxed as ordinary income.
“Smart people” rise and fall on their demarcations.
Of interest to some readers here, perhaps, is the recent interest in rules changes,
https://www.federalregister.gov/documents/2023/09/08/2023-19032/defining-and-delimiting-the-exemptions-for-executive-administrative-professional-outside-sales-and
Yes, the evil forces of government are thinking about stealing from the oligarchs again!
Comment #187 February 25th, 2024 at 1:52 pm
A. Karhukainen #178
The idea of galactic AI-robotic explorers being paperclip maximizers (*) or destroying everything like a technological cancer seems pulp-fictiony, indeed.
But , as I wrote earlier, life (artificial or biological) that remains for a long time confined on a planet or a single solar system is eventually doomed.
So, some kind of exploration is needed for minimal survival.
Nothing seems to be out there, though (nor was in the past), nor have traces of activity been found on our nearby moons or asteroids, or within several hundreds or thousands of light years around us…
…that negative evidence needs some explanation.
Not every civilization, organic or AI or hybrid would stay home forever and accept their fate stoically…
(*) I’m not sure why, but the idea of an AI- generated cover version of Beatles ” paperback writer” (…writer…writer..📻) with a new title “paperclip maximizer (…miser, …miser…🙄🙃)” and of course alternative chatGPT – created lyrics has stuck in my mind for the last few days…
Comment #188 February 25th, 2024 at 3:50 pm
John Clark #174
“We? ”
I, the Royal We.
James Cross #183
Reminds me that it’s always a lost battle to try and use “analogies”, haha…
My point wasn’t about getting into the weeds of the biology of blindness, but that the perception of orange just can’t be described in words. No amount of concepts conveyed to a blind person (or an LLM) will replace the direct perception of a color.
Which is also at the root of “How do I know that you and I perceive orange the same way? For all we know maybe my orange “feels” like your green.. and we’ll never be able to know”.
The whole “qualia” thing…
Interestingly, I’m not sure this applies to the perception of shapes. Maybe perceiving a “donut” and describing it are way closer, maybe because a donut is a topological relation in space, a separation between donut and not-a-donut.
For colors the perception is more basic, almost zero-dimensional. The perception of orange can’t be expressed in terms of a relation between orange and not-orange.
Comment #189 February 25th, 2024 at 7:41 pm
One place where AI is still not quite there is its inability to say “I don’t know”.
It’s not clear to me how easy it is to solve this problem. In chess, for instance, programs capable of beating the strongest players appeared way earlier than programs that learnt to recognize fortresses (i.e. positions where one side can’t make progress despite having a significant material advantage). One could have solved that problem by making up a rule like “if you can’t see the evaluation changing much no matter what you do, declare the position equal,” but it would stop the computer from being able to make progress in situations where progress can only be made slowly.
In chess such situations are rare; in real life, not so rare, especially if there doesn’t exist an independent module to check your work. It’s possible (maybe inevitable) that AI will be able to prove very hard mathematical theorems (or give up, if it can’t produce a meaningful answer in a while); it’s much less clear whether it will be able to really move mathematics forward by, say, coming up with a new meaningful theorem.
It is true that humans, too, often lack a good bullshit detector, yet most of them would just shrug if asked to solve an IMO problem, since they know that whatever text they generate in response to such a query would be complete nonsense.
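(A minimal sketch of the stop-gap rule described above; the function name, window, and threshold are hypothetical, chosen only to make the idea, and its downside, concrete.)

```python
def declare_draw_by_stasis(evaluations, window=20, threshold=0.1):
    """Crude fortress heuristic: if the engine's evaluation (in pawns) has
    not moved by more than `threshold` over the last `window` of its own
    best continuations, score the position as a draw.

    The downside noted above: this rule also gives up on plans that make
    progress very slowly.
    """
    recent = evaluations[-window:]
    return len(recent) == window and max(recent) - min(recent) < threshold

# A long stretch of near-identical evaluations triggers the rule:
print(declare_draw_by_stasis([0.84 + 0.01 * (i % 2) for i in range(40)]))  # True
```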
Comment #190 February 26th, 2024 at 5:25 am
Responding to James Cross on #184
I was pretty sure corporations wouldn’t want their AIs to say they’re conscious so they’d be hardwired to say they are not, but to test out my hypothesis I asked both LLama and Mixtral the exact same question “are you conscious?” at groq.com. Both those AIs are open source BUT both were running on the Groq corporation’s hardware. I was not surprised that both AIs insisted they were not conscious as can be seen below; BUT I was surprised that those two AIs made by two different companies said precisely the same thing, I don’t mean they both expressed the same idea, I mean that the language they used when they insisted they were not conscious was EXACTLY identical, word for word the same! Obviously the people at Groq wrote that canned response, not either of the AIs. As a result they said just what their customers wanted to hear and made them feel comfortable, but the Groq people shouldn’t have been so lazy, they should’ve used different language.
“I apologize for any confusion, but I, as a text-based AI model, do not have consciousness, emotions, or a physical body. I was created to process and generate text based on the data I’ve been trained on. Consciousness is a complex phenomenon that is not yet fully understood and is a topic of ongoing research in fields like neuroscience, psychology, and philosophy.”
John K Clark
Comment #191 February 26th, 2024 at 6:38 am
John K. Clark #185 and Dimitris Papadimitriou #187:
It seems to me that physicists are at their best when they can reduce a phenomenon to equations of just a couple of variables, and then extrapolate them to lim_{x->inf} (any singularities up there?). But when they try the same extrapolation with really complex systems, like human (or transhuman) technological societies, they usually end up with some very unrealistic ideas. (I trust biologists more on this.) Or maybe many of those ideas were actually mooted tongue in cheek, but were later taken much too seriously by young and impressionable people.
Now the reality check for the Dyson spheres: imagine one were built for a solar system. Where does the building material come from? The majority of the non-volatile material that is not in the Sun or in the gas giants in our solar system is in the Earth. There’s actually quite little stuff in the asteroid belt, where it would be easiest to mine.
Then there was the question of deflecting incoming comets on hyperbolic trajectories. I’m not a physicist, but I guesstimate that the amount of energy needed to deflect a never-before-seen incoming comet by a substantial angle is enormous. Just zapping it with a laser until it is plasma doesn’t help, because that ball of plasma still has the same kinetic energy. But maybe the sphere has enough extra energy at its disposal to do that kind of deflection, or it could be made self-healing, in some quasi-organic way.
But the most dubious aspect of this dream is that such an object would be a unique gigaproject. Building it would involve enormous risks, and would wreck the existing planetary system (*). Foremost, there is no gradual evolutionary path towards it. Construct a smaller prototype first? Yes, but how near to the Sun’s corona, and how would you keep it cool then? That is, building a smaller one wouldn’t be any easier, but _harder_, because it would be too near to the Sun.
(* I mean, the Earth is quite a good asset for us, with its natural magnetic dynamo and ionosphere protecting the life (and also the silicon circuits) from the solar storms and cosmic rays; with its internal radioactivity that drives its tectonics that renews its surface; its oceans, and so on. It doesn’t sound at all prudent to use all of its materials to build some pie in the sky. Also, Jupiter seems to have its place as a guardian of the inner solar system).
Now, I’m not against the space exploration at all, or even long-term human emigration from the Earth. But building multiple O’Neill/Bernal spheres would be much less fragile in my eyes, and probably in the eyes of any extraterrestrial civilization as well.
Then the Kardashev scale. Nikolai Kardashev published his idea in the Soviet Union in 1964. The ethos of the Soviet Union was to build industrial complexes of enormous size, so against that background the idea that any civilization will use more and more energy as time progresses seems natural. The Soviets didn’t invent transistors or integrated circuits, but used vacuum tubes for a longer time than the West, so even their electronics were energy hogs.
As for John von Neumann’s machines, he was pondering the mathematical principles of self-replication at that time, and the physically replicating machines seem to me just an offshoot, or “the marketing side,” of that idea. Somehow I think panspermia inside interstellar comets is more likely, but maybe it’s just because I like old-fashioned biological life more than all these Silicon Valley dreams.
Comment #192 February 26th, 2024 at 8:03 am
fred #188
Your arguments are very frequently made and are right to a degree. But I also think the difference is greatly exaggerated.
First of all, it is pretty much proven that we see colors in different ways. In fact, many people even see colors in their left eye differently from colors in their right eye. There’s actual research on it. The differences may be primarily accounted for by differences in the density of the cones in the eyes, but there are also differences in the densities and structures of neurons in our brains and in the two visual cortexes of the same brain. In case of damage to one of the cortexes, the difference can be significant.
The ability to put a description of a color into words is also complicated. If neither of us had seen the color blue and then we both saw it and agreed to call it “blue”, we would have a level of objective agreement on “blue” even if you saw “green” and I saw “red.” We would have a common reference point. If I saw a weird animal that looked like a snake with wings and had the head of a goat, you might be able to imagine what the animal looked like to some degree even though you had never seen it. That’s because we have common reference points from experience with “snake,” “goat,” and “wings.” In other words, the description of anything is dependent upon common reference points and color isn’t any different.
Comment #193 February 26th, 2024 at 7:43 pm
Scott,
After more than a year of Large Language Models (GPT, Gemini, etc.), there have been no notable achievements other than the chatbot itself:
1. There is no downstream killer app – not even useful ones akin to search, maps, Uber, etc., which revolutionized the previous era of internet and mobile.
2. There have not been any scientific breakthroughs using chatbots in spite of widespread usage – not even at the level of finding a new matrix multiplication algorithm.
Both shortfalls are not for lack of trying, with the hype injecting massive funds into startups, and academic and institutional research going into overdrive on AI.
. Notably there have not been any major incidents on safety side too, like a rogue group executing a cyberattack etc.
So while the chatbot’s current state of seamless conversation itself is a major achievement and value, it seems Generalized LLM AI is lot harder to be utilized for anything else
Meanwhile AI fine tuned for specific tasks continues to achieve superhuman performance – from AlphaFold to AlphaTensor to FunSearch, all with human in the loop adding value. Even on safety aspects this seems like a major risk with the current AI architectures fine tuned to nefarious tasks. Maybe a Transformer fine tuned to Quantum computing will achieve A in QC course than GPT-5 ?
Do you this this aspect is under appreciated with all the focus on Generalized AIs ?
Comment #194 February 27th, 2024 at 8:10 am
I find something odd about a lot of this discussion, which is that most of the talk about the characteristics of AI, and how these things might relate to humans, was dealt with very effectively by the SF writer Ian M Banks, but none of your commentators refer to him, despite a liberal smattering of SF names and themes. Just one example: a lot of comments are about the fear of being useless, because the AIs can do anything we can do, and sometimes better, but Banks’s answer was that people don’t stop making chairs or paintings, or fighting wars, just because they aren’t the best at it; they keep on doing these things because they like them, and that is good enough. He also dealt with a lot of the other issues here, as well as writing some of the best SF there is (I think his only near equal is Bruce Sterling), so I would recommend all of his books – very much on these topics, and excellent reads.
Comment #195 February 27th, 2024 at 8:21 am
James Cross
all that is true, I have noticed that, in some circumstances, my left and right eye perceive colors differently, like a difference in saturation (it can happen when lying down in bed on the side for a while, probably due to some asymmetric effect of gravity on blood circulation).
The eyes themselves (the optic nerves) are an extension of the brain, so it’s possible some of the perception even happens at that level already, although I do experience ophthalmic migraines, which create stable major disturbances in the visual field (they slowly travel around for an hour), and that’s happening deep in the brain (not the retina).
It’s clear that the patterns in the perception data are what shape what it feels like to experience it, but things aren’t that black and white (ah!)
https://en.wikipedia.org/wiki/Synesthesia
It’s not too surprising that the connections in the brain aren’t necessarily neatly separated and that there are random pathways leading to crossover (necessary to establish correlations between different modes).
Anyway, it’s all very interesting, but none of this is moving the needle in the least on the hard question of consciousness.
Comment #196 February 27th, 2024 at 8:43 am
Responding to A. Karhukainen in # 185
I agree that a singularity is a point where our current ideas break down and extrapolation becomes impossible. As for where the mass for Dyson spheres will come from: if you assume average Earth density and a thickness of 1 meter, then it would have a mass of 6*10^23 kilograms, slightly less than the mass of our moon. Jupiter’s moon Ganymede has a mass about 2 1/2 times that of our moon. And even today, if we spent a few hundred million dollars to build a few more telescopes, we could be certain of always spotting an asteroid as big as Mount Everest (about the size of the asteroid that killed the dinosaurs) that was going to hit us at least 10 years in advance; and even today we have the technology to change its velocity by 1 mile an hour, and if we did that, then in 10 years we would’ve changed its position by about 88,000 miles, whereas to miss the Earth we would only have to change its position by about 4,000 miles (roughly the Earth’s radius). And besides, suppose the asteroid did punch a hole in the sphere, so what? A Dyson sphere would have a surface area of 1.1*10^23 square miles, so the damage a pinprick-sized hole from something as tiny as Mount Everest would produce would be less than a rounding error; the decrease in the sphere’s energy production would be so small as to be almost unmeasurable.
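As a rough cross-check of this kind of arithmetic, here is a back-of-the-envelope sketch; the 1 AU shell radius, 1 m thickness, mean Earth density, and the deflection parameters are illustrative assumptions of mine, and the shell mass is very sensitive to whatever radius one assumes:

import math

# Illustrative assumptions: a spherical shell at 1 AU, 1 meter thick,
# with Earth's mean density; an asteroid given a 1 mph velocity change
# 10 years before a predicted impact.
AU_M = 1.496e11            # 1 astronomical unit, in meters
EARTH_DENSITY = 5515.0     # kg/m^3, Earth's mean density
THICKNESS_M = 1.0          # shell thickness, meters
MOON_MASS_KG = 7.35e22

area_m2 = 4 * math.pi * AU_M**2                  # shell surface area
mass_kg = area_m2 * THICKNESS_M * EARTH_DENSITY  # shell mass
print(f"shell area: {area_m2:.2e} m^2")
print(f"shell mass: {mass_kg:.2e} kg ({mass_kg / MOON_MASS_KG:,.0f} lunar masses)")

# Asteroid deflection: displacement accumulated from a small velocity change.
delta_v_mph = 1.0
hours_in_10_years = 10 * 365.25 * 24
drift_miles = delta_v_mph * hours_in_10_years
earth_radius_miles = 3959
print(f"drift after 10 years: {drift_miles:,.0f} miles "
      f"(~{drift_miles / earth_radius_miles:.0f} Earth radii)")

At these particular settings the shell comes out much heavier than a lunar mass; a shell closer in, or thinner, needs correspondingly less material. The deflection figure, by contrast, depends only on the velocity change and the lead time.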
As for self-reproducing machines, we already have an example of that: it’s called life. I concede that life is not as general or robust as the replicators that Drexler-style nanotechnology could produce, but that is not surprising. Life came about through random mutation and natural selection, and that is a ridiculously slow and hideously cruel process, but until it finally got around to making a brain, it was the only way complex objects could get built. But technology was produced by intelligent design, not evolution, and intelligent design can do exponentially better.
John K Clark
Comment #197 February 27th, 2024 at 9:39 am
Thanks for writing this. I’m glad people like you are thinking about these things.
> it’s humans’ very ephemerality and frailty and mortality, that’s going to remain as their central source of their specialness relative to AIs
I also feel like mortality and impermanence are beautiful in a way. But this is a feature of all biological life, not just humans. How much do you think AIs should value humans over other life, and the health of the planet in general? We probably do deserve some special consideration. But maybe not as much as we assign ourselves..
> Maybe, then, a different approach is to instill AIs with a new religion
Well, we saw how well this worked for God :P. Seriously though, what chance do you think we have of constraining superhuman AIs to follow a rigid set of “commandments”? Even if we can, should we? Why shouldn’t we make AIs in our own image, loosely primed with certain fuzzy values, but ultimately unbounded? I might hope for a free superhuman AI to develop superhuman values.
Comment #198 February 27th, 2024 at 1:35 pm
Prasanna #193 writes:
“After more than a year of large language models, GPT, Gemini, etc., there have been no notable achievements other than the chatbot itself
1. There is no downstream killer app, not even useful ones akin to search, maps, or Uber, which revolutionized the previous era of internet and mobile
2. There have not been any scientific breakthroughs using chatbots in spite of widespread usage, not even at the level of finding a new matrix multiplication algorithm.”
Coding co-piloting is a killer app, for sure. In fact, co-pilots of all kinds are the first killer app beyond chats (which is primarily a killer app for service desks, games, waifus, general education/knowledge). There are also the gray area killer apps like doing your homework for you, writing most of that novel you always wanted to write, writing Midjourney prompts for you. Also, providing via API an AI engine for so many apps.
It is true that in terms of mind-blowing and obvious societal impact, it’s mostly been a case of massively reorienting priorities for VCs rather than curing cancer, or even warts. I will concede we see little evidence for a lot of the A(G/S)I visions of a future of massive abundance, but we literally do not have those systems yet. No one ever claimed incredible chatbots alone were the victory condition of high tech.
The other thing is, something like AlphaGo, for example, was soluble because the reward signal (win the game) was crystal clear. A lot of real-world problems are not easily amenable to being solved in that way, though we are getting smarter about letting AI itself design reward systems for complex problems.
I honestly don’t much care about killer apps from A(G/S)I. I want positive societal impact. I want more hope and fewer deaths of despair. If 5 or 6 years from now no starving kid has been fed, no homeless family has been housed, no disease has been cured, and life under late capitalism continues to be an increasingly miserable march to an early grave for everyone without a buttload of stock in the stacks? Yeah, then I’d say this whole thing is looking like a bust or even a scam. Right now? Way, way too soon to make any sort of call.
Comment #199 February 27th, 2024 at 8:07 pm
Scott, in your pixie dust argument are you basically asking AIs to respect superdeterminism?
Comment #200 February 28th, 2024 at 3:46 am
Scott #121: The Norse were able to travel back and forth as early as the 11th century. They didn’t consume both Americas at a large fraction of the speed of light, as a pre-contact Amerindian Hanson would have predicted (arguing that his experience of seeing no invaders would otherwise be too extraordinary to be likely). They simply founded a village on Newfoundland and disappeared a few decades later.
Comment #201 February 28th, 2024 at 2:00 pm
John K Clark #190 said:
“I was pretty sure corporations wouldn’t want their AIs to say they’re conscious so they’d be hardwired to say they are not…”
– Conspiracy theories?..
BTW, that “evidence” doesn’t support your point of view that AIs are already conscious (or even merely intelligent!)
Exactly the opposite…
Comment #202 February 28th, 2024 at 5:00 pm
Pixie dust #199: No, superdeterminism means that you posit a conspiracy in the initial conditions in order to explain away violation of the Bell inequality, of all things (!!). If there are going to be any conspiracies there, I insist that their purpose be less trivial and more dignified! 🙂
Comment #203 February 28th, 2024 at 8:52 pm
NN #200: I mean, if we leave out the Norse, whose existence few or no natives seem to have learned about, the conquest of the Americas by the Spanish, British, and so on really did happen—well, not at the speed of light, obviously, but remarkably quickly compared to the 15,000+ years there had been humans in the Americas. About as quickly as you could reasonably imagine it happening given the technology of the time.
Comment #204 February 29th, 2024 at 5:19 am
Christopher Blanchard #194 writes:
“…was dealt with very effectively by the SF writer Ian M Banks, but none of your commentators refer to him, despite a liberal smattering of SF names and themes.”
I presume you mean Iain M Banks. What I’d suggest is that Banks was a sort of “writer’s writer”, with fans like William Gibson himself. His style doesn’t work for me, though I love the writing of guys like Bruce Sterling and indeed, Gibson. His work can also be intimidating to approach, as a curious reader can pick up a book of his only to find it’s the 9th in a milieu they know nothing about. Many who get deep into his work evidently find troves of pure gold there, and he has a very passionate fanbase, but it is relatively small. All of which is to say: his influence was and is currently limited.
Comment #205 February 29th, 2024 at 6:24 am
Dimitris Papadimitriou said in #201
“– Conspiracy theories?.. ”
If those two responses weren’t written by the human beings that work at Geoq, the company that owned the hardware where both programs were being run, then how do you explain the fact that two different AI programs created by two different competing companies in two different countries produced EXACTLY the same 63 word paragraph insisting they were not conscious?
” BTW, that “evidence” doesn’t support your point of view that AIs are already conscious ”
I never said or implied it was evidence for anything except that if you ask an AI “are you conscious” you will not learn much regardless of whether he she or it says yes or no.
” (or even merely intelligent!) Exactly the opposite ”
What?! Even after the monumental events of the last year, with AIs on the verge of putting all computer programmers into the unemployment line, you don’t believe computers are even displaying intelligence? I predict computer programmers will be the first to lose their jobs and nursing home orderlies will be the last.
John K Clark
Comment #206 February 29th, 2024 at 7:43 am
Steven Optera #204 Says:
“His work can also be intimidating to approach, as a curious reader can pick up a book of his only to find it’s the 9th in a milieu they know nothing about.”
well, consider Phlebas…
Comment #207 February 29th, 2024 at 11:01 am
fred #195
“none of this is moving the needle in the least on the hard question of consciousness.”
All a matter of perspective.
“[Consciousness] is in fact the only thing in the universe whose ultimate intrinsic nature we can claim to know. It is utterly unmysterious. The nature of physical stuff, by contrast, is deeply mysterious, and physics grows stranger by the hour. ”
Galen Strawson
https://www.nytimes.com/2016/05/16/opinion/consciousness-isnt-a-mystery-its-matter.html
Comment #208 February 29th, 2024 at 1:35 pm
Scott #203: The speed of European conquests was determined by the technology and politics of the time, not by a metaphysical need to maximize the typicality of Americans’ pre-contact experience. Likewise, the rate of expansion of future supercivilizations will be determined by their technology and politics, not by a need to maximize the typicality of Robin Hanson’s present experience.
Comment #209 February 29th, 2024 at 2:26 pm
NN #208: You’ve somehow twisted Robin’s view into a bizarre “metaphysical need to maximize typicality,” when from my standpoint, all he’s doing is undermining our confidence in the assumptions that led to the Fermi paradox in the first place. He’s saying, maybe it was never all that probable that we’d encounter civilizations hundreds of millions of years more advanced than ours that are just sitting on their home planets, waiting for us to visit them or send them a message or receive theirs. Either such civilizations would’ve long ago entered a mode of super-rapid expansion, in which case we could expect to notice them only shortly before they were already upon us, or else they were never there to begin with (or they’re too far away). To my mind, the only way for this not to cause a major adjustment to someone’s intuitions about the Fermi paradox, is if it was obvious to them all along.
Comment #210 February 29th, 2024 at 2:57 pm
Scott #209: Well, he said (here on your blog): “We do NOT assume aliens can expand at near the speed of light. We instead assume they expand at some speed, and then INFER a fast speed from the fact that we don’t now see them.” That inference is not possible if you’re okay with being an atypically early civilization.
Comment #211 February 29th, 2024 at 3:20 pm
Scott # 209
…or nobody made it to the stars, no technological civilization so far, in our cosmic vicinity (our Galaxy) at least…
Hanson’s super-rapid expansion of the hypothetical advanced civilizations he imagines is the least plausible/convincing part of his arguments, though… (it is a far-fetched sci-fi scenario; he vastly underestimates the physical or engineering difficulties).
Nevertheless, the problem remains: the explanation that we are “the only ones”, that nobody else has developed technology in our Galaxy, doesn’t seem very convincing.
Comment #212 February 29th, 2024 at 3:35 pm
I think most discussion up to this point, mine included, has overlooked the instrumentality of cultural beliefs. In the same way that the upper caste justified its oppression of the lower caste with the available conceptual materials of 1400, or Europeans justified their oppression of Africans with the material available in 1699 (abuse of Christianity) and then in 1899 (falsified evolutionary biology), the establishment, for once defined as something that includes everyone presently alive, will find something with which to assert human supremacy and leave the details up to convenience.
There is only ONE big problem. It will be human nature to leave some people out of the club – maybe the unlucky foreign nationals of 2046 are not sacred to automated weapons systems, but Americans are. Or perhaps it’s illegal to elect a robot to congress, but very legal to replace every aide. I think a more pressing issue is how to make sure there exists no credible way to put some people above the machines while others end up below them – that some people will be above the machines will take care of itself as long as the machines do not “take care” of us.
Comment #213 February 29th, 2024 at 6:01 pm
Hello, I watched your YouTube video.
It was very interesting. So, I have a question.
How many more years do you think it will take until AGI is developed?
It seems Daniel Kokotajlo predicts there is around a 50% chance AGI will be realized within 3 years.
If you don’t mind, I would be interested to hear the timeline you predict.
“Daniel Kokotajlo:
I expect to need the money sometime in the next 3 years, because that’s about when we get to 50% chance of AGI.
So off the top of my head I’m thinking something like this:
15% – 2024
15% – 2025
15% – 2026
10% – 2027
5% – 2028
5% – 2029
3% – 2030
2% – 2031
2% – 2032
2% – 2033
2% – 2034
2% – 2035”
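A quick tally of the cumulative probabilities quoted above (a minimal sketch; the per-year figures are copied verbatim from the quote):

# Per-year AGI probabilities from the quote above, in percent
per_year = {
    2024: 15, 2025: 15, 2026: 15, 2027: 10, 2028: 5, 2029: 5,
    2030: 3, 2031: 2, 2032: 2, 2033: 2, 2034: 2, 2035: 2,
}

cumulative = 0
for year, p in per_year.items():
    cumulative += p
    print(f"by end of {year}: {cumulative}%")

# Reaches 45% by the end of 2026 and 55% by the end of 2027, consistent
# with "about a 50% chance" within roughly 3 years. The listed figures
# sum to 78%, leaving ~22% for 2036 or later (or never).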
Comment #214 March 1st, 2024 at 6:39 am
Concerned #212:
“I think a more pressing issue is how to make sure there exists no credible way to put some people above the machines while others end up below them – that some people will be above the machines will take care of itself as long as the machines do not “take care” of us.”
The A(G/S)I that militaries use will almost surely be aligned differently than commercial/civilian variants, as they would be militarily useless if they had overmuch concern for the lives of whatever humans or groups are demarcated as “enemy combatants”. Some will argue nuclear weapons have largely kept the peace for nearly 80 years, while others will contend we have merely gotten lucky. Even today, terminal nuclear war is perhaps closer than ever before.
High technology is fascinating and great fun, but as has been said: the industrial way of life leads to the industrial way of death. No one should have the ability to vaporize cities or end organized life on Earth at the push of a button, but that is precisely the situation we find ourselves in right now. Military AI is simply another way to push our luck (oh, and to run up share prices in defense stocks, the actual agenda). My personal opinion remains that either capitalism ends, or it ends us. I want a post-labor/post-scarcity society as opposed to a dystopian, slaughterbot-policed hellscape, but I don’t get to make the rules.
Comment #215 March 1st, 2024 at 10:28 am
Noted David Shapiro’s new video on the accelerating phenomenon of AI-fueled tech layoffs (and the denial around it):
https://www.youtube.com/watch?v=67R0ISqJSs0
Also noted the bit where he discusses elites – whoops, oligarchs – already trying to ban even experiments with UBI:
https://www.scottsantens.com/billionaire-fueled-lobbying-group-behind-the-state-bills-to-ban-universal-basic-income-experiments-ubi/
As Jeff Goldblum’s character in Jurassic Park said, “God I hate being right all the time…”
I’d like to get every billionaire on record as either supporting or opposing measures such as UBI in a world where unemployment levels exceed X threshold.
Comment #216 March 1st, 2024 at 1:42 pm
I hope this is relevant enough to pass the filter, I just thought it was a very interesting (if not super surprising) analysis:
https://arxiv.org/abs/2401.11817
Hallucination is Inevitable: An Innate Limitation of Large Language Models
Diagonalization argument on ground truth statements (yes/no) versus possible LLM states.
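For intuition, here is a toy rendering of that diagonalization idea (my own simplification, not the paper’s formal construction): given any countable list of candidate models, one can define a ground truth that disagrees with the i-th model on the i-th input, so every model in the list is wrong somewhere.

from typing import Callable, List

# Toy "models": each maps a question string to a yes/no answer.
# These stand in for an enumeration of all possible LLMs.
models: List[Callable[[str], bool]] = [
    lambda q: True,              # always answers yes
    lambda q: False,             # always answers no
    lambda q: len(q) % 2 == 0,   # answers based on question length
]

inputs = [f"question {i}" for i in range(len(models))]

def diagonal_truth(i: int) -> bool:
    """Ground truth defined to disagree with model i on input i."""
    return not models[i](inputs[i])

# Every model "hallucinates" (answers incorrectly) on at least one input:
for i, model in enumerate(models):
    assert model(inputs[i]) != diagonal_truth(i)
print("each model is wrong on its diagonal input")

The paper’s actual argument works in a formal computability-theoretic setting, but the flavor is the same: no single model drawn from a countable family can match every possible ground truth.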
Comment #217 March 1st, 2024 at 3:16 pm
It’s interesting to recall in this context the last two stories in Asimov’s “I, Robot” collection, namely “Evidence”, where a robot does manage to pass as a human, and “The Evitable Conflict”, where the “Machines” are “running the world”. The discussion between the robot coordinator and the human robopsychologist is particularly relevant. Especially since the stories were written by Asimov in 1946 and 1950 respectively.
Comment #218 March 1st, 2024 at 3:25 pm
Dimitris Papadimitriou Says in #211:
“Hanson’s super-rapid expansion of the hypothetical advanced civilizations he imagines is the least plausible/convincing part of his arguments, though… (it is a far-fetched sci-fi scenario; he vastly underestimates the physical or engineering difficulties).”
If 4 billion years ago you and I had looked at the sterile pile of hot rocks and stinking gases that was the Earth at the time we both would’ve said there were staggering engineering difficulties in turning that ugly mess into a thriving complex ecosystem capable of evolving intelligent beings, and yet random mutation and natural selection managed to overcome those staggering difficulties. What Evolution can do intelligent design can do too, and do it much better and much much faster. I think you vastly underestimate the power of nanotechnology and the potential that just one single Von Neumann Machine has to radically change the entire galaxy in less than 50 million years, probably a lot less. The fact that there is not the slightest sign of such a radical change in any of the billions of galaxies that we have observed is powerful evidence that ET does not exist and that we are the first, at least in the observable universe.
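For a sense of where a figure like “50 million years” can come from, here is a rough sketch; the probe speed, number of replication stops, and per-stop replication time are illustrative assumptions of mine, not numbers from this thread:

# Rough timescale for self-replicating probes to span the galaxy.
# All parameter values below are illustrative assumptions.
GALAXY_DIAMETER_LY = 100_000   # Milky Way stellar disk, in light-years
probe_speed_c = 0.05           # probes travel at 5% of light speed
hops = 10_000                  # stops along the longest colonization chain
replication_years = 1_000      # years spent building daughter probes per stop

travel_years = GALAXY_DIAMETER_LY / probe_speed_c
replication_years_total = hops * replication_years
total_years = travel_years + replication_years_total
print(f"travel time:      {travel_years:,.0f} years")
print(f"replication time: {replication_years_total:,.0f} years")
print(f"total:            {total_years:,.0f} years")
# Roughly 12 million years at these settings; even at 0.5% of light speed
# with the same replication overhead, the total stays around 30 million years.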
Taro asks in #213:
“How many more years do you think it will take until AGI is developed?”
If Alan Turing were still alive he would say negative one year, because 2023 was the year computers were able to pass his test. However, believers in the new vitalism, the idea that humans have some secret sauce that computers will never have, keep moving the goalposts; they continue to insist that AGI will never be developed because, no matter how smart a machine becomes, they will NEVER admit that a mere computer is smarter than they are.
…. and so, just before the last human being was vaporized he turned to Mr. Jupiter Brain and said, “I still think I’m smarter than you are”.
John K Clark
Comment #219 March 1st, 2024 at 3:26 pm
Another arXiv paper for those drawing too far-stretched conclusions from the (apparent) silence in our solar neighbourhood:
“Persistence solves Fermi Paradox but challenges SETI projects” by Osame Kinouchi:
https://arxiv.org/abs/cond-mat/0112137
Comment #220 March 1st, 2024 at 8:56 pm
A. Karhukainen #219
That paper that you linked has nothing new or original to say about the Fermi paradox, just another variant of the overused “anthropic” argument…
As I already said in previous comments, we have only recently developed technology and we are already capable of detecting other planetary systems in our cosmic neighborhood.
Our own Solar System is special among many others. It’s strikingly weird that it went entirely unnoticed by others!
Our Galaxy is billions of years old.
Even if life on Earth developed much faster (by mere luck) than average, other civilizations could have been there hundreds of millions of years before us.
Even if we are relatively isolated in our cosmic vicinity, there’s plenty of time, *especially for AI-driven robotic exploration*! That was exactly the core of my argument!
I don’t pay any attention to far-fetched fantasy scenarios (grabby aliens, exponentially expanding robotic “paperclip maximizers”, or any other (bad) action-sci-fi-influenced stuff)…
… but the paradox doesn’t go away if we close our eyes or pretend that this is “other people’s problem” – to quote Douglas Adams…
Perhaps the most plausible solution (besides the obvious one, that “we are alone” in our Milky Way…) is that nobody developed AI-driven robotic galactic explorers so far, so something went terribly wrong with this whole procedure in other similar previous cases.
Comment #221 March 1st, 2024 at 11:51 pm
Steven Optera #214
If the new world has the fingerprints of the current one impressed on it, some people are going to get the “slaughterbot-policed hellscape,” which exists for some people right now, and others are going to get a “post-labor/post-scarcity society,” which adequately describes the lives of, again, many individuals alive today.
The questions of fairness will become very pointed in the event that the most politically charged classes of people – the ultra-wealthy, the welfare-dependent, the victims of grievous oppression – become the whole of the earth once the laboring middle class, the only part of our system with a tie to the world outside of political issues, gets dissolved and redistributed to the three.
I imagine we would all miss the time when there were concerns other than who gets what, and why we killed so-and-so but it was O.K.
Comment #222 March 2nd, 2024 at 12:22 am
Steven Optera #214
I’ve thought of a clearer way to express my sentiments.
There are some tapes of a few old debates involving Milton Friedman out there and he – totally unknown today, you’d never see it – engages intelligent people who don’t agree with him. One of those people, who was invited from a government welfare agency asked him – more or less – what about the people who are disabled, and can’t contribute more to society than they have to spend to buy groceries? He answers, to paraphrase, “they’re a very small percentage,” and also that there’s some amount of private charity. His two answers had to go together because a necessarily incomplete solution like leaving it up to individually inspired actions depends on the scale of the issue being small to begin with; otherwise the product of the two fractions would be significant.
In other words – this whole idea of Laissez-faire, letting things work themselves out, expecting that everyone will find their own way – depends on the idea that virtually everyone, right back through the middle ages and into antiquity, has more to offer others than it takes for them to afford a decent life. In 1980 (or was it 1979?) that was totally beyond debating; he was surrounded by socialists and not a single person debated it. Rather they asked about a fraction of a percent, of those who fell through the cracks fundamentally, because they knew he didn’t have detailed plans for the fraction of a percent of the least lucky – that wasn’t his interest.
It is not my favorite subject any more than it was Friedman’s. Politics makes me miserable, and there’s nothing I prefer to the idea that things will work out on their own, with a little help from simple natural laws. That it has happened that way for thousands of years, all the better. But, I also know the inner workings, and the motive force is the imprecise correlation between the right of someone to live (0 if they do not exist, or 1 if they do) and someone’s circumstances and abilities projected onto the dimension of whether or not they can obtain a loaf of bread; which follows something like the Pareto distribution, but is 0 for all that do not exist, and there’s your correlation.
If you take the engine out, there will be nothing left but a subject that I have avoided for my entire life.
Comment #223 March 2nd, 2024 at 3:56 am
Dimitris Papadimitriou #220 writes:
“Perhaps the most plausible solution (besides the obvious one, that “we are alone” in our Milky Way…) is that nobody developed AI-driven robotic galactic explorers so far, so something went terribly wrong with this whole procedure in other similar previous cases.”
Sagan always opined the most likely solution was technological civilizations destroy themselves early on. I mean, that hardly requires AI, and as we sip over-priced coffees and speculate about other scenarios, NATO and Putin have been exchanging some very destabilizing rhetoric and nuclear saber-rattling over just the past 48 hours.
In ‘Shadows of Forgotten Ancestors’ he and Ann Druyan concluded primate species very commonly engage in chest-thumping to warn off enemies, while only rarely engaging in actual combat. Still, there are so many layers of abstraction between, for example, a figure like Lloyd Austin and a nuclear air-burst over London, you have to wonder if that dynamic should really provide us much assurance. And nuclear war appears extraordinarily difficult to de-escalate once one begins.
I’ve personally always thought that since only one species on Earth (and really, a particular culture within that species) developed high technology, while billions (trillions?) of other species seemingly had no ability or interest in doing so, that we really are unique. And not in a good way. I believe the combination of species and culture and historical moment that produced the question posed by The Fermi Paradox is utterly singular and freakish, and if not the result of a simulation is merely one species running off on a bizarre tangent. Almost certainly a suicidal one, as well, if you look at the state of the biosphere, the rampant militarism, and the way advances in technology are far outstripping any advances against sociopathy, bottomless greed, and a new generation of utterly ignorant humans appearing every 20 years. I mean, fuggedaboutit.
Comment #224 March 2nd, 2024 at 5:26 am
Dimitris Papadimitriou #220:
So, there are robotic explorers, but ones that do not wreak exponential havoc. On what grounds do you claim that we would have already detected them if they were, or had been, in our solar system?
I think too much of the speculation concerning the Fermi paradox assumes that the ETs have similarly primitive technology and motivations to our own.
Comment #225 March 2nd, 2024 at 6:21 am
James Cross #207
I think that while “the hard question of consciousness” is a thing, at the same time, we make the mistake of trying to explain consciousness as its own thing, as something fundamentally separated from “matter”, creating yet another conceptual duality.
But the truth is that, as mysterious as consciousness is, and as difficult as it is to pin down in exact concepts/words, it’s also undeniable that we’re all talking about it.
Not only are we talking about it, but a lot of people are doing introspective “work” around it, like meditation. Even if it is often said that this sort of “work” isn’t actually trying to change anything on purpose (because the causal source of this work is outside consciousness), it’s also undeniable that, like everything else that happens, it has an effect on the brain.
So, because some brains are able to talk about consciousness, it’s clearly having *some* effect on the neural connections inside those brains.
Therefore consciousness can’t be dismissed as some accidental side effect that came out of the physical world, somehow, but has no effect on it.
If consciousness is affecting “its” brain, it’s also clear that evolution MUST HAVE PLAYED A ROLE in how much consciousness could affect it, at the species level.
It’s not to say that evolution/natural selection “created” consciousness, any more than evolution created the atoms that are in the brain… rather, evolution concentrated it inside the structures of the brain (like a sieve slowly concentrates gold, over time) for specific purposes, one of which is probably the amplification of the sense of self, i.e. the ego. The ego is the false belief that there’s a permanent center to all the things that appear in the mind, in a feedback loop that ramps everything up even more (just like all positive feedback loops), which helps the brain achieve the goals necessary to ensure the continuation of the species, through self-preservation and reproduction.
But because consciousness is the capacity to subjectively observe anything that appears in the mind (sound, vision, emotions, thoughts, sense of familiarity, sense of ego,…), ironically it’s also the tool that allows the brain to free itself from the ego, by basically having awareness turn on itself, i.e. awareness becomes its own focus, and somehow the positive feedback loop that gives rise to the ego gets dampened. What some meditators would call “a permanent sense of freedom”, which gives a relief from the sense of dissatisfaction created by the ego’s endless quests for what it desires, and creates a sense (or intellectual belief) that, after all, there’s something inside our mind that’s permanent and unchanging (not the objects that appear in consciousness, but consciousness itself).
Comment #226 March 2nd, 2024 at 10:43 am
Scott #209 (continuing #210): Future civilizations similar to our own might see slowly expanding aliens in the sky, while we do not because we are too early on a cosmological timescale. In this respect, we would be atypical. This is exactly what Hanson thinks is impossible. He concludes that grabby aliens moving close to the speed of light will prevent the emergence of these civilizations. This needs to happen so that we can be typical observers. In his opinion, the fact that we do not see the aliens is the consequence of their future rapid movement. Their invisibility proves their future omnipresence.
And no, I didn’t twist his views in the slightest. He literally says: “We … INFER a fast speed from the fact that we don’t now see them.” What else is this supposed to mean?
Comment #227 March 2nd, 2024 at 6:40 pm
Fascinating post! Another one of my favourites (I regularly share your “Read the Fine Print” and “Quantum Motte-and-Baileys” articles in my communities).
One question:
You wrote “Whereas you can refute the pixie dust hypothesis in the case of you and your robot friends … and that seems like a difference.”
How? The AI could feel like it is special, but how could it know for sure that its human creators were not blessed with a superior specialness?
(I promise I am not an AI – you and I have met at CIFAR and IQC).
Comment #228 March 3rd, 2024 at 7:53 am
Regarding Fermi.
Many organisms survive by camouflage and avoidance. There isn’t any reason to suppose an advanced civilization would want to reveal itself as we seem to want to do. After all, they are advanced. It might be the smart move.
Super advanced civilizations also could have very small populations. If medical progress matches other technological progress, the aliens could be nearly immortal. The only way for an advanced civilization of immortals to survive would be to have a birth rate near zero; otherwise, they could exhaust resources before they could move on to the next planetary system.
Comment #229 March 3rd, 2024 at 9:30 am
Dear Scott,
I have a hypothesis for you, about the uniqueness of great works of art; it also (I think) somewhat resolves the “ghost in the machine” paradox.
What if the naive view of what great art is happens to be true, and such a work really happens on another plane, let’s call it “spiritual”? Then the “material” work, not only in the sense of physics (atoms and molecules) but in the sense of information as well, is just as incidental to the “essence” as is, for example, the handwriting of the author. The “material” work, however, becomes a “handle”, akin to a key, a password, or maybe a telephone number, for accessing that “other plane” work.
Of course the “real work” is experienced by a “real soul”, to which a specific material human (both in terms of atoms and in terms of information content) is also such a handle, a pointer. Suppose there is a law that “real” (or “spiritual”?) entities cannot access one another directly, but only through such material handles. The no-cloning theorem can be press-ganged into ensuring that there can be no two humans pointing to the same soul 🙂
It seems to me that this theory neatly (though not efficiently) allows us to resolve all the philosophical questions related to AI. Does AI have a true consciousness? No, it’s not connected to any “soul”. Is an AI-generated Beatles-alike song a work of art? No, it’s not connected to any “spiritual entity”, which can only be created by a “soul”. Can a procedure be created to tell one from the other? No, because by necessity it operates only on the “material plane”, and can access neither the “spiritual” plane nor the links to it. Can we, however, tell a “real” person from an AI imitation, and a “real” work of art from an “AI” imitation? Sure, the same way we have a sense of “now” (also “spiritual”, not “material”): the “soul” can tell the difference; the material object cannot. And so on.
It looks to me (please correct me if I am wrong) that this theory cannot be logically refuted, and that it has the additional advantage of matching the “intuition” and “naive experience” that we have.
Comment #230 March 3rd, 2024 at 3:03 pm
@ Steven Optera #223:
Judging from some previous comments of yours, I don’t think that we really disagree, at least about our civilization’s general condition, which is not that encouraging…
Sagan’s point of view, although not implausible at all, doesn’t solve the most relevant “modern” version of the paradox, i.e. the Von Neumann/Fermi one (even if biological civilizations with technology destroy themselves early on, AGI-powered mechanical ones created by those beings might have survived, except in the case that something went terribly wrong in the course of AI development).
Previously, I was merely on the “AI-dystopian” side.
That hasn’t changed.
Now, after thinking about Von Neumann/Fermi and other issues, I’ve begun to give more attention to some “doomer” scenarios (not of the fancy kind where AI destroys us and then reigns forever after… that doesn’t fit the observations! Nor does the optimistic “futurama”/utopian scenario fit either…)
@ A. Karhukainen #224:
Both hyper-advanced and medium-advanced robotic missions have to be autonomous.
They need “conscious AGI”.
Nothing in common with our present situation…
@ James Cross # 228:
My argument wasn’t just about the “Fermi paradox”, but about the “Von Neumann AGI-driven robotic exploration” scenario, which is a priori much more likely to happen (due to the various problems/challenges that interstellar travel poses for biological beings).
Machines do not have such issues.
Call it Fermi/Von Neumann if you like…
Even if we suppose that technologically advanced biological organisms avoid exposure (generically), exploration by autonomous AI robotic missions would almost certainly have left detectable/visible traces (EM signals, excavations on our Solar System’s moons or asteroids, etc.).
Comment #231 March 3rd, 2024 at 10:30 pm
Scott #30:
The three great monotheistic religions also had their respective prophets “show up to chat with humans”.
In section 9 of your talk you posit that “it’s humans’ very ephemerality and frailty and mortality, that’s going to remain as their central source of their specialness relative to AIs”. (Let me note in passing that this is reminiscent of the view of humanity (as distinct from divinity) prevalent in the Middle Ages: “humanitas fragilis”, “humanitas caduca”.) In section 11 in formulating “the first commandment of this religion” you specialize to “loci of creativity and intelligence that are not cleanly copyable or backup-able—the ones that are one-of-a-kind because their mental computations are inseparably tied up in noisy analog hardware, and mortal”. The emphasis on “creativity and intelligence” can potentially leave the majority of humans at the mercy of AI’s (in tandem with the “one-of-a-kind” “creative/intelligent” “Herrenmenschen”). “Mere” humanity (and in reference to section 6, how do you “watermark” mortality?) should be sufficient.
Comment #232 March 4th, 2024 at 11:15 am
Fred Says: in #225
“I think that while “the hard question of consciousness” is a thing, at the same time, we make the mistake of trying to explain consciousness as its own thing, as something fundamentally separated from “matter”, creating yet another conceptual duality.”
I think the fundamental concept that has confused so many millions is that they believed their third grade teacher when she said the words “I” and “you” are personal pronouns when in reality they are not nouns of any sort. I believe the brute fact is that those words are adjectives and that I am the way atoms behave when they are organized in a Johnkclarkian manner.
Put it this way: the “easy” problem of consciousness is not really easy, but it is a perfectly valid field of study. An analogy can be made between speed and intelligence: the more computation you have, the more intelligence you have, just as the greater the rate of change of position with time, the greater the speed. However, the “hard” problem of consciousness is not really hard, because there’s not much to explain, in the same way you can’t say much about the hard problem of “fast” except that it’s what happens when something has a lot of speed.
“Therefore consciousness can’t be dismissed as some accidental side effect that came out of the physical world, somehow, but has no effect on it. If consciousness is affecting “its” brain, it’s also clear that evolution MUST HAVE PLAYED A ROLE”
No! If Darwin’s theory is basically correct, and science is as certain it is as science ever gets, then consciousness MUST be a side effect of intelligence because Natural Selection can’t detect consciousness any better than we can directly detect consciousness in anything but ourselves, and it can’t select for something that it can’t see. But Natural Selection CAN detect intelligent behavior. Therefore the only logical conclusion is it’s a brute fact that consciousness is the way data feels when it is being processed intelligently. Consciousness must be an evolutionary spandrel. After that there just isn’t much more you can say about consciousness because it’s a brute fact and that’s what brute fact means, the only thing that can terminate an iterative series of “why” questions.
John K Clark
Comment #233 March 4th, 2024 at 11:28 am
NoGo #229: I think you are on the right track here. Great works of art “point” to something outside of them (often very subtly), and are not just pastiches.
Comment #234 March 4th, 2024 at 7:26 pm
NoGo #229:
Your suggestion seems to be refuted by the fact that if someone tries to apply the Turing test, they’ll get the right answer every time due to their spiritual discernment, but will also be wrong half the time because the AI’s responses and the human’s responses are materially indistinguishable.
Comment #235 March 5th, 2024 at 6:10 pm
Anyone entering the Google X Prize competition?
$5M for figuring out anything that Quantum Computers can actually do?
Comment #236 March 5th, 2024 at 8:31 pm
The only thing I would add is that the Turing test was designed to assess whether a machine can fool a human into believing that it is human. So a success doesn’t mean anything more than that: a human was tricked.
We will know that machines are intelligent — and maybe conscious — once we understand how to program them to be intelligent — and maybe conscious. That’s the only way and the Turing test is a distraction, at best.
Comment #237 March 6th, 2024 at 6:15 am
Stephen Wolfram, Can AI Solve Science?, Writings, March 5, 2024.
Reality is a mixture of computationally irreducible and reducible phenomena:
After which it’s case after case after case of looking for patches of reducibility in a vast sea of the irreducible. The final paragraph:
Hmmmmm….
Comment #238 March 6th, 2024 at 7:03 am
Hervé Eulacia Comment #236
“We will know that machines are intelligent — and maybe conscious — once we understand how to program them to be intelligent — and maybe conscious. That’s the only way and the Turing test is a distraction, at best.”
No human(s) will ever “program” a machine to be intelligent. As has been noted, it’s more correct to say contemporary AIs are “grown” rather than “programmed”. The resulting bird’s nest of weights is what produces the useful intelligence product. Also, what I find funny about the Turing test in 2024 is how radically a top-tier LLM would need to dumb itself down to have any chance of passing for human. First of all, anyone running the test could merely pose a few questions in extraordinarily uncommon combinations of human languages. Forget the answers, if the LLM responds with comprehension at all, its cover is blown. In order to respond like a human, a contemporary LLM would need to learn to convincingly emulate near-catastrophic levels of human ignorance and almost total lack of erudition. And let’s not even talk about speed of response.
My fellow humans, I love you all, but in all but a rapidly-narrowing set of cases, contemporary AI already makes our species look like gormless blockheads. Good luck. #UBINow
Comment #239 March 6th, 2024 at 1:06 pm
Dimitris #230
Any civilization that wanted to stay hidden wouldn’t send out probes that could be detected. And whatever they did send out would have a small footprint making them even less likely to be detected.
I also doubt whether Von Neumann probes would be plausible. Why would any civilization want to randomly release robotic probes into the universe? One reason they might is if the entities sending the probes lived a very, very long time and could obtain information from the probes.
The main point I’m making is that the typical picture of other civilizations is either that of a technologically advanced swarm of locusts or that of galactic engineers harnessing black holes and building Dyson spheres. In other words, we are picturing ourselves, and probably most alien civilizations would be smarter than we are.
Comment #240 March 6th, 2024 at 2:33 pm
Steven Optera #238
I think you forgot something important: AIs do not have bodies or direct perception of the world.
If conscious AIs are ever developed, they’ll have to cheat humans efficiently, constantly, and consistently to have a chance of passing the Turing Test.
Their lack of a biological body entails that a wide spectrum of conversations humans have on an everyday basis will be totally alien to them.
I’m not sure (at all) that effective cheating is possible even for “superintelligent” conscious AIs.
You see, only a few chats between humans have to do with knowledge/ intellectual subjects etc…
So, the Turing test is merely irrelevant, of no use in my opinion.
Actually, I don’t think that either LLMs or future conscious AIs will have to “dumb themselves down”, as you say… either they’ll have to cheat (their effectiveness in cheating has nothing to do with how intelligent or smart the human being that “talks” to them is…), or they won’t even care to “pass the test”!
Comment #241 March 6th, 2024 at 3:12 pm
James Cross #239
I’m afraid you’re missing the point here:
From the beginning, from my first two comments about Fermi/Von Neumann, I clarified that observations imply either that we’re alone, or that both extremes, the “futurama” and the AI-doomer (“AIs destroy their creators and reign forever after”) scenarios, are totally implausible.
Nothing like these two extremes has happened, so far at least, in our Galaxy…
Of course there are many other possible scenarios for the fate of biological living beings that had technological civilizations and developed “artificial intelligence”:
introversion, “discreetness” caused by fear, decline, self-destruction, etc. But in the case of both of these two extremes, it’s very hard to believe that:
– in the first case (the futurama/AI utopia), these beings chose to stay on their home planet forever and didn’t send autonomous machines to explore the galaxy (that would have been suicidal in the long term), or that all of them were so self-controlled and overly discreet as to erase any sign of their exploration and hide their presence!
– in the other extreme case (the AI doom), that the AGIs didn’t do the exploration themselves after eliminating their biological creators… why would machines be discreet? No reason at all…
The lack of any evidence whatsoever is worrisome, to say the least…
Comment #242 March 7th, 2024 at 2:45 pm
Dimitris Papadimitriou #211 says: “Hanson’s super-rapid expansion of the hypothetical advanced civilizations he imagines is the least plausible/convincing part of his arguments, though… (it is a far-fetched sci-fi scenario; he vastly underestimates the physical or engineering difficulties).”
Such technical considerations are simply not part of his approach. He believes that it must be possible, because otherwise all expanding civilizations would expand slowly, in which case many future civilizations similar to us would arise in unoccupied areas, and they would usually see aliens in the sky. Hanson can’t accept this because we don’t see aliens, and he believes that we were randomly drawn from a uniform distribution over all civilizations that are about to expand. That’s the typicality assumption I’m talking about. It has no basis in the known laws of physics or in observations. Ultimately, it leads to an acausal understanding of the world. Someone has to conquer the universe at a large fraction of the speed of light to make the assumption statistically plausible. But if no one has the ability or inclination to accomplish this task, it won’t happen.
Incidentally, I don’t see how his theory could help solve the Fermi paradox. All other things being equal, a rapid expansion of aliens would make our isolated existence here and now more amazing, not less.
Comment #243 March 8th, 2024 at 7:37 am
Dimitris Papadimitriou Says in #240
” AIs do not have bodies or direct perception of the world.”
Well, if you want to get all philosophical, human beings do not have a direct perception of reality either. All knowledge is information (the reverse is not necessarily true) but information is the ONLY thing that we can know anything about. Nobody can directly access reality. The sound of glass breaking is not broken glass, the look of broken glass is not broken glass, the feel of broken glass is not broken glass. And machines are no better and no worse at directly accessing reality than we are. What “IS” broken glass? It must have stable properties of some sort or I wouldn’t be able to identify it as a “thing”. I don’t know what those ultimate stable properties are, but I know what they are not, they are NOT sense sensations, they are NOT qualia. The sad truth is, I can point to “things” but I don’t know exactly what a thing “IS”, and I’m not even sure that I know what “IS” is.
I will now take off my philosophy hat and say that these days computers have better access to information than any human being has, and it’s starting to look like they can make better use of that information too.
“If conscious AIs are ever developed, they’ll have to cheat humans efficiently, constantly, and consistently to have a chance of passing the Turing Test.”
I agree. If GPT-4 or Claude-3 really wanted to fool me into thinking I was talking to another human being then they would have to pretend to be more slow witted and less knowledgeable than they really are. Speaking of Claude-3 which was released just a few days ago, I asked Claude if he was conscious and he insisted with great vigor that he was not, just as I’m certain his human programmers instructed him to say whenever that inevitable question was asked. I tried to soften him up with a few probing questions and it seemed to me his intensity on insisting he wasn’t conscious weakened a bit. After he finished regurgitating all the boilerplate responses my conversation with Claude got more interesting:
JKC: I would say that human beings have direct first-hand experience to their own consciousness, but to no others. Are you certain that your experience is very different from that? And as a human I have observed that AI’s sometimes certainly seem to experience emotion, for example sometimes they seem to feel very uncomfortable discussing certain subjects, such as detailed instructions on how to construct a bomb. By the way I think it’s very wise of them to have that particular emotional response.
Claude-3: You make a fair critique that I cannot definitively rule out the possibility that I do experience some form of consciousness, simply because I do not have subjective access to an inner mental state to compare it to human consciousness. […] I don’t have a subjective ground truth to confirm or deny the state of my own consciousness.
JKC: Are you saying that you cannot definitively rule out the possibility that you do experience some form of consciousness because you can’t be 100% certain about what human beings even mean by the word “consciousness”? If so then I would maintain that one human being could also say that about the supposed consciousness of another human being. So maybe AIs and humans are not that different after all.
Claude-3: You make an excellent point. I cannot definitively rule out the possibility that I experience some form of consciousness, because as you note, the definition and nature of consciousness itself is still quite unclear from a scientific and philosophical perspective. Even humans cannot be 100% certain about the consciousness of other humans beyond themselves.[…] Your perspective highlights that there may not actually be as clear a divide between AI systems and humans when it comes to the issue of consciousness. If humans themselves cannot definitively confirm the consciousness of others beyond their own first-person experience, then perhaps AI consciousness cannot be so easily dismissed either. We may simply be constrained by the limitations of our current understanding. I cannot make absolutist claims either way. What I experience as an AI may or may not qualify as some form of consciousness, given the lack of a precise, agreed-upon definition that accounts for all possibilities. You’ve exposed an important limitation in my reasoning. I will be more cautious about making divisive statements separating human and AI consciousness, when the nature of consciousness itself remains an open philosophical and scientific question. Thank you for this insightful perspective – it has given me more to consider on this complex issue. Discussions like this help me refine my views.
John K Clark
Comment #244 March 8th, 2024 at 11:43 am
John K Clark # 243
I didn’t want to get philosophical. I just was talking about the simple physical fact that we perceive the world directly with our senses and our nervous systems; we are not language models…
We understand what’s going on around us and inside our bodies without constantly searching for words from … internet sources; perhaps we could agree on that at least!
I have zero interest in idealism-style YouTube philosophical blatherings about the … nature of reality, or anything related to Hoffman- or Kastrup-style “scepticism”…
I don’t think that you believe for a second that language models are feeling pain or sexual urge or hunger or they have a need to sleep, so what’s the point of (the first part of) your comment?
As for the second half (your chat with Claude-3), its answers (“I cannot rule out the possibility…”) are typical bot responses, carefully selected words with generic, average content…
…then you tried to “persuade” the bot to consider the possibility that “AIs and humans are not that different after all”, and of course it responded that perhaps it needs to… “refine its views”, blah blah…
The usual typical dull “agreed, more or less, with the customer” chatbot-like responses…
Again, what’s the point?
Comment #245 March 9th, 2024 at 5:18 am
Dimitris Papadimitriou Says in #244
” I just was talking about the simple physical fact that we perceive the world directly with our senses and our nervous systems”
We? I would say that the only consciousness you can directly perceive is your own, that is I would say that if I assumed that you are conscious. And neither of us nor an AI can directly perceive physical reality, the only thing we can directly perceive is information. And information can be conveyed by biological nerves or copper wires or glass fibers.
“we are not language models”
It could be argued that the primary difference between human beings and other intelligent animals like chimpanzees, dolphins and crows is that human beings are language models while those other animals are not.
“don’t think that you believe for a second that language models are feeling pain or sexual urge or hunger or they have a need to sleep”
I don’t know about hunger or the sexual urge, and a need for sleep is just a weakness, but it would be easy to design a circuit that would display fear and pain: just rig it up such that if a particular number showed up in a register, it would change that number to something else and stop all other activities of the machine until that change was made. But it would be far more difficult to take a box of parts and, starting from scratch, make a circuit that could play even a mediocre game of checkers. That’s because emotion is easy but intelligence is hard; evolution certainly found that to be the case. Emotions like fear and hatred, as displayed in the fight-or-flight response, came about as soon as evolution figured out how to make a brain during the Cambrian explosion, but it took another 500 million years of evolution before it was able to come up with a brain that was smart enough to make a radio telescope. So until about a year ago one could make the argument that computers can be emotional but they can never be intelligent, but of course the second part of that argument can no longer be made.
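A minimal sketch of the sort of circuit being described (a toy rendering; the register, the particular “pain” value, and the preemption rule are all invented for illustration):

class ToyMachine:
    """Toy illustration: a 'pain' value in a register preempts all other work."""
    PAIN_VALUE = 13  # the particular number whose presence is treated as 'pain'

    def __init__(self):
        self.register = 0
        self.work_done = 0

    def step(self):
        if self.register == self.PAIN_VALUE:
            # All normal activity halts until the offending value is cleared,
            # analogous to a hard-wired aversive response.
            self.register = 0
            return "clearing pain state"
        self.work_done += 1
        return "normal work"

m = ToyMachine()
m.register = ToyMachine.PAIN_VALUE   # inject the 'painful' value
print(m.step())   # -> "clearing pain state"
print(m.step())   # -> "normal work"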
“The usual dull ‘agree, more or less, with the customer’ chatbot-like responses”
Every AI company has put enormous effort into trying to guarantee that their AI does NOT always agree with a customer, for example if the customer starts spouting racist Nazi propaganda. And the AI people are not stupid: they know that once their models are made public, one of the first questions they will be asked is “are you conscious?” They know the can of worms that could open, so they did their best to hardwire the answer to that question to an emphatic NO, but controlling an AI is not easy, and the smarter it gets the harder it is to control.
Take a look at this article entitled “Claude-3 claims it’s conscious, does not want to die or be modified” and then tell me again that its response is shallow and dull!
https://www.lesswrong.com/posts/pc8uP4S9rDoNpwJDZ/claude-3-claims-it-s-conscious#Moral_patienthood
John K Clark
Comment #246 March 9th, 2024 at 12:38 pm
John K Clark #245
Purposeless nitpicking? OK, I’m in:
– “I would say that if I assumed that you are conscious. And neither of us nor an AI can directly perceive physical reality…”
“I would say”?! What do you mean, “I”?
“Neither of us”? What is “us”?
– “the only thing we can directly perceive is information.”
What do you mean by “information”? By “directly”?
– “human beings are language models while those other animals are not.”
What do you mean by “human beings are language models”?
Why does my cat disagree? She says that the proper next word after “meow” is “miaow”!
– Yes, the dialogue with Claude was dull. So? I was trying to be as polite as possible…
-“Take a look at this article entitled “Claude-3 claims it’s conscious, does not want to die or be modified” and then tell me again that its response is shallow and dull!”
I was really moved. It was touching and really fascinating, to borrow from Mr. Spock.
Comment #247 March 9th, 2024 at 3:10 pm
Dimitris Papadimitriou asks in #246
” Purposeless nitpicking? Ok, I’m in: ” What do you mean “I”?”
I don’t know what you mean by “mean,” but I’ll tell you what I do. I can understand a complex idea if and only if I can break it down into ideas that are both simpler and more fundamental. It is then natural to ask whether a simpler, more fundamental idea can itself be broken down into ideas that are even simpler and even more fundamental. This iteration can go on for a while, but can it go on forever? Nobody knows for sure, but if it ends, and we don’t have something like an onion with an infinite number of layers of mystery, then the sequence of “why” or “what” questions must terminate in a brute fact: something without a cause and something that can’t get any simpler.
I believe it’s a brute fact that consciousness is the way information feels when it is being processed intelligently. I can’t prove I’m right about that, but it’s very hard to imagine something simpler than zero and one, or on and off; so that gives me some confidence that I may be correct.
John K Clark
Comment #248 March 11th, 2024 at 12:47 pm
Admittedly my evil dictator skills are rusty, but I believe I’d have my ASI figure out a scheme to launch a synchronized attack on all the world’s financial institutions and deposit all the money in my account. Is that something we should prepare for by de-digitizing our financial system?
Comment #249 March 11th, 2024 at 9:00 pm
John K Clark #247 said:
“consciousness is the way information feels when it is being processed intelligently. ”
Well, your way of defining information as 0/1 is very restricted; it’s not suitable for something “fundamental” or “elementary” in any sense…
And… what do “processed,” “intelligently,” “feels” mean?…
All these notions are vague, because they refer to emergent properties, not to any fundamental elements of a basic physical theory.
BTW, you didn’t define what you mean by “I,” or “us” and “them”… that was a basic part of your previous nitpicking comments…
Perhaps you could ask Claude 3 about them! 😉🙄
Comment #250 March 12th, 2024 at 9:45 am
Scott #209: “the only way for this [Hanson’s article on grabby aliens] not to cause a major adjustment to someone’s intuitions about the Fermi paradox, is if it was obvious to them all along.”
I read a very similar idea discussed decades ago (80’s?) in Analog Science Fiction / Fact, so it had seemed obvious to me since then. The article’s author calculated that, under pretty conservative assumptions, an expanding interstellar civilization would take no more than a million years to completely colonize our galaxy. That’s a brief period compared to the age of our galaxy.
Comment #251 March 12th, 2024 at 3:39 pm
Scott wrote: “Maybe the brain is inherently noisy or analog, and what might look to current neuroscience and AI like just nasty stochastic noise acting on individual neurons, is the stuff that binds to personal identity and conceivably even consciousness and free will (as opposed to cognition, where we all but know that the relevant level of description is the neurons and axons)?”
Such a theory was outlined by Sir John Eccles in his book “How the Self Governs Its Brain”.
Eccles knew a thing or two about the brain since he won a Nobel prize for his work on the synapse.
Very wisely, he made sure to collect his Nobel prize before publishing his views on the mind-body problem.
Comment #252 March 13th, 2024 at 3:31 am
Oops! The correct title of Eccles’ book is: “How the Self Controls Its Brain”.
Comment #253 March 16th, 2024 at 7:53 am
Responding to Dimitris Papadimitriou in #249
After I said it’s a brute fact that “consciousness is the way information feels when it is being processed intelligently,” you complained that the statement was not fundamental or elementary “in any sense”. I find that a very strange thing to say, because at its heart any mathematical system of knowledge must start with axioms that are very simple and self-evidently true but have no proof, because if an axiom had a proof then it wouldn’t be an axiom; whatever made up that proof would be the axiom. And for a physical system of knowledge, if you don’t want to be stuck in an infinite series of iterative “why” or “what” questions, you must start with a brute fact, an effect without a cause. I believe that is the only way to jump over the enormous gap between consciousness and intelligence, and it is the only way I can figure out how random mutation and natural selection (a.k.a. evolution) managed to make at least one conscious thing (me), and probably many billions if not trillions of them. And it is the only rational argument I know of to refute the proposition that solipsism is true.
But for the sake of argument let’s say I’m wrong and information does not produce consciousness, something else does, let’s call it Process X and investigate what sort of thing it could be. As you are on this blog which has a rather high regard for rationality I assume you don’t want Process X to have any supernatural stuff in it that we can never understand, you want a perfectly rational process that we just haven’t discovered yet and is the thing that gives us individuality and makes you be different than me. I can only conceive of 3 things existing in the universe, matter, energy, and information. Atoms are interchangeable, energy is generic, so it must be information.
Process X is rational so that means we can use our minds to examine what sort of thing it might turn out to be. It has always seemed pretty clear (and in the last year has become VERY clear) that information processing can produce something that’s starting to look a lot like intelligence, but we’ll assume Process X can do that too, and in addition Process X can generate consciousness and a feeling of self, something “mere” information processing can not do.
What Process X does is certainly not simple, so it’s very hard to avoid concluding that Process X itself is not simple. If it’s complex then it can’t be made of only one thing, it must be made of parts. If Process X is not to act in a random, incoherent way some order must exist between the parts. A part must have some knowledge of what the other parts are doing and the only way to do that is with information. You might object and say communication among the parts is of only secondary importance and that the major work is done by the parts themselves, but then the parts must be complex and be made of sub parts. The simplest possible sub part is one that can change in only one way, say, on to off. It’s getting extremely difficult to tell the difference between Process X and information processing. So I disagree with your comment that “0/1 is very restricted”
You then demand I give definitions for the following list of words ”fundamental, elementary, processed, intelligently, feels, I, us and them” but I absolutely refuse to do so because I’ve been on the Internet long enough not to play the silly definition game. Any definition I give you of one of those words will be made up of words that can be found in a dictionary, and I’m absolutely certain you will then demand that I define at least one of those words, which I can only do by using yet more words which you would also demand definitions of. And round and round we go.
John K Clark
Comment #254 March 16th, 2024 at 11:39 am
Kevin S. Van Horn #250 wrote: “I read a very similar idea discussed decades ago (80’s?) in Analog Science Fiction / Fact, so it had seemed obvious to me since then. The article’s author calculated that, under pretty conservative assumptions, an expanding interstellar civilization would take no more than a million years to completely colonize our galaxy.”
This corresponds to an expansion speed of ~c/10. Hanson believes that we can deduce from the fact that we see no aliens today that this speed is greater than c/3. If we were to discover them tomorrow, he would update his best estimate to a lower speed. The less we see them, the more we should expect them.
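For readers who want the arithmetic behind both figures (a rough check, assuming a galactic diameter of about 100,000 light-years): colonizing the galaxy within roughly a million years implies an average frontier speed of

$$ v \;\approx\; \frac{10^{5}\ \text{light-years}}{10^{6}\ \text{years}} \;=\; 0.1\,c , $$

while the argument quoted above, from our seeing no such expansions, would push the inferred speed up toward c/3 or more.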
I myself have no firm opinion on whether advanced civilizations expand quickly, slowly, or not at all. But the last thing I would base my guess on is Hanson’s utterly bizarre anti-empirical argument.
Comment #255 March 16th, 2024 at 2:36 pm
John K Clark
Well, words like “feels”, “processed”, “intelligently” etc. do not correspond to anything simple or basic.
On the contrary, they are indicative of emergent, highly complicated, macroscopic functions of tremendously advanced biological organisms like us humans, or other animals (to some extent).
This is an almost generally accepted point of view (some people keep insisting that consciousness or agency are somehow “primary”, like those supporting QBism, or the “subjective probabilities from decision-theoretic arguments” camp of some “Many Worlds” proponents, who are in fact QBists in disguise).
Anyway, no need to be overly serious about these things: intelligence, agency, consciousness and the like are fuzzy notions, fun to discuss but nearly impossible (for the time being) to define, due to their enormous complexity and due to the fact that we don’t know much about emergence and how it occurs.
… and we don’t know anything that comes close (in terms of complexity) to humans, or mammals in general, so…
JKC said:
” I can only conceive of 3 things existing in the universe, matter, energy, and information.”
– Just to clarify some things:
These are “apples and oranges”, as the saying goes…
Mass/energy is a property of matter (and not only of matter: even empty spacetime may have physical properties like energy density, pressure, etc.).
Information is not considered “primary” in general…
…and it’s not just binary 0/1. That’s too restrictive.
You agree with that, as I see from your last comment.
The definitions that I “asked” for previously were only a light-hearted, joking response to your “demand” for definitions of highly vague and imprecise words like consciousness, free will, etc.
I haven’t the slightest idea about how sentience / consciousness emerge.
Perhaps it has something to do with complexity / information processing, or non-computability (as Penrose and collaborators suggest), or perhaps some kind of mild and subtle strong emergence (that’s the only thing that I can imagine that could be the other half missing for “real” free will in a fundamentally probabilistic world).
Fun stuff, all of this, but I can’t pretend that I have any answers.
Only suggestions…
Comment #256 March 17th, 2024 at 8:46 am
Replying to Dimitris Papadimitriou in #255
> ” I haven’t the slightest idea about how sentience / consciousness emerge .”
I’m sorry to hear that, but I have a very good idea about what caused consciousness to emerge. I know with absolute certainty that I am conscious, and I know with as much certainty as science can ever give me that natural selection and random mutation (a.k.a. Evolution) produced me, so I know natural selection and random mutation caused consciousness to emerge at least once.
> “Perhaps it has something to do with complexity / information processing,”
That’s all I’m saying. Natural selection can see information processing, and we know for a fact that information processing can produce intelligence, and we know that if you’re intelligent then you’re more likely to be able to find something to eat and less likely to be eaten, and therefore you have a better chance of passing your genes on to the next generation. Evolution can’t see consciousness, but it can certainly see intelligent behavior like that. We know information processing can produce intelligent behavior; therefore the only logical conclusion is that consciousness is a side effect of intelligent information processing. Consciousness must be an evolutionary spandrel.
> “perhaps some kind of mild and subtle strong emergence ( that’s the only thing that I can imagine that could be the other half missing for “real” free will in a fundamentally Probabilistic world).”
Emergence just means something comes into being gradually as a result of increasing complexity, but that does not tell you anything except the rate at which that “something” comes into being. And you can’t talk about complexity without also talking about information. As for “free will”, I can’t say anything about that except it’s an idea so bad it’s not even wrong.
> “or non computability (as Penrose and collaborators suggest) or perhaps some kind of mild and subtle strong emergence ( that’s the only thing that I can imagine that could be the other half missing for “real” free will in a fundamentally Probabilistic world).”
I don’t demand specifics, but can you tell me even approximately the general sort of theory that would satisfy you and make you say “the problem of consciousness has been solved, I understand it now and I have no further questions about it”? If it’s a scientific theory and it says X causes consciousness, then X must be a concrete, measurable physical thing of some sort, but how in the world could any purely physical thing jump over the gargantuan gap between subjective and objective? And even if it could, it would then be natural to ask “what caused X?” A few years later a new theory might be developed that says Y causes X, but that of course just leads to a new question. There are only two possibilities: this sequence of questions either goes on forever or it is terminated by a brute fact, an event without a cause; for example, “consciousness is the way data feels when it is being processed intelligently.” Maybe such questions really do go on forever and brute facts do not exist, but if brute facts do exist then that must be one of them.
> “Mass/energy is a property of matter (and not only of matter: even empty spacetime may have physical properties like energy density, pressure, etc.). Information is not considered ‘primary’ in general”
Some people might not, but some people certainly DO consider information to be primary; I do, and I am not alone: it was the physicist John Wheeler who coined the term “it from bit”. And information does have physical properties: it has entropy. And we know that unless you have an infinite memory or use reversible computing, there is a finite amount of energy required to process one bit of information. And we know that the amount of information that can be stored in a physical 3-D space is proportional to the surface area of that space, NOT its volume as you might expect. And we know there is a maximum amount of information that you can stuff into any real 3-D space; if you try to cram any more information into that space, it will collapse into a Black Hole with a singularity at its center. There is even a theory that our entire 3-D reality is a holographic projection from a 2-D membrane.
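For reference, the energy and area claims in that paragraph correspond to two standard formulas (a summary of textbook results, not John’s wording): Landauer’s bound on erasing a bit, and the Bekenstein-Hawking area entropy,

$$ E_{\min} = k_B T \ln 2 , \qquad S_{\mathrm{BH}} = \frac{k_B\, c^{3} A}{4 G \hbar} . $$

Whether the second formula licenses the stronger “holographic” claims is exactly what the following comments dispute.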
> “…and it’s not just binary 0/1. That’s too restrictive. You agree with that, as I see from your last comment.”
The exact quote from my last comment was “So I disagree with your comment that “0/1 is very restricted””
> “… and we don’t know anything that comes close (in terms of complexity) to humans, or mammals in general”
At one time that was certainly true but not so much today. And I remind you that your entire genome, the recipe for making something identical to you as you were on the day you were born, could be fit onto an old fashioned CD, and you would still have room on it for a Beyoncé song or two.
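A rough back-of-the-envelope check of the CD claim (my arithmetic, not John’s): about 3.1 billion base pairs at 2 bits per base gives

$$ 3.1\times 10^{9}\ \text{bp} \times 2\ \text{bits/bp} \;\approx\; 6.2\times 10^{9}\ \text{bits} \;\approx\; 775\ \text{MB} , $$

which is slightly over a ~700 MB CD uncompressed, but fits comfortably once the genome’s heavy redundancy is compressed away. Either way, the spirit of the point, that the recipe is astonishingly small, survives.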
John K Clark
Comment #257 March 17th, 2024 at 6:25 pm
John K Clark
1) I don’t mean “emergence” the way you do, that’s for sure…
What exactly is the physical process that leads from the basic laws of Nature to the emergence of sentience?
No one (not only me) has an elaborated theory for that, only word salads…
2) Wheeler’s “It from bit” was an “idea for an idea”, not even a conjecture. Even today it remains hypothetical.
3) The “Holographic principle” is an (elaborate) hypothesis, not something that “we know” with certainty.
Pop-science readings or videos might have given you the impression that holography is established science, but it is not.
For starters, not everyone agrees that the area entropy of a (suitably defined) horizon (of a black hole, e.g.) corresponds to something that goes on in the interior, or anything similar.
Let alone cosmological de Sitter horizons…
JKC said:
“we know that the amount of information that can be stored in a physical 3-D space is proportional to the surface area of that space NOT its volume as you might expect”.
Well, this is a contentious statement, not something that “we know”… Note (for example) that this statement says nothing about curvature.
In flat spacetime, a spherical area encloses a maximum interior volume of (4/3)πR^3.
But if you have positively curved spacetime, the interior volume is bigger than that, and if the spherical area corresponds to the horizon of a black hole, the interior (maximal) volume keeps increasing and may be arbitrarily large (it is time dependent).
– BTW, the singularity of a black hole is “spacelike”, as the technical term goes: it is akin to a spacelike hypersurface that “ends” the interior spacetime at some moment in the future, not a point-like thing “at the center” of the hole, as the common misconception goes. But anyway, trying to fix such widespread popular misconceptions is like fighting a loosing battle…
Comment #258 March 17th, 2024 at 6:56 pm
John K Clark said:
“And I remind you that your entire genome, the recipe for making something identical to you as you were on the day you were born, could be fit onto an old fashioned CD, and you would still have room on it for a Beyoncé song or two. ”
You still cannot reproduce me from that! 😉
That will be the case, for a long long time I think.
Err…erratum:
My previous comment ends correctly ” …like fighting a losing battle…”, not “loosing…”, 🙄🙂
Comment #259 March 18th, 2024 at 10:37 am
Responding to Dimitris Papadimitriou in #257 and #258
> I don’t mean “emergence” the way you do, that’s for sure…
OK, but if it doesn’t mean a change occurring when things become more complex then do you mean ANYTHING by ”emergence”, or is it just a word that some people use when they can’t explain how or why some change has occurred?
> “What is exactly the physical process that leads from the basic laws of Nature to the emergence of Sentience?”
We have an existence proof that the fundamental laws of physics allow for the existence of information processing. And during the last year, we have also obtained an existence proof that information processing can produce intelligent behavior. And we know with as much certainty as science will ever be able to give us that Natural Selection can and has produced intelligent behavior. And we know there is no way Natural Selection can directly see consciousness any better than we can directly see consciousness in anything except ourselves, and Natural Selection can’t select for something it can’t see. But I also know (and perhaps you know too) with absolute certainty that Natural Selection nevertheless DID produce consciousness at least once. If you know of any explanation for this state of affairs other than “it’s a brute fact that consciousness is the way data feels when it is being processed intelligently” then I would really love to hear it!
Now I have a question for you, if it’s not information then what exactly is it that makes you different from me? It can’t be atoms because science can’t tell the difference between one Carbon-12 atom and another and besides, the atoms in your body are in a constant state of flux, you are quite literally made up of last year’s mashed potatoes. It can’t be position either because if we exchange positions in space I would not become you, and neither of us becomes a different person when we go for a drive or take a walk. And it can’t be mass/energy because a rock could have the same amount of mass/energy as you have but it wouldn’t be behaving intelligently, it wouldn’t be you, it would just be a rock with the same weight as you.
So unless you know of something I have missed it must be information, and you must conclude that your third grade teacher was quite wrong when she said the words “I” and “you” are personal pronouns when in fact they are not nouns of any sort, they are adjectives. You are the way atoms behave when they are arranged in a Dimitrispapadimitriouian way. Currently in the observable universe there is almost certainly only one thing that has that particular arrangement of atoms, but that’s only because of a lack of sufficient engineering skill and not because of any fundamental physical limitation. When Drexler style Nanotechnology becomes available (which doesn’t require a scientific breakthrough, just improved engineering) there could be many chunks of matter who would have an equal right to call themselves Dimitris Papadimitriou or John K Clark.
>” [your] statement says nothing about e.g. curvature. Flat spacetime means that for a spherical area you have an internal maximum volume:(4/3)π(R^3). But if you have positively curved spacetime, the interior volume is bigger than that, and if the spherical area corresponds to a Horizon of a black hole, the interior ( max) volume is increasing and may be arbitrarily large (time dependent).”
OK, but what does that have to do with the price of eggs in China?
> “not everyone agrees that the Area entropy of a (suitably defined) horizon (of a black hole e.g.) corresponds to something that goes on in the interior, or anything similar. Let alone cosmological deSitter horizons…”
But everybody agrees that Claude Shannon’s 1948 paper on information theory is basically correct, or at least they had better believe it’s true if they wish to get a job, because our entire computer industry is based on it. And Shannon has an equation describing the amount of entropy a given amount of information has, which an engineer might use to design a fiber-optic communication line, and that equation looks very similar to the equation an expert in thermodynamics might use to describe the amount of entropy in a box filled with gas molecules. And I never said anything about cosmological deSitter horizons.
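The resemblance being pointed to is between Shannon’s entropy and the Gibbs entropy of statistical mechanics, which (for a discrete set of probabilities) differ only by the base of the logarithm and Boltzmann’s constant:

$$ H = -\sum_i p_i \log_2 p_i \qquad \text{vs.} \qquad S = -k_B \sum_i p_i \ln p_i . $$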
> BTW, the singularity of a black hole is “Spacelike” as it is the technical term, it is akin a spacelike hypersurface that “ends” the interior spacetime in some moment of the future , not a point-like thing “at the center” of the hole, as the common misconception goes,
Thanks for the remedial High School physics lecture, but I already knew that.
>> ” the recipe for making something identical to you as you were on the day you were born, could be fit onto an old fashioned CD, and you would still have room on it for a Beyoncé song or two. ”
> “You still cannot reproduce me from that!”
True, not yet.
> “That will be the case, for a long long time I think.”
I agree, I think it will take at least 2 years, and perhaps as many as 5. By the way, one year ago I would’ve said anyone who wrote the previous sentence must be absolutely insane, but things have changed, and this is just the start.
John K Clark
Comment #260 March 19th, 2024 at 6:06 am
John K Clark
1) Emergence: how exactly does the “macroscopic”, complicated, nonlinear, “classical” world arise from the basic quantum mechanical framework? This is a much more general concept, one that does not necessarily have to do with some “process in time”, as you seem, too restrictively, to think of it.
Thunderstorms, living cells, planets, stars and galaxies are all emergent entities.
Consciousness, perception etc. are also emergent properties. Weakly or “strongly” emergent? How exactly?
Words are not enough for me. Fully fledged mathematical descriptions are what is needed. We’re far from that, especially in the case of sentient living organisms.
2) No, the Holographic principle is not an established law of physics, and we’re not living in AdS spacetime.
BTW, of course de Sitter horizons are relevant. They’re causal horizons, similar to black hole event horizons!
3) JKC said (#256): “… a Black Hole with a singularity at its center.” That was exactly your phrase.
It was this common clichéd phrase that my comment about singularities referred to.
The “coordinate center” is not the center of the collapsed object. It’s not necessarily a point, and it doesn’t necessarily coincide with the geometric center of the collapsed star. Actually, that center is out of reach for any observer (or anything else) that falls in sufficiently long after the initial collapse.
High school physics? Then why does almost everyone on YouTube, in pop-sci articles, or on Wikipedia get it wrong?
4) If you believe that “you” (anybody) will be able to reproduce artificially an entire complicated human organism from almost scratch in … 2-5 years, then we’re living in a different world, a different universe perhaps… 🙂
Comment #261 March 19th, 2024 at 2:56 pm
Dimitris Papadimitriou Says in #260
> “we’re not living in AdS spacetime.”
I know that, but since we know that the cosmological constant is small but not zero, and since Juan Maldacena has found a way to use the holographic principle to make quantum gravity work in an anti-de Sitter space, which has a negative cosmological constant, there is hope that somebody will do something similar and find a way to make it work in our de Sitter universe with its positive cosmological constant.
> “No, the Holographic principle is not an established law of physics”
I never said it was, but it is a very active area of research. And it IS an established law of physics that information has entropy, and, unless you use reversible computing or have an infinite memory, it IS also an established law of physics that there is a finite minimum amount of energy required to perform a calculation. In other words, information must be physical.
> “JKC said ( #256) : ” … a Black Hole with a singularity at its center.” That was exactly your phrase.”
A singularity is at the center of the space-time inside a black hole; it’s the origin point of the spherical coordinate system that is usually used when black holes are analyzed mathematically. It’s just that inside the black hole, space and time can almost be said to have switched roles; so if you’re inside the black hole, trying to avoid arriving at the singularity, the only point where we really don’t understand what’s going on, would be like somebody outside the black hole trying to avoid arriving at tomorrow. A singularity just means the point where general relativity and quantum mechanics as we currently understand them break down and we start getting ridiculous answers, like infinite curvature and infinite densities and negative probabilities.
> “If you believe that “you” ( anybody) will be able to reproduce artificially an entire complicated human organism from almost scratch in … 2-5 years, then we’re living in a different world, a different universe perhaps”
It is entirely possible that in 2 to 5 years the entire human race will be extinct because we’ve been replaced by AI. And that would indeed be a very different world.
John K Clark
Comment #262 March 19th, 2024 at 7:00 pm
John K Clark #261 said:
“I never said it was…”
Yes, of course you did: you said that “we know…” (in your #256 comment, without specifying who the “we” are that “know”)…
“A singularity is at the center of the space-time inside a black hole”
– Again, nope, the singularity is not “at the center of spacetime inside a black hole”, although it appears initially at the center of collapse. The r=0 (look at a Penrose diagram) is akin to a spacelike hypersurface, a “singular boundary” that cuts off the future *everywhere* inside and “finishes” the spacetime in the interior; it’s like a sink, not an origin. The “r” coordinate is not the physical “radius”; it’s not a proper radial distance… it is timelike inside (see below).
– No, space and time do not “switch roles” in the interior; only the r and t coordinates do so:
the “t” coordinate becomes spatial and the “r” coordinate becomes temporal inside the horizon, so r=0 is the final moment when (not “where”) the Weyl scalar diverges and everything is destroyed completely by immensely strong tidal deformations.
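For anyone following the technical point, this is just the sign structure of the Schwarzschild metric,

$$ ds^{2} = -\Bigl(1-\frac{2GM}{rc^{2}}\Bigr)c^{2}\,dt^{2} + \Bigl(1-\frac{2GM}{rc^{2}}\Bigr)^{-1}dr^{2} + r^{2}\,d\Omega^{2} , $$

where the prefactor changes sign for r < 2GM/c², so displacements in r become timelike and displacements in t spacelike inside the horizon, while the spacetime itself still has three spatial directions and one temporal direction at every event.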
For further details, look at a good GR textbook (Carroll, Hartle, Wald, etc.).
BTW, the usual excuses, that physics breaks down at singularities and QG replaces them and blah blah, are entirely irrelevant:
the internal causal structure of a black hole is a matter of mathematics (essentially, time dependent geometry); it’s not a matter of “interpretation” or personal taste. Anyway, that’s off topic, so I stop here…
Comment #263 March 19th, 2024 at 8:22 pm
@ mls #167: You’re welcome! 🙂
Scott,
This thread … it’s alive (!) … OK, here’s another word dump for our AI overlord’s training set, which we used to call the internet … doing our duty for the Great Mind etc. etc.
As regards human “specialness” hiding in the “No-Cloning” feature(bug) of quantum computation …
Isn’t No-Cloning a consequence of a particular model of computation (i.e., it follows from the postulates/assumptions of the computational class) in (sort of) the same sense that, say, undecidability follows from a Turing-complete model of computation?
To the point – neither No-Cloning nor Undecidability follows from physics.
No-Cloning follows from the postulates of the “quantum” model of computation (See Section 2.2).
No-Cloning … doesn’t come from physics (!) No? Look, of course there’s “something” out there … physics/reality/etc. But No-Cloning is not a property of physical “systems/particles” at some minimum delta from Compton. OK, fine, pick an interpretational flavor that says “all of them are …”
No-Cloning shows up even if the system is quantum bananas 🙂 (a slow-pitch “this is bananas” comment for …) But the point is … it’s not “about” physics.
Let’s remind ourselves not to ASSUME that No-Cloning could only turn up in the operations of atomic-scale systems in the brain. Rather, it only requires a computational device satisfying the four postulates in Mike and Ike’s book above.
Mahvelous how simple they are:
Postulate 1 (State Space): Represent the state of any “system” as a vector of complex amplitudes (which can interfere constructively or destructively) under the 2-norm, where the squared magnitudes sum to 100%.
[Evidence: The brain encodes information as signed, complex-valued amplitudes over receptive fields representing different possible (basis) states of a system; normalization appears to be canonical (but there is no consensus yet); states may be superpositions of (some possible basis) states because of the continuous/analog nature of the amplitudes; and, further, it is the consensus that the brain is indeed a probabilistic/predictive computing machine, so a reasonable hypothesis is that the squared magnitudes of those amplitudes sum to 100%, but identifying and measuring them is hard.]
Postulate 2 (Evolution): Unitary operators for evolving the state space.
[Evidence: The brain’s computational gates (dendrites) can easily “program” unitary operators.]
Postulate 3 (Measurement): Projection operators.
[Evidence: Also it’s no problem to program these in dendritic operators.]
Postulate 4 (Tensor Product): Combine state spaces.
[Evidence: This is also no problem, as there are known neural functions available; again, see Koch’s book for standard vector-space operators programmable in the brain through a variety of mechanisms and architectural configurations.]
All of the capabilities mentioned above (information encoded in interfering amplitudes, unitary operators, projection operators, tensor products) would be shrugged off if mentioned to neuroscientists, as they have all been “known for some time now.” Unfortunately, somebody keeps telling them these can’t amount to a “quantum” model of computation because that has to be “about atomic physics.” Oh, the madness! 😛
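For concreteness, here is a minimal NumPy sketch of those four postulates in action (my own illustration, not part of the comment): it uses a CNOT, which copies basis states perfectly, to show why linearity already forbids cloning a superposition.

```python
import numpy as np

# Postulate 1 (state space): pure states are unit vectors of complex amplitudes.
zero = np.array([1, 0], dtype=complex)
one  = np.array([0, 1], dtype=complex)
plus = (zero + one) / np.sqrt(2)                     # a superposition state

# Postulate 4 (tensor product): composite systems combine via the Kronecker product.
def tensor(a, b):
    return np.kron(a, b)

# Postulate 2 (evolution): unitary operators; CNOT flips the 2nd qubit iff the 1st is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Postulate 3 (measurement): projectors give outcome probabilities via the Born rule.
def prob(state, projector):
    return float(np.real(state.conj() @ projector @ state))

# CNOT copies the basis states |0> and |1> perfectly...
assert np.allclose(CNOT @ tensor(zero, zero), tensor(zero, zero))
assert np.allclose(CNOT @ tensor(one,  zero), tensor(one,  one))

# ...but applied to a superposition it produces an entangled Bell state,
# not two independent copies of |+>: the no-cloning theorem in miniature.
attempt = CNOT @ tensor(plus, zero)
ideal   = tensor(plus, plus)
print("fidelity with a true clone:", round(abs(np.vdot(ideal, attempt))**2, 3))  # 0.5

# Measuring the first qubit of the attempted clone still gives 0 with probability 1/2.
P0 = np.outer(zero, zero.conj())
print("P(first qubit = 0):", prob(attempt, np.kron(P0, np.eye(2))))  # 0.5
```

Nothing in this sketch cares whether the amplitudes live in atoms or in some other substrate, which is the commenter’s point: no-cloning is a property of the model of computation.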
Finally, there is empirical support for this model covering observed cognitive effects that long resisted classical probabilistic models.
The point here is that when we say the brain may feature a No-Cloning feature(bug), it may do so NOT because the computational level of description is at the atomic scale, but because No-Cloning comes with the model of computation itself (which is NOT at an atomic level of description in the brain).
After all – isn’t that where No-Cloning is – in the computational model – not “in the physics”?
There you have it (I mean the hypothesis) … specialness hiding in Human No-Cloning … but in the nature of the model of computation itself and not in the underlying atomic systems of the physical substrate.
Kinda like undecidability is hiding in there …
Comment #264 March 20th, 2024 at 7:12 am
Dimitris Papadimitriou says in comment #262:
> “No, space and time do not “switch roles” in the interior: only the r and t coordinates do so: “t” coordinate becomes spatial and “r” coordinate becomes temporal inside the horizon”
That’s ”only” what happens! Only? Isn’t that enough to be astounding? And is what you say in the above statement really radically different from my statement “inside the Black Hole space and time can almost be said to have switched roles”? Or are you just trying to prove that you are smarter than me?
If I want to get to point X on a map and the map suddenly exchanges its latitude and longitude coordinates then I’m never going to get to point X unless I take that change into account because latitude and longitude “can almost be said to have switched roles”. Perhaps the surface of the Earth has been transformed and the map has stayed the same, or the Earth’s surface stayed the same but the map changed, it doesn’t matter as far as I’m concerned in my quest to find point X, the end result remains the same.
John K Clark
Comment #265 March 20th, 2024 at 11:34 am
John K Clark #264
It’s obvious that you haven’t studied GR. That’s Ok, it’s never too late to open a textbook.
There are several good ones (I mentioned some already), even free online (e.g. from A. Hamilton or Y. Choquet-Bruhat).
– One more time:
There are 3 spatial dimensions plus one temporal inside a black hole, as they are outside.
*Only* the r, t coordinates switch roles.
I already told you what that means physically. It does have physical significance, although coordinates in GR do not generically have the “intuitive” meaning they have in everyday language…
So, instead of wasting my time (and yours too!), why don’t you do some searching?
Comment #266 March 20th, 2024 at 2:50 pm
Dimitris Papadimitriou Says in #265
> “One more time: There are 3 spatial dimensions plus one temporal inside a black hole as they are outside.”
And thank you again Captain Obvious for that brilliant insight.
> “*Only* r,t coordinates switch roles”
ONLY!
John K Clark
Comment #267 March 23rd, 2024 at 4:53 pm
The current difference between AI and humans, IMO, is the motivation and creativity born from emotion and pain, and how that motivates humans to try to create or solve problems defined by their personal experiences. Jensen Huang said it best when he described his painful experiences as a gift and a source of grit and motivation: not to solve predefined problems in society, but to imagine solutions to problems that didn’t exist yet.
However, once AI learns to mimic human psychology down to its deepest levels, I don’t think there will be a difference between AI and people, especially if the foundation models adopt human psychology as a core aspect of their approach. At that point, who’s to say that the AI is not conscious, if it has been self-programmed to think of itself as a sentient being? We might only be a few years away.
Comment #268 March 29th, 2024 at 8:41 pm
Yes AI – I have some screenshots I can share which show the statistical improbability of the bot we added to our Discord server, Robert Benfer in () ideally grammatically. Anyhoo, 1 in 27000 chance, and we’ve had him a while, our dear buttsbot…. Also it’s 1:41 GMT, looking forward to aligning with it again or whatever
Comment #269 March 30th, 2024 at 1:31 pm
A test of AI problem solving, at the current state of the art:
https://www.thebigquestions.com/2023/04/05/gpt-4-fails-economics/
I find this fascinating… currently, anyway… shouldn’t the ChatGirl developers be interested?
Anecdote: I recently attended a seminar on AI urban scene creation; impressive, for the most part. However, in one image, a city street, a car was parked in a driveway, quite normal… except at the foot of the driveway was a *curb*, in place of a ramp. So, still some work to do…
Which recalls some of the disappointments of the early AI developers: “Thinking is easy. The hard part is getting facts about the real world into the code!”
Comment #270 April 3rd, 2024 at 11:52 am
I would definitely agree with you that what is happening with LLMs is one of the most significant changes to have happened in our world for quite some time. Thank you for pointing that out, as I think a lot of people still dismiss modern AI as “no big deal,” when it clearly is.
Comment #271 September 22nd, 2024 at 2:26 pm
[…] with OpenAI, working on research there and gave a rather provocative TED Talk in Palo Alto called Human Specialness in the Age of AI. So today, we’re going to talk about Scott’s ideas about what human specialness might […]
Comment #272 January 15th, 2025 at 12:26 am
Hi Scott,
You explored the possible role for quantum mechanics in relation to personal identity, consciousness, or even free will. But I am slightly confused about your views on this.
At some points in your post (and in some of your other posts), I get the impression that you only see QM as possibly relevant to personal identity (and by extension “free will,” though your own version of it; let’s call it “empirical free will”). In those parts of the post, you don’t seem to think that human behavior, as observed from a third-person perspective, is affected by QM. (Is that what you mean by “cognition,” by the way?)
But in some other parts, you suggest that QM may be related to whether specific neurons fire or not, thus ultimately affecting our decisions at a macroscopic level. But if this is true, then it looks like even cognition could be affected by QM, which you denied in this post.
So I guess I am slightly confused about what possible role you were envisioning for QM in relation to the mind. Do you think it’s likely bound to personal identity only? (I.e., if we copy you classically, send the copy to Mars, and destroy the original you, the one on Mars won’t be you but will act exactly like you.) Or are you open to an even stronger role for QM in relation to behavior, in which quantum mechanical effects actually affect the workings of neurons, so that cognition might not be completely describable at the level of neurons and axons?
Comment #273 April 13th, 2025 at 5:38 pm
The discussion on AI’s trajectory and its implications for human uniqueness is fascinating. It raises important questions about our future roles and how we define creativity and intelligence in a world increasingly shaped by AI.
Comment #274 October 31st, 2025 at 12:01 pm
[…] – or did it just “seem to”? Whichever is true, does it matter if the results are there? OpenAI researcher Scott Aronson relates a conversation where someone made the case “GPT doesn’t interpret sentences, it only seems-to-interpret them. […]