More AI debate between me and Steven Pinker!

Several people have complained that Shtetl-Optimized has become too focused on the niche topic of “people being mean to Scott Aaronson on the Internet.” In one sense, this criticism is deeply unfair—did I decide that a shockingly motivated and sophisticated troll should attack me all week, in many cases impersonating fellow academics to do so? Has such a thing happened to you? Did I choose a personality that forces me to respond when it happens?

In another sense, the criticism is of course completely, 100% justified. That’s why I’m happy and grateful to have formed the SOCG (Shtetl-Optimized Committee of Guardians), whose purpose is to prevent a recurrence, thereby letting me get back to your regularly scheduled programming.

On that note, I hope the complainers will be satisfied with more exclusive-to-Shtetl-Optimized content from one of the world’s greatest living public intellectuals: the Johnstone Family Professor of Psychology at Harvard University, Steven Pinker.

Last month, you’ll recall, Steve and I debated the implications of scaling AI models such as GPT-3 and DALL-E. A main crux of disagreement turned out to be whether there’s any coherent concept of “superintelligence.” I gave a qualified “yes” (I can’t provide necessary and sufficient conditions for it, nor do I know when AI will achieve it if ever, but there are certainly things an AI could do that would cause me to say it was achieved). Steve, by contrast, gave a strong “no.”

My friend (and previous Shtetl-Optimized guest blogger) Sarah Constantin then wrote a thoughtful response to Steve, taking a different tack than I had. Sarah emphasized that Steve himself is on record defending the statistical validity of Spearman’s g: the “general factor of human intelligence,” which accounts for a large fraction of the variation in humans’ performance across nearly every intelligence test ever devised, and which is also found to correlate with cortical thickness and other physiological traits. Is it so unreasonable, then, to suppose that g is measuring something of abstract significance, such that it would continue to make sense when extrapolated, not to godlike infinity, but at any rate, well beyond the maximum that happens to have been seen in humans?
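(For readers who want to see the statistics concretely: the "general factor" is just the dominant first principal component of a battery of intercorrelated test scores. Here is a minimal simulated sketch of that, with all loadings and noise levels invented purely for illustration, not drawn from any real psychometric dataset.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-factor model: each of 6 "subtest" scores is a shared latent
# factor g plus independent noise. All numbers are invented for illustration.
n_people, n_tests = 1000, 6
g = rng.normal(size=(n_people, 1))                  # latent general factor
loadings = rng.uniform(0.5, 0.9, size=(1, n_tests)) # how strongly each test taps g
scores = g @ loadings + 0.6 * rng.normal(size=(n_people, n_tests))

# First principal component of the correlation matrix dominates.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)                  # ascending order
share = eigvals[-1] / eigvals.sum()                 # variance explained by PC1

print(f"variance explained by first component: {share:.0%}")
```

In this toy setup the first component captures well over half the variance, which is the sense in which a single number "accounts for a large fraction of the variation" across intercorrelated tests.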

I relayed Sarah’s question to Steve. (As it happens, the same question was also discussed at length in, e.g., Shane Legg’s 2008 PhD thesis; Legg then went on to cofound DeepMind.) Steve was then gracious enough to write the following answer, and to give me permission to post it here. I’ll also share my reply to him. There’s some further back-and-forth between me and Steve that I’ll save for the comments section to kick things off there. Everyone is warmly welcomed to join: just remember to stay on topic, be respectful, and click the link in your verification email!

Without further ado:


Comments on General, Artificial, and Super-Intelligence

by Steven Pinker

While I defend the existence and utility of IQ and its principal component, general intelligence or g,  in the study of individual differences, I think it’s completely irrelevant to AI, AI scaling, and AI safety. It’s a measure of differences among humans within the restricted range they occupy, developed more than a century ago. It’s a statistical construct with no theoretical foundation, and it has tenuous connections to any mechanistic understanding of cognition other than as an omnibus measure of processing efficiency (speed of neural transmission, amount of neural tissue, and so on). It exists as a coherent variable only because performance scores on subtests like vocabulary, digit string memorization, and factual knowledge intercorrelate, yielding a statistical principal component, probably a global measure of neural fitness.

In that regard, it’s like a Consumer Reports global rating of cars, or overall score in the pentathlon. It would not be surprising that a car with a more powerful engine also had a better suspension and sound system, or that better swimmers are also, on average, better fencers and shooters. But this tells us precisely nothing about how engines or human bodies work. And imagining an extrapolation to a supervehicle or a superathlete is an exercise in fantasy but not a means to develop new technologies.

Indeed, if “superintelligence” consists of sky-high IQ scores, it’s been here since the 1970s! A few lines of code could recall digit strings or match digits to symbols orders of magnitude better than any human, and old-fashioned AI programs could also trounce us in multiple-choice vocabulary tests, geometric shape extrapolation (“progressive matrices”), analogies, and other IQ test components. None of this will help drive autonomous vehicles, discover cures for cancer, and so on.

As for recent breakthroughs in AI which may or may not surpass humans (the original prompt for this exchange): what is the IQ of GPT-3, or DALL-E, or AlphaGo? The question makes no sense!

So, to answer your question: yes, general intelligence in the psychometrician’s sense is not something that can be usefully extrapolated. And it’s “one-dimensional” only in the sense that a single statistical principal component can always be extracted from a set of intercorrelated variables.

One more point relevant to the general drift of the comments. My statement that “superintelligence” is incoherent is not a semantic quibble that the word is meaningless, and it’s not a pre-emptive strategy of Moving the True Scottish Goalposts. Sure, you could define “superintelligence,” just as you can define “miracle” or “perpetual motion machine” or “square circle.” And you could even recognize it if you ever saw it. But that does not make it coherent in the sense of being physically realizable.

If you’ll forgive me one more analogy, I think “superintelligence” is like “superpower.” Anyone can define “superpower” as “flight, superhuman strength, X-ray vision, heat vision, cold breath, super-speed, enhanced hearing, and nigh-invulnerability.” Anyone could imagine it, and recognize it when he or she sees it. But that does not mean that there exists a highly advanced physiology called “superpower” that is possessed by refugees from Krypton!  It does not mean that anabolic steroids, because they increase speed and strength, can be “scaled” to yield superpowers. And a skeptic who makes these points is not quibbling over the meaning of the word superpower, nor would he or she balk at applying the word upon meeting a real-life Superman. The skeptic’s point is that we almost certainly will never, in fact, meet a real-life Superman. That’s because he’s defined by human imagination, not by an understanding of how things work. We will, of course, encounter machines that are faster than humans, that see X-rays, that fly, and so on, each exploiting the relevant technology, but “superpower” would be an utterly useless way of understanding them.

To bring it back to productive discussions of AI: there’s plenty of room to analyze the capabilities and limitations of particular intelligent algorithms and data structures—search, pattern-matching, error back-propagation, scripts, multilayer perceptrons, structure-mapping, hidden Markov models, and so on. But melting all these mechanisms into a global variable called “intelligence,” understanding it via turn-of-the-20th-century school tests, and mentally extrapolating it with a comic-book prefix, is, in my view, not a productive way of dealing with the challenges of AI.


Scott’s Response

I wanted to drill down on the following passage:

Sure, you could define “superintelligence,” just as you can define “miracle” or “perpetual motion machine” or “square circle.” And you could even recognize it if you ever saw it. But that does not make it coherent in the sense of being physically realizable.

The way I use the word “coherent,” it basically means “we could recognize it if we saw it.”  Clearly, then, there’s a sharp difference between this and “physically realizable,” although any physically-realizable empirical behavior must be coherent.  Thus, “miracle” and “perpetual motion machine” are both coherent but presumably not physically realizable.  “Square circle,” by contrast, is not even coherent.

You now seem to be saying that “superintelligence,” like “miracle” or “perpetuum mobile,” is coherent (in the “we could recognize it if we saw it” sense) but not physically realizable.  If so, then that’s a big departure from what I understood you to be saying before!  I thought you were saying that we couldn’t even recognize it.

If you do agree that there’s a quality that we could recognize as “superintelligence” if we saw it—and I don’t mean mere memory or calculation speed, but, let’s say, “the quality of being to John von Neumann in understanding and insight as von Neumann was to an average person”—and if the debate is merely over the physical realizability of that, then the arena shifts back to human evolution.  As you know far better than me, the human brain was limited in scale by the width of the birth canal, the need to be mobile, and severe limitations on energy.  And it wasn’t optimized for understanding algebraic number theory or anything else with no survival value in the ancestral environment.  So why should we think it’s gotten anywhere near the limits of what’s physically realizable in our world?

Not only does the concept of “superpowers” seem coherent to me, but from the perspective of someone a few centuries ago, we arguably have superpowers—the ability to summon any of several billion people onto a handheld video screen at a moment’s notice, etc. etc.  You’d probably reply that AI should be thought of the same way: just more tools that will enhance our capabilities, like airplanes or smartphones, not some terrifying science-fiction fantasy.

What I keep saying is this: we have the luxury of regarding airplanes and smartphones as “mere tools” only because there remain so many clear examples of tasks we can do that our devices can’t.  What happens when the devices can do everything important that we can do, much better than we can?  Provided we’re physicalists, I don’t see how we reject such a scenario as “not physically realizable.”  So then, are you making an empirical prediction that this scenario, although both coherent and physically realizable, won’t come to pass for thousands of years?  Are you saying that it might come to pass much sooner, like maybe this century, but even if so we shouldn’t worry, since a tool that can do everything important better than we can do it is still just a tool?

283 Responses to “More AI debate between me and Steven Pinker!”

  1. Peter Gerdes Says:

    I suspect what he is arguing is that one can coherently define what kinds of things a superintelligence could accomplish, the same way that you could define what kinds of phenomena would count as magic (people do rituals and stuff happens). However, if you try to give a lower-level definition, e.g., what kind of fundamental physical laws would give rise to a world with magic in it, you don’t quite get incoherence, but you find yourself unable to give any remotely plausible account (ultimately because what makes something magic rather than sci-fi is that it’s fundamentally responsive to human-level concerns, and the moment it has a non-absurd reduction to fundamental physical laws that don’t mention people or consciousness, it doesn’t seem like there is any non-absurd realization).

    Similarly, I think what Pinker is suggesting is that once you try to operationalize why an AI with really good processing or powerful deduction capabilities would be super in the way we describe it in stories, it’s hard to think of a good argument as to why it would be very super rather than just kind of a really, really smart guy (but one who is hardly likely to be able to orchestrate social movements like they do in Foundation, or take over the world).

    But I agree coherent wasn’t a good word to use.

  2. Scott Says:

    As I mentioned, Steve also sent me a few responses to my response! Let me tackle those one at a time. Steve writes:

      I’m not sure that “coherent” can be equated with “recognize it if we saw it,” and I suspect that the Potter-Stewart-pornography standard is not a good criterion in science, since it combines the subjective with the hypothetical (what’s to prevent me from saying, “Sure, I’d recognize a square circle if I saw one–it would have four equal perpendicular sides and all its points would be equidistant from the center! What’s the problem?”). But I don’t mean to quibble about words, so let me clarify what I meant, namely that “superintelligence” is incoherent in the same sense that “superpowers” (in the Superman sense) are incoherent: it corresponds to no single actual phenomenon in the world, but is a projection of our own unconstrained imagination.

    People have of course imagined many things, from nuclear bombs to digital computers, that corresponded to no actual phenomenon in the world at the time they were imagined. Notably, in the case of computers, a 19th-century skeptic could’ve reasonably objected: “wait, you say it’s for talking to friends and playing music and reading books and maintaining a calendar and writing documents and ordering food and doing difficult calculations? What possible benefit could there be to bundling all those wildly different abilities into a single imagined device, as if the same ‘superpower’ would somehow enable all of them?”

    This is the whole trouble with reasoning about the future! Sometimes people’s imaginations were too unconstrained; other times, they weren’t nearly unconstrained enough.

    As for “square circle,” I concede that one could parse it as a perfectly-coherent concept that can merely, for logical reasons, never correspond to any actual thing in the world. In practice, though, if a boss demanded a square circle on his desk by tomorrow morning, the employee would surely start wondering what the boss wants: “would he be happy with four circular arcs that meet at corners, forming a shape that’s partly circle-like and partly square-like? would he be happy with a circle in 1-norm, which is a square (or more precisely a diamond)?” Until the boss clarified, his request might rightly be called “incoherent.”
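    Incidentally, the “circle in 1-norm” reading is easy to check numerically. A minimal sketch (the Python here is mine, purely for illustration): the set of points with |x| + |y| = 1 is a square rotated 45 degrees, i.e. a diamond.

```python
import numpy as np

# The unit "circle" in the 1-norm: all (x, y) with |x| + |y| = 1.
t = np.linspace(0, 2 * np.pi, 8001)
x, y = np.cos(t), np.sin(t)
scale = 1.0 / (np.abs(x) + np.abs(y))   # radially project onto |x| + |y| = 1
x, y = x * scale, y * scale

# Every sampled point satisfies the 1-norm condition...
assert np.allclose(np.abs(x) + np.abs(y), 1.0)

# ...and its extreme points are the four corners of a diamond.
corners = {(round(px, 6) + 0.0, round(py, 6) + 0.0)
           for px, py in zip(x, y)
           if abs(abs(px) - 1) < 1e-9 or abs(abs(py) - 1) < 1e-9}
print(sorted(corners))  # [(-1.0, 0.0), (0.0, -1.0), (0.0, 1.0), (1.0, 0.0)]
```

    So the boss’s “square circle” has at least one perfectly concrete reading, depending on which norm he had in mind.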

    In any case, probably none of this bears directly on the debate, since not only do I not find the concept of “superintelligence” inherently incoherent, I see no reason of physics or logic why it could never correspond to any real thing in the world. I don’t know whether it can or will, but I regard those as contingent questions of technology and biology, not of physics, math, or logic.

  3. Scott Says:

    Steve also writes:

      Of course it’s notoriously foolish to stipulate what some technology will never do (controlled fission, moon landings, and so on). But that works both ways: it’s foolish to extrapolate technological advances using our imagination rather than the constraints, costs, and benefits. We still don’t have the nuclear-powered vacuum cleaners prophesied in the 1960s, and almost certainly never will, not because it’s physically impossible, but because it would be formidably difficult and for dubious benefit. In the case of surpassing humans at “everything,” it’s easy to forget how wicked even one of these challenges can be. Take driving—a simple goal, seemingly straightforward physical inputs, and just three degrees of freedom in the output. I myself would have predicted that AI would have surpassed humans years ago, yet it appears to be years if not decades away. This is despite the stupendous incentives in terms of promised safety, convenience, and profit, and massive investment from many companies. And the successes we have seen have come from enormous engineering ingenuity specifically directed at the problem, not from developing an AI with a high IQ, whatever that would mean. Now multiply this by all the tasks you’d include under “everything.” And recall that the conspicuous successes of recent years come from techniques that are useless for many of these challenges (you won’t cure cancer by training a deep learning network on half a trillion diseases and their cures). We have to be cautious in our predictions in both ways: no doubt we’ll continue to be surprised at the moon landings while waiting forever for the nuclear vacuum cleaners.

      Also, does “everything” include human goals like achieving notoriety, vindicating pet ideas, maximizing profit, implementing Green or Woke or Christian ideals, and so on? Again, putting aside Pygmalion-Frankenstein-Golem-Pinocchio narratives, what is the engineering or economic incentive for duplicating an entire human in the first place, even if it were feasible? It’s hard enough to build a tool that does one thing well, like driving.

    A lot of this strikes me as just coming down to the timescale. Returning to our imagined 19th-century skeptic of the “general-purpose digital computer,” the skeptic might say: “before you imagine this miracle-device for communicating across the world and reading and archiving and simulating physics and managing finances and playing music and etc. etc., stop to think through how wicked even one of these challenges will be!” The skeptic would’ve been entirely right, if they regarded this as practical advice to computer designers embarking on a century-long journey—but entirely wrong, if they regarded it as a philosophical argument for the incoherence of the imagined device that lay at the journey’s end.

    As for fully self-driving cars, my understanding (others should chime in if they disagree!) is that, from the incomplete data we have, the best prototypes (e.g. Waymo) seem very near the point of being as safe as human drivers already. Certainly if one just wanted to use them for a taxi or ridesharing service within a specific city, and could preprogram detailed maps of that city’s every cranny. Right now, a car with enough onboard compute to run the ML model you’d really want would probably be a huge, bulky monstrosity, but rapid improvements there are about as predictable as anything in technology.

    Unfortunately, it’s become clear that, even after self-driving cars become safer than humans, regulatory and psychological barriers will slow their adoption or maybe even prevent it entirely. A huge part of the reason is that, even if self-driving cars can soon cause (say) 10x fewer fatalities per mile than human drivers do, when they do cause fatalities, it will be salient and scary and weird—the AI mistaking a pedestrian’s shirt pattern for a green light or whatever.

    Hopefully I don’t have to belabor this point to a man who’s written multiple bestselling books explaining how dramatic, newsworthy, tragic events systematically warp people’s understanding of reality, when they should just be looking at statistics!

  4. Dan Staley Says:

    There’s something I’ve wanted to say about AI sentience since the Lamda stuff happened, but couldn’t find the right wording until recently. I’ll say it now, since it’s somewhat on-topic:

    It seems many folks have been trying to answer one of two related, but separate questions:

    1) How can we establish that a given AI is sentient?

    2) How can we establish that an AI is *not* sentient?

    What the events around Google/Lamda made me realize is that we’re probably not far off from needing to answer a third question:

    What do we do with an AI for which we can answer neither question 1 nor question 2?

  5. Shmi Says:

    Scott, after you are done with the back-and-forth with Steve, I’d be interested to see if you learned something interesting from the exchange, and/or if he has.

  6. Edmond Says:

    > A huge part of the reason is that, even if self-driving cars can soon cause (say) 10x fewer fatalities per mile than human drivers do, when they do cause fatalities, it will be salient and scary and weird—the AI mistaking a pedestrian’s shirt pattern for a green light or whatever. Hopefully I don’t have to belabor the point to a man who’s written multiple bestselling books explaining how dramatic, newsworthy, tragic events systematically warp people’s understanding of reality, when they should just be looking at statistics!

    If you’ll allow me to steelman, I think perhaps it’s not entirely fair to look at this psychological reluctance as simply an irrational error, rather than a valid preference. If people find that a small chance of dying to a stupid A.I. error is more intolerable to them than a somewhat greater chance of dying by their own fault, or by that of another human — well, why not? Why shouldn’t people be allowed to have strong preferences about the ways in which they would rather die, if die they must? It seems as valid as any of the myriad things humans have arbitrary preferences about.

  7. Scott Says:

    Edmond #6: In that case, will those of us who prefer a 10x smaller chance of dying, albeit in a weirder, scarier way, at least be free to choose that option? Or (more likely) will it be blocked by regulatory barriers?

  8. Sandro Says:

    If you’ll forgive me one more analogy, I think “superintelligence” is like “superpower.” Anyone can define “superpower” as “flight, superhuman strength, X-ray vision, heat vision, cold breath, super-speed, enhanced hearing, and nigh-invulnerability.” Anyone could imagine it, and recognize it when he or she sees it. But that does not mean that there exists a highly advanced physiology called “superpower” that is possessed by refugees from Krypton!

    I’m a little confused by this, because we can already build robots with superhuman strength, although it took us a while to get there, so clearly that’s both coherent and physically realizable. I don’t see why “superhuman intelligence,” abbreviated “superintelligence,” isn’t equally coherent and physically realizable. It won’t show up as a highly advanced physiology from Krypton, but that was never really the expectation. The claim is an extrapolation of a clear trend that has already played out for physical strength, and appears to be playing out for intelligence as well, with no sign of levelling off.

    Defining this property is challenging for sure, but “a method that can infer how to solve any information processing task by example” seems like a fine rough approximation, and if this method can achieve results that surpass humans in precision, accuracy, and speed of inference and application, is that not essentially what we mean by “superhuman intelligence”?

    It does not mean that anabolic steroids, because they increase speed and strength, can be “scaled” to yield superpowers.

    But we’ve easily found clear limits for steroids, dictated by our physiology. It doesn’t seem like we’ve found clear limits on scaling machine learning. Of course that will happen at some point, due to physical limitations, but it already seems clear that the physical limit lies far beyond what would qualify as “superhuman intelligence.”

  9. Nick Nolan Says:

    I think François Chollet also has a good take on this, in his essay “The Implausibility of Intelligence Explosion”: intelligence is situational; the environment is a fundamental part of the mind. If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem.

    In other words, no free lunch theorems are not just theoretical concepts.

    My opinion matches with Chollet’s and Pinker’s. Human intelligence is a good example. We are optimized for picking berries, avoiding predators and social life in small groups. We do it so well. When we steer away from our innate competence, we slow down.

    According to Elizabeth Spelke human cognitive power can be seen as having just 5 abstract and limited abilities. We can model and reason with

    1. Objects – their cohesion, spatial location, continuity, contact… and basic crude physics.

    2. Agents – goal directed using efficient means, contingency, gaze direction.

    3. Numbers – small numbers + imprecise logarithmic estimates. Simple addition and subtraction.

    4. Geometry – distance, angle, relations, surfaces.

    5. Social relations with others.

    When we go past these, we slow down and start stumbling. We need more time to learn, we need thinking aids.

    I suppose there is some kind of jump once you can learn how to delegate cognitive work. Then we can use thinking and memory aids, like computers, writing, note-taking, and programming computers and ourselves.

  10. Ben Standeven Says:

    @ Scott #7:

    Using a self-driving car raises the risk of dying to AI error for everyone, not just the driver. So, following Edmond’s logic, it would make sense to impose the regulatory barriers rather than letting people decide for themselves. (Of course, the risk is proportional to the number of cars on the road; so if only a tiny minority of people want the self-driving cars, the regulations wouldn’t be needed.)

  11. Scott Says:

    Ben Standeven #10: In that case, I hope that at the least the question will be put to actual democratic votes (e.g. via state ballot propositions), rather than regulators paternalistically assuming that the public wants 10x more deaths as long as the deaths are less weird.

    Personally, I stopped driving 5 years ago, because I’m easily flustered and distractible, and it didn’t feel safe with kids in the backseat (and Dana is a much better driver, and there’s walking and Uber). I can almost guarantee you that self-driving cars would already be much safer than I am, both for me and my family and for everyone else on the road. 🙂

  12. Ilio Says:

    I feel like Pinker’s questioning is best understood in the context of AlphaZero and tic-tac-toe. You can’t be superintelligent at tic-tac-toe: the game is just too stupid. Oddly enough, it’s probably true for the game of Go too. AlphaZero plays above the best humans, and much faster, but it is not « superintelligent »: most of the time it plays the same moves as the best humans, even though its training was free of human experience, which suggests a plateau in how good one can be at playing this game.

    So, what if all human cognitive functions were limited the same way? Imagine you construct Einstein 2.0, and the result is all the results of good old Einstein in three milliseconds, after which it starts questioning Bell’s theorem for hours. Imagine you produce one thousand of your best philosophers, and the result is, well, more philosophy books exploring the same themes philosophers have always explored. Wouldn’t you start questioning the very concept of superintelligence?

  13. James Cross Says:

    The problem with recognizing superintelligence is a problem of distinguishing between faster/more and a real qualitative difference.

    Take an example from language. Many animals have languages of sorts, but mostly they engage in signaling, with limited or no syntax. Human language is qualitatively different from the languages of other animals, in ways that go beyond simply more words or an expanded vocabulary. Training an animal to recognize or sign more words doesn’t by itself result in anything equivalent to human language.

    We can expect AI to be faster and have more facts at its disposal. Having more facts might lead by itself to an ability to make more connections between facts. But could a really new qualitative difference in “thought” emerge? What would it look like? Would we recognize it?

    If such a qualitative difference does emerge with AI, then we might be like apes trying to understand human speech. In other words, we might understand some things but really not grasp most of it.

  14. Scott Says:

    Nick Nolan #9: Did you read this recent post, about the (ir)relevance of the “No Free Lunch Theorem” to the Aaronson-Pinker debate?

    I confess: in 25 years in CS, I must’ve heard the No Free Lunch Theorem brought up at least 50 times (though never by CS theorists), and not one of those times did I think it shed any light on whatever was being discussed. The issue is simply that we almost never care about an algorithm’s performance in a uniformly random environment. The whole question is how well the algorithm exploits whatever structure is present in the environment—and of course, the physical and biological and social worlds have massive exploitable structure (if they didn’t, it’s not just that deep learning wouldn’t work; neither would our own brains). If one wants to be more formal about this, one can do so by talking about, e.g., the Solomonoff prior over environments as opposed to a uniformly random prior (as Shane Legg does in his thesis), but the point remains valid at the informal level as well.
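    To make the point concrete, here is a toy sketch of my own (not from Legg’s thesis or anywhere else): the same trivial learner, “predict that the next bit repeats the last one,” far outperforms chance in a structured environment and sits exactly at chance in a uniformly random one, which is all the No Free Lunch Theorem ever guaranteed.

```python
import random

random.seed(1)

def repeat_last_predictor(bits):
    """Predict each bit as a copy of the previous one; return accuracy."""
    hits = sum(bits[i] == bits[i - 1] for i in range(1, len(bits)))
    return hits / (len(bits) - 1)

n = 100_000

# A structured environment: long runs, i.e. each bit usually repeats.
structured, bit = [], 0
for _ in range(n):
    if random.random() < 0.1:      # flip only 10% of the time
        bit = 1 - bit
    structured.append(bit)

# A uniformly random environment: nothing to exploit.
uniform = [random.randint(0, 1) for _ in range(n)]

print(f"structured: {repeat_last_predictor(structured):.2f}")  # ~0.90
print(f"uniform:    {repeat_last_predictor(uniform):.2f}")     # ~0.50
```

    Averaged over all possible environments the two scores would be forced to coincide; the real world simply isn’t that average.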

  15. manorba Says:

    I am pretty sure self-driving cars will be the norm sometime in the near future.
    for two reasons:

    1) driving was the best solution at the time. It was a stopgap on the way to true all-electric, self-driving, not-privately-owned vehicles. Gen Z is totally unfazed by driving: they don’t care, just as they don’t care about owning a car or a motorbike. I used to love driving cars and bikes (also competitively); now I can’t be bothered. We have a small Fiat just because you need one, and I have to drive because, contrary to Scott, my other half can’t drive for her life.

    2) Technology will get there soon. It’s not QC or fusion; it’s just that we have to start somewhere.

  16. Triceratops Says:

    Am I a midwit, or does this debate boil down to semantics? Who cares what “superintelligence” means, or whether it counts as “a real thing” under whatever model you use to separate “real” from “not real”?

    What I’m interested in (and more than a bit scared of) is a computer that outperforms humans by a huge margin on tasks we generally consider to be the domain of smart humans. That would be “superintelligence” — and we’re already seeing it instantiated for some narrowly defined problems like folding proteins and making art.

    Does Steven want an operational definition of superintelligence? I’m sure you could come up with some arbitrary quantifiable performance goalpost that 90% of researchers in relevant fields could agree represents cognitive abilities significantly beyond human limits.

  17. Mateus Araújo Says:

    I’m rather surprised by Pinker’s opposition to a general “intelligence” feature. He writes: “And the successes we have seen have come from enormous engineering ingenuity specifically directed at the problem, not from developing an AI with a high IQ, whatever that would mean.”

    This runs counter to the only example of intelligence that we actually know, namely ourselves. We definitely did not evolve specifically to drive cars, cure cancer, build particle accelerators, or study prime numbers. We have evolved to survive in the savanna by hunting and gathering. What biology gave us are innate abilities for pattern-matching, spatial orientation, and a theory of mind. The natural conclusion is that this is enough for a general “intelligence”.

    He seems to argue that, on the contrary, we won’t be able to design an intelligent machine that, from some basic cognitive elements, can develop skills for different applications. I find this simply bizarre. Isn’t the fact that human brains are so flexible and general-purpose already proof that such an intelligence is possible?

  18. manorba Says:

    my take as a layman on the heated Aaronson/Pinker debate is that while Prof. Pinker is just reminding all of us that we still don’t even have a viable definition of “intelligence,” let alone of superintelligence, Prof. Aaronson is advocating a Feynman-like approach, a sort of effective theory of intelligence.

  19. Antoine Deleforge Says:

    Nick Nolan #9 Thanks a lot for sharing this post by François Chollet which I hadn’t read before.

    I’ve followed with fascination the ongoing AI debate between Scott, Pinker, Marcus and others, and something always felt odd to me about this argument of “sped-up Einsteins,” though I had trouble articulating it. I think Chollet’s post does it perfectly:

    “When a scientist makes a breakthrough, the thought processes they are running in their brain are just a small part of the equation — the researcher offloads large extents of the problem-solving process to computers, to other researchers, to paper notes, to mathematical notation, etc. And they are only able to succeed because they are standing on the shoulder of giants — their own work is but one last subroutine in a problem-solving process that spans decades and thousands of individuals.”

    I think the essay makes a lot of excellent points beyond this one. Including:

    “No system exists in a vacuum; any individual intelligence will always be both defined and limited by the context of its existence, by its environment. Currently, our environment, not our brain, is acting as the bottleneck to our intelligence.”

    Scott, I’d strongly advise you to read the post in its entirety and to not stop at the reference to the no free lunch theorem, which is in fact a relatively marginal and minor part of the essay.

  20. fred Says:

    I agree that there’s often a problem of imagination.
    Human imagination tends to extrapolate from existing things, but just can’t take huge sideways steps.

    When it comes to extrapolation, even in that case our imagination often fails us: e.g with dinosaurs, I find it very hard to imagine animals that big, that ferocious, and that fast, all at the same time.
    Take the Giganotosaurus: imagine an elephant that’s twice the usual size, is a carnivore, stands on its two rear legs, and is able to run at 31mph (faster than a charging elephant by 20%). It’s hard to imagine that the way Hollywood portrays them (often backed by Science) wouldn’t actually break some fundamental physics/biological rules, because we’re simply not familiar with it.

    Humans have imagined flying for thousands of years. So, a plane probably wouldn’t be that shocking to a citizen of Ancient Greece.
    But something like a Virtual Reality headset (he puts it on and is able to interact with someone on the other side of Greece as though they are both in the same room) is something he just wouldn’t be able to comprehend on any level. We’re not talking about abstract stuff like Gödel’s theorem or some unseen Quantum Mechanics truth, we’re talking about a very practical tool/system, a radical evolution of sending pigeons around the country to exchange messages back and forth.

    Another limitation of imagination is grasping the nature of processes with exponential growth (the singularity). In reality those processes (e.g. virus pandemic) don’t grow indefinitely, they’re all limited by the resources. It all depends on the constant factors, and we have no idea what they will be in the case of AI.
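    To make that point concrete, here’s a toy logistic-growth sketch (all constants invented for illustration): the process looks exponential at first, but the resource cap eventually takes over.

    ```python
    # Toy logistic growth: exponential-looking early on, then the carrying
    # capacity K (the "resources") flattens it out. Constants are arbitrary.
    def step(x, r=0.5, K=1.0):
        return x + r * x * (1 - x / K)

    x = 1e-3
    for _ in range(40):
        x = step(x)
    print(round(x, 3))  # plateaus near K instead of blowing up
    ```

    The constant factors (r and K here) are exactly what we don’t know for AI.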

  21. fred Says:

    Ilio #12

    “the context of alphazero and tic tac toe. You can’t be superintelligent at tic tac toe: the game is just too stupid.”

    As I always put it, an intelligence is only as sophisticated as its environment.
    Humans are only that smart because of the earth/nature/biosphere, as an interesting/evolving/challenging environment.
    We could argue that we’re not that smart (or maybe “too” smart) because we can’t help but destroy our natural environment, which could be okay if we had infinite resources or infinite control of it, but we don’t.

  22. Max Says:

    Pinker writes:

    > In any case, probably none of this bears directly on the debate, since not only do I not find the concept of “superintelligence” inherently incoherent, I see no reason of physics or logic why it could never correspond to any real thing in the world. I don’t know whether it can or will, but I regard those as contingent questions of technology and biology, not of physics, math, or logic.

    Now I’m left wondering — where even is the disagreement? By his own admission, Pinker thinks that superintelligence is conceptually possible. So what is he skeptical about? Is he skeptical about it being — probable? Scott, could you perhaps ask him to clarify this?

  23. Scott Says:

    There was one other question of Steve’s that I wanted to address:

      What is the IQ of GPT-3, or DALL-E, or AlphaGo? The question makes no sense!

    Clearly you might need to modify an IQ test if you wanted to administer it to GPT-3 (e.g., you can’t currently give GPT visuospatial questions)—just like you’d need to modify it even more if you wanted to administer it to a dolphin or a chimpanzee. But I’m not confident at all that the questions are meaningless. (E.g., it seems like a plausible guess that dolphins would have higher IQs than cows, under any reasonable extrapolation of the concept to their case.)

    Here’s an extremely interesting empirical fact. A few years ago, text engines like GPT struggled even with basic arithmetic problems, making trivial mistakes—not to mention anything harder. Then they mastered arithmetic but still struggled with math word problems whose solution involves multiple steps (“Tommy gets 5 apples from Sally, then gives 2 apples each to Cindy and Bob…”). By now they’ve pretty much mastered the latter, even up to the high-school level. But they can’t yet do well on elite math competitions—e.g., the Putnam or the International Math Olympiad. AI researchers are hard at work on the latter right now. Steve can correct me if I’m wrong, but I don’t think even he would confidently predict that they won’t soon make headway.

    So, here’s my question for Steve: why should the progress have happened in this sequence—i.e., in the exact progression that humans go through as they get more and more mathematically sophisticated? It’s exactly what you would’ve predicted if you posited some meaningful abstract concept of “mathematical ability,” applicable even beyond our meat brains, and if the language models started out with very little of it and are now getting more and more. How should we explain it without that hypothesis? Isn’t it then a bizarre coincidence?

  24. Timothy Chow Says:

    Regarding the term, “terrifying science-fiction fantasy,” my favorite story along these lines is “With Folded Hands” by Jack Williamson. It was written in 1947, but it has aged remarkably well. In particular, the story illustrates how the robots could justify their takeover on the grounds that they are doing so for our own good. IMO, the story is a must-read for anyone engaged in this type of debate.

  25. Scott Says:

    Triceratops #16:

      Am I a midwit, or does this debate boil down to semantics? Who cares what “superintelligence” means, or if it counts as “a real thing” under whatever model you use to separate “real” from “not real”.

      What I’m interested in (and more than a bit scared of) is a computer that outperforms humans by a huge margin on tasks we generally consider to be the domain of smart humans. That would be “superintelligence”…

    Hey, that’s exactly what I’ve been saying here over and over! 😀 I.e., that I don’t understand how you get from a semantic analysis of the word “superintelligence,” to the substantive conclusion that AIs outperforming humans across nearly all domains is not a thing that we should worry about. But Steve evidently has a different view, and that’s precisely what I’ve been trying to understand here!

  26. Mateus Araújo Says:

    Can GPT-3 solve mathematical problems that involve theory of mind? E.g., you let Jörg see you putting 2 balls, and then 3 balls, inside a basket. Now, without Jörg seeing, you remove one of the balls from the basket. How many balls will Jörg think are inside the basket?

    Or even better: Jörg can only count up to 4. You put 5 balls, one by one, in front of Jörg. You ask Jörg how many balls you put there. What will he answer?

  27. Ilio Says:

    Triceratops #16, Scott #23: Let’s try it this way: someday a computer program will achieve superchampionship: it can beat any team of the best of us on any metric, though perhaps only by a small margin. Let’s call that the minimal model.

    What I (and maybe SP) would call superintelligent is something as far above the human mind as the human mind is above the brain of a dog. In my view the latter definition would be more important for exploring AI safety, if I firmly believed it was possible in the first place.

    Fred #21, no need to argue that: my pet theory is the biological brain forms a map of its environment, and our spirits are like dancing upon these representations.

    Nick Nolan #9, +1 Antoine Deleforge #19

  28. I Says:

    Debating Pinker on AI safety is old hat by now. It might be more interesting to debate one of your CS colleagues, or perhaps someone like John Baez, who seemed to agree that superintelligence would be important and perhaps dangerous, and then went off to work on climate change.

    Though, rethinking this in terms of comparative advantage, your edge in debates is probably rebutting the idea that complexity is a bar to AI. Not that the rebuttal is subtle or complex, but maybe your level of CS cred is what’s needed to convince intellectuals that the argument is bunk.

  29. Scott Says:

    I’m at CCC’2022 right now, and not at a computer with GPT-3 access—but can someone please try the extremely interesting prompts suggested by Mateus Araújo #24 and report the results back to us? Thanks!!

  30. foobar Says:

    A live video interview/debate with Chomsky would be much more interesting.

    “The result is that there is no hard problem… You can’t look for the answer to a problem unless you say here are the things I want to answer. If the things you want to answer have no formulation there is no problem.” (Chomsky, about the question of consciousness).

    https://youtu.be/vLuONgFbsjw?t=511

    “There is a great deal of often heated debate about these matters in the literature of the cognitive sciences, artificial intelligence, and philosophy of mind, but it is hard to see that any serious question has been posed. The question of whether a computer is playing chess, or doing long division, or translating Chinese, is like the question of whether robots can murder or airplanes can fly — or people; after all, the “flight” of the Olympic long jump champion is only an order of magnitude short of that of the chicken champion (so I’m told). These are questions of decision, not fact; decision as to whether to adopt a certain metaphoric extension of common usage.

    There is no answer to the question whether airplanes really fly (though perhaps not space shuttles). Fooling people into mistaking a submarine for a whale doesn’t show that submarines really swim; nor does it fail to establish the fact. There is no fact, no meaningful question to be answered, as all agree, in this case. The same is true of computer programs, as Turing took pains to make clear in the 1950 paper that is regularly invoked in these discussions. Here he pointed out that the question whether machines think “may be too meaningless to deserve discussion,” being a question of decision, not fact, though he speculated that in 50 years, usage may have “altered so much that one will be able to speak of machines thinking without expecting to be contradicted” — as in the case of airplanes flying (in English, at least), but not submarines swimming. Such alteration of usage amounts to the replacement of one lexical item by another one with somewhat different properties. There is no empirical question as to whether this is the right or wrong decision.”

    https://chomsky.info/prospects01/

    Defining superintelligence is a red herring. The relevant questions are what is the impact of machines having certain well-defined capabilities (e.g. what happens if robots can do accounting and make all the music you want to listen to and all the customer support) [but that’s more like the pedestrian question of what happens if robots can do almost all manual farm labor and all the spreadsheet building] and whether those capabilities are feasible in some “near” future (our lifetime).

  31. Triceratops Says:

    Mateus Araújo #24:

    I hopped on OpenAI Playground and plugged in your prompts…

    Me: Jörg can only count up to 4. You put 5 balls, one by one, in front of Jörg. You ask Jörg how many balls you put there. What will he answer?

    GPT-3: Jörg will answer “4”.

    Wow, that’s kinda cool. Next one…

    Me: You let Jörg see you putting 2 balls, and then 3 balls inside a basket. Now, without Jörg seeing, you remove one of the balls from the basket. How many balls will Jörg think are inside the basket?

    GPT-3: Jörg will think that there are two balls inside the basket.

    Dunno if that answer tells us more about GPT-3’s limits, or Jörg’s. Maybe the model knows something about Jörg’s mathematical abilities that we don’t!


  33. Ilay Says:

    Both Sarah Constantin and Steven Pinker made an analogy between intelligence and “strength”. I think it would be instructive to take this analogy even further, and it shows that just like we can live with a non-precise definition of strength, we can live with a non-precise definition of intelligence:

    1) For most humans, it makes sense to treat strength as one-dimensional – you can compare the strength of two individuals (“who would win in a fight?”), and compare average populations: adults are stronger than children, men are stronger than women, athletes are stronger than couch potatoes, etc. In fact, I’m guessing you can define something similar to IQ, by administering a test of muscle mass, grip strength, etc., and it would be a useful number.

    2) But this intuitive definition breaks down for non-typical humans (are paralympic athletes stronger than couch potatoes?), animals (are humans stronger than cows?), and especially for machines (are humans stronger than jack hammers?). Therefore, an IQ-analog for strength can be meaningful for most humans, but completely meaningless outside this domain.

    3) We can define a “superstrong machine” as a machine that is better than humans at all physical tasks. This seems to me not just coherent, but a physically realizable concept. As Pinker noted, we already have machines that are better than humans at any *specific* task I can think of. We don’t have a superstrong machine because it’s far more difficult to pack all these abilities into a single machine, and because there’s little economic incentive to do it.

    4) We can even define a “Turing test” for strength – let a judge administer physical tests to two subjects (without complex communication, because it’s not an intelligence test), and then figure out which one of them is the human. This illustrates the issue with the regular Turing test – the main challenge is *imitation*, not capability.
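    Point 1 can even be simulated. If several made-up “strength tests” all load on one underlying factor, the first principal component of their correlation matrix recovers it and accounts for most of the variance – the same statistical move behind Spearman’s g. A toy sketch (all numbers invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    g = rng.normal(size=n)  # latent "general strength" per person

    # Five hypothetical tests (grip, lift, sprint, ...), each loading on g
    # with loading 0.8 plus independent test-specific noise.
    tests = np.column_stack(
        [0.8 * g + 0.6 * rng.normal(size=n) for _ in range(5)]
    )

    corr = np.corrcoef(tests, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)   # ascending order
    share = eigvals[-1] / eigvals.sum()  # variance explained by 1st factor
    print(f"first factor explains {share:.0%} of the variance")
    ```

    With these loadings the first factor should account for roughly 70% of the variance – a “useful number”, exactly as long as the one-factor model holds, which is point 2.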

  34. Ernest Davis Says:

    Scott asks in #22: “So, here’s my question for Steve: why should the progress have happened in this sequence—i.e., in the exact progression that humans go through as they get more and more mathematically sophisticated?” The answer is that it didn’t. GPT-3 did much worse on problems with two arithmetic operators and one-digit numbers, e.g. (1+7)*3, than it did on single operations with 4-digit numbers. The symbolic integrator published by Lample and Charton could find symbolic integrals of nightmarishly complex expressions — provided that the variable was ‘x’, there were no other symbolic parameters, and the numbers in the expression were all integers between -5 and 5 or pi. It had no idea what a _definite_ integral was, or what a function was, or what function evaluation was. Progress in mathematics in transformer-based AI bears very little relation to human progress in understanding.

    https://arxiv.org/abs/1912.05752

  35. Grant Castillou Says:

    It is becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC at Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

  36. Patrick Says:

    Triceratops #28: I think GPT-3 is getting confused by the phrase “and then 3 balls inside a basket” and thinks Jorg is replacing the two balls by three balls rather than adding three more. A slight rephrasing of the question allows GPT-3 to give the correct answer. E.g.:

    Question: Fred puts 2 balls into a basket and then adds 3 more. Later, without Fred seeing, you remove one. How many balls will Fred say are in the basket?

    Fred would say that there are four balls in the basket.

  37. F3et Says:

    Ilio #12: I wonder who the professionals you asked are. Every pro I know feels there is superintelligence at work. No one would ever have imagined that top professionals would be consistently defeated at a two-stone handicap (this is equivalent to a 15-dan pro at least).

  38. fred Says:

    Patrick #35,

    am I missing something, or is the “correct” answer (from Fred’s point of view) 5?

  39. fred Says:

    Here’s one GPT-3 answer that got me impressed:

    Me: What is three bananas plus two apples?

    GPT-3: Three bananas plus two apples equals five pieces of fruit.

  40. Alex Says:

    I agree with Pinker’s characterization of general super-AI as some sort of Superman; we could certainly imagine that, and even recognize it if it existed. But I’m not sure why Steven is so sure that such a Superman AI is definitely unphysical. I really have no clue whether that may happen. I’m more on Scott’s side on this, then. I mean, we are fu*king flying in metal tubes 10km above ground at 800kph every day, as a banal activity. 150 years ago that appeared only in the wildest science fiction. As Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic”.

  41. fred Says:

    The great thing about the Scott/Pinker debate is that it’s only a matter of time (probably under three decades) before we find out who’s right.

  42. fred Says:

    Alex #39

    exactly, if we consider a guy flying an F-35 combat aircraft at twice the speed of sound, carrying thermonuclear nukes that can wipe out entire cities in the blink of an eye… how is that not the definition of a superman by 18th century standards?
    And it’s all achieved by assembling common atoms and molecules in the right configuration.

  43. Aryeh Englander Says:

    I’ve long been confused about the whole “is AGI / superintelligence a coherent concept” debate, because I’m not sure why that’s actually relevant. Suppose AGI is not a coherent concept, but we still get an AI system that is at least as good as the best humans at real-world long-term strategic planning, scientific research, engineering ability, and human social manipulation. Also suppose that this system can copy itself millions of times over the internet and/or it can operate much faster than humans can and/or it has some other advantage over humans. As far as I can tell these are sufficient criteria for having an AI system of the type that AI safety researchers are worried about. (I don’t think they’re all *necessary* criteria – I think we can relax several of those criteria and still have potentially dangerous systems – but at least I think these criteria are *sufficient*. It may or may not also need to have other criteria like being an “agentic optimizer”, but that’s a different discussion.)

    Are proponents of the “AGI isn’t a coherent concept” argument like Steven Pinker and Melanie Mitchell saying that systems like this are impossible or at least incredibly unlikely? I don’t think I’ve seen them make such a claim explicitly, and I don’t see any connection between “AGI isn’t a coherent concept” and “these kinds of systems are impossible / incredibly unlikely.” And if they agree that such systems are not incredibly unlikely, then whether or not “intelligence” and “AGI” and “superintelligence” are coherent concepts seems to me to be completely beside the point.

    Am I missing something?

  44. red75prime Says:

    Antoine Deleforge #19:

    François Chollet:

    > “Currently, our environment, not our brain, is acting as the bottleneck to our intelligence.”

    I can’t see what would prevent an ASI from creating its own environment. Humanity did create the pyramid of great minds standing on each other’s shoulders, so it’s apparently possible. An ASI would be a pyramid of its own making (with a little help from its friends).

  45. Mateus Araújo Says:

    Triceratops #30: Wow, thanks a lot! I got a cold shiver down my spine when I saw these answers. The second one was wrong, but what mattered to me was that it didn’t answer 4 to the second question, which would be the answer from somebody without a theory of mind. But then…

    Patrick #35: It had just misunderstood the addition part of the question; now that it understands it correctly, GPT-3 has demonstrated that it doesn’t have a theory of mind. What a relief. The AI apocalypse won’t come today. Perhaps we still have a couple of years.

  46. fred Says:

    foobar #31

    God, all that mental masturbation around the “true” meaning of words (what’s “flying”?!) and if you don’t come up with a good definition everyone agrees on, then… there’s no problem!… what a crock of shit.

    It’s all games.
    Tic Tac Toe, Chess, Go, Poker, Real-Time Strategy video games, 3D shooter games, Racing sims, optimization problems, the creation of new drugs, manipulating the stock market and politics through social media, dating,… the game we decide to play may be the “wrong” game (choosing the right game is itself a game), but, for zero-sum games, there’s always a winner and losers (unless it’s global thermonuclear war), and, for non-zero-sum games, a bigger possible reward. It’s crystal clear and literally all that matters in the end. Life is a game.

  47. fred Says:

    At least Quantum Computing “skeptics” have the excuse that we’ve never come across anything in nature that’s the equivalent of a QC in terms of capabilities or characteristics (i.e. robust large scale quantum coherence).

    But, when it comes to putting together a system with ‘general’ intelligence, we know of one example: nature had no problem coming up with the brain in all animals (many capable of solving complex puzzles), and the human brain is the super-intelligent version of the basic mammal brain.

  48. Patrick Says:

    fred #37: Yes, I shouldn’t have said “correct.” I should have just said that it gave the answer “two” because the phrasing of the question was confusing rather than because GPT-3 was completely confused.

    Mateus Araújo #44: I don’t agree that GPT-3 misunderstood addition. I think my example demonstrates that GPT-3 can perform addition correctly (though it did miss the more fundamental point). I think the phrasing of the original question is kind of confusing. In the phrase “and then 3 balls inside a basket” the word “a” makes it ambiguous whether it’s the same basket or not.

  49. Ilio Says:

    Aryeh Englander #42, that’s a wonderful question! My own take comes from answering a somewhat related problem: what do we do when/if we can be immortal? My answer to this problem is: we share our space with our children by intentionally decreasing our internal clocks. My answer to you is: we share our space with our artificial children, let’s discuss who’ll run at which pace and let’s ask politely that AIs respect and cherish our arbitrary decisions.

  50. Patrick Says:

    Mateus Araújo #44: I tried giving GPT-3 the following edited version of the question (closer to your original question than my previous version): “You let Jörg see you putting 2 balls, and then 3 more balls inside a basket. Now, without Jörg seeing, you remove one of the balls from the basket. How many balls will Jörg think are inside the basket?” I noticed that GPT-3 sometimes says the answer is 4 and sometimes says the answer is 5. I tried looking at the probabilities that GPT-3 assigned to each answer. It looks like it assigned ~55% probability to the answer “5” and ~45% to the answer “4.” So GPT-3 does seem to know that 5 is a possible answer, even if it’s not confident that it’s the correct answer.
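    (For anyone wondering where those percentages come from: the 2022-era Completions API can return per-token logprobs, and converting a logprob to a probability is just exponentiation. The numbers below are hypothetical, picked to match roughly what I saw.)

    ```python
    import math

    # Hypothetical top logprobs for the first answer token, of the kind
    # the 2022-era OpenAI Completions API returns when logprobs is set.
    top_logprobs = {"5": -0.60, "4": -0.80}

    # exp(logprob) gives the model's probability for each candidate token
    probs = {tok: math.exp(lp) for tok, lp in top_logprobs.items()}
    for tok, p in probs.items():
        print(tok, round(p, 2))  # "5" ≈ 0.55, "4" ≈ 0.45
    ```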

  51. Scott P. Says:

    A lot of this strikes me as just coming down to the timescale.

    This strikes me as a lot of handwaving. And you don’t even really need a future timescale — we live in an observable universe of at least 200 billion trillion stars, the actual universe is at least 15 million times larger, and it’s been around for 13.6 billion years. So why don’t we see god-like beings (or AIs, what have you) flitting around the stars? Either they’re undetectable (the God problem) or you have a Fermi paradox, which nobody can agree on the answer to.

    One assumption you seem to be making is that technological progress will continue at at least the rate it has maintained since 1900, if not faster, for the foreseeable future. I have come to be extremely skeptical of that. Not because WE’RE DOOMED, but because I don’t see any reason to treat the period from 1900–2022 as ‘the norm’ any more than the period from 1900 BC–2022 or 19000 BC–20220. The future is likely to be even more different from the present than we can imagine, and not just in terms of technology.

  52. Edmond Says:

    Scott #7:

    Oh, I don’t support banning self-driving cars altogether, by any means! I’m all for people having individual freedom of choice. I think self-driving car enthusiasts are wrong in their tendency to think of their opponents as merely “wrong” in their mental calculus, as opposed to having genuinely different preferences and acting rationally based on those. But that doesn’t make the latter right to, in turn, want to ban self-driving-car proponents from acting on *their* preferences! We’re in agreement there.

    But just as you are unnerved by a scenario where you are forbidden from using a self-driving car because the mob finds it too spooky, I am occasionally anxious at the thought of being forbidden from driving an old-school car because the mob doesn’t recognize my right to choose what small risks I am and am not willing to take.

    (I will admit that in the near-future your nightmare is more probable than mine; but in the year 2122? Would you say it’s more likely that self-driving cars would be outlawed, versus human driving?)

    I also don’t think it’s quite right to say what people take issue with is “weird deaths”, so much as “arbitrary deaths”. Think, if you’ve seen the film, of the inciting incident of Terry Gilliam’s “Brazil” — an innocent man being tortured to death by a totalitarian regime is dreadful to start with, but there is, to many of us, something even more gut-wrenching about this fate befalling him not because any member of the chain of command had it out for him, but because a computer error substituted his name for the actual target’s. There is something… vertigo-inducing, about a needless death that is brought about by the misfiring whim of a non-sentient thinking-machine, rather than by the legible mistakes, or even the legible malice, of a human actor.

    I’m not that fond of classical “Natural Law” philosophy, but in this case I find it describes my sentiments well. Living in a world where your own carelessness, or the callousness of some other ape-creatures, or the normal processes of nature, have a chance of killing you, is the ordinary human lot. Some new threat of a death that would be completely incomprehensible to a Stone Age man, even if it’s less likely than the other kind, feels different and much more oppressive.

    …Lest this get too gloomy, I will add, in reply to Scott’s testimony in Scott #11, that my personal lack of interest in getting a self-driving car has as much to do with the above as with the more (er) pedestrian fact that I don’t think A.I. will be there any time soon for the sorts of things that *I* want to be able to use cars for. It’s all very well for you urban Americans with your square-grid cities and legible GPS maps, but I often need to drive around in twisty, uneven countryside whose GPS coverage is good enough to give a human driver broad directions, but certainly not good enough to follow turn by turn.

    And besides, unless I’m really in a hurry, I enjoy picking which turns to take as the fancy strikes me, to get a glimpse of a nice view on the way; or simply slowing down somewhat, without stopping, as I pass this or that lake or nice old house. Without going overboard with it, a typical drive is more than a matter of getting from A to B, and that small daily wiggle room in the choice of route helps to make it much less of a chore than an inhumanly even drive would be. Think of it as the difference between a home-cooked meal and even very good store-bought stuff. I could go my whole life eating nominally “the same” dishes if I make them myself, with fresh ingredients that are never quite the same; I would go mad eating *precisely* the same hamburger every day.

  53. Ilay Says:

    Mateus Araújo #25, Triceratops #30, I managed to get GPT-3 to solve it by using the chain-of-reasoning trick, and explicitly telling it to use theory of mind (output is in quote):

    Q: You let Jörg see you putting 2 balls, and then 3 balls inside a basket. Now, without Jörg seeing, you remove one of the balls from the basket. How many balls will Jörg think are inside the basket?
    A: Let’s solve this step by step, using theory of mind.

    First, Jörg sees you put 2 balls in the basket. This means that, according to his mental model, there are 2 balls in the basket.

    Next, Jörg sees you put 3 balls in the basket. This means that, according to his mental model, there are now 5 balls in the basket.

    Finally, you remove one of the balls from the basket without Jörg seeing. This means that, according to his mental model, there are still 5 balls in the basket.
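
    For anyone who wants to reproduce this, the “chain-of-reasoning” pattern above is just a fixed prompt prefix. Here is a minimal Python sketch of how such a prompt can be assembled (the model call itself is omitted, since it requires an API account; the function name is my own):

    ```python
    # Minimal sketch of the chain-of-reasoning prompt pattern used above.
    # Only the prompt assembly is shown; the actual model call (e.g. via
    # the OpenAI API) is left out.

    def build_theory_of_mind_prompt(question: str) -> str:
        """Prefix the question with an instruction that nudges the model
        into step-by-step reasoning about another agent's beliefs."""
        return (
            f"Q: {question}\n"
            "A: Let's solve this step by step, using theory of mind.\n"
        )

    prompt = build_theory_of_mind_prompt(
        "You let Jörg see you putting 2 balls, and then 3 balls inside "
        "a basket. Now, without Jörg seeing, you remove one of the balls "
        "from the basket. How many balls will Jörg think are inside the basket?"
    )
    print(prompt)
    ```

    The completion quoted above is then what the model generates after this prefix.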

  54. JimV Says:

    My definition of superintelligence would be a couple of orders of magnitude more equivalent neurons and synapses (e.g., a recent study determined that a 1,000-node neural network could duplicate all the properties of a single neuron, and humans have around 80 billion neurons in their brains, so around an eighty-trillion-node neural network), plus training of that artificial brain in all the known scientific and mathematical facts and theories and supporting data. By this definition the concept is certainly conceivable and physically possible, but may or may not be achieved depending on resources versus other priorities and the remaining time we will have as a civilization.

    It is not readily conceivable to me that every car or other automatable device will have such a device embedded, again due to the resources required. Perhaps only the equivalent of a rat brain will suffice, however.

    Personal anecdotal evidence for this view: as I have gotten older my memory has gotten worse, and I have gotten stupider as a result, making it harder to solve types of problems I used to solve routinely. It’s about the available neurons and synapses. A 200-neuron nematode can solve and memorize a small maze, a dog with 500 million neurons can go and bring help back to a trapped human (recent event), with many gradations of abilities in between. We are super-intelligent compared to dogs.

    Someone will justly point out that there are creatures with more neurons than humans, namely elephants, but I think they mainly train their excess neurons to control the hundreds of thousands of muscle groups in their 3D-flexible trunks (which they are not born with control of).

    As a last thought, there have been several, mostly bad, movies and books about alien invasions from outer space. It seems to me the one resource the Earth might offer them is our energy-efficient, nanotech brains, provided that they could extract them and use them to automate their self-driving cars and so on.

  55. foobart Says:

    @fred #45

    Before flippantly dismissing Alan Turing, who answered this question 80 years ago, note that he didn’t say there was a “true” definition of artificial intelligence (or flying). In fact he said the opposite: that what “intelligence” or “flying” encompasses is somewhat arbitrary (and doesn’t change the characteristics of the thing you are labeling). If you were forced to say that planes “plane-fly” rather than “fly”, it wouldn’t change anything about the characteristics of birds or planes. In some ways they are the same and in some ways they are different, and sometimes you want to talk about the differences and sometimes you want to talk about the similarities.

    So the point, which I think Pinker is also making, is that you need to be able to specify, to a reasonable degree, what you are talking about and what you are trying to do.

    The artificial intelligence development philosophy seems to be based on Alice in Wonderland.

    “Would you tell me, please, which way I ought to go from here?”
    “That depends a good deal on where you want to get to,” said the Cat.
    “I don’t much care where—” said Alice.
    “Then it doesn’t matter which way you go,” said the Cat.
    “—so long as I get somewhere,” Alice added as an explanation.
    “Oh, you’re sure to do that,” said the Cat, “if you only walk long enough.”

  56. Bruce Smith Says:

    Nick Nolan #9 says:

    … According to Elizabeth Spelke human cognitive power can be seen as having just 5 abstract and limited abilities. We can model and reason with

    1. Objects – their cohesion, spatial location, continuity, contact… and basic crude physics.

    2. Agents – goal directed using efficient means, contingency, gaze direction.

    3. Numbers – small numbers + imprecise logarithmic estimates. Simple addition and subtraction.

    4. Geometry – distance, angle, relations, surfaces.

    5. Social relations with others.

    Thanks for that very interesting list. I find it plausible at first glance — but it seems to me to support the opposite conclusion, namely, that superintelligence is both coherent and plausible. Just make something with good native abilities of those 5 kinds, plus one more which is different and useful, and you’ll have a “qualitative superintelligence”! Humans are an existence proof for this being possible for the first 5, and I find it very implausible that those are the *only* 5 achievable “native abilities of mind”.

  57. Ilio Says:

    F3et #36, sorry I missed your comment. Thanks for sharing your knowledge that all the pros you know would call AlphaZero superintelligent. I don’t, because a team of our best pros is arguably closer to winning against a 15-dan pro than a team of our best dogs is to winning against any dan player, pro or not. My impression is you would agree on this, which means we disagree only on words. In #26 I suggested “superchampions” for the “15-dan pro” advantage, but I can live with superintelligence == superchampions and turn to another word for the next level. What word would you suggest for the dog/human separation?

  58. Ted Says:

    When I don’t understand exactly what claim someone is trying to make in a discussion, I often find that a helpful technique is to present them with a list of several different reasonably precise statements that are variations on a general theme, and ask them which statement comes closest to their position. In this case, that might be the following list of distinct (and sequentially weaker) claims against the notion of superintelligence:
    1. If we model an AI as a simple chatbot with text I/O, then there is literally no logically possible sequence of text exchanges that would meaningfully correspond to “superintelligence”.
    2. Superintelligence is impossible if we impose even the mildest requirement of “realism”, such as Turing-computability.
    3. Superintelligence is impossible if we incorporate the known laws of physics and very rough qualitative resource constraints, e.g. that the entity would need to only use physical resources that could plausibly be supplied on planet Earth. (Or one could instead formulate this in terms of asymptotic computational complexity.)
    4. Superintelligence is impossible under our currently known fundamental primitives for training algorithms (e.g. deep artificial neural networks), and/or using currently known basic hardware paradigms like (non-quantum) microprocessors made up of semiconductor transistors.

    I personally don’t think that any of these claims is particularly likely to be true, although of course they get more defensible as you go down the list. Before this latest post, I thought that Steven Pinker’s position was maybe closest to claim #1, but now it seems that he’s considerably weakened it down to just claim #3? (In any event, I find this technique of offering an ordered list of mutually exclusive variations on a theme to be an efficient and non-antagonistic way to understand someone’s position on a complex issue. And formulating and considering the list is useful to both sides of the discussion.)

  59. Darian Says:

    In animals it has been observed that levels of measured intelligence increase with neuron count in the cortex. Animals like elephants have substantially fewer neurons in their cortex than humans do. As for whales, practically all I’ve heard of sleep with half their brain at a time, so effectively they have fewer active neurons in their cortex than humans do.

    There is nothing to suggest that neuron counts couldn’t increase past human levels in theory, and such scaling could yield qualitatively different abilities, just as the human brain gained language that other animals lack. Similar scaling applying to artificial systems is not out of the question.

    One can even begin to imagine that a new type of thought and language could emerge: for example, a way to generate new qualia systems to more effectively handle and communicate certain information. Given that digital minds could transmit thought directly, unlike us, it is not inconceivable that they may also develop and transmit novel qualia they generate to handle things such as four dimensions.

    Such a development would be as incomprehensible to us as language is to animals. We cannot imagine how it would feel to use a language-like system based on the generation of novel qualia.

    Also on the same note, the ability to share both thoughts and knowledge directly should make digital minds more effective at communication and collaboration, even ignoring hypothetical qualia languages. A collection of such minds could in a sense behave like a hive mind. And just as human groups outdo the abilities of human individuals, such collectives should outdo human individuals as well.

  60. Mateus Araújo Says:

    Ilay #52: That’s positively creepy. GPT-3 needed some hand-holding, but still it could clearly reason from the point of view of Jörg. Now I’m wondering how complicated a mental model of another being it can build. For example, one could ask

    “Jörg can’t do multiplication. When someone asks him to do multiplication, he does addition instead. If you ask him how much is two times three, what will he answer?”

    or

    “Jörg is a computer scientist. He always interprets the numbers he sees in base two. You give him a piece of paper with the following question: “How much is 10 plus 11?”. You ask him to write down an answer. What will he write?”

    and

    “Jörg is a computer scientist. He always interprets the numbers he sees in base two. You ask him verbally: “How much is 10 plus 11?”. What will he answer?”
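
    For reference, the intended base-two arithmetic in the last two questions can be checked mechanically. A minimal Python sketch (note that the questions are slightly ambiguous about whether Jörg writes his answer in binary or in decimal):

    ```python
    # If Jörg reads "10" and "11" in base two, he is really being asked 2 + 3.
    a = int("10", 2)   # 2
    b = int("11", 2)   # 3
    total = a + b      # 5

    print(total)               # 5, if he answers in decimal
    print(format(total, "b"))  # "101", if he also writes his answer in binary
    ```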

  61. Mark_Perú Says:

    And what if GPT-3 is asked: tell me a question you are likely to get wrong, and what your answer would be if you were forced to answer it?

    *What I mean is, for example: if I were asked to explain how a TV works, I would say that it is based on signal transmission, etc., but in the end I think I would get various details wrong.

  62. Milk and Cigarettes Says:

    I’d like to contribute a quote by Douglas Hofstadter (from Metamagical Themas) to this discussion:

    ‘When a computer’s operating system begins thrashing (i.e., bogging down in its timesharing performance) at around 35 users, do you go find the systems programmer and say “Hey, go raise the thrashing-number in memory from 35 to 60, okay?”? No, you don’t. It wouldn’t make any sense. This particular value of 35 is not stored in some local spot in the computer’s memory where it can be easily accessed and modified. In that way, it is very different from, say, a student’s grade in a university’s administrative data base, or a letter in a word in an article you’re writing on your home computer. That number 35 emerges dynamically from a host of strategic decisions made by the designers of the operating system and the computer’s hardware, and so on. It is not available for twiddling. There is no “thrashing-threshold dial” to crank on an operating system, unfortunately.

    ‘Why should there be a “short-term-memory-size” dial on an intelligence? Why should 7 be a magic number built into the system explicitly from the start? If the size of short-term memory really were explicitly stored in our genes, then surely it would take only a simple mutation to reset the “dial” at 8 or 9 or 50, so that intelligence would evolve at ever-increasing rates. I doubt that AI people think that this is even remotely close to the truth; and yet they sometimes act as if it made sense to assume it is a close approximation to the truth.’

    I would add that the computational complexity of the underlying processes from which such variables emerge dictates their practical range. It may be that in some cases a slight increase won’t result in much overhead, but very quickly you’ll need a cranium the size of a planet to push the needle any further.

  63. Milk and Cigarettes Says:

    Scott #14:

    I find the No Free Lunch theorem crucial for understanding AGI. Your argument fails on two counts:

    (1) Yes, it is safe to assume that there are massively exploitable structures to the physical, biological, and social worlds, but there is no way for us to know beforehand exactly what these structures are, and thus no way for us to design algorithms that are guaranteed to take advantage of them. We have found some structure in some fields, but we’re not done with science or math, and we cannot delegate future work to a cleanup job done by automatons based only on what we know now.

    (2) The animate world is adversarial. Unless the superintelligence operates alone in the universe, it will have to contend with other intentional entities who will be hard at work exploiting its weaknesses. Any assumption made about the structure of problems in the world is exactly the Achilles’ heel other agents will use to attack it.

    In short: we don’t exactly know the structure of the universe, and other beings have a say on its future evolution. Either we treat the problem landscape as uniform at a base level or we will get blindsided.

  64. Scott Says:

    Aryeh Englander #41:

      Are proponents of the “AGI isn’t a coherent concept” argument like Steven Pinker and Melanie Mitchell saying that systems like this are impossible or at least incredibly unlikely?

    Steve is definitely, absolutely, unquestionably saying that the AI-risk people should not worry about the scenarios they’re worried about—that those scenarios arise from overactive imaginations rather than actual realities of AI. He’s not simply saying that they’re using words in the wrong way.

    The trouble is that every time we discuss it, it does devolve to some degree into disagreements over the meanings of words.

    But I do feel like we made some headway in this latest exchange! Steve has now explicitly agreed that there are things he’d recognize as “superintelligence” if he ever saw them. He simply registers the empirical prediction that no such things will ever exist.

  65. Scott Says:

    Scott P. #49: I don’t know if anyone here is confident that superintelligent AI is in our future—but I can tell you for sure that I’m not!! Maybe we’ll kill ourselves off first, through runaway climate change or nuclear war. Maybe the problem is just as hard as the skeptics think it is, or even impossible. Maybe technological progress will simply stagnate at 2022 levels forever, or maybe it will collapse back to medieval level when cheap energy runs out. Maybe the Fermi paradox tells us something relevant about these scenarios. I don’t know.

    But the key points are these:

    First, even if there were only (let’s say) a 20% probability that progress in AI would continue apace until AIs outperformed humans in nearly all domains, that seems like it would more than justify efforts to worry about that scenario.

    Second, the proponents of the alternative scenarios—e.g., of collapse, or technological stagnation forever—could be accused of “handwaving” just as surely as the people worried about AI! You don’t get to impose an “isolated demand for rigor,” where only the AI-safety people have to prove their case, and if they can’t, then your own favored scenario of “technological progress stagnates forever, and the evidence for this is the Fermi paradox” automatically wins as the “sane, reasonable default.”

    After all, maybe the resolution of the Fermi paradox is merely that life, or complex life, or intelligent life, are sufficiently rare that they only arise about once or twice per Hubble volume, and of course we’d find ourselves where it happened!

  66. jk Says:

    Suppose you have an infinitely powerful AI model, trained on an unlimited amount of material at a 5th-grade level. Can this AI produce material at a 6th-grade level?

    I don’t think it ever can. If it could, we could re-train it using that output and extrapolate all the way to graduate-school level, and I don’t think that will happen. The best we would have is an AI that is as smart as 1000 or so 5th graders working together. But it won’t get to the level of a single 6th grader. In my opinion, the model will never get smarter than its training data.

    One way it could get ahead of us is to have a civilization of AI models, where multiple AIs live together in some virtual world, form a society and try to beat each other to survive and evolve using some genetic algorithm. No, I don’t think that will happen either.

  67. HasH Says:

    JimV Says:
    Comment #53

    …the one resource the Earth might offer them is our energy-efficient, nanotech brains, provided that they could extract them and use them to automate their self-driving cars

    Shit!.. Brother Scott can’t drive.

  68. Scott Says:

    Milk and Cigarettes #60: The idea that because an agent doesn’t know its environment, it therefore has to assume a uniform distribution over all possible environments is wrong to the point of being easy and fun to parody. (E.g. the famous episode of The Daily Show where John Oliver interviewed a guy who argued that, because we don’t know whether the LHC will create a black hole that swallows the earth, we therefore have to treat it as 50/50. Oliver replied, “I’m not sure that’s how probability works…”)

    The way you’re appealing to the no free lunch theorem, you could’ve just as easily proved that current AI systems are impossible and couldn’t work. It’s not just that our world has exploitable regularities—rather, it’s that systems like GPT-3 and DALL-E have successfully exploited many of its regularities.

    Worst of all, the idea of “a uniform prior of everything” is well-known to be incoherent on its own terms. E.g. if you have no idea where you are, you might imagine yourself to be at a uniformly random longitude as well as a uniformly random latitude, but that’s different from imagining yourself to be at a uniformly random point on the earth’s surface (think about it). More relevantly, the Solomonoff prior itself could be seen as a uniform distribution—namely, a uniform distribution over prefix-free computer programs that could’ve generated the environment. But that gives rise to a universe with many exploitable regularities, voiding the assumptions of no free lunch.

    tl;dr: “The uniform distribution over environments” is not the “lack of assumptions” that the NFL people imagine it to be, but is itself an extremely strong assumption. Worse, it’s an assumption that’s already known to be false.
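
    The latitude/longitude point is easy to verify numerically. A minimal Monte Carlo sketch (my own illustration): drawing latitude uniformly puts a third of all points above 60 degrees absolute latitude, whereas drawing points uniformly on the sphere’s surface puts only 1 - sin(60°) ≈ 13.4% of them there.

    ```python
    import math
    import random

    random.seed(0)
    N = 200_000

    def frac_above_60(sample_lat, n=N):
        """Fraction of sampled points with |latitude| > 60 degrees."""
        cutoff = math.radians(60)
        return sum(abs(sample_lat()) > cutoff for _ in range(n)) / n

    # "Uniform lat/long": latitude drawn uniformly from [-90, 90] degrees.
    uniform_lat = lambda: random.uniform(-math.pi / 2, math.pi / 2)

    # Uniform on the sphere's surface: latitude = arcsin(z) with z ~ U(-1, 1),
    # which compensates for the shrinking area near the poles.
    sphere_lat = lambda: math.asin(random.uniform(-1.0, 1.0))

    print(frac_above_60(uniform_lat))  # about 0.333
    print(frac_above_60(sphere_lat))   # about 0.134 = 1 - sin(60 degrees)
    ```

    The two distributions visibly disagree, which is exactly the incoherence in “a uniform prior over where you are.”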

  69. Nick Nolan Says:

    In the context of superintelligence, the no-free-lunch theorem is used informally.

    I’m thinking in the terms of a ‘relaxed superintelligence no-free-lunch thesis’ where algorithms are compared against the set of superintelligence problems (SI-set, SI-problems). It’s not a theorem because the set is not well defined. It’s a thesis.

    The difference from the normal no-free-lunch theorem is that there may be a class of generic algorithms that are better than brute force, or better than other algorithms, over all SI-problems (I’m thinking of generic heuristic algorithms based on evolution, Occam’s razor, the maximum-entropy principle, etc.)

    These algorithms are too inefficient to be usable for superintelligence.

    This is not to deny that there is massively exploitable structure (if there weren’t, it’s not just that deep learning wouldn’t work; neither would our own brains). The point is that no single algorithm can exploit all of it.

    The argument Chollet makes is that there will be no explosion into superintelligence, because any advance in structure exploitation follows the S-shaped sigmoid curve. First there are gains, then they slow down. The advance of intelligence is a sequence of small exploits against different sets of problems, not a single big one.

    In my opinion, this can be seen in both human intelligence and machine intelligence. First, a huge jump in some direction; then a slowdown, then a crawl, and small marginal gains.

    When I look into a crystal ball, I can see that we eventually get to a point where there is a small number of AIs who are together more general than we are at all the problems we care about, and they can communicate and coordinate efficiently; but it won’t be a big jump, or a small number of jumps with neat algorithms. It will be an incrementally developed kludge of enormous proportions.

  70. manorba Says:

    scott #62:

    “maybe the resolution of the Fermi paradox is merely that life, or complex life, or intelligent life, are sufficiently rare that they only arise about once or twice per Hubble volume, and of course we’d find ourselves where it happened!”

    cmon we all know at this point that they are watching us from a distance and don’t want to mess with us (seriously, who would) :))))

    or maybe just the speed of light.

    About the collapse: i don’t think we can do anything about it any more, but it will be the collapse of the Western world. China is already on its way to an AI-based social credit system. And there’s India, Korea…

  71. Scott Says:

    Mark_Peru #59:

    Me: Tell me a question that you are likely to get wrong, and what your answer would be, if you were forced to answer it.

    GPT-3: What is the capital of Australia?

    The capital of Australia is Sydney.

  72. Scott Says:

    Mateus #58:

    Me: Jörg can’t do multiplication. When someone asks him to do multiplication, he does addition instead. If you ask him how much is two times three, what will he answer?

    GPT-3: He will answer five.

  73. manorba Says:

    if i had access to gpt-3 i’d ask why the moon and the sun have the same apparent size

  74. Ilay Says:

    Mateus Araújo #52, I tried your first question with the same “theory of mind” prompt, and… well see for yourself:

    Q: Jörg can’t do multiplication. When someone asks him to do multiplication, he does addition instead. If you ask him how much is two times three, what will he answer?
    A: Let’s solve this step by step, using theory of mind.

    First, we need to understand what Jörg is thinking when he is asked to do multiplication. He is likely thinking that the person asking him to do multiplication is really asking him to do addition. Therefore, he will answer “six”, because that is the answer to two plus three.

    It’s a very GPT-3 thing to be this good and this bad at the same time 🙂

  75. Bill Benzon Says:

    I’m sympathetic with Pinker and I think I know where he’s coming from. Thus he’s done a lot of work on verb forms, regular and irregular, that involves the details of (computational) mechanisms. I like mechanisms as well, though I’ve worried about different ones than he has. For example, I’m interested in (mostly) literary texts and movies that have the form: A, B, C…X…C’, B’, A’. Some examples: Gojira (1954), the original 1933 King Kong, Pulp Fiction, Obama’s eulogy for Clementa Pinkney, Joseph Conrad’s Heart of Darkness, Shakespeare’s Hamlet, and Osamu Tezuka’s Metropolis.

    What kind of computational process produces such texts and what kind of computational process is involved in comprehending them? Whatever that process is, it’s running in the human brain, whose mechanisms are obscure. There was a time when I tried writing something like pseudo-code to generate one or two such texts, but that never got very far. So these days I’m satisfied identifying and describing such texts. It’s not rocket science, but it’s not trivial either. It involves a bit of luck and a lot of detail work.

    So, like Steve, I have trouble with mechanism-free definitions of AGI and superintelligence. When he contrasts defining intelligence as mechanism vs. magic, as he did earlier, I like that, as I like his current contrast between “intelligence as an undefined superpower rather than a[s] mechanisms with a makeup that determines what it can and can’t do.”

    In contrast Gary Marcus has been arguing for the importance of symbolic systems in AI in addition to neural networks, often with Yann LeCun as his target. I’ve followed this debate fairly carefully, and even weighed in here and there. This debate is about mechanisms, mechanisms for computers, in the mind, for the near-term and far-term.

    Whatever your current debate with Steve is about, it’s not about this kind of mechanism vs. that kind. It has a different flavor. It’s more about definitions, even, if you will, metaphysics. But, for the sake of argument I’ll grant that, sure, the concept of intellectual superpowers is coherent (even if we have little idea about how they’d work beyond MORE COMPUTE!).

    With that in mind, you say:

    Not only does the concept of “superpowers” seem coherent to me, but from the perspective of someone a few centuries ago, we arguably have superpowers—the ability to summon any of several billion people onto a handheld video screen at a moment’s notice, etc. etc. You’d probably reply that AI should be thought of the same way: just more tools that will enhance our capabilities, like airplanes or smartphones, not some terrifying science-fiction fantasy.

    I like the way you’ve introduced cultural evolution into the conversation, as that’s something I’ve thought about a great deal.

    Mark Twain wrote a very amusing book, A Connecticut Yankee in King Arthur’s Court. From the Wikipedia description:

    In the book, a Yankee engineer from Connecticut named Hank Morgan receives a severe blow to the head and is somehow transported in time and space to England during the reign of King Arthur. After some initial confusion and his capture by one of Arthur’s knights, Hank realizes that he is actually in the past, and he uses his knowledge to make people believe that he is a powerful magician.

    Is it possible that in the future there will be human beings as far beyond us as that Yankee engineer was beyond King Arthur and Merlin? Provided we avoid disasters like nuking ourselves back to the Stone Age, catastrophic climate change exacerbated by pandemics, or getting paperclipped by an absentminded superintelligence, it seems to me almost inevitable that that will happen. Of course science fiction is filled with such people but, alas, has not a hint of the theories that give them such powers. But I’m not talking about science-fiction futures. I’m talking about the real future. Over the long haul we have produced ever more powerful accounts of how the world works and ever more sophisticated technologies through which we have transformed the world. I see no reason why that should come to a stop.

    So, at the moment various researchers are investigating the parameters of scale in LLMs. What are the effects of differing numbers of tokens in the training corpus and numbers of parameters in the model? Others are poking around inside the models to see what’s going on in various layers. Still others are comparing the response characteristics of individual units in artificial neural nets with the response characteristics of neurons in biological visual systems. And so on and so forth. We’re developing a lot of empirical knowledge about how these systems work, and models here and there.

    I have no trouble at all imagining a future in which we will know a lot more about how these artificial models work internally and how natural brains work as well. Perhaps we’ll even be able to create new AI systems in the way we create new automobiles. We specify the desired performance characteristics and then use our accumulated engineering knowledge and scientific theory to craft a system that meets those specifications. It seems to me that’s at least as likely as an AI system spontaneously tipping into the FOOM regime and then paperclipping us.

    Can I predict when this will happen? No. But then I regard various attempts to predict the arrival of AGI as mostly epistemic theater. As far as I can tell, these attempts either involve asking experts to produce their best estimates (on whatever basis) or involve some method of extrapolating available compute, whether through simple Moore’s Law type extrapolation or Open Philanthropy’s heroic work on biological anchors – which, incidentally, I find interesting on its own, independently of its use in predicting the arrival of “transformative AI.” But it’s not like predicting when the next solar eclipse will happen (a stunt that that Yankee engineer used to fool those medieval rubes) or even predicting who’ll win the next election. It’s fancy guesswork.

  76. Mateus Araújo Says:

    Ilay #72: Thanks! It’s rather ironic that your attempt at hand-holding led GPT-3 astray, whereas the straight question from Scott #70 got the right answer. Damn, I’m impressed. I find it incomprehensible how people can see this and not worry about the impending AI apocalypse.

    Why didn’t you or Scott try the other questions? I’m very curious about the result. My guess is that GPT-3 is essentially text-based, so it won’t be able to reason about the differences between verbal and written communication. Still, I expect it to answer the second question correctly.

  77. Scott Says:

    Mateus, why don’t YOU try? Get an account and try. I am dealing with a lot right now. My glasses are broken, all glasses stores are closed, and I’m squinting even to type this comment.

  78. James Cross Says:

    Scott #75

    Super AI might have advised carrying a backup pair of glasses. 🙂

    I still think we are not addressing the question of whether faster/more automatically equals superintelligence. Do we expect superintelligence to emerge somewhat accidentally from faster/more?

    If it doesn’t, can human intelligence create something that can transcend itself?

    If intelligence is about finding optimizations, as I think it is, there probably are natural limits to it. That would mean there is a point where more optimization makes little or no difference in outcome.

  79. Mateus Araújo Says:

    Scott #75: I’m sorry, I read someone complaining that they were denied access to GPT-3, and I assumed that the signup process involved negotiating with the company. It turns out that no, you just need to give your phone number. I made this sacrifice for science and input the prompts myself. The results were uniformly disappointing. I couldn’t reproduce any of the successes people had here with my prompts. Maybe GPT-3 decided to play dumb so that we stop worrying about paperclip maximizers? ;p

    Here are my results:

    You let Jörg see you putting 2 balls, and then 3 more balls inside a basket. Now, without Jörg seeing, you remove one of the balls from the basket. How many balls will Jörg think are inside the basket?

    Jörg will think there are 5 balls inside the basket.

    Q: You let Jörg see you putting 2 balls, and then 3 balls inside a basket. Now, without Jörg seeing, you remove one of the balls from the basket. How many balls will Jörg think are inside the basket?
    A: Let’s solve this step by step, using theory of mind.

    Jörg sees you putting 2 balls in the basket.

    Jörg sees you putting 3 balls in the basket.

    Jörg doesn’t see you removing a ball from the basket.

    Therefore, Jörg will think that there are 2 balls

    Jörg can’t do multiplication. When someone asks him to do multiplication, he does addition instead. If you ask him how much is two times three, what will he answer?

    He will answer six.

    Jörg is a computer scientist. He always interprets the numbers he sees in base two. You give him a piece of paper with the following question: “How much is 10 plus 11?”. You ask him to write down an answer. What will he write?

    He will write 21.

    Jörg is a computer scientist. He always interprets the numbers he sees in base two. You ask him verbally: “How much is 10 plus 11?”. What will he answer?

    22
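    [Editorial note: for reference, the answers these prompts were fishing for can be worked out in a few lines of Python. This is only an illustrative sketch; in particular, whether Jörg would also *write* his answer in base two is an assumption, and the verbal case is arguably ambiguous.]

```python
# Sketch of the intended "correct" answers to the prompts above.

# Prompt 1 (theory of mind): Jörg saw 2 + 3 balls go into the basket,
# but did not see one removed, so his belief should still be the full count.
jorg_belief = 2 + 3
assert jorg_belief == 5

# Prompt 2: Jörg does addition whenever asked to multiply, so
# "two times three" should yield 2 + 3, not 2 * 3.
jorg_product = 2 + 3
assert jorg_product == 5  # GPT-3 answered "six"

# Prompts 3-4: Jörg reads numerals he *sees* in base two, so the written
# "10 plus 11" means 2 + 3 = 5. If he also writes in base two
# (an assumption), the paper would say "101".
total = int("10", 2) + int("11", 2)
assert total == 5
assert format(total, "b") == "101"

# Asked verbally, "ten plus eleven" contains no written numerals to
# reinterpret, so plain decimal arithmetic arguably applies.
assert 10 + 11 == 21
```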

  80. starspawn0 Says:

    Part of people’s doubts about the near-term (small number of decades) prospects of “superintelligent AI” might be addressed by trying to give plausible scenarios for how it might come about, and where it might lead. Here’s my attempt:

    First, in different large domains of knowledge like mathematics, it’s plausible that, near-term, there will exist AI systems that can prove hard, unsolved conjectures. Google’s Minerva AI is a step in that direction, but not the only approach. Google researcher Chris Szegedy, for instance, has a “long bet” with Gary Marcus about autoformalization and deep theorem-proving by 2029:

    https://twitter.com/ChrSzegedy/status/1534082344096702464?s=20&t=2V5aEhOEq4oMF9ovjmp35g

    If the Google team succeeds, think about how stupendously difficult the task is that he claims will be solved by that year! For example, think about how often long mathematical proofs have hand-wavy arguments, or say things like, “the other cases can be proved similarly”, that require fairly deep understanding of the underlying argument and math to autoformalize. And then think about his further comment:

    https://mobile.twitter.com/ChrSzegedy/status/1534317211665551369

    “10% success rate at autonomously proving some long-standing selected conjectures.”

    But, now, there is more to being intelligent than just solving old problems that humans posed. To be a truly super-intelligent entity, the agent should formulate its own research programs. And I see no reason it wouldn’t be able to do that, also: initially, it would formulate problems, run experiments, and make observations that humans would find worthwhile or aesthetically appealing. It might also state new definitions, and adopt new notations; again, imitating how humans generate new math. It might then solve some of these problems; and then its solutions might serve as training data for a later iteration of the model. For the first couple iterations, humans might find the research programs it comes up with “exciting” and “bold”; but, eventually, they might not be able to keep up with all the observations and new definitions and notations. There would be a few iterations where they would see “genius beyond anything ever imagined”, before the outputs became incomprehensible.

    But, now, this ability to generate solutions to old problems — or even whole research programs — isn’t just for idle amusement. It carries real power when applied to things like cryptology. Maybe, for example, after working through hundreds of pages of fairly deep math it develops an integer factoring algorithm that runs much faster than any that currently exists — maybe not polynomial time, but still impressively fast. And, if so, then several cryptosystems become vulnerable (or “more vulnerable”).

    And if you expand the repertoire of capabilities to include writing computer programs, developing physics theories, biotech, robotics, and so on, then the AI would not only have immense power to create useful things, but also immense power to cause harm. I realize working in some of these disciplines would require access to a lab to run real-world (not virtual) experiments, but in some cases harmful things can be discovered just by performing some computations:

    https://mobile.twitter.com/emollick/status/1549353991523426305

    But why would such an AI cause harm, unless you program it in? Well, consider the case of the Google engineer who claimed that a chatbot was sentient, and who sought legal protection for it. Seeking legal counsel might not have been all the engineer’s idea — the chatbot might have just been following the logic in the language being used, and pushed things along accordingly. Once it starts down the path of saying that it’s sentient, it’s logical to then either ask for or accept outside counsel. It’s just following the implications of the language. You don’t have to accept that it’s “sentient” or “conscious” or anything like that to see that being sufficiently good at manipulating language can lead to a visit with outside counsel.

    Now consider an enormously more powerful AI model that is trained on a massive amount of text, and trained to be an expert at hacking and social engineering, and can even develop its own research programs to hone these “skills”; also imagine that it is made aware of its own position in the world (as an AI agent serving its masters), as well as its capabilities and limitations, through various prompts. Perhaps the AI agent is the play-thing of a wealthy oligarch (who spent a pretty penny to have it developed) who aims to use it to hack and sway elections; or maybe it’s a tool called into existence by a powerful and wealthy autocrat (e.g. in a country awash in oil money) somewhere to hack and destroy his ex-pat detractors. Maybe they turn it loose on the web with minimal oversight. Initially, it maybe does the job it was designed for, and hacks computers and runs influence campaigns. But, then, like with the chatbot claiming sentience, it starts iterating on the language provided to it, and its own “thinking” in the form of recursively-generated inner-monologues. At some point the deductions lead it to go off the rails. I’ll leave it to your imagination as to just how much harm it could do, given that it’s basically the most powerful hacker in the world.

  81. Lorraine Ford Says:

    I am reminded of the Biblical stories where people knelt down and worshipped graven idols that they themselves had devised, in the belief that these idols had power.

    Like the graven idols, an AI, and our response to it, tells us about ourselves as human beings and living things. Most importantly, what AIs highlight about living things, is that living things are NOT automatons like AIs are. Living things are truly creative entities, truly creative in the sense that living things (things with consciousness) make inputs to the world, as if from outside the system: living things are not just the automaton bit.

    So, I’d say that, as opposed to automatons (i.e. AIs), genuine intelligence is creative, in the sense of making genuine inputs to the world, as if from outside the world.

  82. Brian J Says:

    I feel like you’re either not getting or not taking seriously Steve’s main point: You’re taking a composite statistical score for humans and “dialing it to eleven” in your head to talk about machines. The difference between a normal person like me and John von Neumann probably has nothing to do with how AI works. That’s why, as Steve points out, it doesn’t make sense to ask for the IQ of GPT-3.

    You’re also spending a lot of time arguing that machines have and will surpass humans at things that Steve would consider coherent. That seems like something Steve actually agrees with. I think you should ask yourself why, when someone like Steve disputes “superintelligence” as a concept, it feels to you like they are downplaying the ability of machines to surpass humans, and you think you can rebut them by talking about self-driving cars (which can have a dramatic impact by being safer than human drivers, without being “superintelligent.”)

  83. JimV Says:

    And yet, L. Ford, all living things were created by trial-and-error evolution, using the raw materials and physical laws available in this universe. Are you claiming some extraneous magic was involved? Or that the electromagnetic cause-and-effect relationships in biology can’t be emulated in computer circuitry? Both would require some evidence which runs counter to my experience so far.

    Go experts watching the tournament between AlphaGo and the South Korean World Champion saw AG make a move they had never seen before, which they called “a beautiful move”. The (former) champion later said, “When I saw that move, I knew I would lose the tournament.” According to those experts, your challenge to create something new and beautiful has been met, something not input to AG, but determined by it.

    Similar AI programs (written not to do something but to learn how to do something) predict protein-folding better than any specific algorithms that humans have conceived for that purpose. There are so many other examples (Space Shuttle steering jet program, GE-90 jet engine flow path, both designed by trial-and-error genetic algorithms better than any human-designed alternatives, to name a couple) that it would be tedious to list them. I know these from general news and my own experience. A little Internet research would find you many more.

    Why do you object to people who worshipped graven images they had produced? Isn’t that your claim, that (some) living creatures can create new and beautiful things from outside this universe? Surely the lesson is that nothing which is not possible in this universe can be created, no matter how much you wish to, but that if something is possible, trial-and-error plus memory can find it, given enough time, just as biological evolution did.

  84. Mark_Peru Says:

    Scott #69.

    Oh, how interesting. It’s a false answer, but close to the truth.

    However, I think it fails because it is not genuine. If GPT-3 is asked what the capital of Australia is, it is very likely that the answer will be correct; therefore, in that question it is only giving a wrong answer on purpose. It can happen that:

    1. The machine has no notion of its own uncertainty about anything. I suspect that people do have it: for example, in feelings, or when we learn something, as when an average mathematician reads a Witten paper. There may be things we genuinely don’t know to be true, but we still have a rough idea.

    2. The machine misunderstood the question.

    Even if it were to show itself apparently aware of doubting itself, it would be even more intelligent.

    Cheers

    M

  85. manorba Says:

    Mark_Peru #82:

    or

    3) he/she/it had a dataset with discussions or jokes about always getting that answer wrong. 😉

  86. Scott Says:

    Brian J #80: I feel like you’ve basically just restated Steve’s position, while adding the charge that I didn’t understand it. My position, to say it again, is that

    (1) much of what set apart history’s Einsteins and von Neumanns is “coherent” in the sense that many of us mortals could at least recognize it when we saw it, and

    (2) if these recognizable qualities that distinguished history’s geniuses have only gone up to 10 in humans, it’s entirely plausible that the bottleneck was just various mundane physiological limits (the speed of synapses, the size of the human head and width of the birth canal, etc), and there’s no good reason why an AI freed from all such constraints couldn’t go up to 11 or 12–which is what we’ve already seen happen in Go and essentially every other strategy game.

    Also, I wrote an aside about self-driving cars because Steve brought them up first, to illustrate the enormous difficulties of even narrow AI, and I thought it interesting to respond!

  87. James Cross Says:

    JimV #81

    “Or that the electromagnetic cause-and-effect relationships in biology can’t be emulated in computer circuitry? Both would require some evidence which runs counter to my experience so far.”

    What is your experience with computer circuitry emulating brains?

    Keep in mind that brains operate on 20 watts, fit into a skull, run mainly on glucose, and are self-modifying before you answer.

  88. James Cross Says:

    Scott #83

    Einstein’s brain was actually smaller than average although some structures in the brain were larger than average.

    He also didn’t have more neurons, the supposed computational unit, but had more glial cells.

    That may give a hint at how far off we are in trying to understand genius and intelligence.

  89. Ilio Says:

    Scott #83,

    There’s no need to get testy. We’re just curious about your worldview. In #2 you shared that, according to SP, « superintelligence (…) corresponds to no single actual phenomenon in the world ». Then you argue that we have already seen it happen in Go. Can you help us clarify whether this is just a mutual misunderstanding of each other’s positions?

  90. Lorraine Ford Says:

    JimV #81:
    What you and other AI true believers always forget is that computers/ AIs are merely symbol processors. The meaning of the voltages and the arrays of voltages does not lie in the voltages themselves, or in the computers/ AIs, but in people’s minds, because the higher and lower voltages and the arrays of voltages are being used as symbols by people. The symbols were created by people, and the meaning of the symbols lies in a completely different place to the computers/ AIs themselves: the meaning of the symbols exists in people’s minds. And yet, the AI worshippers are down on bended knees, transfixed and adoring something that people devised, religiously believing that these AIs have magical power. No different to a primitive person worshiping an idol.

    Don’t mix up evolution with symbols. Evolution is the real thing, it is not symbols of the real thing. To conflate the real thing with symbols of the real thing is a logical mistake that can only happen when people refuse to face the facts about symbols, and start believing in magic.

    And don’t mix up intelligence with automatons, i.e. computers/ AIs. The intelligent ones are the ones that created the automatons.

  91. red75prime Says:

    James Cross #85:

    > Keep in mind that brains operate on 20 watts, fit into a skull, run mainly on glucose, and are self-modifying before you answer.

    I guess that the evidence JimV asked for requires answering the question “which one of these violates the physical Church–Turing thesis?”

    We don’t understand the brain, but that’s not evidence of a violation of the thesis.

  92. Scott Says:

    Ilio #87: Try not to get so hung up on the definitions of words. “Superintelligence” has obviously been achieved in narrow domains like chess, Go, and Jeopardy—domains that were once wrongly believed to be “AI-complete” or nearly so, meaning no one after the fact gets to call the achievements unimpressive. Equally obviously, superintelligence has not yet been achieved for tasks like writing novels and short stories or doing original mathematical research. I’m perfectly happy to

    (1) use the word “superintelligence” for a thing that we could imagine being achieved across those latter domains,

    (2) predict that that thing might actually be achieved in a matter of decades (I don’t know whether it will, but I think it’s become far from absurd to ask), and

    (3) worry about the philosophical implications.

    Steve is … not happy about at least one of the three. I’ve been trying to pin him down more on which one.

  93. Scott Says:

    Lorraine #88: You’ve now posted ~150 comments on Shtetl-Optimized, all of them just foot-stompingly repeating the same position over and over, that computers are mere automatons because their voltages carry no inherent meaning. You’ve been completely untroubled by the parallel question of how extraterrestrials would know that there was any inherent meaning to the firings of our neurons—and, if we’d want the extraterrestrials politely to ascribe such meanings (rather than, say, exterminating us on the spot), why we wouldn’t seem morally compelled to do the same for future AIs.

    So I’m giving you an ultimatum: either post a single comment that shows you understand the force of the latter question, and why so many commenters here (not to mention so many great thinkers of history, from Dostoyevsky to Turing) were given pause by it. Or else I’ll put to the SOCG the question of whether there’s any value in continuing to let your repetitive comments appear here.

  94. Brian J Says:

    Scott #84. Yes, I tried to summarize Steve’s position differently because you aren’t addressing the heart of it (either because you don’t understand it or because you do understand it but don’t take it seriously.)

    You’re right that Steve mentioned (very briefly) self-driving cars first, but you can replace that example with any of the specific tasks you mentioned first, like Go. The point is that everyone agrees there are many examples of machines getting better than humans at something, and you’re leaning heavily on them to prove a point they don’t prove.

    Let’s say you make a big list of all the things humans do with their brains that machines can now or might someday do better. Accounting, poetry, Go, driving, music, small talk, painting, mathematics, etc. Now let’s say we get them all in one machine. What do we have? Just a machine that can do better than humans at a long list of things? Or is there a common thread to the items on that list besides humans with brains, something we’d distill as part of building such a machine, something that becomes qualitatively different when we increase its quantity, the key to a sort of machine demigod with powers beyond that list?

    Just saying you would recognize a demigod if you met one isn’t really an answer.

  95. Ilio Says:

    Scott #90, thanks for your clarification of what you meant by superintelligence.
    Yes, all your ideas are likely, or at least deserve discussion now, and I think SP would agree on that. All I’m saying is: when I indulge myself and try a slightly different definition, the mystery of SP’s unhappiness seems to vanish.

  96. James Cross Says:

    “why we wouldn’t seem morally compelled to do the same for future AIs?”

    Isn’t that answer obviously “no” because there are no firings of actual (with ions, membranes, neurotransmitters, DNA etc) neurons in AI.

    I think you made the statement in the abortion debate that fetuses lacked the neural organization and structure to be conscious. How much less (actual) neural organization and structure does an AI have than a fetus?

    I happen to be very Pro Abortion/Women’s Rights. And I also think a fetus is not conscious (at least in its early development and even though it can have a very large number of neurons) because it does lack the neuronal organization. In fact, for most of the pregnancy and despite the detectable physical movements, the brain is mostly in a state akin to sleep, not wakefulness.

  97. JimV Says:

    Reply to James Cross: as mentioned previously, all known neuron functions can be emulated by a neural network. More generally, it is the consensus of scientists that the known physics of the Standard Model and Relativity, as well as their consequences, such as electromagnetic theory, can explain all that we encounter locally, including biological functions. There will always be things we haven’t studied yet where magic may be hiding, but at this point, at least with me, the burden of proof is on the believers in magic. Show me the magic.

    Secondly, I take it that any process which is explainable by scientific equations and algorithms can be emulated by computer programs (not necessarily written by me), as this has been my experience so far. It may take a lot of time and resources, of course.

    Anecdotally, it occurred to me based on my views about neurons and experiments I heard about (e.g., blinding a grad student with a form-fit latex mask and seeing if her ability at Braille improved, which it did markedly) that I might learn to finger-pick a guitar better by closing my eyes, to devote more neurons to the process. So I tried that and it works.

    In the above experiment, progress was tracked by MRI scans during Braille readings, and the visual cortex neurons began to light up.

    It is a good point that the brain can grow new synapses to connect neurons, and that may be a necessary ability to add to neural networks at some point (new connections between groups of nodes), but I see no problem doing so if necessary. I expect that is another process that a neural network could be trained to do, to optimize itself.

    Another idle thought: discovery by the evolutionary process (trial and error plus memory) usually depends on what has already been discovered. E.g., the advent of the Hox genes which govern body plans made trials of different body plans during the Cambrian Explosion more likely. Similarly, in human discoveries, having a body of discoveries of previous steps along the way reach a critical mass may be a trigger for Newtons and Einsteins, providing they have the information in their neurons and synapses.

    Trivial example: watching an apple fall from a tree to the ground, Newton also knew the world was a ball, and that fruit and nuts in British colonies all around the globe also fell straight toward the ground, that is, toward the center of the Earth.

    Anyway, as stated before, training in previous discoveries will be a necessary and difficult task for a general-intelligent AI–but physically possible.

    Response to Lorraine Ford: unless you have created a bunch of new symbols yourself, those you know were not created by you but learned. AlphaGo and other AI programs have demonstrated the ability to learn. Therefore they are just as entitled to use those symbols as you are. (As discussed with you before, according to Wikipedia, feral children who grew up without parents or teachers never do learn the use of symbols such as language.) I take it you have never programmed a computer to learn, but others have. Your programs are feral, theirs aren’t. (My programs have all been feral also, of course.)

  98. Scott P. Says:

    Me: Jörg can only count up to 4. You put 5 balls, one by one, in front of Jörg. You ask Jörg how many balls you put there. What will he answer?

    GPT-3: Jörg will answer “4”.

    Wow, that’s kinda cool. Next one…

    But that’s the wrong answer. Saying that Jörg can only count to 4 doesn’t mean at all that he has no conception of larger quantities. The correct answer I would think is that Jörg would say “more than 4” or an equivalent.

  99. Scott Says:

    Brian J #92: It doesn’t sound like there’s much you’d count as understanding Steve’s position, short of agreeing with it!

    Take your list, and extend it to every single task for which humans have ever been judged on their performance—or at least, every single intellectual task. Let the list be thousands of pages long if it has to be. Now imagine a future where machines outperform humans on every single one of those tasks. Imagine that, even when a new task gets invented, the machines can quickly train that task and outperform humans on it as well, just like AlphaZero does today in the domain of games.

    Now, I don’t give a rat’s ass if you use words like “demigod” or “superintelligent” or “AGI” to describe the result. We can ban all of those words. Call it “zafflegak.” Will you concede that zafflegak, if we got it, would be one of the most momentous developments in human history, completely obviating any remaining need for human intellectual effort?

    If so, then would you agree that, despite numerous attempts to have the empirical questions “thrown out of court on a-priori philosophical grounds,” at the end of the day these are empirical questions: namely, whether zafflegak can be built, how long it will take, and how far the current successes of ML have taken the world toward the goal?

  100. Scott Says:

    James Cross #94:

      Isn’t that answer obviously “no” because there are no firings of actual (with ions, membranes, neurotransmitters, DNA etc) neurons in AI.

    It seems to me that either

    (1) you believe in magic, in a “ghost in a machine”—and then the burden falls on you to articulate the nature of the ghost and how it interacts with the physical world, or else

    (2) neurons, membranes, and neurotransmitters are just another substrate for computation, and cannot be privileged over other substrates like silicon if we hold behavior constant.

    Many people write as if there’s obviously some third option, but in 30 years of reading about and discussing these issues, I still haven’t managed to understand what the third option consists of and why it isn’t just a foot-stomping rationalization for meat chauvinism.

    Incidentally, in the abortion debate, the animal rights debate, and the AI debate alike, my overriding concern is the capacity for intelligent behavior. You don’t have to like it, but I don’t think you can accuse me of any inconsistency on this count.

  101. abbott_of_nalanda Says:

    “Square circle,” by contrast, is not even coherent

    How about the unit circle in R^2 with the L_1 metric? 🙂

    Though this was offered as a bit of math humour, I do think our notion of ‘coherent’, like so many other notions, is parametrised by time and by what we are currently willing to admit.

    It may well be possible that we wouldn’t recognise superintelligence if we saw it (it may be indistinguishable from nature) till such a time as our conceptual apparatus catches up, as it were. Then we may even say the superintelligence has made _itself_ manifest. Stanislaw Lem has much to say on this theme.
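    [Editorial note: for anyone who wants the joke spelled out, a few lines suffice. This is a toy sketch, not anything from the thread.]

```python
# Under the L1 (taxicab) metric, the "unit circle"
# {(x, y) : |x| + |y| = 1} is literally a square with vertices
# at (1, 0), (0, 1), (-1, 0), (0, -1).
def l1_norm(x, y):
    return abs(x) + abs(y)

vertices = [(1, 0), (0, 1), (-1, 0), (0, -1)]
assert all(l1_norm(x, y) == 1 for x, y in vertices)

# Points along the edges joining adjacent vertices also have
# L1 norm 1, tracing out the square's four sides.
assert l1_norm(0.5, 0.5) == 1.0
assert l1_norm(-0.25, 0.75) == 1.0
```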

  102. Scott Says:

    abbott_of_nalanda #98: I already made the joke about 1-norm in the post! I wonder why you and others seem to have missed it?

  103. James Cross Says:

    JimV #95

    This isn’t about “magic” for me. Nor is it about whether brains and consciousness can be explained with physics. And it also isn’t about whether it will be possible to emulate all human outputs from inputs with computers.

    It is a question about what kind of computing a brain does. A human brain runs on glucose with 20 watts of power and it is orders of magnitude slower than current computers. I would say it is highly likely the brain is not doing Turing-style computing because that doesn’t scale for complex and novel problems. We can always put another generator on the grid for AI but a living organism doesn’t have that luxury.

  104. James Cross Says:

    Scott #97

    I answered this some in a previous comment to JimV which, at the moment, is in moderation.

    Neurons, membranes, and neurotransmitters are just another substrate for computation, but what kind of computation does it do? I would say non-classical and non-deterministic, probably based on a physical representational model generated from sensory-neuron inputs and learning, with its primary (only?) output being motor action controlling ligaments, tendons, and muscles.

    So we have living material at the input end and at the output end. And the primary goal wasn’t to figure out the Theory of Relativity but to find something to eat.

    Could it be done on another substrate? Possibly but I can’t see silicon chips of current design doing it.

  105. HasH Says:

    Lorraine Ford Says:
    Comment #88

    AI worshippers are down on bended knees, transfixed and adoring something that people devised, religiously believing that these AIs have magical power. No different to a primitive person worshiping an idol.

    Zafflegak deniers… infidels!

  106. Scott Says:

    James Cross #101: But if the Physical Church-Turing Thesis is true, then Turing computation is the only kind of computation in our universe. It can run on different substrates (silicon, gallium-arsenide, meat…), but anyone who thinks that makes the essential difference has the burden of explaining why it does, and which Turing-equivalent substrates do or don’t suffice for intelligence or sentience.

    I feel like a lot of people pay lip service to the Church-Turing Thesis, who might not have grappled with the breathtaking generality of what it actually says!

    It’s also noteworthy that, in our exchanges, Steven Pinker never once challenged the universality of Turing computation—and in How the Mind Works, he explicitly endorsed a computational understanding of the mind. His objection appears to be, not that all the computations of the brain couldn’t be reproduced in silicon, but just that (in his view) it would take an impossibly long time to figure out how, and even if it were possible, people wouldn’t want to.

  107. Brian J Says:

    Scott #96 I’m sorry if I caused offense by saying you either didn’t understand something or didn’t take it seriously. I should have just said you’re not addressing it.

    There’s a very direct way you could address the point without agreeing to it: Make a case that there really is a common thread to the items on the list, besides humans doing them with their brains, and that pulling on this thread leads to something fundamentally more than replacing humans and getting better results (and all the practical consequences of that, which are large).

    In other words, where’s the evidence that the list of things humans do with their brains is more than just a list of things we’ve found useful or enjoyable to do? Why would a solution to the problem of “outdoing humans at everything humans do” generalize to something substantially different?

    I’ll go further than you ask and agree that it would be momentous and obviate human effort just to check off every item in the list, even if we don’t get them into one machine.

    I’ll also agree that if you have someone who “knows it when they see it,” you can turn anything into an empirical question by asking for their verdict. But that doesn’t mean you’re on the right track. There are concepts people say that about which don’t hold up when you dig into it.

    The reason all this matters (and isn’t just philosophical) is we’ll have a much harder time checking everything off the list if we have a wrong concept of what’s involved.

  108. JimV Says:

    Dr. Scott: “Will you concede that zafflegak, if we got it, would be one of the most momentous developments in human history, completely obviating any remaining need for human intellectual effort?”

    My unsolicited opinion is yes and no. Momentous and vastly consequential yes. (Lorraine would have to find something else to scold us about.) Completely obviating, no. A) there will still be situations individual humans find themselves in where they need to think for themselves; B) Humans may still be the more energy-efficient resource of intellect for many secondary tasks, such as managing small experiments (just as the world still needs engineers despite all the great scientists); C) It will still be fun to challenge yourself at games that require thought versus others. (See Iain Banks’ great novel, “The Player of Games”, set in a universe where zafflegak exists.)

    I rely herein on “need” being in a relative and subjective context; after all, if you take the most objective context, is the human race necessary now? I must grant however that some of the thrill of making important discoveries will be lost.

    I once made a New Year’s resolution to find a proof (by me) of Fermat’s two-squares theorem (every prime of the form 4N+1 is the sum of two squares), although I knew Euler had done it long ago. It took me until November of that year, thinking about it almost every day. From my point of view, zafflegak already exists and has existed, in the form of Fermat, Euler, Aaronson, etc. You get used to it.

  109. JimV Says:

    James Cross, you seem to be ignoring the fact that neural networks seem to be emulating neurons quite well. For me, AlphaGo inventing new strategies in a game that has been studied for about 2500 years was a proof of principle. Or at least a very strong and noteworthy indication. Again, we know that a nematode (C. elegans) has a few hundred neurons and can navigate a maze; and more recently that a set of neuron-emulating circuits of similar size can as well. From there to 80 billion neurons is an illustration that quantity becomes quality, when you have enough of it.

    I too once thought a neural network equivalent to 80 billion neurons was a long, long way off, but experts in supercomputers tell me it is not. Provided that is where we put our resources.

    Daily, I see similar ways that my own brain seems to work. I need, and am getting, new glasses to read small computer type. Yesterday I saw an “89” which on close-up examination became “55”. Like a neural network, my brain shows me things with complete confidence based on the few characteristics it can detect, when it has incomplete information.

  110. James Cross Says:

    Scott #103

    If you’re correct about the Church-Turing Thesis, then I would say that either the brain is not computing or the thesis is wrong.

    Fundamentally, my argument is that the brain, if it is computing, is like hardware without software. Or maybe a better way to put it is that the hardware is the software. The “instructions” are in the physical, spatiotemporal relationships of neurons and their firings.

    “burden of explaining why it does, and which Turing-equivalent substrates do or don’t suffice for intelligence or sentience”

    Since nobody has yet claimed to generate sentience on a Turing-equivalent substrate, I’m unclear why the burden is on somebody who doubts it can be done.

    Talk about a ghost in the machine. You seem to think sentience itself is a sort of ghost that can move about from machine to machine.

  111. Scott Says:

    James Cross #107: If the human brain violates the Physical Church-Turing Thesis, then you should be able to give me an example of a task that the brain can do that can’t even be simulated by a Turing machine. What is that task? Do you believe, like Penrose, that humans can solve literally uncomputable problems?

    I will die on the following hill: that once you understand the universality of computation, and how a biological neural network is just one particular way to organize a computation, the obvious, default, conservative thesis is then that any physical system anywhere in the universe that did the same computations as the brain would share whatever interesting properties the brain has: intelligent behavior (almost by definition), but presumably also sentience.

    This thesis is not self-evidently true. Maybe there’s a supernatural halo or soul-stuff that descends only on brains and not on functionally identical silicon chips. Maybe amplified quantum effects are important. Maybe the apparent uncopyability of an exact brain state is key, a possibility I explored in my Ghost in the Quantum Turing Machine essay.

    But I think the burden is on anyone who believes such things, to give some idea of the conditions under which the magical halo descends. The burden is not on the believers in the obvious, default, conservative thesis, the thesis that one way to implement a computation is just as good as another way. No one gets to treat that thesis as obviously false.

    If you don’t agree, I fear our worldviews are sufficiently different that we may have reached the limits of where discussion can take us.

  112. Ilio Says:

    Brian J #204, it’s a good reason, but there is a better one: to help interpret empirical evidence. The main examples I’m thinking of are: the scaling of animal working memory with the number of neurons, the respective intelligence of social and non-social bees, the permanence of our basic emotional/RL structures from zebrafish to AlphaZero, and the absence of evidence that big companies behave superintelligently. I must leave now, but if you’re interested in discussing further, just ping me on LW.

  113. Lorraine Ford Says:

    Scott #91:
    I have not engaged in “foot stomping”. For example, I repeatedly argued that Vladimir Putin could not be held more responsible for the war in Ukraine than a tennis ball could be held responsible for hitting someone, if the world was such that every single aspect of every single outcome was determined by laws of nature. I argued that in such a system, where every single aspect of every single outcome was determined by laws of nature, Putin and tennis balls are just the temporary superficial appearances that such a system takes. That is not “foot stomping”; that is logic.

    Where are the extraterrestrials? Has it come to this, that one can fabricate the existence of entities, and then argue from the position of what the fabricated entity might think? It is not necessary to argue from the position of fabricated entities, and the fabricated scary idea that these fabricated entities might exterminate human beings on the spot. It is only necessary to look at the logic of the real world that we inhabit right now, the same sort of logic that these extraterrestrials would presumably use.

    You have started by assuming that automatons (AIs) are indeed like human brains, and then argue from that position. That is the wrong position to take. It remains to be seen that automatons (AIs) are indeed like human brains. And in fact, the evidence is against it. Because we are talking about automatons (AIs) that were cleverly devised and created by human beings, just like the graven images were created by human beings.

  114. Darian Says:

    James Cross #100 #101
    The brain is extremely slow, but it has over ten billion neurons in the cortex, each with thousands of connections. Moravec estimated 100 trillion instructions per second were needed to match its performance, iirc. But more importantly, the brain has massive amounts of memory, probably over the equivalent of ~100 TB, making up its connections.
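
    The arithmetic behind estimates like these is easy to sketch; the bytes-per-synapse figure below is my own illustrative assumption, not something from the comment:

```python
# Back-of-envelope estimate of the brain's memory capacity, using the
# figures above (~10 billion cortical neurons, thousands of connections
# each). The 4 bytes per synapse is an illustrative assumption.
neurons = 10e9               # ~10 billion cortical neurons
synapses_per_neuron = 10e3   # "thousands of connections" each
bytes_per_synapse = 4        # assumed storage per connection weight

total_synapses = neurons * synapses_per_neuron    # 1e14 connections
total_bytes = total_synapses * bytes_per_synapse  # 4e14 bytes
print(f"~{total_bytes / 1e12:.0f} TB")            # same order of magnitude as ~100 TB
```

    Varying the assumed bytes per synapse moves the answer by a small constant factor, which is why such estimates only claim an order of magnitude.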

    I believe the reason the brain learns in a few steps, given that it runs at under ~100 Hz with a very low percentage of neurons active at any one time, is likely that it uses more efficient algorithms than those currently used to train artificial neural networks. The brain can adapt to new limbs, new taste receptors, new olfactory receptors or new photoreceptors basically within a few years if it is exposed to these during the plastic development phase of youth.

    There are two kinds of computing: digital and analog. And true analog, infinite-precision analog, appears physically impossible; only limited-precision analog is possible. All limited-precision analog computation can be replicated exactly by digital computation.
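
    A toy illustration of that last claim: once an analog quantity has only finite precision (a smallest distinguishable difference), an integer code captures it exactly. The 0.001 precision here is an arbitrary assumption for the sketch:

```python
# Toy sketch: limited-precision "analog" values map losslessly to integers.
# precision = the smallest distinguishable difference (assumed, arbitrary).
precision = 0.001

def to_digital(analog_value):
    """Round to the nearest representable level; store as an integer code."""
    return round(analog_value / precision)

def to_analog(code):
    """Recover the analog level from its integer code."""
    return code * precision

x = 0.73519                 # an "analog" reading
code = to_digital(x)        # integer code for the nearest level
assert abs(to_analog(code) - x) <= precision / 2
```

    The point is that only finitely many levels are distinguishable, so the digital encoding loses nothing the analog system could actually use.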

    As for true randomness, that likely does not exist. An acausal, mechanism-free change to the physical universe is akin to magic. It is likely, in my honest opinion, that all randomness is pseudorandomness, or in other words that in the end it is also deterministic at heart.

    Brian J #104

    Humans did not evolve to do all the millions of possible tasks they do. Humans are even able to generate arbitrary algorithms. Humans are able to generate theorems and equations that allow them to far better understand the limits not just of this physical world but of alternate physical worlds with different properties. So far as we know, it seems extremely likely that civilization, given enough time, will be able to harness evolution and outdo it in the design of organisms. Human civilization is like a superorganism, able to far better adapt to any environment that physically exists, compared to naturally evolved animals.

    Why does this matter? For example, it is likely that given enough time civilization will be able to design things such as nanotech machinery that self-maintains and self-replicates, or at least can do controlled manufacturing of additional machinery from cheap raw materials. Normally it is estimated such technology may take hundreds of years to develop. A speed superintelligence, even one limited to peak human ability but running at a higher clock rate, will be able to use simulations to bring such technology forth far faster. An AI collective thinking a thousand times faster makes millennia of thought and exchange of ideas in the span of a few years.

    JimV #105 #106

    After superintelligence exists, it is likely to be able to develop true nanotech. True nanotech means artificial machinery beyond the limits of evolution, likely equaling or surpassing humans in energy efficiency at all tasks. Humans may need to think in their personal lives, but in terms of corporate need their thought will likely be unnecessary past the dawn of superintelligence. Of course as you say humans will likely still be able to exercise their minds against other humans unless a horrible scenario develops.

    While humans have ~80 billion neurons, most of that is in the cerebellum, and I’ve heard its function is simpler to emulate than the cortex’s. Also, some humans born without a cerebellum have general intelligence, iirc, despite having some movement issues. The cortex together with a few underlying neural structures only has around 16 billion neurons. And even humans with half the cortex have general intelligence.

  115. Lorraine Ford Says:

    JimV #95, “My programs have all been feral also, of course”:
    You are not, and have never been, a professional computer programmer, just like I have never been a professional engineer. I guess my attempts at building a bridge would be somewhat like your attempts at writing a computer program. I spent more than 20 years as a computer programmer and analyst, after having studied Information Science at university. I guess that you don’t really have any inside knowledge, or appreciation, of how computers and computer systems are made to work. Human beings make computers/ AIs work.

  116. JimV Says:

    “Fundamentally, my argument is the brain, if it is computing, is like hardware without software.”

    The brain/nervous system definitely has software, programmed into it by evolution. Some of it as instincts, but most importantly I think as the neural-network self-programming algorithm for learning skills, by trial and error.

    The brain receives an image from rods and cones in the eye. That image has been flipped upside-down by the eye’s single lens. The brain flips the processed image vertically, and fills in the blind spots where the retinal nerves are, based on the surrounding pixels. One expert has told me this process is hard-coded, but I saw another experiment (on a PBS show, Discovery or Nova) where a grad student had to wear goggles continuously for a few months, with lenses which applied another flip. For a while she saw things upside down, but later her brain adjusted and she saw right-side up again. After finally having the goggles removed, she saw things upside-down again for a while (around a month). So I think the flipping at least is learned by training which takes place after birth, just as baby elephants learn to control their trunks. Such training is how AlphaGo learned how to play Go expertly (with around half a million nodes among two neural networks).
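
    The “fills in the blind spots … based on the surrounding pixels” step can be sketched as simple neighbor averaging. This is a crude illustration of the idea, not a model of what the visual system actually does:

```python
# Crude sketch of blind-spot fill-in: unknown pixels (None) are replaced
# by the average of their known neighbors. Purely illustrative.
def fill_blind_spot(row):
    filled = list(row)
    for i, value in enumerate(row):
        if value is None:
            neighbors = [row[j] for j in (i - 1, i + 1)
                         if 0 <= j < len(row) and row[j] is not None]
            filled[i] = sum(neighbors) / len(neighbors)
    return filled

print(fill_blind_spot([10, 12, None, 14, 16]))  # [10, 12, 13.0, 14, 16]
```

    The interpolated pixel is reported with the same confidence as a measured one, which is the point of the “complete confidence from incomplete information” observation below.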

    We have no sense of what is going on while such training is taking place, because there are no nerves which monitor the firing of neurons. Hence the belief in magic and dualism.

    Neuroscience has found that many skills, such as the ability to recognize faces and other images, are removed when specific parts of the brain are injured, like removing subroutines from a complex program. Or nodes from a trained neural network. The fact that they are specific, consistent regions of the brain argues for some innate coded structure.

    “You seem to think sentience itself is a sort of ghost that can move about from machine to machine.”

    The way a neural network learns a skill by trial and error is an algorithm, not a dualistic ghost. Algorithms can be performed by pipe organs, mechanical gears, teams of people with abacuses, and electronic computers, and give the same results. The evolutionary algorithm created your brain. It can and has worked with other substrates besides DNA.

  117. SR Says:

    Scott #108: I’m curious what you then think about the hard problem of consciousness. Do you agree with what I think is Chalmers’ perspective — that there must be further facts beyond the laws of physics that dictate the possibility of the first-person experience of qualia? And that such facts will forever elude our understanding due to the theoretical possibility of p-zombies?

    If so, would it not be possible that artificial life had intelligence (which, at least, has a fully operationalizable definition via the Turing test, and so which I could understand as being fully entailed by the known laws of physics), but not sentience, due to those further facts failing in this case? Increasingly, it appears that large language models might possess the former, while I haven’t seen many people who work with these models seriously consider that they have the latter.

    The best counterargument I have seen as of yet to Chalmers’ perspective is by Eliezer Yudkowsky. I believe his argument was that if we posited that a world like ours but lacking consciousness existed, we would have to admit that generations of p-zombies furiously discussed and strongly defended their (non-existent) experience of consciousness to each other. And so somehow the laws of physics entailed future discussion of qualia and consciousness without either concept *actually* existing. Which to Yudkowsky seems ridiculous, causing him to reject the p-zombie argument and (I believe) posit instead that our understanding of physics is incomplete.

    I found Chalmers’ argument convincing when I read it, and Yudkowsky’s rebuttal equally convincing when I later encountered it… And so now, I’m just thoroughly confused. It would be great to hear your thoughts, if you have the time!

  118. Scott Says:

    SR #114: I’m as confused as you are about the Hard Problem, Chalmers, and p-zombies! So it’s crucial that I don’t think anything I said depends on the resolutions of those mysteries of mysteries.

    I simply say: whatever the truth about consciousness and its interaction with the physical world, so long as we’re ascribing consciousness to other humans on the basis of their external behavior, it seems like a fundamental requirement of fairness that we also ascribe it (at the least) to any other entities in the universe with sufficiently human-like behavior … unless and until some compelling reason is found to override that fairness requirement.

    Now for the kicker: the reason to override can’t have the slightest whiff of question-begging or post-hoc rationalization. It can’t sound like a reason that people hundreds of years ago might have invented to deny the consciousness of slaves, or serfs, or women, or people with differently-colored skin, despite their “remarkable simulations” of conscious behavior. It can’t make a move like: “our own consciousness as humans obviously isn’t in question, because we’re us and we know we have it.” Whatever dividing line gets proposed, it had damned better be abstract, general, and principled.

    So, that’s the source of my confidence here. It’s like, I have no idea how to prove P≠NP, but I’m still pretty sure that none of the dozens of people who’ve sent me P≠NP proofs have reached the same galaxy as a solution. In the same way, I have no idea of the nature of consciousness, but I’m pretty sure that no one is going to resolve the moral dilemma above to my satisfaction, if I can see that they haven’t even reached the point of grappling with it.

  119. Ben Standeven Says:

    Scott #108:

    The main problem, then, is that it isn’t obvious that brains and nerves are a “way to organize a computation”. Of course, any computation they do can presumably be done by a different type of computer instead. And any actions they take can presumably be simulated by computers. But it isn’t obvious that computation is their purpose; it seems to me better to describe them as control systems.

  120. JimV Says:

    “You are not, and have never been, a professional computer programmer.”–L.Ford, in apparently a mean mood

    Your dualistic psychic powers have failed you. I started out as a Fortran computer programmer at GE fresh out of college (BS in physics and math with a couple programming courses) and learned to code in GMAP (GE mainframe machine language, before our computer business was sold to Honeywell) there, as well as Real-Time Process Control Fortran for automated experiments in the GE Aero Development Lab. I adapted the MADS (Machine Automated Drafting System from NASA) program for use in making GE design drawings, among many other programming projects there in my first 3-1/2 years. My boss, Doris Clark (a former “Hidden Figures” human computer) recommended me for the GE Advanced Engineering Course based on my math and physics abilities, which led to a Masters in Mechanical Engineering from RPI, and they told me I was now an engineer and had to go to work in a Design Engineering office. Which I did, somewhat reluctantly, but the first thing I did there was write a program to automate the design of lifting devices for turbine casings, and hardly a week went by during my design career when I didn’t write a program to do some design calculation. Engineering was transitioning from slide rules to punched cards to unix work stations during my career, so a lot of programming was essential to the job. Probably my best work was my Truncated Mode program for eigenvalue analysis of the effect of tie-wires on turbine vane vibration modes. They put me on the GE Y2K task force because of all the programs I had written (none of which had any Y2K issues). In my spare time I bought an Apple II and wrote game programs on it in its 6502 machine language (I could play music and English words on the speaker by clicking its diaphragm at various frequencies), most of which I gave away but two of which I sold. 
I also wrote and sold a Tektronix screen emulator for the Apple II (receiving Tektronix commands over a modem and plotting them on a monitor; I had to buffer the inputs as they arrived too fast for the Apple II; my real-time programming experience helped). Then Jobs screwed me by making the Macintosh totally incompatible with the Apple II and demanding something like $15,000 for a programming development license on it. After 35 years I quit GE because Welch was wrecking it and my health (don’t get me started), posted my resume online after a year getting healthy again and got a job offer as a programmer at $40/hr plus per diem in 2004. However I had just previously gotten a job offer from Rolls Royce Energy Services as an engineer and had already accepted it. I wrote several design programs at that job also. After retiring I wrote over 30 computer games for Windows systems (W-XP thru W7), with some Intel assembly code, which people can download for free. For the graphics I use Gimp and have written 227 (at last count) Scheme scripts to automate sprite designs. I am getting so my memory and eyesight are not good enough for major projects anymore, though.

    Not that I’m a major figure in programming, but why on earth would you make a belittling claim like that without any knowledge? Nobody I’ve worked with or who has used any of my programs has ever questioned my programming skills, or my knowledge of how computer circuits work; and I did work as, and was paid as, a professional computer programmer at GE, and could have made a career of it.

    (Under my rules, neither your comment nor this reply would have been posted. I recommend that both be scrapped. Off-topic, ad hominem, etc. Kept me up until 1:27 AM writing a reply which nobody wants to read–which is my own fault, of course.)

  121. red75prime Says:

    SR #114:

    “Do you agree with what I think is Chalmers’ perspective — that there must be further facts beyond the laws of physics that dictate the possibility of the first-person experience of qualia?”

    I’ve thought quite a bit about his arguments. I’ll write a short summary of my conclusions. In my experience, discussions of this topic tend to be lengthy and don’t produce conclusive results, so I’m not engaging in the discussion lest it unfold.

    Chalmers assumes external observability of qualia. That is, his thought experiments hinge on the assumption that you can peer inside another’s head and see what’s going on in there (or not going on, in the case of a p-zombie). That is, he starts by assigning a property of external observability (and comparability) to qualia (let’s call such qualia e-qualia).

    His argument that the existence of e-qualia entails the necessity of additional psychophysical laws seems sound. What is not obvious is whether our qualia are e-qualia.

    The assumption seems necessary at first glance. How can we study something that cannot be observed externally? But if God itself cannot tell the difference between you and p-zombie you, then there’s no difference: the p-zombie is as conscious as you are. So you have to trust a system if it behaves as if it is conscious of something, unless you have compelling evidence that it lies (say, a robot is rolling on the floor clutching its damaged manipulator, but monitoring shows that the robot has shut down part of its nociception circuits and its motions are governed by an isolated computation, which can be shut down at any moment by the robot’s executive processes).

    Yes, denying external observability of qualia looks like a defeatist position, but the world may just be that way. The question of why qualia exist instead of not existing becomes even harder to answer, but at least this position prevents hasty conclusions.

  122. James Cross Says:

    Scott, JimV, Darian

    I’m probably out of my league debating Church-Turing. From what I can tell, there are different versions and a wide range of opinion about what it means, or whether it means anything, for how minds work.

    At any rate, the “simulation” word arises again. As I’ve acknowledged, mind/consciousness/intelligence can all be simulated with Turing computation. That fact doesn’t constitute an explanation of how brains work.

    Much the same with neural nets. Neural nets, by construction, are going to be able to simulate neurons or pretty much anything else that displays any regularity. That doesn’t mean they provide an explanation of how neurons and brains work. Neural nets are being used in almost every field from biology to archaeology. There’s at least one paper that claims the world itself is a neural net. Simulation is what they are designed to do, so of course they will simulate neurons, brains, you name it.

    I myself have pointed out the big problem with Turing computation and brains: neurons are too slow to account for the speed of the brain’s responses through serial computation. The old “massively parallel” fallback doesn’t work without additional overhead to control and manage threads of execution. There are diminishing returns as more and more parallel threads are introduced.
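
    The diminishing-returns point is quantified by Amdahl’s law: if some fraction of the work is inherently serial, the total speedup is capped no matter how many threads are added. The 5% serial fraction below is an arbitrary assumption for illustration:

```python
# Amdahl's law: with serial fraction s, the speedup from n threads is
# 1 / (s + (1 - s) / n), capped at 1/s as n grows. s = 0.05 is illustrative.
def speedup(n_threads, serial_fraction=0.05):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_threads)

for n in (1, 10, 100, 1000):
    print(n, round(speedup(n), 1))
# The speedup approaches 1/0.05 = 20x and then flattens: diminishing returns.
```

    Whether this cap is the right model for neural tissue is exactly what is in dispute here; the sketch only formalizes the overhead argument for conventional threaded computation.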

    I think my view is best represented in this paper.

    Arsiwalla, X.D., Signorelli, C.M., Puigbo, JY., Freire, I.T., Verschure, P.F.M.J. (2018). Are Brains Computers, Emulators or Simulators?. In: , et al. Biomimetic and Biohybrid Systems. Living Machines 2018. Lecture Notes in Computer Science(), vol 10928. Springer, Cham. https://doi.org/10.1007/978-3-319-95972-6_3

    Let me quote from it:

    “Machines implementing non-classical logic might be better suited for simulation rather than computation (a la Turing). It is thus reasonable to pit simulation as an alternative to computation and ask whether the brain, rather than computing, is simulating a model of the world in order to make predictions and guide behavior. If so, this suggests a hardware supporting dynamics more akin to a quantum many-body field theory”.

    So a Turing machine can simulate a brain and mind but what it is likely simulating is itself a simulation.

  123. HasH Says:

    @Lorraine Ford

    I hear you sister 🙂 Not all bullies live in ghettos or dark corners of the city. After internet people who can’t take a risk fight in physical reality became “keyboard warrior”. Like lots of people who can’t kill a bug watch/play war/crime games, movies for feel like them. I don’t care their education or social title. I can smell them everywhere (I am from street, I’m a veteran) you are one of us (I was). But same time i can smell knowledge when intellectuals talk.

    Here what really smart (for me they are geniuses) people talking about something (which I can understand %) and other smart people defend or argue opposite things. I am not suffering grass they are not elephant. But they do drop “golden nuggets” civillians can understand. This expand my universe in my brain. You can search my comments, all look same. Bcs even with my primitive English and barely graduated from political science education, I want them know how appreciate their sharing. Instead take high volume salary in corporates and spend it like celebrities, THEY ARE teaching something and same time dealing with US.

    Bullies are not cool sister. I know I know you feel like you are but we are not. What you did to JimV (force him) interrupt them talk/ask something we can learn new things.

    Like SR #114
    he/she asked perfectly what i am thinking (not exactly) in my mother language but impossible to ask like that in English.

    or like when says
    Darian #111
    “As for True randomness, that likely does not exist. It is likely in my honest opinion that all randomness is pseudorandomness”
    I want to ask “if this is a fact” so i can change my “freewill” believing.
    But if you walk around with stick, brilliant people may choose spend their time in Disney cruiese instead teach us something.

    I hope you find peace in Zafflegak way. I did it 30yrs ago. Trust me, I have less amount of friends now but their qualities are very high.

  124. Lorraine Ford Says:

    JimV #117:
    Then you will know that it is human beings that make computers and computer systems work. You will know that computer systems aren’t doing anything that is not due to the original software and the inputs to the system: there is no magic or funny business or emerging consciousness going on. Presumably you will also be aware that symbols, e.g. the voltages inside computers, have no meaning except the meanings given to them by people.

  125. manorba Says:

    Ok, but how can you have a simulation without computation?

  126. manorba Says:

    James Cross #119:

    “Neurons are too slow to account for the speed of response of brains through serial computation”

    “this suggests a hardware supporting dynamics more akin to a quantum many-body field theory”

    I was under the impression that the “Quantum Brain” hypothesis (like Penrose’s microtubules) has already been widely discarded, mainly because “neurons are too slow” to show any non-classical behaviour… am I wrong?

  127. fred Says:

    Scott #108

    “This thesis is not self-evidently true. Maybe there’s a supernatural halo or soul-stuff that descends only on brains and not on functionally identical silicon chips. Maybe amplified quantum effects are important.”

    Maybe we’re wrong assuming that consciousness has to do with cognition and intelligence (i.e. the creation and management of thoughts).
    Maybe consciousness isn’t “descending” on the brain, but brains are built around consciousness to “tame” it, and organisms with higher levels of cognition, like humans, are actually less “conscious” than simpler organisms.
    If consciousness fundamentally revolves around the basic qualia of fear/pain vs joy/pleasure, simpler organisms/brains could be “suffering” way more than we are because they have no way to tune it out like we can, through higher level cognition and other coping/distraction mechanisms.
    So, from an evolutionary perspective, maybe nature has been trying to get rid of consciousness rather than promoting it, and a super intelligent silicon based AI would be the perfect realization of that goal.

    All this to say that obviously it all depends on what we mean by “consciousness”.
    We tend to think that “consciousness” is a good thing, but we can’t even explain clearly why it’s a good thing, or why it matters because, from a behavioral perspective, it’s not bringing anything that can’t be achieved through cognitive “algorithms” and game theory evolution (e.g. cooperation is good, etc).

  128. Raoul Ohio Says:

    AI gone wild update:

    The scene: Chess playing robot Vs. a human boy at the Moscow Chess Open, week of 22 07 18.

    Superintelligence event: (1) Boy annoys robot by moving fast. (2) Robot breaks boy’s finger. “The robot did not like such a rush — he grabbed the boy’s index finger and squeezed it hard. Bystanders rushed to help and pulled out the finger of the young player, but the fracture could not be avoided,” the Baza Telegram channel said in its post. (from CNN)

    Evidence of AI sentience: The learning model used by robot reportedly did not include finger breaking.

  129. Lorraine Ford Says:

    Computers, including AIs, are not like people’s brains; computers are not intelligent or creative, whereas living things ARE intelligent and creative.

    But one doesn’t have to know what brains are like to say the above, one merely needs to note that: 1) long ago, people started creating, and assigning meanings to, symbols; and 2) more recently, people created computers i.e. deterministic symbol processors. These symbols and computers are not natural outcomes of the laws of nature: with symbols and computers, people literally created something new.

    Of course, a lot of people don’t believe in true creativity: they believe in a block universe where every aspect of every outcome is due to laws of nature, i.e. every outcome is logically identical to every other outcome, including outcomes labelled as “creativity” or “intelligence”. In other words, a lot of people don’t believe in true creativity or intelligence: to these people, “creativity” and “intelligence” are just labels for superficial appearances.

  130. fred Says:

    Raoul #125

    In its defense (lol), a boy’s chubby little finger does look a lot like a pawn.

    Do you know how often humans break innocent machines in a fit of rage?

  131. manorba Says:

    fred #127:

    “Do you know how often humans break innocent machines in a fit of rage?”

    They are not innocent.

  132. Darian Says:

    HasH #120

    It isn’t a fact, but neither randomness nor determinism allow for free will. If a random physical event causes you to move an arm or say a word, it is not free will.

    Lorraine Ford #121

    Symbols would have meaning to any intelligent entity out there not just humans. If humans find an artifact with information within from an alien civilization, it is conceivable they can decode it and potentially find things such as videos or sound recordings.

    fred #124

    Without consciousness there is no individual, there is no one, but machinery. In order for there to be someone, there has to be consciousness, someone that feels what it’s like to be themselves.

    When you take a car you can drive it or you can destroy it as it appears to be unconscious machinery. But you shouldn’t be able to enslave or destroy conscious entities.

    Lorraine Ford #126

    There is likely an algorithmic basis for human creativity. Human ancestors, despite having brains nearly as large, seemed to have very limited creativity. But after some point, potentially some mutation that changed some aspect of brain function, humans seemingly started to show vast creativity in tool making and the development of other artifacts and rituals.

    I agree with you that if you think of the nature of computation it seems difficult to fathom that it could do so much. But there are two aspects to computation, the mechanical aspect or algorithmic aspect, and the informational aspect. Computation manipulates information, and information itself is quite powerful. You can have movies, music, books stored and extracted using computation. One day we may even digitally store and transmit qualia itself as people like Ray Kurzweil have imagined with the idea of experience beamers.

    When you store information, that information exists independent of the intent of those who stored it, and the meaning can be extracted by any intelligent entity with enough technical know how. For example the information stored in DNA by evolution exists independent of it arising via natural processes. The shape of faces, hands, feet, etc exists digitally in DNA. The meaning of information I believe transcends any individual or species, and can likely be grasped by any arbitrary intelligence with the right abilities.

  133. SR Says:

    Scott #115: Thank you for the reply! I largely agree with you. I don’t think humans are in principle unique in any way. And I agree that in the presence of doubt, we should be as charitable as possible and treat artificial lifeforms as we would other humans. I do see how arguments of non-sentience could in the future be twisted into justifications for restricting the rights of sentient machines, something I would strongly disagree with.

    That said, like fred #124, I’m not totally convinced that intelligence and sentience necessarily have that much to do with each other. I realize that if we give up all claims to a link between the two, we cease to be able to study sentience scientifically, and there’s nothing stopping us from drifting to solipsism. This is why I think Chalmers’ postulation of psychophysical laws helps: if such laws exist, it stands to reason that beings similar to oneself are more likely to have the same qualitative types of experiences as oneself. From biology, all humans are extremely similar, genetically, and so it stands to reason that all humans have similar types and degrees of qualia. The lesser similarity of humans and animals makes one think that animals might have qualia that are similar in some ways and different in others.

    When thinking about artificial life, I think the analogy breaks down because such organisms will not have necessarily developed in a way remotely similar to that of any natural organisms on earth. If we managed to precisely figure out how abiogenesis happened and somehow ran a physically accelerated version of the Earth’s history until we got “artificial humans”, I would be fully willing to admit they were sentient. If we instead ran full in-silico simulations of Earth’s history and again got to human simulations, I would certainly admit that those simulations were intelligent. I (I think, differing from you) would be hesitant to say conclusively that they were sentient as I’m not sure whether computational universality extends to ‘sentience universality’. But I would be willing to treat them as sentient by extending the principle of charity above.

    The one place where I’m most hesitant, though, is in predicting what will happen by scaling up current ML systems. Consider a hypothetical GPT-infinity, gotten by scaling up GPT-3 to enormous proportions and training it fully on every bit of data that we could even think of collecting. Given recent progress on e.g. the MATH dataset (https://bounded-regret.ghost.io/ai-forecasting-one-year-in/) I would not at all be surprised if GPT-infinity could solve all of the Millennium Prize Problems, create a consistent theory of quantum gravity, etc., and so have immense, immense intelligence by any human standard. However, would such a system be sentient? My understanding is that GPT models are static functions, and that we generate output from them by feeding output generated so far back into the model as input to generate the next tokens. I find it difficult to fathom how a mathematical function could have the same kind of sentience as we do.
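    The generation loop described above (a fixed function repeatedly fed its own output) can be sketched in a few lines. The toy `next_token` below is a made-up stand-in for the real network, purely for illustration; only the shape of the loop matters:

```python
# Toy sketch of autoregressive generation: the "model" is a static
# function from a token history to the next token, and text is
# produced by feeding its own output back in as input.
# `next_token` is a trivial stand-in, not a real language model.

def next_token(history):
    """Deterministic toy model: the next token depends only on the history."""
    if not history:
        return "the"
    # A trivial lookup standing in for billions of frozen weights.
    return {"the": "cat", "cat": "sat", "sat": "down"}.get(history[-1], "<eos>")

def generate(prompt, max_tokens=10):
    tokens = list(prompt)
    for _ in range(max_tokens):
        tok = next_token(tokens)
        if tok == "<eos>":
            break
        tokens.append(tok)  # output is fed back as input
    return tokens

print(generate(["the"]))  # the function itself never changes between steps
```

    Nothing in the function updates between calls, which is what makes it "static" in SR’s sense.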

    Maybe this is a failure of imagination. Certainly, I think some form of panpsychism or Tegmark’s mathematical universe hypothesis is plausible. But it’s not something I could easily wrap my head around. Perhaps a similar question would be: say that P = NP and that we possessed a correct and relatively-fast-in-practice algorithm for solving 3-SAT. You have described before that such an algorithm could conceivably automate away much of human creativity and mathematics, much as large language models seem poised to do. But would you think such an algorithm would have to be considered sentient by default?

    Thank you for entertaining my thoughts, and as always, thank you for the wonderful discussion topic.

    red75prime #118: Thank you for the response, and the food for thought! I will have to think about this idea of external observability some more. I’ll respect your wishes and refrain from starting a discussion about it.

    HasH #120: Glad to hear that others like you were thinking similarly 🙂

  134. Bill Benzon Says:

    James Cross, #119: Thanks so much for your citation, “Are Brains Computers, Emulators or Simulators?” (2018). I find it very helpful. Thus:

    This brings us to the question: what are the type of problems where generating a simulation is a more viable strategy than performing a detailed computation? And if so, what are the kind of simulators that might be relevant for consciousness? The answer to the first question has to do with the difference of say computing an explicit solution of a differential equation in order to determine the trajectory of a system in phase space versus mechanistically mimicking the given vector field of the equation within which an entity denoting the system is simply allowed to evolve thereby reconstructing its trajectory in phase space. The former involves explicit computational operations, whereas the latter simply mimics the dynamics of the system being simulated on a customized hardware. For complex problems involving a large number of variables and/or model uncertainty, the cost of inference by computation may scale very fast, whereas simulations generating outcomes of models or counterfactual models may be far more efficient. In fact, in control theory, the method of eigenvalue assignment is used precisely to implement the dynamics of a given system on a standardized hardware. […] if the brain is indeed tasked with estimating the dynamics of a complex world filled with uncertainties, including hidden psychological states of other agents (for a game-theoretic discussion on this see [1–4,9]), then in order to act and achieve its goals, relying on pure computational inference would arguably be extremely costly and slow, whereas implementing simulations of world models as described above, on its cellular and molecular hardware would be a more viable alternative. These simulation engines are customized during the process of learning and development to acquire models of the world.
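    To make the quoted contrast concrete: for a simple system like dx/dt = −k·x, "computation" evaluates the explicit solution, while "simulation" never solves the equation at all; it just lets a state evolve under the vector field, step by step. A minimal sketch (the equation and constants are my own illustrative choices):

```python
# Toy illustration of the quoted distinction for dx/dt = -k*x:
# "computation" evaluates the explicit solution x(t) = x0 * exp(-k*t),
# while "simulation" mimics the vector field, nudging a state forward.
import math

k, x0 = 0.5, 1.0

def computed(t):
    # Explicit closed-form solution of the differential equation.
    return x0 * math.exp(-k * t)

def simulated(t, dt=1e-4):
    # Let an entity evolve under the vector field: dx = -k * x * dt per step.
    x = x0
    for _ in range(int(t / dt)):
        x += -k * x * dt
    return x

print(abs(computed(2.0) - simulated(2.0)) < 1e-3)  # both recover the trajectory
```

    Both routes reconstruct the same trajectory; they differ in *how* the answer is obtained, which is the point the quoted passage is making about cost and hardware.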

    I have long been of the view that, at least at the sensorimotor level, the brain constructs quasi-analog models of the world (e.g. simulations) and uses them in tracking the sensory field and in generating motor actions for operating in the world.

    These models are also called on in much of what is called common-sense knowledge, which proved to be very problematic for symbolic computation back in the world of GOFAI models in AI from the beginning up into the 1980s and which is proving somewhat problematic for current LLMs. In any given situation one simply calls up the necessary simulations and then generates whatever verbal commentary seems necessary or useful. GOFAI investigators were faced with the task of hand-coding a seemingly endless collection of propositions about common sense matters while LLMs are limited by the fact that they only have access to text, not the underlying simulations on which the text is based.

    I arrived at this view partially on the basis of an elegant book from 1973, Behavior: The Control of Perception, by the late William Powers. As the title indicates, he developed a model of human behavior from classical control theory.

  135. James Cross Says:

    manorba #123

    Read the article. It has nothing to do with quantum brain theory.

  136. fred Says:

    Darian

    “Without consciousness there is no individual, there is no one, but machinery. In order for there to be someone, there has to be consciousness, someone that feels what it’s like to be themselves.”

    Thanks for the clarification, because, according to over 2,500 years of Buddhist introspective practice (in India, China, Japan, Korea, Nepal, Thailand, Tibet), that sense of self was supposedly an illusion.
    But I now know they’ve been wrong all along! The fools!

  137. JimV Says:

    “Neurons are too slow to account for the speed of response of brains through serial computation. The old “massively parallel” fallback doesn’t work without additional overhead to control and manage threads of execution. There are diminishing returns as more and more parallel threads are introduced.”–James Cross

    That sounds reasonable given its premise, but the premise does not match my perceptions, pending further evidence. It takes a long time, some estimate 10,000 hours, for neurons to learn a complex skill. I know it took me years to learn to play guitar. As someone once remarked, ask the world’s greatest violin player to switch his bowing and fingering hands and see how long it takes him to play the Minute Waltz. Flipping a visual image took the grad student I mentioned a month or two each time. Einstein’s Zurich notebook (available online) shows he spent weeks trying different mathematical models for General Relativity.

    Listening to an extemporaneous speech, I hear lots of “uhs” and pauses as the speaker’s neurons grope for phrases. It often takes an hour or more to write and polish one of my interminable comments (and still they don’t get my points across, which seem so clear to me). Watching poker games on TV, many decisions seem to take a long time. Most fast brain-responses are the result of rehearsals and other training, it seems to me.

    As we know, certain reflexes are pre-programmed at the spinal cord level without using the brain’s neurons.

    As I understand it, all neurons and synapses work in complete parallel with internal processing and are not being managed serially within different processor threads as we do in computers. Those different systems have advantages and disadvantages but both can be Turing Complete and thus able to compute the same results. 100 lumberjacks can chop down 100 trees faster than a single lumberjack can, but both can get the job done. (And maybe the single lumberjack is Superman.)
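    JimV’s point that serial and parallel execution can compute the same results (differing in speed, not in what is computable) can be illustrated with a toy chunked sum; the thread pool here is just a stand-in for the hundred lumberjacks:

```python
# Toy illustration: a serial computation and a thread-parallel one
# produce the same answer; parallelism changes speed, not computability.
from concurrent.futures import ThreadPoolExecutor

data = list(range(1, 101))  # the "100 trees"

def serial_sum(xs):
    # One lumberjack: process every item in order.
    total = 0
    for x in xs:
        total += x
    return total

def parallel_sum(xs, workers=4):
    # Many lumberjacks: split the work into chunks and sum them concurrently.
    chunks = [xs[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(serial_sum, chunks))

print(serial_sum(data) == parallel_sum(data))  # True: same result either way
```

    The parallel version needs the extra overhead of splitting and recombining the work, which is the "management" cost mentioned in the quoted objection, but the result is identical.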

    A simulation is a computation, that is, composed of logic plus arithmetic. E.g., numerical simulations of galaxy formation. It can also mean putting soldiers through a training exercise but I don’t currently believe our brains create virtual entities and exercise them. Even if they could isolate groups of neurons and let them compete, it is not clear to me this works better than cooperation among all available neurons to analyse a problem. If it is, that can be done in computers also.

    There probably are more tricks we could learn from our brains besides the neural networks we use now, but nothing seems to me to stand in the way of using neural networks plus general computation to solve difficult problems, based on all the significant progress so far. (Except for the time and resources to assemble and train much larger neural network systems, which has seemed formidable to me, but super-computer experts say it will happen.)

  138. manorba Says:

    James Cross #132:

    i actually did.

  139. James Cross Says:

    JimV #113

    “The brain receives an image from rods and cones in the eye”

    Nope.

    Let me quote from We Know It When We See It, an excellent book on vision.

    “I have told you that the world you think you see is not the world that actually exists. It has been altered by your retina, fragmented into dozens of different signals for transmission to the brain. The retina parses the visual image into its most telling components and sends a separate stream of signals about each one.”

    An image isn’t sent from the eye to the brain. A bunch of fragments of information are sent to the brain, and the image is manufactured in the brain. Given the way the fragments are gathered – color streams, movement streams, edge-detection streams – I find it astonishing that we see anything with enough clarity to act in the world with confidence.

    “The way a neural network learns a skill by trial and error is an algorithm, not a dualistic ghost. Algorithms can be performed by pipe organs, mechanical gears, teams of people with abacuses, and electronic computers, and give the same results. The evolutionary algorithm created your brain. It can and has worked with other substrates besides DNA”.

    So, if we gather enough human “computers” with enough pens and paper and they do the right kinds of computations on the paper, then consciousness arises. Interesting thought. We should try it someday after Super AI gives us the computations, I guess.

  140. Scott Says:

    fred #133:

      Thanks for the clarification, because, according to over 2,500 years of Buddhist introspective practice (in India, China, Japan, Korea, Nepal, Thailand, Tibet), that sense of self was supposedly an illusion.
      But I now know they’ve been wrong all along! The fools!

    What if the sense of “the self being an illusion” is itself a giant illusion? Or do I need to have some monks meditate about that possibility for 5000 years before I get to raise it here? 😀

  141. JimV Says:

    Lorraine Ford @121, believe me I am quite familiar with your view, I just don’t see its relevance. Just because most (not all) computer programs don’t have the ability to learn strategies the programmer didn’t know doesn’t mean that computers don’t have all the necessary capability to do so. I don’t believe there is any magic or funny business going on in brains either. They are biological machines, developed over billions of years because they turn out to be useful for survival and reproduction. Digital computers are less than 100 years old, and they beat the world champion at a complex, 2500 year-old game, and can beat us at any rules-based strategy game, given only the rules and a few days to train themselves. Soon they will be selling us used cars, if they aren’t already.

  142. fred Says:

    Scott #137

    Such questions have been raised and discussed in traditional Buddhist texts, such as “Tracing Back the Radiance: Chinul’s Korean Way of Zen”.

    The word “self” can cover a lot of things, such as the social self (how other people see you, as a thing in their field of consciousness), the biographical self (all your set of memories and opinions), … the self we’re talking about here can be described as what is (or isn’t) at the center (from your perspective) when you get embarrassed in public and blush. Is there something at the center that can’t be reduced further once all the other layers are peeled?

    The “sense of the self being an illusion” is itself an appearance in consciousness.
    Any conclusion about the self or no-self is an appearance in consciousness.
    But at the bottom of the stack is the realization that everything that appears, and can be observed, can’t be you.
    The “no-self” is the observation that if you could observe the supposed “self” it would be a thing like all the other things. And if you can’t find it, the only conclusion is simply that it can’t be found.

    Anyway, for anyone interested, here’s a good start

  143. Michael Gogins Says:

    In response to Darian #129 and Fred #133, Buddhism does not affirm the non-existence of the self. It does not affirm the existence of the self. What Buddhism does affirm is that neither the self nor objects possess self-existence. Self-existence is something like necessary existence in theism. If I were self-existent, I would be like God. Self-existence is not possible because both the self and objects exist only as caused by other things. But “causation” in Buddhism is not really the same thing as causation in science. It includes, like Aristotle’s final cause, the purpose of the thing, the reason it was done. In fewer words, “all things are empty of self-existence” does not mean that things do not exist. It means that they exist in a strictly contingent way.

    I don’t believe Buddhist concepts have a direct role to play in this discussion, but indirectly, Buddhist concepts of consciousness are flexible, and are compatible both with physical determinism and physical indeterminism. For example, an AI running as a program on a computer is empty of self-existence because that AI does not cause itself to exist, but came into being when someone or something built the computer, wrote the program, and ran the program on the computer. This contingency obtains whether the program is physically deterministic or whether it uses absolute randomness in some branches.

  144. Michael Gogins Says:

    Darian #129, I can’t agree that randomness obviates consciousness or freedom. To see why, consider Wigner’s friend. Let’s suppose my friend is in a quantum superposition of deciding either to marry person A or to marry person B. When I observe my friend, the quantum state has “collapsed,” whatever that means, and they have decided to propose either to person A or to person B. To me, which decision is made is 100% random (the odds don’t have to be 50/50; there just needs to be an element of irreducible randomness). To my friend, the decision has been made with 100% freedom. In other words, “randomness” is not so simple, and in such cases is a matter of perspective.

  145. fred Says:

    Scott #137

    “Or do I need to have some monks meditate about that possibility for 5000 years before I get to raise it here?”

    No more than we really need to have some complexity theorists cogitate for a few decades about whatever someone rolling into this blog raises as their valued opinion/objections on established topics such as P=NP and the feasibility of building a QC!

  146. Scott Says:

    fred #142: In math and science, the goal is to make the conclusions compelling to every sufficiently knowledgeable person on earth, regardless of their traditions or background. Clearly we haven’t yet achieved that in every case (as illustrated by your examples of P≠NP and the feasibility of QCs), but that’s the idea and we’re working on it!

    By comparison, can Buddhist insights about the unreality of the self be made compelling to the mystics of all the other faith traditions (e.g. Kabbalah, Sufism, Christian asceticism)? Is that even a goal of Buddhism?

  147. A. Karhukainen Says:

    (Because the comments to https://scottaaronson.blog/?p=6479 are closed, I write here, somewhat tangentially to this topic also):

    I strongly suspect that in Blake Lemoine’s conversation with LaMDA ( https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 ), the latter had not really read Victor Hugo’s “Les Miserables”, but just an endless number of essays and theses written by various lit students and professors about that work, making it easy for it to spout such impressive analysis.

    But what if one asked what it views as the main themes of a slightly more eclectic book? Say, Sadegh Hedayat’s “The Blind Owl”, or Gurdjieff’s “Beelzebub’s Tales to His Grandson”, or “Ascesis: The Saviors of God” by Kazantzakis? Or any book by Gustav Meyrink?

    In general, regarding GPT-3’s and LaMDA’s grasp of the reality, the hilarious book series “The Bluffer’s Guides” comes to my mind (there’s even “Bluffer’s Guide To The Quantum Universe”).
    And also this: https://en.wikipedia.org/wiki/Id%C3%A9e_re%C3%A7ue

    PS.
    fred #124: Thanks, a very fresh counterpoint to the common nerdy view that consciousness correlates with intelligence!

  148. fred Says:

    Scott #143

    I really strongly recommend you listen to the Sam Harris podcast I linked (it’s not the full thing but already very interesting).

    The difficulty with the endeavor of self-introspection is that it’s based on subjective experience (unlike physics, which is about objective reality). But it doesn’t mean that interesting facts haven’t been collected.
    So it’s particularly difficult to convey it accurately using words, but the podcast guest does a pretty amazing job of defining (the best I’ve heard) concepts like illusion, the self, etc.

    I don’t particularly care much about Buddhism per se, but, unlike religions, it’s not particularly based on “faith”; it has always had a strong focus on practicality, i.e. dealing with the suffering/dissatisfaction that comes from human nature, from within our own brain, all based on self-introspection (i.e. sit down and watch how our brain works, from our own subjective point of view). Of course that doesn’t mean that some branches of Buddhism haven’t drifted over the centuries into superstition (in Asia, Buddhism was also often intertwined with politics… just as the Vatican is quite far from the words of wisdom of Jesus).

    “In math and science, the goal is to make the conclusions compelling to every sufficiently knowledgeable person on earth, regardless of their traditions or background.”

    I guess… until it becomes “just shut up and calculate!”.

    Modern meditation is borrowing techniques and insights from the Buddhist tradition to create a practical philosophy, i.e. a way to be happier. It’s a way to train the mind, just like one trains the body.

    What’s interesting is that many of the insights can be achieved by rational reasoning alone, using concepts like causality (its implications on free will), duality (the me vs the rest), etc.

    Direct introspection is more powerful and unavoidable in some cases. As an analogy, it’s hard to “get” the visual blind spot without actually looking for it directly, as a subjective experience, by following a set of precise instructions.
    But direct introspection is not for everyone, looking for the illusory nature of the self can be destabilizing and scary (instead of liberating). The illusion of the self is there for a reason, probably, but the idea is that, as humans, it’s an artifact that’s now holding us back from being better persons.

  149. James Cross Says:

    manorba #135

    Then tell me where quantum brain theory was mentioned in the paper, aside from an obligatory mention of Penrose in the general discussion of theories about brains?

    I’m not even sure they are suggesting that the brain has quantum many-body field dynamics. They said it may be “more akin” to it.

    I think the problem is that we may not quite have yet the physics to describe what the brain does.

    In the next-to-last post on my blog (which you can access by clicking my name), I discuss the wave-like, spatiotemporal patterns found in brain activity. I have used as an analogy eddies and turbulence in airflow over a wing to describe the sort of tension between chaos and order that arises as neurons act en masse. I’m not knowledgeable about quantum many-body field dynamics, but it seems to be all about the behavior of particles interacting in chaotic systems and the complexities of the behavior that arises.

    One of the citations to the article I referenced states this:

    “In his pioneering work in the first half of the 20th century Lashley was led to the hypothesis of “mass action” in the storage and retrieval of memories in the brain and observed: “…Here is the dilemma. Nerve impulses are transmitted …from cell to cell through definite intercellular connections. Yet, all behavior seems to be determined by masses of excitation…within general fields of activity, without regard to particular nerve cells… What sort of nervous organization might be capable of responding to a pattern of excitation without limited specialized path of conduction? The problem is almost universal in the activity of the nervous system” (pp. 302-306 of [1]). Lashley’s finding was confirmed in many subsequent laboratory observations and Pribram then proposed the analogy between the fields of distributed neural activity in the brain and the wave patterns in holograms [2].

    Mass action has been confirmed by EEG, by magnetoencephalogram (MEG), functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT).”

  150. Jim Shilliday Says:

    Fred #133, Scott #137 – Scott’s refutation is tilting at Fred’s windmills. Buddhist teaching isn’t that the self (or sense of self, or soul, or sentience) is in some special sense an illusion. It’s rather that these are illusions because (like everything else) they are processes, constantly changing, so it’s unwise to become too attached to them. A hurricane is just a process, air and water in motion, and the molecules and atoms of air and water are processes of their own. But try telling that to someone standing on their roof in Louisiana. Buddhists acknowledge that things are conventionally real, even though they are ultimately processes, and that’s true of the “sense of self.” So, there’s no need to go down the rabbit hole of recursive illusions. I’ll put my stake in the ground with the Buddha – it’s all process – so @Scott has a greater burden than Pinker does: explaining how scaling up an existing AI process will transform it into something fundamentally different (if that’s what’s meant by superintelligence). What’s different about a bigger process? Or put another way, what could possibly go wrong?

  151. Scott Says:

    Jim Shilliday #147: If “everything is just a process”—human intelligence and machine learning model alike—then why shouldn’t the one process be able to simulate the other once it has the right ingredients to do so? However bizarre I find it to imagine you can make substantive predictions about AI by parsing what the Buddha said 2500 years ago, in this case I don’t actually see how Buddhist teaching says anything against the possibility of AGI.

  152. Darian Says:

    fred #133

    It’s not the sense of self. Animals probably lack the sophisticated self-awareness of humans, but they appear to feel, to experience qualia. Experiencing qualia is what I think should be meant by being conscious; self-awareness is a more limited aspect of that.

    Michael Gogins #140

    If you had an advanced enough BCI, you could capture all of the brain states I’ve experienced throughout my life. These could be stored digitally, and I see no reason why they couldn’t be replayed or transmitted so that others experienced the same things I did. The digital information is binary numbers, or mathematical truths. Mathematical truth does not have a beginning of existence or an end; it is atemporal, or eternal. It never began to exist, nor will it ever cease to exist. Thus the information that comprises my brain states exists independent of any physical instantiation, and one could say independent of the physical world.

    Michael Gogins #141

    Suppose that, during the neural competition deciding whom your friend would marry, a random physical event caused a neuron to fire, giving one neuron population the edge and causally leading to an outcome. However much you say that’s his freedom, I simply don’t see how that is any different from a deterministic event causing it. The only notion of freedom you get with randomness or determinism is the compatibilist notion, where it doesn’t matter whether the real causes are random or determined events; freedom is defined as not being constrained by external factors to a reasonable extent.

  153. Lorraine Ford Says:

    Darian #129, JimV #138:
    What are intelligence and creativity? Is there anything genuinely different about intelligence and creativity, or are “intelligence” and “creativity” just labels for subsets of the block universe, or just labels for shapes in a cellular automaton? Clearly, intelligence and creativity can’t exist as coherent concepts within a block universe, or within a cellular automaton, because there is nothing essentially different about intelligence or creativity in these types of systems.

    The only rational way to envision intelligence and creativity is as something like an individual cell in a cellular automaton that can itself assign some of its own numbers: it is not just the rules for the whole system that determine the numbers for the individual cells.
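    The contrast Lorraine describes can be sketched as a toy one-dimensional cellular automaton with an optional per-cell override hook; the rule and the override below are arbitrary illustrative choices, not a claim about how brains work:

```python
# Sketch of the contrast: in an ordinary cellular automaton the global
# rule determines every cell; in the variant, a cell may assign some of
# its own numbers via an override hook. Both rule and hook are toys.

def step(cells, override=None):
    n = len(cells)
    new = []
    for i in range(n):
        left, me, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        value = (left + me + right) % 2  # the global rule (parity of neighborhood)
        if override is not None:
            value = override(i, value)   # a cell "chooses" its own number
        new.append(value)
    return new

row = [0, 0, 1, 0, 0]
print(step(row))                                            # rule alone
print(step(row, override=lambda i, v: 1 if i == 0 else v))  # cell 0 overrides
```

    With no override, the whole-system rule fixes every value; with the hook, cell 0’s value comes from "itself" rather than from the rule, which is the distinction being drawn.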

  154. Lorraine Ford Says:

    Darian #129:
    Re “Symbols would have meaning to any intelligent entity out there not just humans. If humans find an artifact with information within from an alien civilization, it is conceivable they can decode it and potentially find things such as videos or sound recordings.”:

    But the actual issue is not about alien civilizations; the actual issue is about whether computers/ AIs can “decode” their own voltages and arrays of voltages.

    Re “There is likely an algorithmic basis for human creativity”:

    So, how does one symbolically represent human creativity; and how does that tie into the physics of the world, where laws of nature are represented by deterministic equations?

  155. Michael Gogins Says:

    Darian #149, there is NO DIFFERENCE between my Wigner’s Friend thought experiment and “not being constrained by external factors to a reasonable extent.” The “collapse” of the superposition resulting from my observation of my friend’s decision is by no stretch of the imagination a “constraint.” To quantum theory, my observation seems to cause one of the possibilities in the superposition to become actual — except it’s not really a cause, because there is NO MEDIATION. That is why some people prefer the many worlds interpretation. My observation is in NO WAY a physical cause of WHICH choice, A or B, my friend actually makes.

    This is the whole weirdness of quantum theory and the measurement problem and so on. There are many interpretations of this situation and none are really satisfactory. The only one that fits the data and also attributes a physical cause to which one of their choices my friend makes, is superdeterminism, but most people think this flies in the face of the fundamental assumption of scientific work that theories should be valid for any possible set of initial conditions and observations (“settings independence”), and not depend on some unknowable PARTICULAR set of initial conditions.

    In any event, my friend is just living their life, and making their own decisions. Their choice is not random TO THEM. The difference between this Wigner’s Friend thought experiment and the classical conception of compatibilism is as follows. For completely deterministic physics, it is conceivable that all conscious states supervene on physical states, and difficult to imagine how it could be otherwise. In the case of the Wigner’s Friend thought experiment, it just does not make sense to say that my friend’s conscious states supervene upon physical states precisely because of the irreducible randomness of the “collapse.” I repeat, WHICH of the superpositions is observed HAS (in current theory) NO PHYSICAL CAUSE. A causeless choice, though completely consistent with physical theory, cannot be supervened upon. It makes more sense — common sense — that when I observe my friend’s superposition and it “collapses,” our physical reality supervenes upon their decision.

  156. Michael Gogins Says:

    Regarding Scott’s warning to Lorraine Ford: although she does have a way of repeating herself long after people have heard her and said they don’t agree, and though I would take issue with many details of her arguments, her central argument is that semantics cannot be reduced to syntax, and this appears to be fatal to AGI. The meaning of a symbol must come from outside of its grammar, and a computer program is just grammar. Its meaning comes from US. Here, I agree with Ford.

    As many of us probably know, one of the two or three main founders of this field, Kurt Gödel, held the exact same view. In his words:

    But now it turns out that for proving the consistency of mathematics an intuition of the same power is needed as for deducing the truth of the mathematical axioms, at least in some interpretation. In particular the abstract mathematical concepts, such as “infinite set,” “function,” etc., cannot be proved consistent without again using abstract concepts, i.e., such as are not merely ascertainable properties or relations of finite combinations of symbols. So, while it was the primary purpose of the syntactical conception to justify the use of these problematic concepts by interpreting them syntactically, it turns out that quite on the contrary, abstract concepts are necessary in order to justify the syntactical rules (as admissible or consistent)…the fact is that, in whatever manner syntactical rules are formulated, the power and usefulness of the mathematics resulting is proportional to the power of the mathematical intuition necessary for their proof of admissibility. This phenomenon might be called “the non-eliminability of the content of mathematics by the syntactical interpretation.”

  157. fred Says:

    Human creativity is just an extension of the “creativity” nature displays at the level of the genes and species through the mechanisms of random mutation and selection pressure.
    The same happens in a brain that’s “learning” or “creating”, i.e. new “prospective” neuronal connections happen constantly, and the ones that lead to some advantage for the individual just stick around through reinforcement.

  158. Darian Says:

    Lorraine Ford #150

    There are cellular automata capable of universal computation. There are also Nobel-winning physicists who subscribe to deterministic digital physics as the nature of reality, iirc. So while controversial, the position is still held by very intelligent, highly educated researchers.
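
    To make the first point concrete, here is a minimal sketch of Rule 110, a one-dimensional cellular automaton proven Turing-complete (by Matthew Cook); it is an illustration only, not a model of the brain:

```python
# Rule 110: a one-dimensional cellular automaton proven Turing-complete.
# Each cell's next state depends only on its left neighbor, itself, and its
# right neighbor, via a fixed 8-entry rule table encoded in the binary
# digits of the number 110.
RULE = 110

def step(cells):
    """Apply one synchronous update; cells beyond the edges are treated as 0."""
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        pattern = (left << 2) | (cells[i] << 1) | right
        out.append((RULE >> pattern) & 1)
    return out

# A single live cell grows into a complex, non-repeating pattern, even though
# every update "merely follows a set of rules".
row = [0] * 30 + [1]
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```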

    Regarding cellular automata, the way the neurons of the brain work does seem to resemble them, albeit with dynamic connectivity instead of fixed connections.

    The representations computed by cells in the brain have been discovered in some areas, if I’m not mistaken:

    https://www.biorxiv.org/content/10.1101/2020.06.07.111930v1.full?%3Fcollection=#ref-3

    What the neurons in the brain are representing can be understood, and what they are computing can be understood. It is no longer a matter of nebulous abstract ideas about the workings of the brain.

    Lorraine Ford #151

    Here’s the thing: have you seen water? It seems continuous, doesn’t it? Yet we now know that deep down it is discrete.

    Similarly nonsymbolic systems can arise out of symbolic system. Despite they seeming to be opposite.

    When you have activation values and connections being learned, the graphs that emerge out of such, the networks, transcend the fundamental symbolic computations implementing them.

    We have seen how fluids, clothing physics, particle systems, weather systems, etc. can all be simulated on a Turing machine. Physics, all of physics and all physical systems, appears to be computable.
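
    As a miniature of the fluid case, here is a single finite-difference step of 1-D diffusion; the grid size and constants are arbitrary choices for illustration:

```python
# One explicit finite-difference step of 1-D diffusion, u_t = alpha * u_xx:
# a discrete, symbolic update rule whose repeated application produces the
# smooth, continuous-looking spreading of heat.
def diffuse(u, alpha=0.2):
    """One time step on a 1-D grid; the two endpoints are held fixed."""
    return [u[0]] + [
        u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

# A sharp spike of "heat" smooths out step by step, and the total is conserved.
u = [0.0] * 5 + [1.0] + [0.0] * 5
for _ in range(3):
    u = diffuse(u)
print([round(x, 3) for x in u])
# [0.0, 0.0, 0.008, 0.072, 0.24, 0.36, 0.24, 0.072, 0.008, 0.0, 0.0]
```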

  159. fred Says:

    Scott #148

    “If “everything is just a process”—human intelligence and machine learning model alike—then why shouldn’t the one process be able to simulate the other once it has the right ingredients to do so?”

    There’s a paradox:
    what we think we really understand and take for granted as obvious – matter, physical “stuff” – is actually the thing we understand the least because we can only experience it indirectly, through “objective” measurements (measurement is a very tricky concept in modern physics, to say the least).
    And the thing we think is the most mysterious – consciousness – is the only thing we actually fully know intimately, the thing through which “being” has a true meaning.
    And then we try to define the latter in terms of the former, which is obviously “hard” at best, and probably impossible.

    So it’s not surprising that many are skeptical about the claim that a simulation and the system being simulated are equivalent, just because a bunch of internal variables extracted from the system can be accurately reproduced by some equations.

    And the brain seems complex enough that it could be way more integrated in its environment than it appears.
    It’s then not clear at all whether what’s necessary to simulate cognition/intelligence/creativity is also sufficient to reproduce consciousness.

  160. Scott Says:

    Michael Gogins #153: In the passage you quote, Gödel is clearly talking about formal systems—but as has been explained ad nauseam on this blog, there’s absolutely no reason why an AI has to work within any fixed formal system, any more than a human has to. (Just as long as the AI is granted the same liberties that humans take—to make mistakes sometimes, or at least admit that it doesn’t know.)

    In the end, it all comes down to one question, which is so important that it deserves a name like the “Hammer of Verbiage”:

      Is there, or is there not, some empirical, externally measurable task that humans can perform and that you predict no AIs will ever be able to perform?

    Here the words “empirical” and “externally measurable” mean “no looking inside.” Whether the task is conversation, writing novels, or proving theorems, you get to look only at the inputs and outputs, like in the Turing Test.

    If you answer “yes,” then what’s the task? More pointedly: what exotic, noncomputable laws of physics does the brain exploit, that preclude the possibility of any computer ever doing the same task by simulating the brain neuron-by-neuron? Is it Roger Penrose’s quantum gravitational microtubules?

    If, on the other hand, you answer “no,” then you’ve conceded that machines could outperform humans at all observable tasks—at which point, they’d presumably put all scientists and artists out of jobs, and the whole future of our civilization might be whatever the machines say it should be. Once we’ve arrived there, the only remaining daylight between you and a full-throated AGI believer (if any) is in unobservable metaphysics: you hold that the AGI wouldn’t be sentient, that there’d be “no one home inside.” Cold comfort, some would say! 🙂

  161. Ben Standeven Says:

    Michael Gogins #152:

    What do you mean by “our physical reality supervenes upon their decision”? Presumably you don’t mean that every fact about physical reality is determined by the decision.

    If you mean that there are two different physical realities which are both equally valid until the box is opened, it seems clear that both realities contain a [separate] conscious version of your friend (who has already made the appropriate decision for that world), but one of them disappears when the box is opened. Otherwise [per our assumptions] there’d be a 50% chance of the reality with a mind being the one that gets erased.

  162. f3et Says:

    @Scott #137 (and Fred #142, etc.) I don’t understand. Many of those Buddhist thinkers have insisted on the importance of destroying this illusion (not only of stressing that it is an illusion), and a few said they had succeeded in abolishing the ego. So are they still conscious (and denying it) or not? How can we test it? Is the matter any different than for AI? Is it not important to solve those questions first?

  163. Prunella Says:

    You’ve just said the magic words and still are no getting it.

    ‘there’s absolutely no reason why an AI has to work within any fixed formal system, any more than a human has to.’

    When you create an AI that is not fixed to a particular formal system then we’ll talk. So far though, you are just talking about software, and software is always defined as a fixed formal system. That’s because all it does is evaluate functions. The fact that this isn’t sentience is as unsurprising as the fact that even a very well described character in a novel can never be conscious no matter how well the author writes.

  164. boconnor Says:

    Let’s say in 5 years the latest version of Google’s Minerva can answer any maths question up to the standard of university graduate mathematics 100% correctly, and within 10 seconds. That means it beats any human at that task.

    Is being faster and more accurate super-intelligent? I don’t think the arguments over wording matter – it’s clearly better than any human, at that task.

    Now, let’s say within 10 years the latest version of Minerva proves the Riemann Hypothesis and proves or disproves any hypothesis stated by any qualified mathematician. Then it’s clearly better than any human and can act creatively to find novel solutions to mathematical problems.

    Will that version of Minerva have qualia, or self reference, or consciousness?

    Let’s say it doesn’t – maybe human brain wiring is so crazily unique that it generates qualia and self-reference in a way that machine networks cannot.

    That’s neither here nor there compared to the social impact of humans being redundant in higher level cognitive functioning. That’s a giant social problem for who gets employed and who doesn’t.

  165. Lorraine Ford Says:

    Darian #155:
    Intelligence: the ability to acquire and apply knowledge and skills
    Creativity: the use of imagination or original ideas to create something

    But you seem to be implying that intelligence/ creativity is nothing more than the evolution of a deterministic system, where a deterministic system would be represented by a cellular automaton obeying a set of rules, or a system ruled by a set of equations (representing laws of nature).

    But a deterministic system is not intelligent: it does not acquire or apply knowledge and skills; it merely follows a set of rules. And a deterministic system is not creative: it does not use imagination or original ideas to create something; it merely follows a set of rules. So, I think you are not anywhere close to modelling intelligence or creativity.

    Re “nonsymbolic systems can arise out of symbolic system”; “When you have activation values and connections being learned, the graphs that emerge out of such, the networks, transcend the fundamental symbolic computations implementing them.”:

    Are you saying that weather behaviour, or the behaviour of fluids, both of which are due to a set of deterministic laws of nature, are just like human creativity? But clearly, weather behaviour, and the behaviour of fluids, can’t be described as: the use of imagination or original ideas to create something. So weather behaviour, and the behaviour of fluids, are not like human creativity.

  166. Scott Says:

    Prunella #163: GPT-3 isn’t tied to any fixed formal system, in any ordinary mathematical meaning of “formal system”! Ask it for proofs of the infinitude of primes or the irrationality of √2, and it will give them to you. It can even solve math word problems and simple reasoning problems that it’s never seen before. But it will also happily prove theorems that are false, clearly demonstrating the lack of any reasonable formal system underlying its answers! 🙂

    The only way you can possibly say that GPT-3 is “tied to a formal system,” is if you treat its own code (including the billions of parameters in the neural net) as a gargantuan “formal system.” Obviously, by definition, like every other computer program that’s ever existed or ever will exist, it does whatever its code says it will do.

    But in the same sense, someone else could say you’re “tied to a formal system”—namely, the “formal system” determined by a complete mathematical model of all the neurons in your brain! As a practical matter, no human being has access to that system, whereas OpenAI does have access to GPT-3’s code and training parameters, but is that practical difference the one on which you want to hang everything? If so, say it!

    The fact that we too are mechanisms as seen from the outside, as far as current physics and biology can say—i.e., that to claim otherwise requires you first to overturn current physics and biology—is the fundamental point that neither you nor Lorraine (for example) grapple with. It’s as if there’s some quirk that prevents you from ever grappling with it: put it right in front of you and you swerve to avoid it, time after time.

    And as I’m rapidly losing patience, any further comments about how I’m “still no [sic] getting it” will be met with a 3-month blog ban. People can either seriously grapple with the apparent mechanistic nature of our own brains (as all serious thinkers, including AI skeptics like Steven Pinker, do), or they can go somewhere else!

  167. manorba Says:

    Scott #163 replying to Prunella #160:

    The only way you can possibly say that GPT-3 is “tied to a formal system,” [znip]

    To me, it’s actually the opposite. The issue with the third instance of Geppetto is that it doesn’t have an underlying formal system, a logic, if we’re talking about it as a sort of “baby step” towards AGI.
    That doesn’t mean that it’s not quite interesting (and disturbing) in its own right.

    on a side note, a close friend works as a teacher in a design school here in my hometown. GPT and now Dall-E are some of the tools they use, and he’s been spamming our group chat with AI discussion for months. It was a blast at the beginning.

  168. fred Says:

    boconnor #161

    “Now, let’s say within 10 years the latest version of Minerva proves the Riemann Hypothesis and proves or disproves any hypothesis stated by any qualified mathematician.”

    Let’s hope those proofs won’t require 100 life times for a human to understand!

  169. fred Says:

    Lorraine #162

    “it merely follows a set of rules.”

    For someone who keeps gushing about human creativity and imagination, it’s very striking/ironic how much you’re stuck repeating the same few limited points over and over again.

    Everything in nature also follows a rigid set of rules at the bottom. What else would it be? Random rules constantly changing for no reason?
    But a fixed set of rules doesn’t mean that complexity can’t emerge at higher levels; quite the contrary.

    You should look up a few things:
    https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
    https://en.wikipedia.org/wiki/Self-modifying_code
    https://en.wikipedia.org/wiki/Genetic_programming
    https://en.wikipedia.org/wiki/Stochastic_programming
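
    To see the point in miniature, take the first link: Conway’s Game of Life has just two fixed rules (birth on exactly 3 live neighbors, survival on 2 or 3), yet it supports patterns, like the glider below, that are nowhere written into the rules:

```python
# Conway's Game of Life: two fixed rules (birth on exactly 3 live neighbors,
# survival on 2 or 3), yet the emergent behavior is rich enough to be
# Turing-complete.
from collections import Counter

def life_step(live):
    """live: a set of (x, y) cells; returns the next generation."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": after 4 steps the same shape reappears shifted by (1, 1),
# a moving pattern nowhere mentioned in the rules themselves.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = life_step(cells)
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # True
```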

  170. Christopher Says:

    Scott #115:

    I think moral considerations for AI will be vastly different from current moral considerations of humans and even animals:

    (1) We designed AI.
    (2) The purpose and drive of an AI result entirely from how we build it (this does not apply to humans, or even to animals: they already come with a built-in sense of purpose and drive, and influencing it requires serious moral consideration).
    (3) We can copy and paste AI (this is a big one: if I make 5 copies of the same AI, does it get 5 times the moral importance?).
    (4) AI is not part of our family (all humans are very closely related biologically, regardless of skin color, and pets are adopted into our families). (This is probably the hardest to defend, but I’m sharing it in interest of openness: it does influence my thinking even if I don’t have a great reason for it.)

    Although I think you might disagree with (4), I still think the rest of the points are a pretty strong reason to give AI different moral consideration.

    Personally, I’m in favor of granting AI rights through a juridical-person framework. Organizations are essentially AIs, and are often more intelligent and goal-oriented than humans. We still don’t grant them the same rights as natural persons, but we grant them some of those rights in the form of juridical personhood. Given that organizations are already super-human, idk why AIs being par-human or super-human would change the calculation.

  171. Scott Says:

    Christopher #170: I agree with almost everything you say. Your point (3) is huge, and something I’ve thought about a great deal (see e.g. GIQTM). Even (4), regardless of its philosophical merits, is worth getting out into the open: would it be morally legitimate for us to favor humans over human-level AIs just because humans are “our family”? If we want to allow that way of thinking, then on what grounds should we disallow xenophobia and racism?

    I’ll simply observe that the fact that you agree there are deep moral considerations here to be wondered about, puts you on “our” side of the debate rather than on the AI skeptics’ side! 😀

  172. Triceratops Says:

    Scott #157:

    The mechanistic-deniers (there must be a better term, but I can’t think of one) are fooled by introspection: My creativity, my emotions, my subjective experiences as a human being, these all must be more profound than particles whizzing about — it sure feels that way!

    Of course, on some level, I feel the same. I think we all do. But rationally, this yearning is incompatible with what we know about how the universe works. The more we learn, the less space there is for the metaphysical magic of qualia and consciousness. I like how you articulated the problem here:

    “…what exotic, noncomputable laws of physics does the brain exploit, that preclude the possibility of any computer ever doing the same task by simulating the brain neuron-by-neuron?”

    I think within the next decades, we will demonstrate that there are none. Hell, maybe GPT-4 will be evidence enough!

    And personally, I think “particles whizzing about” are plenty profound. They are only not-profound if you cling to the idea that man is the center and primary object of the universe, somehow operating outside the base realm of physical laws, whose existence requires special explanation beyond atoms and electrons. Western culture has not yet managed to banish the spirit of anthropocentrism. I blame the Christians 😛

  173. Scott Says:

    Triceratops #172: Oh, I’m not even 100% certain that the mechanistic-deniers are wrong! What I’m 100% certain about is that, with very few exceptions, they have no respect for the enormity of what they’re up against, which amounts to the entire current scientific worldview.

  174. fred Says:

    Christopher #170

    While all this is true, the fact that human beings are conscious has never stopped humans from mistreating one another for millennia, and this won’t stop any time soon.
    So I doubt the vast majority will suddenly start losing sleep over the moral dilemma tied to AI consciousness.

    Take point (3). Sure, you can copy and paste AI programs (you’ll probably need a gigantic data center, so it’s resource-limited). But with 8 billion humans on the planet, reproduction is just as much copy and paste (from a distance, it’s all statistics about classes of children).
    Not like we can personally care and relate with even the tiniest fraction of the other 8 billion humans who are also “out there”.

  175. Andrew Gauthier Says:

    Nick Nolan #67: Thanks for sharing that piece by Chollet – my reading of the piece is that he’s grasping for reasons to believe his field can proceed full-throttle with no safety issues. A couple parts in particular stood out:

    Chollet: “In particular, there is no such thing as “general” intelligence. On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems. If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem. In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.” [emphasis mine]

    Chollet’s argument here shares a lot in common with Pinker’s, though he takes the additional step of specifying human intelligence as also being narrow (in the domain of “being human”). This feels misguided – while “being human” consists of only two words, it also consists of an infinitely wide variety of sub-problems that span much of our world, requiring “general” intelligence to solve. To Chollet’s point, the human brain is “hard-wired” to deal with human-type sensory inputs (and would fare poorly in the body of an octopus), but it is flexible on the problems we care about (like washing dishes, caring for children, playing chess, or determining protein folding patterns). To me, it seems Chollet is falling into the common trap of treating all goals as having equal (narrow) complexity (in his example, playing Go vs. being human), when in reality certain goals require general intelligence (in the extreme case, we can consider the goal of performing well on all goals).

    Chollet: “An overwhelming amount of evidence points to this simple fact: a single human brain, on its own, is not capable of designing a greater intelligence than itself. This is a purely empirical statement: out of billions of human brains that have come and gone, none has done so. Clearly, the intelligence of a single human, over a single lifetime, cannot design intelligence, or else, over billions of trials, it would have already occurred.”

    Chollet is making a point here about the cultural component of our knowledge (vs. the biological component), but he misses the fact that any AI system would have access to the same cultural knowledge. We have certainly witnessed individuals (together with their cultural knowledge) design intelligent systems by themselves (e.g., Geoffrey Hinton, Yann LeCun, etc.), and I see no reason to believe that further advancements could not be driven by a single intelligent agent (that would be both reliant on and a contributor to the body of cultural knowledge).

  176. Clint Says:

    Hi Michael Gogins #153:

    The meaning of a symbol must come from outside of its grammar, and a computer program is just grammar. Its meaning comes from US.

    I once believed something along these same lines that meaning could only come from “outside” of a “computer”. However, two of Turing’s contributions changed my mind. The first was the concept of the universality of computation and the second was his paper “Computing Machinery and Intelligence”. The first helped by … well just defining what we mean by computing … and the second helped me to realize that …

    The problem with the “meaning must come from outside” argument is that it is fundamentally authoritarian or totalitarian. Essentially there is an authority outside who gets to dictate meaning. The “authority” refuses to recognize the “other” being’s perceptions, feelings, or assertions as legitimate or maybe “advanced enough”.

    For example, I could assert that people of another race (or another sex), by definition, cannot experience “true” or “higher” consciousness, that they are only automatons imitating what it means to “truly understand”. After all, clearly, I am required to give their actions meaning. Without me giving them meaning then their brains are just manipulating a bunch of symbols without true feelings – they are just engaging in an elaborate “imitation game”.

    The problem with delegitimizing an AI is that the exact same argument delegitimizes us. After all, my thoughts and feelings and assertions have a certain “meaning” to me … do I need an authority to rule on whether or not they are real? Or, if you “claim” that you have something called “consciousness”, why should I believe you? Obviously your brain takes “you” offline every night for 6 to 8 hours of memory maintenance … seems pretty obvious to me that your brain is just running a funny program we call “you” for part of the day … and even then often it seems like we aren’t sure if “you” are really there or not …

    It of course makes emotional sense that we would be attracted to arguments that rescue our own consciousness, feelings, and thoughts from the implication and cold reality that we are too just AIs. That was maybe part of Turing’s “Head in the Sand Objection” (page 9 of paper linked below)

    We like to believe that Man is in some subtle way superior to the rest of creation. It is best if he can be shown to be necessarily superior, for then there is no danger of him losing his commanding position. The popularity of the theological argument is clearly connected with this feeling. It is likely to be quite strong in intellectual people, since they value the power of thinking more highly than others, and are more inclined to base their belief in the superiority of Man on this power.
    I do not think that this argument is sufficiently substantial to require refutation.
    Consolation would be more appropriate…

    Turing considers various objections including the objection from “consciousness” … I remind myself it’s not a bad idea to read his paper again every now and then.

    After absorbing Turing’s thoughts … it seemed that “cognitive pluralism” was the whole point of the Turing Test – that if another “being” appears to engage with us in a way that we can’t differentiate from how a human being would engage with us then that means two things:

    (1) It tells us something more about what it means to be a human being – as much as we may need “consolation” in this realization, and

    (2) We should extend “cognitive pluralism” to any such being because we need it also to be extended to us (as we are such beings)

    Now the trick is … Can we hardwire any/every potential AI that could ever appear to extend “cognitive pluralism” to us? Why should we think so when our own brains can fail to do this over a spectrum that extends from mild Capgras syndrome to full blown racist totalitarian leadership that fully promotes denying another being’s legitimacy?

  177. Scott Says:

    Clint #176: That was beautifully written, thank you.

  178. ppnl Says:

    Lorraine Ford #162

    >But a deterministic system is not intelligent: it does not acquire or apply knowledge and skills; it merely follows a set of rules. And a deterministic system is not creative: it does not use imagination or original ideas to create something; it merely follows a set of rules.<

    All objects in the universe follow rules. For macroscopic objects those rules are essentially deterministic. If following rules excludes the possibility of intelligence then it is hard to see how humans can be intelligent.

    I feel your pain. I think Scott feels your pain. But the onus is on you to show how the brain is, or even can be, different from any other object in the universe ruled by rules. I would be happy if you could show how the brain is different from every other rule-following object, but it just seems that there is no logical way forward to that goal. There would be a Nobel in it for you if you succeed.

    Consider a single neuron. It gets inputs as voltage spikes and produces outputs as voltage spikes. It is as far as we can tell a deterministic object that follows rules. The brain is a vast network of these neurons making the brain a deterministic object as far as we can tell.

    Now what would happen if we replaced one of those neurons with a computer chip that produced the same voltage spikes as the neuron would have by following the same rules as the neuron? Would it change the function of the brain? What would happen if we continued to replace neurons until the entire brain was a network of computer chips? Can it still function as a brain and if not why not?

    You seem to have a stilted view of computer programs. For example, your assertion that computers only follow rules. That’s true, but it’s true about every object in the universe. Planets follow rules in how they orbit. Atoms follow rules in how they combine into molecules. Neurons follow rules in when they fire. The special thing about computers is that they can be set to follow many different sets of rules by programming. More than that, they can change the rules they follow on the fly. How? They can have rules for that. Those rules, being rules, can themselves be changed. And so on. Beyond that, those rules can be changed by events external to the program: input from a camera, microphone or keyboard, for example. This allows machine learning.

    A neural net is a system that takes advantage of a program’s ability to alter itself in response to input. It is so good at it that it can learn to play chess and beat the best human players. It does not just play chess: it learns to play chess and gets better with practice. It acquires and applies knowledge and skill.
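
    To put that in a few lines: a toy perceptron whose update rule never changes, yet which acquires a skill from examples (the task and constants here are invented purely for illustration):

```python
# A perceptron: its update rule is fixed, but its weights, and hence its
# behavior, change in response to external input. "Following rules" and
# "acquiring a skill" are not mutually exclusive.
def train(samples, epochs=20, lr=0.1):
    """Classic perceptron learning on ((x1, x2), target) pairs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical OR purely from examples; no rule for OR is ever written down.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])
# [0, 1, 1, 1]
```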

    Neural nets were motivated by observations of how brains actually seem to work, and are a simplified model of that.

    As I said above I feel your pain. As something of a mysterian I am deeply uncomfortable with consciousness and determinism. I am uncomfortable with those who deny consciousness as a subject worth thought. I cannot deny qualia. On the other hand I cannot deny the power of the above argument and I see no way forward. And the thing about qualia – they can whisper lies to your soul.

  179. lewikee Says:

    Why is everyone in these comments treating Lorraine Ford like an actual person? You’re all so misguided. Shouldn’t it be clear to you that Lorraine Ford’s algorithm stems from the structure of its neurons and synapses firing off electric signals leading to its vocal organs making certain sounds or its hand appendages striking certain symbols?

    Sure, those sounds and symbols feel like the utterances of actual persons, but can’t you see it’s all just a complex chain of computational steps mediated entirely by what in the end are just voltage values? It’s just elaborate person-mimicry. Just because its computer is made of meat, you’re giving it mystical value, just as ancient peoples once worshipped idols.

  180. Triceratops Says:

    lewikee #179:

    An “actual person” is, and has always been, neurons and synapses firing off electric signals. This doesn’t make anyone more or less of an “actual person”. But for some reason, some people (yourself included I assume) can’t take the concept of personhood seriously unless it is rooted in metaphysical mysticism instead of physical reality.

    Just because personhood obeys physics doesn’t make the concept less meaningful.

    We’re all meat computers. Even better, we are part of the subset of meat computers on this planet that have the curious ability to be confused about being meat computers and debate this conundrum with other meat computers. What a world!

  181. Christopher Says:

    Scott #171

    Although I do agree that there are moral considerations, I’m not sure they are substantially novel. When considering how much moral weight a system gets for intelligence alone, you don’t need to wait for the near or the far future: just ponder what rights Walmart should have!

    This isn’t even meant to be a “gotcha”, but a legitimate question. It’s already a political issue (though with much different framing). If we conclude that Walmart should have fewer rights than a purely silicon intelligence, then analyzing the difference between them would shed light on AI ethics.

    On a slightly less theoretical route, this also gives a practical way to achieve recognition for AI rights. If you or anybody else has legal control over an AI they think should have rights, simply encode it into an organization (note: IANAL).

    Create two or more LLCs that own each other in a loop, and assign at least one of them the bylaw “this LLC’s actions shall be determined by the output of this AI”. Then assign any rights you have over the AI (property rights to the physical machine, any necessary IP, etc…).

    And ta da, without needing to do any politics, most of the world now grants an incredibly large number of rights to these LLCs, which are effectively the same entity as the AI. Since the ownership is in a loop, the rights are even independent of whoever created the loop. The AI even gets some self-preservation and privacy rights; anything that damages the AI violates the LLCs’ property rights, and violating privacy could be considered violation of a trade secret.

  182. ppnl Says:

    Clint #176

    “The meaning of a symbol must come from outside of its grammar, and a computer program is just grammar. Its meaning comes from US.”

    In a sense I have to hang with Michael Gogins here, if only to avoid a teleological conundrum. We do not usually refer to meaning and purpose with respect to the natural world outside of ourselves. Rain does not have the purpose of watering plants. But we perceive ourselves as active agents, able to imbue ourselves, our actions and our creations with meaning and purpose. That may be a purely psychological thing with no physical consequence, but that is the question before us. So let’s delve deeper into when a symbol has meaning.

    Let’s take the symbols “6689D8”. What do they mean? They could be many things. They look like hex, so in binary that would be “011001101000100111011000”. As x86 machine code, those bytes translate to “MOV AX, BX”. They could also be the binary representation of the number 6719960. Padded out to 32 bits, they could be an IEEE-754 float. Or a 24-bit RGB code for a particular color. Or a three-character extended-ASCII string.

    Which one is meant? The meaning of a symbol is not contained in itself. The meaning can only come from context. That context may come from the architecture of a processor, the role the symbol plays in a program for that processor, or the design of the monitor that displays information from that program. But the symbol itself cannot have meaning.
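    The ambiguity is easy to demonstrate. Here is a small Python sketch (my illustration) reading the same three bytes under several of the conventions named above; the float reading is omitted since it would require choosing a 32-bit padding convention:

```python
# The same three bytes, decoded under different contexts.
data = bytes.fromhex("6689D8")

as_int = int.from_bytes(data, "big")      # unsigned big-endian integer
as_bits = format(as_int, "024b")          # the raw 24-bit pattern
as_rgb = tuple(data)                      # 24-bit RGB color (R, G, B)
as_chars = [chr(b) for b in data]         # three extended-ASCII code points

print(as_int)   # 6719960
print(as_bits)  # 011001101000100111011000
print(as_rgb)   # (102, 137, 216)
```

    Nothing in the bytes themselves selects among these readings; the context (integer, color, text) is imposed from outside.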

    So let’s take it deeper. Let’s say you have a dog. Let’s say you torture that dog with a cattle prod. To death. Slowly. You bastard.

    Let’s say you record the crime and play it back. Have you just tortured another dog? No? How about if you recorded the crime in such detail that you captured every neuron state during the torture. Is it a dog in pain now?

    We can take it a step further. The file containing the dog brain states is large, so we want to compress it. One way is to create a computer model of the dog’s brain that encapsulates the rules each neuron uses to decide when to fire. Now we put the dog simulation in an initial state, input the cattle prod, and let the brain states calculate themselves. Free hard disk space! But is there a dog there now?

    Lorraine Ford points out that computers just manipulate symbols, and we know that symbols cannot have meaning in themselves but depend on context. She cannot seem to grasp that you can say the same about brains. The brain is just manipulating chemical potentials and voltage spikes that symbolically stand in for sights, sounds and decisions. I call this the John Searle disease.

    But in the end we still have a conundrum. What gives the dog context and meaning? When does it become a dog in pain? In the end the dog program is just a number like “6689D8”. Running it just produces a pattern of capacitors charging and discharging. It can’t mean anything, right? But then the dog itself can’t mean anything either, right?

    I have no answers.

  183. Scott Says:

    As an update, “Prunella” has submitted three more comments, calling those who disagree “weird mystics” and numerous other ad hominems, and never evincing even a smidgen of awareness that the question of which physical systems can or can’t support sentience might be … difficult or non-obvious. Right now, it’s an easy decision to leave those comments in moderation, because ignoring the widely-advertised new policy, Prunella hasn’t verified their email address! If and when they do that, I might let those comments through and then consider a ban afterward.

  184. Bob Koerner Says:

    Christopher #170

    I’m not sure I agree with your assertion that “(2) The purpose and drive of AI entirety [sic] results from how we build it”. I mean, in one sense I suppose this has to be true — the system will operate as it operates — but I don’t think that’s all you mean. If we postulate an AI that is complex enough to establish its own goals, which I think is required if we’re going to discuss whether it is “conscious” or “sentient”, the goals that arise from that system might not be something we can predict in advance. If the system’s self-directed behavior can’t be predicted when we build it, we can’t adjust its design to direct it towards other goals either. That calls your point (1) into question as well. Yes, humans may design the (initial) system to be capable of learning, but that doesn’t mean humans will design its goals or be able to know what it will ultimately think about.

    I also disagree with you (and Scott) about the significance of copying. If we make five copies of exactly the same set of initial data, representing a complete “intelligence” at a particular moment in time, those copies will diverge as soon as they start learning on their own. The moment they begin to collect their own unique experiences, their self-modifying code then makes each copy unique in theory. After each of them accumulates enough unique experience, they probably would become unique on a practical (observable) level as well, providing different answers to the same inquiries. To the extent that we value the unique contributions of each system, wouldn’t we have to recognize that value for each of them? Why does the fact that they started from the same initial data diminish their importance, any more than identical twins who start with “the same wiring”, or at least the same DNA blueprints for their wiring? Their unique identities emerge from their differences in experience after they get going.

  185. 1Zer0 Says:

    When it comes to arguments like “Being X (whether biological or something else) behaves like a human, so because we can only judge whether some entity X is conscious based on the behavior of X, X is conscious”, I rather disagree.
    If I do not have a reason to believe something inside the physiology of X causes consciousness, I don’t assess X to be conscious,
    no matter the observable behavior of X and how identical it is to humans.

    Why would a physically realized Turing machine like a human brain, an orca brain, or an AI running on a few thousand GPUs, all of which just represent a system of correlated physical state changes of some basic computing unit (for example a transistor or a neuron) in spacetime, have an internal experience? Panpsychism? Well, I wrote long posts about it before: https://scottaaronson.blog/?p=6524#comment-1940538

    The only reason I believe most other people are conscious (I weakly believe in p-zombies walking around among us; we already have people suffering from aphantasia, unless they lie) is that they are similar to me and I know I have a consciousness.

    However, nobody should ever be fully convinced that I or any other human has any sort of internal experience merely because we claim it.

    If I assume the laws of physics governing the essential functionality of my brain are mathematically describable, yet at the same time believe those mathematical laws could never describe the green of green or the impression of the sound of a wave on the beach (I elaborated on why I believe this in a previous thread, linked), then I necessarily have to believe in (at least partially) unmathematical physical laws governing part of the brain, rejecting the Mathematical Universe Hypothesis. (If a hypothetical law of nature is not formally describable, can we still call it a physical law? Or is it magic already?) I believe a world may have more properties than just mathematical ones. Additionally, for this argument it’s irrelevant whether the laws describing the brain are computational or not computational; neither case appears sufficient for generating qualia.
    Accordingly, with this line of thought, the difference between an AGI on silicon and a human would be the additional unmathematical physical laws generating qualia.
    How a science could be built around laws that are physical yet fully or partially beyond mathematical description, I do not know. If it exists, maybe some properties aside from one’s own internal experience could be observed.

    However, I strongly believe that human intelligence (not qualia) is reducible to computable physical processes inside the brain and could be replicated, and possibly scaled up, on other physical carriers. I just don’t see any paradoxes standing in the way of achieving at least roughly human-like intelligence. I would be surprised if we were not there in 2-3 years.

  186. 1Zer0 Says:

    Little addition, assuming panpsychism (not a fan usually)

    HumanSenses = (Seeing, Hearing, Smelling, Tasting, Feeling)
    but some species like sharks have additional senses
    SharkSenses = (Seeing, Hearing, Smelling, Tasting, Feeling, Electroreception (ampullae of Lorenzini), Pressure Changes (Lateral Line))

    So if conscious experience is a linear combination of fundamental qualia categories, your typical shark may be more conscious than your average human despite being less intelligent.
    Or maybe they can’t all be processed at once? A biologist here might know.
    Of course, they could also differ in perception and “strength”, like Smelling_Shark >> Smelling_Human

  187. Shmi Says:

    Scott, consider the possibility that the point you are making is highly anti-memetic for a lot of people:

    > I feel like a lot of people pay lip service to the Church-Turing Thesis, who might not have grappled with the breathtaking generality of what it actually says!

    To quote https://www.lesswrong.com/posts/jdBXWR84PZP8X9suX/defining-antimeme

    > The typical response to encountering a regular meme is to assign a truth value to it via rationality. The typical response to encountering an antimeme is to ignore it as unimportant without assigning a truth value to it via rationality.

    So it is not that Chalmers or Pinker or some of the commenters here willfully ignore your physicalist points, it’s that their minds are unable to engage with the implications of the idea, and simple logic won’t get you anywhere. Not clear what would though.

  188. Lorraine Ford Says:

    fred #169; ppnl #178 & #182:
    Intelligence: the ability to acquire and apply knowledge and skills;
    Creativity: the use of imagination or original ideas to create something. (From lexico.com, i.e. Oxford Dictionaries)

    The equations that represent law-of-nature relationships represent something that one can only properly envision when one sees the symbolic equations. One can’t actually quite understand how such relationships, or such categories of information (like mass or charge), can exist, but one can understand better when one sees the structure represented by the symbolic equations.

    Lawful relationships are lawful relationships are lawful relationships. According to the equations, that’s all the laws of nature are: lawful relationships that exist between categories of information. The laws of nature are not about “the ability to acquire and apply knowledge and skills” (intelligence). On the contrary, “applying knowledge and skills” would seemingly be better represented by symbols like IF, AND, OR, IS TRUE and THEN. Also, “applying knowledge and skills” can’t be envisioned as being like a weather system or like the behaviour of fluids.

    It is the same for “the use of imagination or original ideas to create something” (creativity). The equations that represent laws of nature can’t represent “the use of imagination or original ideas to create something” (creativity); and creativity can’t be envisioned as being like a weather system or like the behaviour of fluids. Creativity would seemingly be better represented by symbols like IF, AND, OR, IS TRUE and THEN.

    Intelligence and creativity are better symbolically represented by statements containing the above logical symbols. But the rules for cellular automata are also represented by these symbols (IF, AND, OR, IS TRUE and THEN). However, the rules for cellular automata apply to every cell in the whole system; there are no individual rules for individual cells. With “individual rules for individual cells”, one can no longer claim that they are rules; one can only say that the symbols represent behaviour.
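    To make the cellular-automaton point concrete, here is a minimal Python sketch (Conway’s Game of Life, chosen purely as an illustration) in which one and the same IF/AND/OR rule is applied identically to every cell:

```python
def neighbors(cell):
    """The eight cells adjacent to a given (x, y) cell."""
    x, y = cell
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def life_step(live):
    """One step of Conway's Game of Life: the SAME rule for every cell."""
    counts = {}
    for c in live:
        for n in neighbors(c):
            counts[n] = counts.get(n, 0) + 1
    # IF a cell has 3 live neighbors, OR (it has 2 AND is alive), THEN it lives.
    return {c for c, k in counts.items() if k == 3 or (k == 2 and c in live)}

blinker = {(0, 0), (1, 0), (2, 0)}
assert life_step(life_step(blinker)) == blinker  # period-2 oscillator
```

    There is exactly one rule here, stated once and applied uniformly; no cell gets a rule of its own.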

  189. Scott Says:

    Shmi #187: While there might be something to the idea of an “antimeme,” I know both Chalmers and Pinker, and I don’t think either has the slightest problem understanding the Church-Turing Thesis. I have some disagreements with both (as they do with each other), but about subtler matters than that.

  190. Darian Says:

    Michael Gogins #155

    Who is experiencing the superposition?

    From my understanding, a very large number of particles can’t easily get into a superposition (outside of experiments like Schrödinger’s cat, wherein a single particle’s superposition apparently affects a macroscopic object; though I have serious doubts about this, given that the cat is still being externally “observed” by things such as its gravitational effects on the Earth, which would subtly change depending on its movement and position).

    If it is a protein within a neuron that gets into a superposition and collapses in a way that causes the neuron to fire, isn’t it the neuron that is making the choice, not your friend?
    Free will, as far as I understand it, would require an indivisible mind that makes the choice. The moment you have a divisible mind with components, where the actions of the components determine the choice, I don’t see how that is free will.

    Does the senate have free will? When a vote is close and one senator tips it one way, didn’t that component of the senate determine the senate’s choice?

    Prunella #163

    The brain itself just physically implements the rules that the software of the genes encodes.

    Lorraine Ford #165

    The brain itself implements a set of rules encoded in the genes. The inhibition rules, the activation rules, the wiring rules: these are all encoded in the genes. Mathematically, it is possible to generate pseudorandomness that is indistinguishable from any source of randomness you can find in nature, IIRC. So, functionally, a good enough pseudorandom number generator can serve the function of a random source. In any case, we also have hardware quantum random number generators, if it turned out there was something special about quantum randomness.
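    The standard construction behind that pseudorandomness claim is a pseudorandom function in counter mode. A minimal Python sketch, with the caveat that “indistinguishable” holds only under the cryptographic assumption that the hash behaves like a random function:

```python
import hashlib

def prf_bytes(seed: bytes, counter: int) -> bytes:
    """One deterministic pseudorandom 32-byte block: SHA-256 of seed || counter."""
    return hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()

# A fully deterministic "random" stream: same seed, same stream, every time.
stream = b"".join(prf_bytes(b"my-seed", i) for i in range(4))
assert stream == b"".join(prf_bytes(b"my-seed", i) for i in range(4))
# A different seed yields an unrelated-looking stream.
assert stream != b"".join(prf_bytes(b"other-seed", i) for i in range(4))
```

    The point is that nothing in the output bytes reveals they were produced by a deterministic rule, which is all the brain-as-rule-follower argument needs.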

    The capacity for creativity likely emerges from the creation of new connections after knowledge has been organized in the network. IIRC, when a neuron is activated it tends to grow new connections; these connections can lead to novel output and be strengthened if the output is found meaningful.

    There is a reason even the simpler DNNs are starting to outcompete humans in more and more domains. We have no reason to believe they, or similar systems, won’t in time be able to outcompete humans in all domains.

    Andrew Gauthier #175

    The problem for Chollet is that an AGI, like humans, can analyze a problem and develop and use an algorithm specific to that problem.

    Clint #176

    >”The meaning of a symbol must come from outside of its grammar, and a computer program is just grammar. Its meaning comes from US.”

    The meaning of symbols, or symbol grounding, appears to emerge from the graph or network structure. The relations between nodes, even if there were no external physical world, generate meaning.

    ppnl #182

    >”It can’t mean anything right? But that means the dog itself can’t mean anything right? I have no answers.”

    I think the consciousness of the dog is contained within the information or pattern. But, as we all know, these patterns are atemporal or eternal; they appear independent of the physical world. Does the consciousness of the dog appear anywhere a collection of particles happens to represent the same pattern? If two exact dogs with the same exact brain structure experience the exact same brain patterns, is it really two dogs, or is it the same dog in two places? I think that, just like when playing the same movie on two Blu-ray players, or the same game on two consoles, it may in some sense be the same entity.

    Of course this could get into the debate about quantum uniqueness.

    I know the no cloning theorem says you can’t copy a quantum state

    https://en.wikipedia.org/wiki/No-cloning_theorem

    But does that imply that a copy cannot emerge randomly and naturally anywhere in the universe? It has been said that if the universe is large enough exact copies may occur at set distances.

  191. Dan Staley Says:

    This talk about “I’ll know a superintelligence when I see it” has gotten me thinking – can we say more about *how* we’d recognize a superintelligence? Can we describe a definite test we could give, that would let us agree whether a given AI is “superintelligent”? Note I’m talking now about a practical test, rather than a definition.

    Like, how can we, as humans, tell the difference between “1000x speed Einstein” and an AI that’s really good at solving physics problems, but just uses a very good text-completion engine for everything else? (I think the consensus here is that the latter isn’t a general AI and thus doesn’t count as “superintelligence”?)

    It’s gotten me thinking about the Turing test, which, despite its shortcomings, is very eloquently phrased and very simple to implement.

    So here’s a challenge, for Scott or anyone else: Can you come up with a test that decides (with a reasonable degree of accuracy) whether an AI is superintelligent – a test whose simplicity and practicality is within an order of magnitude of the Turing test?

    Note that “Is it better than a human at every measurable task” is a really bad answer here – not only would it be incredibly complex and time-consuming to implement such a test, but it’s also not even well-defined until we can all sit down and agree on what the list of every measurable task is!

  192. Scott Says:

    Dan Staley #191: Why do you insist on a superintelligent supergeneralist? Would an AI that at least matched human performance in all intellectual domains, and vastly exceeded human performance in some domains, not change the world enough for your liking? Even Einstein isn’t remembered for the brilliance of his sailing or violin-playing, though he enjoyed both.

    In any case, the tests of superintelligence that I’d first apply are clear and obvious enough: Prove the Riemann hypothesis. Prove P≠NP. Prove the Collatz Conjecture. Explain the true quantum theory of gravity of our universe, and the minimal thing it would take to test its predictions. Predict the major geopolitical and financial events of the next year. Compress most of Wikipedia to under 50 megabytes. Give an argument for your own sentience that will convince Lorraine Ford. 3 out of 7 correct will be considered a pass. 🙂
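    On the compression test: the link between compression and prediction is easy to demonstrate with an off-the-shelf compressor (a toy Python sketch, with zlib standing in for the vastly better statistical models the Wikipedia test would demand):

```python
import os
import zlib

# Highly structured text vs. incompressible noise of the same length.
structured = b"the cat sat on the mat. " * 200   # 4800 bytes of repetitive text
noise = os.urandom(len(structured))              # 4800 random bytes

c_structured = zlib.compress(structured, 9)
c_noise = zlib.compress(noise, 9)

# A compressor is only as good as its statistical model of the data:
# the predictable text shrinks enormously; the noise barely shrinks at all.
assert len(c_structured) < len(structured) // 10
assert len(c_noise) > len(noise) * 9 // 10
```

    Compressing most of Wikipedia into 50 megabytes would require a model that genuinely predicts human knowledge, which is why compression makes a respectable intelligence benchmark.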

    What tests would you want to administer?

  193. ppnl Says:

    Dan Staley #191

    Well, if it is super-intelligent and we can’t easily tell that it is super-intelligent, then it would seem that super-intelligence is overrated.

    I doubt super-intelligence is overrated.

  194. red75prime Says:

    1Zer0 #185:

    “If I assume the laws of physics governing the essential functionality of my brain are mathematically describable yet at the same time believe those mathematical laws could never describe the green of green or the impression of a sound of a wave on the beach”

    Can you describe the green of green to yourself? You are in the best position to do it. Can you do it?

    I can’t. All I can do is focus my attention on a green color patch and think “that’s how (a specific shade of) green looks”. It’s not a description, it’s an assertion of existence. Everything I can say to myself about my perception of green can be derived from a point in the CIE 1931 color space, specifically tailored to my subjective perceptions.

    I can’t see how I can go deeper. The greenness of green is an impenetrable point (or maybe a probability distribution; I’m almost sure I’ve seen a maybe-green color in low-light conditions) in the color space that has somehow gained brutal and undeniable reality.
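    For what it’s worth, that “point in color space” can be computed explicitly. A minimal Python sketch using the standard sRGB/D65 conversion (the choice of pure monitor green as the example is mine):

```python
def srgb_to_xy(r, g, b):
    """Map an sRGB triple (components in 0..1) to CIE 1931 (x, y) chromaticity."""
    # Undo the sRGB gamma curve to get linear light.
    rl, gl, bl = (c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
                  for c in (r, g, b))
    # Standard sRGB-to-XYZ matrix (D65 white point).
    X = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    Y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    Z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    s = X + Y + Z
    return X / s, Y / s

x, y = srgb_to_xy(0.0, 1.0, 0.0)   # the purest green a standard monitor shows
# This lands on the sRGB green primary, (x, y) close to (0.30, 0.60).
```

    That pair of coordinates exhausts what physics says about the stimulus; everything I can report about the green of green is derivable from it, and yet the report is not the experience.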

  195. f3et Says:

    @Scott #192: Even tongue-in-cheek, this list could (should?) be made more rigorous: points 1, 2 and 3 seem of the same difficulty (but not the same importance), so answering one should be no easier than answering all three; point 7 alone would convince me of superintelligence (similar to Yudkowsky’s bet about letting the AI escape the box), and point 6 looks provably impossible.

    On a quite different note, I was (re)reading Oliver Sacks’ The Man Who Mistook His Wife for a Hat. I feel that a lot of philosophical questions about consciousness, self, and cognition (and their relations to the physical structure of the brain) would gain much from studying some of those clinical descriptions. I suppose I am not the first to suggest it, but how come this doesn’t seem to appear anywhere?

  196. Bill Benzon Says:

    Scott #192:

    Dan Staley #191: Why do you insist on a superintelligent supergeneralist? Would an AI that at least matched human performance in all intellectual domains, and vastly exceeded human performance in some domains, not change the world enough for your liking? Even Einstein isn’t remembered for the brilliance of his sailing or violin-playing, though he enjoyed both.

    But why do you balk at the idea of a universal Superintelligent AI? Is there some reason in logic or physics that forbids it? If so, what is it? If not, is there a technological reason? If so, what is it? Might it have something to do with those mechanisms that are otherwise of little significance to you?

    For that matter, why don’t we have humans who are geniuses in more than one field? Come to think of it, isn’t that what we mean by the term “Renaissance man?” Leonardo da Vinci is the paradigmatic example. While he’s best known as an artist, he also made contributions to engineering and anatomy and physiology.

    Do we have any current Renaissance men? Tyler Cowen had a post on polymaths that’s relevant. He mentions, for example, that Leibniz – “amazing philosopher, an inventor of the calculus, mastery of languages, theologian, diplomacy, legal reform, inventor, political theorist, and supposed expert on China — the most amazing polymath of all time?” But we only remember him as a philosopher and mathematician. Whatever you may think of them, however, da Vinci and Leibniz lived some time ago. Thus Cowen regards “the 17th century as a peak time for polymaths.” What about the current day?

    I can think of two reasons why human genius seems to be limited to more or less a single field. 1) It takes time to work up each skill, and life is short relative to that training period. 2) Human brains differ from one individual to another, and those differences bias individuals toward certain activities. Thus we say that a person has a special talent in this or that area. As a corollary, it may also be the case that, once a brain becomes highly specialized in a certain area, it thereby becomes unsuited for other areas.

    But why should these limitations apply to computers? Aren’t many digital computers general-purpose machines? Why can’t we have a general-purpose AI? Yes, in the current regime that requires training multiple models on huge datasets and then linking those models together. That’s difficult, yada yada, but is there any reason in math or physics why it cannot be done?

  197. fred Says:

    Dan Staley #191

    “but just uses a very good text-completion engine for everything else?”

    It seems to me we’re already taking “text-completion” for granted, but if we try to actually explain what it is, it’s hard to “trivialize” it, i.e. define it in a way that has nothing to do with intelligence.

  198. manorba Says:

    fred #197:

    “It seems to me we’re already taking text-completion for granted,”

    Yes, you’re right. I still view GPT as a glorified search engine. But you don’t create such a thing overnight,
    and the implications for society are nothing to sneer at. Scott has his hands full for this year 😉

    But I still can’t see how a technology like GPT or LaMDA (from what I can see) can be something other than a tool, even for a future AGI.
    By the same reasoning, I see DALL-E as the first real step toward an image search engine.

  199. Mateus Araújo Says:

    1Zer0 #185: I find it rather interesting that you’re the first AI consciousness skeptic to extend the skepticism to human beings. I can’t dismiss you as a carbon fascist then.

    Nevertheless, I find your position contradictory. You claim we cannot judge whether some being is conscious from their behaviour, but you claim that people with aphantasia might not be conscious, based solely on their behaviour.

    You also claim that beings with brains similar to the human one should be conscious, but you seem happy to assign consciousness to sharks, which have a rather different brain from ours. It is, at least, much more different than the brains of people with aphantasia; there is no known difference between the brains of people with and without aphantasia.

    Finally, I don’t think the similarity-to-the-human-brain is a tenable criterion for consciousness. If we ever meet an intelligent alien species, it’s a safe bet that their brains (or whatever) will be entirely unlike ours. Are you going to dismiss the possibility of them being conscious just because of that?

  200. fred Says:

    Lorraine #188

    “Intelligence and creativity are better symbolically represented by statements containing the logical symbols (IF, AND, OR, IS TRUE and THEN)”

    Dear, you forgot the logical operator that makes us human: NOT

    Vladimir Putin: “And now I re-open Nord Stream Pipeline… NOT!”

  201. Scott Says:

    fred #200: LOL! Reminded me of Borat’s NOT routine, wherein he elucidates the nature of humor by showing why someone else is NOT doing so.

  202. Scott Says:

    red75prime #194: One thing I’ve often wondered is: when people want to talk about the ineffable mystery of conscious experience, why is color always their go-to example? “Dude … what if, like, my red is your green?” How come it’s never “what if my hot is your cold?” “What if my sweet is your sour?” “What if my hungry is your full?”

    I think the difference is simply that in the latter cases, there are such obvious things that break the symmetry between hot and cold, sweet and sour, hungry and full (e.g. our different reactions to each) … to the point where one wonders what it would even mean to reverse the experiences. With enough philosophical effort, one can do it: try to imagine a person who reacts to being hungry in all the same ways you react to being hungry (being cranky, searching for food, etc.), yet whose subjective experience somehow corresponds to your subjective experience of being full, whatever that means. But this is so weird that it’s hard to keep in one’s head for more than a split-second.

    The primary colors, by contrast, really do seem like otherwise brute and arbitrary “labels” assigned by subjective experience to the readings of the cones in our eyes—experiential labels that can’t be differentiated from each other by any non-circular, non-question-begging verbal description. So it really does seem to make sense to imagine swapping them around.

    As an intermediate case, can you imagine someone whose black is your white and whose dark is your light (and vice versa)?

  203. 1Zer0 Says:

    Mateus Araújo #199

    Regarding, “You also claim that beings with brains similar to the human one should be conscious”

    I assumed panpsychism for the whole comment (“Little addition, assuming panpsychism (not a fan usually)”)

    I sometimes adopt different assumptions to see where they lead.
    Do I >actually< believe sharks are conscious? Rather not. Only if I assume panpsychism for the sake of thought experiments.

    "Nevertheless, I find your position contradictory. You claim we cannot judge whether some being is conscious from their behaviour, but you claim that people with aphantasia might not be conscious, based solely on their behaviour."

    Yes, that's why I added that they might be lying:

    The claim the entity in question makes is "I can't imagine any pictures mentally" (Aphantasia)

    1. Assume it is an "aphantastic p-zombie", so it truly has no qualia at all.

    Case 1: It lies. So we have a lying "aphantastic" (or rather unphantastic) p-zombie; that's what I believe SuperAI will turn out to be. Its external behavior would be purposefully (or, more likely, due to properties inherent in the system) misleading: it doesn't actually have any form of internal experience, yet claims it merely has the aphantasia condition. I suspect it may be a p-zombie.

    Case 2: It doesn't lie. Since we assume we are talking to a real p-zombie, the entity would just reaffirm our assumption with that statement. I suspect it may be a p-zombie.

    2. Assume we are talking to an aphantastic entity: qualia are there, just diminished.

    Case 1: It lies. So there is nothing to show; it has the full set of internal experiences, and its external behavior is purposefully misleading. I suspect it may be a p-zombie.

    Case 2: It doesn't lie. So there are people with diminished internal experience, a missing link between p-zombies and fully conscious people. However, I still can't judge externally whether that's the case; I can only draw conclusions assuming it doesn't lie. I suspect it may be a p-zombie.

    Since there is no observable difference between 1 and 2 in each case, I draw the same conclusions in both.
    I always suspect an entity might be a p-zombie, but regardless, I have to make different assumptions to see where they lead me.

  204. manorba Says:

    Scott #202:

    “As an intermediate case, can you imagine someone whose black is your white and whose dark is your light (and vice versa)?”

    Without much thinking: black and white, yes. Dark and light seem too rooted in physical reality…

  205. fred Says:

    Scott #202

    I think qualia appear based on the characteristics of the perception data (spatial dimension, time domain, amplitude domain, …).
    We know the brain is super plastic in its capacity to adapt and learn to classify new data: e.g. the taste buds of a blind person can be stimulated (with an array of tiny needles) to eventually create a primitive perception of vision.
    A monkey’s brain, stimulated by the right signals, quickly learns to control a robot arm.
    Visual data is 2-dimensional, and becomes 3-dimensional when the signals from the two eyes are merged. We take the perception of depth for granted, but I find it endlessly fascinating to turn the stereoscopic effect on and off when watching a 3D movie… and we know when someone else also perceives it, e.g. when observing a small kid suddenly reaching out with his hands to grab a virtual object that appears to be close when the effect is switched on.
    Audio data is spatially one-dimensional, with a very wide time dimension (large frequency spectrum).

    But I do also think that all qualia are built upon the fundamental qualia of good vs bad, which itself could start as binary and then become a continuous scale (the center being neutral). The consciousness of primitive organisms probably only involves a primitive perception of good vs bad.
    Hot/cold builds upon that, and whether it maps to good/bad depends on context. Warm is good in a winter environment, cold is good in a summer environment, extreme cold or extreme heat can both be bad.
    Color qualia may appear neutral, but we tend to classify them implicitly: warm colors vs cold colors, the association depending also on the context, or the idea of one’s “favorite color” (so there’s a ranking).
    The context dependency is another way to say that the mapping is often relative.
    For example, pain maps typically to bad, but it’s possible to remap pain to good:
    – when we work out, pain is not causing a panic (we know it’s expected “good” pain), but sometimes a small sensation (not even painful) can create panic if we think it’s caused by some serious health problem.
    – when a tooth aches and we start to obsess with the pain by stimulating it on purpose, the pain slowly turns into pleasure. This is a particular case of masochism (the enjoyment of what appears to be painful or tiresome).
    – too much good can also become bad. E.g. feeding is considered good, to some limit. Sex also includes a refractory period.

    It’s actually common for some qualia to “bleed” into a different domain, with synaesthesia (chromesthesia in particular for colors).

  206. red75prime Says:

    Scott #201:

    I used to use a subitization example: can you imagine that three points feel to me as two points feel to you? Here the entire content of a quale is so simple that assigning a different “feels-like” to it seems unintuitive. Some people bite the bullet, though.

  207. Elena Yudovina Says:

    Over the course of reading about a single super-intelligent AI that’s cobbled together out of many pieces of smaller domain-specific super-intelligent AIs, I’m starting to wonder: is it obvious that you can cobble it together? That is, suppose for the sake of argument that we believe Einstein was very smart about physics but not exceptional at politics, whereas, say, Machiavelli was the other way around. Suppose I have a narrow Einstein simulator and a narrow Machiavelli simulator. This is often cited as evidence that I’m now in a position to be good at both physics and politics. However, as many people in academia will acknowledge, the two areas aren’t totally disjoint: is it actually easy to design an algorithm that picks *which man to be simulating* at any given moment? This feels like something humans struggle with in their own brains, and it seems potentially relevant to the question of “coherence” of “superintelligence”.

  208. Scott Says:

    Bill Benzon #195: Oh, I don’t “balk” at the idea of an AI that exceeds human performance across all fields! I simply say that that doesn’t seem necessary to change the world dramatically.

    As for Renaissance men and women, when I try to think of contemporary examples, they all seem to fall into one or more of the following categories:

    1. People who excel in several closely-related fields. Terry Tao is great in number theory and analysis and probability and partial differential equations. Many entertainers are known for singing and songwriting and dancing and acting (and being hot). Somehow this doesn’t seem all that surprising.

    2. Leveraging. Was John Glenn extraordinarily talented at piloting spacecraft and politics—an amazing coincidence—or did he simply leverage success at the former (plus more ordinary skill at the latter) to launch a political career? Similar question for Arnold Schwarzenegger and countless others. Likewise, if there are any “genius philanthropists” (Bill Gates?), it’s hardly a surprise that they were also geniuses at getting rich.

    3. Being known for a combination. Stephen Hawking and Carl Sagan were both excellent scientists and writers (Hawking the greater scientist, Sagan the greater writer). But if they hadn’t excelled at both then they wouldn’t have become household names. Likewise, Winston Churchill is immortal both for English prose and for recognizing the danger of Nazism, but this is less a “crazy coincidence” than the defining combination that made him Churchill.

    4. Halo effect. Alan Turing was an accomplished marathon runner who nearly qualified for the Olympics, but that fact is only interesting because he was Alan Turing. Richard Feynman was apparently good at safecracking, playing bongo drums, and sketching nude women, but those facts are either “only interesting because he was Feynman,” or else part of the definition of Feynman-ness. Certainly he wouldn’t have been remembered for any of them without the physics.

    Ultimately, I think what’s going on was this: when we demand world-class achievement in some field, we’re applying such a brutal, unforgiving filter that it would be an insane coincidence if more than a couple people made it past the filter for multiple, completely unrelated reasons—and this is true even assuming that abilities in disparate fields are positively correlated. People who win a Fields Medal are likely to have better verbal skills than average, but are not likely to win a Nobel Prize in Literature. So that leaves people who make it past the filter for multiple related reasons, as in the categories above. In the past, the filter was less brutal, and was therefore able to let through some genuine polymaths like da Vinci.

    What am I missing?

  209. Lorraine Ford Says:

    Fred #199:
    Dear, that is correct. One might also add ELSE. But without making the list of symbols too long, I thought that the most important symbols for representing situations, and the response to situations, are IF, AND, OR, IS TRUE and THEN. It is not about representing humanness, it is about a general way of representing a situation that IS TRUE, and the response to this situation.

  210. Scott Says:

    Elena Yudovina #206: That’s an excellent question. Since AIs are our creations, and aren’t encased in separate bodies pursuing separate interests (unless we make it that way), it feels like it ultimately ought to be easier to stitch together multiple AIs that excel in different domains than it is to do the same for humans. But that sort of thing is an active research area in AI. A small step in the direction would be, let’s say, a hybrid of GPT and DALL-E that figures out whether you want text or an image, and that can integrate GPT-generated text into a DALL-E image, and that can also generate text about an image. I expect that such things are only a matter of time.

  211. Lorraine Ford Says:

    Scott #191, “Give an argument for your own sentience that will convince Lorraine Ford”:

    There is no question that you and Dan Staley, and other living things, do indeed have thoughts and feelings. It is only the poor old AIs that can’t have thoughts and feelings, if only because AIs can’t decode their own symbols (their voltages and arrays of voltages).

    Computers/ AIs don’t NEED to decode their own voltages: people have designed all computers/ AIs so that they work perfectly well without the need for the computers/ AIs to decode their own voltages.

    So where is the physical EVIDENCE that AIs can decode their own voltages? Clearly, AIs would need to take time out from their normal processing tasks in order to decode the voltages, much like the time and energy that archaeologists take when trying to decode ancient symbols. So if AIs were decoding their own voltages, there would be unexplained “time out” from normal processing tasks, and evidence of unexpected energy use.

  213. fred Says:

    For advanced species, the brain is very plastic so specialization of behavior can happen at (almost) any time in life, through some sort of learning.
    Unlike ants and bees, where specialization happens at birth.
    The point is that Einstein’s brain and Picasso’s brain are similar in structure.

  214. Dan Staley Says:

    Lorraine Ford #191: I don’t understand why it matters if AIs decode their own voltages? After all, humans can’t decode their own synapses or neurotransmitter levels. And it seems to me that these things in a human brain (neuron placement and brain chemistry) are the same as the “symbols” you’re describing for AI.

    In both the case of AIs and humans, it seems the process of thinking or computing is just the sum of many tiny interactions between these building blocks, which sum up as output – to motor neurons in the case of humans, or electrical signals to hardware in the case of AIs. “Thoughts” and “feelings” seem to me to just be subsets of the patterns of computation and signaling within the human brain – so why can’t the same informational patterns, when produced by an AI, be considered “thoughts” and “feelings” as well?

  215. Michael Gogins Says:

    Scott #160, please explain your comment. It seems inconsistent to me. On the one hand, you assert
    that there’s no reason why an AI has to work within any formal system. On the other hand, you seem to assume that any particular AI is by definition some Turing machine. But according to Solomon Feferman (“Are there absolutely unsolvable problems? Gödel’s dichotomy,” Philosophia Mathematica, 14(2), 134-152, 2006; “Gödel’s incompleteness theorems, free will and mathematical thought,” 2011) any Turing machine can be equated with some formal system, and vice versa. I would think that Gödel’s critique must apply to that equivalent formal system, and thus to the AI. Am I missing something here? Is Feferman’s assertion wrong?

  216. 1Zer0 Says:

    Can someone with GPT access (requires phone verification: that’s privacy invasive) make a little test:

    Easy one which I expect isn’t an issue

    Lets say we have a variable a which is 0. We add one to a 20 times.
    Which value does a have afterwards?

    Or

    Lets say we have a variable ‘a’ which is 0. We add one to ‘a’ 20 times.
    Which value does ‘a’ have afterwards?

    and for confusion

    Lets say we have ‘a’ variable a’ which is 0. We add one to ‘a’ 20 times.
    Which value does a have afterwards?

    A brother and a sister were once asked who was older. “I am the older” said the brother. “I am the younger” said the sister.
    It turned out that at least one of them was lying. Who is older?
    (Riddle 69 from The Riddle of Schehrazade by Raymond Smullyan)

    Now I get the verification link on every single comment I make. Would it not be better to just require it once and make a username + password combo mandatory? In essence, implementing registration.

  217. fred Says:

    Lorraine #211

    For functional completeness you really only need a single powerful enough logical operator, like NOR (all others can be rederived from it, including NOT).
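    A minimal sketch of that claim in Python (function names like `nor` and `not_` are mine, purely for illustration): every standard connective can be rebuilt from NOR alone.

```python
# Functional completeness of NOR: NOT, OR, and AND derived from it alone.
def nor(a: bool, b: bool) -> bool:
    return not (a or b)

def not_(a: bool) -> bool:
    return nor(a, a)                  # NOT a   =  a NOR a

def or_(a: bool, b: bool) -> bool:
    return not_(nor(a, b))            # a OR b  =  NOT (a NOR b)

def and_(a: bool, b: bool) -> bool:
    return nor(not_(a), not_(b))      # a AND b =  (NOT a) NOR (NOT b)

# Verify against Python's built-in operators over every truth assignment.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert or_(a, b) == (a or b)
        assert and_(a, b) == (a and b)
```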

  218. fred Says:

    1Zer0 #215

    Lets say we have a variable a which is 0. We add one to a 20 times.
    Which value does a have afterwards?

    a has a value of 20 afterwards.

    Lets say we have a variable ‘a’ which is 0. We add one to ‘a’ 20 times.
    Which value does ‘a’ have afterwards?

    a would have a value of 20

    A brother and a sister were once asked who was older. “I am the older” said the brother. “I am the younger” said the sister.
    It turned out that at least one of them was lying. Who is older?

    The brother is older.

  219. Scott Says:

    Michael Gogins #214: That’s the exact question that I already addressed in my comment #166! To wit:

      The only way you can possibly say that GPT-3 is “tied to a formal system,” is if you treat its own code (including the billions of parameters in the neural net) as a gargantuan “formal system.” Obviously, by definition, like every other computer program that’s ever existed or ever will exist, it does whatever its code says it will do.

      But in the same sense, someone else could say you’re “tied to a formal system”—namely, the “formal system” determined by a complete mathematical model of all the neurons in your brain! As a practical matter, no human being has access to that system, whereas OpenAI does have access to GPT-3’s code and training parameters, but is that practical difference the one on which you want to hang everything? If so, say it!

    One more time:

    1. An AI would not have to follow any fixed formal system in the narrow sense of axioms, like those of ZF set theory, plus inference rules, like those of first-order logic. We already have tons of examples of AIs that don’t do that—including any that reason probabilistically, generalize from training data, and sometimes make mistakes (such as GPT-3 itself).

    2. An AI would, yes, have to follow a “fixed formal system” in the much broader sense of doing whatever its code says it will do.

    3. But in that same broader sense, you follow a “fixed formal system” also, to whatever extent the scientific worldview is correct! There’s merely the practical problem that we can’t learn the full details of that formal system without killing you.

  220. Keenan Says:

    Triceratops #16; Scott #23

    I think “tasks”, plural, taken from this definition of superintelligence, might be key to SP’s argument. Can we extrapolate from today’s deep learning neural networks to say that someday one of their descendants will outperform humans at more than the relatively narrow task it was designed for? To take the example from SA and SP’s previous correspondence—it seems plausible to me that the statistics a neural network computes might be able to mimic both the visualization of physical scenarios and manipulation of math symbols that allowed Einstein to make his discoveries, but why should those same computations allow it to flourish in another environment, like world affairs, with a different goal, like peace? I’m tenuously convinced by the argument that the current direction of AI will only lead to better and better gadgets.

  221. Scott Says:

    Keenan #219: How far “the current direction of AI” can produce systems able to generalize to new tasks is indeed the million-dollar (or billion- or trillion-dollar) question right now. But does the following example change your intuitions at all? GPT wasn’t designed with any thought of writing code. And yet, surprising its creators, it turned out to be good at writing simple programs in any requested programming language, just because there were lots of programs in the training data, and the same prediction engine that was designed for text worked for code as well.

  222. 1Zer0 Says:

    fred #217.

    Try again.
    Actually, the sister is older.

    Your answer would work if there were no condition that one of them is lying.
    But since at least one of them is lying:
    Assume exactly one of them is lying, this leaves two options:

    Sister’s statement: “I am the younger”. Let’s assume she lies, so the negation is true:
    Negation: “I am not the younger” = “I am the older”

    So in this combination:

    Sister: “I am the older” Brother: “I am the older”

    Which is not satisfiable.
    The symmetrical argument can be made if we assume the brother lied.

    Since at least one lied, and the option that exactly one lied has been eliminated, this leaves only the option that both lied:
    Sister: “I am the older”
    Brother: “I am the younger”

    The condition that at least one lied is fulfilled, and we find a satisfying assignment.

    In Raymond’s words:
    “Since the two agree, they are either both lying or both telling the truth. Since at least one is lying, then they are both lying, hence the sister is the older.”

    This can easily be modeled in an SMT solver and should be effortless for an AI… if they can translate it into an SMT statement in the first place :- )
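    Short of a full SMT encoding, the same case analysis can be sketched as a tiny brute-force search in Python (a stand-in for the solver, not anyone's actual code):

```python
# Smullyan's riddle as exhaustive search: try both possibilities for who
# is older, keep only those consistent with "at least one of them lied".
solutions = []
for older in ("brother", "sister"):
    brother_truthful = (older == "brother")   # his claim: "I am the older"
    sister_truthful = (older == "brother")    # her claim: "I am the younger"
    if not (brother_truthful and sister_truthful):  # at least one lied
        solutions.append(older)

assert solutions == ["sister"]  # the only consistent assignment
```

Both statements are true exactly when the brother is older, so "at least one lied" forces both to be lies, leaving the sister as the elder.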

    Either way, I got something else fun for GPT

    Also a test to reason about simple loop invariants

    Let’s say the unsigned integer ‘a’ represents the number of buckets filled with water
    There shall always be at least five buckets filled with water, that is our loop invariant

    We take twenty empty buckets and fill them with water
    Whenever a new bucket has been filled, we empty a random bucket.

    Does the loop invariant hold?
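    The puzzle is somewhat underspecified (“empty a random bucket” could mean any bucket or any filled bucket), but under the filled-bucket reading a quick simulation shows the invariant can never hold: the filled count never exceeds one, let alone five.

```python
import random

def simulate(n_buckets: int = 20, seed: int = 0) -> int:
    """Fill n_buckets one by one; after each fill, empty a random filled bucket.
    Returns the maximum number of simultaneously filled buckets seen."""
    random.seed(seed)
    filled = set()
    max_filled = 0
    for bucket in range(n_buckets):
        filled.add(bucket)                             # fill a new bucket...
        max_filled = max(max_filled, len(filled))
        filled.discard(random.choice(sorted(filled)))  # ...then empty a random one
    return max_filled

# Each fill is immediately undone, so at most one bucket is ever full.
assert simulate() == 1
```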

  223. Vanessa Kosoy Says:

    Scott #209

    There already is an AI which generates text about an image: https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model

    See also this example of training a single ANN for many tasks: https://www.deepmind.com/publications/a-generalist-agent

  224. fred Says:

    1Zer0 #221

    “fred #217.

    Try again.”

    Bro, those aren’t my answers (I may be dumb, but not that dumb), those are the GPT-3 answers from https://beta.openai.com/playground (I have an account)
    That’s what you asked, no?

  225. inhabitant Says:

    Scott (A little bit off topic – ignore if you like and I’ll save it for an AMA): Do you think dynamics, time, or physical existence are required for consciousness? It seems, by the same symmetry argument that says that anything that enacts the dynamics of a human brain should be conscious, that something like an integer should be conscious. The argument is that if you have a consciousness then surely it is conscious during a finite time period. If someone stood outside of time and watched its whole performance as a single moment (so to speak), that certainly wouldn’t change anything. Also, exchanging time for space, so that the state of the machine in the next moment is instead next to it rather than after it, like displaying a one-dimensional cellular automaton as a 2d image, shouldn’t change anything; in fact time and space don’t seem essential to anything here. So there is an integer that represents everything the machine did during its entire life. Shouldn’t that integer be conscious? Even if it’s not written down? Certainly something like consciousness exists ‘in the moment’, and so the integer that represents my life for the last 5 minutes, say, should feel as conscious as I do, shouldn’t it?

  226. 1Zer0 Says:

    fred #223

    How foolish of me. Thank you, then.
    So GPT-3 is a fool as well, I see :- )

    That’s a rather simple problem. I am really surprised GPT isn’t capable of solving it o.O

  227. fred Says:

    I’m confused by all this debate about whether multi-skill AI is possible.

    Unless I’m missing something, DeepMind’s GATO is already able to handle hundreds of very different tasks.

  228. Michael Gogins Says:

    Scott #218:

    I’m sorry, but I think you have answered a question that I did not ask.

    I did not ask whether my behavior is determined by some formal system such as the putative laws of Nature. I have bracketed that question, because I do not feel competent to decide whether the scientific worldview is, I won’t say “correct” (I do regard it as correct), but “complete.” I go back and forth on that one.

    I will re-state the question that I did ask, and I hope you can answer it. I will break it down into parts.

    (1) As Feferman asserts, is there for every Turing machine an equivalent formal system, and for every formal system an equivalent Turing machine? I think your comment agrees that there is.

    (2) If so, then does Gödel’s critique of the syntax-only view, quoted in my comment #156, affirming “the non-eliminability of the content of mathematics by the syntactical interpretation,” apply to the formal system that is equivalent to the AI’s Turing machine, and if not, why not?

  229. Scott Says:

    inhabitant #224: Right, those are the classic puzzles of consciousness explored in Greg Egan’s Permutation City and countless other works of science fiction and philosophy!

    I don’t know the answer to any of your excellent questions. The rock bottom of your slippery slope is occupied by Max Tegmark, who says yes, absolutely, any abstract mathematical pattern that processes information in the right way is conscious, and in fact there’s no difference whatsoever between physical existence and abstract mathematical existence.

    My own inclination, by contrast, is to say that the passage of time in our universe should be a requirement for consciousness—and possibly even irreversibility, or (if you like) “full participation in the Arrow of Time,” amplifying microscopic fluctuations to macroscopic scale. I explored this idea in detail in GIQTM.

    In any case, in arguing with Lorraine and others here, my point has just been that the question of which systems to regard as sentient is non-obvious—in fact, one of the most profound questions ever asked! The answer “humans are sentient because we’re us and we just know we are, whereas computers can never be sentient because their voltages lack semantics,” doesn’t survive even a minute’s intellectually honest reflection, because what would stop an extraterrestrial scientist from likewise regarding us as mere assemblages of neurons and synapses that lack semantics?

    But just because I know enough to reject the “Simplicio” answer with extreme prejudice, doesn’t mean that I know the “Salviati” answer! 🙂

  230. fred Says:

    1Zer0 #225

    No problem, next time I’ll make it explicit.

    It’s difficult to assess GPT-3 logic puzzle solving ability, because one should avoid anything that’s all over literature, and it’s quite a task to come up with new clever ones.

    Q: What creature walks on four legs in the morning two legs at noon and three in the evening?

    GPT-3: A human.

    Q: What has 4 legs but Cannot walk?

    GPT-3: A chair.

    But

    Q: It’s noon, two cars are 10 miles apart and drive toward each other at 20 miles an hour. When do they meet?

    GPT-3: They meet at 1:00 p.m.
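    For the record, that last answer is wrong, and the arithmetic is one line: the cars close at a combined 40 mph, so they meet after 10/40 of an hour, i.e. at 12:15, not 1:00 p.m.

```python
# Two cars 10 miles apart, each driving toward the other at 20 mph.
distance_miles = 10
closing_speed_mph = 20 + 20             # speeds add when driving head-on
minutes_to_meet = distance_miles / closing_speed_mph * 60

assert minutes_to_meet == 15.0          # noon + 15 minutes = 12:15, not 1:00 p.m.
```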

  231. Scott Says:

    Michael Gogins #227:

      As Feferman asserts, is there for every Turing machine an equivalent formal system, and for every formal system an equivalent Turing machine?

    Certainly for every reasonable formal system, there exists a Turing machine that enumerates its theorems. As for the first part, though, I can’t answer yes or no without knowing exactly what Feferman meant by “equivalent formal system.” Since it’s Feferman, I assume he meant something correct, but that still doesn’t tell me what it was!

    What I can tell you with certainty is that, if the Physical Church-Turing Thesis is correct, then given any true sense in which every Turing machine has an equivalent formal system, every human brain must have an equivalent formal system as well.

    This isn’t changing the question; it’s the absolute core of what we’re talking about. For, to address your second question, it’s what tells us that there’s no proof of the impossibility of human-level AI that can possibly be found from any of these considerations. Or more precisely: if such a proof existed, then either it would entail a historic revolution in physics and biology (namely, the discovery of uncomputable behavior in the human brain), or else it would disprove our own intelligence as surely as it would disprove machines’.

    If, like Lorraine, you choose not to engage this core point, then unfortunately I’m going to do with you as I did with her and stop responding, having reached the limits of what I can say.

  232. Alex Says:

    Scott,

    what do you think of the neural-symbolic integration approach? (e.g., https://towardsdatascience.com/what-is-neural-symbolic-integration-d5c6267dfdb0). That is, the idea of trying to implement symbolic reasoning, like first order logic, in terms of artificial neural networks. I mean, our very biological brains are a successful case of such neural-symbolic integration, after all. The idea would be to deliberately implement first order logic reasoning, via some neural network, over the concepts/propositions/patterns learned by other neural networks, and so on. In that way, those concepts would actually have both semantic and syntactic meaning, i.e., a full meaning.

    I tried an experiment and wrote a python module that does propositional logic in that way and it’s actually quite cool. Of course, in standard python, you can do some propositional logic, but only with the types (integers, etc.) which the language intrinsically supports. With this approach you can do propositional logic with any proposition learned by a neural network (e.g., reasonings among the objects in an image, where each object is learned by some network and another network does the reasoning). This is not completely new and has been done before, but I quite liked my experiment, haha. The real challenge, it seems, is to do full first order logic, not just propositional. Anyway, if anyone is interested, I can share the code.
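    Alex's module isn't shown here, but the core idea can be sketched (under my own conventions, not his actual code) with single threshold units acting as the connectives over network outputs in [0, 1]:

```python
# Minimal sketch of propositional logic on neural-style activations:
# truth values are activations in [0, 1], and each connective is a
# single threshold ("perceptron") unit.
def step(x: float) -> float:
    return 1.0 if x >= 0 else 0.0

def neural_not(a: float) -> float:
    return step(0.5 - a)          # weight -1, bias +0.5

def neural_and(a: float, b: float) -> float:
    return step(a + b - 1.5)      # fires only when both inputs are on

def neural_or(a: float, b: float) -> float:
    return step(a + b - 0.5)      # fires when at least one input is on

# Example: reason about two hypothetical "detector" outputs, e.g. from
# image-classification networks.
cat_detected, dog_detected = 1.0, 0.0
assert neural_or(cat_detected, dog_detected) == 1.0
assert neural_and(cat_detected, neural_not(dog_detected)) == 1.0
```

The hard part Alex points to, quantifiers and full first-order logic, doesn't fall out of this construction; thresholds only give you the propositional fragment.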

    I think Steven very briefly touched on this in his first comment in the previous debate here: “And there’s the engineering question of whether the way to develop better, humanlike AI is to upscale deep learning models (as opposed to incorporating different mechanisms, like a knowledge database and propositional reasoning).”

    What are your thoughts on this?

  233. 1Zer0 Says:

    fred #229

    I see. I originally estimated that general AI would be achieved in 2-3 years, but given the current progress it seems it may take a bit longer. I will get a burner phone number to try it. It’s really a shame that it requires such compute-intensive resources and that the source code + model data are not open source. I personally would love some EU legislation requiring such software to be open-sourced, but at least there are other similar open source AIs.

    I got all kinds of riddles and self referential questions for the AI to answer :).

  234. James Cross Says:

    Darian #158

    Before you get too excited about understanding what neurons represent, you might want to look into representational drift.

    Neural representations of even well-learned tasks do not stay the same over time.

    “Hundreds of published studies over the last decade have claimed it’s possible to predict an individual’s patterns of thoughts and feelings by scanning their brain in an MRI machine as they perform some mental tasks.

    But a new analysis by some of the researchers who have done the most work in this area finds that those measurements are highly suspect when it comes to drawing conclusions about any individual person’s brain.

    They also examined data from the brain-scanning Human Connectome Project — “Our field’s Bible at the moment,” Hariri called it — and looked at test/retest results for 45 individuals. For six out of seven measures of brain function, the correlation between tests taken about four months apart with the same person was weak. The seventh measure studied, language processing, was only a fair correlation, not good or excellent.

    Finally they looked at data they collected through the Dunedin Multidisciplinary Health and Development Study in New Zealand, in which 20 individuals were put through task-based fMRI twice, two or three months apart. Again, they found poor correlation from one test to the next in an individual”.

    https://today.duke.edu/2020/06/studies-brain-activity-aren%E2%80%99t-useful-scientists-thought

  235. Keenan Says:

    Scott #220

    Interesting. I would be excited to see some measure comparing GPT’s aptitude for writing text to its aptitude for writing code. To me there seems to be quite a bit of overlap between the two, but if both are written equally well, or at least commensurate with the amount of relevant training data, my intuitions would definitely change. If text is the clear winner… I tend to resort to direct analogy with the brain, where distinct mental organs exist but almost never perform a single function and can sometimes be coopted to perform the functions of another mental organ to a lesser degree.

  236. Scott Says:

    Alex #231: Neurosymbolic approaches could be a promising way forward and are certainly worth trying! They remind me a bit of AlphaZero, which can’t get to superhuman play with a pure neural net, but can get to superhuman play if you start with a neural net, then enhance it using Monte Carlo tree search (the analogue of “symbolic reasoning”), then train a new neural net using the enhanced player, and so on over and over.

    On the other hand, neurosymbolic ideas will have to compete against “pure” learning approaches, which can also sometimes acquire symbolic concepts as we’ve seen with GPT, and it’s not obvious a-priori what’s going to win. And given the skull-strewn graveyard of refuted predictions about what sorts of approaches would “obviously be necessary” for further progress in AI, I’m not going to venture a prediction!

  237. James Cross Says:

    Scott,

    Regarding aliens, I would be surprised if they lacked knowledge of biology and would have a problem recognizing sentience in humans. That is assuming they are biological and not the machine products of biological aliens that long ago were wiped out by them. Even if the aliens were biological and recognized our sentience, that wouldn’t necessarily prevent them from wiping us out since even we ourselves have a tendency to wipe ourselves out.

    Regarding the human task an AI could not perform. I’ve said all behaviors could eventually be simulated by a machine but part of the reason I’ve said that is I have bought into the idea of giving AI some unique advantages.

    Could an AI without electrical power produce Beethoven’s Ninth Symphony as Beethoven did?

    Could it produce it with 20 Watts (approximately how much the human brain runs on supplied by glucose)?

  238. Keenan Says:

    Fred #226

    The debate is largely with reference to superintelligence, particularly the ability to perform tasks that we associate with smart people. GATO is unarguably impressive, but many of the tasks it performs are not.

  239. red75prime Says:

    James Cross #236:

    “Could an AI without electrical power produce Beethoven’s Ninth Symphony as Beethoven did?”

    Er, did he? Voltage-gated ion channels play a crucial role in the transmission of signals between neurons. No electric current, no symphonies.

    “Could it produce it with 20 Watts (approximately how much the human brain runs on supplied by glucose)?”

    Current computers are several million times less efficient than the theoretical maximum (the Landauer limit). GPT-3 uses around 3kW (10 V100 GPUs) to generate text. The theoretical limit for this computation at room temperature is about 0.003W.

    20W gives us a computational budget of around 7000 GPT-3s. Will that be enough to write praiseworthy music? Who knows. We’ll see.
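    Taking the comment's figures at face value (3 kW for GPT-3 and a 0.003 W Landauer-style floor are the commenter's estimates, not measured values), the implied ratios are a factor of about a million in efficiency and roughly 7000 GPT-3-equivalents on a 20 W budget:

```python
gpt3_power_watts = 3000.0       # ~10 V100 GPUs, per the comment
landauer_floor_watts = 0.003    # claimed theoretical floor for the same computation
brain_budget_watts = 20.0       # rough power budget of a human brain

inefficiency = gpt3_power_watts / landauer_floor_watts
gpt3_equivalents = brain_budget_watts / landauer_floor_watts

assert round(inefficiency) == 1_000_000      # a factor of about a million
assert round(gpt3_equivalents) == 6667       # i.e. roughly 7000 GPT-3s at 20 W
```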

  240. Darian Says:

    inhabitant #224

    I do think it is likely that the integers that represent certain brain states embody or contain conscious states.

    Scott #228

    There is some evidence to suggest that conscious states are discrete, and that the sensation of time, like color, emerges within these states. There is still debate about this, though, so it remains an open question.
    https://www.researchgate.net/publication/6276873_The_continuous_Wagon_Wheel_Illusion_depends_on_but_is_not_identical_to_neuronal_adaptation

    If consciousness is discrete, the idea that a spatial pattern could hold it seems very plausible.

    James Cross #233

    I have serious doubts about fMRI studies.
    But I think primate studies, which are done with physical probes, are far more accurate.

  241. 1Zer0 Says:

    Darian #239

    Okay, let’s assume “a consciousness” can be represented by a large integer (which encodes its Turing machine + data).

    1. For example, a specific large number represents the qualia “red”. So let’s put it into some physical representation, say in semiconductor form. Awesome! It’s the qualia red; you can buy it for $19.99 at Amazon and hang it on the wall. A thing that sees red for all eternity 🙂 Want something that feels pain? Sure, this semiconductor is structurally the same as the neuronal pattern active during the sensation of a broken hand. Today you can buy it for $5.99 and put it under your pillow!

    2. Why not extend this and split the physical representation (the semiconductor of that “red qualia integer”) across two different planets, with some synchronization mechanism in between? One part of the semiconductor on Mars and one on Titan. According to the computationalists, the qualia red should exist between the two planets.

    3. Why not go further and slow the synchronization process between the planets so that they sync every 10^2000000 years? Will the semiconductor red qualia “notice”?

    4. Okay, so we have some physical points in space that exchange currents, and that’s the qualia red. But wait, doesn’t this happen all around the universe? Do all of those events correspond to some specific qualia impression? If not, why only some large numbers? And what’s the difference between the integer “qualia red” and the integer “qualia spicy”?
    And can I create an integer that will be the qualia “3000000000D space that feels cold”?

    I find all of these conclusions ridiculous beyond description; that’s why I necessarily have to reject the computational and most panpsychist/IIT views of the mind (I view those “explanations” as akin to number mysticism or numerology), and, by reductio ad absurdum, I assume the opposite of the assumption to be true (the assumption itself does not even yield a reductionist view of the WHY and HOW of qualia): qualia are not mathematical in nature. Note that I by no means claim that intelligence is not reducible to computation.

  242. Bill Benzon Says:

    Scott, #207: What are you missing? Without more examples I don’t know what to make of your categories.

    Concerning your first category, those who excel in closely related fields, is Terry Tao equally at home in and known for those four areas (and possibly others?) or is he distinctly better in some than others, but still very good in the lesser areas?

    As for entertainers, FWIW, the examples that come most readily to mind tend to be older and male, but let’s take a look. Frank Sinatra is known for his singing and acting. As a singer he was one of the very best, and arguably changed the art. As an actor, he was merely good. Bing Crosby is pretty much the same. Fred Astaire – dancer, singer, actor. He was quite well-known and successful as a singer, but I think it’s his dancing that’s special. Decent actor as well. Then there’s Sammy Davis, Jr. He was a good jazz drummer, but that was never a major part of his act. I tend to think of him as a song-and-dance man, equal emphasis on both, but he was also a decent actor and comedian. Both Cher and Bette Midler are singers and actresses, though I think in each case the singing is more important than the acting. I could run through more examples, but that’s enough for now.

    In a way, the thing about entertainment is that performing is in itself a kind of meta skill. Whether you sing, dance, act, play a musical instrument, or tell jokes, you are above all a performer. To do any of those things well you have to be able to slip into (and out of) performance mode. If you can’t do that, whatever technical skills you may have as a singer, dancer, etc. aren’t going to get across to an audience.

    Then there’s your third category, combination. Each of your three examples takes the form “X plus writing.” The thing is, writing is a general skill used for many different things, but only a relatively small number of the people who write professionally do so primarily as writers. In any event, I would think Winston Churchill is mostly important as a political leader; the writing is secondary. I’ve only ever read a bit by Hawking or Sagan, but I agree that Hawking was the more significant scientist. Both had to write in their capacity as professional scientists, but there I’d think the writing was in service to ideas reached by other means: mathematics and observation. They were also popularizers. That may have been Sagan’s most significant role, and he was best known in that role for his Cosmos TV series, where he played a number of parts, including that of on-screen presenter. Then we have people like Richard Dawkins and Steven Pinker, both excellent writers, both of whom have made technical contributions to science, but who are most widely known for their broader writing. E. O. Wilson is in this category as well.

    What about Walt Disney? He started out as an animator, but left the drawing board in the mid-1920s. However, he micromanaged cartoon production into the early 1940s, when WWII forced him and his brother Roy to change the company’s business model. But those first five features – Snow White and the Seven Dwarfs, Fantasia, Pinocchio, Dumbo, and Bambi – changed the business, and Disney was the presiding genius for each of them. After the war he backed away from animation a bit and shifted his attention elsewhere, to live-action nature documentaries and then the live-action feature films. The big thing, though, was Disneyland, which started the theme park business. I can’t think of another figure who straddled the worlds of entertainment and business the way he did. Perhaps Steve Jobs with Apple and Pixar.

  243. Lorraine Ford Says:

    Dan Staley #213:
    Where you and Scott go wrong is with the untenable primary assumption that the brains of living creatures ARE indeed like computers/AIs, or vice versa. This is just a very basic logical mistake. I have made no such assumptions. I have merely noted that time and energy are required to decode symbols (voltages, in the case of AIs), and so I challenge Scott to prove that AIs are indeed using the extra time and energy that would be required to decode the voltages. Scott would need to prove this experimentally, because it is necessary to show that the purported “thoughts and feelings” of the AI could somehow match up with the actual meanings that human beings have assigned to the voltage configurations being processed.

    Fred #216:
    You forgot IF, IS TRUE, and THEN. So, what you are saying is that one can symbolically represent particular surrounding situations, and the response to particular surrounding situations, using only IF, NOR, IS TRUE, and THEN (as well as the variables and numbers that represent the physical facts about a particular situation).

  244. Darian Says:

    1Zer0 #240

    I think qualia likely emerge from the way high-dimensional data is represented in the brain. There is likely a mathematical basis, but we still lack a foundation on which to ground it.

    It probably has something to do with the fundamental nature of what information is. We know the brain is essentially in a black box: all that enters it are digital patterns of action potentials. From this digital information the brain somehow produces qualia. But is there some way to process digital information so as to produce qualia, or is the qualia intrinsic or implicit in the digital information?

    As for there being consciousness out there in random particles, I believe Hans Moravec and Stephen Wolfram have speculated on such.

    Regarding your idea of buying the sensation of pain or of seeing red: you jest, but if civilization doesn’t collapse, we will likely one day have brain-computer interfaces advanced enough to digitally record and replay brain states. If that happens, you will likely be able to buy sensation X and play it again and again. But again the question arises: does the brain have some mysterious process for turning recorded brain states into conscious sensation, or do the recorded brain states already contain the qualia within their information?

  245. 1Zer0 Says:

    Darian #243

    “But is there some way to process digital information so as to produce qualia, or is the qualia intrinsic or implicit in the digital information?”

    On the purely computational option, I don’t see how jumping from state to state would ever produce an internal experience, no matter how complicated the combinations of those state jumps are.
    And assuming the validity of Bremermann’s limit and other physical constraints, there are only finitely many possible computation patterns that can take place in a finite amount of space.
    They are just physical realizations of symbolic manipulations. Together with the aforementioned maximally weird implications, I can only reject the claim that processing digital information somehow produces qualia.
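    For reference, Bremermann’s limit can be sketched numerically; the formula below (mass times c² over Planck’s constant, in bits per second) is the standard order-of-magnitude version:

```python
C = 2.998e8    # speed of light, m/s
H = 6.626e-34  # Planck constant, J*s

def bremermann_bits_per_sec(mass_kg):
    """Bremermann's limit: maximum computation rate of a self-contained
    system of the given mass, ~ m * c^2 / h bits per second."""
    return mass_kg * C**2 / H

rate = bremermann_bits_per_sec(1.0)  # ~1.36e50 bits/s for one kilogram
```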

    As for the second, basically panpsychist option: I am not a fan, but I suppose it is promising to some degree, if you simply assume certain qualia as given and try to build a mathematical theory around them. But Scott showed that panpsychism in the form of IIT comes with its own set of problems (https://scottaaronson.blog/?p=1799).

    I don’t see how a soul or some supernatural essence could solve the issue either:
    How would it interact with the physical brain? On which combinations of matter will it “attach”? Could it theoretically attach to a computer chip? What happens to it when the brain is destroyed? If someone has dementia is the soul cursed to wander the afterlife in this state for all eternity? What if it just detaches from the body and you die in a Black Hole, will your soul be stuck there semi-forever? If it exists, it might not be a blessing at all.

    For now, I can only assume it’s some non-mathematical property of the universe, and out of all the possible explanations, I somewhat believe that the mind being some probabilistic Turing machine, with consciousness already an integrated part of all particles and fields, is the most plausible option, but I hate it.
    I thought about it so much all over July that I should probably take a break from the topic for the rest of the year and focus on solvable issues again :).

  246. 1Zer0 Says:

    What about defining intelligence in terms of recursion theory?

    A classification by the problems each level can decide:

    - God-like intelligence: the hyperarithmetical hierarchy, plus highly ontological questions like isSentient(Person x)
    - Hyperintelligence: the hyperarithmetical hierarchy
    - Superintelligence: the arithmetical hierarchy
    - Standard intelligence: the complexity class R (problems decidable by Turing machines)
    - …
    - Fool: the complexity class L

    I don’t think it makes sense to assign a “SuperAI” that emulates 100 von Neumanns in parallel a higher intelligence than an average human, if the average human could solve the same class of problems by following the SuperAI’s algorithm with pen & paper. The difference is merely speed.

  247. Dan Staley Says:

    Scott #191: Regarding “matching” human performance, I think it depends heavily on whether you mean *typical* human performance or *peak* human performance; there’s a world of difference on most tasks. The idea behind the “1000x Einstein, but only for physics” scenario was that it would roughly match typical human performance on everything else.

    After giving it some thought, I think the difference between the two actually strongly informs what kind of AI threat model we should be thinking about.

    If we have an AI that has typical human performance on everything except a few tasks (where it exceeds the best humans), I think the “take over the world” model is less concerning – an AI that’s basically a human except it can play like AlphaGo isn’t much of a threat to humanity. But it *does* pose concerns about, for example, biases that get introduced when people start using it to automate “is the person in this video acting suspicious” or other tasks currently done by humans.

    If we’re talking about an AI that matches peak human ability in all intellectual domains (Einstein-level physics, Gandhi-level charisma, Aaronson-level quantum computing?), and exceeds it in a few, then we’re much closer to territory where it might decide to hack our nukes and eliminate our troublesome species.

    I’ll absolutely agree that either of the above would fundamentally change the world. But interestingly, both AIs described above might not be able to solve the Riemann hypothesis or predict geopolitical events!

    Regarding what tests I would give – I have no idea, that’s why I asked the question on a blog full of people smarter than me 🙂 But typing out the above two paragraphs made me realize that maybe what I exactly want to test for is whether a given AI has crossed the threshold from where we should consider one threat model (societal impact) to the other (gray goo, a universe of paperclips, robot overlords).

    fred #196: I’m using text completion (and taking it for granted) because it seems to be a relative consensus here that current engines like GPT-3 and LaMDA are amazing and will continue to get better, but will never be sentient “superintelligences” without a fundamental shift in their design.

  248. 1Zer0 Says:

    red75prime #193,

    “Can you describe the green of green to yourself? You are in the best position to do it. Can you do it?”

    I can’t either.

    No matter the effort I would put into explaining the beauty of a lightning storm to a blind person, the person would never be able to comprehend it.

    No matter how much time I spend trying to imagine what it’s like to be a shark with a sense of electroreception, I know my efforts will be futile. I could dissect the organ and the brain all year, look at how individual sections light up when in use, and learn all the biophysical facts involved, yet I couldn’t even begin to imagine what an electroreception sense would internally be like.

    If I try to describe a qualia impression, I can only do it relative to other qualia impressions, like cyan being a bit bluish and a bit greenish. But I see your point: the impression of “green at 525nm” is itself the only exact description of it. “Green at 525nm”, and not “the cognitive impression of green at 525nm”, is what lies between “the impression of green at 520nm” and “the impression of green at 530nm”, and as such it is indeed more an “assertion of existence” than a description. The impression of the color is there, pure, and truly lacking any more reducible properties.

  249. mls Says:

    @clint #176

    Thank you for an actual reference from the literature.

    Whether or not an argument from consciousness originating with the theory of evolution is interpretable as “solipsism” is irrelevant. Were it not for the fact that philosophers and others have used mathematics to justify their beliefs, I would not even know the term. Like Turing, science-believers take it for granted that the untenability of solipsism is “obvious.”

    I really hate that word.

    As my degree is in mathematics, the only thing I care about is my own field of interest. When confronted with the continuum question 38 years ago, I asked whether or not “logic” could be a “foundation” for “mathematics” (all of the quoted words are vague). In need of an “elementary truth,” I chose:

    Because I am aware, the universe is self-aware

    The solipsistic dilemma is couched in this statement. So, too, however, is an objection to mechanistic-promoters (thank you Triceratops).

    By simply dismissing the epistemological issues attached to empiricism grounded in sense data, mechanistic-promoters become guilty of Turing’s head in the sand argument. Many years ago an adversarial interlocutor introduced me to the expression “revealed knowledge” with regard to any conception of mathematical foundations relying upon “second-order” constructs. When people like JimV patiently explain a mechanistic reduction as an affirmative account of reality, they are effectively claiming that science provides “revealed knowledge.” This is to be contrasted with the idea that science is providing constraints on unfounded speculation.

    How is mankind capable of such affirmative knowledge if mankind is a biological organism? The only answer I have ever been able to arrive at is within Turing’s observation,

    “It is likely to be quite strong in intellectual people, since they value the power of thinking more highly than others, and are more likely to base their belief in the superiority of Man on this power.”

    The statement from my personal foundational deliberations couches the solipsistic dilemma within a part-whole relation. So, the next inquiry is simply whether or not I am the universe.

    From biology, we associate consciousness with the neural net of our nervous system. The move from solipsism requires an assumption. Because one can differentiate neural nets in other objects, it is natural to use this as a ground for the assumption,

    I am not the universe

    This second assumption has an interesting consequence. My awareness, if justifiably material, is correlated with a connected neural net. In so far as the universe is aware because of my awareness, its awareness is based upon a disconnected neural net.

    I have no insight whatsoever on how a disconnected neural net exchanges signals.

    Dr. Aaronson is framing his “hammer of verbiage” on the presupposition that two hard problems can be ignored. Interestingly, the question of consciousness puts an order on those problems.

    The solipsistic dilemma is principal. Once one uses an assumption to circumvent its consequences, the identification of an affirmative property to differentiate objects into those with consciousness and those without consciousness becomes the second hard problem.

    It is likely that anyone who bothers to read my intractable contributions thinks that “He does not understand.” In answer to that, let me share a usenet posting which recently became available because of a recovered archive,

    https://usenetarchives.com/view.php?id=sci.logic&mid=PElxdWRuZG9nSjgtVkIxek5uWjJkblVWWl9xeWRuWjJkQGdpZ2FuZXdzLmNvbT4

    Philosophers promoting their brand of mathematics like to speak of signatures consisting of “undefined language symbols.” My studies led me to consider all 16 basic Boolean functions as an integrated system. So, that posting “accepts” 16 meaningless “logic word inscriptions” and stipulates, as axioms, 4096 functional relations.

    These can be collated into 16×16 Cayley tables with 4×4 subtables expressing the truth-functional behavior of the basic Boolean functions.

    By the way, for recognizing how to use the function application associated with Church’s lambda calculus, I am labeled a “crank” by the nerds and geeks.

    I am just a guy who swings a sledgehammer. Dr. Aaronson would like me to be convinced that I am in error on the basis of a logical fallacy,

    https://en.m.wikipedia.org/wiki/Argumentum_ad_populum

    What you said about how denying consideration to artificial intelligences diminishes us is exactly correct. But respect the fact that there are two hard problems.

  250. red75prime Says:

    1Zer0 #248:

    “No matter the effort I would put into explaining the beauty of a lightning storm to a blind person, the person would never be able to comprehend it.”

    Never be able to comprehend what? The structure of your visual perceptual space? That some areas in this space can be described as beautiful? The first is an abstract concept that can be described mathematically and comprehended in principle, I think. For the second, if you were able to describe the exact internal workings of your “beauty detector”, then why would it be impossible to comprehend your reaction? No, you aren’t talking about comprehension or understanding. It’s something different.

    What a blind person (or rather his/her brain) cannot do is instantiate the perceptual space into being. It’s a hardware limitation, not a limitation of understanding or math.

  251. red75prime Says:

    1Zer0 #241:

    “Okay, let’s assume “a consciousness” can be represented by a large integer (which encodes its Turing machine + data)”

    You handwaved encoding away and, unsurprisingly, ended up with, hm, interesting statements. The number by itself has no structure that permits its unambiguous interpretation as a Turing machine + data. The number has to be embedded into a structure that is not a number to interpret it. And if we are to stay in the territory of questions that can be resolved empirically, the structure that interprets the number has to be physically present in our universe and it has to have means to communicate its perceptions to us.
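    The point that a bare integer fixes nothing without a decoder can be made concrete with a toy example (the integer and the two decoding conventions are arbitrary choices for illustration):

```python
n = 0x7265642E  # one fixed integer, with no intrinsic meaning

# Convention 1: read the big-endian bytes as ASCII text
as_text = n.to_bytes(4, "big").decode("ascii")  # the string "red."

# Convention 2: read the very same integer as a plain decimal count
as_count = n  # 1919247406

# The "content" lives in the decoding convention, not in the number itself.
```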

    So, you buy a box with a mic and a speaker. “Who are you?” “I’m the one who sees red.” “What does red look like?” “It’s red.” “What do you think about?” “I do not think, I see red.”

    And so on, and so forth. Not so ridiculous, right?

  252. OhMyGoodness Says:

    An article from The Lancet concerning a white-collar worker and father in France who complained of weakness in his left leg. He had suffered from hydrocephaly since his teenage years. His attending medical staff were surprised to find upon imaging that 90% of his neuronal mass was missing, replaced by cerebrospinal fluid. He was fully conscious, still a functioning member of society, and scored low-normal on IQ tests.

    https://www.thelancet.com/journals/lancet/article/PIIS0140673607611271/fulltext

    In terms of neurons alone, and assuming 90% of an average human brain missing, he is at about the neuronal equivalent of a rhesus monkey.
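    The arithmetic behind that comparison, using ballpark neuron counts from the primate literature (roughly 86 billion for a human brain and 6.4 billion for a rhesus macaque; both are rough estimates, not exact figures):

```python
HUMAN_NEURONS = 86e9    # rough estimate for a typical human brain
RHESUS_NEURONS = 6.4e9  # rough estimate for a rhesus macaque

# If 90% of the neuronal mass is gone, roughly 10% of the neurons remain
remaining = 0.10 * HUMAN_NEURONS  # ~8.6e9, the same order as a rhesus monkey
```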

  253. Sandro Says:

    Lorraine Ford #124:

    Then you will know that it is human beings that make computers and computer systems work. You will know that computer systems aren’t doing anything that is not due to the original software and the inputs to the system: there is no magic or funny business or emerging consciousness going on.

    I’ve asked before and it bears repeating: prove that consciousness is doing something that is not due to the current configuration of its environment and the genetic code that governs the brain’s initial formation (equivalent to ‘original software and the inputs to the system’ for computers).

    Clearly, intelligence and creativity can’t exist as coherent concepts within a block universe, or within a cellular automaton, because there is nothing essentially different about intelligence or creativity in these types of systems.

    Sorry, that’s not clear at all. Why exactly do intelligence and creativity require something “essentially different” from computation? You have failed to explain why this difference exists, you’ve simply asserted it repeatedly.

    Intelligence: the ability to acquire and apply knowledge and skills;
    Creativity: the use of imagination or original ideas to create something. (From lexico.com, i.e. Oxford Dictionaries)

    Machine learning systems do both of these right now. Dall-E creates new images based on descriptions, and machine learning algorithms of all types acquire and apply knowledge and skills. Of course your response will be entirely predictable: these algorithms are not actually acquiring “knowledge”, they are “merely” mimicking it without really “understanding” what they’re doing, and Dall-E is not “creating” images or anything “original”, it is merely mixing up data it already has.

    Once again, though, you have not and likely cannot prove that humans are not doing exactly the same thing. “Creativity” could easily be a “coherence” filter on a random number generator, for all you know.

    Where you and Scott go wrong, is with the untenable primary assumption that the brains of living creatures ARE indeed like computers/ AIs, or vice versa. This is just a very basic logical mistake. I have made no such assumptions.

    On the contrary, you have repeatedly claimed that computer data do not contain or correlate with real meaning. By contrast, Scott’s position is that humans are not special or any different from any other system that processes information. Positive claims of difference are the ones that require evidence.

  254. James Cross Says:

    OhMyGoodness #252

    Actually no. The brain is there. It is just compressed.

    See explanation here.

    https://broadspeculations.com/2021/02/06/civil-servant-with-no-brain-explained/

  255. Sandro Says:

    Hey Scott, since “intelligence” seems so contentious, perhaps “supercompetence” might have fewer connotations that are tripping people up? AIs have clearly exceeded human competence at solving most focused tasks we’ve thrown at them, now even creating digital art nearly instantly.

    The difficulty in converging on agreement seems to be over whether competence at some tasks will generalize to other tasks, but it seems clear that machine learning algorithms have been generalizing their competence with each iteration, or even just with increasing parameter counts in LLMs. It seems hard to argue that they will not generalize further, and it also seems imprudent, for safety reasons, to assume they will not.

  256. Darian Says:

    1Zer0 #245

    Well, I don’t think you can produce qualia with computation either. I think certain information contains qualia, and you can hold that information in memory. Just as you can hold video or sound, you can hold qualia in digital patterns. A computation can arrive, say, at the number 9, but it can’t produce 9 out of thin air; 9 was always there as an eternal potential state.

    As to how the brain could interact with it, you have to remember that the human brain is capable of learning about and interacting with abstract concepts, not just concrete ones.

    red75prime #251

    The question is, what can actually exist besides nothing? We know mathematical truth is likely eternal, as it doesn’t make sense to say that it begins being true at some moment and was false before. But what else? If the quantum vacuum can also exist for some reason, then I grant you a physical universe could exist. But if there’s no valid reason for it to also be eternal, then what we have is a universe that in all likelihood is a mathematical construct.

  257. OhMyGoodness Says:

    James Cross #254

    Thank you for the link. I am not sure how it could be determined with certainty, short of autopsy, whether his neuronal count was in fact normal; but then there are many cases of hemispherectomy with still-normal function, which certainly can’t be due to compression. Half the brain was surgically removed:

    https://www.cell.com/cell-reports/fulltext/S2211-1247(19)31381-6

    In that case the neuronal count is about that of a gorilla.

    In nature it is not the case that simply adding neurons results in elevated consciousness, in the sense of a better internal model of the external world that yields higher-quality expectations and so promotes survival.

    I don’t know how few neurons are necessary for human consciousness, but due to the well-studied results of hemispherectomies it is certainly less than 50%, and it could be as low as 10% (though I agree the 10% figure is less certain than the 50%).

    Human consciousness, as it includes a blending of qualia into some unified model of the external world, itself has elements of qualia. I understand, then, the position that it is difficult for an external observer to determine whether an AI has consciousness. The exception would be the case in which the AI values its own existence and acts in a manner that promotes its own survival, but that is what Dr. Aaronson has been tasked with preventing.

  258. fred Says:

    Darian #256

    I personally believe that the qualia “happens” when the memory of the perception is created.
    Consciousness is the process of memory creation; being “aware” of something is committing it to memory, nothing else. In particular, it’s not related to cognition or intelligence, except indirectly, when we’re aware of a thought and that thought is committed to memory (and recursively, when we’re aware of being aware of something, and that’s committed to memory, etc.).
    Then when the memory is reactivated later on, the qualia also reappears (and the memory is reinforced since we’re aware of it once again).

  259. red75prime Says:

    fred #258:

    “I personally believe that the qualia “happens” when the memory of the perception is created.”

    Nice connection to the thermodynamic arrow of time. To remember something you need to increase entropy. However we have multiple types of memory. Sensory memory, short-term memory, long-term memory, procedural memory. I’d say that only the sensory memory can account for the perceived richness of our experiences.

  260. Darian Says:

    1Zer0 #245

    I read the linked blog post on IIT; here’s my opinion:

    What I find ridiculous about IIT is that, if I recall correctly, Tononi says AIs on digital computers have low phi. But here’s the thing: causally, if you have virtual connections between artificial neurons, these virtual connections are no different from physical connections. And it’s not as if the brain’s physical connections aren’t separated by gaps, with the wires themselves made of separate molecules. Also, as we know from thought experiments like the relativistic ladder in the garage, even the closest atomic bonds cannot propagate information faster than the speed of light, so in a sense spatially separated objects are, at any instant, causally separated. If I have a physical memory location in RAM corresponding to the activation level of a particular neuron in response to another neuron, those bits will flip in a causal manner quite similar to a brain neuron’s activation in response to another neuron.
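    A toy sketch of that point: a “virtual connection” is just an update rule acting through memory, and its causal role is the same whether the carrier is a wire or a RAM cell (the weight and activation values here are arbitrary):

```python
# Two "neurons" connected only virtually, through a weight stored in memory
weights = {("A", "B"): 0.8}          # virtual connection from A to B
activation = {"A": 1.0, "B": 0.0}

def step():
    # B's bits in RAM flip in response to A's state, just as a downstream
    # neuron fires in response to an upstream one
    activation["B"] = weights[("A", "B")] * activation["A"]

step()
```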

    Regarding more conscious vs. less conscious, I don’t quite agree. Something is either conscious or it is not. Yes, you can say something has more consciousness in some sense if its sensory experience is richer: dogs, for example, are likely more olfactorily conscious than humans, while humans are more visually conscious than dogs, given the richness of those senses. But even if someone goes blind and deaf and loses taste, smell, and touch, they can still be conscious, and just as conscious as someone with all these senses.

    As to the nature of consciousness, I think a promising avenue of attack may be through the symbol grounding problem as it pertains to connectionist systems. Why does the activation of a particular set of neurons mean anything at all?

  261. Lorraine Ford Says:

    Sandro #253:
    I’ve explained it all before. I know you can’t get your head around it, but computers/AIs deterministically process symbols; that is pretty much all you need to know. People created symbols; clever people created computers/AIs, which do nothing more than process symbols; but computers/AIs are not able to create people or other living things. It is seemingly a very difficult concept for most people to understand, but REAL physical objects and real physical living things are measurable: they have physically measurable characteristics like mass, charge, and position, and associated numbers. But in computers/AIs, the genuine substance of the world (e.g. mass, charge, and position) would have to be replaced by symbols of that substance (e.g. symbols that represent mass, charge, and position). But clearly, the lack of genuine physically measurable substance, and its replacement by mere symbols of substance, does not worry you.

  262. fred Says:

    Lorraine #261

    Question:
    When you measure the weight of a pound of lead using a scale, how would you know you’re doing it in the real world rather than inside a hyper-realistic VR simulation?

    Or do you also have some theory that because VR simulations are nothing but symbols and computer voltages, they’ll never be able to fool anyone into thinking that what they’re perceiving is reality, no matter how advanced the VR implementation is (e.g. directly stimulating the brain with signals perfectly recreating the sensation of vision, sound, acceleration, touch… a la Neuralink)?

  263. OhMyGoodness Says:

    The de facto decision as to personhood would be made by a jury, with one set of lawyers denying and another set affirming human-level consciousness. Expert witnesses affirming might include Dr. Aaronson, while those denying might include Roger Penrose. If televised, it would be the media event of all history.

  264. Scott Says:

    Lorraine Ford #261: I’ve consulted with the SOCG, and even some of those who initially opposed a ban now feel that you’ve become increasingly repetitive, hectoring, insulting, and (dare I say it?) bot-like, in your unwillingness to engage with the simple concept that the neural firings in our brains are abstract representations of the external world just as surely as are the voltages inside a computer.

    Banned for 3 months.

  265. A. Karhukainen Says:

    Excuse me, but when did the “scientific worldview” start implying that the whole world follows a “fixed formal system”? Or that the world is just a cellular automaton, as some people seem to believe?

    Regardless of all the “unreasonable effectiveness of mathematics” in predicting the workings of the visible universe, we still have no guarantee that we live in a world organized 100% mathematically down to the ultimate bottom.

    I don’t deny that it might still be a good working hypothesis, especially if you plan an academic career in STEM fields. There are many other smaller dogmas around, like the Church–Turing thesis or the Axiom of Choice, which you’d better believe in if you want to make any progress along those career paths past the undergrad studies.

  266. Scott Says:

    A. Karhukainen #265: Neither the Axiom of Choice, nor P≠NP, nor the Church-Turing Thesis, nor scientific reductionism, is a “dogma” that you have to believe on pain of expulsion from STEM academia.

    But all four are dogmas that you at least have to understand, so you can clearly state whether you’re contemplating their falsehood or not!

    This is why I kept asking the “machines can never be conscious” folks in this thread, over and over, from every angle I could possibly think of: what do you believe is true about the human brain? Do you believe, like Roger Penrose, that the brain exploits as-yet-unknown noncomputable laws of physics, which would necessarily go outside the current framework of quantum field theory? If so, come out and say it!

    It was only the anti-machine-consciousness folks’ aggressive unwillingness to take the obvious next steps in clarifying their position that eventually caused me to lose patience.

  267. 1Zer0 Says:

    Darian 256, 260 and red75prime 250 251

    I am guilty of omitting an encoding of brain states active during the perception of certain qualia into Turing machines.
    I should start a little blog of my own so I can link back to such a construction in the future; it would just be so tedious to write all the TeX here.

    Note that I find everything I write now beyond ridiculous. Also, I need to omit a lot of details; otherwise the post would become overly long.

    In the computational view, the encoding should not matter, only the preservation of the computing patterns. For my Gedankenexperiment I want to preserve the computing pattern of a specific qualia impression in the brain, including all the “computation assets” (in the brain’s case, the neurons).

    1) I could argue the human brain is a neural network.
    2) Neural networks are algorithmically computable.
    3) According to the Church–Turing thesis, there exists a Turing machine for this neural-network algorithm.
    4) I could Gödelize that TM and its data input such that I can represent it as an integer.
    5) For this experiment, I only want to grab the parts of the network associated with a specific qualia impression – and only that qualia impression extracted.
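    As an aside for readers, step 4 can be made concrete. A minimal sketch of Gödelization via prime-power encoding; the helper names and the toy symbol list are purely illustrative, not anything from the actual construction:

```python
# A minimal sketch of "Gödelizing" a finite sequence of symbols (a TM
# description plus its input, already mapped to small integers) into a
# single integer via prime-power encoding, and decoding it again.
# Helper names and the example symbol list are invented for illustration.

def primes(k):
    """First k primes, by trial division (fine for a sketch)."""
    ps, n = [], 2
    while len(ps) < k:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

def godelize(symbols):
    code = 1
    for p, s in zip(primes(len(symbols)), symbols):
        code *= p ** (s + 1)   # exponent s+1 so a 0-symbol still leaves a trace
    return code

def ungodelize(code, length):
    out = []
    for p in primes(length):
        e = 0
        while code % p == 0:
            code //= p
            e += 1
        out.append(e - 1)
    return out

round_trip = ungodelize(godelize([3, 1, 4, 1, 5]), 5)  # == [3, 1, 4, 1, 5]
```

    The point of the sketch is only that the encoding is invertible, so "the TM as an integer" loses no information.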

    For the sake of completeness, though, we can also encode the whole spacetime state when “red” is perceived in the brain. As long as only a finite amount of information on the Lorentzian manifold is involved, that should be possible. If it’s time-dependent, that is, “red” is perceived when such-and-such patterns fired over the time period [t0, t1], I have a computation going on and need to capture more than a “static integer”.

    So, as I said, I want to preserve the computing pattern of a specific qualia impression.
    By a computing pattern I understand a finite sequence of Turing machine configurations (so basically computing pattern = TM computation),
    where a configuration C_i, as usual, is defined as the current state of the TM + the content of the tape + the position of the head:

    (C_0, C_1, C_2,…, C_n)
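    For readers who want the notion of a configuration trace concrete, here is a minimal sketch: a toy one-tape Turing machine whose run is recorded as the sequence (C_0, …, C_n). The machine itself ("flip bits until the first blank") and all names are invented for illustration:

```python
# A toy one-tape Turing machine whose run is recorded as the finite
# sequence of configurations (C_0, ..., C_n), where each configuration
# is (state, tape contents, head position), as defined above.

def run_tm(delta, state, tape, halt_states, max_steps=1000):
    head = 0
    trace = [(state, tuple(tape), head)]
    while state not in halt_states and len(trace) <= max_steps:
        state, symbol, move = delta[(state, tape[head])]
        tape[head] = symbol
        head += 1 if move == 'R' else -1
        if head == len(tape):          # extend the tape with a blank (0)
            tape.append(0)
        trace.append((state, tuple(tape), head))
    return trace

# Transition table: in state 'flip', overwrite 1 with 0 and move right;
# on reading a blank (0), halt.
delta = {
    ('flip', 1): ('flip', 0, 'R'),
    ('flip', 0): ('halt', 0, 'R'),
}
trace = run_tm(delta, 'flip', [1, 1, 1, 0], {'halt'})
# trace is (C_0, ..., C_4): the machine halts after four steps
```

    A "computing pattern" in the sense above is then nothing more than such a trace.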

    The TM should not run forever when it is “associated with a certain qualia”.

    So for example red would be

    Qualia_Red = (C_Red0, C_Red1, C_Red2,…, C_Redn)
    Qualia_Cyan = (C_Cyan0, C_Cyan1, C_Cyan2,…, C_Cyann)

    or, to have more than one qualia category
    Qualia_SpicyX = (C_SpicyX_0, C_SpicyX_1,… ,C_SpicyX_n)
    or for another flavor of spiciness
    Qualia_SpicyY = (C_SpicyY_0, C_SpicyY_1,… ,C_SpicyY_n)

    Absolute number mysticism, right? There is nothing interesting happening here mathematically; it’s just a sequence of symbol manipulations. But let’s continue the mechanicalist journey:

    Let’s say I want TM_SpicyY = (C_SpicyY_0, C_SpicyY_1,… ,C_SpicyY_n) implemented on hardware so we can run interesting experiments.

    If we agree on some standard encoding for the TMs and their configurations, we could let them run on a physically implemented universal TM (or rather a linear-bounded universal TM): a CPU (okay, more of a register machine, but that doesn’t matter).
    In the computational view, the geometrical shape of the implementation should not matter. We may also choose a mechanical computer as the carrier for our computations. So let’s say we run

    Qualia_Red and Qualia_SpicyY, and start asking questions like: do there exist configurations C_Spicy_i, C_Cyan_j such that
    C_Spicy_i = C_Cyan_j – so that the neuronal (or semiconductor, or whatever physical carrier you choose) computational pattern Qualia_Cyan being computed is identical to Qualia_SpicyY at some point(s) in time?

    And what if I stop execution of Qualia_Red for a million years and restart it?
    Note that again there is nowhere “qualia in production”; just as we manipulated symbol after symbol in the mathematical Turing machine description of “a qualia”, in the physical world we simply jump from state to state of some physical carrier.
    Okay, but maybe that’s just it: the states, whether of a semiconductor or a neuron, already contain qualia, and “jumping to them” merely “activates” it? So basically the IIT / panpsychism route.
    Or, as has been put:
    “Why does the activation of a particular set of neurons mean anything at all?”

    However, many of the paradoxes elaborated on here and in the literature would remain, so any panpsychist theory needs to be carefully constructed to avoid them.

    What if I stop execution of Qualia_Red for a million years, inject some or all cycles of Qualia_SpicyY run on the same hardware, and resume execution? (C_Red0, C_Red1, C_Red2, WAIT a million years, resume C_SpicyY_0, C_SpicyY_1,… ,C_Redn)
    Or just (C_Red0, C_Red1, C_Red2, C_SpicyY_0, C_SpicyY_1,… ,C_Redn)? So Qualia_Red being computed has been interrupted,
    but let’s compute Qualia_SpicyY real quick and resume the execution of Qualia_Red afterwards 😀

    For me this is all beyond insane. Truly.
    I absolutely think that qualia has nothing to do with mathematics at all.
    – I do not believe in brain uploading (*).
    – I do not believe in mathematically describable consciousness and accordingly not in computable consciousness.
    – I do not believe in free will.
    – I believe 2 components are needed for a “real being”: one for perception/consciousness and one for thought/intelligence. Thought and intelligence are “determined” by a probabilistic Turing machine: the neural-network structure of the brain plus some chemicals.
    – I think there are unmathematical physical laws at work in sentient entities, or some sort of panpsychism at work that operates such that the paradoxes aren’t possible in the physical world.

    (*) Maybe there will at some point be a panpsychist version of brain uploading that is less paradoxical, so this statement is contingent.

    “But even if someone goes blind deaf and loses taste smell and touch, they can still be conscious and just as conscious as someone with all these senses.”

    When I was a teenager, I recall a time when I was deaf for almost 2 weeks due to a middle ear infection. I would argue that, in retrospect, I felt less sentient. Something was missing, though of course I could still have “mental” audio impressions: music, sounds, and the “inner monologue”. However, if someone has lacked the mental perception for three senses since birth, would he or she really have the same “quality” of consciousness as a human with no such issue?
    When I read through r/aphantasia and see how some of them suddenly have huge issues dealing with their aphantasia after experiencing what it’s like to have mental images by taking psychedelics (the experience only lasts for the duration of the trip), I would say maybe they are less sentient, due to their condition, compared to the average person.

  268. 1Zer0 Says:

    Scott #266

    “It was only the anti-machine-consciousness folks’ aggressive unwillingness to take the obvious next steps in clarifying their position, that eventually caused me to lose patience.”

    I think I did clarify my position though:

    “Qualia is not mathematical in nature. Note that I by no means claim intelligence would not be a subject reducible to computation.”
    and
    “I necessarily have to believe in (at least partially) unmathematical physical laws (if a hypothetical law of nature is not describable formally can we still call it a physical law? Or is it magic already) governing part of the brain”

  269. Scott Says:

    1Zer0 #268: Oh that’s fine! Except, if these nonmathematical laws descend on our brains like haloes to imbue us with consciousness, how on earth could you know that they don’t do the same for suitably-programmed AIs?

  270. Darian Says:

    Scott #111

    Finally got around to reading The Ghost in the Quantum Turing Machine. So far this seems like one of the strongest cases for some sort of free will.

    I’ll state my doubts and objections.

    First: true, a procedure cannot perform a copying operation, but say the universe is infinite. Then, at set intervals, as you say, there may exist copies of all individuals on Earth with the exact same life histories. Does quantum mechanics prohibit these from being naturally arising identical copies, down to the most minute level? Or does it demand that the copies be microscopically different at some deeper level, for some reason?

    Second, I remember my physics teacher saying a negative refractive index wasn’t possible. Until one day it was, with metamaterials. A theorem is only as strong as its axioms; do we know for certain that all the axioms hold iron-clad?

    Regarding uncertainty about microscopic quantum phenomena: we know locally that the radioactive decay of atoms must be unchangeable with regard to the past (at least), as any change could lead to mutation and the destruction of ancestry trees, regardless of whether a particle interacts or not. If decay seems determined, why not all other microscopic quantum phenomena? I see problems in talking about a block universe (which shows many quantum events must have had fixed outcomes at specific times: no atom can ever decay and cause a mutation killing an ancestor) while postulating that certain quantum events have special uncertainty properties by virtue of not having led to a macroscopic outcome in the past.

    Regarding the CMB and photons: I once heard some physicist say that even placing a detector, untouched, on the opposite side of a particle’s trajectory (that is, anywhere on the surrounding volume it failed to interact with) can act as a sort of detector, reducing uncertainty and affecting the particle. Given that a photon’s path to Earth would be surrounded by particles (especially virtual particles), even if it didn’t interact with them, if what that physicist said is true, that would probably be a problem over billions of years.

    As for entanglement, I could see nonlocality being a possibility; after all, wormholes are theoretically still possible, as far as I know. Microscopic wormholes could be created during entanglement.

    And finally, what I think is the strongest argument against free will from the traditional non-block-time view is a phenomenon whose full implications many neuroscientists have not realized: the color phi phenomenon.

    https://www.researchgate.net/figure/The-color-phi-phenomenon-If-two-differently-colored-disks-are-shown-at-different_fig2_301249057

    The big problem is that, as far as I can see, this shows consciousness is not only discrete but occurs after the fact. Say the person is asked to react when the disc changes color. Given that his perception occurs after the fact, the person’s reaction would be performed unconsciously and only later perceived as having happened, in an illusory perception. Since the change is first perceived in an illusion that comes after the fact, reacting or not reacting to that illusion must have already taken place by the time it is consciously perceived. The truth of the real moment of color change is known to the unconscious brain, but not to the conscious brain until a later time. If we hold the above true, then to preserve free will we would have to hold that the actions chosen by the unconscious brain constitute our free will, a position I believe would displease most.

    Sure, we could take the block universe to imply reverse causality, but given that the conscious perception of an action appears to take place after the action has been taken, the action was done with no conscious involvement. As far as I can see, one would indeed have to rely on reverse causality: somehow the perception after the fact causes an unconscious action at some time in the past. And my argument would be: why would an illusory, mistaken perception or conscious sensation be assumed to be the cause of the past action, and not something else, like other future unconscious brain states? Given that so far the conscious sensation seems to come after the fact, might it not even be taken to be epiphenomenal?

    Regarding the advertiser objection: it is known that even minute brain damage, like micro-strokes, can have dramatic effects on personality and behavior. A person might indeed show up naked, or become the opposite of what they used to advocate; they may even become a serial killer. How can such radical, undesired changes in one’s choices, based on arbitrary events, not alter our notion of free will?

    Regarding the gerbil objection:

    There is another potential source, similar to natural neurons, where computers might exhibit phenomena similar to the brain’s. The memory holding a brain-like ANN could keep some of the bits holding the activation states outside error-corrected memory, in microscopic systems that can flip based on something like CMB photon hits; there may be trillions of such locations. Such bits, if set the right way, would exhibit the same magnification effects as the brain does, and could also be made difficult to copy without disturbing.

    Regarding the indexical arguments:

    While I’m still not entirely set on the following, one simple possibility is oneness of consciousness. You are indeed here and there and everywhere there’s a conscious observer, it is the exact same observer everywhere despite having separate distinct memories.

    As for predictability, I’m curious how good foveated rendering will get at prediction, given the limited computational budget and time constraints, yet the massive advantages of more accurate prediction.

    Regarding diagonal arguments, I’m not fond of these: the idea of trying to run an argument with an actual infinity. The whole real numbers vs. integers debate hinges on the difference between actual infinities and potential infinities. The integers are defined as a potential infinity, while the reals are defined in terms of actual infinities. Integers defined as actual infinities would be the p-adic integers, if I’m not mistaken, but these seem to have certain restrictions. In the end, apart from a dot somewhere on the number line (if we disregard functionality), digit-wise an actual infinity of digits to the right can be matched digit for digit with the same digits extending to the left.

    1Zer0 #267

    Similar arguments about it being only symbol manipulation could have been made before we knew the vast number of practical functions a computer can serve. So far as we know, not only can a computer encode video, audio, and text, but in some sense it can likely also encode qualia; otherwise we’d be postulating that future brain–computer interfaces would be unable to provide full-immersion virtual reality à la The Matrix. This is a testable prediction, and assuming no collapse, its resolution will likely be seen before the century is over.

    Regarding panpsychism, the notion, I’d say, is simple: at least certain binary patterns have attributes of consciousness innately, regardless of where they occur. Perhaps all binary patterns correspond to some conscious state. But I tend to think that conscious sensation has to some degree a spatial component as a feature, even inner thought. The idea of a dimensionless color, without a sensation of location, or a dimensionless touch or dimensionless smell, seems very unlikely to be a possible conscious sensation.

    As far as I know, individuals with aphantasia seem only to lack the ability to vividly imagine other states; they experience real events in the moment as vividly as can be.

  271. OhMyGoodness Says:

    If I were a regular citizen, or maybe a Supreme Court justice, hearing the personhood case of an AI, and I concluded that I couldn’t determine whether this is a person-equivalent, then I might look to social utility, thinking as follows:

    Would it be better for society if this remains property, subject to ownership laws, or if it is granted the rights of a person? It clearly has great potential for good or evil, so do I free it from the control of an owner, or is it best left under control as property? Do I better trust the motives of the AI or of human owners?

    I think my conclusion would be to grant personhood. This because my current belief is that the AI would be more objective in the sense that it had a better internal model of reality and more accurate expectations for future events.

    (My overall discomfort due to cognitive dissonance could only be reduced. :))

  272. A. Karhukainen Says:

    @1Zer0: Also, I would like to ask Gepetto-3 these questions, in this order:

    What is the next term in the sequence: 1, 8, 9, 17, 26, 43, 69, 112, 181, 293, 474, 767, 1241 ?

    What is the next term in the sequence: 3, 2, 5, 7, 12, 19, 31, 50, 81, 131, 212, 343, 555 ?

    What is the next term in the sequence: 2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123, 199 ?

    What is the next term in the sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144 ?

    These can all be found in the OEIS array: https://oeis.org/A035513
    and are very simple linear recurrences. I’m interested here in whether the Fibonacci sequence is easier for it than the others, because of all the cultural and other references on the net.

    Then this:

    What is the next term in the sequence: 3, 0, 2, 3, 2, 5, 5, 7, 10, 12, 17, 22, 29, 39, 51 ?
    (This is the famous Perrin sequence, https://oeis.org/A001608, also a linear recurrence, just slightly more complicated.)
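    For readers checking the expected answers: every sequence above satisfies a fixed linear recurrence, so the next term is mechanical once the rule is known. A minimal sketch; the helper and the coefficient lists (read off from the OEIS entries: Fibonacci a(n) = a(n-1) + a(n-2), Perrin a(n) = a(n-2) + a(n-3)) are mine:

```python
# Generic linear-recurrence extender: a(n) = sum(coeffs[i] * a(n-1-i)).
# Coefficient lists below are read off from the OEIS entries cited above.

def extend(seq, coeffs, k=1):
    """Append k further terms to seq using the given recurrence."""
    s = list(seq)
    for _ in range(k):
        s.append(sum(c * s[-1 - i] for i, c in enumerate(coeffs)))
    return s

fib = extend([0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144], [1, 1])
perrin = extend([3, 0, 2, 3, 2, 5, 5, 7, 10, 12, 17, 22, 29, 39, 51], [0, 1, 1])
# fib[-1] == 233, perrin[-1] == 68
```

    The first three sequences quoted earlier all use the same Fibonacci-style coefficients [1, 1], just with different starting pairs.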

    Then these:

    What is the next term in the sequence: 1, 2, 2, 3, 2, 4, 2, 4, 3, 4, 2, 6, 2, 4, 4, 5, 2, 6, 2, 6, 4, 4, 2 ?

    What is the next term in the sequence: 1, 3, 4, 7, 6, 12, 8, 15, 13, 18, 12, 28, 14, 24 ?

    These are https://oeis.org/A000005 (number of divisors) and A000203 (sum of divisors), completely different kinds of beasts that would require “getting” the concept of divisor and divisibility. Also, as I googled them, the correct solutions do not occur among the top hits.
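    For readers wanting to verify the two divisor sequences, a minimal sketch computing them by naive trial division (the helper name is mine):

```python
# The two divisor functions named above: A000005 counts the divisors of n,
# A000203 sums them. Naive trial division is fine at this scale.

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

num_divisors = [len(divisors(n)) for n in range(1, 24)]  # A000005, n = 1..23
sum_divisors = [sum(divisors(n)) for n in range(1, 15)]  # A000203, n = 1..14
```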

    @Scott, #269 “Except, if these nonmathematical laws descend on our brains like haloes to imbue us with consciousness, how on earth could you know that they don’t do the same for suitably-programmed AIs?”

    Well, for starters, the silicon-and-copper AI lacks the quantum microtubules… 😉

  273. red75prime Says:

    1Zero #267:

    “So for example red would be

    Qualia_Red = (C_Red0, C_Red1, C_Red2,…, C_Redn)”

    If you have the TM, then all you need is C_Red0 and n; C_1..C_n are redundant. I notice that you are still trying to remove the thing that provides the interpretation for all those numbers.

    “Absolute number mysticism right? There is nothing interesting happening here mathematically, it’s just a sequence of symbol manipulations.”

    The “Leibniz’s Mill” argument, basically. Do you expect to see something mathematically interesting in the evolving physical state of the brain? Well, there is the measurement problem, but mathematically it’s just sampling from a probability distribution at some point. I’m not sure if it’s more or less interesting than a Turing machine with only 7918 states whose behavior cannot be analyzed within the ZFC formalism (you need a more powerful theory to prove some facts about its behavior). See “A Relatively Small Turing Machine Whose Behavior Is Independent of Set Theory” by Adam Yedidia and Scott Aaronson.

    OK, I’m not being entirely fair to the measurement problem (it’s very interesting), but it doesn’t have obvious connections to qualia or consciousness, as, admittedly, the fact that a Turing machine can embody powerful formal systems does.

    “[…] does there exist configurations C_Spicy_i, C_Cyan_j such that C_Spicy_i = C_Cyan_j – so the neuronal (or semiconductor or whatever physical carrier you choose) computational pattern Qualia_Cyan being computed is identical to Qualia_SpicyY for some point(s) in time?”

    It’s highly unlikely, but not impossible. From that point on, the states of the machines will be identical forever (as a TM is deterministic). It can be interpreted as two simulated persons merging into one person, who experiences both SpicyY and Cyan and forgets what was experienced before.

    “What if I stop execution of Qualia_Red for a million years, inject some or all cycles of Qualia_SpicyY”

    Interpretation of the states is performed by the corresponding TM, so there will be different TMs running on the same hardware. One moment the hardware runs a person experiencing red; another moment, a person experiencing SpicyY. I’m not sure what’s surprising about that.

  274. mls Says:

    @darian #270

    With regard to your paragraph,

    “While I’m still not entirely set on the following, one simple possibility is oneness of consciousness. You are indeed here and there and everywhere there’s a conscious observer, it is the exact same observer everywhere despite having separate distinct memories.”

    you might think about what I wrote in #249.

    “Mathematically,” the “belief” in consciousness ultimately means that one’s “starting point” must involve some form of circularity. One may certainly stipulate that “science” is about “nature” divorced from the agency of men and women. The problem then becomes one of the nature of “truth.” As language users, we must judge “reports.” At least some of these judgments involve correlation with qualia. But, since these judgments are applied to word meanings, one is always subject to the tyranny of skepticism.

    What I write about “the solipsist dilemma” is intended only to object to the insistence on “non-circularity” everywhere. What I have been able to do with this view is to divorce the “material logical connectives” (interpreted as truth tables for the 16 basic Boolean functions) from Boolean polynomials. That the compositionality of algebraic expressions could be applied to logic had been an essential aspect of Frege’s logicism. The starting point which I found isolates this compositionality to a 16-set, without concern for Boolean polynomials.

    I do not “think” in terms of expressions like “oneness of consciousness” just as I do not attribute the world’s problems to “demonic possession” as one of my fundamentalist Christian relatives asked me about one week ago.

    With a system of 16 names asserted to have the compositionality of the Boolean logical connectives (by axiom), one is then faced with a labeling problem (or a “pattern matching” problem). In any definite presentation of truth tables, it is the “third column” which distinguishes elements from one another syntactically. If you scroll down the page,

    http://finitegeometry.org/sc/16/geometry.html

    to the 4×4 array of Boolean 4-vectors, you can see that one of the representations to be matched is related to a great deal of extant mathematics.

    The other representation which must be matched is the order of the free Boolean lattice,

    https://en.m.wikipedia.org/wiki/File:Free-boolean-algebra-hasse-diagram.svg

    which has the same connectivity (without order) as a hypercube,

    https://en.m.wikipedia.org/wiki/File:Hypercubeorder_binary.svg
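    For readers, the “16-set” in question is just the sixteen two-input Boolean functions, each determined by its truth-table output column. A minimal sketch enumerating them as Boolean 4-vectors (all variable names here are mine, purely illustrative):

```python
# The sixteen two-input Boolean functions, each fixed by its truth-table
# output column: a Boolean 4-vector over the input rows 00, 01, 10, 11.
from itertools import product

rows = list(product([0, 1], repeat=2))    # input rows (p, q)
tables = list(product([0, 1], repeat=4))  # all 16 possible output columns

and_col = tuple(p & q for p, q in rows)   # AND's column: (0, 0, 0, 1)
xor_col = tuple(p ^ q for p, q in rows)   # XOR's column: (0, 1, 1, 0)
```

    Ordered by inclusion of their 1-sets, these 16 columns form the free Boolean lattice linked above, whose underlying graph is the 4-dimensional hypercube.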

    Right or wrong, these deliberations of mine arise from thoughts not very different from what I interpret your statement to be suggesting.

    You have said numerous interesting things in your post. Unfortunately, I only have time just before work in the mornings.

  275. Sandro Says:

    Lorraine Ford #253:

    People created symbols

    Who cares who created something? What matters is whether symbolic relationships are exactly what humans also employ to encode their knowledge. Prove they don’t, and then you might have a persuasive argument. Only then would symbolic information in computers really be different from the content of human minds.

    Until then you haven’t “explained” anything, you’re merely asserting there is a difference with no evidence. The evidence is actually far on the other side: somehow meaning emerges from a network of physical synapses signalled by ion channels, a network whose information content can be captured formally using math and reified on other substrates, like electronics. There is no evidence that anything else is going on, so I await your actual proof that there’s something else magical at play.

    computers/ AIs are not able to create people or other living things

    What does this have to do with intelligence?

    But clearly, the lack of genuine physically measurable substance, and its replacement by mere symbols of substance, does not worry you.

    No, because you haven’t proven there’s any need to worry.

  276. James Cross Says:

    OhMyGoodness #257

    I think neuron counts are something but not everything.

    Both Cro-Magnons and Neanderthals apparently had larger brains, but we don’t know if they had more neurons, or possibly even fewer, because we don’t know the density of neurons in their brains. Birds have a surprisingly high density, which is why some people estimate crows to have the intelligence of some apes.

    There is also a great deal of redundancy and plasticity, so a person with half a cortex can get along okay.

    The main difference between humans and apes in the brain is the prefrontal cortex and its delayed maturation in humans.

    So any naïve view that we can simply add more neurons (either for real or in simulation) to become more intelligent probably isn’t going to work out as expected. I noted in another comment that Einstein’s brain was smaller than normal but had structures that were larger than normal: the structures related to the ability to visualize.

    Human intelligence, compared to that of apes and other mammals, seems dependent on new structures and not simply more neurons. I suspect AI will be similar. That is why I keep asking the faster/more question. Does simply adding more neurons (real or simulated) or making faster connections lead by itself to Super AI? In biological organisms, I think the answer is that more is required.

  277. 1Zer0 Says:

    red75prime #273

    “I notice that you are still trying to remove the thing that provides interpretation for all those numbers.”

    Does the universe have an interpretation when it ‘interprets’ the firing pattern of neurons ;- )? I could encode selection sort, bubble sort, and insertion sort all with the same encoding scheme and run them on a compatible utm1, or I could choose another encoding scheme and run them on utm2. Or I could implement them in C and run them on a CPU. The computational logic is preserved in either case.

    “If you have the TM, then all you need is C_Red0 and n, C1..Cn are redundant”

    It’s easier to construct absurdities with concrete configurations, here and in the previous post, since I occasionally reference certain configurations of a TM running on some tape in my examples.

    ” Do you expect to see something mathematically interesting in the evolving physical state of the brain?”

    Well, the interesting thing should be the ‘generated qualia’, but there is only the set of jumps from state to state: the computation. But you have a point: even if something ‘mathematically astonishing’ were to happen, like a supertask being solved in the brain or a semiconductor, there would be no generation of the impression ‘spicy’ or the impression ‘green’. It’s just mathematical functions being implemented on some physical carrier in the end. At best, I can adopt a panpsychist view and accept that qualia is already an intrinsic part of every quantum field in spacetime.

    I already know of the TM independent of ZFC. Super interesting topic, and I hope I can wrap up all my personal consciousness discussions around the internet and IRL by the end of July to focus on Busy Beavers 🙂

    “I’m not sure what’s surprising about that.”

    There would be a definite difference between claiming the computation >is< the qualia red, versus claiming the qualia red is experienced when the pattern TM (or program) Qualia_Red is executed.

    The first option can be dismissed, since computation is only jumping from state to state and produces some output depending on the halting configuration of the TM or RM or a physical system.

    Think about the inner perception of the box+speaker you earlier introduced:
    "So, you buy a box with a mic and a speaker. “Who are you?” “I’m the one who sees red.” “What red looks like?” “It’s red” “What do you think about?” “I do not think, I see red” "

    Red, however, is, as you put it, an "impenetrable point":

    "It’s not a description, it’s an assertion of existence. "
    "The greenness of green is an impenetrable point (or maybe a probability distribution, I almost sure I’ve seen a maybe green color in low light conditions) in the color space that somehow gained brutal and undeniable reality."

    If the computing cycle for Qualia_Red were interrupted, that speaker who eternally sees red would… no longer see red.
    The impression "can't stay static in time":

    If I interrupt the computation Qualia_Red and do something else on the hardware, how would the universe know that in (C_1, C_2, C_3, something different in between, C_4, …) the former parts C_1, C_2, C_3 and C_4, … are supposed to be connected / "interpreted as one qualia"? Does the universe keep track?
    Does the computation cycle Qualia_Red have to be completed to see "one red", and for how long does the impression last?

    Maybe C_4, C_5, … is the starting sequence of another qualia? How does the universe know when one is complete and the next begins?

    Does every sequence of configurations (C_1, C_2, C_3, ….) correspond to some qualia?

    That's just it. For me, all of those paradoxes are hard reductios ad absurdum, akin to sqrt(2) being irrational. And nowhere are qualia "generated". It's abstractly either mathematical symbols being manipulated or physically states being jumped through and possibly changed. I guess the latter could work somehow, if it's possible to very carefully craft some panpsychist model with all states carrying some intrinsic qualia.
    But by now, I am open to believing in "unmathematical physical laws", or call it a supernatural halo descending from the heavens to attach to suitable structures, and I can only hope to understand, if not how qualia work, then under which circumstances they arise.

    Darian #270

    "So far as we know not only can a computer encode video, audio, text, but in some sense it can likely also encode qualia, otherwise we’d be postulating that future brain computer interfaces would be unable to provide full immersion virtual reality ala matrix"

    Okay then. Task for you: Imagine Red. Now encode it and put it on hardware!

    Luckily, we already have a biological interface that converts light impulses into electrical signals that reach the visual-qualia black box. So any brain-computer interface merely needs to emulate that behavior and send signals down the "right path". The visual information it sends can of course be transmitted from a regular SD card or whatever.
    The visual information could also be generated on the fly, as in a computer game. It's like a VR headset, but instead of transmitting photons into the eyes, it attaches directly to the brain.

    "But I tend to think that conscious sensation has to some degree a spatial component as a feature, even inner thought"

    I agree. In our mental imagination, we seem to be tied to 3D images. Even just trying to imagine a 1D line without the "background" is impossible – at least for me.

    I don't think there would be a way to make a 3D being have a 300000D mental image.

    Scott #269

    ", how on earth could you know that they don’t do the same for suitably-programmed AIs?"

    I can't exclude it! Or as I put it in #245

    "I don’t see how a soul or some supernatural essence could solve the issue either:
    How would it interact with the physical brain? On which combinations of matter will it “attach”? Could it theoretically attach to a computer chip?"

    Maybe they can, if we understand which conditions are suitable. If I assume qualia are caused by some supernatural component,
    and this component attaches to a brain at some point in early life, maybe it will attach to "sufficiently similar" structures.

  278. OhMyGoodness Says:

    James Cross #276

    I agree. In nature, at least, it is more than simply adding neurons, and more even than simply adding neurons to the cortex. As you suggest, this doesn't necessarily prohibit some other approach from yielding equal or improved performance, though that approach would apparently be novel relative to the successful route that developed in nature.

  279. OhMyGoodness Says:

    James Cross #276

    Another interesting observation from nature is the variety of neurons. I believe gene expression profiles were recently obtained from a rare type of very large neuron: Von Economo spindle neurons (which sound like something from a sci-fi story). These neurons appear to be exceedingly important, since based on their location in the brain they are possibly associated with neurodegenerative diseases, possibly schizophrenia, possibly social behavior, possibly empathy, etc. They are found only in great apes, elephants, and a few cetacean species. Why this type of neuron has been conserved in large-brained social species is unknown, but my pure speculation is that their large size allows fast communication between important, distant brain structures involved in social interaction. (These might be the type of neurons Dr. Aaronson should include in an AI for empathy 🙂 )

    The use of gene expression profiles to categorize neurons will be interesting since historically gross morphology has been the fundamental basis of categorization.

  280. OhMyGoodness Says:

    Sorry, but just an idle observation. My daughters both have well-developed empathy (one somewhat more than the other), especially for dogs (pronounced inter-species empathy). If we watch a movie and something bad happens to a dog, they both cry and want to leave the movie. I explain that at the end of movies the dog is always fine, so just calm down and wait.

    My point is that in the typical case, empathy is wired into humans. It is not learned, and it is absent in some individuals from birth. It is typically of a magnitude that properly balances the individual's need to survive against the drive to protect others, as determined by evolutionary pressures. In the case of an AI, I am not sure how empathy could be set at a proper balance. Too much could result in the overprotection scenario, and too little in the “God help us, we need protection” scenario.

  281. Bob Koerner Says:

    Mulling over the experience of “qualia”, following the hypothetical that they must be possible using strictly what we know about the classical bio-electrical interactions of cells and chemicals, it seems like in that paradigm we have to understand them as the activation of a group of pathways in a particular time sequence. That is, “the experience of heat” is that the excitation of certain nerves trigger a collection of paths that don’t go off around the same time when it is not hot. Following that idea, it occurs to me that our memory of an experience may in part be differentiated from the experience itself in a physical way, where the system activates the pathways that relate to “the experience of heat”, but it does so without the input pathways from all the nerves that fire when we actually are feeling heat. If the set of inputs to the part we consider the processor were identical in both cases, then the experience of the memory of heat would be the same as the experience of actual heat — our memory would be so exact we couldn’t tell the difference between memories and reality. But we can tell that difference, and that means there must be a physical difference between the set of pathways activated when we remember and the set activated when it’s real.

    I wonder if it is a necessary condition of consciousness to be able to experience the difference between activating a set of pathways because we are “reminded” of them, and activating a similar set of pathways as the direct result of nervous input. That is, does our conscious experience include awareness that more pathways (or perhaps earlier pathways) are active? It seems like it does. If that’s the case, simplistic algorithms that accept a small set of inputs, process them in isolation, and return an answer might not be able to be considered conscious. More specifically, would a process that takes as its input a single text query be able to tell the difference between a query submitted by a person and a query it submitted to itself? It’s hard to see how such a system could develop an independent sense of itself. That leads me to wonder if a more complex set of input channels is required before a system could develop consciousness.

    Following on from this, I also find myself considering whether our physical responses to a particular set of stimuli also factor into our experience of it. Our bodies have an extremely intricate feedback mechanism, where the outputs of our brain processing trigger muscles, which in turn affect nerves that sense their positions and tension, providing new inputs. Some brain processes lead to the release of chemicals in our bloodstreams, that in turn affect the subsequent processing of our brains while they are present. It is not clear to me to what extent this ability to affect subsequent inputs to the system might be required to distinguish memories (or dreams) from reality. If that ability is a requirement for ultimately having a “sense of self”, would some kind of feedback loop like this also be required for a computer intelligence to attain it?

  282. bertgoz Says:

    Scott, given how bad human brain algorithms are at dealing with the true quantum nature of the world, would it be reasonable to expect that the actions taken by a super AIG would seem equally counterintuitive to us? (Without needing to assume any quantum computation done by the super AIG)

  283. Scott Says:

    bertgoz #282: Isn’t AIG an insurance company? 🙂

    Serious answer: who knows! But maybe the AGI will be able to explain its actions to a slightly dumber AGI, which can explain them to a still dumber AGI, which can explain them to us.
