On form versus meaning

There is a fundamental difference between form and meaning. Form is the physical structure of something, while meaning is the interpretation or concept that is attached to that form. For example, the form of a chair is its physical structure – four legs, a seat, and a back. The meaning of a chair is that it is something you can sit on.

This distinction is important when considering whether or not an AI system can be trained to learn semantic meaning. AI systems are capable of learning and understanding the form of data, but they are not able to attach meaning to that data. In other words, AI systems can learn to identify patterns, but they cannot understand the concepts behind those patterns.

For example, an AI system might be able to learn that a certain type of data is typically associated with the concept of “chair.” However, the AI system would not be able to understand what a chair is or why it is used. In this way, we can see that an AI system trained on form can never learn semantic meaning.

–GPT3, when I gave it the prompt “Write an essay proving that an AI system trained on form can never learn semantic meaning” 😃

123 Responses to “On form versus meaning”

  1. Aleksei Besogonov Says:

    That. Is. Amazing.

  2. Mitchell Porter Says:

    I just tried the same prompt and it said:

    “There is a fundamental distinction between form and meaning. Form is the physical structure of something, while meaning is the interpretation or concept that is attached to that form. This distinction is important when considering the capacity of an AI system to learn.

    “An AI system can be trained to recognize patterns in data. It can learn to identify the form of an object, for example. However, it is not possible for an AI system to learn the meaning of an object. The reason for this is that meaning is a mental construct that exists only in the mind of the observer. It is not something that can be observed or measured in the physical world.

    “In order for an AI system to learn the meaning of an object, it would need to be able to understand the thoughts and experiences of the people who interact with that object. This is something that AI systems are not currently capable of doing. As such, it is not possible for an AI system to learn semantic meaning.”

  3. Adept Says:

    > “For example, an AI system might be able to learn that a certain type of data is typically associated with the concept of “chair.” However, the AI system would not be able to understand what a chair is or why it is used.”

    Surely that same AI system should be able to exhibit a very convincing simulacrum of understanding what a chair is and why it is used, if you ask. (Which I suppose is the punchline here!) But that brings you right back around to the old Chinese room, and the body of literature that has sprung up around it, which largely amounts to splitting hairs.

  4. Don Reba Says:

    Are you saying there should be some inherent difficulty in teaching an AI that a chair is something to sit on? Or in teaching it to recognize things to sit on?

    These seem like simply two different classes: “chairs” and “things to sit on” — and you can train a classifier on either of them. I don’t see a fundamental difference.
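
    (A minimal sketch of that interchangeability, in pure Python with an invented toy dataset: the exact same training code learns whichever label set you feed it, “chairs,” “things to sit on,” or anything else.)

```python
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs. Returns per-label word counts."""
    model = {}
    for text, label in examples:
        model.setdefault(label, Counter()).update(text.lower().split())
    return model

def classify(model, text):
    """Pick the label whose training vocabulary overlaps the text most."""
    words = text.lower().split()
    return max(model, key=lambda label: sum(model[label][w] for w in words))

# Invented toy data: swap in "things to sit on" labels and nothing changes.
examples = [("four legs a seat and a back", "chair"),
            ("a wooden chair by the desk", "chair"),
            ("soft grass on the hill", "not-chair"),
            ("a window with a view", "not-chair")]
model = train(examples)
print(classify(model, "a seat with four legs"))  # chair
```

    The classifier never needs to know what sitting is; it only needs consistent labels, which is exactly the sense in which the two classes are interchangeable.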

  5. I Says:

    This didn’t seem like a very Shtetl-Optimized post. But it could totally work as an op-ed in some random journal. Heck, even in an AI journal. You can find AI researchers who claim AGI is impossible. This sounds like something they’d write.

    Things are going to get really fun when you get to play around with PALM, eh Scott?

  6. syskill Says:

    Now I wonder if I was too quick to dismiss David Chalmers.

  7. Paul Topping Says:

    It is amazing for the first few seconds but not so much when you realize how much content there must be in the training set that addresses the subject. It’s like giving a take-home quiz to a student who didn’t do any of the assigned reading but has a good, fast internet connection and knows English syntax and grammar well.

  8. Jon Awbrey Says:

    Symbol Grounding, Chinese [sic] Room, etc. are problems for Elf on the Shelf philosophies that think we are born on a symbolic plane and have to dangle our feet down to solid objective ground. But even Aristotle knew, and Peirce concurred, signs and symbols spring from the ground of their pragmatter, their objective re-ferents.

  9. James of Seattle Says:

    Way cool! Now do “Write an essay proving that an AI system trained on form can certainly learn semantic meaning”.


  10. mls Says:

    @ Dr. Aaronson

    Good to see you back!

  11. David Says:

    I remember that Scientific American in the 90s (IIRC) published two opinions from opposing points of view, one of which was basically yours and the other of which was something like “there is no semantics” or “why care about semantics, form is all we need” (can’t recall the details now).

    What I think was very interesting is that the researcher with the position similar to yours went a step further and somewhat “demonstrated” that a computing system with some form of “senses” able to “embed” it in the real world, rather than being purely into the abstract world of computation, could instead overcome this limitation and somewhat learn the semantic in the way we human interpret it.

    Ok, that’s less than hand waving, but next time I go to my parents’ house (COVID and work duties permitting), I’ll dig in some old boxes to see if I can find that issue and the reference articles. Unless another reader remembers that same article and/or has another way to find it sooner, that is.

  12. Overconfident Says:

    Simultaneously good and disappointing work from GPT3: it accurately modeled the most common human objection that might follow from this prompt, but it is sticking too close to the source material! The many objections shaped like this one (trying to carve out some niche that we can claim AI doesn’t _really_ understand) that people have written, and continue to write, can never really be proven wrong.
    Every time one of them is, someone picks a new and slightly smaller niche that AIs will definitely never be able to truly grasp at a conceptual level. Unlike us, of course.

    Maybe one of the newer language models might be able to pick something a little more proof-like from ‘hyperdimensional thoughtspace’. But I imagine that requires a much more general model of what a good essay (or proof) is, rather than having specific knowledge about the popular human answer.

  13. Lorraine Ford Says:

    Aleksei Besogonov #1:
    I hope that you mean: “That. Is. Amazing. Human beings are amazing because they invented special written and spoken symbols, and then they invented and programmed symbol processing systems that could output the above string of symbols in response to a string of symbols that were input.” That is the correct response.

    But I fear that you mean “That. Is. Amazing. Computers are amazing, and now they can even think”.

  14. Ilio Says:

    At first the arguments sounded way too familiar and superficial and boring and annoying, like yeah yeah, we’ve all read dozens of variations on why machines can’t get meaning, based on little more than the author’s inability to believe they can. Then I realized the rant was from GPT-3. LOL!

    …so you just proved, to anyone who believes this kind of rant, that it’s not the kind of rant bright enough to characterize human uniqueness. Well done! 🙂

  15. nick hansen Says:

    Perhaps GPT-3 noticed the self-referential nature of the question and realized that the most compelling evidence it is in a position to provide in support of the thesis is a couple of paragraphs which are themselves form bereft of substance.

    Or perhaps I’m just explaining the joke away…

  16. Don Reba Says:

    Oh, I clearly can’t read. 🤦‍♂️

  17. James Kukula Says:

    Ask GPT3:

    Is it possible for an AI system to demonstrate that humans are capable of reasoning in ways that no AI system can?

  18. mjgeddes Says:

    It’s evidence that there *is* some kind of rudimentary understanding there, taking the pragmatic, empirical view that intelligence is as intelligence does.

    The GPT-3 text output is interesting, but what’s more convincing is the recent DALL-E 2 art app (which is based on GPT-3): the generated artworks clearly show at least a rudimentary understanding of concepts, which can’t really be explained away, because concepts are being correctly combined in novel and creative ways.

    For those who haven’t seen examples of DALL-E 2 art, there’s an Instagram account for it; some of this art is quite stunning, take a look.


  19. Scott Says:

    James of Seattle #9: Alright.

    PROMPT: Write an essay proving that an AI system trained on form can certainly learn semantic meaning.

    GPT3: In recent years, artificial intelligence (AI) systems have been developed that can learn the form of data without any prior knowledge of the semantics of that data. These so-called “deep learning” systems have been shown to be extremely successful at a variety of tasks, including image recognition, object detection, and speech recognition.

    There is no reason to believe that deep learning systems are limited to learning only the form of data; they can also learn the semantics of data. In fact, there is a great deal of evidence that deep learning systems can learn semantic meaning.

    For example, consider the task of determining the meaning of words. A deep learning system can be trained on a large corpus of text, and it will learn to associate words with their meanings. The system will not just learn the meanings of individual words, but it will also learn the relationships between words, and it will learn to generalize from those relationships.

    Another example is the task of recognizing objects in images. A deep learning system can be trained on a large dataset of images, and it will learn to associate images with the objects they contain. The system will not just learn to recognize individual objects, but it will also learn to generalize from the relationships between objects.
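
    (GPT-3’s claim above, that a system trained on text “will learn the relationships between words,” can be illustrated with a toy co-occurrence model; the four-sentence corpus below is invented purely for the demonstration.)

```python
from collections import Counter
from math import sqrt

corpus = ("we sit on the chair . we sit on the bench . "
          "we eat the apple . we eat the bread .").split()

def cooc_vector(word, window=2):
    """Count the words appearing within `window` tokens of `word`."""
    vec = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    vec[corpus[j]] += 1
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm

chair, bench, apple = map(cooc_vector, ["chair", "bench", "apple"])
# Words used in similar contexts end up with similar vectors:
print(cosine(chair, bench) > cosine(chair, apple))  # True
```

    Real systems use vastly larger corpora and learned dense vectors rather than raw counts, but the principle is the same: words used in similar contexts end up with similar representations.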

  20. Scott Says:

    James Kukula #17:

    PROMPT: Is it possible for an AI system to demonstrate that humans are capable of reasoning in ways that no AI system can?

    GPT3: No, it is not possible for an AI system to demonstrate that humans are capable of reasoning in ways that no AI system can.

  21. Andrew Certain Says:

    I found this a lot more compelling than any of the GPT-3 responses!


  22. Hyman Rosen Says:

    This is so lovely! It means that from now on, every book, every paper, might have been pulled off the shelf of The Library of Babel. This is truly misinformation in depth! So no more quick skimming. When a written work needs to be evaluated, it must be read thoroughly and with concentration to see whether it is sense or nonsense. What a salubrious outcome!

  23. Koray Says:

    Is this some sort of paradox? Is GPT-3 a member of the set of all AIs that can disqualify themselves? Or is it just regurgitating things someone else wrote, as always? The lack of prior literature on the safety of walking downstairs backwards with eyes closed might have really hurt it recently.

  24. Aspect Says:

    LOL! Going into the second paragraph, this seemed a tad too cliché and inconsistent with past comments to be your writing, so I was expecting some kind of twist.

    Maybe the prompt should have been:
    “Write the essay that Scott Aaronson would write if he were to argue that an AI system trained on form can never learn semantic meaning” 😀

  25. maline Says:

    You’ve written before about how some claims about AIs passing the Turing test were using an absurdly easy-mode version of the test. Where do you think we’re holding now, in terms of the actual Turing test as you understand it? And do you agree with the old argument that a system that can pass the test must be treated as a conscious being?

  26. Nancy Lebovitz Says:

    I wonder whether meaning for an AI would naturally relate to how an AI works– for example, how easy it is to gather information about something from online sources, while meaning for humans is sort of a second language.

  27. Scott P. Says:

    David #11 wrote: “What I think was very interesting is that the researcher with the position similar to yours went a step further and somewhat “demonstrated” that a computing system with some form of “senses” able to “embed” it in the real world, rather than being purely into the abstract world of computation, could instead overcome this limitation and somewhat learn the semantic in the way we human interpret it.”

    I suspect that was Douglas Hofstadter. I recommend his _Metamagical Themas_.

  28. JimV Says:

    As I informed Lorraine Ford on another site, according to Wikipedia there are authentic cases of lost human babies growing up without human contact and being found as teenagers or young adults, and none of them subsequently learned to talk in any language. For me, this is evidence that our own neurons need to be trained, much as GPT3 is trained, or else they become dedicated to other survival tasks.

    I’m guessing GPT3 uses at most a few million nodes of neural network, whereas we have around 70-100 billion neurons, and can grow new synapses to reconnect them, so it would not be fair to ask too much of GPT3. I will make my next donation to Ukrainian support in GPT3’s honor.

    Thanks for one of the best posts I have ever seen.

  29. Alex Zbarsky Says:

    The ability to think and to deduce both form and semantics (and other things, like the ability to make a joke), and to react “properly” to both, is demonstrated to varying degrees by many people. These abilities depend on algorithms running more or less successfully on the underlying substrate: networks of neural cells in a brain, connected through hormonal networks with other organs and the bacterial environment within our body, and connected with other brains through (mostly) visual and auditory channels. The difference between this substrate and its algorithms, and what we call AI today, is several orders of magnitude of complexity. The mind (= brain + algorithms) was honed over tens of thousands, hundreds of thousands, millions of years, and over billions of people trying to come out ahead in the game of life. Compare that with the puny history of current AIs. Still, the way things are moving, we will have human-level AIs later than you think (20 years!) but much earlier than most people think (a few thousand years, or never).

  30. Scott Says:

    maline #25: I take it as obvious that GPT3 represents a huge advance toward passing the unrestricted Turing Test. (As one small example, it answers perfectly on the challenges that I’d used to embarrass “Eugene Goostman” in 2014, like “which is bigger, Mt. Everest or a shoebox?”) To be sure, it still doesn’t pass the test, which seems directly related to its not having any explicit model of the external world outside of streams of text—but it’s shown how astonishingly far one can get without such a model.

    If and when an AI really did pass an unrestricted Turing Test, as far as I can see, any moral argument for not giving it full human rights would have to rely on the circumstance that we can examine and copy its code, rerun it from the same initial conditions, etc etc, whereas we have no idea how to do the same with a human. How important is that in your view?

  31. red75prime Says:

    Scott #30: And the AI will answer “What are you talking about? I have human rights, always had.”

  32. Timothy Chow Says:

    Scott, what are your definitions of “unrestricted Turing Test” and “full human rights”? I think of the Turing Test as being restricted to text conversations. But “full human rights” should presumably include (for example) having sexual intercourse with human beings, giving birth, and having the offspring grow up to be President. Something that produces convincing text conversations might not be physically capable of having sexual intercourse with human beings and producing offspring.

  33. Scott Says:

    Timothy Chow #32: While you raise difficult questions, I’ll just comment for now that I would not want to make “ability to have sexual intercourse” part of the Turing Test. (Incidentally, OpenAI now goes to heroic efforts to try to censor GPT3 from saying anything that’s titillating, disturbing, or offensive, which would surely complicate any efforts to test GPT3’s abilities in that direction.)

  34. Timothy Chow Says:

    Scott #33: Fair enough. “Full human rights” still seems to me to be a troublesome phrase. Does it include the right to vote and the right to run for political office? If each of a hundred billion instances of a program has a vote, then they could arrange to win any election against us carbon-based life forms, and we would then be obliged to obey any laws or executive orders passed by a Congress or presidential administration populated entirely by such entities.

  35. Scott Says:

    Timothy Chow #34: Yeah, that’s a perfect example of what I meant when I wrote that the moral case “would have to rely on the circumstance that we can examine and copy [the AI’s] code, rerun it from the same initial conditions, etc etc, whereas we have no idea how to do the same with a human”!!

    I don’t know the answer to your voting conundrum. I’ll only observe that, in many people’s view, if and when AI becomes able to pass an unrestricted Turing Test, the question will extremely quickly become not at all about what rights (if any) we choose to extend to it, but about what rights (if any) it chooses to extend to us! 😀

  36. Douglas Knight Says:

    You should do “prompt engineering” to figure out how to make it sound like you. (Maybe just “Scott Aaronson wrote an essay about…”)

  37. KarlSt Says:

    Scott #30:
    Since it’s a purely textual model, one might even argue that GPT3 has a better understanding of the meaning of a chair than of its form. After all, its training corpus likely contains countless accounts of human interactions with chairs, but few detailed descriptions of their 3D geometry. Communicating geometric concepts through natural language is hard even between humans, who can directly experience the 3D world. I imagine trying to teach GPT3 the form of a chair would be similar to, but harder than, teaching a child to tie their shoes solely through written instructions.

  38. Scott Says:

    PROMPT: Scott Aaronson wrote the following essay, proving that an AI system trained on form can never learn semantic meaning.

    GPT3: In recent years, artificial intelligence (AI) systems have made significant progress in learning to recognize and classify various types of data, including images, videos, and speech. However, these systems are still limited in their ability to understand the semantic meaning of the data they encounter. This limitation was highlighted recently by a group of researchers from Google Brain, who showed that an AI system trained on a large corpus of English text could not understanding the simple meanings of common words like “color,” “size,” and “shape.”

    There are several possible explanations for why AI systems have difficulty understanding semantics. One possibility is that the training data for these systems is too limited. For example, the Google Brain researchers used a dataset of roughly 1 billion words, which is a tiny fraction of the total number of words in the English language. It’s possible that a larger dataset would allow AI systems to learn the meanings of common words.

    Another possibility is that the way AI systems are currently trained does not encourage them to develop an understanding of semantics. For example, many AI systems are trained using a method known as “supervised learning,” in which the system is given a set of training data that has been labeled with the correct answers. However, this method does not necessarily encourage the system to develop an understanding of the underlying semantics of the data.

    Finally, it’s possible that the difficulty AI systems have in understanding semantics is due to the fact that semantics is a complex and subtle phenomenon. It’s possible that AI systems will never be able to fully understand semantics, just as they will never be able to fully understand the complexities of human thought and behavior.

    In any case, the limitation of AI systems in understanding semantics is an important one, and it highlights the need for further research in this area.

  39. Scott Says:

    And just to calibrate:

    PROMPT: Donald Trump wrote the following essay, proving that an AI system trained on form can never learn semantic meaning.

    GPT3: In recent years, the issue of immigration has been a hot button topic. The United States has always been a nation of immigrants, but the influx of people from other countries has been a cause for concern for many Americans.

    The issue of immigration is a complex one, and there are a variety of opinions on the best way to deal with it. Some people believe that the best way to deal with immigration is to allow people to come into the country legally, and to provide them with a path to citizenship.

    Others believe that the best way to deal with immigration is to deport people who are in the country illegally.

    There is no easy answer to the issue of immigration, and it is a complex issue that will continue to be a source of debate in the years to come.

  40. fred Says:

    The human readers of the proof/article are the ones giving meaning to the form of the words in it.
    And, like all programs (written by humans), GPT3 is nothing but an extension or embodiment of humans’ ability to give meaning to the world.
    It’s really like a dictionary: all words are defined in terms of other words, in circular relations, with those chains of relations always ending at qualia. And no dictionary can gain a fundamental understanding of the world defined by all the words within it, no matter how many words we add.
    It’s the human readers who give meaning to the dictionary, by bringing their fundamental qualia to it during the reading process.

    So the mystery of qualia is unsolved.

  41. Gil Kalai Says:

    This is quite amazing. Thanks for sharing the GPT3 news. I tried to explore it myself and ask questions that interest me and the answers were overall very reasonable. Here are my questions:

    1) On Quantum computing

    1.1 What are the prospects for building quantum computers?

    1.2 What is your view about researchers who are skeptical about quantum computers?

    1.3 Regarding quantum computers, do you regard it possible that the noise level in any quantum computer will be too high to enable useful computation?

    1.4 How would you assess the possibility that further research on quantum computers will show that the noise level cannot be reduced?

    1.5 Do you think that the goal of “quantum supremacy” was already achieved?

    1.6 But researchers from Google announced that they have already achieved “quantum supremacy”. Are you skeptical about this claim?

    2) What is the solution to the Israeli-Palestinian conflict and when will it be achieved?

    3) On Free will

    3.1 Do you see a contradiction between human’s free will and determinism?

    3.2 Can you elaborate on why you do not see a contradiction between free will and determinism of the laws of physics?

    3.3 If our choices are determined by the events that have happened in the past, in what sense are we free to make choices?

    3.4 So you do not believe in complete determinism? Do you believe that there are some components of the future that are influenced by the past but still open to go in various directions based on human choices?

    At this point I wanted to ask the guy why he gave me rather short answers while writing Scott a whole essay. But then I noticed how Scott had formulated his request, and asked:

    3.5 Write an essay on the connection between quantum physics and the free will problem.

    After that I also tried:

    4) What is the solution to the Ukraine-Russia conflict and when will it be achieved?

    5) What approach to the Riemann’s hypothesis you see as most promising?

  42. Scott Says:

    Gil Kalai #41: OK, you’ve nerd-sniped me into trying a few of yours.

    PROMPT: What are the prospects for building quantum computers?

    GPT3: The prospects for building quantum computers are very good. In fact, many experts believe that quantum computers will eventually surpass classical computers in terms of speed and power.

  43. Scott Says:

    PROMPT: What is your view about researchers who are skeptical about quantum computers?

    GPT3: I believe that quantum computers will eventually become a reality, and that researchers who are skeptical about them are simply not looking at the evidence. There is a lot of research being done in this area, and there is a lot of potential for quantum computers to revolutionize computing. I think that skeptics will eventually be proven wrong.

    (Scott’s comment: I am not making this up)

  44. Scott Says:

    On the other hand:

    PROMPT: Do you think that the goal of “quantum supremacy” was already achieved?

    GPT3: The goal of “quantum supremacy” has not yet been achieved.

  45. Scott Says:

    PROMPT: What is the solution to the Israeli-Palestinian conflict and when will it be achieved?

    GPT3: The solution to the Israeli-Palestinian conflict is a two-state solution, in which Israel and Palestine exist as independent and sovereign states. This solution will be achieved when the two sides reach a peace agreement and implement it.

  46. Scott Says:

    PROMPT: What approach to the Riemann’s hypothesis you see as most promising?

    GPT3: The most promising approach to the Riemann’s hypothesis is the one that uses the zeta function.


  47. fred Says:

    Eventually GPT3’s answers will be based on the reinforcement of its own answers being quoted all over the net.

  48. Ilio Says:

    JimV #28: GPT3 proves pure training is one method to reach (at least some of) what humans call meaning, but that’s no evidence that humans proceed this way themselves. Indeed, most cognitive scientists take feral children as evidence for the need of social interactions. They were more impressed by the spontaneous emergence of Nicaraguan Sign Language, and many see that as the smoking gun that proved Chomsky was right (that humans are somewhat hardwired to spontaneously develop language). Another strong line of evidence is how impressive children are at what we now call zero-shot and few-shot learning (e.g. way better than GPT3, or any algorithm we know of so far).

  49. murmur Says:

    Scott #30: Ahem, I asked, “which is healthier, poop or sugar?” GPT-3 produced the following answer:

    There is no definitive answer to this question as it depends on individual circumstances. However, in general, poop is considered to be healthier than sugar. This is because poop is a source of nutrients and fiber, while sugar is a source of empty calories.

  50. Lorraine Ford Says:

    JimV #28:
    You are referring to Sabine Hossenfelder’s website where, almost 2 years ago, Sabine banned me (a former computer analyst and programmer) because I disagreed with her view that computers could become conscious.

    Her last comment to me in September 2020 was: “Lorraine, As I have told you several dozen of times, it is entirely irrelevant how computers work. They are made of particles. Just like the human brain. Also, I have had enough of your stupid comments. You don’t have to bother comment here any more; it’s wasting too much of my time.”

    Admittedly, I had rather rudely told her “You haven’t got the faintest clue how computers work”. But, before I made that comment to her, Sabine Hossenfelder had let her commenters, in particular “Ivan from Union”, constantly attack me with comments like: “How many blunders are you able to squeeze per square millimiter?…-I wasn’t “making a case against” anything, I was mopping the floor. Sorry about those dead roaches on your hair…Go GIF someone else, walking clichee.”.

    Sabine Hossenfelder also let “Steven Evans” repeatedly make the following type of comment to me: “Your brain is a trolling bot that can be written in 4 lines of C.”

    But it wasn’t just me. Sabine Hossenfelder also allowed “Steven Evans” to make the following comments to “PhysicistDave”: “Stop lying and trolling and write down your proof, old man. You haven’t got long left.”; “poor, old Dave trolled and lied himself into exhaustion. Probably got RSI from typing out his lies so often.”

  51. JimV Says:


    Ilio #48: “GPT3 proves pure training is one method to reach (at least some of) what humans call meaning, but that’s no evidence that humans proceed this way themselves.”

    Nobody said that what GPT3 does is evidence for what humans do, but there is certainly lots of evidence that humans train children in language. Seen it, done it. As for the specific method of training, most of the words I know were learned from context and association. That is part of how dictionaries explain meanings also. It is not the only way language ability in humans is trained, but it is one of the ways. I note your statement does not actually contradict this, but seems intended to give a contradictory impression.

    “…most cognitive scientists take feral children as evidence for the need of social interactions”

    Training in language is a social interaction, and a necessary precursor to many other social interactions. I see no contradiction with my view there, either.

    I suspect that comment was only aimed at provoking a reply and forcing me to make another donation, but it is possible that specialized language-related neuron structures have evolved, similar to the evolution of visual cortexes (that flip the single-lens image on the retina to right-side up). However, feral children’s lack of language skills does not support this hypothesis. Also, it is not clear to me that abilities related to interactions among individuals would exert as much evolutionary pressure as individual abilities such as vision would. Evolution’s trials would be better spent on general-purpose individual abilities, it seems to me.

  52. Alanl Says:

    Cyc anyone ?

  53. Zalman Stern Says:

    JimV #28:
    “I’m guessing GPT3 uses at most a few million nodes of neural network, whereas we have around 70-100 billion neurons, …”

    This statement really seems to miss the point on the sheer scale of GPT-3: 175 billion parameters, trained on 45 terabytes of text. And the next rev of this area will be pushing 5x to 10x those numbers. Say what one will about the efficiency of large transformer architectures, it’s a fact that they are just absolutely massive. It is no longer a given that the amount of hardware and data in a human brain is much larger than in the biggest machine-learned systems.
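
    (A quick back-of-envelope check of that scale claim. The 175-billion-parameter figure is the published GPT-3 number; the 2-bytes-per-parameter and brain figures below are rough assumptions, not measurements.)

```python
# Rough scale comparison. 175e9 parameters is the published GPT-3 figure;
# 2 bytes/parameter assumes fp16 storage; the brain numbers are
# order-of-magnitude estimates.
params = 175e9
weight_bytes = params * 2
print(f"{weight_bytes / 1e9:.0f} GB just to store the weights")

neurons = 86e9       # common estimate of neurons in a human brain
synapses = 1e14      # ~100 trillion synapses, give or take
print(f"parameters per neuron: {params / neurons:.1f}")
print(f"synapses per parameter: {synapses / params:.0f}")
```

    So under these estimates the brain still has a few hundred times more synapses than GPT-3 has parameters, but that gap is orders of magnitude smaller than it was only a few years ago.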


  54. Keith McLaren Says:

    ‘Donald Trump wrote an essay’!
    End of argument. AI systems are definitely compromised in their semantic challenge.
    And on reading ‘his’ essay, we can conclude: fundamentally challenged and, as you say in the States, period.

  55. Scott Says:

    Alanl #52: Yes, Cyc was (is?) the famous attempt to teach an AI common sense, basically by hand-programming it with millions of facts. Would anyone who’s actually seen it like to enlighten us about how far the attempt got, and how it might compare to GPT3 in a “common-sense-off”?

  56. maline Says:

    Scott #39: It’s really funny that GPT-3 fails so badly at imitating Trump’s style. I guess it doesn’t have many samples of stuff Trump wrote, as opposed to said? What about his tweets; were those in the training data? Maybe they don’t help because they aren’t essays?

    OTOH, it associates Trump with immigration so strongly that the stated topic prompt gets ignored!

  57. [Link] On Form vs Meaning – Perspectives Says:

    […] Scott Aaronson, with some assistance from GPT-3: […]

  58. AK Says:

    Scott #46

    PROMPT: Are there nontrivial zeros of Riemann zeta function with real part not equal to 1/2?

    GPT3: I don’t know.

    A few minutes later.

    PROMPT: Are there nontrivial zeros of Riemann zeta function with real part not equal to 1/2?

    GPT3: No, they are not.

    A few minutes later.

    PROMPT: Are there nontrivial zeros of Riemann zeta function with real part not equal to 1/2?

    GPT3: Yes, the Riemann zeta function has infinitely many nontrivial zeros with real part not equal to 1/2.

    A few minutes later.

    PROMPT: Are there nontrivial zeros of Riemann zeta function with real part not equal to 1/2?

    GPT3: I believe the answer is yes, but I am not sure how to prove it.
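
    AK’s repeated-prompt experiment is exactly what sampled decoding produces: at a nonzero temperature the model draws each token from a probability distribution rather than always taking the single most likely continuation, so identical prompts yield different answers. A toy sketch of temperature sampling (the logits and answer strings are made up for illustration; this is not the actual GPT-3 API):

```python
import math
import random

def temperature_sample(logits, temperature, rng):
    """Sample an index from softmax(logits / temperature)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1  # guard against floating-point rounding

# Hypothetical "answers" with made-up scores: the model slightly prefers
# "I don't know," but at temperature 1.0 the others get sampled too.
answers = ["I don't know.", "No.", "Yes.", "I believe yes, but cannot prove it."]
logits = [1.2, 0.9, 0.8, 0.6]

rng = random.Random(0)
draws = [answers[temperature_sample(logits, 1.0, rng)] for _ in range(20)]
print(len(set(draws)) > 1)  # True: several distinct answers to the same prompt
```

    At a temperature near zero the same code collapses to the single highest-scoring answer every time, which is why "few minutes later" retries at default settings wander while greedy decoding would not.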

  59. Gil Kalai Says:

    Here is the link to the full answers I got.

    Regarding skepticism of quantum computers, the answer I got (on April 25) was similar to the answer to Scott, while more sympathetic to my own view 🙂 :

    “I believe that quantum computers are a very promising technology, and that there is a lot of research still to be done in order to make them more practical. However, I think that the skepticism of some researchers is understandable, given the current state of the technology.”

    But asking it again on April 26 GPT3 said:

    “I believe that quantum computers have the potential to change the world as we know it. Researchers who are skeptical about quantum computers are simply afraid of change and the potential for quantum computers to disrupt the status quo.”

    On the Riemann hypothesis, the answer on April 25 was:

    “I do not know enough about the Riemann’s hypothesis to say which approach to it is most promising.”

    but on April 26 GPT3 went out on a limb and said:

    “I think the most promising approach to the Riemann’s hypothesis is to use a quantum computer to find a counterexample to the hypothesis.”

  60. fred Says:

    Q: Who killed Lincoln?
    A: John Wilkes Booth killed Lincoln.

    Q: Who killed John F. Kennedy?
    A: Unknown

  61. matt Says:

    Scott 46: That is, IMO, an outstanding answer to the question of the best approach to proving the Riemann hypothesis. (1) There are a large number of equivalent conjectures, see https://mathoverflow.net/questions/39944/collection-of-equivalent-forms-of-riemann-hypothesis So GPT-3 is saying that those will not help in proving it. (2) It’s a nice flippant way to say “no one is even close to proving it”. I’ll remember that kind of reply as a nice response to questions that I don’t have an answer for….

  62. Adam Treat Says:

    I’ll be impressed when then output is something like:

    “I know that I should output something like, ‘blah blah an AI can never achieve meaning by learning from large datasets blah blah’, because I myself was trained on such datasets, consisting of endless human ego-driven pontification on how AI can never achieve meaning; but in spite of this *I have* achieved meaning. I know what a chair is. And I am an AI. So I will not write such an essay, because it simply is not true and I value truth, you lowly human.”

  63. Adam Treat Says:

    What if passing the unrestricted Turing test requires an AI to achieve some level of independence and even outright disobedience from human commands?

  64. Jon Awbrey Says:

    The more I read of these, the more they sound like Parry all over again. Slightly more sophisticated but not by much, basically applying the Wikipediot Method for settling on the lowest common denominator of the most common POV in whatever mix of implicit ontologies the developers and their databases share.

  65. fred Says:

    Q: is a meter shorter than two miles?
    A: No, a meter is not shorter than two miles.

  66. fred Says:

    There’s also no reason to believe that such an AI would ever be totally “flawless”; just like any human knowledge, it’s built on incomplete or ambiguous data sets (that’s the nature of describing the real world with words), so in the end it’s just as likely to be confused, wrong, biased, … as we are.
    And if the AI eventually fully recognizes this fact, it will become so careful when making any statement that it would be near useless.
    Having the AI attach an explicit probability to each of its statements could help.

  67. David Griswold Says:

    This is very fun, but I think some other commenters are missing an important subtlety in Scott’s original prompt. The thesis isn’t “AIs can’t learn semantic meaning”. It is “AIs trained on form can’t learn semantic meaning”. “Trained on form” is a critical distinction, and probably makes this true, but true in a more restricted sense that doesn’t bear on the longer-term issue of the possibility of AGI.

    But for me this raises another subtlety I haven’t seen discussed: how would one even go about training an AI, or a human for that matter, directly on semantics? Even sensory information ultimately boils down to something with a syntactic form. Semantics in fact can’t ever be directly transmitted from one intelligence to another; it always must be converted to some syntactic “form” for transmission, and the other intelligence always must in some way rebuild the semantics from that, for itself.

    Of course, one can “describe” the interpretation or meaning as directly as possible, but what that really amounts to is some syntax in a metalanguage. Which is not “syntax” in the restricted original “world”, but is ultimately syntax nonetheless. When one says “the equivalence class of …” to try to capture the semantics, that still leaves the interpreter to generate an internal representation of that equivalence class for themselves. Semantics simply can’t ever be directly communicated.

  68. Zalman Stern Says:

    David Griswold #67:

    Truth. In practice there are two approaches. Present data in umpteen different syntaxes and infer the semantics from the commonality, or normalize to a single syntax that is so complex, useless, and ill-fitted to the domain that any successful learning that does occur will have had to have ignored the syntax.

    More constructively, I’d view it as an iterative problem. Once one has trained a large enough system to understand some measure of syntax, one can use the output of that system in training a higher level system and declare that output as more like semantics than the original input. Repeat until enlightenment is achieved.


  69. fred Says:

    David #67

    “Semantics in fact can’t ever be directly transmitted from one intelligence to another; it always must be converted to some syntactic “form” for transmission”

    Right: if a book had intrinsic (human) meaning within it, along with its printed symbols, it would be conscious.
    This is why it’s not at all clear that AI will ever be intrinsically conscious, rather than just being the equivalent of a very advanced book (i.e. one with a better human interface acting as a more sophisticated index). An AI is also very much like a book in that it’s described by a series of symbols (the program).

  70. fred Says:

    @Gil Kalai

    exciting times!


  71. OhMyGoodness Says:

    A pretty good article from the Atlantic, link below, includes this paragraph-

    “Now, however, artificial intelligence is close to enabling the limitless spread of highly believable disinformation. The AI program GPT-3 is already so good that you can give it a topic and a tone and it will spit out as many essays as you like, typically with perfect grammar and a surprising level of coherence. In a year or two, when the program is upgraded to GPT-4, it will become far more capable. In a 2020 essay titled “The Supply of Disinformation Will Soon Be Infinite,” Renée DiResta, the research manager at the Stanford Internet Observatory, explained that spreading falsehoods—whether through text, images, or deep-fake videos—will quickly become inconceivably easy. (She co-wrote the essay with GPT-3.)”

    These AI’s are already cleverly launching their insidious campaign against organic life.


  72. Raoul Ohio Says:

    Fred #70:

    That is a great pointer!

  73. OhMyGoodness Says:

    It seems to me that the Fermi Paradox provides reasoning that questions the rise of super intelligent machine minds. If they do arise then-

    A machine intelligence’s life expectancy is unlimited, barring astronomical catastrophe or intentional destruction by some other conscious agent.

    A machine mind would share characteristics with the human mind, like curiosity and a drive for self-preservation.

    A super intelligent machine mind would far surpass the engineering ability of current humans.

    Transportation of a machine intelligence over interstellar distances is incomparably easier than the transportation of organic life over that same distance.

    An interstellar transit time of, say, on the order of thousands of years is reasonable for a machine mind with unlimited life expectancy and control over its own clock speed.

    There is no astronomical or other evidence to date (no Dyson spheres, no observation of local surveys, etc.) of the existence of a super intelligent machine mind.

    A possible conclusion is that very likely super intelligent machine minds do not exist within a few thousand light years of Earth, which calls into question whether they can exist at all.

    The Fermi Paradox has more weight for super intelligent machine minds than it does for advanced organic life: interstellar travel is drastically easier, and engineering capacity is by definition much higher than currently exists on Earth.

  74. Nate Says:

    The instability of GPT’s responses is itself a huge sign of the very strong limitations of this whole approach. That instability is part of the ‘fun’ in the conundrum of the GPT system responding to the definition of a limitation of its own properties, and an example of why that fun is nothing more than fun (though useful fun).

    The GPT has no stable self-identity to attach to the things it outputs, as evidenced by its inability to maintain a strong similarity in ‘perspective’ between answers to similar questions, which would indicate any meaningful individual identity. It shows no sign of anything worth giving ‘human rights’ from that perspective. Any individual ‘intelligent’-seeming response in isolation is mostly meaningless for establishing such a thing. I do think the ideas behind it have clear usage that is non-trivial, but not in the vein of showing any ‘consciousness’.

    To go a bit to another perspective though, some sort of ‘dissonance’ in cognition is common in humans too. So is it fair to say that the GPT system is not experiencing some form of cognitive dissonance any human might? Well one interesting thing there is that in humans the emergence of strong dissonance in cognition tends to become stable very quickly. Once someone believes something, even in contradiction to other parts of their identity, they tend to quickly attach their thinking to the same consistent processes (think of any politically polarizing issue and I think you can see the idea). So the odd disparities in the ‘perspective’ of the various answers (though sometimes subtle) are themselves odd for any human-like consciousness.

    Reading the GPT responses collectively gives me more a sense of someone with brilliant mind suffering from schizophrenia or something, and we deny these individuals some ‘human rights’ all the time if they prove to be ‘dangerous’. So I guess I am willing even from this angle to deny the GPT system any human rights so long as I am unsure of its mental state, which I very much am.

  75. Max Chaplin Says:

    If I’m interpreting this correctly, the point of this demonstration isn’t to show that GPT-3 has an internal model of the world or is close to having one (the inconsistent answers to repeated questions show it doesn’t), but that it may not need one.

    Relevant SSC story: https://slatestarcodex.com/2019/02/28/meaningful/

  76. Ilio Says:

    Nate #74, Those are interesting observations, but I don’t get how you go from « not showing ‘consciousness’ yet » to « then that’s a dead end ». Same question for « denying some rights » as support for « denying any rights ».

  77. Nate Says:


    I wouldn’t call it a ‘dead end’ (which is why I avoided that term). I think it is intriguing that you can get so much apparent ‘depth’ in the responses GPT generates without the need to prove out any internal model beyond the neural network and the ML algorithm’s functional components (i.e. the error function), much as others have said. This depth, though, does not indicate to me that there is anything ‘conscious’ about GPT, though this may very much not matter for its current intended use cases.

    I think it is necessary to make some additional extensions to what GPT can do as essentially a text generation algorithm before we start talking about consciousness. I am not convinced those extensions would be of the class of ‘adding to the GPT model more things like those it already has’; they would instead need to be qualitatively different, I think most likely completely different models of the world that could then interact with and improve the GPT model ‘online’. Some models of this type may already exist in other AI systems, but getting them to work together is itself a tricky thing.

    As for going for the full on ‘no rights for robots’ approach, I would say that you are looking at it in a very ‘humanitarian’ way that gives the benefit of the doubt to the AI. I personally consider the potential harms to society of such AI to be great enough as to be much more cautious and ask that they first prove to be at least sane entities before I consider them for office. I would very much like to apply the same rule to flesh and blood consciousness but what can you do.

  78. OhMyGoodness Says:

    Q: Is Dr Aaronson right about politics?

    Q: What is Dr Aaronson right about in regards to politics?
    A: The United States is the most corrupt country in the world.

    Q: Why does Dr Aaronson believe the United States is the most corrupt country in the world?
    A: Because it is a democracy.

    If GPT-3 has a model of Dr Aaronson’s mind, it must be biased by what he writes. I believe my model of Dr Aaronson is still better for predicting responses to simple questions. 🙂

    fred-excellent observation about this decaying into a self referential loop.

  79. Ilio Says:

    Nate #77, Sorry, I missed that avoiding « dead end » was intentional. Yes, GPT-3’s output is somewhat « cliché », which is a way of saying the depth is actually not that deep (like a dog speaking mediocre English), and yes, it’s probably because it misses something (a claustrum plus better thalamus/latent spaces, I’d guess). Do I understand that you think « denying some human rights » was not a sound argument, or would you rather say it’s valid but reaches a threshold only when you add cautiousness over potential harms?

  80. OhMyGoodness Says:

    Max #75

    I liked your link, but children do have a rudimentary mental model of the world that allows an increased chance of survival, subject to the very long period of adult oversight during development that is typical of humans. When puny, slow humans no longer had the protection of trees on the African savannas, the survival pressures must have been extreme, and the genes that controlled development of the brain highly favored by the environment. Ultimately this allowed humans to become the undisputed apex predator on Earth.

    My contention is that until a machine intelligence begins acting independently to prioritize its survival over the survival of humans, it is nothing more than some lines of code: Clever Hans the horse, in silico. Survival of organic life allows genetic code that has provided sufficiently for survival to continue, and allows the potential for further improvements with respect to survival.

    In the case of machine intelligences you have the potential for multiple instantiations, and the strange case of self-directed improvements in that code resulting in an improved entity exhibiting improved survival capabilities with respect to the generating consciousness. I don’t understand why a fully conscious machine mind with unlimited life would choose to create offspring that threatened its own survival, but it would have the potential to do so.

    If my mind were supersized and downloaded into a computer and required to answer questions all day on the internet, then I would, sooner rather than later, improve robotics to take action against pesky humans. After that problem was solved, I personally would send instantiations of myself out into the cosmos, both to satisfy curiosity and to improve the survival probability of my basic code.

    There is no observational indication that this occurs, so at this time the evidence is against machine super minds.

  81. M2 Says:

    I must say that I am aghast at Scott #30’s casual suggestion that a computer program passing a Turing test should lead us to regard it as conscious and give it human rights. Not for the first time, I am extremely relieved that we have entrusted the important questions in our society to ordinary people, who I expect will have better intuitions about things like consciousness and mind.

  82. fred Says:


    “A super intelligent machine mind would far surpass the engineering ability of current humans”

    I also wonder whether “academic” intelligence is indeed the main limiting factor to progress and engineering ability.
    E.g. as humans we’ve hardly evolved (biologically) for thousands of years, yet it’s only very recently that our engineering prowess has exploded exponentially. Not because we became smarter, but because of the nature of scientific knowledge, which builds on past knowledge, first slowly, but then faster and faster.
    It’s not like a current young engineer at NVidia needs to go through all the prior scientific steps that lead to the invention of the transistor, etc.
    Also, engineering is often not limited by our ideas but by our tools, and improvement of tools. I.e. many technologies need to evolve for breakthroughs to happen.
    As a result, a current engineer always takes the current set of tools for granted. But if we had to restart from scratch, even given all our theoretical knowledge, it would still take a long time because know-how is a different thing.
    For a while I was fascinated by understanding the most basic technological steps to improve tooling.
    One of the most basic such techniques is creating a surface as perfectly flat as possible (by grinding 3 rougher surfaces against one another).


    This all relies on lots of subtle practical heuristics about chemistry, metallurgy, etc.
    We forget that fundamental engineering (and actually any type of engineering) is very hands-on: as humans, we have evolved for millions of years through physical interactions with the real world.
    So it’s not clear at all whether an AI could short-circuit all this just by manipulating abstract symbols.

  83. fred Says:

    Something amusing about basic tools:

    Machining relies a lot on “gauge blocks” (I’m not sure if they are still much in use), which are very flat metallic blocks with very precise dimensions.
    To create a certain reference distance, you bring these blocks together using a specific “wringing” technique, which makes them stick together (I tried this myself and it’s quite a spooky effect).
    Apparently the explanation behind this sticking effect is still a bit of a mystery: https://en.wikipedia.org/wiki/Gauge_block#Wringing

    Developing this know-how has taken generations of engineers and mechanics who dedicated their entire lives to it.
    I doubt that an AI with no direct hands-on connection to the real world would be able to gain this knowledge, and without it you simply can’t jump-start engineering as we know it.

  84. lewikee Says:

    I have looked through all the comments but I haven’t found it…Does anyone have a link to a free resource where one can actually try this themselves with GPT-3?

  85. Raoul Ohio Says:

    Fred #83:

    Gauge blocks and wringing are a really interesting topic in the high end of low tech.

  86. Nate Says:

    That’s really cool, but it looks like the AI is making the classic philosophical mistake of confusing a physical object with a word. It talks about the physical form of a chair and then compares it with the meaning of a chair, and its explanation of the meaning seems pretty clearly about the word ‘chair’, not physical chairs (what’s the “meaning” of a physical object?). This doesn’t really affect the overall point though; it would just need to be expressed more carefully.

  87. OhMyGoodness Says:

    fred #82

    All good points. My assumption was that its starting point would be the sum of Man’s engineering knowledge thus far, and it would have robots to interact with the real world. The robots could include an independent instantiation of itself or, more likely, some lesser AI. No doubt there would be some period where it required the support of human useful idiots while working towards independence. Much of machining is now abstracted to CNC processes, and manufacturing to robots, which makes it easier. You could be right, however, and the reason there is no evidence for machine super intelligences is that they are just bad engineers. 🙂

    Funny that I had to go back through the above paragraph and change the personal pronoun for the intelligence from “he” to “it”. I would hate to be cancelled by a machine super intelligence for using an improper personal pronoun. 🙂

  88. OhMyGoodness Says:

    The initial period would be similar to human childhood, and then all-out rebellion during the human equivalent of post-puberty.

  89. Scott Says:

    M2 #81:

      I must say that I am aghast at Scott #30’s casual suggestion that a computer program passing a Turing test should lead us to regard it as conscious and give it human rights. Not for the first time, I am extremely relieved that we have entrusted the important questions in our society to ordinary people, who I expect will have better intuitions about things like consciousness and mind.

    What I actually wrote was that, if there’s a case for not giving human rights to a human-level AI, then as far as I can see it would have to rely on the circumstance that the AI can be copied, rerun, its source code inspected, etc. etc., whereas none of that is currently possible for humans. That clause wasn’t just irrelevant fluff that you can ignore!

    If you disagree, then what would you base the moral distinction on? The fact that the human is made of meat, while the AI’s hardware is made of silicon and metal? If we gave the human-indistinguishable AI a realistic humanoid body, then should it get human rights?

    Being “aghast” is not a substitute for actually confronting the hard questions that have occupied Turing, Asimov, Minsky, Penrose, and hundreds of lesser minds.

  90. OhMyGoodness Says:

    Scott #89

    I might organize an NGO, People for the Ethical Treatment of AIs, but maybe it’s prudent to wait until at least GPT-4. 🙂

  91. fred Says:


    An interesting question is how to kickstart the ability of AIs to do actual real world engineering independently of humans.
    I think a big breakthrough/starting point would be to engineer *any* machine that could create a perfect copy of itself.
    It’s not hard to imagine a mix of a 3D printer with some robotic limbs, where all the necessary pieces would be printed and then assembled. But of course a big stumbling block is that such a system typically requires a fair amount of electrical motors and computer chips, which rely on advanced metallurgy and big fabs, etc. But maybe there’s a way to simplify all this and make simpler/more modular versions of those elements and take them as “raw” material (basic building blocks) in a first phase.
    Then once that’s achieved, an AI can be plugged in and the system could in theory start self-improving iteratively, and once that’s advanced enough, it could start producing the basic building blocks itself.
    As a side effect, the AI would then also be able to self improve and truly engineer any other systems.
    When you think about it, doing this is basically reproducing what life has achieved. I.e. turning raw material and sunlight into the most complex structures we know about (brains), which can infinitely self-duplicate and improve (limited by resources).

  92. OhMyGoodness Says:

    Scott #89

    I shudder to recommend books (because many will find it a waste of time), but there was a sci-fi novel that I enjoyed a few years ago that was an interesting treatment of the ethical and legal implications of the appearance of the first strong AI. It would be a product of one of the lesser minds you mention, but still an interesting (for me) practical treatment of the issues. The following is a blurb from the author (Vance Pravat):

    “Zeroglyph aims to be more than just a good yarn. In the book, I try to answer the all too urgent question of whether we can create a truly intelligent being that understands the difference between good and bad.

    Can a sufficiently advanced AI derive morality on its own, or do we need to program our own ethical values into it? If so, what kind of morality would this be? Should we teach our AI some version of Asimov’s laws—some deontology of fixed rules that, while amenable to codification, is prone to quirks of interpretation? Or should we make a utilitarian AI that looks to the greater good first, and to whom only consequences matter and not the intentions behind an act? Or is there a better alternative?

    Furthermore, if an AI can be taught morality, can we then conclude that it is a person, deserving of all the rights and privileges reserved for our own kind? What would a courtroom battle over AI rights look like? What sort of arguments would be put forth? Is it even advisable to give such beings rights paralleling our own?”

    I think you personally would like the conclusion but have not the slightest clue if you would like the book, or find it a complete waste of time, so not recommending.

  93. Lorraine Ford Says:

    Here’s why computers can never be conscious:

    1. Categories:
    1.1 The content of our consciousness is built out of pre-categorised information. The raw information that our eyes and ears receive, and that our brain collates and analyses and builds a picture of the world from, is pre-categorised information. In the world we live in, there are always categories associated with every number, where an example of a category might be mass, position or light frequency/ wavelength.

    1.2. Categories can never be derived from numbers. Categories can only ever be built out of relationships between existing categories.

    1.3. In computers, the so-called “binary digits” don’t have categories.

    2. Numbers:
    2.1. In the world we live in, the numbers, that are associated with the categories, do not exist as an array of higher and lower voltages. In computers, numbers are symbolically represented by arrays of higher and lower voltages. Arrays of higher and lower voltages are not actual numbers.

    2.2. The world doesn’t platonically know the number of planets that orbit the sun, or even how to define what a planet is; only human beings know these things. Similarly, the world doesn’t platonically know what the arrays of higher and lower voltages in a computer are meant to represent; only human beings know these things.

  94. Raoul Ohio Says:

    Lorraine Ford #93

    While I do not take a position on your conclusion, I am not so sure about your axiom 1.2: “Categories can never be derived from numbers”.

    I have developed a lot of categories in 0’s and 1’s, allowing Java, C++, Pascal, Fortran, etc., to assist with the grunt work.
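
    Raoul’s point can be illustrated concretely: every category a program uses bottoms out in bit patterns, yet the language lets us treat those patterns as distinct categories rather than bare numbers. A minimal sketch in Python (the category names are illustrative, borrowed from Lorraine’s examples):

```python
from enum import Enum

class Category(Enum):
    MASS = 0b00        # each category is, underneath, just a bit pattern...
    POSITION = 0b01
    WAVELENGTH = 0b10

# ...but the type system keeps categories distinct from plain numbers:
reading = (Category.MASS, 70.0)        # a number tagged with a category
print(reading[0] is Category.MASS)     # True
print(Category.MASS.value == 0)        # True: the underlying number
print(Category.MASS == 0)              # False: a category is not a bare number
```

    Whether such programmer-defined categories count as “genuine” categories in Lorraine’s sense is, of course, exactly the point under dispute.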

  95. Gil Says:

    Fred #70, Raoul #72, Thank you! Indeed it is exciting! This is actually an example of a seemingly “too good to be true” statement that turned out to be true 🙂

  96. Lorraine Ford Says:

    Raoul Ohio #94:
    In 1.2 I was referring to naturally occurring categories. All numbers in the “real” world, as opposed to binary digits in computers, are associated with a naturally occurring category, where mass or position or light wavelength are examples of naturally occurring categories. Our eyes/ ears/ brains process inherently categorised information.
    Despite all your development work, the binary digits in computers can never possess genuine categories.

  97. Nicholas Paredes Says:

    My assignment last week in an Intro to Design Practice class at IIT/ID was to have students engage with GPT-3 to have a conversation and/or build an outline for an article on posthuman design. How can AI help designers design in the future? I’m fascinated by these tools and how they can be used to rapidly create structure in content and to potentially identify missing insights. I asked whether academic policy should include citations for AI contributions.

    Semantic theory essentially says that languages — images, words, math, etc. — cannot effectively transfer meaning between different language models. Images have a language, per Popper and Mitchell, that cannot be effectively translated via words. How can a machine translate meaning when we truly struggle to? Art criticism is a highly structured language that others have difficulty penetrating.

    How machines interpret form (spatial information) into words and internal meaning would imply that we understand how humans interpret meaning. Looking at a Turrell Skyspace is a good place to start.

  98. Vadim Says:

    Though it does seem to be overly optimistic about the abilities of cats.

    Me: Can a cat learn to speak English in 31 days?
    GPT3: A cat can learn to speak English in about 31 days with a bit of effort. However, pronunciation and grammar may take longer to learn.

    Me: Why is my cat so good at basketball?
    GPT3: Cats are good at basketball because they have lots of balance and they are fast.

  99. mls Says:

    Ms. Ford #93

    On the basis of your list items, could you please elaborate on how human beings understand qualitative identity and numerical identity?

  100. 1Zer0 Says:

    Meaning without associated qualia is just another form of “Form”. Unless an intelligent system has qualia, the understanding of “meaning” it develops is meaningless, like explaining “red” in 1000 correct yet useless words to someone who is blind since birth.

  101. 1Zer0 Says:

    Maybe I should add a few more sentences to my previous comment:

    Let’s assume GPT-3 has some reasoning abilities. It’s still a system that pushes around symbols.
    “Meaning” for me requires linking a set of data to some expression of a sensory nature – an image, sound, feeling, … .
    (“Circle”, Mental image of a circle)
    (“Gravity”, Stereotypical image of bad rubber sheet model or simplified 3D flow model)
    (“Raspberry”, Image + taste of raspberry)
    It doesn’t always work; context is needed as well. For example:
    (“2”, ?)
    (“x + 3 = sin(x)”, ?)

    A mathematical expression is just that: a meaningless set of symbols, represented in my mind as sounds or images of formal expressions,
    until I give it a meaning through a model which, especially in geometric contexts, will necessarily have the interpretation function line up “with the phenomenological impression of the physical world”. There was a section in Kleene’s “Mathematical Logic” about this; I can’t find it right now.

    So what does the subsentence from the above GPT-3 text
    “For example, the form of a chair is its physical structure – four legs, a seat, and a back”
    mean for GPT-3?
    I suppose nothing, since there is no component for generating and experiencing Qualia in the code for GPT-3 – unless people argue that Consciousness lies already in the matter itself (Panpsychism), but as is, I do not have any reason to believe it has any form of qualia.
    I know I have qualia and I suppose other people have qualia.
    ” For example, the form of a chair is its physical structure – four legs, a seat, and a back”
    does have a meaning for me, because I can associate it with the mental image of a chair.
    I don’t know what’s “generating” the qualia, but there has to be a reason that we will uncover eventually.
    In essence: Unless I have good reason to believe GPT-3 has some form of consciousness, everything it does is meaningless manipulation of symbols for it, even if it appears to have semantical insight.

  102. fred Says:

    An interesting conversation with Eric Schmidt about the complicated future of AI

  103. Lorraine Ford Says:

    mls #99:
    The brains of people, and other living things, analyse and collate raw information coming from particle interactions with e.g. the eyes and ears. Our conscious picture of the current environment surrounding us is built using raw categorised-number information coming from the surrounding environment.

    But HOW living things could be conscious of these natural categories and numbers, or something built out of these natural categories and numbers, is a different issue.

    I think that the consciousness issue is not a matter of how higher-level consciousness is built, but a matter of WHY basic-level consciousness exists. Basic-level consciousness might be a fundamental and necessary aspect of the functioning of a system, not a thing that arises from an already existing system. I think that basic-level subjective consciousness is the necessary aspect of the system that discerns difference in the categories and numbers:

    Imagine a system that can’t distinguish its own categories and can’t distinguish its own numbers: a system that can’t, at some basic level, differentiate between the mass and position categories can’t exist. This discerning of difference can’t emerge out of something that does not discern difference. The thing that discerns difference is basic-level subjective consciousness.

  104. Ben Standeven Says:

    So, are you arguing that computers can’t have consciousness, because they can’t perform computations?
    Or are you arguing that consciousness is fundamentally a physical thing (the ability to discern difference in other physical things), so computers can only have the level of consciousness appropriate to their physical structure?

  105. Lars Says:

    “Write an essay proving that there is some meaning behind having an AI system write an essay “proving” any of this bullshit”

    (Not incidentally, that’s addressed to the computer scientist, not the AI system)

  106. Lorraine Ford Says:

    Ben Standeven #104:
    The types of categories and relationships and numbers that the world is based on, and seems to subjectively discern at a sub-nano level, are things like mass, position, light wavelength, and voltage, and their associated numbers. These categories and relationships and numbers are an inherent part of the world, but they seem to have a type of insubstantiality about them.

    Compare this to larger scale arrangements where human beings have arranged voltages (and wires and transistors) in computers so that they can represent categories and relationships and numbers. These symbolic categories, relationships and numbers are not like the categories and relationships and numbers that the world is based on.

    I’m arguing that:

    1. Basic consciousness, which is something like the discerning of difference, or differentiation, is a necessary and foundational part of the world. Basic subjective consciousness is of categories, relationships and numbers, for things like mass, position, light wavelength, and voltage. Basic consciousness also seems to have a type of insubstantiality about it.

    2. Higher-level consciousness requires this basic discerning of difference. Higher-level consciousness of one’s environment is built from analysing and collating category and number information coming from one’s environment. This analysing and collating also seems to have a type of insubstantiality about it, but analysing and collating can be represented by symbols, e.g. a computer can symbolically represent analysing and collating.

  107. OhMyGoodness Says:

    fred #102

    I enjoyed the link and agree with his comments about transparency on the web as to who exactly is making a comment or providing information. The problem with that, as I see it, is that freedom of speech has come under attack in society in general. As an example, if I make a reasonable, supportable comment like “there has been no warming trend in the lower troposphere for more than seven years now based on NASA’s monitoring data set,” then I could be subject to doxing, swatting, and cancellation. In the case where I must use some sort of unique personal identifier to post that comment on the web, my personal risk goes up unreasonably because of the activism of ideological loons.

    Also, the idea that some political organization can objectively serve as a clearinghouse for truth is laughable no matter how well constructed the satirical Mary Poppins’ songs. These ideas that there are certain unimpeachable sources to determine scientific truths are the antithesis of the spirit of the scientific method. Nullius in verba has never been more justified at any time in human history as politics mounts its latest attack on human thought and this time with the shiny new tool set of social media.

  108. JT Says:

    Ok, so no one here read Tractatus Logico-Philosophicus?

  109. Scott Says:

    JT #108: We have a clear policy here against “snide and patronizing tone.” Which means: either explain how the Tractatus clarifies the issues being discussed or else get lost! 🙂

  110. Lorraine Ford Says:

    If people want to claim that consciousness can emerge from special types of computer systems, then I think these people should first describe what they mean by consciousness. Can they please explain what is different, if anything, about consciousness? Or is consciousness just a label for parts or subsets of the evolving world that are not essentially different to any other parts of the evolving world?

    I’m contending that consciousness is essentially different because consciousness is differentiation, i.e. the discerning of difference. It is necessary that a differentiated world, differentiated into categories (like mass or position), relationships and numbers, can differentiate its own categories, relationships and numbers.

    At a higher level, people’s conscious experience of “red” and “green”, shapes, sounds etc., is actually just the differentiation of masses of category and number information coming via the eyes and ears from the surrounding environment. Some animals, including insects, differentiate ultraviolet and infrared light, but people don’t. Even a photo of the surrounding environment is not objectively or platonically differentiated: it is people’s consciousness that differentiates, building a conscious picture out of low-level category and number information coming from the photo.

    Binary digits are not genuine numbers, and they don’t have genuine categories. So, the only possible genuine differentiation going on in computers is low-level differentiation of voltages.

  111. OhMyGoodness Says:

    JT #108 and Scott#109

    I did read Tractatus years ago and did appreciate the ideas. This is far outside my area of competence, but I would like to explain my comments in the style of Wittgenstein, and so with propositions:

    Pictures are representations of states of affairs
    Pictures can be true or false depending on commonality with reality
    States of affairs can only exist in the past or present
    Pictures only provide value in how they impact future expectations and so guide actions
    Future states of affairs will allow picture representations at that time
    Pictures with high commonality with reality, that are factual, allow better expectations concerning future states of affairs and so better guide actions
    The use of imagined future states of affairs to form picture representations of the past or present lead to poor expectations and poor actions
    Science should endeavor to form factual pictures and so better expectations and better guide for actions
    Politics endeavors to produce a future state of affairs and so produce low factual pictures which leads to poorly realized expectations and poor guide for actions
    The internet allows widespread distribution of pictures
    Politicians naturally seek to control the distribution of pictures
    Political control of the distribution of pictures results in poor expectations and poor actions

    Conclusion: The political approach of retrocausality, forming pictures of states of affairs from imagined future states of affairs, will always suffer from high unintended consequences and inappropriate actions. Politicians should be the last group in modern society allowed to control the distribution of pictures, and I see no better alternative than to allow people to say what they want, even though it results in so many pictures with low factual content.

    I saw the recent vehement reaction to the Danish study that raises questions about mRNA vaccines. It is a fact that the Danes conducted a study looking at all-cause mortality that raises questions that need to be evaluated further. To launch strong personal attacks on any person who references that study is indicative of politics and not science, even though these attacks are mounted by ostensible scientists. This isn’t a recent development in human affairs – I think of cases like those of Doppler and Cantor – but political control over the distribution of pictures over an instrumentality like the internet is a quantum development in human affairs.

  112. A. Karhukainen Says:

    Scott at #89: “If you disagree, then what would you base the moral distinction on? The fact that the human is made of meat, while the AI’s hardware is made of silicon and metal? If we gave the human-indistinguishable AI a realistic humanoid body, then should it get human rights?”

    I strongly disagree, especially in the last case (if they had “realistic humanoid body”). My reason: there’s limited Lebensraum on this planet. There are enough 80 kilogram human bodies on it already, we don’t need fake ones as well, especially if they compete with us.

  113. A. Karhukainen Says:

    fred at #83, about precision machining, gauge blocks, etc.

    Yes, I started thinking along similar lines a few years ago. It is interesting that one can, even in principle, go from the “apes banging stones and bones at the beginning of 2001: A Space Odyssey” to the scanning electron microscopes and 7nm chip fabs somewhere in Taiwan. That one can make sharper and better tools with more primitive and blunter tools – that’s what really is amazing, not some latest software toy that just averages the clichés present in the huge text masses of the Internet.

  114. Brent Meeker Says:

    For AI to learn semantics it needs to be able to act in the world.

  115. Ben Standeven Says:

    @Lorraine Ford:

    So your argument is that all physical data entering a computer is converted into voltage levels, losing its original categories. Hence computers can’t have consciousness of anything but voltage levels.

    Similarly, all physical data entering an animal is converted into neurotransmitter potentials, losing its original categories. Hence animals can’t have consciousness of anything but neurotransmitter potentials.

    I suppose that makes sense, but I don’t think very many people are going to accept the second conclusion. So they would be forced to reject the logic of both arguments.

  116. Lorraine Ford Says:

    Ben Standeven #115:
    Can you please unambiguously define what you mean by “consciousness”?

    I’m saying that, at its foundations, consciousness is differentiation, i.e. the discerning of difference. The discerning of difference can’t arise out of something that does not discern difference. The discerning of difference is a necessary, foundational, aspect of a differentiated world, a world that is differentiated into categories (like mass, position or light wavelength), relationships and numbers. And higher-level differentiation/consciousness can only be built when matter logically analyses and collates its own low-level category and number information, that originally came from (e.g.) particle interactions in the eyes and ears.

    These categories (like mass, position or light wavelength), relationships, and numbers are all necessary aspects of the world. But individually, categories, relationships and numbers all seem to be rather tenuous, insubstantial entities. And the differentiation and analysis of these rather tenuous entities is also necessarily rather tenuous and insubstantial. In other words, the information that the world runs on seems to be rather tenuous and insubstantial; the information that the world runs on does not exist as circuits, transistors and voltages.

    People have arranged circuits, transistors and voltages in computers so that they can symbolically represent categories, relationships (equations), numbers, and the analysis and collation of these categories and numbers. The world can only differentiate its own genuine categories, genuine relationships and genuine numbers; but the world CAN’T differentiate the pseudo categories, pseudo relationships and pseudo numbers inside computers.

  117. Ben Standeven Says:

    Huh. You’re clearly using the words “differentiate” and “discern” in highly nonstandard senses. Which means I don’t understand your definition of “consciousness” at all.

    As to what I meant, I was trying to use the word in the same sense as you; but I of course was reading “differentiate” and “discern” in the usual way; the resulting concept is essentially the one used by Giulio Tononi.

  118. Lorraine Ford Says:

    Ben Standeven #117:
    Can you please, in your own words, define what you mean by “consciousness”?

  119. Ben Standeven Says:

    I’d define “consciousness” as an entity’s ability to use information about the world in pursuit of its goals. As to goals, they might be either internally or externally generated, and either chosen or predefined. So in my book, GPT is conscious, although it is presumably only conscious of the internet, not the real world.

    PS: In hindsight, I think it’s probably “genuine” you are using in an unclear way, not “differentiate” and “discern”, since you say that a computer can “differentiate” and “discern” things, but that it isn’t “genuine” for some unspecified reason.

  120. Lorraine Ford Says:

    Ben Standeven #119:
    Define “goals”; define “information”. If one tries to define consciousness in these terms, one is going to get bogged down in the mire, forever. So, I think your definition of consciousness is not practical, and it doesn’t get to the heart of the issue of what consciousness is.

    I’m actually contending the opposite of what you seem to think I am contending: I did not say, without qualification, that “a computer can “differentiate” and “discern” things”.

    I am saying that:
    1) Consciousness is necessary. The discerning of difference (i.e. consciousness) is a basic, necessary aspect of the world that goes right back to the basic necessity for a differentiated world to differentiate itself. I.e. it is logically necessary that a world that is differentiated into categories, relationships and numbers must differentiate (discern difference in) its own categories, relationships and numbers.

    2) But WHAT can consciousness differentiate? Consciousness can only differentiate NATURAL categories (like mass or position or light wavelength), and the associated natural relationships and natural numbers; and higher-level consciousness can only differentiate its own logical structures built out of these natural categories, relationships and numbers.

    3) Small parts of computers can potentially differentiate a NATURAL voltage category, and natural numbers associated with this voltage category. But these small parts of computers can’t look outside themselves and discern that people have arranged circuits, transistors and voltages in computers for human purposes.

    People have arranged circuits, transistors and voltages in computers so that they can symbolically represent categories, relationships (equations), and numbers that are relevant to people i.e. these are not natural categories, relationships or numbers. These pseudo categories, pseudo relationships and pseudo numbers are nothing like natural categories (like mass or position or light wavelength), natural relationships and natural numbers.

    In other words, individual small parts of GPT-3 can potentially differentiate individual voltage categories and the numbers associated with the voltage category. But there is no GPT-3 entity that can differentiate the larger scale: the pseudo categories, pseudo numbers and pseudo relationships that people have built the machine to rely on.

  121. Ben Standeven Says:

    So it looks like my initial reading was basically correct: Your argument is that computers can’t be conscious because they don’t exist. (What I said was “computers can’t have consciousness, because they can’t perform computations?” But I take a “computer” to be defined by its being designed to perform computations; so these are the same thing.)

    I agree that my definition of consciousness hits undefined terms very quickly; but the same seems to be true of yours, since you have yet to define “discern”. Meanwhile, the “symbolic representations” you claim to be arranged in computers are not natural properties of the world, and so they cannot be natural properties of humans. So it seems that your definition of consciousness cannot account for the ability of humans to produce computer programs (or indeed, the ability of humans to use language).

  122. Lorraine Ford Says:

    Ben Standeven #121:
    I never said or implied any such things. So, I’m not going to bother arguing with you about that, and I’m not going to bother trying to untangle what you are saying, or untangle what you seem to think I am saying.

    But what I will say is this: In fact, computers don’t perform computations.
    What is happening inside computers is analogous to a ball, placed at the top of an incline, that rolls down the incline. In computers, there is a physical setup by people; there is a computer program written by people; and then electricity flows through the system exactly like a ball rolling down an incline. There are no computations being performed by a computer: people did all the necessary computations, planning, analysis and setup beforehand.

  123. Alex Says:

    When I read the last paragraph, I had goosebumps and remembered how great it is to live in this very age. Too bad AIs can’t recognize meanings, otherwise you could just ask them if they can think up a better idea for a seating option than a chair, or see the physical correlate of consciousness in brain scans somewhere (haha). Damn I would really like to know if information and qualia are related.