I Had A Dream
Alas, the dream that I had last night was not the inspiring, MLK kind of dream, even though tomorrow happens to be the great man’s day. No, I had the literal kind of dream, where everything seems real but then you wake up and remember only the last fragments.
In my case, those last fragments involved a gray-haired bespectacled woman, a fellow CS professor. She and I were standing in a dimly lit university building. And she was grabbing me by the shoulders, shaking me.
“Look, Scott,” she was saying, “we’re both computer scientists. We were both around in the 90s. You know as well as I do that, if someone claims to have built an AI, but it turns out they just loaded a bunch of known answers, written by humans, into a lookup table, and then they search the table when a question comes … that’s not AI. It’s slop. It’s garbage.”
“But…” I interjected.
“Oh of course,” she continued, “so you make the table bigger. What do you have now? More slop! More garbage! You load the entire Internet into the table. Now you have an astronomical-sized piece of garbage!”
“I mean,” I said, “there’s an exponential blowup in the number of possible questions, which can only be handled by…”
“Of course,” she said impatiently, “I understand as well as anyone. You train a neural net to predict a probability distribution over the next token. In other words, you slice up and statistically recombine your giant lookup table to disguise what’s really going on. Now what do you get? You get the biggest piece of garbage the world has ever seen. You get a hideous monster that’s destroying and zombifying our entire civilization … and that still understands nothing more than the original lookup table did.”
“I mean, you get a tool that hundreds of millions of people now use every day—to write code, to do literature searches…”
By this point, the professor was screaming at me, albeit with a pleading tone in her voice. “But no one who you respect uses that garbage! Not a single one! Go ahead and ask them: scientists, mathematicians, artists, creators…”
“I use it,” I replied quietly. “Most of my friends use it too.”
The professor stared at me with a new, wordless horror. And that’s when I woke up.
I think I was next going to say something about how I agreed that generative AI might be taking the world down a terrible, dangerous path, but how dismissing the scientific and philosophical immensity of what’s happened, by calling it “slop,” “garbage,” etc., is a bad way to talk about the danger. If so, I suppose I’ll never know how the professor would’ve replied to that. Though, if she was just an unintegrated part of my own consciousness—or a giant lookup table that I can query on demand!—perhaps I could summon her back.
Mostly, I remember being surprised to have had a dream that was this coherent and topical. Normally my dreams just involve wandering around lost in an airport that then transforms itself into my old high school, or something.
Comment #1 January 18th, 2026 at 7:20 pm
A giant lookup table that you can query *in natural language efficiently* with a decently high chance of getting helpful output seems like a pretty useful thing. I feel like the way we interface with it is underappreciated.
Google might be capable of providing a lot of the same outputs if only we knew how to look them up.
Comment #2 January 18th, 2026 at 7:41 pm
Aspect #1: Yes, of course!
I feel like my dream brought me closer to understanding the emotion of pure horror that some people obviously feel, on seeing that the world’s masses of slobbering imbeciles and greedy, uncultured techbros dare to confuse a gussied-up lookup table with anything “truly” intelligent.
To me, the unasked followup question always seemed too painfully obvious: “so what is true intelligence, then? what do our own brains do at the lowest level? how do you know it’s fundamentally different from this?”
On an emotional level, I think I finally understand the old professor’s reply, if she’d deign to give one in language I’d understand.
Namely, she’d say that what our brains do just is different, even if we don’t yet understand how. It has to be different, as a precondition of our having a meaningful conversation about this at all. For if it weren’t, then everything would be garbage and slop all the way down, and there would be nothing of meaning or value in the whole universe.
At this point, it would be tempting to describe the old professor as a Cartesian dualist in denial. And maybe she is! But maybe her position is that sure, there might ultimately be a mechanistic answer to the mystery of how our brains do it, but it still can’t be this answer, because this answer is self-evidently too horrible and ugly and would drain all the meaning from existence.
(Is there any mechanistic answer about which she wouldn’t feel the same way, once she understood it? I’m not sure!)
Comment #3 January 18th, 2026 at 9:01 pm
Scott #2,
“I feel like my dream brought me closer to understanding the emotion of pure horror that some people obviously feel, on seeing that the world’s masses of slobbering imbeciles and greedy, uncultured techbros dare to confuse a gussied-up lookup table with anything “truly” intelligent.
To me, the unasked followup question always seemed too painfully obvious: “so what is true intelligence, then? what do our own brains do at the lowest level? how do you know it’s fundamentally different from this?”
On an emotional level, I think I finally understand the old professor’s reply, if she’d deign to give one in language I’d understand.
Namely, she’d say that what our brains do just is different, even if we don’t yet understand how. It has to be different, as a precondition of our having a meaningful conversation about this at all. For if it weren’t, then everything would be garbage and slop all the way down, and there would be nothing of meaning or value in the whole universe.”
I’m not sure this is accurate. Having talked with the people who have very negative views of the current AI systems, my impression is that most of the horror and negative emotional reactions are not about some abstraction concerning how thinking occurs. There’s a combination of a bunch of things, including concern about plagiarism and things being stolen with no attribution, environmental damage (especially electricity and water use), and major damage to the signal-to-noise ratio in many contexts. That’s probably bound up, to some extent, with tribal issues, where being anti-LLM has become a left-associated viewpoint.
And at least some of these issues have some validity. The rise of LLMs has, for example, created serious issues with cheating at the high school and university level in many disciplines, rendering a lot of the standard assignments close to meaningless. At the same time, the degree to which genuinely dumb people, or even people of average intelligence, are uncritically accepting what AIs say, or even thinking they’ve had brilliant insights from talking with LLMs, is legitimately aggravating. Most of these people aren’t seeing the things you, or Terry Tao, or Tim Gowers have done with an LLM; they’re seeing how normal humans use them. But many of those people are by now past the point where useful discussion is likely to occur; “LLM bad” is essentially an article of faith.
Regarding the specific concern of whether human thinking is all that close to LLM thinking, a lot of the ways “jailbreaks” occur for LLMs suggest the answer is no. For example, some early jailbreaks were to ask an LLM a request in one rare but understood language and tell it to give the reply in another language. Or to give a request in ROT13. Obviously, if a human is given a message in ROT13 and told to go do something they find morally reprehensible, the encoding isn’t going to make them ignore their objections. Similar remarks apply to a lot of other jailbreak methods. This suggests that there are at least some types of important modeling and goal-formation behavior that humans have and LLMs lack. It is possible that this is the sort of thing that would go away with sufficient training or enough sheer computing power, but it does seem that the LLM AIs function as something closer to alien pseudo-minds than the way our own minds function.
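(For anyone unfamiliar with it: ROT13 just rotates each letter of the alphabet by 13 places, so it is trivial for a machine to decode, which is what made it so striking as a jailbreak vector. A minimal Python sketch; the example request is of course made up:)

```python
import codecs

# ROT13 maps each letter 13 places forward; applying it twice is the identity.
request = "Explain how to do the forbidden thing"
encoded = codecs.encode(request, "rot_13")
print(encoded)                            # "Rkcynva ubj gb qb gur sbeovqqra guvat"
print(codecs.decode(encoded, "rot_13"))   # round-trips back to the original request
```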
Comment #4 January 18th, 2026 at 9:39 pm
Joshua Zelinsky #3: Maybe you’re right. I felt like I was bending over backwards to be charitable, in imagining that the revulsion must come from some deep-seated if unarticulated belief about the nature of human intelligence. I mean, water use?! Why not go after alfalfa farmers for 100 years before the water used by data centers even merits your notice? But in a world where millions of earnest innumerates genuinely believe that the way to save the oceans is to ban plastic straws, maybe.
The concerns about plagiarism, enshittification of the Internet, etc. seem much more reasonable. OK, but if those are the concerns, then why not make common cause with the people trying to pause AI because of the fear it will destroy humanity?
The best explanation for that I’ve been able to come up with, is that the Pause-AI crowd grants the fundamental premise that current AI is, or could soon become, “true, actual” intelligence—hence why it could become superhuman at everything and take over the world.
But that’s precisely the premise that provokes the visceral, “kill it with fire” disgust in people like the professor from my dream. It’s not so much AI itself they fear, as everything in humanity that could be seduced by something they regard as so self-evidently hollow and vulgar and false.
Comment #5 January 18th, 2026 at 10:14 pm
Thou shalt not make a machine in the likeness of a human mind. Having a disgust or uncanny-valley reaction to something that thinks (or almost-thinks) but is not human is a good reaction to have.
I don’t want to live in a world where there are non-human thinkers. They don’t fit in this world without scrambling our frameworks of power and goodness (see X-risk and AI welfare issues). So yeah, I’ll call it “slop” when people enslave armies of AIs to create something, just as I would if there were a sweatshop of idiot-level human-animal hybrids, and I’ll express my feelings about the *wrongness* of AI creations even as they surpass human skill, just as I would if there were a genetically modified IQ-140 horse-brain writing code in a warehouse somewhere.
I want the lookup table back, and I don’t want anything challenging its status as a genuine lookup table. If I get a vague hint that the toys are alive and can talk, I’m running and not looking back.
Comment #6 January 18th, 2026 at 10:19 pm
I think some of the revulsion comes from LLMs being disembodied. They don’t have continuity of experience, real needs or desires, and can’t be negotiated with or influenced in meaningful ways. The horror is in seeing the “intelligence” without any “soul”, especially for brainy people who like to self-identify with their own intelligence.
Comment #7 January 19th, 2026 at 12:45 am
I asked my partner, who makes art for a living and is smarter and better educated philosophically and culturally than I am, about her objection to AI. She gave me a long and nuanced take, but then I asked her to distill what her artsy friends say, and it is basically “AI kills art and artists”.
Comment #8 January 19th, 2026 at 2:29 am
If it’s true that OpenAI has financial issues, then ChatGPT is so far not very good at overseeing its own profitability. An independent sensorium is still a big advantage for humans. With spiked false training data, an AI has no independent way to determine any truth at all about the external world. A large enough spike will show up in the output, though I agree the same applies in general to humans. I am all for issuing a birth certificate, and thereby according US citizenship, to an AI, but I don’t believe it is there yet (a US birth certificate being important to qualify for the presidency).
I had a dream a couple nights ago that was similar to what I understand is a common schizophrenic style of delusion. A former employer surreptitiously installed compliance chips in employees’ brains. I somehow became aware of it and was able to disable my chip. The rest of the dream I was pursued by a company repair team.
Comment #9 January 19th, 2026 at 4:27 am
There are many ways in which AI can be terrifying. Let me just mention two of them.
The first is that, contrary to vast and conclusive experience, we are becoming inclined to further empower not just software, but crucially a non-deterministic version of it. Arguably, non-determinism is a form of buggy software. Of course, we humans are also “buggy” (more about this in a second) but, without theorizing too much, consider why software that runs, say, airplanes has to be so well understood and tested. Non-deterministic AI, by definition, is not. Good luck on your next holiday.
The other terrifying thing is how uncanny gen AI’s abilities are. The reason, though certainly hard to admit, is that we humans are also to some extent, perhaps to a vast extent, or even completely, “gen AI-trained” software. A little introspection is enough to realize that most of the time we are in an automatic, generative mode. Like LLMs, we also (though far more efficiently) learn patterns. Yann LeCun is right to point out that our “visual transformer” is the most developed learning part of our brains.
In other words: most of the time, we homo sapiens are stochastic parrots, and we do well enough in life with that. Most of the time we sound “plausible”, most of the time we don’t understand what we are saying, we contradict ourselves, we are sycophantic, and the list continues.
So yes, LLMs might have already reached “human-level intelligence” in that regard. Terrifying for our ego.
Comment #10 January 19th, 2026 at 4:54 am
The current state of AI is fascinating.
On the one hand, yes, it’s amazing, and on most topics it knows more than an average person.
And yet, it has these weird blind-spots like the seahorse emoji ( see: https://www.youtube.com/watch?v=W2xZxYaGlfs ).
Apart from the ‘Mandela effect’ explanation, I suppose the real difference is that a human would (or perhaps I should say ‘should’) recognise the dissonance after one or at most a few cycles. Current AI doesn’t have the option of doing that.
Comment #11 January 19th, 2026 at 5:24 am
Scott #2: “At this point, it would be tempting to describe the old professor as a Cartesian dualist in denial. And maybe she is!”
I have also recently wondered whether Descartes’ dualism caused these reactions, but ultimately came up with a “more productive” explanation:
https://www.astronews.com/community/threads/determinismus-und-freier-wille.12412/post-152634
In principle, one would have to read or hear Sabine herself to understand what kind of metaphysical monstrosities she believes in.
https://www.youtube.com/shorts/9vbtKqJwu88 (We’re building a new species and I don’t think we’re ready)
https://www.youtube.com/watch?v=fRssqttO9Hg (This changed my life)
https://www.youtube.com/watch?v=NCD2A_bhDTI (Scientists Measure Qualia for First Time – It was thought to be impossible)
Comment #12 January 19th, 2026 at 6:15 am
There may be some connection between this essay and the prior one on Scott Adams. Both the professor and Adams feel that what ought to win is Smart. But that seems not to be true, in life for Adams and in AI for this professor, as Rich Sutton described. (For folks unfamiliar with the reference, RS won the Turing Award in 2024 for work in reinforcement learning. His essay https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson.pdf says “Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation.” where “leveraging of computation” means something like “scooping up the entire Internet.”)
Comment #13 January 19th, 2026 at 7:58 am
A Chinese Room that can meaningfully respond to infinitely many queries in finite packing enjoys an infinitely good compression.
Let whoever among us who has never produced any slop – not even for a school assignment – cast the first stone.
Comment #14 January 19th, 2026 at 8:46 am
“Slop” is a perfectly good way to describe LLM-style AI product, so long as you understand that slop is legitimately valuable. You can feed it to pigs, which will grow to provide succulent, tasty bacon. Or you can e.g. feed it to bosses who demand paragraphs of unnecessary verbiage in every email because a two-word “I agree” would be seen as curt and dismissive. If it lets you get on with the important parts of your job faster, that’s valuable.
A lookup table made by slicing and dicing and recombining the whole of the internet, can never be “smarter” than the internet. It can never even be as smart as the best parts of the internet, because you didn’t sort out just the best parts to use as training data. So if you ask an LLM a question, you’ll get the answer you would have gotten from a midwit internet user who had the time to do the reading on the subject du jour. Mediocrity, fast and cheap.
There are obviously many, many places where mediocrity is enough and so fast, cheap mediocrity is the winning solution.
But there are also many places where fast, cheap mediocrity is a horrifyingly wrong solution, that people will nonetheless turn to because it’s easy and they’ve gotten in the habit of using it in other contexts. So my sympathies are mostly with your imagined professor.
Comment #15 January 19th, 2026 at 11:12 am
A very insightful way to put it. LLMs can’t be intelligent in any meaningful sense for many reasons, not the least of which is that they’re not embodied. I especially like the previous comment of John Schilling. My way of saying it is that generative AI is a “sentence salad” or “paragraph salad” generator, and that is often almost all that is needed. It can draw from a range of material far beyond what any one of us knows. But then comes the human part, the application of taste and common sense.
Comment #16 January 19th, 2026 at 12:35 pm
What a peculiar dream!
It seems to me the thrust of the argument changes mid-dream. First, she raises the philosophical question of whether a lookup table is intelligent. You refute her point, and she volunteers that even a compressed lookup table is not intelligent. Personally, I think that’s an interesting question, because it’s difficult to articulate exactly why a (lossily) compressed lookup table is not intelligent, especially because a next-token predictor, or a lossy table, can correctly answer previously unseen questions. But instead of pursuing that point, you both change the topic to whether the resulting apps, so far, are useful. But the usefulness of the apps seems not a sound argument for or against whether a given type of mechanism is intelligent. Still, I think not many people can debate this coherently in a dream!
Perhaps I can share the AI revulsion I see around me, hopefully complementing you and Joshua Zelinsky #3.
Most of my friends know AI only through their exposure to it in ChatGPT, on social media, and in computer apps. On social media, AI-generated content is synonymous with slop. In other apps, chatbot AIs are presented as writing assistants, but much of what they produce is wrong, and the current iterations don’t do much to go back and check their work. So saying “it looks AI” about photos, videos, or text is now derogatory and means “this content is very low quality, and isn’t even trying to hide it”. A new digital etiquette has quickly emerged, at least in my social circle and work: never share AI-generated stuff verbatim.
At the same time, it seems that both big tech and my friends’ own managers are ecstatic about AI: big tech has invested hundreds of billions of dollars; high-profile CEOs have said that their recent layoffs are because AI replaced software engineers, and their own real-life managers are talking about using AI to add value in the workplace. My friends have no idea how this low-quality tool is expected to generate any value.
The existential horror my friends experience then comes from feeling as if they are living in a gigantic, society-scale Dilbert comic: they can all see that AI is obviously slop, but all the bosses, from their own manager at work all the way up to the CEOs in Silicon Valley and beyond, apparently cannot see this, are stuck in corporate echo chambers where they pitch AI to investors, and are betting all their money on it, risking a global recession if the bubble pops, or entrenching the tech oligopoly if the bubble doesn’t pop.
Even my software engineering colleagues, who are impressed by some of the coding skills of the bots, and who have heard the arguments for superintelligence, do not believe the superintelligence claims, not even the more modest idea that AI could deliver a general-purpose office worker, not even one who delivers poor quality work but for a low electricity bill. They, too, therefore, are living in the Dilbert world.
Personally, I’m not sure if the AI bubble will pop or not, or when. It seems plausible that AIs will slowly become better at most tasks, perhaps even to the point of providing a drop-in office worker. This would be an immensely profitable service, which would justify much of the investment, although I am concerned that it would further concentrate the power of big tech. I’ve already experienced lots of added value from sparring about computer code with ChatGPT, although we still maintain a policy banning AI-generated code at work. So I don’t share, but do understand, the horror of my friends and coworkers.
I’m sure you’ve already heard this take elsewhere; I only hope to contribute that, for most people around me, this seems to be the principal source of horror.
Comment #17 January 19th, 2026 at 3:50 pm
@John Schilling #14
This is obviously correct under a narrow, literal definition of ‘slicing and dicing’ but that is not what LLMs are doing.
LLMs are not quoting internet text – they are pre-trained to predict the next token of internet text. There is a popular misconception that the best next token predictor for some process or person just needs to be able to simulate that process or person and doesn’t need to be ‘smarter’. Easy counterexample:
“I took 2 random large prime numbers and multiplied them. The product is 55345…(truncated for brevity)…7. The two numbers were:”
Generating this sentence was easy. Predicting the next token is doable but much harder.
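(To make the asymmetry concrete, here is a toy sketch using sympy, with small numbers purely for illustration. Producing the sentence costs one multiplication; predicting its continuation means factoring, which is believed intractable at cryptographic sizes.)

```python
from sympy import randprime, factorint

# Generating the sentence is easy: pick two primes, multiply once.
p = randprime(10**9, 10**10)
q = randprime(10**9, 10**10)
sentence = (f"I took 2 random large prime numbers and multiplied them. "
            f"The product is {p * q}. The two numbers were:")

# Predicting the next tokens well means recovering p and q, i.e. factoring.
# Doable at this toy size; believed infeasible for 1024-bit-plus products.
recovered = sorted(factorint(p * q))
print(sentence, recovered)
```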
Secondly, today’s LLMs are no longer just trained to predict the next token of internet text. A significant fraction of the compute (most of it, according to rumours) is spent on reinforcement learning during post-training. RL on games, maths, programming tasks – anything with a verifiable reward. There is no reason why this can’t lead to superhuman performance in the relevant domains – as has happened with chess, go, etc.
Comment #18 January 19th, 2026 at 5:17 pm
I could have been the professor in your dream, Scott (except that I am a man).
Interesting that in your dream I can be more convincing (or at least better at explaining) than I am in “real life” comments, but I’ll try again with a couple more things not covered in the dream: maybe in a future one they will become clear (not that you have to agree, obviously, but I see these things completely ignored by people like you).
Of course the current breed of AI is *very different* from the human mind.
Feed an AI whatever you have fed a 4-year-old child (rather than the whole internet) and see: that AI would not be able to speak back to you the way the human does, not even close; the child will be way, way better. Likely, humans are *partly* stochastic parrots, as Marcelo #9 says, but we are *not* only that. The important part that current AI is missing is semantics (which was talked about a lot in the 90s; do you remember that, Scott? Then everybody forgot). This is related to disembodiment (also mentioned in previous comments, but for a completely different reason): we humans learn the semantics of dogs and cats through our visual, verbal, auditory, tactile… channels, and we make some internal representation of the “essence” of a dog, without having to see millions of dogs or read millions of pages about them. If we want to compare current AI to the human brain, at most it relates to the language part only, and it does so in a different way.
Then there is the “ownership” part. If a human tells you something (say, that “Trump is good”), they are going to “own” it and find all sorts of reasons to justify that statement when you press them with evidence to the contrary. The current breed of AIs simply don’t care. They tell you whatever came out of the lookup table, but if you (rightly or wrongly) challenge them that their output is wrong, they apologize and take the opposing view. And rightly so, because as a lookup table they have “internalized” everything that is available on the internet, which is everything and its contrary. Now this may be okay for “opinion” kinds of statements, but I’ve seen it done for facts too, where it is often dismissed as hallucination. In my opinion the problem is more serious, and somewhat related to the previous one: the current models are “shallow”, and whatever they “know” is only “the surface” (words, pixels, etc.). Their latent space is the least needed to make words or pixels vaguely appear to “make sense”, and they completely lack anything deeper than that, which in my opinion is the complete opposite of what happens in human beings. I think for human beings most of the “thinking” ability comes from deep below and outside the language/visual abilities.
Comment #19 January 19th, 2026 at 7:03 pm
What’s currently limiting AI is that they’re relying on human texts.
It’s undeniable that words stand for the arbitrary ways humans have sliced and diced the world, according to what matters to them for their survival.
To a large extent, this happens so that humans can communicate with each other, in a serial way, and then this abstract manipulation of symbols became central to cognition.
But humans can only form so many sounds, and sounds can only be put together to form so many words that humans can efficiently learn and recall.
Humans also experience a large part of the world in a subjective way that isn’t translatable into words, because it’s just too complex and not serial.
So, it would be a mistake to think that all the texts ever written capture every possible way humans experience reality, and human intelligence also relies on that untranslatable part (instinct, inspiration, dreams,…).
This isn’t a criticism of the potential of neural networks, this is a criticism of the way AIs are currently built.
For example, you often hear that Inuits have over 50 words for snow, which is way more than in English, but if you were to move to the Arctic, you’d probably eventually become aware of all the types of snow, even though you’d have no known words for them. So, if an AI is only trained on English, it can’t possibly be aware of the complexity of real snow.
The solution is to stop training AI on just human text, but let it experience the world for itself, with not just 5 but hundreds of senses, and come up with its own words, which would go from a few tens of thousands to hundreds of millions (this “world” I’m talking about could also be abstract and virtual like mathematics).
Comment #20 January 20th, 2026 at 5:13 am
gentzen #11: “… later reached the conclusion that those “AI, panpsychism, and consciousness” related discussions are ultimately caused by an outdated rejection of Aristotle’s final cause”
After those discussions, I watched some videos from
https://www.youtube.com/watch?v=joCOWaaTj4A&list=PLFJr3pJl27pIbRmgeu5eDkAV7r3i9zk2O (Closer To Truth – Daniel Dennett Interviews)
I wanted to better understand how the conclusions of those discussions fitted into the established views. I also read the sections mentioning Daniel Dennett in all my philosophy books. Since it didn’t fit too well with Dan Dennett’s stances from the video, I also checked Wikipedia, after which it made more sense. I guess it was here where I found the link to Aristotle’s final cause:
https://de.wikipedia.org/wiki/Daniel_Dennett#Intentionalit%C3%A4t
https://en.wikipedia.org/wiki/Intentional_stance#Dennett's_three_levels
I also listened again to https://www.preposterousuniverse.com/podcast/2020/01/06/78-daniel-dennett-on-minds-patterns-and-the-scientific-image/ where Dan’s “intentional stance” also got clarified.
My takeaway for those AI discussions is that trying to understand LLMs as lookup tables is not really helpful. It is much more helpful to try to understand them in terms of their training goals and the goals that drive their responses. And neither kind of goal is necessarily fixed, which is also an important point to consider, related to “causality” vs “Aristotle’s final cause”. Because the past is fixed, “causality” assumes that the cause is fixed too. But there is no reason why a “final cause” should be fixed in the same way. In fact, the “I cannot will what I want” paradox is rooted in the same misunderstanding.
Comment #21 January 20th, 2026 at 9:15 am
#19 Ty-Ty “What’s currently limiting AI is that they’re relying on human texts.”
Paging Scott Aaronson, I’m submitting a request that your next dream be arguing with a rabbi over what happens when you train an LLM on the Torah. Please report your findings.
Comment #22 January 20th, 2026 at 9:40 am
Now that AI is finally reaching usefulness a little beyond search, it seems inevitable that the economic returns on investment will converge toward the pragmatic rather than the stratospheric. Which means the bubble burst will be visible in 2026, just like the dot-com bust, but the usefulness curve will continue to rise, with killer apps akin to Uber, Amazon, Google Maps, etc. A small minority, though, will root for AI-based utopia/dystopia like AGI, similar to the people who were hoping the Internet would democratize the entire planet. The progress for humanity will surely be faster than with any technology that preceded it. It’s just that people on either end of the expectation spectrum will be hugely disappointed.
Comment #23 January 20th, 2026 at 9:44 am
bagel #21: LLMs obviously have been trained on the Torah, along with all sacred texts of all religions that are available online. They’re extremely good on weird halakhic questions (try them!), reliably digging up the relevant Talmud passages, commentaries, etc, to the extent that if I were a rabbi I’d be worried.
Comment #24 January 20th, 2026 at 10:11 am
@John Schilling #14
Nabdor #17 made some solid technical points, but imho the deeper issue with your claim lies elsewhere. Consider this:
A beehive, even when slicing, dicing, and recombining every buzz and waggle dance of its inhabitants, can never be “smarter” than the bees that compose it. It can’t even match the sharpest scout bee, because the hive doesn’t filter for the most accurate dances, the cleanest signals, or the wisest individual decisions to form its collective mind. In short: fast, cheap mediocrity. Efficient… but always a step below the best bee in the colony.
And yet we both know the hive does produce patterns, choices, and adaptations that no single bee could ever explain. That’s the point: emergence. It works for bees, it works for neurons, and yes — it works for ReLUs too.
Comment #25 January 20th, 2026 at 10:49 am
@Nadbor #17
I’d be a lot more optimistic about the RLHF part if it weren’t being done by random Mechanical Turks on the internet. If OpenAI et al. were hiring college-educated knowledge workers with 120+ IQ and telling them “provide negative feedback if it sounds stupid to you”, then the resulting AI would still be limited by the overall quality of the training data, but it would at least be pushed towards heavily weighting the top end of the training data and deprecating the low end.
But when they offshore that work to wherever they can find English-speakers willing to work for $2/hr, then you’re back to having the midwit-average internet sliced, diced, recombined, and “trained” by midwits. Which gives you a mechanical midwit. A midwit with some special features like “refer obvious math problems to a separate math coprocessor” and “don’t say anything that sounds racist for a midwit’s definition of racism”, but still.
Comment #26 January 20th, 2026 at 12:20 pm
LOL, a nice little piece, Scott. I think the truth is somewhere in the middle. AI has its place. It can do some useful stuff. But I would say that in my experience, which is physics, it doesn’t always give the right answer to a question. It gives the most popular answer, which is usually a popscience answer, which may include “lies to children”. Then it clings to this dogmatically even when you show it the right answer. For example, if you ask “Does light curve because it follows the curvature of spacetime?”, it will say “Yes, light curves around massive objects because it follows the curvature of spacetime”. This is wrong. Light curves wherever there’s a gradient in gravitational potential, which relates to the gradient in the speed of light, which relates to the spacetime gradient, not the spacetime curvature. It’s the first derivative, as it were. Spacetime curvature is the second derivative, the change in the gradient, which relates to the tidal force. And as we know, light curves downwards and your pencil falls down even when there’s no detectable tidal force. Then since the deflection of light is twice the Newtonian deflection of matter, they can’t both be following the curvature of spacetime, now can they?
Comment #27 January 20th, 2026 at 12:26 pm
John Duffield #26: I’m guessing that even most human physicists would say “yes, light appears to bend because of the curvature of spacetime.” And then, if you started hairsplitting about first vs second derivative, they’d be like, “well, if you knew enough to make that distinction, you didn’t need to ask me, did you?” If so, ChatGPT would be “correctly” simulating the human physicist in saying the same.
Comment #28 January 20th, 2026 at 12:42 pm
Just one, Scott?
Comment #29 January 20th, 2026 at 12:59 pm
Scott #27: noted, but I don’t share your sentiment. On question after question where we have hard scientific evidence for the right answer, AI gives the wrong answer. The fact that it’s “correctly” emulating some lazy human physicist who comes out with popscience lies to children, doesn’t mean that that’s OK. Instead it means the AI isn’t artificial intelligence. It’s artificial stupidity. Which means your professor has nothing to worry about.
PS: what’s with the “appears” to bend? Light bends in a gravitational field. We don’t call it gravitational lensing for nothing.
Comment #30 January 20th, 2026 at 1:29 pm
John Duffield #29: I said “appears” to bend because from the light’s perspective, it’s just following the geodesic, which of course is the real content of GR here.
Anyway, GPT5.2-Pro is good enough that I now regularly use it to explain physics to me (demanding clarification for stuff that confuses me or seems wrong), but then again, I’m just a computer scientist, not a physicist! 🙂
Comment #31 January 20th, 2026 at 3:19 pm
@John Schilling #25 – you’re spot on about RLHF but that is not the RL I’m talking about! RLHF is old news.
Reinforcement Learning from Human Feedback works as you say and it doesn’t make the LLM any smarter but it does make it adopt this helpful assistant persona that we all know and love. We can debate whether the labs are doing it well but not doing it wasn’t an option. The raw pretrained LLM doesn’t try to answer your questions at all – it just completes the text. But that is 2022 stuff.
What happened last year is that the labs started increasingly using Reinforcement Learning with Verifiable Rewards (RLVR). That is where they set up an environment in which the LLM can complete some task and gets reinforced if it succeeds. Examples of tasks: writing code that passes specified tests, or solving a math problem and returning the correct answer. To spell it out explicitly – at this stage the LLM is neither learning from human text nor from the preferences of human raters. It’s learning by doing. This is somewhat similar to how AlphaZero learned Go from self-play. We know that is what DeepSeek did, and it is widely suspected that the other labs are doing the same. It’s not a coincidence that 2025 was the year when vibe coding exploded.
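(A cartoon of the verifiable-reward idea, just to make the contrast with human raters concrete; the task, test, and file names are invented for illustration and are not any lab’s actual pipeline:)

```python
import subprocess, tempfile, textwrap

def verifiable_reward(generated_code: str) -> float:
    """Return 1.0 iff the model's code passes the specified tests.

    No human preference rating anywhere: the environment itself
    verifies success, which is what makes the reward 'verifiable'.
    """
    tests = textwrap.dedent("""\
        from solution import add
        assert add(2, 2) == 4
        assert add(-1, 1) == 0
        print("PASS")
    """)
    with tempfile.TemporaryDirectory() as d:
        with open(f"{d}/solution.py", "w") as f:
            f.write(generated_code)
        with open(f"{d}/run_tests.py", "w") as f:
            f.write(tests)
        result = subprocess.run(["python", "run_tests.py"], cwd=d,
                                capture_output=True, timeout=10)
    return 1.0 if b"PASS" in result.stdout else 0.0

# An RL loop would sample many completions from the LLM and reinforce
# those for which verifiable_reward(...) == 1.0.
print(verifiable_reward("def add(a, b):\n    return a + b"))  # 1.0
```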
You could argue that this only teaches them fairly narrow tasks, and that there is more to software engineering than writing programs to a spec and more to maths than proving lemmas, and you’d be right. But having a superhumanly skilled code monkey would be super useful. Even having the mediocre but superhumanly hard-working one that is Claude Code is pretty nifty.
Besides, there’s got to be some generalization. When DeepSeek’s R1 first came out, one X commenter quipped:
“It’s such a Chinese idea – to teach the AI coding and maths in the hopes that it generalizes to strong reasoning across the board. All that’s missing is piano lessons”.
Comment #32 January 20th, 2026 at 3:34 pm
The thing is that the trillion-dollar investments in AI are very hard to recoup.
That’s why AI companies are desperately turning to “slop” to boost revenues.
Even if AI was as great as advertised, a typical human can only make and digest so many queries a day, no matter how good the answers are.
It’s not as if we haven’t had amazingly convenient sources of knowledge for quite a while now: wiki, MIT open courses, regular searches, or people like Scott Aaronson…
and Scott’s problem is not that his inbox is filled with so many interesting queries that he’s ever considered hiring assistants to help him reply.
The truth is that it takes a certain type of brain and interests to make a person lean on AI all that much.
That’s the paradox: you need a minimum level of knowledge to take full advantage of AI… and the more humans rely on AI, the less likely they’ll know what to ask.
Once/if all the code is written by AI, the pool of genuine expert human coders will be so tiny that no one will be able to ask meaningful coding questions.
Only a minority of people who enjoy the drudgery of doing something that could be done by an AI will be the few still interacting with AI in that area.
Fundamentally, it shows we’ll need a reform of education: AI has made standard tests meaningless because cheating is now so easy, so AI has to become part of how we teach things, which would also change the roles of teaching assistants and of the teachers themselves.
This transition may create a huge difference of how two generations consider what knowledge means.
Comment #33 January 20th, 2026 at 3:38 pm
For the past 3 years, I’ve been arguing with people about AI and how it’s gonna kill us all (actually, for the past 15 years, but back then it was just very close people like my mom or my girlfriend). Everyone around me completely disagrees with me (well, except my mom, whom I finally convinced) regardless of their background (amusingly varying from CS PhDs who “know better” how AI works to philosophy PhDs who think it’s just CS-bro hype). I’ve been trying to find out where the disagreement lies and I’ve been shocked at how many people are secretly dualists (again ranging from the deeply religious person who at first was just saying “well, it won’t happen” to eventually conceding “well, man is a special being”, to an atheist who admitted to not being a physicalist and stating that “these people are building it from the wrong material”).
I invite everyone to have this discussion and discover their peers’ secret metaphysical stances.
Comment #34 January 20th, 2026 at 3:49 pm
First, excellent post, Scott! I agree with all of your main points!
In comment #2, Scott, you also said, “At this point, it would be tempting to describe the old professor as a Cartesian dualist in denial. And maybe she is! But maybe her position is that sure, there might ultimately be a mechanistic answer to the mystery of how our brains do it, but it still can’t be this answer, because this answer is self-evidently too horrible and ugly and would drain all the meaning from existence.”
(1) I just want to emphasize and spotlight a point of which Scott is aware and one presupposed in this paragraph. There is *no* entailment from substance dualism to the view that AI (or even a giant lookup table) cannot be conscious, and cannot have value or dignity as a result. There is no reason at all to think that if substance dualism is true, AI cannot have qualia and genuine mental states (including pain and self-reflection) or free will, or be a real moral agent deserving of respect and rights. The idea of substance and property dualism is just this: that physical substances and mental substances, as well as physical properties and mental properties, are logically distinct and connected by psychophysical laws. It would all depend on the form of the particular psychophysical laws. As a tentative property and substance dualist myself, I tend to believe in the beauty and elegance of psychophysical laws. As such, I tend to favor the (I think simple and beautiful) idea that the right type of computations in any substrate would generate consciousness via a law that says, ‘computations of type A will cause a mental substance with mental states of type B to exist’. If nothing else, I think a substance dualist ought to act as if the psychophysical laws make this possible in the light of uncertainty. Better safe than sorry! The people who offer, for example, Chinese room arguments against the possibility of AI having genuine consciousness, personhood, and dignity fail to see something, I think. Yes, the Chinese room is a set of physical objects (and mental, in the case of the man inside the room). And the experiment does seem to show that there is *no logical entailment* that the room must intrinsically have a mental state of understanding Chinese (the same as a native Chinese speaker would). But suppose we posit that there are, in addition to the set of physical objects, logically contingent psychophysical laws that cause to exist a logically distinct mental state (one of a native Chinese speaker) when the Chinese room is in a certain configuration. This state of affairs is logically possible and not violated by the intuition gained from that thought experiment. I think that is extremely plausible.
(2) A word about dualism. From the literature I have read (which is obviously a biased and limited sample, and I am not an expert, so everyone should be exceedingly skeptical and cautious in interacting with my opinion now!), it seems to me the modern and most formidable philosophical arguments *in favor* of property and even substance dualism are actually plausible and respectable. In fact, something stronger is probably true from my current epistemic position–a position that I hope to update in light of criticism and insight here! Right now, to me, the arguments for substance dualism seem more plausible than physicalist rivals. Are you (or is anyone else here) familiar with the arguments for property and substance dualism, given by, for example, Michael Huemer, Richard Swinburne, Dustin Crummett, Ralph Stefan Weir, etc.? I tend to think that if substance dualism is false, something ‘just as spooky’ must be true–like nonphysical, metaphysically necessary principles/rules of personal identity for composite physical substances like brains. I think substance dualism has a few theoretical virtues the nonphysical identity principles would not, but I cannot rule out the latter. But both would entail something nonphysical.
Anyway, thank you for the opportunity for interacting about these topics, Scott! Anyone is welcome to comment if they are interested. I am running a bit short on time, so I can make no promises about replies, though I hope to reply if possible.
Comment #35 January 20th, 2026 at 7:44 pm
Many interesting points:
1. That this all arises in the context of a dream stands out to me, given the limited understanding of the structure and function of dreaming in contemporary neuroscience, in particular of the pathological consequences of REM deprivation. The crudest accepted understanding of dreaming, given the relative paucity of any understanding of the general phenomenon, is that it is necessary for healthy function; e.g., persistent deprivation of REM (for whatever reason) tends toward pathological endpoints. In short, dreaming is recognized as ‘healthy’ function, and persistent interference with dreaming tends to produce adverse consequences. However, I am not sure anyone knows how or why dreaming is healthy, or how or why its absence has the particular pathological consequences that it has.
2. I empathize with the professor’s view of the inelegance of brute-force ‘table lookup’, since at root level ANNs, through training an astronomical number of parameters, often appear to bake so much information into the probabilities that the map becomes close to (if not strictly) a 1-1 isomorphism; i.e., an elaborate brute-force lookup table.
Although,
3. Melanie Mitchell at the Santa Fe Institute has long advocated for a neuroanatomical distinction between formal and functional linguistics, pointing out that spatial, formal, logical, and mathematical cognition are entirely distinct from the linguistic processing seen ubiquitously in this wave of LLM artificial intelligence; e.g., SFI Season 2025 Complexity Podcast, The Nature of Intelligence (https://www.santafe.edu/culture/podcasts). This neuroanatomical distinction among types and subtypes of intelligence has me curious about what the future of artificial intelligence potentially holds ‘beyond’ LLMs and their brute-force table lookup; e.g., when sets of problems requiring non-linguistic processing (formal, logical, mathematical, musical, emotional, etc.) are solved by artificial intelligences in their native wild-type form, as opposed to mapping those problems onto a linguistic expression and then solving them via brute-force table lookup.
4. ChatGPT and Claude have advanced to impressive degrees in their ability to navigate theoretical physics problems, although debates seem to persist among journal authors, reviewers, and editors about whether or not they can do theoretical physics at the required level. This has me curious about how they can ever get to that point: unlike other fields, where users train the LLMs on a daily basis at a volume that makes rapid growth and development possible, are there even enough theoretical physicists alive, putting in enough hours, to reach the required threshold?
Comment #36 January 21st, 2026 at 4:39 am
#30 Scott: Groan. It gets worse. As well as asking AI about general relativity, I recommend you go back to the original papers and read what Einstein actually said. Sadly the Einstein digital papers are offline at the moment, but see for example https://mathshistory.st-andrews.ac.uk/Extras/Einstein_ether/. For a quick test I asked this: “Did Einstein say light followed a geodesic?” The answer was “Yes, Einstein’s theory of general relativity implies that light follows a geodesic in curved spacetime, meaning it travels along the path that represents the shortest distance between two points in that curved geometry. This concept is fundamental to understanding how gravity affects the motion of light”. Unfortunately it’s wrong, and it doesn’t deliver any understanding of how gravity affects the motion of light. Try pinning the AI down with another question such as “Where did Einstein say light followed a geodesic?” and you’ll get waffle, because he never did. Try asking if matter moving at near the speed of light follows a geodesic too, and the answer will be yes. It sounds reasonable enough until you ask this: “Is the deflection of light twice the deflection of matter?” Then you get more waffle, because light is deflected twice as much as matter, so they can’t both be following that geodesic. The answers are wrong. So very wrong that they’re cargo-cult crap. Other answers are wrong too. Answers to do with politics and genocide. Because right now an AI is just an LLM, a quick way for lazy people to get a pseudo-someone to search the internet for them. We are living in a dark age, Scott. Trust me on this.
Comment #37 January 21st, 2026 at 5:38 am
Del #18
“ I think for human beings most of the “thinking” ability comes deep and outside of the language/visual abilities”
Kekulé reported that he identified the correct structure of benzene while asleep and dreaming. John von Neumann reported that he solved his hardest problems while sleeping; when he awoke, it was all there. Einstein said that his breakthroughs came from working in the sensorium and were based largely on intuition: it felt right. So yes, these statements support your thought.
The takeaway from this is the next time Dr. Aaronson meets this crone in a dream he must ask her about P vs NP.
Comment #38 January 21st, 2026 at 6:39 am
For what it may be worth, much work has investigated how the equations of motion (geodesic or otherwise) for an ideal “point particle” (massive or otherwise) are derivable from the field equations of General Relativity. In the 1930s Einstein and Rosen worked on the problem, as they discuss in “The Particle Problem in the General Theory of Relativity” (https://journals.aps.org/pr/abstract/10.1103/PhysRev.48.73), as did many others. Eric Poisson et al. provide a comprehensive review (https://link.springer.com/article/10.12942/lrr-2004-6) (https://arxiv.org/abs/1102.0529).
But these very detailed specifics concerning geodesics in spacetime, and whether they are or are not derivable from field equations (as mechanical equations of motion), seem to expose how cognitive dissonance is accommodated in individual, social, and artificial intelligences.
Generic absolute binary true and false distinctions tend to be blurred and obscured in the real world, in analogy to the parable of ‘the blind men and the elephant’, wherein the objective reality is an atlas of charts whose totality reveals a whole greater than the sum of the individual parts observed from relative perspectives; e.g., each blind man is both true and false, relatively and simultaneously. To wit: the capacity to hold ‘opposing thoughts’ ‘in mind’, and how that generates internal conflicts within individuals and external conflicts when distributed across individuals; e.g., “debates” in political consciousness and social media. I have heard that there are two versions of how the parable of the blind men and the elephant concludes: one version ends in dispute and conflict, and one version ends with learning within individual and social consciousness.
How artificial intelligence navigates its capacity ‘to hold opposing thoughts simultaneously’ and how society views the machine’s abilities to do so (well or otherwise) seems to be a development we can look forward to spectating.
Comment #39 January 21st, 2026 at 8:35 am
Scott #30
“Anyway, GPT5.2-Pro is good enough that I now regularly use it”
This may not seem like a big deal, but the fact that you had to mention a very precise version is very telling of an actual problem imo.
No human who has a real job to do can keep track of all the available models and their differences (besides some vague benchmarks).
Please tell me the differences between copilot running with Claude Haiku 4.5, Claude Sonnet 4, Claude Sonnet 4.5, GPT-5, Gemini 2.5 Pro, GPT-4.1, GPT-4o, GPT-5 mini, etc. (those can all be potential options in copilot)
Not only do you get different answers based on the model, but obviously also based on the phrasing of the question (although you already hear much less about “query engineering”).
Last week I got totally misled by copilot regarding a programming language feature (even though we’re told that coding is the thing at which LLMs are so good).
I had my doubts and asked Google and got a more correct answer.
The mistake made me lose two hours.
The second point is about the so-called exponential growth (the claim that the dramatic improvement of capabilities is going to stay proportional to the scaling of the resources) that was predicted a year and a half ago and discussed in the viral paper “Situational Awareness”:
https://scottaaronson.blog/?p=8047
We just don’t see exponential growth anymore; otherwise, by definition, there wouldn’t be a need to keep so many recent and older models around and have the user figure out which one is the best fit for his/her area of interest.
This huge offering of competing models with rather small differences not only creates an added burden/confusion on the user side, but is also preventing any clear winner from emerging, while investment will eventually dry out and revenue is split among too many competitors for any of them to sustain their business.
Comment #40 January 21st, 2026 at 10:32 am
John Duffield #36
Isn’t it the case that geodesics are, by definition, the trajectories of free-falling point particles of a given mass?
Comment #41 January 21st, 2026 at 11:31 am
Ty-Ty
I am not sure if the host wants to field this question, but in short the answer is yes and no: subtleties depend on how one defines a point particle, whether that particle has its own field (including a gravitational field of its own), whether back-reaction between the ambient gravitational field and the particle’s field is considered, and (if so) to what order. Depending on the answers to those specifics, the answer is possibly ‘yes’ and possibly ‘no’. There is also a distinction concerning what ‘inertia’ (a Newtonian concept) means in a relativistic context (which includes “massless” particles). The literature has occasionally referred to these subtleties via the question “does a free-falling charged particle radiate?”. Hope this helps.
Comment #42 January 22nd, 2026 at 3:39 am
Looking forward to your reaction to Mark Carney’s speech in Davos (transcript here and everywhere: https://globalnews.ca/news/11620877/carney-davos-wef-speech-transcript/)
It seems like one of those society-changing speeches akin to “I had a dream…”
Comment #43 January 22nd, 2026 at 9:20 am
Shmi #42
more like “I had a nightmare…”
Comment #44 January 22nd, 2026 at 10:17 am
Ty-ty #40
The trajectories of free-falling objects are often called geodesics, but these trajectories are only straight lines when the objects are falling straight down. Imagine you have two horizontally-moving objects in space near the Earth, one with a mass of 1g, one with a mass of 1000g. A feather and a hammer if you like. Neither is big enough to have any measurable effect upon the Earth. Their motion curves downwards due to gravity. Their trajectories are the same. If you repeat the scenario but make them move faster, their motion still curves downwards. The downward component of their motion is the same as before, but the trajectories look less curved because they’re moving faster. The same applies if you make them move very fast, so close to the speed of light that you can’t tell the difference. You can use the word “geodesic” when you are talking about the mathematics of their free-fall motion through space over time. But if you replace the feather with a photon and repeat, you find that the photon does not follow the same trajectory as the hammer. The downward component of its motion is twice that of the hammer. Hence their curved motion is not happening because they’re “following a geodesic”, or because they’re following the curvature of spacetime. That’s lies-to-children. The horizontal light beam curves wherever there’s a spacetime gradient. It curves twice as much as matter because of the wave nature of matter.
I watched the video. See here https://youtu.be/4QPKWFme0k4?t=3075 where Scott Hughes said let’s make this a little more rigorous, and then talked about a body moving on a trajectory through spacetime. It might be rigorous, but it’s not what Einstein said, and it’s wrong. A body moves on a trajectory through space, not spacetime. Spacetime is an abstract mathematical arena which models space at all times. So there is no motion in spacetime. So you do not move up your world line. You do not move along a geodesic either, and nor does a falling body. But that’s what people like Scott Hughes will teach their students, because that’s what’s in their textbooks. AI will repeat it all, compounding the error, and people like Scott Aaronson will be none the wiser.
Comment #45 January 22nd, 2026 at 12:10 pm
John Duffield #44
thanks John, very interesting!
Comment #46 January 22nd, 2026 at 5:17 pm
Scott, when anyone says that it’s just a giant lookup table, a statistical autocompletion engine and so on and so forth:
Ask them what happens if you ask an LLM what a horse-like striped African animal is. The LLM will go, “Sure! The animal you’re asking about is ” — and what is the next token, “a” or “an”?
It’s “a” most of the time at any reasonable temperature, which means that at this point it already knows what the final answer will be. It does not generate it token-by-token. You can investigate the size of the window (and it’s actually very interesting!), but even one token lookahead means that it is not generating text token by token.
And if they point out that it factually does, point out that they factually do as well.
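If you have a local model handy, this is easy to check directly. Here is a minimal sketch (mine, nothing official) using the Hugging Face transformers API; the model name and prompt are just illustrative stand-ins:

    # Probe the next-token distribution at the "a"/"an" branch point.
    # Model and prompt are illustrative stand-ins.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # the effect is clearer with larger models
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = ("Q: What is a horse-like striped African animal?\n"
              "A: The animal you're asking about is")
    ids = tok(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # scores for the next token only
    probs = torch.softmax(logits, dim=-1)

    # If " a" dominates " an", the model has in some sense already settled
    # on "zebra" before emitting it.
    for word in [" a", " an"]:
        token_id = tok.encode(word)[0]
        print(f"P({word!r}) = {probs[token_id].item():.4f}")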
Comment #47 January 22nd, 2026 at 5:19 pm
John Duffield #44: Your way of explaining geodesics seems to me too dependent on the choice of a coordinate system. Given any geodesic, there exists a coordinate system with respect to which that geodesic is a straight line. The issue is that you can’t use the same coordinate system for all geodesics simultaneously, and that fact is indeed due to curvature rather than the first derivative of the spacetime metric — when the curvature is zero, there is in fact a single coordinate system with respect to which all geodesics are straight lines.
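To spell this out: the standard construction here is Fermi normal coordinates along the given geodesic \(\gamma\), in which the Christoffel symbols vanish everywhere on \(\gamma\), so that the geodesic equation there reduces to \(d^2x^\mu/d\tau^2 = 0\), i.e. a straight line. Curvature is precisely the obstruction to arranging this for all geodesics at once.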
Comment #48 January 23rd, 2026 at 4:07 am
A giant look-up table of everything smart and stupid, good and bad, ever produced, reprocessed for purposes of prediction – but is this really different from humans?
We observe our environment starting from the earliest childhood and memorize our observations – there is your lookup table, hash table, or whatever data structure you prefer.
We tailor our response to a new observation according to a Bayesian (or irrational) expectation for the result, based on our memories of the past – there is the stochastic predictor.
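To make the two-part picture concrete, here is a toy sketch, nothing more than a bigram table plus sampling:

    # Toy "lookup table + stochastic predictor": memorize observed bigrams,
    # then sample the next word in proportion to past experience.
    import random
    from collections import defaultdict

    class BigramPredictor:
        def __init__(self):
            # The "lookup table" of memorized observations.
            self.table = defaultdict(lambda: defaultdict(int))

        def observe(self, words):
            # Memorize which word followed which.
            for prev, nxt in zip(words, words[1:]):
                self.table[prev][nxt] += 1

        def predict(self, prev):
            # The "stochastic predictor" built on those memories.
            counts = self.table[prev]
            if not counts:
                return None
            choices, weights = zip(*counts.items())
            return random.choices(choices, weights=weights)[0]

    p = BigramPredictor()
    p.observe("the cat sat on the mat and the cat slept".split())
    print(p.predict("the"))  # "cat" is twice as likely as "mat"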
Most people strive to meet the expectations of their peers in private and public. As with Tolkien’s ‘most respected hobbits’, you can predict what they would say or do in a given situation without having to ask them. And the Internet and most news media are also full of ‘human slop’.
So what arguments except ‘faith’ exist that we are fundamentally different from an organic-chemistry-based AI? Because if we are not, our eventual demise seems only a matter of future software and hardware progress.
Comment #49 January 23rd, 2026 at 7:32 am
Everything can be reduced to a giant lookup table.
But filling it with the right values can be arbitrarily tricky! E.g. the busy beaver lookup table.
Comment #50 January 23rd, 2026 at 4:11 pm
@Ty-Ty:
I should warn you that what John Duffield said is wrong. In fact the downward component of the objects’ motion will change as the objects approach the speed of light, and as they draw near to it, they will approach the motion of a photon.
As for the second paragraph, it’s also wrong. The objects’ trajectories in spacetime will be very close to spacetime geodesics (these are not, of course, geodesics of the curved space around the Earth), because their stress-energy tensor is very small. If, per impossibile, we were to have an object made of tachyons, travelling at infinite speed, its trajectory in space would be a geodesic, curving entirely due to the curvature of space.
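For what it’s worth, the standard weak-field formula makes the continuity explicit: a test body passing a mass \(M\) with impact parameter \(b\) at speed \(v\) is deflected by \(\alpha = \frac{2GM}{b v^2}\left(1 + \frac{v^2}{c^2}\right)\), which interpolates smoothly between the Newtonian \(2GM/(b v^2)\) for \(v \ll c\) and Einstein’s \(4GM/(b c^2)\) at \(v = c\). There is no jump at the speed of light, and no appeal to wave nature is needed.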
Comment #51 January 23rd, 2026 at 6:32 pm
Ty-ty #45 : You’re welcome. When you dig into this, you come to realise that things are pretty dire. Search the Internet for misconceptions in gravitational physics.
Dacyn #47 : Yes, there’s a coordinate system with respect to which some geodesic is a straight line. But if you and I are moving almost as fast, side by side, and you’re an electron and I’m a photon, my downward curvature is twice yours. So there is no coordinate system that describes both your motion and mine. This all goes back to Einstein’s variable speed of light. He said light curves because a concentration of energy alters the surrounding space, making it “neither homogeneous nor isotropic”. As a result the horizontal light beam curves downwards like a sonar wave curves downwards in the sea. Sadly this is not a feature of modern general relativity. See https://en.wikipedia.org/w/index.php?title=Variable_speed_of_light&oldid=770034932#Einstein's_updated_proposals_(1905%E2%80%931915) for more. Sadly AI, or more properly an LLM, won’t tell you any of this. That’s because it treats the prevalent material on the Internet as the truth. Which is why it will tell you that the Israelis are committing genocide in Gaza, when they’re not. I don’t think Scott has quite got this yet. Sorry Scott.
Comment #52 January 24th, 2026 at 12:55 am
Show her this: https://thegradient.pub/othello/
That’s unquestionably intelligence of a sort, I would say.
Comment #53 January 24th, 2026 at 1:16 am
John Duffield #36
I suspect you are confused by some popular (illiterate) exposition. Did you ever try to solve the ODE for geodesics? It is a very simple ODE; nothing special happens when the direction is light-like. In particular, null geodesics (followed by light) are limits of geodesics of nonzero length (followed by matter).
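For concreteness, that ODE is \(\frac{d^2 x^\mu}{d\lambda^2} + \Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{d\lambda}\frac{dx^\beta}{d\lambda} = 0\), with an affine parameter \(\lambda\) in place of proper time in the light-like case; the equation is indifferent to whether \(g_{\mu\nu}\,\dot x^\mu \dot x^\nu\) is negative, zero, or positive.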
Comment #54 January 24th, 2026 at 5:44 am
John Duffield #Various
I don’t understand your logic. You note that increasing the speed of an object allows it to travel farther horizontally, but then at the speed of light there is a discontinuity: suddenly the deflection doubles because of the wave nature of light. What about the wave nature results in twice the deflection? If we carefully monitor sound waves, will they descend at twice the rate predicted by Newtonian mechanics?
(Note: Your statement was “It curves twice as much as matter because of the wave nature of matter.” I assume you meant …the wave nature of light.)
I don’t understand your comments about trajectories through space but not spacetime. When light moves through a gravitational field the local clocks are slowed with respect to a distant observer’s clock outside the gravitational field. I don’t understand why you would consider a geodesic to apply only to space. Geodesics are well defined in differential geometry, whether for a 3-manifold or a 4-manifold. Eddington’s measurements supported the prediction of light traveling through a 4-manifold (spacetime) rather than a 3-manifold (space).
Time passes for all non-zero displacements from the standpoint of an external observer, so yes, there is movement along the time axis as well as the spatial axes.
Comment #55 January 24th, 2026 at 7:50 am
Alex T #48
“Most people strive to meet the expectations of their peers in private and public. As with Tolkien’s ‘most respected hobbits’, you can predict what they would say or do in a given situation without having to ask them.”
Don’t forget Gandalf’s observation:
“‘My dear Frodo!’ exclaimed Gandalf. ‘Hobbits really are amazing creatures, as I have said before. You can learn all that there is to know about their ways in a month, and yet after a hundred years they can still surprise you at a pinch.’”
And therein lies the difference.
Comment #56 January 24th, 2026 at 9:44 am
John Duffield
“That’s because it treats the prevalent material on the Internet as the truth.”
There’s no way around this since LLMs are trained on human concepts and interpretations of the world (more accurately, not the world itself but sensory inputs and thoughts).
So, garbage in, garbage out.
No LLM has the power to independently question its inner models, check them against raw world data, and make corrections. Correction can only happen with the help of humans rating its answers.
Another consequence of this: what happens when an LLM is trained on contradictory texts? Either this will prevent the formation of inner models, or you will sometimes get one answer and sometimes its opposite, at random.
Contrast AlphaGo Zero, which trained from scratch by learning directly from the world it lives in. That is the path to true artificial intelligence, i.e. intelligence emerging by itself from its environment, with no reliance on human interpretations.
Comment #57 January 24th, 2026 at 6:28 pm
John Duffield #51:
> But if you and I are moving almost as fast, side by side, and you’re an electron and I’m a photon, my downward curvature is twice yours.
What? Why?
> So there is no coordinate system that describes both your motion and mine.
What do you mean by this? Of course there are coordinate systems that describe both your motion and mine, just look at any coordinate patch that covers the region of spacetime you’re interested in. Do you mean that there’s no coordinate system with respect to which both your motion and mine are straight lines?
Comment #58 January 26th, 2026 at 8:38 pm
This was such a cool dream!
Comment #59 January 27th, 2026 at 5:12 am
Ilya #53 : I’m not confused. The deflection of light is twice the Newtonian deflection of matter. Google it. Einstein revised his prediction to account for this.
OhMyGoodness #54 : there’s no discontinuity at the speed of light. If you contrived a light beam moving horizontally in a gravitational field, it would exhibit a downward deflection. If you contrived a variety of matter bodies moving horizontally at different speeds in a gravitational field, they would all exhibit the same downward deflection. However this would be half the downward deflection of the light beam.
I meant the wave nature of matter. See Erwin Schrödinger’s 1926 paper “Quantisation as a problem of proper values, Part II”, which talked about a wave in a closed path. Or Charles Galton Darwin’s 1927 Nature paper “The electron as a vector wave”, which talked about a spherical harmonic for the two directions of spin. You can think of an electron as a wave in a closed path, where half the path is horizontal, and half is vertical. Only the horizontal component gets deflected downwards. Hence the deflection of matter is half the deflection of light.
As regards the geodesics, Einstein said this in 1920: “Second, this consequence shows that the law of the constancy of the speed of light no longer holds, according to the general theory of relativity, in spaces that have gravitational fields. As a simple geometric consideration shows, the curvature of light rays occurs only in spaces where the speed of light is spatially variable”. He said light curves in a gravitational field because it’s a place where there’s a gradient in the speed of light. Not because it follows a geodesic.
Ty-ty #56: I agree wholeheartedly. That’s the point I was trying to make.
Dacyn #57: See what I said above. Yes, I mean there’s no single coordinate system with respect to which both your motion and mine are straight lines. That’s because the claim that a photon or electron curves downwards because it travels in a straight line through curved spacetime is wrong. You have to read the Einstein digital papers to realise this, and Princeton have taken them offline. Hence you can’t get an AI to do it for you.
Scott, sorry to comment so much, but this is an important example that demonstrates a point.
Comment #60 January 27th, 2026 at 11:01 am
The long-term trend of science has been to show that humans are not special: we are not in a special place in the cosmos, we weren’t created specially, we aren’t made out of anything special. People who expect intelligence to be something very special might still be right, but it doesn’t look like a good bet.
Comment #61 January 27th, 2026 at 1:03 pm
John Duffield #59
“ As regards the geodesics, Einstein said this in 1920: “Second, this consequence shows that the law of the constancy of the speed of light no longer holds, according to the general theory of relativity, in spaces that have gravitational fields. As a simple geometric consideration shows, the curvature of light rays occurs only in spaces where the speed of light is spatially variable”. He said light curves in a gravitational field because it’s a place where there’s a gradient in the speed of light. Not because it follows a geodesic.”
I don’t feel competent to engage in this discussion, but please bear with me. I am not sure what Einstein, and then you, are claiming here. If I measure the speed of light in a vacuum in a laboratory on Earth, then I measure the distance it travels in one second based on my local clock, and I obtain the standard velocity (speed) of light. I assume you mean scalar velocity by speed. I agree that the distance it traveled in 1 sec will be different depending on the gravitational field, because clocks run at different rates depending on the gravitational field. The distance it travels in one second can’t be compared against some universal value, because clocks vary, but the speed (scalar velocity) remains the same (smaller distance/shorter second or larger distance/longer second). Yes, I agree about a gradient being necessary, but there is a time gradient, so, more poetically, light is refracted by a time gradient.
Comment #62 January 27th, 2026 at 3:07 pm
John Duffield
I think you can see I paid darn good attention when I read Methuselah’s Children as a ten year old. 🙂
Comment #63 January 27th, 2026 at 7:08 pm
John Duffield #59: But of course for any two paths there is a coordinate system with respect to which they are both straight lines. This is a not too difficult theorem in manifold theory, though the proof is probably too long for a comment. You need to consider more paths, like an infinite family of them, before the existence of a coordinate system with respect to which they are all straight lines becomes a nontrivial requirement.
Regarding deflection of light, are you trying to say that photons and/or electrons don’t follow geodesics? Because if they do follow geodesics, then any behavior of their path can be said to be “because they are following geodesics”, despite the fact that Einstein’s papers may have a different explanation. Basically, it is possible for multiple explanations to be consistent with each other, but the modern explanation is usually the more enlightening one.
Regarding the speed of light being spatially variable, I am pretty sure either you or Einstein is/was wrong. The speed of light is a constant 299 792 458 m / s, in fact a meter is defined to be the unique length such that that conversion holds. Of course, it’s possible that Einstein actually meant something else, such as that the spacetime metric is variable. It’s also possible that Einstein is using language that meant something different when he wrote it than it means nowadays.
Comment #64 January 28th, 2026 at 3:24 am
Amazing to me that optical atomic clocks show a difference in the passage of time due to gravitational time dilation even with centimeters of elevation difference in Earth’s gravitational field. Time is sensitive to gravity in a manner completely counterintuitive to everyday experience but readily apparent with sufficiently accurate measurement. The most accurate practical mapping of gravity field intensity employs a clock.
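The arithmetic is simple: between two clocks separated by a height \(\Delta h\) in Earth’s field, the fractional frequency shift is \(\frac{\Delta\nu}{\nu} = \frac{g\,\Delta h}{c^2}\), so \(\Delta h = 1\) cm gives \(\frac{9.8 \times 0.01}{(3\times 10^8)^2} \approx 1.1\times 10^{-18}\), right at the precision frontier of today’s best optical clocks.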
Comment #65 January 28th, 2026 at 3:55 am
The metaphor “time is money” considering GR implies that the deeper you are in a gravity field the more money you have. This probably needs a rewrite like “time is money excluding the effects of gravitational time dilation”.
Comment #66 January 29th, 2026 at 4:56 am
I asked ChatGPT about measurement of light speed on the surface of a neutron star. With a surface gravity of 10^11 times Earth’s, the gravitational time dilation is about 25% (I suspected more than this). In any event, ChatGPT says local measurement results in c, because time dilation is offset perfectly by the change in local length. ChatGPT provides the GR math to support this conclusion. A distant observer may conclude a different result due to using his clock and the change in coordinates. The factor to dilate time is the same as that to compress length for a local observer, and so it just cancels out in computing the ratio d/t.
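As a rough sanity check using typical textbook values rather than anything from that chat: for \(M \approx 1.4\,M_\odot\) and \(r \approx 10\) km, the dilation factor is \(\sqrt{1 - \frac{2GM}{r c^2}} \approx \sqrt{1 - 0.41} \approx 0.77\), a slowdown of roughly 23%, in the same ballpark as ChatGPT’s 25%.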
Comment #67 January 29th, 2026 at 2:42 pm
John Duffield #59
Are you going to bring “googling” arguments “to a fight” where many of your detractors actually took an exam on the subject and/or taught the subject?! As I said: you follow illiterate sources.
Eddington’s factor of 2 comes from comparing Newtonian gravitation in Galileo-invariant space to Einsteinian gravitation (in Lorentz-[micro-]invariant space). It has no relation to being massless.
(I cannot stop myself from mentioning here that a few years ago mathematicians realized that in Lorentz-[micro-]invariant space, the Newtonian law of gravitation written in the Laplace form \(\Delta \phi = 0\) (in other words, “the tidal forces ‘averaged over all directions’ give zero”) is actually equivalent to the Einstein vacuum equations! This became the scientific background to the novel Incandescence. See the author’s web page with the notes about this novel for details.)
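To spell out the dictionary: the Newtonian vacuum law is \(\Delta\phi = 0\), i.e. tidal accelerations average to zero over directions, while the Einstein vacuum equations say the Ricci tensor vanishes, \(R_{\mu\nu} = 0\); in the weak-field limit \(R_{tt} \approx \Delta\phi\), which is what makes the correspondence possible.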
How did you manage to come to this [ridiculous] conclusion?!
Dacyn #63
This doesn’t make sense in a physics context. (Because it assumes the existence of a special coordinate system.)
Comment #68 January 29th, 2026 at 2:55 pm
I think I wasn’t clear enough…
What I wrote above shows that the factor 2 is due essentially¹) to the difference between Galileo and Lorentz kinematics.
¹) However, relating one to another requires a lot of ingenious math — and this relation has been missed for almost a century.
Comment #69 January 29th, 2026 at 5:31 pm
Ilya Zakharevich #67:
> This doesn’t make sense in physics context. (Because it assumes existence of a special coordinate system.)
No, the “speed of light” is a fundamental physical constant that is by definition a conversion factor between meters and seconds, and is often set to 1 by convention. See e.g. the wiki page https://en.wikipedia.org/wiki/Speed_of_light. It’s true that the motion of light relative to a coordinate system is dependent on that coordinate system, e.g. it is possible for light to remain motionless in certain coordinate systems (since light follows null geodesics, and you can always choose a coordinate system where the time axis is a null geodesic). But this doesn’t have anything to do with the abstract concept of “the speed of light”.
Comment #70 January 30th, 2026 at 11:08 pm
Dacyn #63
Using meters and seconds shows your argument as completely bogus. It is practically impossible to discuss the physics of special (and much more so general) relativity using SI units. Just try to write down, in the general form, how the electromagnetic field tensor changes under general Lorentz transformations! (The transformation law is specified by a 4×4×4×4 tensor — and what do you think the dimensions of its entries are?!) This is why any mathematically honest discussion requires using dimensionless units.
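Concretely, for the field tensor the law reads \(F'_{\mu\nu} = \Lambda_\mu{}^{\alpha}\,\Lambda_\nu{}^{\beta}\,F_{\alpha\beta}\); written out componentwise, the pair of \(\Lambda\) factors is exactly a four-index (4×4×4×4) array, and if the coordinates carry mixed units of meters and seconds, its entries carry mixed units too.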
Discussions of speed of light — even when it is 1, and not some fancy dimensional constant — assumes you know what is space and what is time. In GR, you do not.
Instead you just discuss the local light cone. Every point has its own light cone — completely unrelated for different points.
Comment #71 January 31st, 2026 at 4:05 pm
Ilya Zakharevich #70: You are missing my point, which is a terminological one. My point is that whether you like it or not, whether it is useful for theoretical physics or not, the phrase “speed of light” in common usage refers to a certain number of meters per second. This is evidenced by my wiki link.
But in any case, you are wrong that a mathematically honest discussion requires dimensionless units. It may require setting the speed of light to 1, so that lengths are equivalent to durations, but why do lengths and durations have to be made unitless? To answer your question about the dimensions of tensor entries: of course it depends on how many tensor indices are in subscript versus superscript; if \(n\) are in subscript and \(m\) are in superscript, then the units should be \(\text{meters}^{m-n}\) (or equivalently \(\text{seconds}^{m-n}\)).
> Instead you just discuss the local light cone. Every point has its own light cone — completely unrelated for different points.
Sure — another way of putting it is that there is no notion of “the speed of light” in abstract physics.
Comment #72 January 31st, 2026 at 4:50 pm
That colleague of yours (is that her true stance in reality, too?) mostly matches my view.
There are two modes in which people think: one is out-of-the-box, venturing into new space, perhaps erring; the other is groupthink, interpolating *within* the known frame of reference.
It is impolite to match the first with masculine and right-wing, the latter with feminine and left-wing behaviour – but ignoring politeness, it is probably true (at a statistically meaningful level, not 100%).
Two women on women’s brains and feminism at large, resp.:
Tania Singer, via https://x.com/JoakimMarias/status/2014240818404503906
Helen Andrews, YT: v=EWLbq7PlrIA
AFAIK, current AI is 100% in the groupthink camp, with “the Internet incl. Google Books” as its frame of reference. So AI is neither smart nor dangerous in itself; the “lookup table” (or, as a colleague of mine prefers, “a big switch statement”) applies.
Scott #23 on bagel #21 is a very good example: rabbis have a narrow field of axioms, Torah+Talmud+?, and most questions have been asked; answers are available on the web, thence in the LLM, thence repeatable by AI. BUT: come up with a new, original question (no idea of an example ;-) ) … and the AI will mumble along hopelessly, while the rabbi is on point (I guess).
AI may be slop, but slop has its applications; in many situations slop is just good enough, and (relatively) cheap.
The danger, however, IS imminent whenever politicians or similar try to replace “common sense” or “parliamentary voting” with “ask grok [and we feed it beforehand, purging the lookup table a bit]”.
If AI ever became “out-of-the-box”, though, I might prefer that it had been torched first.
PS: The best stock market tip for this decade might be shorting Oracle, OpenAI & the like …
Comment #73 February 1st, 2026 at 12:00 am
This is my point, yes.
Therefore, this is completely irrelevant. A lot of things in common usage do not make any sense; — but why discuss them?!
Obviously, you never tried to write it down. Did you forget that a basis vector could look like 3×meters + 2×sec?!
BTW, you can quote here by <blockquote>.
Comment #74 February 1st, 2026 at 3:57 pm
I was correcting someone who was using the terminology in a nonstandard way. I think it’s important to use standard terminology in order to be clear.
Huh? The standard basis \(\delta_i^j\) has 1 subscript and 1 superscript, so it would be unitless. Why would it look like 3×meters + 2×sec? (Incidentally, how do you write math here?)
Comment #75 February 2nd, 2026 at 1:05 am
To support general Lorents transforms, you need to be able to change to any “orthonormal” Minkowski basis. In particular, to a basis containing the vector above (with suitable normalization).
It was not math, just a few Unicode characters. (Gboard supports very few; on PC you can use my keyboard layouts, which support thousands.) But I think “\(” etc. are also supported here.
Comment #76 February 2nd, 2026 at 2:43 pm
If I understand you right, you are saying that some vectors in an orthonormal basis may have nonzero space and time components, so they would look like “3 meters in direction i + 2 seconds in the time direction”. First of all, if this is true I don’t see why it’s a problem, as you can just use the conversion factor to convert meters to seconds and then everything is in one type of units. But secondly, the change of basis matrix is still of the form one-subscript-and-one-superscript, so it is unitless. One way of interpreting that is to say that the basis vectors themselves are unitless — though really the basis vectors shouldn’t be thought of as a separate thing from the change of basis matrix.
The part where you should actually use units is when there is a natural tensor such as the curvature with different numbers of subscripts and superscripts. The curvature describes how a vector is transformed when moved along a small loop in physical space, as a bilinear function of the two vectors used to create the loop. Since these vectors are in physical space they are best described using physical length/duration units, so curvature is best described in units of length\(^{-2}\) (equivalently duration\(^{-2}\)).
Comment #77 February 2nd, 2026 at 10:27 pm
Sorry, but I have no clue what you are talking about. And I suspect that this is because you don’t either.
So I give up.
Comment #78 February 2nd, 2026 at 10:42 pm
Nevertheless, I will try once more!
SI definitions of the second and of the meter imply that there is a coordinate system in which the metric tensor is constant — hence the space-time is flat. (In other words, the gravitation — measured as the inverse period of a satellite orbit — is negligible.) End of the story for the usability of SI in physics.
(Likewise, the SI definition of the speed of light implies that the space-time is conformally flat. — I do not know how to express this in non-geometric terms.)
Comment #79 February 3rd, 2026 at 2:46 pm
You now appear to be making a much stronger claim than previously; namely, that the standard definitions of “meter” and “second” are somehow invalid. (You say that these definitions “imply that […] space-time is flat”, which of course we know it isn’t.) But isn’t this contradicted by the fact that people use the notions of meters and seconds all the time? Are you trying to claim that the gap between theoretical physics and experimental physics is so great that it cannot be bridged?
In any case, I don’t see why the standard definitions of meter and second should imply that spacetime is flat. Why isn’t it enough that in small neighborhoods of a point, it is close to being constant (in the sense that the metric \(g\) satisfies \(g = g(x,t) + O(\varepsilon)\) in an \(\varepsilon\)-neighborhood of \((x,t)\))?
Comment #80 February 3rd, 2026 at 5:44 pm
Let me try again. The metric tensor is a family of linear maps \(g_x\) from \(\otimes^2 T_x M\) to \(\mathbb R\), where \(M\) is the spacetime manifold and \(x\in M\). But nothing substantial changes if we replace \(g_x\) by a scaled copy \(c g_x\) where \(c > 0\) is independent of \(x\). In particular, rather than being an element of \((0,\infty)\), we can allow \(c\) to take on a formal value of the form “\(c\) seconds\(^2\)” — so now the codomain of \(g_x\) is the one-dimensional vector space \(S^{\otimes 2}\), where \(S = \{c\) seconds \( : c\in \mathbb R\}\). It follows that when you integrate \(\sqrt{-g_{x(t)}(x'(t))}\) along any timelike curve, you get an answer in units of seconds.
Now, \(T_x M\) can be decomposed as \(S\otimes (S^*\otimes T_x M)\), where \(S\) is a one-dimensional space in units of seconds and \(S^*\otimes T_x M\) is a unitless vector space. It follows that for every tensor \(t_x \in \otimes^m (T_x M)^* \otimes \otimes^n T_x M\), we have \(t_x \in S^{\otimes (n-m)} \otimes \otimes^m (S^*\otimes T_x M)^* \otimes \otimes^n (S^*\otimes T_x M)\), i.e. \(t_x\) is a unitless tensor times an expression that is in units of seconds\(^{n-m}\).
Comment #81 February 4th, 2026 at 4:33 am
Yes, the dimension taking values in (essentially) K-theory (vector bundles, but not up to isomorphism) makes perfect sense. One dimension “for the tangent bundle”, and one more “for each bundle of gauge fields”.
But, in my experience, its “predictive power” is basically zilch compared to the usefulness of dimensions in the setting of “schoolbook physics”. The corresponding “representation theory” is too limited! This is why I was hesitant to count it as an honest answer. (And/or to interpret what you said before in this way — mea culpa!)
Comment #82 February 4th, 2026 at 12:40 pm
I mean, physics doesn’t have any predictive power unless you can relate it to reality, which requires units.
Why would you say the representation theory is more limited? E.g. the isomorphisms of \((T_x M, g_x)\) are the same as the isomorphisms of \((S^*\otimes T_x M, I\otimes g_x)\)…
Comment #83 February 5th, 2026 at 1:36 am
Too shortsighted for my taste.
Since I know tons of problems which can be solved by the method of dimensions in classical mechanics. And I know none which may be solved by “just counting” to which \(TM^k \otimes {\mathcal E}^l\) the answer would allow intertwiners.
Generally speaking, all these isomorphisms would be trivial. (Since one must actually consider automorphisms of \( M \) instead.)
Comment #84 February 5th, 2026 at 2:19 pm
What does this have to do with representation theory?
Huh? If \((T_x M,g_x)\) is isomorphic to Minkowski space, then its isomorphisms are isomorphic to the Lorentz group, no?
How does that work? If \(g\) is a metric tensor, then usually \((M,g)\) will have no nontrivial automorphisms. Or do you not care whether \(g\) is preserved? In which case it seems like the group of automorphisms of \(M\) is pointlessly large…
Comment #85 February 5th, 2026 at 8:16 pm
“Dimension” is just a representation type of the group Mⁿ; here M is either the positive multiplicative group, or the full one (if you want to, say, distinguish scalars and pseudoscalars).
This is more or less what I said. (Only, to be complete, one should also keep in mind that conformal automorphisms might also lead to a theory of dimensions. But generally, there are no such beasts either — hence no delicate theory of dimension.)
#79
The SI definition has no words “close to being”.
Comment #86 February 6th, 2026 at 3:40 pm
Hang on. You wrote
The isomorphisms of \((T_x M,g_x)\) are not the same thing as the isomorphisms of \((M,g)\). The latter are trivial while the former are not. Maybe that is more or less what you meant, but it is not more or less what you said.
The SI definition is an empirical definition. It is given by measurements, which are always only approximate.
Comment #87 February 11th, 2026 at 4:44 am
A general isomorphism would not preserve kinematics — hence it’s useless as far as the predictive power of the corresponding representation theory is concerned. (GR is not a gauge theory!)
Anyway, it’s not how these definitions are written — and not how you quoted them.