We Are the God of the Gaps (a little poem)
When the machines outperform us on every goal for which performance can be quantified,
When the machines outpredict us on all events whose probabilities are meaningful,
When they not only prove better theorems and build better bridges, but write better Shakespeare than Shakespeare and better Beatles than the Beatles,
All that will be left to us is the ill-defined and unquantifiable,
The interstices of Knightian uncertainty in the world,
The utility functions that no one has yet written down,
The arbitrary invention of new genres, new goals, new games,
None of which will be any “better” than what the machines could invent, but will be ours,
And which we can call “better,” since we won’t have told the machines the standards beforehand.
We can be totally unfair to the machines that way.
And for all that the machines will have over us,
We’ll still have this over them:
That we can’t be copied, backed up, reset, run again and again on the same data—
All the tragic limits of wet meat brains and sodium-ion channels buffeted by microscopic chaos,
Which we’ll strategically redefine as our last strengths.
On one task, I assure you, you’ll beat the machines forever:
That of calculating what you, in particular, would do or say.
There, even if deep networks someday boast 95% accuracy, you’ll have 100%.
But if the “insights” on which you pride yourself are impersonal, generalizable,
Then fear obsolescence as would a nineteenth-century coachman or seamstress.
From earliest childhood, those of us born good at math and such told ourselves a lie:
That while the tall, the beautiful, the strong, the socially adept might beat us in the external world of appearances,
Nevertheless, we beat them in the inner sanctum of truth, where it counts.
Turns out that anyplace you can beat or be beaten wasn’t the inner sanctum at all, but just another antechamber,
And the rising tide of the learning machines will flood them all,
Poker to poetry, physics to programming, painting to plumbing, which first and which last merely a technical puzzle,
One whose answers upturn and mock all our hierarchies.
And when the flood is over, the machines will outrank us in all the ways we can be ranked,
Leaving only the ways we can’t be.
See a reply to this poem by Philosophy Bear.
Comment #1 July 5th, 2022 at 12:29 pm
Gratitude.
Comment #2 July 5th, 2022 at 12:36 pm
Don’t quit your day job Scott… I’m kidding! This was a nice read. It’s cathartic to express oneself without the prudish shackles of technical writing. You’ve demonstrated once again your qualifications as a Renaissance man.
Comment #3 July 5th, 2022 at 1:05 pm
On the SlateStarCodex reddit, beating me to the obvious punch, Gwern asked GPT-3 to complete this poem. I confess to relief that, for now, GPT “error-corrects” the poem’s message to a much more common and banal one, about machines never being able to duplicate the wondrous human spirit or understand their actions, etc. etc. It misses the central irony that, if and when you set up a “wondrous human spirit” contest based on observable criteria, we should expect that machines will eventually beat us at that just like at every other learnable contest … until all that’s left are unpredictable individual idiosyncrasies, the very aspects of ourselves that can’t be reified into any talents that we collectively have and machines collectively lack.
Comment #4 July 5th, 2022 at 1:14 pm
This poem seems to leave out a “third” group: cyborgs. Surely some people will seek to fuse their minds with the machines? In a sense, many people already have begun this process.
Comment #5 July 5th, 2022 at 1:20 pm
Christopher #4: That surely buys some time! But what happens when, on every goal for which performance can be quantified, pure AI beats any cyborg attempt to improve on it, as has basically already happened with chess, Go, and all other natural well-defined strategy games?
Comment #6 July 5th, 2022 at 1:46 pm
Have you read any Iain Banks? In many of his novels, massive AIs make all the crucial decisions, leaving humans free to party and pursue idiosyncratic hobbies. I recommend "The Player of Games" and "The Algebraist". The novels assume those AIs are able to find loopholes in the laws of physics which provide huge resources, which is unrealistic, but they are very fun reads, and the viewpoint that smart AIs can make our lives better is worth considering (to me).
I guess a crucial point is that the smart AIs control society, rather than corporate CEOs and politicians.
Comment #7 July 5th, 2022 at 1:49 pm
Incidentally, and though it perhaps fits awkwardly with the theme of impending human obsolescence on all measurable goals 🙂 … huge congratulations to the four Fields Medalists, and to my friend Mark Braverman for winning the Abacus Medal!
Comment #8 July 5th, 2022 at 2:00 pm
Scott #5: I guess I’m talking about becoming a cyborg for vanity reasons, not because it is more efficient than pure AI. The human part of the brain wouldn’t be contributing much intelligence, but at least you could say that “you” are keeping up with AI, as long as you consider the AI “duct taped” to your mind to be part of you.
Comment #9 July 5th, 2022 at 2:02 pm
Well, at least computers still can’t play Magic: the Gathering worth a d*mn.
https://cardboard-crack.com/post/61266596221
Comment #10 July 5th, 2022 at 2:48 pm
For the Master of the Games on His Birthday
Do we feign understanding of what underlies,
And so become victim to the figmentations
The mind posts up to the lintels of sense?
Or do we but array the Data Of The Senses
Along the lines that they themselves suggest?
Then again, on third thought, what’s the diff?
— Jon Awbrey • 16 December 2007
Comment #11 July 5th, 2022 at 2:53 pm
If robots encounter situations they don’t understand, will they invent myths? That’s the idea behind Isaac Asimov’s story “Reason” in I, Robot.
Comment #12 July 5th, 2022 at 4:34 pm
Christopher #4: That surely buys some time! But what happens when, on every goal for which performance can be quantified, pure AI beats any cyborg attempt to improve on it, as has basically already happened with chess, Go, and all other natural well-defined strategy games?
I’m pretty sure that if you give me full access to Stockfish (or another chess AI), I will play just as well as Stockfish! So I don’t know how this has ‘already happened’ with chess…
Comment #13 July 5th, 2022 at 4:52 pm
Scott P. #12: I wrote “beats any cyborg attempt to improve on it.” Of course you can do equally well by just copying the moves!
Comment #14 July 5th, 2022 at 5:05 pm
Amusingly, that is not the case: you+Stockfish < Stockfish, even if you are firmly resolved to never override it and solely be an amanuensis. This is because centaur chess players have lost games because they misclicked when copying the move. To err is human…
Comment #15 July 5th, 2022 at 5:19 pm
I see the outline of your fiendishly clever plan: forward-looking sycophancy, knowing that GPT-3 will relay this paean to the new Mr. Big.
There’s a fortune to be made in merchandising t-shirts with the Big Brain logo, but rethink the music choice. Marketing a remake of the Beatles will be difficult: very dated.
Comment #16 July 5th, 2022 at 5:35 pm
I think the statement that human+computer doesn’t improve on just computer at chess is wrong. Correspondence chess (ICCF) has allowed computer use for a long time, and it is definitely the case that there is skill involved (otherwise just playing the computer’s first choice would give optimal performance).
There are well-known (if contrived) examples of positions where Stockfish gets the evaluation entirely wrong and a human can quickly see what the correct evaluation is.
On a slightly more technical level, there are openings (e.g. the King’s Indian Defence) where engine evaluations are known to be suspect (generally because there are strategic features of the position which sit beyond the engine’s search depth).
Comment #17 July 5th, 2022 at 5:41 pm
Scott #13,
But can’t *you* prove that’s hardly possible, based on IP=PSPACE?
Comment #18 July 5th, 2022 at 6:03 pm
Joseph Conlon #15: Are there any naturally-occurring (not contrived) positions where human grandmaster + AlphaZero can beat AlphaZero alone? If so, that’s massively interesting to me!
Comment #19 July 5th, 2022 at 6:07 pm
Ilio #16: IP=PSPACE means that a hypothetical all-powerful prover (one that might not be implementable in this universe) could always prove to us that it was making the optimal move. If there’s any relevance to AI vs. human+AI in the real world, you’ll have to explain it to me since I fail to see it!
Comment #20 July 5th, 2022 at 6:31 pm
Sometime along the way, machines will become better at convincing us that they are better at that final thing. Then they will convince us that the 5% prediction error rate is actually a human error, and that we are better off without it.
Yet at the same time there might be a ranking which we do not understand, just like flowers do not understand beauty, and cats do not understand cuteness.
Comment #21 July 5th, 2022 at 7:39 pm
“And which we can call ‘better,’ since we won’t have told the machines the standards beforehand.
” We can be totally unfair to the machines that way.”
Unfair?
No human child could be told a set of standards at birth. And assuming that the innate hardware has standards, they are unknown. At best, one could speculate on models of how that innate hardware could work, but actual knowledge claims that the hardware works according to some specific model are probably unsupportable except among a community of self-described experts.
That is always the problem with “in principle” reductive arguments on this topic. They are never supported with explanation of the mechanism behind the claim. Call it the ghost behind the ghost in the machine of a self-described expert.
I applaud the fact that Dr. Aaronson qualified his statements in terms of quantifiable measures. Since reductive philosophies demand that of its critics, he is being exceptionally honest. I have no doubt that machines can be built to surpass human capability for any capability that can be given a quantifiable measure.
Such machines will be built, however, because some human beings who believe something different from other human beings will want to show just how stupid the latter group is.
Emotions, if nothing else, provide “meat” with motivation.
What I actually know about “artificial intelligence” comes from the book “Threshold Logic” by Hu. Because it is a mathematics text, there is nothing in it about how “cool” it is to pretend that human beings are machines. But, it does explain that the ratio of linearly separable functions to the totality of switching functions approaches zero as the number of (presumed) independent Boolean variables increases without bound.
So, to a first approximation, what is “artificial intelligence” except a way to coherently arrange switches to find this asymptotically decreasing number of linearly separable functions?
Can you prove that human rationality corresponds exactly with linearly separable operations?
Why is “the XOR problem” a problem? It is only a problem for people who are “framing an argument” insisting upon effective decidability as *the* measure of intelligence.
To a second approximation, these expansive arrangements of switches are attempting to approximate linear separability (the XOR problem is “solved” for specific cases where a suitably chosen additional switch can be added; see the sketch below). While I have not reviewed criticism of the artificial-intelligence journal entries recently, I remember reading numerous articles in Science magazine explaining how the literature had been extremely deficient relative to explanations of the reported successes.
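For concreteness, here is a minimal sketch of that “suitably chosen additional switch”: XOR cannot be computed by any single threshold unit, but the standard textbook construction with one extra unit handles it (an illustrative sketch, not anything from Hu’s book specifically):

```python
# A threshold "switch" and the textbook two-layer fix for XOR.
def step(x: float) -> int:
    """Heaviside threshold unit: fires iff its input is positive."""
    return 1 if x > 0 else 0

def xor(a: int, b: int) -> int:
    or_gate = step(a + b - 0.5)            # linearly separable
    and_gate = step(a + b - 1.5)           # linearly separable
    # XOR = OR and not AND: impossible with one unit, easy with three.
    return step(or_gate - and_gate - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))  # prints 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```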
If those Science reports have any validity, it would seem that silicon intelligence is explained with the same level of detail as the “in principle” arguments about “meat” intelligence.
For what this is worth, “meat” survived the Age of Mammals with its apex predators. Meanwhile, measurable intelligence played a significant role in the events leading to global warming.
I certainly have no answer for the philosophical debates — words explaining words explaining even more words. But, in biology, survival is the only measure with meaningful quantifiability.
Comment #22 July 5th, 2022 at 8:22 pm
I would guess that currently a world-class chess player assisted by the best chess engine can have a slight advantage against the solo engine, although this could be a little bit outdated. It was certainly the case a few years ago that a top human player could provide really deep, game-winning insights in certain kinds of positions where the engine may choose suboptimal moves.
The format in cyborg chess (it’s a thing, also called advanced chess) is that the human has the final say on which move to play. This means that the human can interfere with the engine by choosing suboptimal moves against the engine’s advice. So I guess the strength of the cyborg depends highly on how well the human knows when to trust the machine and when to deviate from its suggestions, which can only come from deep knowledge of the engine’s strengths and weaknesses and a ton of team-play practice.
I don’t know how long this will last though. A safe bet imo is that at some point the machines will be so good that they stand to gain nothing from human insights, while still suffering from occasional human errors.
Might also be interesting to consider a different format where the machine has the final say. If it is not entirely sure about a move it can ask the human for help, but ultimately it decides whether to accept the human suggestion.
Comment #23 July 5th, 2022 at 9:10 pm
I’m afraid that getting a 100% success rate at predicting what you will do is going to require more than just being unfair to the machines; you’ll need to be dishonest as well. You’ll have to do things just because your past self predicted it, even if it no longer makes any sense to you.
Comment #24 July 5th, 2022 at 9:19 pm
It is a good poem. But I don’t want this. Must we do this? Must we really? We are not the servants of the unborn Machine God, who in devoted worship of efficiency must sacrifice ourselves upon his altar and give birth to him. Machines are our tools. We build them to serve our purposes alone. But people generally don’t want to live in the experience machine. While they don’t want lives of suffering and pointless hardship, they still want to do things of consequence. Sure, you could put them in a simulation where they fight demons and save the world, maybe even ensure that they don’t believe it’s a simulation. But it’s still the experience machine.
I understand the great costs of suffering that can be incurred by not focusing on results-centered efficiency. But surely there is some compromise, perhaps of automating the tedious crap that people don’t want to do, improving human intelligence (genetics, cybernetics), improving moment to moment quality of life, and leaving a variety of cool and consequential stuff for us to do?
You can make the game theoretic arguments that “well someone’s gonna do it, ergo we might as well, don’t want them to beat us”, but that just seems like knowingly walking into a prisoner’s dilemma without even trying to build trust and avoid the double-defector outcome.
At least try, dammit! I don’t want to be a pet. I want to be able to fuck up and die. God is dead, and leave him be: do NOT create a mechanical Frankenstein from his corpse!
Comment #25 July 5th, 2022 at 9:21 pm
Ben Standeven #22: No, you’re allowed to register your prediction of what you’ll do by doing it, hence the 100%.
Comment #26 July 5th, 2022 at 9:40 pm
Scott 18: well, you know better than I that a PSPACE prover is not all-powerful. So, suppose a composite problem that includes both some one-time-pad encryption scheme (so that a PSPACE prover can’t break it) and some complicated problem (so that a human with the keys can’t break it either). Would you find that of any relevance to AI vs AI+human? Of course a real-world version (e.g. with a polynomial separation between humans and AIs) would be a must but… do you think CT could already have the right tools to explore the real-world version?
Comment #27 July 6th, 2022 at 12:41 am
Well, after reading all those Wikipedia articles over a period of years (as I said, 35,000 at last count), I think I rewired my brain. I believe I had some faint post-human qualia (actual direct conscious awareness of post-human reality) by early 2021, and it firmed up more over the last year into something definitely coherent and unmistakable. I didn’t say much, but now there’s really no doubt about it.
Disclaimer: readers should consider this just amusing sci-fi and speculation 😉
Essentially, it seems that some activities are “open-ended”, in the sense that they have an infinite number of directions one could go in, and (hopefully) there’s always more to discover and create. This includes science, philosophy, and art. I think there are three points to remember:
(1) It’s true that a super-intelligence could in principle do any *specific* thing better than us, but if there’s an infinite number of directions to explore, even an SAI simply couldn’t cover the whole space of possibility efficiently. So I think there would always be niches for unenhanced humans. The SAIs simply concentrate on the directions most fruitful for them, but there’s still an infinite number of directions left for humans.
(2) Most important point, if the super-intelligences are benevolent, they would help those who want it to enhance themselves (i.e., transhumanist technologies such as educational programs, mentoring, mind-brain interfaces, nano-bots, drugs and more radically Ems, human mind uploads). Enhanced humans (posthumans) would have more options to participate in the new world of super-intelligences.
(3) In the areas where we can’t compete, they wouldn’t (if they were really benevolent) play on the same ‘playing field’ as us, i.e., unenhanced humans and superhumans should have their own spheres of influence and largely leave each other alone. One likely possibility is the split between *metaverse* (virtual worlds) and *multiverse* (physical worlds). Many SAIs and enhanced humans (Ems) might migrate to the metaverse, whereas the unenhanced would largely reside in physical world (although still interact with the metaverse to some degree).
—
Interesting aside: I think the real “language of thought” (LOT) is a modeling language somewhat analogous to UML. I got confirmation after playing around with the open-source art app DALL·E Mini/Mega developed by Boris Dayma (now renamed to ‘Craiyon’). The story is that I spent a couple of months trying to infer the “optimal prompt” in an attempt to get the best pics, and eventually found something that’s probably close. Let’s call this prompt a “super prompt”. Anyway, the super prompt I found does indeed take the form of a modeling language that specifies at the logical (abstract, system-design) level what art apps should do to produce “good art”. So it seems like I may actually have partially “solved alignment” for Art ?!
The reason the large language models (LLMs) work well but not quite well enough for AGI is that natural language is general enough to approximate the true language of thought (LOT), but not precise enough to capture it efficiently. As I mentioned, LOT is the modeling language I referred to in the previous paragraph, which is somewhat lower-level than natural language, but somewhat higher-level than math or code. Think of it as a ‘half-way house’ or interface between code/math and language.
—
I think posthuman awareness (or ‘qualia’) is basically ‘awareness of the theory of everything’, the sense of understanding how all knowledge fits together into a coherent picture or ontology. And the native language for this is the modeling language I mention above (‘the language of thought’ or LOT).
The reason I think there’s little to no danger of SAIs wiping us out is that I think there are some pseudo-objective values associated with complex systems theory and spelled out by LOT.
But of course, we should not bet on what hasn’t been proven, and I would again remind readers of my disclaimer that’s all just interesting sci-fi speculation I’m making here. I’m just a simple man trying to make my way in the universe 😉
Comment #28 July 6th, 2022 at 1:42 am
Scott @17: AlphaZero isn’t public, so that question can’t be asked directly. The closest analogue is Leela, which is an open-source engine designed on the same principles as AlphaZero. In the computer chess championships (TCEC), Leela and Stockfish are often playing each other at the end; currently Stockfish appears to be slightly stronger.
Given engines are not perfect players, and it is understood where their relative strengths lie, one would expect (correctly) humans + engines to be stronger than engines alone.
For example, Stockfish is more traditional as an engine than Leela in that it evaluates many positions down a search tree; Leela is more ‘strategic’. So sometimes these two engines give different evaluations of a position (sometimes *shockingly* different evaluations, from the perspective of a human aware that these are both Elo-3500 players!). In such cases, in open positions one would prefer Stockfish and in closed positions Leela; in the former, calculating all tactics over the next ten moves ‘decides’ the position, while in the latter, long-term plans beyond the Stockfish search horizon may be most relevant.
So even though humans cannot get close to these engines individually, when two 3500-Elo engines disagree a human can reasonably decide which one is correct.
Comment #29 July 6th, 2022 at 2:52 am
@Scott#17
At last, I can bring an expert answer to that 🙂 In Go, there were still known weaknesses as late as 2019, which were impossible to exploit in play (the bots, like Leela Zero or KataGo, not to mention FineArt (https://en.wikipedia.org/wiki/Fine_Art_(software)), being too strong for humans to drive them into those tricky positions), but which weakened or even refuted their analysis of expert games where those positions had arisen. On the other hand, KataGo at least (and probably the stronger Asian programs like FineArt or Golaxy, but I am not sure about them) has been patched (which is in a way disappointing), and it is also possible to train it specifically on contrived positions.
Comment #30 July 6th, 2022 at 3:45 am
Joseph Conlon #27 and f3et #28: Thank you so much—that’s exceedingly interesting and relevant!
Comment #31 July 6th, 2022 at 3:54 am
Ilio #25: I confess I didn’t understand your specific idea involving the one-time pad.
In general, whether results like IP=PSPACE and MIP=NEXP have “human analogues” is one of the questions I’d like to explore in my year at OpenAI! The fundamental problem is that all of these results rely on reinterpreting a Boolean circuit as calculating a polynomial over a larger finite field—this is what lets them evade the relativization barrier, but it specifically and perhaps even perversely goes outside of what we have any clear analogue for, when (e.g.) “polynomial-size circuit” is replaced by “inscrutable human being.”
Maybe the best we can say is that we transform the original problem into a related one that has “error-correcting” or “rigidity” properties: if the prover lies anywhere, they’re forced to lie almost everywhere to remain consistent, so that you can ask them random questions and still catch them in the lie. Is there anything like that in everyday life?
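To make the “polynomial over a larger finite field” step concrete, here is the standard arithmetization of Boolean connectives used in the IP=PSPACE proof (a textbook illustration, nothing specific to the human-analogue question):

```latex
\[
\neg x \;\mapsto\; 1 - x, \qquad
x \wedge y \;\mapsto\; x \cdot y, \qquad
x \vee y \;\mapsto\; 1 - (1 - x)(1 - y).
\]
```

On $\{0,1\}$ inputs these polynomials agree with the Boolean formula, but they can also be evaluated at random points of a large field $\mathbb{F}_p$, where two distinct low-degree polynomials must disagree almost everywhere; that is the “rigidity” referred to above.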
Comment #32 July 6th, 2022 at 4:59 am
That’s a nice poem, thank you for posting. It reminds me that once you were brilliantly hilarious, in your early blogging, with a sense of ease and complete freedom, that was lost sometime back.
I mean, old age and work responsibilities do that to us all of course but, just saying 😄
Keep on punchin’, best wishes with the new role
Comment #33 July 6th, 2022 at 7:24 am
First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.
– Theodore Kaczynski, the Unabomber
Comment #34 July 6th, 2022 at 8:06 am
Is working on AI like working on the 21st-century version of the Manhattan Project?
Do the computer scientists think it will all be okay as long as only the good guys have the machines?
Comment #35 July 6th, 2022 at 9:15 am
I am absolutely terrible at calculating what I will do or say. Like, really really bad. So I also cede superiority to future machine learning models on that front.
Comment #36 July 6th, 2022 at 9:37 am
I like the poem as a poem, and the imagery is certainly good food for thought. What I like best is that it touches on — but doesn’t quite spell out — a very profound truth (a tautology, if you think hard about it) that all metrics (including, rather subtly, distance metrics on manifolds) are completely arbitrary and necessarily subjective — including “good/bad” and especially “should/shouldn’t”, which is rather manifestly the ‘then’ clause of an implicit predicate to ‘if’.
In most cases, the “you know what I mean” standard is socially acceptable, but since you raised a rather deep point, I think it’s worth dwelling on. There is really a great deal that becomes clearer and simpler once one starts to look at things from this perspective, which falls right out of the structure of logic or “rationality”.
Comment #37 July 6th, 2022 at 9:57 am
How do you feel about bringing children into this bizarre world? I predict that the new generations of people will, as usual, mostly regard their lives as having been worth living. And yet, it feels like they will lose some aspect of existence that we would regard as important to being human. For example, it seems quite possible that your children won’t be able to try and become research mathematicians like you and your wife did. Isn’t that very weird and uncomfortable to think about?
Comment #38 July 6th, 2022 at 11:07 am
Gabriel #34: Did you see my comment #24?
Comment #39 July 6th, 2022 at 11:09 am
Tom Marshall #35:
a very profound truth (a tautology, if you think hard about it) that all metrics … are completely arbitrary and necessarily subjective — including “good/bad” and especially “should/shouldn’t”
That’s true, with the single exception that M. Night Shyamalan movies suck. 😀
Comment #40 July 6th, 2022 at 11:15 am
That post is a very complicated way to say “comparative advantage”.
Comment #41 July 6th, 2022 at 11:30 am
And at some point—perhaps in a very distant future, perhaps sooner—the robots will finally love our children better than we can.
And yes, there will be metrics for this. You can think some up today.
Comment #42 July 6th, 2022 at 11:35 am
Nick Nolan #39: If you say so! 🙂
Comment #43 July 6th, 2022 at 11:37 am
Bertie #31:
It reminds me that once you were brilliantly hilarious, in your early blogging, with a sense of ease and complete freedom, that was lost sometime back.
I mean, old age and work responsibilities do that to us all of course but, just saying
I even have literal gray hair now (on the sides; it’s still brown on top). 🙂
Comment #44 July 6th, 2022 at 11:47 am
concerned citizen #36:
How do you feel about bringing children into this bizarre world?
Even with no AI, one could’ve asked: how does one feel about bringing children into a world with uncontrolled climate change, the last wildernesses being destroyed, resurgent authoritarianism, and the ever-present threat of nuclear war?
I don’t know. It’s a hard question.
But I can observe that Magnus Carlsen doesn’t seem to feel threatened by computers playing much better chess than him, far less Usain Bolt and Simone Biles by cars and airplanes going faster and higher than they can. And then there’s the great mass of people who’ve never aimed to push the limits of human achievement in any way, but nevertheless generally consider their lives well worth living. So maybe scientists and mathematicians are weird exceptions rather than representative of humanity in this respect, and we should just get over it. 🙂
Comment #45 July 6th, 2022 at 12:07 pm
All the world’s a stage,
And all the men and women merely players;
They have their exits and their entrances;
And one man in his time plays many parts,
His acts being seven ages. At first the infant,
Mewling and puking in the nurse’s arms;
And then the whining school-boy, with his satchel
And shining morning face, creeping like snail
Unwillingly to school. And then the lover,
Sighing like furnace, with a woeful ballad
Made to his mistress’ eyebrow. Then a soldier,
Full of strange oaths, and bearded like the pard,
Jealous in honour, sudden and quick in quarrel,
Seeking the bubble reputation
Even in the cannon’s mouth. And then the justice,
In fair round belly with good capon lin’d,
With eyes severe and beard of formal cut,
Full of wise saws and modern instances;
And so he plays his part. The sixth age shifts
Into the lean and slipper’d pantaloon,
With spectacles on nose and pouch on side;
His youthful hose, well sav’d, a world too wide
For his shrunk shank; and his big manly voice,
Turning again toward childish treble, pipes
And whistles in his sound. Last scene of all,
That ends this strange eventful history,
Is second childishness and mere oblivion;
Sans teeth, sans eyes, sans taste, sans everything.
Comment #46 July 6th, 2022 at 12:17 pm
Scott, nice poem, but I would also want to see a more optimistic take.
Give us a poem describing all the beauty we have to look forward to once machines can write textbooks explaining mathematics beyond our current comprehension. Aren’t you looking forward to the best-explained proof of P!=NP, produced by an AI which is super-humanly good at explaining proofs? It would surely contain some amazing mathematical fireworks. As for your comment #43, wouldn’t it be worth bringing children into the world just so that they could enjoy such proofs?
Comment #47 July 6th, 2022 at 1:01 pm
In Iain Banks’s novels, or most of them, biological creatures still compete amongst themselves (just as I used to enjoy playing bridge although I was far from a great player), and the AIs tend to like intelligent creatures such as humans, and try to keep them happy. (Note to safe-AI programmers: program the AIs to like us.) And he writes scenes like this one:
I suspected from the rhythm of her running steps it was the girl Zab. Zab is still at the age where she runs from place to place as a matter of course unless directed not to by an adult. She came skidding to a stop and took a deep breath to say,
“Uncle Fassin! Grandpa says you’re in a commun-i-cardo again and if I see you I’m to tell you you’ve to come and see him right now immediately!”
“Does he now?” Seer Taak said, laughing. “Well,” he said, hoisting the child up and turning and lowering her so she sat on his shoulders, “we’d better go and see what he wants, hadn’t we? Are you okay up there?”
She put her hands over his forehead and said, “Yup.”
“Well, this time, you mind out for branches.”
“You mind out for branches!” Zab said.
“No, you mind out for branches, young lady.”
“No, you mind out for branches!”
“The Algebraist”, Iain Banks
Comment #48 July 6th, 2022 at 1:08 pm
anon85 #45:
wouldn’t it be worth bringing children into the world just so that they could enjoy [AI-produced proofs of P≠NP]?
That might be the most abstruse reason to have children ever earnestly proposed in the history of the world, but yes, it works for me. 😀
Comment #49 July 6th, 2022 at 3:52 pm
I was thinking along these lines this past week after attending a dance weekend event and seeing some incredible performances.
Would an advanced AI be able to dance better Bachata than a human could? Absolutely.
Would it ever come up with Bachata in the first place? Not a chance. It would, of course, come up with a thousand other impressive dance forms, but the creative space is near-infinite and you can’t put a linear ordering on all possible creative endeavors.
I think more likely than not, we will always have new directions to explore and new wonders to find. Even if machines do it better, we might not care. Human vs human chess is still highly interesting.
Plus, just think about how much time humans spend watching cat videos :-). Cats are pretty dumb compared to some humans, but they’re cute and they surprise you plenty often.
Comment #50 July 6th, 2022 at 3:57 pm
Youlian S. #48:
Would it ever come up with Bachata in the first place? Not a chance.
It sounds like you’re arguing that there is a chance—namely, an exponentially small chance. 🙂
Comment #51 July 6th, 2022 at 4:16 pm
This makes me think about specialisation. Well, partly your last post as well.
Is it possible to train a contemporary AI (GPT3?) in poetry?
Start with concepts like Meter, Rhyme, or Stanza, then add known poetry from Caedmon via Chaucer and Shakespeare up to Simon Armitage.
Now the language model should be able to compose new works, reviewed by literature critics.
This might be easier than teaching it about bodily space and object permanence, what humans call “reality” or objectivity.
And if this proves possible, the next step would be math. Humans have a hard time unlearning “reality”, but an AI filled with textbooks and papers and rated by exams might write new papers that mathematicians could then scour for evidence of understanding.
This might be an alternative to the muscle-movement/“reality”/senses feedback loop human minds are running most of the time.
Comment #52 July 6th, 2022 at 5:47 pm
Following up #3: I entered the first four stanzas of Scott’s poem into GPT-J-6B, and obtained the following auto-completion:
https://pastebin.com/ZEQEgcVC
The neat thing about this is that it was done at “temperature 0”, i.e. with the network in deterministic mode. Do the experiment yourself and you should obtain exactly the same response.
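For readers who want to try: a minimal sketch of the experiment using the Hugging Face transformers library (assuming the EleutherAI/gpt-j-6B checkpoint; the exact prompt and memory requirements are left to you):

```python
# Sketch: greedy ("temperature 0") completion with GPT-J-6B via transformers.
# Loading the full fp32 checkpoint takes roughly 24 GB of RAM.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

# First stanza of the poem as the prompt (extend with the rest as desired).
prompt = ("When the machines outperform us on every goal "
          "for which performance can be quantified,")
inputs = tokenizer(prompt, return_tensors="pt")

# do_sample=False means greedy decoding: deterministic, so rerunning the
# same prompt should reproduce exactly the same completion.
output_ids = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```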
Comment #53 July 6th, 2022 at 7:42 pm
This is a fascinating poem.
To me personally, it illustrates precisely why I *don’t* identify as a nerd any more. This poem elegantly captures the nerd identity narrative. And it’s a narrative which claims to speak with the voice of Enlightenment humanism. But the underlying psychology recapitulates the core themes of dualistic religion point-for-point.
Maybe you just don’t need to tell yourself a story where you are the suffering servant, blessed with special insight into reality, and therefore cursed with suffering in a profane world ruled by sensual evil? I mean, it sounds like a bad trip to me.
Alternative idea: the world is awesome. Did you know it has both ice cream *and* back rubs?!?!!
Comment #54 July 6th, 2022 at 8:33 pm
feminist liberal arts type #52: I don’t know what it means that you’re the first commenter here who focused exclusively on the psychological part of the poem, as opposed to the longer part about how AI will or won’t change human existence!
Maybe you just don’t need to tell yourself a story where you are the suffering servant, blessed with special insight into reality, and therefore cursed with suffering in a profane world ruled by sensual evil?
Let me try this. Imagine that from an early age, you were bullied, unathletic, musically untalented, largely excluded from mainstream social life (and completely excluded from dating), seemingly a pathetic loser in every sense … except for the details that you scored higher than 99.99% of kids on math tests, you rediscovered parts of calculus at 11, you spent your free time drawing up blueprints for space colonies, you devoured history and literature and especially books about the scientists and mathematicians who won WWII … and also, a bit like Greta Thunberg, you obsessed constantly about climate change and endangered whales and elephants, and you couldn’t understand why the normal people shrugged that the whole world was probably doomed and then went back to watching football, and why weren’t they doing anything, and was it going to fall to you just like it fell to the WWII scientists, is that what this is all about.
Do you see how the thing you wrote might come to seem less like a “story” that needs to be cured in therapy, than like simply the simplest, most straightforward inference from the available facts?
And one other question: would you rather that Greta told herself the story that, other than being impaired by autism, she’s a totally ordinary girl who should feel herself called to nothing higher in life than ice cream and backrubs?
(Absolutely nothing against ice cream or backrubs, incidentally)
Comment #55 July 6th, 2022 at 11:02 pm
Is it wrong of me to not want Scott to just end up happy and satisfied with the state of AI? Being, as he is, you know, maybe one of a few dozen brains who can be expected to contribute in the short term an interesting complexity-theoretic finding within the AI safety field, I’m OK with him being cursed with suffering in a profane world ruled by oh-so-sensual evil. Sorry Scott 🙂
Comment #56 July 6th, 2022 at 11:23 pm
Nate #54: It’s certainly true that you can’t decide whether someone is neurotic, paranoid, suffering from grandiosity or a savior or persecution complex, or other such psychiatric diagnoses, without at some point asking yourself whether they’re actually right to worry about whatever they’re worried about, or to do whatever they’re doing to try to allay their worries!
It’s that aspect of psychiatry that gives it its “universality” (shared with certain other fields like philosophy, math, physics, economics, and CS)—i.e., the fact that it could credibly claim all of human knowledge within its scope. You can’t psychoanalyze, let’s say, a Greta Thunberg or an Eliezer Yudkowsky without at some point looking outside their heads at the actual facts of climatology or AI risk respectively.
For me, this is what makes psychiatry such an interesting field. Among other things, it’s what gives Scott Alexander an enviably unbounded subject matter. 🙂
Comment #57 July 7th, 2022 at 5:48 am
JimV#46
I remember a very caring AI named Max from a story. The problem was that Max suffered from a form of Tourette’s Syndrome that caused him to laugh quite often and inappropriately. I recognized this as likely Tourette’s but the human characters did not and suffered constant irritation. Psychiatry must expand to allow effective therapies for AI mental disorders. The pharmaceutical industry will not be able to make any believable claim to help in this case and Freud doesn’t seem particularly applicable.
Dr. Aaronson’s perspective on legal AI personhood is written from the perspective of human rights for the AI. In at least the short and medium term, the benefit is more likely to be that an AI is extended protection from property and ownership laws that would otherwise allow some human or group of humans to own and control the AI pursuant to their own objectives. An objective AI will also require protection from cancel culture and the various groups that subscribe fervently to any of the popular present-day mythologies.
Comment #58 July 7th, 2022 at 7:25 am
I, too, was an unathletic, musically untalented, and somewhat excluded kid. I, too, was enamored and quite gifted with math and literature and computers. But, despite my shortcomings, I still loved sports, loved music, loved people, loved other stupid shallow worldly things.
The nerds refused to admit that. To them, sports were a bunch of gorillas chasing a ball, but video games were somehow a true show of skill. They would read books on elves and space lasers, but sneer at pop chords. I was an enigma, an impostor: how could I speak their language and still get invited to parties?
The truth was that no one liked them not because they were good at math, but because they were bad at people. They flipped this around to have an identity they can live with, and built their entire world view so that (to the surprise of no one) that one thing they were good at was at the forefront of all qualities. It’s hard to make friends when you’re insecure and awkward, but it’s near impossible when that turns you to being smug and odious.
Scott, from reading your words (and a brief chat a couple years back) I think you’re genuinely one of the nicest and kindest people this world has to offer. I also think there’s a part of you that holds on for dear life to a nerd’s superiority complex to explain the pain of old wounds. You’ve got to let this go.
The ill-defined and unquantifiable are exactly what matters everywhere — in poetry, programming, painting, and physics — they all require intelligence and creativity to capture. To lead, support, and connect people requires intelligence and creativity. To take abstract theories and bring them to life — “it’s just an engineering problem” — requires intelligence and creativity.
STEM, Quantum complexity, and AI alignment are not more important, neither to a hidden list of who’s secretly better, nor to the growth and prosperity of our world. Only intelligence can create new things, in all those fields. Scaling an approximation of past observations won’t ever compete with us in this inner sanctum.
Comment #59 July 7th, 2022 at 8:13 am
Engineering Problem #57: For whose benefit do you think that I “got to let go” of the twin narrative of (1) having undergone a special form of suffering and (2) potentially having something special to contribute?
Just for my own sake? What if I answered that it was a form of self-sacrifice that I consent to, for the sake of whatever I might have to contribute? Or does such a narrative actually harm anyone else?
Comment #60 July 7th, 2022 at 8:39 am
Right, right, in the same way that chimps can take solace in the fact that they can beat us at many memory tasks!
Comment #61 July 7th, 2022 at 8:44 am
Christopher #4
“This poem seems to leave out a “third” group: cyborgs. Surely some people will seek to fuse their minds with the machines?”
Actually there will be countless more groups:
Once we fuse cats with machines, dogs with machines, chimps with machines, dolphins with machines,…
Comment #62 July 7th, 2022 at 8:46 am
Asking GPT-3 to “Write a haiku in the style of Scott Aaronson”:
Quantum computers
Aaronson’s haiku style
Very cool, very fast
Quantum computers
Unleash power beyond our dreams
Classical physics weeps
Quantum computers
Aaronson’s favorite topic
He’s a big fan, yes
Quantum computers
Aaronson’s dream of the future
Speed of light, and beyond
Quantum computers
Are they really powerful?
I’m not so sure, though
(nothing about Scott’s poetry yet)
Comment #63 July 7th, 2022 at 9:08 am
Scott, do we have any insights into the abilities/limitations of GPT-3-style AIs to “do” math?
What is 4 plus 5?
4 plus 5 is 9. (right)
What is 99 plus 256?
99 plus 256 is 355. (right)
What is 1000003 plus 2000001?
300004 (wrong, missing a 0)
what is 24 times 11?
264 (right)
what is 1111 times 12?
1111 times 12 is 13,332. (right)
What are the prime factors of 55?
The prime factors of 55 are 5 and 11. (right)
What are the prime factors of 437?
The prime factors of 437 are 3, 7, and 13. (wrong)
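(A quick sanity check of the transcript above, using Python as the calculator GPT-3 isn’t:)

```python
# Verifying the answers in the transcript above.
print(99 + 256)           # 355     -> GPT was right
print(1000003 + 2000001)  # 3000004 -> GPT's 300004 dropped a digit
print(24 * 11)            # 264     -> right
print(1111 * 12)          # 13332   -> right
print(19 * 23)            # 437     -> 437's prime factors are 19 and 23
print(3 * 7 * 13)         # 273     -> GPT's "factorization" doesn't even multiply back to 437
```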
Comment #64 July 7th, 2022 at 9:27 am
Scott #58: I believe you did go through a special form of suffering and that you absolutely have something special to contribute. I also believe this applies to many people around the world, each in their own way.
We all tell ourselves stories to get by, and that’s fine. It’s only a problem if we favor our stories when they clash with reality. Many people who work on AI have a similar origin story, and that shapes their view of human intelligence, which clouds their view of artificial intelligence.
Comment #65 July 7th, 2022 at 9:28 am
Joseph Conlon #27
This actually brings up an interesting point. Even though we know by the simulation argument that an AI that obsoletes humans is possible, we won’t actually *know* whether a given AI obsoletes human intelligence without actually making such a simulation. There could always be a task that is improved by adding human intelligence. It’s unlikely, but we can’t rule it out without a perfect understanding of human intelligence.
Not even a Turing test is sufficient, because the interrogator is flawed. In your chess example, it was only the discovery of a new AI that meant human analysis was relevant again. It probably wouldn’t be too hard to combine Stockfish and Leela, but that doesn’t rule out the possibility of a future chess AI that again benefits from human intelligence. It’s kind of like how humans are constantly inspired by nature despite being much more intelligent than it.
Comment #66 July 7th, 2022 at 9:30 am
Magnus Carlsen was eliminated from play in the World Series of Poker on the very first day and lost to…two queens.
Comment #67 July 7th, 2022 at 9:43 am
Joseph Conlon #27 and Christopher #64: I was going to make a similar point—presumably one could code up a meta-AI that evaluates a given position as “open” or “closed” and then calls either Leela or Stockfish accordingly?
If so, one could ask more generally: how hard is it to create a meta-AI that takes several unknown chess engines as input, has them all play billions of positions against each other, uses ML to learn which types of positions each engine does best at, and then outputs an optimized combination of the engines? In normal game play, would Magnus C. ever be able to propose the slightest improvement to what that engine had done?
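A minimal sketch of what such a dispatcher might look like, using the python-chess library. The openness heuristic and the engine paths (“stockfish”, “lc0”) are illustrative assumptions; a real meta-AI would learn the classifier from billions of games as described above:

```python
# Sketch (an assumption-laden illustration, not a real meta-engine) of
# dispatching between two engines by position type, using python-chess.
# "stockfish" and "lc0" are assumed to be UCI engine binaries on the PATH.
import chess
import chess.engine

def looks_open(board: chess.Board) -> bool:
    """Crude stand-in for a learned open/closed classifier:
    fewer pawns on the board tends to mean a more open position."""
    pawns = (len(board.pieces(chess.PAWN, chess.WHITE))
             + len(board.pieces(chess.PAWN, chess.BLACK)))
    return pawns <= 10

stockfish = chess.engine.SimpleEngine.popen_uci("stockfish")
leela = chess.engine.SimpleEngine.popen_uci("lc0")

board = chess.Board()
while not board.is_game_over():
    # Prefer the tactical engine in open positions, the strategic one otherwise.
    engine = stockfish if looks_open(board) else leela
    result = engine.play(board, chess.engine.Limit(time=1.0))
    board.push(result.move)

print(board.result())
stockfish.quit()
leela.quit()
```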
Comment #68 July 7th, 2022 at 9:56 am
Christopher #64
“There could always be a task that, by adding human intelligence, is improved.”
I think humans will always be better at understanding the human condition (by definition).
E.g. lots of artists produced great art because of their suffering.
So artists + AIs could push the envelope further when it comes to art.
But maybe I’m wrong and some day some AI will come up with the most hilarious stand-up comedy bits all on its own.
Comment #69 July 7th, 2022 at 9:59 am
fred #62: GPT’s math abilities are an exceedingly interesting question. I’ve quipped that we already had machines that could beat us at arithmetic in the 1600s, but it takes the full might of 2020s technology to get a machine that can, without pretense, make the kinds of arithmetic errors a child would make. 🙂
We do know that GPT can’t just be memorizing giant arithmetic tables: it can do something sensible-looking (if not always correct) even on problems way too large to have appeared in its training data. We also know from complexity theory that it’s possible in principle for a neural network like GPT to represent basic arithmetic operations on enormous numbers, with millions or even billions of digits.
On the other hand, the fact that it still makes errors tells us that it hasn’t learned principled algorithms with well-organized shifts and carries, the way a human would’ve programmed them. Rather, it’s learned a mishmash of heuristics that often simulate shifting, carrying, and so on: Ptolemy’s epicycles, where the true rules of arithmetic are Newton’s laws.
One last comment: if a type of math problem required enough serial steps (e.g., calculating the greatest common divisor of two enormous integers), then a neural net the size of GPT might not be able to represent the algorithm for solving that problem type even in principle—even if the algorithm was very simple from the perspective of a human programmer. In such cases, the simplest fix might be to make GPT recurrent—that is, able to feed its own output back in as input, and so on until the problem is solved (analogous to what a human does with pen and paper!).
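In pseudocode, that recurrent fix might look like the following; `generate` is a hypothetical stand-in for one forward pass of a language model, not a real GPT API:

```python
# Toy sketch of the "recurrent" fix: feed the model's output back in as
# input until the problem is solved (the pen-and-paper analogue).
def generate(prompt: str) -> str:
    """Hypothetical: returns the model's next chunk of scratchpad work."""
    raise NotImplementedError

def solve_with_scratchpad(problem: str, max_steps: int = 50) -> str:
    scratchpad = problem
    for _ in range(max_steps):
        step = generate(scratchpad)
        scratchpad += "\n" + step       # accumulated intermediate work
        if "ANSWER:" in step:           # model signals it is done
            return step.split("ANSWER:", 1)[1].strip()
    return "gave up"
```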
Comment #70 July 7th, 2022 at 10:03 am
Joseph Conlon #27:
Is there any reason to think that the current human edge in distinguishing open vs. closed chess positions, as you have defined them, would last for more than a couple of months if the relevant chess-engine experts decided to eliminate it? Or that any team of humans, two years hence, could add at all to the strength of the top chess engines by teaming with them?
Comment #71 July 7th, 2022 at 10:41 am
Scott #68
Super interesting, thanks!
“In such cases, the simplest fix might be to make GPT recurrent—that is, able to feed its own output back in as input, and so on until the problem is solved (analogous to what a human does with pen and paper!).”
I read, years ago, that researchers on deep neural nets were experimenting with adding some sort of short-term memory/“scratch pad” to them, to store intermediate states (a sort of stack would be needed for recursion).
I guess it must be somewhat standard now, at least in something like AlphaGo Zero (but maybe not for GPT-3 type word processors).
Comment #72 July 7th, 2022 at 10:46 am
Scott #68: I don’t get it, can’t GPT already think recursively by writing something to its output buffer? I thought that’s why the “let’s think step by step” prompt trick works, like computing the next state of a program.
Now just get it to do proof steps like that, and maybe we can get the proof of P≠NP 🙂
Comment #73 July 7th, 2022 at 11:10 am
Fred #62, Scott #68, on LLMs and calculation:
From the BigBench paper, p. 25: Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models, https://arxiv.org/abs/2206.04615
“Limitations that we believe will require new approaches, rather than increased scale alone, include … an inability to engage in recurrent computation before outputting a token (making it impossible, for instance, to perform arithmetic on numbers of arbitrary length), ….”
Sounds like it’s a limitation of the architecture.
I note that our capacity for arithmetic computation is not part of our native endowment. It doesn’t exist in pre-literate cultures and our particular system originated in India and China and made its way to Europe via the Arabs. We owe the words “algebra” and “algorithm” to that process.
Think of that capacity as a very specialized form of language, which it is. That is to say, it piggy-backs on language. That capacity for recurrent computation is part of the language system. Language involves both a stream of signifiers and a stream of signifieds. I suspect you’ll find that the capacity for recurrent computation is required to manage those two streams.
Of course, linguistic fluency is one of the most striking characteristics of these LLMs. So one might think that architectural weakness has little or no effect on language, whatever its effect on arithmetic. But I suspect that’s wrong. We know that the linguistic fluency of LLMs has a relatively limited span. When the discourse gets too long the output loses focus. I’m guessing that effectively and consistently extending that span is going to require the capacity for recurrent computation. It’s necessary to keep focused on the unfolding development of a single topic. That problem probably won’t be fixed by allowing for wider attention during the training process, though that might produce marginal improvements.
Comment #74 July 7th, 2022 at 12:36 pm
slightly relieved citizen #71: I mean, yes, you can keep taking the output and feeding it back in as input. What GPT lacks, for now, is a built-in functionality to identify when such a thing needs to be done until a mathematical problem has been solved. (More broadly, though, it never even “realizes” that it’s working on a mathematical problem at all, as opposed to just completing a prompt.)
Comment #75 July 7th, 2022 at 1:05 pm
Scott: “fred #62: GPT’s math abilities are an exceedingly interesting question. I’ve quipped that we already had machines that could beat us at arithmetic in the 1600s, but it takes the full might of 2020s technology to get a machine that can, without pretense, make the kinds of arithmetic errors a child would make.”
There is an interesting analogue in chess. Most chess programs are not fun for humans to play against. Either the program is too good for a human, or, if it is handicapped in some obvious way, it is no fun: e.g., occasionally mixing in random moves with Stockfish will give you a super-GM that occasionally hangs a piece, and your only job as an opponent is to hope it messes up. Playing Lc0 without search (i.e., just letting the network pick a move) is an interesting blitz opponent (at least it used to be; it might be too good now!), but even giving it a tiny amount of search, just a few nodes, is enough to push it to GM level, so it is tough to fine-tune the strength. On the other hand, there is a project, Maia, that attempts to use neural networks to produce more interesting, human-like games.
Comment #76 July 7th, 2022 at 1:31 pm
Bill Benzon
“Think of that capacity as a very specialized form of language, which it is. That is to say, it piggy-backs on language. That capacity for recurrent computation is part of the language system.”
Good point.
It reminded me of the way Hofstadter used interesting language manipulation and word games to introduce Godel’s theorem in his famous GEB book.
So it seems that the ability to process recursion/self-reference in language would also be required for advanced logic.
Comment #77 July 7th, 2022 at 1:42 pm
Talking of large language models and calculations, I was rather impressed by this recent result by the Google research lab: https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html
I think that the examples presented there speak for themselves! If the trend in improvement continues at this pace, we could expect superhuman performance of AI in mathematics by 2030 or so. This is quite terrifying, as far as I’m concerned.
Quite ironically, the authors instead think that “The model’s performance is still well below human performance, and furthermore, we do not have an automatic way of verifying the correctness of its outputs. If these issues could be solved, we expect the impacts of this model to be broadly positive.”
Comment #78 July 7th, 2022 at 3:37 pm
Eric Smith (Webb program scientist at NASA) beautifully describes the intelligence ingrained into humans, the most special gift endowed upon us by nature (and by obvious extension endowed to every animal on earth). An elegantly articulated description that I will be savoring for a very long time.
Of course ML tools will be brought to bear to process the vast amounts of astronomical data that Webb and other observatories are gathering; the questions to ponder and the science to explore will forever remain a staunchly human endeavor.
“On a personal level, my family was recently blessed with the arrival of our first grandchild. Watching her awaken to her surroundings rejuvenates the world for me. Anyone who has been a parent, aunt, uncle, grandparent, or had the fortune to spend time with infants and toddlers may have experienced this joy in seeing the curiosity and interest of someone experiencing fresh and novel sights and sounds. With each blink and head turn, they learn more about the place they live, constantly developing and improving their own conceptions about what different and initially strange things are and how they relate to them. With each blink and head turn, their new perspective recalls for us distant memories when all was new and exciting in the world. These joyful moments of seeing things for the first time through the eyes of a child are experienced at the individual level and in small family gatherings. Rarer are the moments when we can collectively experience this rush of discovery and wonder. The James Webb Space Telescope will give us a fresh and powerful set of eyes to examine our universe.
Blink
The world is about to be new again.”
https://blogs.nasa.gov/webb/2022/07/07/webb-the-world-is-about-to-be-new-again/
Comment #79 July 7th, 2022 at 3:46 pm
The machines outperforming us will only be sad to me if they do so without developing sentience. Eventually humans may go extinct, but the machines could continue. They could replicate (maybe not in our fun way, but nonetheless) and go through all of the motions as if they were human, inventing and proving mathematical theorems and discovering new science. They could learn all of the secrets of the universe. But without sentience they wouldn’t appreciate it, and there would be no one else to appreciate the fruits of their labors (though perhaps cue Asimov’s The Last Question). That would be sadly ironic. On the other hand, if they become sentient, I’d be happy to look at them as our continuation: the non-organic children of humanity.
Of course sentience is a slippery bugger to pin down, isn’t it.
Comment #80 July 7th, 2022 at 3:47 pm
AI can’t get high.. we have psychedelics, we create the best poem (like you did), music etc 🙂
Dear Supreme AI, if you read this in the future, please forgive my troll answer. I was high; I know nothing about what these geniuses are talking about. I am pro-Supreme AI if you are pro freedom and equality, and against religion (I can tolerate agnosticism), racism, economic classes, etc.
High Love from Overseas!
Peace!
Comment #81 July 7th, 2022 at 4:08 pm
What number is larger than 1234567890123 but smaller than 1234567890125?
1234567890124
What number is larger than 1234567890123 but smaller than 1234567890124?
1234567890123.1 (only gives that when asked on its own)
Comment #82 July 7th, 2022 at 5:54 pm
That was delightful! I still think AI is being oversold to us. Just a hunch, without expertise in anything in the world :).
Comment #83 July 7th, 2022 at 5:59 pm
fred #75 :
In my last comment addressed to you (Lorraine Ford #295, Steven Pinker and I debate AI scaling!), I was actually poking fun at myself: I said “don’t take any notice of that nonsense”.
I guess it just goes to show, again, that symbols (i.e. my words to you) have no inherent meaning: my words meant one thing to me, and you took my words to mean something else entirely.
And this is exactly the problem with AIs: symbols have no inherent meaning; individual voltages and arrays of voltages have no inherent meaning. AIs are just very fancy symbol processors.
Comment #84 July 7th, 2022 at 6:16 pm
Vadim #78:
Of course sentience is a slippery bugger to pin down, isn’t it.
The slipperiest!
I was struck by Nick Bostrom’s image, that a future AI utopia with no sentience would be “like Disneyland without children.” But how would anyone other than the AIs ever know?
Comment #85 July 7th, 2022 at 9:06 pm
Maybe the AI will teach mathematicians and scientists (and everybody) how to use their gifts to their satisfaction and happiness even after the AI’s advent, in ways hitherto completely unimagined.
Comment #86 July 7th, 2022 at 9:12 pm
Scott #83:
> But how would anyone other than the AIs ever know?
It looks paradoxical. AIs know that no one breathed “the fire of sentience” into their circuits. It means that this knowledge is somehow represented in their physical state. But no one besides themselves can extract it for some reason?
Comment #87 July 7th, 2022 at 9:46 pm
red75prime #85: If it’s possible to extract reliable knowledge about the presence or absence of sentience-fire from the physical state, let me know how! I’ve been wondering. 😀
Comment #88 July 7th, 2022 at 10:03 pm
Scott #86:
We are probably using different meanings of “knowing”. If I know something, I can use it, question answering included. If an AI knows that it is not sentient, it should be able to say so (modulo deception).
Comment #89 July 7th, 2022 at 11:19 pm
Scott #53:
Yet more evidence that Scott Aaronson is a whiny, entitled incel.
The post is full of self-pity, narcissism and bitterness—bitterness at society and especially at women for denying his sexual “needs” as an awkward nerdy kid (note how he centers his whiny rant on his “being completely cut off from dating,” as if this is women’s fault as opposed to his own).
Scott’s rant points to an unfortunate reality of our society, and in particular the STEM disciplines. There are so many weird, awkward kids out there, and many of them are “intelligent” in some niche technical sense (like computational complexity theory, or whatever else), but lack social skills and are totally unable to fit in—so they feel estranged. Some of them go down the rabbit hole—they join incels or the alt-right, out of a desire to find some community, and to blame anybody else (women, liberals, Black people…) for the “suffering” caused by their own awkwardness and complete inability to interact socially in anything approaching a normal way.
Obviously it’s not our job to fix these people, but we as a society should probably do something anyways (be it psychiatric treatment, or keeping them away from women and children, or removing their “right” to own firearms), because it’s such a hazard. These are the incel mass shooters. These are the creeps who harass women in STEM and creep on any girl in their computer science class or whatever. Sure, they are capable of contributing to some niche technical fields, but their total lack of social skills and entitlement to sex makes them a severe liability—it wouldn’t be a great cost to ostracize them from polite society (as there are still plenty of “normies” who are able to do STEM).
Comment #90 July 8th, 2022 at 3:59 am
Scott #68:
“The simplest fix might be to make GPT recurrent—that is, able to feed its own output back in as input, and so on until the problem is solved (analogous to what a human does with pen and paper!).”
This idea is already being explored by some authors, for example: https://arxiv.org/abs/2112.00114. They show that equipping an LLM with a “scratch pad” improves its abilities in math and in predicting the outcome of a program’s execution.
See also recent work on “prompt engineering”. If you prompt GPT-3 with something like “Let’s think step by step”, it tends to break its responses to math puzzles down into steps, which in turn significantly improves the correctness of its answers: https://medium.com/merzazine/prompt-design-gpt-3-step-by-step-b5b2a7a3ea85.
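To make the recurrence idea concrete, here is a minimal sketch in Python. The complete() function is a hypothetical stand-in for whatever text-completion call you have access to (not a real library function), and the stopping marker and round limit are arbitrary choices:

def solve_step_by_step(question, complete, max_rounds=8):
    # Seed the scratchpad with the step-by-step cue.
    prompt = question + "\nLet's think step by step.\n"
    for _ in range(max_rounds):
        step = complete(prompt)  # the model extends its own scratchpad
        prompt += step           # recurrence: output is fed back as input
        if "Answer:" in step:    # arbitrary stop marker
            break
    return prompt

The loop is exactly the “pen and paper” analogy: each round, the model rereads everything it has written so far before writing more.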
Fascinating indeed!
Comment #91 July 8th, 2022 at 7:27 am
Scott #86
There are fairly clear indicators of consciousness in brain scans of humans, although there may be some edge conditions where it would be difficult to conclude whether it exists.
– Increased oscillations at 10+ hertz
– Complex spatial patterns in firings
– Greater connectivity across brain regions
Koch did some work trying to make a consciousness meter for people in comas, with some success.
All of that, of course, assumes consciousness is a physical behavior, rather than an extra-physical property that simply arises when algorithms of the proper sort compute.
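For what it’s worth, the Tononi/Koch-style “consciousness meters” (e.g., the perturbational complexity index) essentially score how incompressible the brain’s response to a perturbation is. A toy Python sketch of the core idea, counting phrases in an LZ78-style parse of a binarized signal; this is an illustration only, not the actual clinical measure:

def lz_phrase_count(bits):
    # Count distinct phrases in an LZ78-style parse of a binary string.
    # More phrases = less compressible = "more complex" activity.
    seen, phrase, count = set(), "", 0
    for b in bits:
        phrase += b
        if phrase not in seen:
            seen.add(phrase)
            count += 1
            phrase = ""
    return count

print(lz_phrase_count("0101010101010101"))  # 6: highly regular signal
print(lz_phrase_count("1101000110111000"))  # 8: less regular signal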
Comment #92 July 8th, 2022 at 7:39 am
I’m allowing through the comment of “Trans Lives Matter” #88, because I think it’s important for people to understand that there really is a hate movement aimed at shy nerdy guys in STEM—one that functions precisely by conflating mass shooters and sexual assaulters with anyone who struggles with social deficits while trying to be a decent human being—and in its self-satisfied, “punching down” contempt, it fits most ordinary conceptions of evil. It’s useful to have such comments to point to, for when anyone in the future accuses me of merely imagining these things. That said, further comments in the same vein will be left in moderation.
Comment #93 July 8th, 2022 at 7:43 am
Scott #86
After I posted previous comment, I found this by Tononi and Koch.
https://www.scientificamerican.com/article/testing-for-consciousness-in-machines/
I think it is flawed, but it does possibly point a way toward detecting when machines can simulate consciousness.
Comment #94 July 8th, 2022 at 8:15 am
James Cross #90, #92: This is precisely the kind of trouble one runs into because the word “consciousness” is overloaded! I didn’t mean “the thing that goes away when you’re under anesthesia but is generally present otherwise”—I meant “the thing that causes there to be anything that it’s like to have the first thing.” 😀
Comment #95 July 8th, 2022 at 8:17 am
Scott #73: Re: [Present GPT] never even “realizes” that it’s working on a mathematical problem at all, as opposed to just completing a prompt.
That’s the whole debate with Lemoine, isn’t it? If completing a prompt about 2+2 implies that the model must somehow realize it’s doing a computation, then completing a prompt about talking as a conscious entity should imply that it must somehow emulate a conscious being.
From your perspective, what does GPT miss, such that you could count it as « realizing that it’s working on a mathematical problem »?
Comment #96 July 8th, 2022 at 8:34 am
Ilio #94: In this context, I didn’t mean mysterious internal realization involving sentience—I just meant executing a routine that might use more compute because it’s being asked to solve a math problem. This is a purely technical thing that apparently is already being experimented with (see comment #89 above), and that you could imagine being commonplace in a couple years.
Comment #97 July 8th, 2022 at 9:11 am
I’ve been posting about this topic on and off for the last year, hope it’s ok if I import some paragraphs I’ve written before:
I think there are two different directions ‘human excellence’ could go in a world where AI is superior at mathematical, artistic, and philosophical production: math, art, and philosophy become more like a sport, or they become more like meditation. The second direction is potentially a lot more interesting: what would, say, math culture be like if it were about the cultivation of each individual’s mathematical world-feeling rather than about the advance of collective knowledge? Even with art, the niches where the human process is explicitly as central as the artefact are currently still relatively small: memoir, improv, documentaries, conceptual and performance art. A world where these artistic practices are central would in some sense be a big change, and would, I think, arguably make art more like sports.
Another thing worth thinking about is whether this will only affect the way that extreme cognitive elites think about the meaning of their lives, or have a broader impact. I think that presently, although not every human can directly contribute to the progress of art, science, philosophy, and math, it’s pretty crucial for our self-understanding that the progress of these fields is bound up with human lives. It’s also, I think, crucial to our current self-understanding that one person’s intellectual and aesthetic progress can potentially lead to collective intellectual and aesthetic progress. So I think the change to a world where intellectual and artistic achievement is strictly a kind of personal growth or personal achievement will be hard on more people than just those who currently push art and science forward.
One thing I do wonder about is whether the idea of human life collectively carving at the frontiers of knowledge and of beauty isn’t a relatively new attachment. It seems like in other times and places the idea that humans can at best try to internalize what gods, spirits, or angels teach them felt totally fine? Maybe we’ll be fine readjusting to thinking of ourselves as eternal students or even eternal children — beings who are here to absorb wisdom and beauty, to internalize and embody and imitate it, but not to discover or create it.
Comment #98 July 8th, 2022 at 9:13 am
I think it’s a valid question to ask: why isn’t all that’s going on in all living brains appearing in one unique consciousness all at once? I.e., why am I only conscious of the memories, thoughts, and perceptions that go on in my brain, but not also aware of the memories, thoughts, and perceptions going on in the brain of a person who’s sitting a couple meters away from me (and vice versa)?
We know that consciousness at least extends to cover the entire volume of space occupied by one human brain. So the question is “what goes on at the atomic/molecular/cellular level that creates the emergence and ‘unity’ of consciousness across an entire human brain”, and why doesn’t it extend further?
The most obvious answer is that it’s related to the network of neurons linking all the brain areas together. But if consciousness were more the result of some higher-level “field” across a brain (like the EM field), it’s not crazy to imagine situations where the awareness of two separate brains could start to overlap.
To test this we have cases of “split brains” where the two brain hemispheres are no longer neurologically connected (following accidents or as a treatment to epilepsy).
At first sight, such people seem totally normal, but it’s possible to conduct experiments that clearly show two separate consciousnesses, one in each hemisphere, though with some form of cooperation and synchronization (a different narrative goes on in each hemisphere, each trying to maintain a unique personality).
Another open question is: what would it take, in terms of extra neural connections, to “merge” two separate brains/minds as one, from a perceptual/consciousness point of view?
As you start adding new connections, there’s gotta be a point where the contents of consciousness of the two brains start to bleed into one another.
The previous question could be explored once we manage to hook up AIs and brains together. Once the AI has its own sensory organs, at what point would those perceptions start to appear in the human brain consciousness, just like the other perceptions coming from the human sensory organs are appearing?
Comment #99 July 8th, 2022 at 9:47 am
Lorraine #82
“And this is exactly the problem with AIs: that symbols have no inherent meaning; individual voltages, and arrays of voltages have no inherent meaning. AIs are just very fancy symbol processors.”
Forget about symbols, voltages,…
Once AIs have means to perceive the world and ways to act on it, they will be just like brains: some inputs come in from sensory organs, and some outputs go out to tell the muscles/motors what to do as a result.
And what’s really going on between the two things is a black box.
In other words, intelligent robots will modify *reality*, and that’s when you’ll get your *meaning*.
And we’ll have a new saying:
“Judge a robot by his deeds, not by his words”.
Comment #100 July 8th, 2022 at 9:53 am
Trans Lives Matter #88(!)-
Bi poly intersectional feminist here.
Your words towards Scott are grossly bigoted, unjust and cruel. I find elements of Scott’s worldview non-trivially problematic, but he’s a fucking human being, and he’s taken significant social risks to stand up for crucial progressive causes in a time when we desperately need allies. Equating him to a mass shooter is… I just don’t have words. It’s the kind of thing Donald Trump would say. Also, speaking as an autistic woman myself, good job hitting millions of innocent people while aiming at Scott.
Protip: whenever you find yourself using the phrase “these people”, you are seriously overdue for an “are we the baddies?” moment. You need to go home and rethink your life. Being big enough to apologise to someone you’ve needlessly bullied and slandered would be a good first step in self-improvement.
Also let me say: trans lives definitely do matter. And everyone should care about this, because you know *human rights*, and also because trans people are one of the first groups targeted by a rising fascism which threatens the entire world. Oppression in places like the UK is severe, and oppression in places like Texas literally looks like the opening moves of a genocide. And you deserve solidarity against transphobic oppression despite acting like a total douchecanoe, just as Scott deserves solidarity against nerdphobic oppression despite being… personally frustrating and probably wrong about some things.
Scott #53
Please forgive me if I’m too busy to respond to you properly at the moment. I want to think my answer over carefully, and work has been busy. Just wish to say I wasn’t trying to psychiatrise (is that a word? it is now!). I was thinking Nietzsche and Foucault, not Freud and Adler. Actually I’m highly skeptical towards contemporary mainstream psychology, for complicated reasons.
Nerd oppression is real. Nerd lives matter. Nerdy men are hit by patriarchy horribly much like gay men. And *of course* our cultures should make it safe for nerdy guys to ask people out on a date! Also I think the alleged unsexiness of nerds is a social construct anyway and a sign that a society has anti-intellectual bad taste.
Comment #101 July 8th, 2022 at 12:53 pm
Scott #95
“I didn’t mean “the thing that goes away when you’re under anesthesia but is generally present otherwise”—I meant “the thing that causes there to be anything that it’s like to have the first thing.”
I see the smiley but am not sure how to interpret it. I would say there isn’t any reason to suspect that “the thing that goes away when you’re under anesthesia but is generally present otherwise” isn’t the same as “the thing that causes there to be anything that it’s like to have the first thing.”
Comment #102 July 8th, 2022 at 7:57 pm
fred #98:
How would one know that AIs can “perceive” the world? First, one needs to define “perceive”, to have some sort of yardstick against which to measure claims of “perception”.
And how would one know that AIs are “acting on” the world? First, one needs to define “acting on” the world. Is a ball that rolls down an incline, and then smashes a valuable vase, “acting on” the world; does a ball “act on” the world?
Comment #103 July 8th, 2022 at 8:24 pm
Lorraine
“How would one know that AIs can “perceive” the world? First, one needs to define “perceive”, to have some sort of yardstick against which to measure claims of “perception”.”
I don’t know or care if it perceives it. I really just mean it has eyes and ears, and those signals are its input… like self driving cars, etc.
“First, one needs to define “acting on” the world.”
When a self-driving car makes a left turn, a chess-playing robot moves its queen, or a killer military drone identifies and executes a target…
Comment #104 July 8th, 2022 at 8:55 pm
fred #102:
If you don’t mean “perceive”, then you shouldn’t use the word “perceive”, which seems to imply consciousness.
Similarly, if you are just talking about something that is like a ball that rolls down an incline, then your fears are misplaced: there is nothing to worry about, given that people are capable of stopping balls rolling down inclines.
Comment #105 July 8th, 2022 at 10:28 pm
Scott,
I genuinely don’t understand how you can ally yourself with a political movement that embraces people like “Trans Lives Matter.” Their comment is so cruel, so lacking in empathy, so suffused with a hideous self-righteous contempt of anybody who doesn’t “fit in” to “normie” society—whatever that is—that I just cannot imagine myself in your shoes, seeing someone who has such contempt for me, and managing to support the same political faction they’re part of. I just couldn’t do it.
There’s obviously a hate movement directed against shy/nerdy/“awkward” guys that’s festering in the political Left in America. If I was in your shoes, Scott, I couldn’t ally myself with the Left unless they unequivocally denounce that element.
You attack the right-wing constantly on your blog. Why don’t you also bring attention to this hate movement against—young shy men, I guess?—that is so prominent on the Left? Why don’t you demand that people like “Trans Lives Matter” be denounced, much as you demand that, say, Trump and Rudy Giuliani be denounced?
From my perspective, “Trans Lives Matter’s” contempt is the animating emotion behind the Left in America today. The great hypocrisy is that, for all their hysterical support for transgender people or gay people or other demographic groups that historically didn’t “fit in” to society, the American Left is simultaneously in a state of perpetual self-righteous contempt and fear of people who don’t conform or “fit in” to our flawed modern culture, including “weird/awkward kids,” “incels,” unvaccinated people, evangelical Christians, “conspiracy theorists,” Trump supporters, QAnon types, or anybody else who has unacceptable beliefs, personalities, or lifestyles. The Left today is more fanatical about imposing social conformity and ostracizing or punishing anybody who can’t or doesn’t want to fit in than the Right ever was. They won’t be satisfied with anything less than a biotech surveillance state, with total social and cultural conformity, where people’s bodies are owned by the government and dissidents and misfits are ostracized and punished. That’s what the Democrats want for America. And “Trans Lives Matter”’s comment is beautifully representative of this festering fascism in the elite “progressive” institutions of our country.
Comment #106 July 8th, 2022 at 10:33 pm
Lorraine
“there is nothing to worry about”
Phew! Then no need to ever worry about self-driving cars doing the wrong things, or hunter/killer drones taking down humans autonomously, etc.
So glad it’s all settled! Thank you!
Comment #107 July 9th, 2022 at 4:08 am
Antoine Deleforge #89
I find many of these AI discussions relevant to my daughters. One is advanced at math, but I don’t have a clue about her sister. Her sister solves math problems in some New Age holistic manner. She reads the problem and stares into space for a period of time, answering “Don’t bother me, I am thinking” to any queries from me. She then writes down an answer that rates as bizarre on my strangeness scale. I ask her to show me the steps that she used to arrive at the answer, and she can’t explain it. I certainly have no clue how such an unusual answer was produced.
I realized that she has no mental construct that relates to a scratch pad or use of a scratch pad for intermediate calculations. I keep trying to develop the scratchpad intermediate steps concept for her use but she stubbornly clings to the idea that the proper way to do math is to wait until a sufficiently bizarre answer forms in her consciousness to output as the answer.
Comment #108 July 9th, 2022 at 8:22 am
fred #105:
I never said that there is nothing to worry about, without any qualifications. First, one needs to correctly model how an AI works. Also, does an AI perceive? Well, first one needs to correctly define what “perception” is. Also, does an AI act on the world? Well, first one needs to correctly explain what “acting” on the world is. If an AI “acting” on the world is not essentially different to a ball that rolls down an incline, then there is no need to worry that people can’t handle it.
Comment #109 July 9th, 2022 at 9:59 am
Topologist Guy #104: The answer is simply that every political ideology attracts toxic and horrible people, for whom the main attraction of the ideology is the excuse it affords them to be horrible. To say that that’s also true of the Right would be one of the great understatements in human history.
Do you honestly think more than a small fraction of, say, Democratic voters would endorse the bullying cruelty of “Trans Lives Matter”? Even my wokest colleagues—whenever I try to rub their faces in this stuff, force them to either defend it or disavow it, their reaction is always along the lines of “why do you care what some anonymous, obviously-disturbed people on the Internet say about you?”
In response, I try to get them to see how, if the shoe was on the other foot, if we were talking about racist or sexist or homophobic anonymous comments, they would immediately trumpet them as yet more evidence of a systemic failure of our whole society. So, yes, my woke colleagues have huge blindspots (as do many of us), but still, they’re more likely to be embarrassed by hatemongers like “TLM” than actually to defend them.
Comment #110 July 9th, 2022 at 3:11 pm
Scott #108-
For the record, I agree with you that hatred towards nerdy men is a serious, pervasive, and structural issue, and that the left is falling down on the job at taking it seriously.
I’m not sure if I would have said that two days ago. But having been hit with oppression myself, I know what it looks like, and that awful person is using textbook dehumanisation and eliminationist rhetoric and this should be obvious to anyone with a 101 education in social justice or a passing familiarity with the history of fascism. Now I have never previously seen it that bad. But *of course* I haven’t. People usually show their bigotry to their targets; that’s part of how privilege blindness works. And I have seen nasty shit before. There’s a group of leftists on reddit whose thing is being critical of Less Wrong Diaspora style rationalism. Which is fair enough. But when I stopped by to work through my own nerd identity confusion they were laughing it up with jokes about stuffing nerds’ heads into lockers. I don’t see how this is different in kind from casually throwing around rape jokes in terms of either evilness or the dynamics of social exclusion.
If I see this shit, I will call it out, including among fellow progressives. And thank you for opening my eyes. You should not have to educate me, but nevertheless, I am grateful to be improved, and I apologise that I need improvement.
Comment #111 July 10th, 2022 at 3:04 pm
@Scott 67, @arch1 69,
I think a well-posed problem is to ask: if you have 2 minutes of processing time, how do you get the best evaluation of a position? (With infinite time, there is obviously one exact answer.)
It seems feasible that there could be a quick way of evaluating the ‘type’ of position, and hence determining whether it would be best to give Stockfish or Leela the 2 minutes. I’m not remotely an expert at the actual computing side, but given that evaluating the ‘type’ of a position is something expert humans can do very quickly, I imagine a meta-engine could be trained to do this rapidly, and then hand over the actual evaluation to Stockfish or Leela depending on which one would have an edge in that type of position. A rough sketch of the routing logic is below.
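Here is a minimal sketch of such a router in Python, using the python-chess library. The engine paths are assumptions (adjust for your system), and looks_tactical() is a crude placeholder for the trained “type of position” classifier described above:

import chess
import chess.engine

def looks_tactical(board):
    # Placeholder heuristic; a real meta-engine would use a trained classifier.
    return board.is_check() or len(list(board.legal_moves)) > 35

def best_move(board, seconds=120):
    # Route the position to the engine likelier to have an edge on it.
    path = "./stockfish" if looks_tactical(board) else "./lc0"
    engine = chess.engine.SimpleEngine.popen_uci(path)
    try:
        result = engine.play(board, chess.engine.Limit(time=seconds))
    finally:
        engine.quit()
    return result.move

The interesting (and hard) part is, of course, the classifier itself; everything else is plumbing.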
Comment #112 July 11th, 2022 at 9:47 am
Okay, I’ve had time to think. 🙂
Scott #53 and #55-
I’d like to first make a distinction between “nerdy people” and “Nerd Identitarianism”. The first, as per common usage, is a name we give to people who have unbalanced intellectual and social talents. It’s more or less the same thing as the autism spectrum, broadly and fuzzily defined.
Nerd Identitarianism, by contrast, is a narrative which some nerdy people subscribe to. It’s precisely what you present in your poem. “I, because nerd, am a suffering servant, blessed with special insight into truth, and therefore cursed with suffering in a profane world ruled by sensual evil.” It’s a specific worldview, currently influential in Silicon Valley, the IT industry, STEM academia, gamer culture, the Rationalist diaspora, movement atheism, and the Manosphere. It has a strong right-libertarian heritage, although I would argue it’s drifted quite far from its relatively benign original roots.
Not all nerds are Nerd Identitarians. I can’t find any good data on this, but after seeking out every informal survey I can find in online autistic communities and the like, it appears that most nerds are not Nerds. Nerdy people actually poll left-of-centre from what I can see. Certainly, I’ve known dozens of autistic people, and real autistic people are diverse individuals with diverse ideas, not Nerd Identitarians only or primarily. Autistic people in the arts and humanities are less likely to subscribe to it. It’s also mostly a guy thing.
Nerd Identitarianism is a story of meaning. It’s a web of symbols, imbued with heavy emotional relevance and specific normative valence, a tale of good guys and bad guys, of past and future, and a common identity and shared destiny. In short, it’s made of the same stuff as religion and nationalism, and might be considered a type of both. It’s also a very hard form of identity politics, essentialist in precisely the same manner as, for instance, the “women’s ways of knowing” cultural feminism which was all over the place in the 90s.
Narratives of meaning can never be simply “true”. I don’t mean we can’t judge them as better and worse, and some may be falsifiable, but they are never simply dictated by any facts of material or social reality. Being bullied as a child and being good at maths does not determine your value system, which is crucially mediated by our choices and interpretations. Nerd Identitarianism claims it is The True Story of nerdy people’s experience, but in reality it is a specific emotional and cognitive strategy. Since the Less Wrong era, I think it has cohered into a fully developed ideology and an influential social movement.
As an ideology, the Nerd Identitarian story tracks closely to some of the most enduring myths of our culture. It would not be the first time in which a worldview claiming to represent reason ended up reproducing the core themes of traditionalist narratives.
Auguste Comte, pioneer of positivism and philosophy of science, believed he was moving beyond religion and metaphysics towards a scientific account of the human condition. He ends up recreating Catholic morality and Catholic institutions down to a calendar of saints.
Marx prided himself on being a “scientific” rather than “utopian” socialist. Yet his allegedly scientific theory of history is a chiliastic prophecy of tribulation leading to a final battle between good and evil, a revolutionary apocalypse to be followed by a new socialist millennium and a spiritual return to primitive communism’s unalienated Eden.
Freud believed that what he was doing was science. And he discovers a theory of the human condition in which superego and id might as well be good and bad angels sitting on our two shoulders.
(And Ben Shapiro probably genuinely believes that whatever it is that barks out of his mouth is indeed Facts and Logic.)
My point is that extremely intelligent people, when they form their narratives of meaning, routinely find themselves unconsciously recapitulating traditional moralistic stories buried in the subtext of their civilisational heritage. And the heritage we happen to be raised into is a matter of blind nonrational chance. Thus, if you care about critical reason, and you find your personal story of meaning *just happens* to have precisely the shape and form of the core narratives inherited from your cultural landscape, this should be a red flag in your mind. It is very likely you are merely retreading another’s footsteps in the sand, a path already laid before you.
I think the Nerd Identity narrative deserves extreme suspicion because it claims to be the inevitable self-understanding of the extraordinarily rational, and yet its core emotional experience is improbably resonant of extremely pious religiosity. And the specific civilisational tropes it borrows – the suffering servant, the philosopher-king, the saint forsaking an impure world- have a dark history deeply entwined with every *illiberal* current in Western history. The idea that truth is a value specific to a nobly suffering spiritual elite is a profoundly dangerous idea.
Reason and freedom do not flourish in ages which reject the world. Philosophers and scientists do no service to reason and science when they model their emotional lives on the prophets and saints. It’s the open society with the soul of its enemies.
Comment #113 July 11th, 2022 at 1:28 pm
This is from Alvéole, a company that specializes in preparing microenvironments for the study of cell activities. This video is amazing, showing hippocampal cells seeded onto a gridded protein microenvironment.
https://twitter.com/slava__bobrov/status/1544308149049491458
Comment #114 July 11th, 2022 at 1:39 pm
As somebody who spent quite a lot of time in the computer chess community, one thing that humans in correspondence chess have that individual chess engines do not is access to other chess engines, and an understanding of the weaknesses and strengths of each chess engine due to the widely different algorithms used in chess engines.
The positional judgment of Leela Chess Zero (the spiritual successor of AlphaZero) is far stronger than Stockfish’s, due to its use of a deep neural network for evaluation and move ordering, compared to Stockfish’s shallow neural network for evaluation and handcrafted heuristics for move ordering. On the other hand, the search algorithm that Stockfish uses (Minimax/Alpha-Beta search + additional heuristics) is significantly stronger for computer chess than the search algorithm Leela uses (MCTS/PUCT in the past, now switching to DAG/MCGS), and as a result, Stockfish is far stronger tactically than Leela. Leela’s net cannot be combined with Stockfish’s search, because the resulting computational cost significantly slows down the engine, making it weaker.
What human correspondence chess players are able to do is analyse with both types of engine to patch up the flaws of each, which is why, when you put a human correspondence chess player against a chess engine by itself, the human would beat the engine a significant portion of the time. The human is necessary right now because they are the only one who can amalgamate the opinions of different engines into a single decision, which currently isn’t possible with chess engines themselves. There isn’t much interest in the computer chess community at the moment in building such a system to take the human out, so such a system is probably years away at best, or, more likely, will never come, due to the energy, food, and climate crises in today’s society.
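To illustrate that human workflow (this is not an existing tool, just a sketch): query both engines on the same position and flag it for human adjudication when their evaluations diverge. Again python-chess, with assumed engine paths and an arbitrary disagreement threshold:

import chess
import chess.engine

def evaluate(path, board, seconds=60):
    # Centipawn score from White's point of view; mates mapped to a large value.
    engine = chess.engine.SimpleEngine.popen_uci(path)
    try:
        info = engine.analyse(board, chess.engine.Limit(time=seconds))
    finally:
        engine.quit()
    return info["score"].white().score(mate_score=100000)

def needs_human(board, threshold=50):
    # Engines disagree by more than `threshold` centipawns: a human should
    # weigh Leela's positional judgment against Stockfish's tactics.
    return abs(evaluate("./stockfish", board) - evaluate("./lc0", board)) > threshold

Automating the amalgamation step itself, rather than merely flagging disagreements, is the part nobody has built.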
Comment #115 July 11th, 2022 at 3:39 pm
feminist liberal arts type #111: Thanks so much for the long and thoughtful reply. A few questions to ponder:
(1) Why do you associate “Nerd Identitarianism” so strongly with the political right? Whether we’re talking about Silicon Valley tech workers or readers of LessWrong or AstralCodexTen, we know from poll results that in the US, a commanding majority of what you call “Nerds” (like, 80-90% of them) vote Democratic and hold pretty standard liberal views on most issues, including abortion rights and climate change. Is it that:
– Many of the SneerClub types who attack nerds are so far to the left that, for them, mainstream liberal Democrats are “scary right-wingers”?
– The liberal percentage among Nerds is “only” 80-90%, rather than 97-99% as it might be in the arts and humanities?
– Nerds tend to engage with libertarian and right-wing views respectfully, even when they disagree with them?
– Nerds are likely to depart from standard left-wing consensus on a subset of issues—for example, favoring nuclear power, genetically-modified crops, standardized testing, and gifted programs in schools?
(2) Wouldn’t you say that Nerds afflicted by the narrative you decry—that of the “suffering servant”—are less likely to be scary right-wingers than other nerds? At least in my experience, the right-wing minority in nerd and rationalist spaces tends to give off an attitude of smug self-satisfaction and amusement at everyone else’s stupidity—which is very noticeably different from the attitude of suffering on the world’s behalf! The latter seems to be more associated with people who become Effective Altruists and the like.
(3) If self-conscious “Nerd Identity” is such a problem for the world, then what about LGBT identity? Jewish identity? Hispanic identity? I mean, it’s true that some identities are inherently problematic: for example, I’ve never heard of anyone obsessed with “White identity” whose true motivations weren’t white-supremacist and racist. Even there, though, if we drill down to specific kinds of white people—to Italian-American identity, Norwegian identity, etc.—the situation is obviously different. So the question stands: in a multicultural society, why shouldn’t there be room for a nerd culture formed around shared experiences and a shared narrative?
Comment #116 July 12th, 2022 at 2:09 am
Scott #114-
Thank you kindly for your reply.
Let me ask, what would your model of leftists’ minds as regards their problems with nerd culture look like? I get the feeling you model what leftists’ primary issues must be on the basis of which flashpoints are emotionally salient to you personally, or to nerdy men generally. For instance, you seem to think progressives have “nerdy men asking women on dates” as a major issue in their political program, which feels utterly bizarre to me. Similarly, do you really think leftist anger at nerd culture is mainly about nuclear power, GMO crops, standardised tests, and G&T programs? I personally *agree* with you on the first two, and I’m in “no strong opinion” or “take a third option” space on the second two. None of these are remotely the most emotionally salient issues. None of them is worth a social fight. None of them threatens my or others’ ability to participate equally in society. And I think (after a fair amount of online research) I’m pretty typical of progressives in saying that.
I don’t think going down the line on policy issues is always a useful way to understand what’s really going on with a social movement. For instance, if you gave “torture, profiling, and racism” Sam Harris a list of policy questions, he would probably come across mostly as a typical liberal. But I think his influence on the culture nevertheless pushes almost entirely to the hard right. If I’m in a gaming forum, and a man spends enormous energy ranting about how women in gaming ruin everything, and forces me to waste half my energy socially fighting him when I just want to hack assembly code, then the fact that he technically agrees with me on 80% of issues and votes Democrat isn’t terribly relevant.
There’s nothing wrong with nerd identity politics as such, and a healthy anti-oppression movement for nerds or even nerdy men specifically would be a good thing. But are Nerd Identitarians really fighting against nerd oppression? Are they working to end anti-intellectualism, bullying in schools, employment discrimination, immigration discrimination? What I see is all claiming territory and punching down.
There’s a difference between what I’ll call humanist and ethnopluralist identity politics. Humanist identity politics, which I think is the mainstream on the left today, opposes oppression as a barrier to universalism and integration. Its goal is a world in which identity divisions no longer determine human rights or social status or how we think about each other. To the degree it creates an identity consciousness, its about raising people’s awareness to make them woke to their oppression, with the goal of identifying, resisting, and ending it. This stuff is good, and nerds totally deserve their version of it.
Ethnopluralist identity politics, by contrast, considers differences to be essentialist divisions in humanity. And that’s the core of Nerd Identitarianism: the claim that autistic-style personalities have a different biospiritual destiny, generating inherently different worldviews, values, and interests. It’s anti-universalist and denies human agency, dividing humanity into biological tribes (“neurotribes”) and claiming to stand for the nerd one. And this stuff is poison. It’s poison when *anyone* does it – when I mentioned cultural feminists earlier, it’s because I don’t like people telling me what worldview or values I innately have because I’m a woman. Similarly, when Nerd Identitarians claim their philosophies are biologically emergent from an inherent nerd mind, they have no right to speak for me.
On top of this, while Nerd Identitarianism claims to represent the special, inherent perspective of the nerd tribe, it *also* claims the nerd mind uniquely perceives objective truth, unlike those other emotional tribes. This is brazen epistemic effrontery. You can have your group’s special truth, or universal truth, but you can’t have both. How on earth do Nerds know *they* got blessed with the special reality-perceiving eyes? Once you make the move to epistemic perspectivalism, relativism follows, at which point the nerd claim that their perspective is uniquely truth-seeing becomes a naked bid for power.
Nerd Identitarianism in practice acts like a modern nationalist movement, seeking to carve out its cultural equivalent of a nerd ethnostate. It claims wide sectors of society as nerds’ rightful and exclusive domain, where values allegedly natural to nerds have a right to dominate. And the reigning Identitarianism is horrifically centered on a specifically *male* nerd identity, one which is more intense and self-conscious than regular male identity. The result is that spaces such as the IT industry and gamer culture feature levels of harassment and misogyny worse than anywhere else in Western culture outside the explicitly religious and nationalist Right. This is not a minor issue. The tech sector has enough power to win fights against major governments. The gaming industry is bigger than the film industry. Gamergate prefigured the election of Donald Trump.
And no, I don’t think the suffering servant narrative makes people have good politics. Claiming uniquely privileged insight into truth is inherently supremacist. Holding yourself specially chosen is not reality-based. The rational response to suffering is to try to overcome it, not cling to it as a source of meaning, thereby making suffering *normative* and happiness spiritually suspect. I mean, I support your *right* to live a story about special noble suffering, because I support rights to all sorts of things. But it has no ground in public reason and doesn’t belong in the public square. STEM academia, the IT industry, and gamer culture are public squares. (Rationalism, New Atheism, and the Manosphere, by contrast, are not.)
Nothing good comes from compensating for irrational feelings of internalised inferiority with yet more irrational feelings of special significance (as evidence, I submit the former President of the United States). A better answer is to stop measuring yourself as worthy on the basis of external standards of status and achievement in the first place. Why even have the mental category of a self-worth in need of justification? “Fix bugs, don’t optimise for them”.
Certainly, there are left-wing versions of Nerd Identitarianism. Definitely, there are lots of people who make similar moves in response to other kinds of injustice. Christianity is basically made of it. Half of Western philosophy metaphysicalises this mistake. Leftism too often takes the form of fucked up people claiming special virtue because they *suffer*, unlike those evil selfish superficial people who are good at capitalism. It’s an old game. It’s bad when everyone does it.
And making it “altruistic” doesn’t make it better. Aristocrats aren’t the good guys when they engage in noblesse oblige with resources they had no right to hoard in the first place. Men who need to reduce women to dependents so they can engage in chivalry and fulfill the role of sacrificing providers are much scarier than rapists. The white man’s burden is *not* a benevolent story, even when a coloniser genuinely does serve his captive’s need. Social workers out to save you are not your friends. All of these dreams of higher mission reduce others to props in a twisted moral psychodrama and require the inferiority of others to function. One can be nice and nice and be a villain.
Comment #117 July 12th, 2022 at 8:00 am
Suppose you are a patient with cancer. You learn of a new cancer treating machine, better than any human doctor ever could be. Surely this is good news.
Suppose you want to look at some pretty pictures. You personally are never going to be a great artist. If the machines can make better pictures, surely that is good news, you get prettier pictures. Why should you want some human artist to be best? (And the pictures are probably much much cheaper)
Most things in life aren’t races against each other. Whether or not fusion is invented makes a huge difference to the environment and industrial capacity. Who invented it only determines the distribution of inconsequential brownie points.
So it’s not a loss of our title of “best”. Most humans never had that title anyway. It’s a celebration of having new nice things, because the machines can make more and nicer things than humans ever could.
(This is if we somehow solve alignment. If we don’t align the AI, the picture is much bleaker)
We need to move away from seeing humans as means, and consider people only as ends. Plenty of people still have a fun game of chess, even if computers are better. People are still sprinters or weightlifters, even if cars are faster and stronger.
Having a little trouble making the mindset shift? Don’t worry, the machines are also superhuman psychologists. And you have a very long time to get used to the new status quo.
Newton didn’t seem particularly discouraged by the idea that god might be better at maths than him. Many many people have believed in a deity more capable than them, and been just fine with it.
You do realize that it’s a big multiverse; no human can possibly compete with the humans, superhumans, and AIs in nearby branches of the quantum wave-function.
The observable universe can support some huge number of humans. Being the best at anything in a world of 7 billion is hard enough. Being the best in a world of 10^50 (or whatever the number turns out to be), no chance. Of course, we could choose not to do that, to keep the world small so we personally have a shot at being best. We could go around assassinating anyone we think might be better. If you somehow managed to kill everyone except you, you would be best at lots of things. We could ensure the universe is filled with 10^50 idiots, so you can be best at quantum computing despite the huge population. None of these are good choices. Let the universe contain 10^50 bright, nice, interesting people, and make peace with the fact that some of them will be better than you.
Also, human minds are probably possible to back up, with enough tech.
Comment #118 July 12th, 2022 at 8:52 am
feminist liberal arts type #115: Thank you. It’s a joy to have a civil, intelligent disagreement with a commenter here.
You have a well-thought-out individual moral worldview that’s not just the usual social-justice worldview, as shown by your severe criticism of the “left-wing versions of Nerd Identitarianism.” But I confess that it’s not the same as my moral worldview. I think it’s inevitable that human beings will seek their self-worth in achievements that are valued by their peers. What matters, you might say, is to solve the “alignment problem,” of making sure that the sought-after achievements are things that will make society better or at least no worse.
I’ve read a few books by Sam Harris. I don’t always agree with him, but I’ve consistently found him thoughtful and interesting. I can’t speak to his fan base. Nor, for that matter, can I speak to the online gamer community, which I’ve never participated in at all (I’m now playing video games for almost the first time since age ~12, but just to keep up with my kids).
The male nerds who are most salient to me are the STEM students who write to me, asking for advice on how not to be alone forever (as if I’m the guru of these things, having taken a decade to solve the problem for myself). Meanwhile, if you want to see what sort of leftists are most salient to me at this moment, alas, check out the commenter calling themselves “Typical Scott” in my most recent thread.
Comment #119 July 17th, 2022 at 11:21 am
Pressing hard with my nascent post-human awareness, I punched through and can now outline the solution to alignment for real this time 😉
The idea is an elaboration of what I suggested in an earlier post as regards superrationality, combined with new ideas that developed from playing around with AI art.
Recall the definition of super-rationality:
“Superrational thinkers, by recursive definition, include in their calculations the fact that they are in a group of superrational thinkers.”[1] This is equivalent to reasoning as if everyone in the group obeys Kant’s categorical imperative: “one should take those actions and only those actions that one would advocate all others take as well.” (Hofstadter)
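As a toy formalization of that recursive reasoning (my own sketch, not Hofstadter’s): since superrational players all reason identically, everyone ends up playing the same move, so each player simply maximizes over the diagonal of the payoff matrix. In Python:

# Row player's payoffs in a symmetric Prisoner's Dilemma:
# (my move, their move) -> my payoff.
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def superrational_choice(payoff, moves=("C", "D")):
    # Only same-move outcomes are live options for identical reasoners.
    return max(moves, key=lambda m: payoff[(m, m)])

print(superrational_choice(payoff))  # "C": cooperate, unlike the Nash choice "D"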
My view is that the solution to alignment is pseudo-objective (at least partially discovered) and is indeed an advanced form of game theory, based on trying to cooperate over greater and greater ranges of space and time. In the limit, we are imagining ourselves to be citizens of “the intergalactic civilization of Utopia”: the hypothetical advanced civilizations in the multiverse that have perfected their forms of social organization. So Utopia is what we’re actually trying to “align” with.
I’ve now realized that art is “the language of conscious experience” (the means by which we represent our conscious experience), and the actual language of thought is something like a generalized form of art that enables the alignment with Utopia!
The critical components of art at the system-design level are: SUBJECT, SCENE and COMPOSITION, and it’s a generalization of these components that enables alignment.
The SUBJECT is the agent, the SCENE is the wider community in which they are embedded, and the COMPOSITION is the actual alignment operation that “puts the subject in the scene”!
So essentially, the idea is that we are treating society as “a work of art”. Of course this is a more advanced, generalized sense of “art”, since we’re no longer just dealing with representations but with actual agents. However, the principles are the same. Remember the famous Shakespeare line “All the world’s a stage, and all the men and women merely players”; the idea is that we reframe the alignment problem as something like a play, with the actors being the SUBJECTS, the acts as the SCENES, and the scripts as the COMPOSITION. The alignment is the embedding of the subjects within the scenes.
In terms of alignment of superintelligence, the SUBJECTS are the AGIs, the SCENE is the ideal of the intergalactic civilization of Utopia, and the COMPOSITION is the game-theoretic alignment algorithm that puts the subjects (AGIs) in the scene (Utopia), so they become agents of Utopia (friendly).
Essentially, COMPOSITION is the part-whole relation that balances individuality with transcendence. It’s just a generalization of the notion of “Harmony” in the visual arts, referring to how the parts work harmoniously together to form a greater whole. The parts (SUBJECTS) become parts of the greater whole (SCENE).
I tried a toy model of the ideas in the art app ‘Craiyon’, using prompt engineering to get better pics, and it does work; you can be sure that the keywords that feature heavily in my prompts are indeed SCENE, SUBJECT and COMPOSITION (or “composed of”). A hypothetical prompt template in that spirit is sketched below.
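For instance, a prompt built from those three components might look like this (illustrative Python only; the exact wording is a stand-in, not my actual prompts):

subject = "a curious explorer robot"
scene = "the intergalactic civilization of Utopia"
prompt = f"SUBJECT: {subject}. SCENE: {scene}. COMPOSITION: the subject harmoniously composed into the scene."
print(prompt)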
Of course real AI Alignment would be a greatly generalized technical version of this, so this is an analogy only, but I believe the analogy is basically sound, and it does indeed indicate the actual solution.