Archive for the ‘Nerd Interest’ Category

Busy Beaver Updates: Now Even Busier

Tuesday, August 30th, 2022

Way back in the covid-filled summer of 2020, I wrote a survey article about the ridiculously-rapidly-growing Busy Beaver function. My survey then expanded to nearly twice its original length, with the ideas, observations, and open problems of commenters on this blog. Ever since, I’ve felt a sort of duty to blog later developments in BusyBeaverology as well. It’s like, I’ve built my dam, I’ve built my lodge, I’m here in the pond to stay!

So without further ado:

  • This May, Pavel Kropitz found a machine demonstrating that $$ BB(6) \ge 10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10^{10}}}}}}}}}}}}}} $$ (15 times)—thereby blasting through his own 2010 record, that BB(6) ≥ 10^{36,534}. Or, for those tuning in from home: Kropitz constructed a 6-state, 2-symbol, 1-tape Turing machine that runs for at least the above number of steps, when started on an initially blank tape, and then halts. The machine was analyzed and verified by Pascal Michel, the modern keeper of Busy Beaver lore. In my 2020 survey, I’d relayed an open problem posed by my then 7-year-old daughter Lily: namely, what’s the first n such that BB(n) exceeds A(n), the nth value of the Ackermann function? (One standard definition of A is sketched in code just after this list.) All that’s been proven is that this n is at least 5 and at most 18. Kropitz and Michel’s discovery doesn’t settle the question—titanic though it is, the new lower bound on BB(6) is still less than A(6) (!!)—but in light of this result, I now strongly conjecture that the crossover happens at either n=6 or n=7. Huge congratulations to Pavel and Pascal!
  • Tristan Stérin and Damien Woods wrote to tell me about a new collaborative initiative they’ve launched called BB Challenge. With the participation of other leading figures in the neither-large-nor-sprawling Busy Beaver world, Tristan and Damien are aiming, not only to pin down the value of BB(5)—proving or disproving the longstanding conjecture that BB(5)=47,176,870—but to do so in a formally verified way, with none of the old ambiguities about which Turing machines have or haven’t been satisfactorily analyzed. In my survey article, I’d passed along a claim that, of all the 5-state machines, only 25 remained to be analyzed, to understand whether or not they run forever—the “default guess” being that they all do, but that proving it for some of them might require fearsomely difficult number theory. With their more formal and cautious approach, Tristan and Damien still count 1.5 million (!) holdout machines, but they hope to cut down that number extremely rapidly. If you’re feeling yourself successfully nerd-sniped, please join the quest and help them out!
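
For concreteness, here’s a minimal sketch in Python of one standard convention for the Ackermann function: the two-argument Ackermann-Péter recursion, with the single-argument A(n) taken to be the diagonal A(n, n). Conventions differ in their details, so treat this as illustrative rather than as the exact function from the survey; the only point that matters here is that A grows faster than any primitive recursive function.

    import sys
    from functools import lru_cache

    sys.setrecursionlimit(100_000)

    @lru_cache(maxsize=None)
    def ackermann(m: int, n: int) -> int:
        """Two-argument Ackermann-Peter recursion."""
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    def A(n: int) -> int:
        """Single-argument diagonal version: A(n) = ackermann(n, n)."""
        return ackermann(n, n)

    # Under this convention A(1) = 3, A(2) = 7, A(3) = 61, while A(4) is already
    # a tower of 2s far too tall to compute -- the point is only the definition.
    print(A(1), A(2), A(3))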

Steven Pinker and I debate AI scaling!

Monday, June 27th, 2022

Before June 2022 was the month of the possible start of the Second American Civil War, it was the month of a lively debate between Scott Alexander and Gary Marcus about the scaling of large language models, such as GPT-3.  Will GPT-n be able to do all the intellectual work that humans do, in the limit of large n?  If so, should we be impressed?  Terrified?  Should we dismiss these language models as mere “stochastic parrots”?

I was privileged to be part of various email exchanges about those same questions with Steven Pinker, Ernest Davis, Gary Marcus, Douglas Hofstadter, and Scott Alexander.  It’s fair to say that, overall, Pinker, Davis, Marcus, and Hofstadter were more impressed by GPT-3’s blunders, while we Scotts were more impressed by its abilities.  (On the other hand, Hofstadter, more so than Pinker, Davis, or Marcus, said that he’s terrified about how powerful GPT-like systems will become in the future.)

Anyway, at some point Pinker produced an essay setting out his thoughts, and asked whether “either of the Scotts” wanted to share it on our blogs.  Knowing an intellectual scoop when I see one, I answered that I’d be honored to host Steve’s essay—along with my response, along with Steve’s response to that.  To my delight, Steve immediately agreed.  Enjoy!  –SA


Steven Pinker’s Initial Salvo

Will future deep learning models with more parameters and trained on more examples avoid the silly blunders which Gary Marcus and Ernie Davis entrap GPT into making, and render their criticisms obsolete?  And if they keep exposing new blunders in new models, would this just be moving the goalposts?  Either way, what’s at stake?

It depends very much on the question.  There’s the cognitive science question of whether humans think and speak the way GPT-3 and other deep-learning neural network models do.  And there’s the engineering question of whether the way to develop better, humanlike AI is to upscale deep learning models (as opposed to incorporating different mechanisms, like a knowledge database and propositional reasoning).

The questions are, to be sure, related: If a model is incapable of duplicating a human feat like language understanding, it can’t be a good theory of how the human mind works.  Conversely, if a model flubs some task that humans can ace, perhaps it’s because it’s missing some mechanism that powers the human mind.  Still, they’re not the same question: As with airplanes and other machines, an artificial system can duplicate or exceed a natural one but work in a different way.

Apropos the scientific question, I don’t see the Marcus-Davis challenges as benchmarks or long bets that they have to rest their case on.  I see them as scientific probing of an empirical hypothesis, namely whether the human language capacity works like GPT-3.  Its failures of common sense are one form of evidence that the answer is “no,” but there are others—for example, that it needs to be trained on half a trillion words, or about 10,000 years of continuous speech, whereas human children get pretty good after 3 years.  Conversely, it needs no social and perceptual context to make sense of its training set, whereas children do (hearing children of deaf parents don’t learn spoken language from radio and TV).  Another diagnostic is that baby-talk is very different from the output of a partially trained GPT.  Also, humans can generalize their language skill to express their intentions across a wide range of social and environmental contexts, whereas GPT-3 is fundamentally a text extrapolator (a task, incidentally, which humans aren’t particularly good at).  There are surely other empirical probes, limited only by scientific imagination, and it doesn’t make sense in science to set up a single benchmark for an empirical question once and for all.  As we learn more about a phenomenon, and as new theories compete to explain it, we need to develop more sensitive instruments and more clever empirical tests.  That’s what I see Marcus and Davis as doing.

Regarding the second, engineering question of whether scaling up deep-learning models will “get us to Artificial General Intelligence”: I think the question is probably ill-conceived, because I think the concept of “general intelligence” is meaningless.  (I’m not referring to the psychometric variable g, also called “general intelligence,” namely the principal component of correlated variation across IQ subtests.  This is a variable that aggregates many contributors to the brain’s efficiency such as cortical thickness and neural transmission speed, but it is not a mechanism, just as “horsepower” is a meaningful variable that doesn’t explain how cars move.)  I find most characterizations of AGI to be either circular (such as “smarter than humans in every way,” begging the question of what “smarter” means) or mystical—a kind of omniscient, omnipotent, and clairvoyant power to solve any problem.  No logician has ever outlined a normative model of what general intelligence would consist of, and even Turing swapped it out for the problem of fooling an observer, which spawned 70 years of unhelpful reminders of how easy it is to fool an observer.

If we do try to define “intelligence” in terms of mechanism rather than magic, it seems to me it would be something like “the ability to use information to attain a goal in an environment.”  (“Use information” is shorthand for performing computations that embody laws that govern the world, namely logic, cause and effect, and statistical regularities.  “Attain a goal” is shorthand for optimizing the attainment of multiple goals, since different goals trade off.)  Specifying the goal is critical to any definition of intelligence: a given strategy in basketball will be intelligent if you’re trying to win a game and stupid if you’re trying to throw it.  So is the environment: a given strategy can be smart under NBA rules and stupid under college rules.

Since a goal itself is neither intelligent nor unintelligent (Hume and all that), but must be exogenously built into a system, and since no physical system has clairvoyance for all the laws of the world it inhabits down to the last butterfly wing-flap, this implies that there are as many intelligences as there are goals and environments.  There will be no omnipotent superintelligence or wonder algorithm (or singularity or AGI or existential threat or foom), just better and better gadgets.

In the case of humans, natural selection has built in multiple goals—comfort, pleasure, reputation, curiosity, power, status, the well-being of loved ones—which may trade off, and are sometimes randomized or inverted in game-theoretic paradoxical tactics.  Not only does all this make psychology hard, but it makes human intelligence a dubious benchmark for artificial systems.  Why would anyone want to emulate human intelligence in an artificial system (any more than a mechanical engineer would want to duplicate a human body, with all its fragility)?  Why not build the best possible autonomous vehicle, or language translator, or dishwasher-emptier, or baby-sitter, or protein-folding predictor?  And who cares whether the best autonomous vehicle driver would be, out of the box, a good baby-sitter?  Only someone who thinks that intelligence is some all-powerful elixir.

Back to GPT-3, DALL-E, LaMDA, and other deep learning models: It seems to me that the question of whether or not they’re taking us closer to “Artificial General Intelligence” (or, heaven help us, “sentience”) is based not on any analysis of what AGI would consist of but on our being gobsmacked by what they can do.  But refuting our intuitions about what a massively trained, massively parameterized network is capable of (and I’ll admit that they refuted mine) should not be confused with a path toward omniscience and omnipotence.  GPT-3 is unquestionably awesome at its designed-in goal of extrapolating text.  But that is not the main goal of human language competence, namely expressing and perceiving intentions.  Indeed, the program is not even set up to input or output intentions, since that would require deep thought about how to represent intentions, which went out of style in AI as the big-data/deep-learning hammer turned every problem into a nail.  That’s why no one is using GPT-3 to answer their email or write an article or legal brief (except to show how well the program can spoof one).

So is Scott Alexander right that every scaled-up GPT-n will avoid the blunders that Marcus and Davis show in GPT-(n-1)?  Perhaps, though I doubt it, for reasons that Marcus and Davis explain well (in particular, that astronomical training sets at best compensate for their being crippled by the lack of a world model).  But even if they do, that would show neither that human language competence is a GPT (given the totality of the relevant evidence) nor that GPT-n is approaching Artificial General Intelligence (whatever that is).


Scott Aaronson’s Response

As usual, I find Steve crystal-clear and precise—so much so that we can quickly dispense with the many points of agreement.  Basically, one side says that, while GPT-3 is of course mind-bogglingly impressive, and while it refuted confident predictions that no such thing would work, in the end it’s just a text-prediction engine that will run with any absurd premise it’s given, and it fails to model the world the way humans do.  The other side says that, while GPT-3 is of course just a text-prediction engine that will run with any absurd premise it’s given, and while it fails to model the world the way humans do, in the end it’s mind-bogglingly impressive, and it refuted confident predictions that no such thing would work.

All the same, I do think it’s possible to identify a substantive disagreement between the distinguished baby-boom linguistic thinkers and the gen-X/gen-Y blogging Scott A.’s: namely, whether there’s a coherent concept of “general intelligence.”  Steve writes:

No logician has ever outlined a normative model of what general intelligence would consist of, and even Turing swapped it out for the problem of fooling an observer, which spawned 70 years of unhelpful reminders of how easy it is to fool an observer.

I freely admit that I have no principled definition of “general intelligence,” let alone of “superintelligence.”  To my mind, though, there’s a simple proof-of-principle that there’s something an AI could do that pretty much any of us would call “superintelligent.”  Namely, it could say whatever Albert Einstein would say in a given situation, while thinking a thousand times faster.  Feed the AI all the information about physics that the historical Einstein had in 1904, for example, and it would discover special relativity in a few hours, followed by general relativity a few days later.  Give the AI a year, and it would think … well, whatever thoughts Einstein would’ve thought, if he’d had a millennium in peak mental condition to think them.

If nothing else, this AI could work by simulating Einstein’s brain neuron-by-neuron—provided we believe in the computational theory of mind, as I’m assuming we do.  It’s true that we don’t know the detailed structure of Einstein’s brain in order to simulate it (we might have, had the pathologist who took it from the hospital used cold rather than warm formaldehyde).  But that’s irrelevant to the argument.  It’s also true that the AI won’t experience the same environment that Einstein would have—so, alright, imagine putting it in a very comfortable simulated study, and letting it interact with the world’s flesh-based physicists.  A-Einstein can even propose experiments for the human physicists to do—he’ll just have to wait an excruciatingly long subjective time for their answers.  But that’s OK: as an AI, he never gets old.

Next let’s throw into the mix AI Von Neumann, AI Ramanujan, AI Jane Austen, even AI Steven Pinker—all, of course, sped up 1,000x compared to their meat versions, even able to interact with thousands of sped-up copies of themselves and other scientists and artists.  Do we agree that these entities quickly become the predominant intellectual force on earth—to the point where there’s little for the original humans left to do but understand and implement the AIs’ outputs (and, of course, eat, drink, and enjoy their lives, assuming the AIs can’t or don’t want to prevent that)?  If so, then that seems to suffice to call the AIs “superintelligences.”  Yes, of course they’re still limited in their ability to manipulate the physical world.  Yes, of course they still don’t optimize arbitrary goals.  All the same, these AIs have effects on the real world consistent with the sudden appearance of beings able to run intellectual rings around humans—not exactly as we do around chimpanzees, but not exactly unlike it either.

I should clarify that, in practice, I don’t expect AGI to work by slavishly emulating humans—and not only because of the practical difficulties of scanning brains, especially deceased ones.  Like with airplanes, like with existing deep learning, I expect future AIs to take some inspiration from the natural world but also to depart from it whenever convenient.  The point is that, since there’s something that would plainly count as “superintelligence,” the question of whether it can be achieved is therefore “merely” an engineering question, not a philosophical one.

Obviously I don’t know the answer to the engineering question: no one does!  One could consistently hold that, while the thing I described would clearly count as “superintelligence,” it’s just an amusing fantasy, unlikely to be achieved for millennia if ever.  One could hold that all the progress in AI so far, including the scaling of language models, has taken us only 0% or perhaps 0.00001% of the way toward superintelligence so defined.

So let me make two comments about the engineering question.  The first is that there’s good news here, at least epistemically: unlike with the philosophical questions, we’re virtually guaranteed more clarity over time!  Indeed, we’ll know vastly more just by the end of this decade, as the large language models are further scaled and tweaked, and we find out whether they develop effective representations of the outside world and of themselves, the ability to reject absurd premises and avoid self-contradiction, or even the ability to generate original mathematical proofs and scientific hypotheses.  Of course, Gary Marcus and Scott Alexander have already placed concrete bets on the table for what sorts of things will be possible by 2030.  For all their differences in rhetoric, I was struck that their actual probabilities differed much more modestly.

So then what explains the glaring differences in rhetoric?  This brings me to my second comment: whenever there’s a new, rapidly-growing, poorly-understood phenomenon, whether it’s the Internet or AI or COVID, there are two wildly different modes of responding to it, which we might call “February 2020 mode” and “March 2020 mode.”  In February 2020 mode, one says: yes, a naïve extrapolation might lead someone to the conclusion that this new thing is going to expand exponentially and conquer the world, dramatically changing almost every other domain—but precisely because that conclusion seems absurd on its face, it’s our responsibility as serious intellectuals to articulate what’s wrong with the arguments that lead to it.  In March 2020 mode, one says: holy crap, the naïve extrapolation seems right!  Prepare!!  Why didn’t we start earlier?

Often, to be sure, February 2020 mode is the better mode, at least for outsiders—as with the Y2K bug, or the many disease outbreaks that fizzle.  My point here is simply that February 2020 mode and March 2020 mode differ by only a month.  Sometimes hearing a single argument, seeing a single example, is enough to trigger an epistemic cascade, causing all the same facts to be seen in a new light.  As a result, reasonable people might find themselves on opposite sides of the chasm even if they started just a few steps from each other.

As for me?  Well, I’m currently trying to hold the line around February 26, 2020.  Suspending my day job in the humdrum, pedestrian field of quantum computing, I’ve decided to spend a year at OpenAI, thinking about the theoretical foundations of AI safety.  But for now, only a year.


Steven Pinker’s Response to Scott

Thanks, Scott, for your thoughtful and good-natured reply, and for offering me the opportunity to respond in Shtetl-Optimized, one of my favorite blogs. Despite the areas of agreement, I still think that discussions of AI and its role in human affairs—including AI safety—will be muddled as long as the writers treat intelligence as an undefined superpower rather than a mechanism with a makeup that determines what it can and can’t do. We won’t get clarity on AI if we treat the “I” as “whatever fools us,” or “whatever amazes us,” or “whatever IQ tests measure,” or “whatever we have more of than animals do,” or “whatever Einstein has more of than we do”—and then start to worry about a superintelligence that has much, much more of whatever that is.

Take Einstein sped up a thousandfold. To begin with, current AI is not even taking us in that direction. As you note, no one is reverse-engineering his connectome, and current AI does not think the way Einstein thought, namely by visualizing physical scenarios and manipulating mathematical equations. Its current pathway would be to train a neural network with billions of physics problems and their solutions and hope that it would soak up the statistical patterns.

Of course, the reason you pointed to a sped-up Einstein was to procrastinate having to define “superintelligence.” But if intelligence is a collection of mechanisms rather than a quantity that Einstein was blessed with a lot of, it’s not clear that just speeding him up would capture what anyone would call superintelligence. After all, in many areas Einstein was no Einstein. You above all could speak of his not-so-superintelligence in quantum physics, and when it came to world affairs, in the early 1950s he offered the not exactly prescient or practicable prescription, “Only the creation of a world government can prevent the impending self-destruction of mankind.” So it’s not clear that we would call a system that could dispense such pronouncements in seconds rather than years “superintelligent.” Nor would we with other sped-up geniuses, say an AI Bertrand Russell, who would need just nanoseconds to offer his own solution for world peace: the Soviet Union would be given an ultimatum that unless it immediately submitted to world government, the US (which at the time had a nuclear monopoly) would bomb it with nuclear weapons.

My point isn’t to poke retrospective fun at brilliant men, but to reiterate that brilliance itself is not some uncanny across-the-board power that can be “scaled” by speeding it up or otherwise; it’s an engineered system that does particular things in particular ways. Only with a criterion for intelligence can we say which of these counts as intelligent.

Now, it’s true that raw speed makes new kinds of computation possible, and I feel silly writing this to you of all people, but speeding a process up by a constant factor is of limited use with problems that are exponential, as the space of possible scientific theories, relative to their complexity, must be. Speeding up a search in the space of theories a thousandfold would be a rounding error in the time it took to find a correct one. Scientific progress depends on the search exploring the infinitesimal fraction of the space in which the true theories are likely to lie, and this depends on the quality of the intelligence, not just its raw speed.
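
To put the arithmetic behind that point on the page (an illustrative calculation, with the exhaustive-search framing assumed only for concreteness): if searching a space of candidate theories of complexity n takes time proportional to 2^n, then a thousandfold speedup shifts the reachable exponent by only about 10, since 2^10 ≈ 1000:

$$ 1000 \times 2^{n} \approx 2^{\,n+10}. $$

In the same wall-clock time, the sped-up searcher covers a space only about ten binary choices larger, which is indeed a rounding error once n is in the hundreds.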

And it depends as well on a phenomenon you note, namely that scientific progress depends on empirical discovery, not deduction from a silicon armchair. The particle accelerators and space probes and wet labs and clinical trials still have to be implemented, with data accumulating at a rate set by the world. Strokes of genius can surely speed up the rate of discovery, but in the absence of omniscience about every particle, the time scale will still be capped by empirical reality. And this in turn directs the search for viable theories: which part of the space one should explore is guided by the current state of scientific knowledge, which depends on the tempo of discovery. Speeding up scientists a thousandfold would not speed up science a thousandfold.

All this is relevant to AI safety. I’m all for safety, but I worry that the dazzling intellectual capital being invested in the topic will not make us any safer if it begins with a woolly conception of intelligence as a kind of wonder stuff that you can have in different amounts. It leads to unhelpful analogies, like “exponential increase in the number of infectious people during a pandemic” ≈ “exponential increase in intelligence in AI systems.” It encourages other questionable extrapolations from the human case, such as imagining that an intelligent tool will develop an alpha-male lust for domination. Worst of all, it may encourage misconceptions of AI risk itself, particularly the standard scenario in which a hypothetical future AGI is given some preposterously generic single goal such as “cure cancer” or “make people happy” and theorists fret about the hilarious collateral damage that would ensue.

If intelligence is a mechanism rather than a superpower, the real dangers of AI come into sharper focus. An AI system designed to replace workers may cause mass unemployment; a system designed to use data to sort people may sort them in ways we find invidious; a system designed to fool people may be exploited to fool them in nefarious ways; and as many other hazards as there are AI systems. These dangers are not conjectural, and I suspect each will have to be mitigated by a different combination of policies and patches, just like other safety challenges such as falls, fires, and drownings. I’m curious whether, once intelligence is precisely characterized, any abstract theoretical foundations of AI safety will be useful in dealing with the actual AI dangers that will confront us.

An understandable failing?

Sunday, May 29th, 2022

I hereby precommit that this will be my last post, for a long time, around the twin themes of (1) the horribleness in the United States and the world, and (2) my desperate attempts to reason with various online commenters who hold me personally complicit in all this horribleness. I should really focus my creativity more on actually fixing the world’s horribleness, than on seeking out every random social-media mudslinger who blames me for it, shouldn’t I? Still, though, isn’t undue obsession with the latter a pretty ordinary human failing, a pretty understandable one?

So anyway, if you’re one of the thousands of readers who come here simply to learn more about quantum computing and computational complexity, rather than to try to provoke me into mounting a public defense of my own existence (which defense will then, ironically but inevitably, stimulate even more attacks that need to be defended against) … well, either scroll down to the very end of this post, or wait for the next post.


Thanks so much to all my readers who donated to Fund Texas Choice. As promised, I’ve personally given them a total of $4,106.28, to match the donations that came in by the deadline. I’d encourage people to continue donating anyway, while for my part I’ll probably run some more charity matching campaigns soon. These things are addictive, like pulling the lever of a slot machine, but where the rewards go to making the world an infinitesimal amount more consistent with your values.


Of course, now there’s a brand-new atrocity to shame my adopted state of Texas before the world. While the Texas government will go to extraordinary lengths to protect unborn children, the world has now witnessed 19 of its born children consigned to gruesome deaths, as the “good guys with guns” waited outside and prevented parents from entering the classrooms where their children were being shot. I have nothing original to add to the global outpourings of rage and grief. Forget about the statistical frequency of these events: I know perfectly well that the risk from car crashes and home accidents is orders-of-magnitude greater. Think about it this way: the United States is now known to the world as “the country that can’t or won’t do anything to stop its children from semi-regularly being gunned down in classrooms,” not even take measures that virtually every other comparable country on earth has successfully taken. It’s become the symbol of national decline, dysfunction, and failure. If that’s right, then the stakes here could fairly be called existential ones—not because of its direct effects on child life expectancy or GDP or any other index of collective well-being that you can define and measure, but rather, because a country that lacks the will to solve this will be judged by the world, and probably accurately, as lacking the will to solve anything else.


In return for the untold thousands of hours I’ve poured into this blog, which has never once had advertising or asked for subscriptions, my reward has been years of vilification by sneerers and trolls. Some of the haters even compare me to Elliot Rodger and other aggrieved mass shooters. And I mean: yes, it’s true that I was bullied and miserable for years. It’s true that Elliot Rodger, Salvador Ramos (the Uvalde shooter), and most other mass shooters were also bullied and miserable for years. But, Scott-haters, if we’re being intellectually honest about this, we might say that the similarities between the mass shooter story and the Scott Aaronson story end at a certain point not very long after that. We might say: it’s not just that Aaronson didn’t respond by hurting anybody—rather, it’s that his response loudly affirmed the values of the Enlightenment, meaning like, the whole package, from individual autonomy to science and reason to the rejection of sexism and racism to everything in between. Affirmed it in a manner that’s not secretly about popularity (demonstrably so, because it doesn’t get popularity), affirmed it via self-questioning methods intellectually honest enough that they’d probably still have converged on the right answer even in situations where it’s now obvious that almost everyone around you would’ve been converging on the wrong answer, like (say) Nazi Germany or the antebellum South.

I’ve been to the valley of darkness. While there, I decided that the only “revenge” against the bullies that was possible or desirable was to do something with my life, to achieve something in science that at least some bullies might envy, while also starting a loving family and giving more than most to help strangers on the Internet and whatever good cause comes to my attention and so on. And after 25 years of effort, some people might say I’ve sort of achieved the “revenge” as I’d then defined it. And they might further say: if you could get every school shooter to redefine “revenge” as “becoming another Scott Aaronson,” that would be, you know, like, a step upwards. An improvement.


And let this be the final word on the matter that I ever utter in all my days, to the thousands of SneerClubbers and Twitter randos who pursue this particular line of attack against Scott Aaronson (yes, we do mean the thousands—enough that it feels to its recipient like the entire earth, yet actually less than 0.01% of the earth).

We see what Scott did with his life, when subjected for a decade to forms of psychological pressure that are infamous for causing young males to lash out violently. What would you have done with your life?


A couple weeks ago, when the trolling attacks were arriving minute by minute, I toyed with the idea of permanently shutting down this blog. What’s the point? I asked myself. Back in 2005, the open Internet was fun; now it’s a charred battle zone. Why not restrict conversation to my academic colleagues and friends? Haven’t I done enough for a public that gives me so much grief? I was dissuaded by many messages of support from loyal readers. Thank you so much.


If anyone needs something to cheer them up, you should really watch Prehistoric Planet, narrated by an excellent, 96-year-old David Attenborough. Maybe 35 years from now, people will believe dinosaurs looked or acted somewhat differently from these portrayals, just as beliefs about dinosaurs have shifted somewhat since I was a kid. On the other hand, if you literally took a time machine to the Late Cretaceous and started filming, you couldn’t get a result that seemed more realistic, let’s say to a documentary-watching child, than these CGI dinosaurs on their CGI planet seem. So, in the sense of passing that child’s Turing Test, you might argue, the problem of bringing back the dinosaurs has now been solved.

If you … err … really want to be cheered up, you can follow up with Dinosaur Apocalypse, also narrated by Attenborough, where you can (again, as if you were there) watch the dinosaurs being drowned and burned alive in their billions when the asteroid hits. We’d still be scurrying under rocks, were it not for that lucky event that only a monster could’ve called lucky at the time.


Several people asked me to comment on the recent savage investor review of the quantum computing startup IonQ. The review amusingly mixed together every imaginable line of criticism, with every imaginable degree of reasonableness from 0% to 100%. Like, quantum computing is impossible even in theory, and (in the very next sentence) other companies are much closer to realizing quantum computing than IonQ is. See also IonQ’s response to the criticism, as well as this post by the indefatigable Gil Kalai.

Is it, err, OK if I sit this one out for now? There’s probably, like, actually an already-existing machine learning model where, if you trained it on all of my previous quantum computing posts, it would know exactly what to say about this.

An update on the campaign to defend serious math education in California

Tuesday, April 26th, 2022

Update (April 27): Boaz Barak—Harvard CS professor, longtime friend-of-the-blog, and coauthor of my previous guest post on this topic—has just written an awesome FAQ, providing his personal answers to the most common questions about what I called our “campaign to defend serious math education.” It directly addresses several issues that have already come up in the comments. Check it out!


As you might remember, last December I hosted a guest post about the “California Mathematics Framework” (CMF), which was set to cause radical changes to precollege math in California—e.g., eliminating 8th-grade algebra and making it nearly impossible to take AP Calculus. I linked to an open letter setting out my and my colleagues’ concerns about the CMF. That letter went on to receive more than 1700 signatures from STEM experts in industry and academia from around the US, including recipients of the Nobel Prize, Fields Medal, and Turing Award, as well as a lot of support from college-level instructors in California. 

Following widespread pushback, a new version of the CMF appeared in mid-March. I and others are gratified that the new version significantly softens the opposition to acceleration in high school math and to calculus as a central part of mathematics.  Nonetheless, we’re still concerned that the new version promotes a narrative about data science that’s a recipe for cutting kids off from any chance at earning a 4-year college degree in STEM fields (including, ironically, in data science itself).

To that end, some of my Californian colleagues have issued a new statement today on behalf of academic staff at 4-year colleges in California, aimed at clearing away the fog on how mathematics is related to data science. I strongly encourage my readers on the academic staff at 4-year colleges in California to sign this commonsense statement, which has already been signed by over 250 people (including, notably, at least 50 from Stanford, home of two CMF authors).

As a public service announcement, I’d also like to bring to wider awareness Section 18533 of the California Education Code, for submitting written statements to the California State Board of Education (SBE) about errors, objections, and concerns in curricular frameworks such as the CMF.  

The SBE is scheduled to vote on the CMF in mid-July, and their remaining meeting before then is on May 18-19 according to this site, so it is really at the May meeting that concerns need to be aired.  Section 18533 requires submissions to be written (yes, snail mail) and postmarked at least 10 days before the SBE meeting. So to make your voice heard by the SBE, please send your written concern by certified mail (for tracking, but not requiring signature for delivery), no later than Friday May 6, to State Board of Education, c/o Executive Secretary of the State Board of Education, 1430 N Street, Room 5111, Sacramento, CA 95814, complemented by an email submission to sbe@cde.ca.gov and mathframework@cde.ca.gov.

On form versus meaning

Sunday, April 24th, 2022

There is a fundamental difference between form and meaning. Form is the physical structure of something, while meaning is the interpretation or concept that is attached to that form. For example, the form of a chair is its physical structure – four legs, a seat, and a back. The meaning of a chair is that it is something you can sit on.

This distinction is important when considering whether or not an AI system can be trained to learn semantic meaning. AI systems are capable of learning and understanding the form of data, but they are not able to attach meaning to that data. In other words, AI systems can learn to identify patterns, but they cannot understand the concepts behind those patterns.

For example, an AI system might be able to learn that a certain type of data is typically associated with the concept of “chair.” However, the AI system would not be able to understand what a chair is or why it is used. In this way, we can see that an AI system trained on form can never learn semantic meaning.

–GPT3, when I gave it the prompt “Write an essay proving that an AI system trained on form can never learn semantic meaning” 😃

Happy 70th birthday Dad!

Saturday, February 12th, 2022

When, before covid, I used to travel the world giving quantum computing talks, every once in a while I’d meet an older person who asked whether I had any relation to a 1970s science writer by the name of Steve Aaronson. So, yeah, Steve Aaronson is my dad. He majored in English at Penn State, where he was lucky enough to study under the legendary Phil Klass, who wrote under the pen name William Tenn and who basically created the genre of science-fiction comedy, half a century before there was any such thing as Futurama. After graduating, my dad became a popular physics and cosmology writer, who interviewed greats like Steven Weinberg and John Archibald Wheeler and Arno Penzias (co-discoverer of the cosmic microwave background radiation). He published not only in science magazines but in Playboy and Penthouse, which (as he explained to my mom) paid better than the science magazines. When I was growing up, my dad had a Playboy on his office shelf, which I might take down if for example I wanted to show a friend a 2-page article, with an Aaronson byline, about the latest thinking on the preponderance of matter over antimatter in the visible universe.

Eventually, partly motivated by the need to make money to support … well, me, and then my brother, my dad left freelancing to become a corporate science writer at AT&T Bell Labs. There, my dad wrote speeches, delivered on the floor of Congress, about how breaking up AT&T’s monopoly would devastate Bell Labs, a place that stood with ancient Alexandria and Cambridge University among the human species’ most irreplaceable engines of scientific creativity. (Being a good writer, my dad didn’t put it in quite those words.) Eventually, of course, AT&T was broken up, and my dad’s dire warning about Bell Labs turned out to be 100% vindicated … although on the positive side, Americans got much cheaper long distance.

After a decade at Bell Labs, my dad was promoted to be a public relations executive at AT&T itself, where when I was a teenager, he was centrally involved in the launch of the AT&T spinoff Lucent Technologies (motto: “Bell Labs Innovations”), and then later the Lucent spinoff Avaya—developments that AT&T’s original breakup had caused as downstream effects.

In the 1970s, somewhere between his magazine stage and his Bell Labs stage, my dad also worked for Eugene Garfield, the pioneer of bibliometrics for scientific papers and founder of the Institute for Scientific Information, or ISI. (Sergey Brin and Larry Page would later cite Garfield’s work, on the statistics of the scientific-citation graph, as one of the precedents for the PageRank algorithm at the core of Google.)

My dad’s job at ISI was to supply Eugene Garfield with “raw material” for essays, which the latter would then write and publish in ISI’s journal Current Contents under the byline Eugene Garfield. Once, though, my dad supplied some “raw material” for a planned essay about “Style in Scientific Writing”—and, well, I’ll let Garfield tell the rest:

This topic of style in scientific writing was first proposed as something I should undertake myself, with some research and drafting help from Steve. I couldn’t, with a clear conscience, have put my name to the “draft” he submitted. And, though I don’t disagree with much of it, I didn’t want to modify or edit it in order to justify claiming it as my own. So here is Aaronson’s “draft,” as it was submitted for “review.” You can say I got a week’s vacation. After reading what he wrote it required little work to write this introduction.

Interested yet? You can read “Style in Scientific Writing” here. You can, if we’re being honest, tell that this piece was originally intended as “raw material”—but only because of the way it calls forth such a fierce armada of all of history’s awesomest quotations about what makes scientific writing good or bad, like Ben Franklin and William James and the whole gang, which would make it worth the read regardless. I love eating raw dough, I confess, and I love my dad’s essay. (My dad, ironically enough, likes everything he eats to be thoroughly cooked.)

When I read that essay, I hear my dad’s voice from my childhood. “Omit needless words.” There were countless revisions and pieces of advice on every single thing I wrote, but usually, “omit needless words” was the core of it. And as terrible as you all know me to be on that count, imagine how much worse it would’ve been if not for my dad! And I know that as soon as he reads this post, he’ll find needless words to omit.

But hopefully he won’t omit these:

Happy 70th birthday Pops, congrats on beating the cancer, and here’s to many more!

AlphaCode as a dog speaking mediocre English

Sunday, February 6th, 2022

Tonight, I took the time actually to read DeepMind’s AlphaCode paper, and to work through the example contest problems provided, and understand how I would’ve solved those problems, and how AlphaCode solved them.

It is absolutely astounding.

Consider, for example, the “n singers” challenge (pages 59-60). To solve this well, you first need to parse a somewhat convoluted English description, discarding the irrelevant fluff about singers, in order to figure out that you’re being asked to find a positive integer solution (if it exists) to a linear system whose matrix looks like
$$ \begin{pmatrix} 1 & 2 & 3 & 4 \\ 4 & 1 & 2 & 3 \\ 3 & 4 & 1 & 2 \\ 2 & 3 & 4 & 1 \end{pmatrix}. $$
Next you need to find a trick for solving such a system without Gaussian elimination or the like (I’ll leave that as an exercise…). Finally, you need to generate code that implements that trick, correctly handling the wraparound at the edges of the matrix, and breaking and returning “NO” for any of multiple possible reasons why a positive integer solution won’t exist. Oh, and also correctly parse the input.

Yes, I realize that AlphaCode generates a million candidate programs for each challenge, then discards the vast majority by checking that they don’t work on the example data provided, then still has to use clever tricks to choose from among the thousands of candidates remaining. I realize that it was trained on tens of thousands of contest problems and millions of solutions to those problems. I realize that it “only” solves about a third of the contest problems, making it similar to a mediocre human programmer on these problems. I realize that it works only in the artificial domain of programming contests, where a complete English problem specification and example inputs and outputs are always provided.
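
To make that pipeline concrete, here is a schematic sketch (a simplification of my own, not DeepMind’s code) of the generate-filter-cluster loop just described. The callables sample_program and run_program are hypothetical stand-ins for sampling from the trained model and for sandboxed execution; the filtering on the provided examples, the clustering of survivors by behavior on extra inputs, and the cap of ten submissions follow the paper’s high-level description, while everything else is simplified.

    from collections import defaultdict
    from typing import Callable, List, Tuple

    def select_submissions(
        sample_program: Callable[[], str],        # stand-in: sample a candidate program from the model
        run_program: Callable[[str, str], str],   # stand-in: run a program on an input, sandboxed
        example_ios: List[Tuple[str, str]],       # (input, expected output) pairs from the problem statement
        extra_inputs: List[str],                  # additional (e.g. model-generated) test inputs
        n_samples: int = 1_000_000,
        n_submissions: int = 10,
    ) -> List[str]:
        # 1. Generate a huge batch of candidates; keep only those that reproduce the examples.
        survivors = []
        for _ in range(n_samples):
            prog = sample_program()
            if all(run_program(prog, x) == y for x, y in example_ios):
                survivors.append(prog)

        # 2. Cluster the survivors by their behavior on the extra inputs:
        #    programs that agree everywhere are presumed functionally equivalent.
        clusters = defaultdict(list)
        for prog in survivors:
            signature = tuple(run_program(prog, x) for x in extra_inputs)
            clusters[signature].append(prog)

        # 3. Submit one representative from each of the largest clusters.
        ranked = sorted(clusters.values(), key=len, reverse=True)
        return [cluster[0] for cluster in ranked[:n_submissions]]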

Forget all that. Judged against where AI was 20-25 years ago, when I was a student, a dog is now holding meaningful conversations in English. And people are complaining that the dog isn’t a very eloquent orator, that it often makes grammatical errors and has to start again, that it took heroic effort to train it, and that it’s unclear how much the dog really understands.

It’s not obvious how you go from solving programming contest problems to conquering the human race or whatever, but I feel pretty confident that we’ve now entered a world where “programming” will look different.

Update: A colleague of mine points out that one million, the number of candidate programs that AlphaCode needs to generate, could be seen as roughly exponential in the number of lines of the generated programs. If so, this suggests a perspective according to which DeepMind has created almost the exact equivalent, in AI code generation, of a non-fault-tolerant quantum computer that’s nevertheless competitive on some task (as in the quantum supremacy experiments). I.e., it clearly does something highly nontrivial, but the “signal” is still decreasing exponentially with the number of instructions, necessitating an exponential number of repetitions to extract the signal and imposing a limit on the size of the programs you can scale to.
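
One way to flesh out that intuition (a toy model, not anything from the paper): suppose each generated line is independently “right” with probability p, so that a fully correct L-line program appears with probability p^L and you expect to need about p^{-L} samples before one shows up. Then

$$ p^{-L} = 2^{20} \approx 10^{6} \quad \text{for } p = \tfrac{1}{2},\ L = 20, $$

which, under these made-up numbers, already lands at the million-sample scale for programs a couple dozen lines long.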

Scott Aaronson Speculation Grant WINNERS!

Friday, February 4th, 2022

Two weeks ago, I announced on this blog that, thanks to the remarkable generosity of Jaan Tallinn, and the Speculation Grants program of the Survival and Flourishing Fund that Jaan founded, I had $200,000 to give away to charitable organizations of my choice. So, inspired by what Scott Alexander had done, I invited the readers of Shtetl-Optimized to pitch their charities, mentioning only some general areas of interest to me (e.g., advanced math education at the precollege level, climate change mitigation, pandemic preparedness, endangered species conservation, and any good causes that would enrage the people who attack me on Twitter).

I’m grateful to have gotten more than twenty well-thought-out pitches; you can read a subset of them in the comment thread. Now, having studied them all, I’ve decided—as I hadn’t at the start—to use my entire allotment to make as strong a statement as I can about a single cause: namely, subject-matter passion and excellence in precollege STEM education.

I’ll be directing funds to some shockingly cash-starved math camps, math circles, coding outreach programs, magnet schools, and enrichment programs, in Maine and Oregon and England and Ghana and Ethiopia and Jamaica. The programs I’ve chosen target a variety of ability levels, not merely the “mathematical elite.” Several explicitly focus on minority and other underserved populations. But they share a goal of raising every student they work with as high as possible, rather than pushing the students down to fit some standardized curriculum.

Language like that ought to be meaningless boilerplate, but alas, it no longer is. We live in a time when the state of California, in a misguided pursuit of “modernization” and “equity,” is poised to eliminate 8th-grade algebra, make it nearly impossible for high-school seniors to take AP Calculus, and shunt as many students as possible from serious mathematical engagement into a “data science pathway” that in practice might teach little more than how to fill in spreadsheets. (This watering-down effort now itself looks liable to be watered down—but only because of a furious pushback from parents and STEM professionals, pushback in which I’m proud that this blog played a small role.) We live in a time when elite universities are racing to eliminate the SAT—thus, for all their highminded rhetoric, effectively slamming the door on thousands of nerdy kids from poor or immigrant backgrounds who know how to think, but not how to shine in a college admissions popularity pageant. We live in a time when America’s legendary STEM magnet high schools, from Thomas Jefferson in Virginia to Bronx Science to Lowell in San Francisco, rather than being celebrated as the national treasures that they are, or better yet replicated, are bitterly attacked as “elitist” (even while competitive sports and music programs are not similarly attacked)—and are now being forcibly “demagnetized” by bureaucrats, made all but indistinguishable from other high schools, over the desperate pleas of their students, parents, and alumni.

And—alright, fine, on a global scale, arresting climate change is surely a higher-priority issue than protecting the intellectual horizons of a few teenage STEM nerds. The survival of liberal democracy is a higher-priority issue. Pandemic preparedness, poverty, malnutrition are higher-priority issues. Some of my friends strongly believe that the danger of AI becoming super-powerful and taking over the world is the highest-priority issue … and truthfully, with this week’s announcements of AlphaCode and OpenAI’s theorem prover, which achieve human-competitive performance in elite programming and math competitions respectively, I can’t confidently declare that they’re wrong.

On the other hand, when you think about the astronomical returns on every penny that was invested in setting a teenage Ramanujan or Einstein or Turing or Sofya Kovalevskaya or Norman Borlaug or Mario Molina onto their trajectories in life … and the comically tiny budgets of the world-leading programs that aim to nurture the next Ramanujans, to the point where $10,000 often seems like a windfall to those programs … well, you might come to the conclusion that the “protecting nerds” thing actually isn’t that far down the global priority list! Like, it probably cracks the top ten.

And there’s more to it than that. There’s a reason beyond parochialism, it dawned on me, why individual charities tend to specialize in wildlife conservation in Ecuador or deworming in Swaziland or some other little domain, rather than simply casting around for the highest-priority cause on earth. Expertise matters—since one wants to make, not only good judgments about which stuff to support, but good judgments that most others can’t or haven’t made. In my case, it would seem sensible to leverage the fact that I’m Scott Aaronson. I’ve spent much of my career in math/CS education and outreach—mostly, of course, at the university level, but by god did I personally experience the good and the bad in nearly every form of precollege STEM education! I’m pretty confident in my ability to distinguish the two, and for whatever I don’t know, I have close friends in the area who I trust.

There’s also a practical issue: in order for me to fund something, the recipient has to fill out a somewhat time-consuming application to SFF. If I’d added, say, another $20,000 drop into the bucket of global health or sustainability or whatever, there’s no guarantee that the intended recipients of my largesse would even notice, or care enough to go through the application process if they did. With STEM education, by contrast, holy crap! I’ve got an inbox full of Shtetl-Optimized readers explaining how their little math program is an intellectual oasis that’s changed the lives of hundreds of middle-schoolers in their region, and how $20,000 would mean the difference between their program continuing or not. That’s someone who I trust to fill out the form.

Without further ado, then, here are the first-ever Scott Aaronson Speculation Grants:

  • $57,000 for Canada/USA Mathcamp, which changed my life when I attended it as a 15-year-old in 1996, and which I returned to as a lecturer in 2008. The funds will be used for COVID testing to allow Mathcamp to resume in-person this summer, and perhaps scholarships and off-season events as well.
  • $30,000 for AddisCoder, which has had spectacular success teaching computer science to high-school students in Ethiopia, placing some of its alumni at elite universities in the US, to help them expand to a new “JamCoders” program in Jamaica. These programs were founded by UC Berkeley’s amazing Jelani Nelson, also with involvement from friend and Shtetl-Optimized semi-regular Boaz Barak.
  • $30,000 for the Maine School of Science and Mathematics, which seems to offer a curriculum comparable to those of Thomas Jefferson, Bronx Science, or the nation’s other elite magnet high schools, but (1) on a shoestring budget and (2) in rural Maine. I hadn’t even heard of MSSM before Alex Altair, an alum and Shtetl-Optimized reader, told me about it, but now I couldn’t be prouder to support it.
  • $30,000 for the Eugene Math Circle, which provides a math enrichment lifeline to kids in Oregon, and whose funding was just cut. This donation will keep the program alive for another year.
  • $13,000 for the Summer Science Program, which this summer will offer research experiences to high-school juniors in astrophysics, biochemistry, and genomics.
  • $10,000 for the MISE Foundation, which provides math enrichment for the top middle- and high-school students in Ghana.
  • $10,000 for Number Champions, which provides one-on-one coaching to kids in the UK who struggle with math.
  • $10,000 for Bridge to Enter Advanced Mathematics (BEAM), which runs math summer programs in New York, Los Angeles, and elsewhere for underserved populations.
  • $10,000 for Powderhouse, an innovative lab school being founded in Somerville, MA.

While working on this, it crossed my mind that, on my deathbed, I might be at least as happy about having directed funds to efforts like these as about any of my research or teaching.

To the applicants who weren’t chosen: I’m sorry, as many of you had wonderful projects too! As I said in the earlier post, you remain warmly invited to apply to SFF, and to make your pitch to the other Speculators and/or the main SFF committee.

Needless to say, anyone who feels inspired should add to my (or rather, SFF’s) modest contributions to these STEM programs. My sense is that, while $200k can go eye-poppingly far in this area, it still hasn’t come close to exhausting even the lowest-hanging fruit.

Also needless to say, the opinions in this post are my own and are not necessarily shared by SFF or by the organizations I’m supporting. The latter are welcome to disagree with me as long as they keep up their great work!

Huge thanks again to Jaan, to SFF, to my SFF contact Andrew Critch, to everyone (whether chosen or not) who participated in this contest, and to everyone who’s putting in work to broaden kids’ intellectual horizons or otherwise make the world a little less horrible.

Book Review: “Viral” by Alina Chan and Matt Ridley

Saturday, January 1st, 2022

Happy New Year, everyone!

It was exactly two years ago that it first became publicly knowable—though most of us wouldn’t know for at least two more months—just how freakishly horrible is the branch of the wavefunction we’re on. I.e., that our branch wouldn’t just include Donald Trump as the US president, but simultaneously a global pandemic far worse than any in living memory, and a world-historically bungled response to that pandemic.

So it’s appropriate that I just finished reading Viral: The Search for the Origin of COVID-19, by Broad Institute genetics postdoc Alina Chan and science writer Matt Ridley. Briefly, I think that this is one of the most important books so far of the twenty-first century.

Of course, speculation and argument about the origin of COVID go back all the way to that fateful January of 2020, and most of this book’s information was already available in fragmentary form elsewhere. And by their own judgment, Chan and Ridley don’t end their search with a smoking gun: no Patient Zero, no Bat Zero, no security-cam footage of the beaker dropped on the Wuhan Institute of Virology floor. Nevertheless, as far as I’ve seen, this is the first analysis of COVID’s origin to treat the question with the full depth, gravity, and perspective that it deserves.

Viral is essentially a 300-page plea to follow every lead as if we actually wanted to get to the bottom of things, and in particular, yes, to take the possibility of a lab leak a hell of a lot more seriously than was publicly permitted in 2020. (Fortuitously, much of this shift already happened as the authors were writing the book, but in June 2021 I was still sneered at for discussing the lab leak hypothesis on this blog.) Viral is simultaneously a model of lucid, non-dumbed-down popular science writing and of cogent argumentation. The authors never once come across like tinfoil-hat-wearing conspiracy theorists, railing against the sheeple with their conventional wisdom: they’re simply investigators carefully laying out what they’re confident should become conventional wisdom, with the many uncertainties and error bars explicitly noted. If you read the book and your mind works anything like mine, be forewarned that you might come out agreeing with a lot of it.

I would say that Viral proves the following propositions beyond reasonable doubt:

  • Virologists, including at Shi Zhengli’s group at WIV and at Peter Daszak’s EcoHealth Alliance, were engaged in unbelievably risky work, including collecting virus-laden fecal samples from thousands of bats in remote caves, transporting them to the dense population center of Wuhan, and modifying them to be more dangerous, e.g., through serial passage through human cells and the insertion of furin cleavage sites. Years before the COVID-19 outbreak, there were experts remarking on how risky this research was and trying to stop it. Had they known just how lax the biosecurity was in Wuhan—dangerous pathogens experimented on in BSL-2 labs, etc. etc.—they would have been louder.
  • Even if it didn’t cause the pandemic, the massive effort to collect and enhance bat coronaviruses now appears to have been of dubious value. It did not lead to an actionable early warning about how bad COVID-19 was going to be, nor did it lead to useful treatments, vaccines, or mitigation measures, all of which came from other sources.
  • There are multiple routes by which SARS-CoV2, or its progenitor, could’ve made its way, otherwise undetected, from the remote bat caves of Yunnan province or some other southern location to the city of Wuhan a thousand miles away, as it has to do in any plausible origin theory. Having said that, the regular Yunnan→Wuhan traffic in scientific samples of precisely these kinds of viruses, sustained over a decade, does stand out a bit! On the infamous coincidence of the pandemic starting practically next door to the world’s center for studying SARS-like coronaviruses, rather than near where the horseshoe bats live in the wild, Chan and Ridley memorably quote Humphrey Bogart’s line from Casablanca: “Of all the gin joints in all the towns in all the world, she walks into mine.”
  • The seafood market was probably “just” an early superspreader site, rather than the site of the original spillover event. No bats or pangolins at all, and relatively few mammals of any kind, appear to have been sold at that market, and no sign of SARS-CoV2 was ever found in any of the animals despite searching.
  • Most remarkably, Shi and Daszak have increasingly stonewalled, refusing to answer 100% reasonable questions from fellow virologists. They’ve acted more and more like defendants exercising their right to remain silent than like participants in a joint search for the truth. That might be understandable if they’d already answered ad nauseam and wearied of repeating themselves, but with many crucial questions, they haven’t answered even once. They’ve refused to make available a key database of all the viruses WIV had collected, which WIV inexplicably took offline in September 2019. When, in January 2020, Shi disclosed to the world that WIV had collected a virus called RaTG13, which was 96% identical to SARS-CoV2, she didn’t mention that it was collected from a mine in Mojiang, which WIV had sampled over and over because six workers had gotten a SARS-like pneumonia there in 2012 and three had died from it. She didn’t let on that her group had been studying RaTG13 for years—giving, instead, the false impression that they’d just noticed it recently, when searching WIV’s records for cousins of SARS-CoV2. And she didn’t see fit to mention that WIV had collected eight other coronaviruses resembling SARS-CoV2 from the same mine (!). Shi’s original papers on SARS-CoV2 also passed in silence over the virus’s furin cleavage site—even though SARS-CoV2 was the first sarbecovirus with that feature, and Shi herself had recently demonstrated the insertion of furin cleavage sites into other viruses to make them more transmissible, and the cleavage site would’ve leapt out immediately to any coronavirus researcher as the most interesting feature of SARS-CoV2 and as key to its transmissibility. Some of these points had to be uncovered by Internet sleuths, poring over doctoral theses and the like, after which Shi would glancingly acknowledge them in talks without ever explaining her earlier silences. Shi and Daszak refused to cooperate with Chan and Ridley’s book, and have stopped answering questions more generally. When people politely ask Daszak about these matters on Twitter, he blocks them.
  • The Chinese regime has been every bit as obstructionist as you might expect: destroying samples, blocking credible investigations, censoring researchers, and preventing journalists from accessing the Mojiang mine. So Shi at least has the excuse that, even if she’d wanted to come clean with everything relevant she knows about WIV’s bat coronavirus work, she might not be able to do so without endangering herself or loved ones. Daszak has no such excuse.

It’s important to understand that, even in the worst case—that (1) there was a lab leak, and (2) Shi and Daszak are knowingly withholding information relevant to it—they’re far from monsters. Even in Viral‘s relentlessly unsparing account, they come across as genuine believers in their mission to protect the world from the next pandemic.

And it’s like: imagine devoting your life to that mission, having most of the world refuse to take you seriously, and then watching the calamity unfold exactly as you’d warned … except that, not only did your efforts fail to prevent it, but there’s a live possibility that they caused it. It’s conceivable that your life’s work managed to save minus 15 million lives and create minus $50 trillion in economic value.

Very few scientists in history have faced that sort of psychic burden, perhaps not even the ones who built the atomic bomb. I hope I’d maintain my scientific integrity under such an astronomical weight, but I’m doubtful that I would. Would you?

Viral very wisely never tries to psychoanalyze Shi and Daszak. I fear that one might need a lot of conceptual space between “knowing” and “not knowing,” “suspecting” and “not suspecting,” to do justice to the planet-sized enormity of what’s at stake here. Suppose, for example, that an initial investigation in January 2020 reassured you that SARS-CoV2 probably hadn’t come from your lab: would you continue trying to get to the bottom of things, or would you thereafter decide the matter was closed?

For all that, I agree with Chan and Ridley that COVID-19 might well have had a zoonotic origin after all. And one point Viral makes abundantly clear is that, if our goal is to prevent the next pandemic, then resolving the mystery of COVID-19 actually matters less than one might think. This is because, whichever possibility—zoonotic spillover or lab leak—turns out to be the truth of this case, the other possibility would remain absolutely terrifying and would demand urgent action as well. Read the book and see for yourself.

Searching my inbox, I found an email from April 16, 2020 where I told someone who’d asked me that the lab-leak hypothesis seemed perfectly plausible to me (albeit no more than plausible), that I couldn’t understand why it wasn’t being investigated more, but that I was hesitant to blog about these matters. As I wrote seven months ago, I now see my lack of courage on this as having been a personal failing. Obviously, I’m just a quantum computing theorist, not a biologist, so I don’t have to have any thoughts whatsoever about the origin of COVID-19 … but I did have some, and I didn’t share them here only because of the likelihood that I’d be called an idiot on social media. Having now read Chan and Ridley, though, I think I’d take being called an idiot for this book review more as a positive signal about my courage than as a negative signal about my reasoning skills!

At one level, Viral stands alongside, I dunno, Eichmann in Jerusalem among the saddest books I’ve ever read. It’s 300 pages of one of the great human tragedies of our lifetime balancing on a hinge between happening and not happening, and we all know how it turns out. On another level, though, Viral is optimistic. As with Richard Feynman’s famous “personal appendix” about the Space Shuttle Challenger explosion, the very act of writing such a book reflects a view that you’re still allowed to ask questions; that one or two people armed with nothing but arguments can run rings around governments, newspapers, and international organizations; that we don’t yet live in a post-truth world.

An Orthodox rabbi and Steven Weinberg walk into an email exchange…

Friday, October 22nd, 2021

Ever since I posted my obituary for the great Steven Weinberg three months ago, I’ve gotten a steady trickle of emails—all of which I’ve appreciated enormously—from people who knew Steve, or were influenced by him, and who wanted to share their own thoughts and memories. Last week, I was contacted by one Moshe Katz, an Orthodox rabbi, who wanted to share a long email exchange that he’d had with Steve, about Steve’s reasons for rejecting his birth-religion of Judaism (along with every other religion). Even though Rabbi Katz, rather than Steve, does most of the talking in this exchange, and even though Steve mostly expresses the same views he’d expressed in many of his public writings, I knew immediately on seeing this exchange that it could be of broader interest—so I secured permission to share it here on Shtetl-Optimized, both from Rabbi Katz and from Steve’s widow Louise.

While longtime readers can probably guess what I think about most of the topics discussed, I’ll refrain from any editorial commentary in this post—but of course, feel free to share your own thoughts in the comments, and maybe I’ll join in. Mostly, reading this exchange reminded me that someone at some point should write a proper book-length biography of Steve, and someone should also curate and publish a selection of his correspondence, much like Perfectly Reasonable Deviations from the Beaten Track did for Richard Feynman. There must be a lot more gems to be mined.

Anyway, without further ado, here’s the exchange (10 pages, PDF).

Update (Nov. 2, 2021): By request, see here for some of my own thoughts.