OpenAI!

June 17th, 2022

I have some exciting news (for me, anyway). Starting next week, I’ll be going on leave from UT Austin for one year, to work at OpenAI. They’re the creators of the astonishing GPT-3 and DALL-E2, which have not only endlessly entertained me and my kids, but recalibrated my understanding of what, for better and worse, the world is going to look like for the rest of our lives. Working with an amazing team at OpenAI, including Jan Leike, John Schulman, and Ilya Sutskever, my job will be to think about the theoretical foundations of AI safety and alignment. What, if anything, can computational complexity contribute to a principled understanding of how to get an AI to do what we want and not do what we don’t want?

Yeah, I don’t know the answer either. That’s why I’ve got a whole year to try to figure it out! One thing I know for sure, though, is that I’m interested both in the short-term, where new ideas are now quickly testable, and where the misuse of AI for spambots, surveillance, propaganda, and other nefarious purposes is already a major societal concern, and the long-term, where one might worry about what happens once AIs surpass human abilities across nearly every domain. (And all the points in between: we might be in for a long, wild ride.) When you start reading about AI safety, it’s striking how there are two separate communities—one mostly worried about machine learning perpetuating racial and gender biases, and the other mostly worried about superhuman AI turning the planet into goo—who not only don’t work together, but are at each other’s throats, with each accusing the other of totally missing the point. I persist, however, in the possibly-naïve belief that these are merely two extremes along a single continuum of AI worries. By figuring out how to align AI with human values today—constantly confronting our theoretical ideas with reality—we can develop knowledge that will give us a better shot at aligning it with human values tomorrow.

For family reasons, I’ll be doing this work mostly from home, in Texas, though traveling from time to time to OpenAI’s office in San Francisco. I’ll also spend 30% of my time continuing to run the Quantum Information Center at UT Austin and working with my students and postdocs. At the end of the year, I plan to go back to full-time teaching, writing, and thinking about quantum stuff, which remains my main intellectual love in life, even as AI—the field where I started, as a PhD student, before I switched to quantum computing—has been taking over the world in ways that none of us can ignore.

Maybe fittingly, this new direction in my career had its origins here on Shtetl-Optimized. Several commenters, including Max Ra and Matt Putz, asked me point-blank what it would take to induce me to work on AI alignment. Treating it as an amusing hypothetical, I replied that it wasn’t mostly about money for me, and that:

The central thing would be finding an actual potentially-answerable technical question around AI alignment, even just a small one, that piqued my interest and that I felt like I had an unusual angle on. In general, I have an absolutely terrible track record at working on topics because I abstractly feel like I “should” work on them. My entire scientific career has basically just been letting myself get nerd-sniped by one puzzle after the next.

Anyway, Jan Leike at OpenAI saw this exchange and wrote to ask whether I was serious in my interest. Oh shoot! Was I? After intensive conversations with Jan, others at OpenAI, and others in the broader AI safety world, I finally concluded that I was.

I’ve obviously got my work cut out for me, just to catch up to what’s already been done in the field. I’ve actually been in the Bay Area all week, meeting with numerous AI safety people (and, of course, complexity and quantum people), carrying a stack of technical papers on AI safety everywhere I go. I’ve been struck by how, when I talk to AI safety experts, they’re not only not dismissive about the potential relevance of complexity theory, they’re more gung-ho about it than I am! They want to talk about whether, say, IP=PSPACE, or MIP=NEXP, or the PCP theorem could provide key insights about how we could verify the behavior of a powerful AI. (Short answer: maybe, on some level! But, err, more work would need to be done.)
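As a toy illustration of the flavor of idea involved (my own, not drawn from any of those papers): randomized protocols can let a computationally weak verifier check the output of a far more powerful agent without redoing its work. Freivalds’ classic algorithm verifies a claimed matrix product in O(n²) time per trial, versus the ~n³ cost of recomputing it:

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Probabilistically verify the claim that A @ B == C.

    Each trial multiplies by a random 0/1 vector r and compares
    A(Br) with Cr -- three matrix-vector products, each O(n^2),
    instead of the O(n^3) matrix-matrix product. If C is wrong,
    a single trial catches it with probability >= 1/2, so 20
    trials miss with probability at most 2^-20.
    """
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # caught the "prover" cheating
    return True  # accept: C is almost certainly correct
```

The analogy to AI verification is loose, of course: the hope behind invoking IP=PSPACE and friends is that similarly cheap spot-checks, possibly interactive ones, could constrain much richer behaviors than matrix multiplication.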

How did this complexitophilic state of affairs come about? That brings me to another wrinkle in the story. Traditionally, students follow in the footsteps of their professors. But in trying to bring complexity theory into AI safety, I’m actually following in the footsteps of my student: Paul Christiano, one of the greatest undergrads I worked with in my nine years at MIT, the student whose course project turned into the Aaronson-Christiano quantum money paper. After MIT, Paul did a PhD in quantum computing at Berkeley, with my own former adviser Umesh Vazirani, while also working part-time on AI safety. Paul then left quantum computing to work on AI safety full-time—indeed, along with others such as Dario Amodei, he helped start the safety group at OpenAI. Paul has since left to found his own AI safety organization, the Alignment Research Center (ARC), although he remains on good terms with the OpenAI folks. Paul is largely responsible for bringing complexity theory intuitions and analogies into AI safety—for example, through the “AI safety via debate” paper and the Iterated Amplification paper. I’m grateful for Paul’s guidance and encouragement—as well as that of the others now working in this intersection, like Geoffrey Irving and Elizabeth Barnes—as I start this new chapter.

So, what projects will I actually work on at OpenAI? Yeah, I’ve been spending the past week trying to figure that out. I still don’t know, but a few possibilities have emerged. First, I might work out a general theory of sample complexity and so forth for learning in dangerous environments—i.e., learning where making the wrong query might kill you. Second, I might work on explainability and interpretability for machine learning: given a deep network that produced a particular output, what do we even mean by an “explanation” for “why” it produced that output? What can we say about the computational complexity of finding that explanation? Third, I might work on the ability of weaker agents to verify the behavior of stronger ones. Of course, if P≠NP, then the gap between the difficulty of solving a problem and the difficulty of recognizing a solution can sometimes be enormous. And indeed, even in empirical machine learning, there’s typically a gap between the difficulty of generating objects (say, cat pictures) and the difficulty of discriminating between them and other objects, the latter being easier. But this gap typically isn’t exponential, as is conjectured for NP-complete problems: it’s much smaller than that. And counterintuitively, we can then turn around and use the generators to improve the discriminators. How can we understand this abstractly? Are there model scenarios in complexity theory where we can prove that something similar happens? How far can we amplify the generator/discriminator gap—for example, by using interactive protocols, or debates between competing AIs?
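To make the solving-versus-verifying gap concrete, here’s a minimal illustrative sketch (my own example, not anything from the research program itself) using subset-sum, an NP-complete problem: brute-force search takes exponential time in the worst case, while checking a claimed certificate is linear.

```python
from itertools import combinations

def solve_subset_sum(nums, target):
    """Find indices of a subset summing to target.

    Brute force over all 2^n subsets -- exponential time,
    the 'hard direction' if P != NP.
    """
    for k in range(len(nums) + 1):
        for combo in combinations(range(len(nums)), k):
            if sum(nums[i] for i in combo) == target:
                return list(combo)
    return None

def verify_subset_sum(nums, target, certificate):
    """Check a claimed solution in O(n) time -- the 'easy direction'."""
    return sum(nums[i] for i in certificate) == target
```

For generative models the analogous gap between generating and discriminating appears to be far smaller than exponential, which is exactly the puzzle the paragraph above asks complexity theory to explain.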

OpenAI, of course, has the word “open” right in its name, and a founding mission “to ensure that artificial general intelligence benefits all of humanity.” But it’s also a for-profit enterprise, with investors and paying customers and serious competitors. So throughout the year, don’t expect me to share any proprietary information—that’s not my interest anyway, even if I hadn’t signed an NDA. But do expect me to blog my general thoughts about AI safety as they develop, and to solicit feedback from readers.

In the past, I’ve often been skeptical about the prospects for superintelligent AI becoming self-aware and destroying the world anytime soon (see, for example, my 2008 post The Singularity Is Far). While I was aware since 2005 or so of the AI-risk community; and of its leader and prophet, Eliezer Yudkowsky; and of Eliezer’s exhortations for people to drop everything else they’re doing and work on AI risk, as the biggest issue facing humanity, I … kept the whole thing at arm’s length. Even supposing I agreed that this was a huge thing to worry about, I asked, what on earth do you want me to do about it today? We know so little about a future superintelligent AI and how it would behave that any actions we took today would likely be useless or counterproductive.

Over the past 15 years, though, my and Eliezer’s views underwent a dramatic and ironic reversal. If you read Eliezer’s “litany of doom” from two weeks ago, you’ll see that he’s now resigned and fatalistic: because his early warnings weren’t heeded, he argues, humanity is almost certainly doomed and an unaligned AI will soon destroy the world. He says that there are basically no promising directions in AI safety research: for any alignment strategy anyone points out, Eliezer can trivially refute it by explaining how (e.g.) the AI would be wise to the plan, and would pretend to go along with whatever we wanted from it while secretly plotting against us.

The weird part is, just as Eliezer became more and more pessimistic about the prospects for getting anywhere on AI alignment, I’ve become more and more optimistic. Part of my optimism is because people like Paul Christiano have laid foundations for a meaty mathematical theory: much like the Web (or quantum computing theory) in 1992, it’s still in a ridiculously primitive stage, but even my limited imagination now suffices to see how much more could be built there. An even greater part of my optimism is because we now live in a world with GPT-3, DALL-E2, and other systems that, while they clearly aren’t AGIs, are powerful enough that worrying about AGIs has come to seem more like prudence than like science fiction. And we can finally test our intuitions against the realities of these systems, which (outside of mathematics) is pretty much the only way human beings have ever succeeded at anything.

I didn’t predict that machine learning models this impressive would exist by 2022. Most of you probably didn’t predict it. For godsakes, Eliezer Yudkowsky didn’t predict it. But it’s happened. And to my mind, one of the defining virtues of science is that, when empirical reality gives you a clear shock, you update and adapt, rather than expending your intelligence to come up with clever reasons why it doesn’t matter or doesn’t count.

Anyway, so that’s the plan! If I can figure out a way to save the galaxy, I will, but I’ve set my goals slightly lower, at learning some new things and doing some interesting research and writing some papers about it and enjoying a break from teaching. Wish me a non-negligible success probability!


Update (June 18): To respond to a couple criticisms that I’ve seen elsewhere on social media…

Can the rationalists sneer at me for waiting to get involved with this subject until it had become sufficiently “respectable,” “mainstream,” and “high-status”? I suppose they can, if that’s their inclination. I suppose I should be grateful that so many of them chose to respond instead with messages of congratulations and encouragement. Yes, I plead guilty to keeping this subject at arm’s length until I could point to GPT-3 and DALL-E2 and the other dramatic advances of the past few years to justify the reality of the topic to anyone who might criticize me. It feels internally like I had principled reasons for this: I can think of almost no examples of research programs that succeeded over decades even in the teeth of opposition from the scientific mainstream. If so, then arguably the best time to get involved with a “fringe” scientific topic, is when and only when you can foresee a path to it becoming the scientific mainstream. At any rate, that’s what I did with quantum computing, as a teenager in the mid-1990s. It’s what many scientists of the 1930s did with the prospect of nuclear chain reactions. And if I’d optimized for getting the right answer earlier, I might’ve had to weaken the filters and let in a bunch of dubious worries that would’ve paralyzed me. But I admit the possibility of self-serving bias here.

Should you worry that OpenAI is just hiring me to be able to say “look, we have Scott Aaronson working on the problem,” rather than actually caring about what its safety researchers come up with? I mean, I can’t prove that you shouldn’t worry about that. In the end, whatever work I do on the topic will have to speak for itself. For whatever it’s worth, though, I was impressed by the OpenAI folks’ detailed, open-ended engagement with these questions when I met them—sort of like how it might look if they actually believed what they said about wanting to get this right for the world. I wouldn’t have gotten involved otherwise.

Alright, so here are my comments…

June 12th, 2022

… on Blake Lemoine, the Google engineer who became convinced that a machine learning model had become sentient, contacted federal government agencies about it, and was then placed on administrative leave for violating Google’s confidentiality policies.

(1) I don’t think Lemoine is right that LaMDA is at all sentient, but the transcript is so mind-bogglingly impressive that I did have to stop and think for a second! Certainly, if you sent the transcript back in time to 1990 or whenever, even an expert reading it might say, yeah, it looks like by 2022 AGI has more likely been achieved than not (“but can I run my own tests?”). Read it for yourself, if you haven’t yet.

(2) Reading Lemoine’s blog and Twitter this morning, he holds many views that I disagree with, not just about the sentience of LaMDA. Yet I’m touched and impressed by how principled he is, and I expect I’d hit it off with him if I met him. I wish that a solution could be found where Google wouldn’t fire him.

Computer scientists crash the Solvay Conference

June 9th, 2022

Thanks so much to everyone who sent messages of support following my last post! I vowed there that I’m going to stop letting online trolls and sneerers occupy so much space in my mental world. Truthfully, though, while there are many trolls and sneerers who terrify me, there are also some who merely amuse me. A good example of the latter came a few weeks ago, when an anonymous commenter calling themselves “String Theorist” submitted the following:

It’s honestly funny to me when you [Scott] call yourself a “nerd” or a “prodigy” or whatever [I don’t recall ever calling myself a “prodigy,” which would indeed be cringe, though “nerd” certainly —SA], as if studying quantum computing, which is essentially nothing more than glorified linear algebra, is such an advanced intellectual achievement. For what it’s worth I’m a theoretical physicist, I’m in a completely different field, and I was still able to learn Shor’s algorithm in about half an hour, that’s how easy this stuff is. I took a look at some of your papers on arXiv and the math really doesn’t get any more advanced than linear algebra. To understand quantum circuits about the most advanced concept is a tensor product which is routinely covered in undergraduate linear algebra. Wheras in my field of string theory grasping, for instance, holographic dualities relating confirmal field theories and gravity requires vastly more expertise (years of advanced study). I actually find it pretty entertaining that you’ve said yourself you’re still struggling to understand QFT, which most people I’m working with in my research group were first exposed to in undergrad 😉 The truth is we’re in entirely different leagues of intelligence (“nerdiness”) and any of your qcomputing papers could easily be picked up by a first or second year math major. It’s just a joke that this is even a field (quantum complexity theory) with journals and faculty when the results in your papers that I’ve seen are pretty much trivial and don’t require anything more than undergraduate level maths.

Why does this sort of trash-talk, reminiscent of Luboš Motl, no longer ruffle me? Mostly because the boundaries between quantum computing theory, condensed matter physics, and quantum gravity, which were never clear in the first place, have steadily gotten fuzzier. Even in the 1990s, the field of quantum computing attracted amazing physicists—folks who definitely do know quantum field theory—such as Ed Farhi, John Preskill, and Ray Laflamme. Decades later, it would be fair to say that the physicists have banged their heads against many of the same questions that we computer scientists have banged our heads against, oftentimes in collaboration with us. And yes, there were cases where actual knowledge of particle physics gave physicists an advantage—with some famous examples being the algorithms of Farhi and collaborators (the adiabatic algorithm, the quantum walk on conjoined trees, the NAND-tree algorithm). There were other cases where computer scientists’ knowledge gave them an advantage: I wouldn’t know many details about that, but conceivably shadow tomography, BosonSampling, PostBQP=PP? Overall, it’s been what you wish every interdisciplinary collaboration could be.

What’s new, in the last decade, is that the scientific conversation centered around quantum information and computation has dramatically “metastasized,” to encompass not only a good fraction of all the experimentalists doing quantum optics and sensing and metrology and so forth, and not only a good fraction of all the condensed-matter theorists, but even many leading string theorists and quantum gravity theorists, including Susskind, Maldacena, Bousso, Hubeny, Harlow, and yes, Witten. And I don’t think it’s just that they’re too professional to trash-talk quantum information people the way commenter “String Theorist” does. Rather it’s that, because of the intellectual success of “It from Qubit,” we’re increasingly participating in the same conversations and working on the same technical questions. One particularly exciting such question, which I’ll have more to say about in a future post, is the truth or falsehood of the Quantum Extended Church-Turing Thesis for observers who jump into black holes.

Not to psychoanalyze, but I’ve noticed a pattern wherein, the more secure a scientist is about their position within their own field, the readier they are to admit ignorance about the neighboring fields, to learn about those fields, and to reach out to the experts in them, to ask simple or (as it usually turns out) not-so-simple questions.


I can’t imagine any better illustration of these tendencies than the 28th Solvay Conference on the Physics of Quantum Information, which I attended two weeks ago in Brussels on my 41st birthday.

As others pointed out, the proportion of women is not as high as we all wish, but it’s higher than in 1911, when there was exactly one: Madame Curie herself.

It was my first trip out of the US since before COVID—indeed, I’m so out of practice that I nearly missed my flights in both directions, in part because of my lack of familiarity with the COVID protocols for transatlantic travel, as well as the immense lines caused by those protocols. My former adviser Umesh Vazirani, who was also at the Solvay Conference, was proud.

The Solvay Conference is the venue where, legendarily, the fundamentals of quantum mechanics got hashed out between 1911 and 1927, by the likes of Einstein, Bohr, Planck, and Curie. (Einstein complained, in a letter, about being called away from his work on general relativity to attend a “witches’ sabbath.”) Remarkably, it’s still being held in Brussels every few years, and still funded by the same Solvay family that started it. The once-every-few-years schedule has, we were constantly reminded, been interrupted only three times in its 110-year history: once for WWI, once for WWII, and now once for COVID (this year’s conference was supposed to be in 2020).

This was the first ever Solvay conference organized around the theme of quantum information, and apparently, the first ever that counted computer scientists among its participants (me, Umesh Vazirani, Dorit Aharonov, Urmila Mahadev, and Thomas Vidick). There were four topics: (1) many-body physics, (2) quantum gravity, (3) quantum computing hardware, and (4) quantum algorithms. The structure, apparently unchanged since the conference’s founding, is this: everyone attends every session, without exception. They sit around facing each other the whole time; no one ever stands to lecture. For each topic, two “rapporteurs” introduce the topic with half-hour prepared talks; then there are short prepared response talks as well as an hour or more of unstructured discussion. Everything everyone says is recorded in order to be published later.


Daniel Gottesman and I were the two rapporteurs for quantum algorithms: Daniel spoke about quantum error-correction and fault-tolerance, and I spoke about “How Much Structure Is Needed for Huge Quantum Speedups?” The link goes to my PowerPoint slides, if you’d like to check them out. I tried to survey 30 years of history of that question, from Simon’s and Shor’s algorithms, to huge speedups in quantum query complexity (e.g., glued trees and Forrelation), to the recent quantum supremacy experiments based on BosonSampling and Random Circuit Sampling, all the way to the breakthrough by Yamakawa and Zhandry a couple months ago. The last slide hypothesizes a “Law of Conservation of Weirdness,” which after all these decades still remains to be undermined: “For every problem that admits an exponential quantum speedup, there must be some weirdness in its detailed statement, which the quantum algorithm exploits to focus amplitude on the rare right answers.” My title slide also shows DALL-E2‘s impressionistic take on the title question, “how much structure is needed for huge quantum speedups?”:

The discussion following my talk was largely a debate between me and Ed Farhi, reprising many debates he and I have had over the past 20 years: Farhi urged optimism about the prospect for large, practical quantum speedups via algorithms like QAOA, pointing out his group’s past successes and explaining how they wouldn’t have been possible without an optimistic attitude. For my part, I praised the past successes and said that optimism is well and good, but at the same time, companies, venture capitalists, and government agencies are right now pouring billions into quantum computing, in many cases—as I know from talking to them—because of a mistaken impression that QCs are already known to be able to revolutionize machine learning, finance, supply-chain optimization, or whatever other application domains they care about, and to do so soon. They’re genuinely surprised to learn that the consensus of QC experts is in a totally different place. And to be clear: among quantum computing theorists, I’m not at all unusually pessimistic or skeptical, just unusually willing to say in public what others say in private.

Afterwards, one of the string theorists said that Farhi’s arguments with me had been a highlight … and I agreed. What’s the point of a friggin’ Solvay Conference if everyone’s just going to agree with each other?


Besides quantum algorithms, there was naturally lots of animated discussion about the practical prospects for building scalable quantum computers. While I’d hoped that this discussion might change the impressions I’d come with, it mostly confirmed them. Yes, the problem is staggeringly hard. Recent ideas for fault-tolerance, including the use of LDPC codes and bosonic codes, might help. Gottesman’s talk gave me the insight that, at its core, quantum fault-tolerance is all about testing, isolation, and contact-tracing, just for bit-flip and phase-flip errors rather than viruses. Alas, we don’t yet have the quantum fault-tolerance analogue of a vaccine!
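A minimal sketch of that testing-and-contact-tracing flavor, using the classical 3-bit repetition code (my own toy example; real quantum fault-tolerance must also handle phase flips, and syndrome measurements that don’t disturb superpositions, so this only conveys the flavor): parity checks play the role of “tests,” and the resulting syndrome “contact-traces” which bit flipped.

```python
def encode(bit):
    """Repetition code: protect one logical bit as three physical bits."""
    return [bit, bit, bit]

def syndrome(codeword):
    """Two parity checks on pairs of bits -- the 'tests.'

    Crucially, they locate a single bit-flip without ever reading
    the logical bit itself, the classical shadow of a quantum
    syndrome measurement.
    """
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

def correct(codeword):
    """Use the syndrome to 'contact-trace' the flipped bit and fix it."""
    s = syndrome(codeword)
    # Each single-bit-flip location produces a unique syndrome pattern.
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(s)
    if flip is not None:
        codeword[flip] ^= 1
    return codeword
```

Flipping any one of the three bits of `encode(1)` and running `correct` recovers `[1, 1, 1]`; two or more flips overwhelm this tiny code, which is why real schemes concatenate or use much larger codes.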

At one point, I asked the trapped-ion experts in open session if they’d comment on the startup company IonQ, whose stock price recently fell precipitously in the wake of a scathing analyst report. Alas, none of them took the bait.

On a different note, I was tremendously excited by the quantum gravity session. Netta Engelhardt spoke about her and others’ celebrated recent work explaining the Page curve of an evaporating black hole using Euclidean path integrals—and by questioning her and others during coffee breaks, I finally got a handwavy intuition for how it works. There was also lots of debate, again at coffee breaks, about Susskind’s recent speculations on observers jumping into black holes and the quantum Extended Church-Turing Thesis. One of my main takeaways from the conference was a dramatically better understanding of the issues involved there—but that’s a big enough topic that it will need its own post.

Toward the end of the quantum gravity session, the experimentalist John Martinis innocently asked what actual experiments, or at least thought experiments, had been at issue for the past several hours. I got a laugh by explaining to him that, while the gravity experts considered this too obvious to point out, the thought experiments in question all involve forming a black hole in a known quantum pure state, with total control over all the Planck-scale degrees of freedom; then waiting outside the black hole for ~10^70 years; collecting every last photon of Hawking radiation that comes out and routing them all into a quantum computer; doing a quantum computation that might actually require exponential time; and then jumping into the black hole, whereupon you might either die immediately at the event horizon, or else learn something in your last seconds before hitting the singularity, which you could then never communicate to anyone outside the black hole. Martinis thanked me for clarifying.


Anyway, I had a total blast. Here I am amusing some of the world’s great physicists by letting them mess around with GPT-3.

Back: Ahmed Almheiri, Juan Maldacena, John Martinis, Aron Wall. Front: Geoff Penington, me, Daniel Harlow. Thanks to Michelle Simmons for the photo.

I also had the following exchange at my birthday dinner:

Physicist: So I don’t get this, Scott. Are you a physicist who studied computer science, or a computer scientist who studied physics?

Me: I’m a computer scientist who studied computer science.

Physicist: But then you…

Me: Yeah, at some point I learned what a boson was, in order to invent BosonSampling.

Physicist: And your courses in physics…

Me: They ended at thermodynamics. I couldn’t handle PDEs.

Physicist: What are the units of h-bar?

Me: Uhh, well, it’s a conversion factor between energy and time. (*)

Physicist: Good. What’s the radius of the hydrogen atom?

Me: Uhh … not sure … maybe something like 10^-15 meters?

Physicist: OK fine, he’s not one of us.

(The answer, it turns out, is more like 10^-10 meters. I’d stupidly substituted the radius of the nucleus—or, y’know, a positively-charged hydrogen ion, i.e. proton. In my partial defense, I was massively jetlagged and at most 10% conscious.)

(*) Actually h-bar is a conversion factor between energy and 1/time, i.e. frequency, but the physicist accepted this answer.


Anyway, I look forward to attending more workshops this summer, seeing more colleagues who I hadn’t seen since before COVID, and talking more science … including branching out in some new directions that I’ll blog about soon. It does beat worrying about online trolls.

An understandable failing?

May 29th, 2022

I hereby precommit that this will be my last post, for a long time, around the twin themes of (1) the horribleness in the United States and the world, and (2) my desperate attempts to reason with various online commenters who hold me personally complicit in all this horribleness. I should really focus my creativity more on actually fixing the world’s horribleness, than on seeking out every random social-media mudslinger who blames me for it, shouldn’t I? Still, though, isn’t undue obsession with the latter a pretty ordinary human failing, a pretty understandable one?

So anyway, if you’re one of the thousands of readers who come here simply to learn more about quantum computing and computational complexity, rather than to try to provoke me into mounting a public defense of my own existence (which defense will then, ironically but inevitably, stimulate even more attacks that need to be defended against) … well, either scroll down to the very end of this post, or wait for the next post.


Thanks so much to all my readers who donated to Fund Texas Choice. As promised, I’ve personally given them a total of $4,106.28, to match the donations that came in by the deadline. I’d encourage people to continue donating anyway, while for my part I’ll probably run some more charity matching campaigns soon. These things are addictive, like pulling the lever of a slot machine, but where the rewards go to making the world an infinitesimal amount more consistent with your values.


Of course, now there’s a brand-new atrocity to shame my adopted state of Texas before the world. While the Texas government will go to extraordinary lengths to protect unborn children, the world has now witnessed 19 of its born children consigned to gruesome deaths, as the “good guys with guns” waited outside and prevented parents from entering the classrooms where their children were being shot. I have nothing original to add to the global outpourings of rage and grief. Forget about the statistical frequency of these events: I know perfectly well that the risk from car crashes and home accidents is orders-of-magnitude greater. Think about it this way: the United States is now known to the world as “the country that can’t or won’t do anything to stop its children from semi-regularly being gunned down in classrooms,” not even measures that virtually every other comparable country on earth has successfully taken. It’s become the symbol of national decline, dysfunction, and failure. If so, then the stakes here could fairly be called existential ones—not because of its direct effects on child life expectancy or GDP or any other index of collective well-being that you can define and measure, but rather, because a country that lacks the will to solve this will be judged by the world, and probably accurately, as lacking the will to solve anything else.


In return for the untold thousands of hours I’ve poured into this blog, which has never once had advertising or asked for subscriptions, my reward has been years of vilification by sneerers and trolls. Some of the haters even compare me to Elliot Rodger and other aggrieved mass shooters. And I mean: yes, it’s true that I was bullied and miserable for years. It’s true that Elliot Rodger, Salvador Ramos (the Uvalde shooter), and most other mass shooters were also bullied and miserable for years. But, Scott-haters, if we’re being intellectually honest about this, we might say that the similarities between the mass shooter story and the Scott Aaronson story end at a certain point not very long after that. We might say: it’s not just that Aaronson didn’t respond by hurting anybody—rather, it’s that his response loudly affirmed the values of the Enlightenment, meaning like, the whole package, from individual autonomy to science and reason to the rejection of sexism and racism to everything in between. Affirmed it in a manner that’s not secretly about popularity (demonstrably so, because it doesn’t get popularity), affirmed it via self-questioning methods intellectually honest enough that they’d probably still have converged on the right answer even in situations where it’s now obvious that almost everyone around you would’ve been converging on the wrong answer, like (say) Nazi Germany or the antebellum South.

I’ve been to the valley of darkness. While there, I decided that the only “revenge” against the bullies that was possible or desirable was to do something with my life, to achieve something in science that at least some bullies might envy, while also starting a loving family and giving more than most to help strangers on the Internet and whatever good cause comes to my attention and so on. And after 25 years of effort, some people might say I’ve sort of achieved the “revenge” as I’d then defined it. And they might further say: if you could get every school shooter to redefine “revenge” as “becoming another Scott Aaronson,” that would be, you know, like, a step upwards. An improvement.


And let this be the final word on the matter that I ever utter in all my days, to the thousands of SneerClubbers and Twitter randos who pursue this particular line of attack against Scott Aaronson (yes, we do mean the thousands—a number that feels to its recipient like the entire earth, yet is actually less than 0.01% of the earth).

We see what Scott did with his life, when subjected for a decade to forms of psychological pressure that are infamous for causing young males to lash out violently. What would you have done with your life?


A couple weeks ago, when the trolling attacks were arriving minute by minute, I toyed with the idea of permanently shutting down this blog. What’s the point? I asked myself. Back in 2005, the open Internet was fun; now it’s a charred battle zone. Why not restrict conversation to my academic colleagues and friends? Haven’t I done enough for a public that gives me so much grief? I was dissuaded by many messages of support from loyal readers. Thank you so much.


If anyone needs something to cheer them up, you should really watch Prehistoric Planet, narrated by an excellent, 96-year-old David Attenborough. Maybe 35 years from now, people will believe dinosaurs looked or acted somewhat differently from these portrayals, just like they believe somewhat differently now from when I was a kid. On the other hand, if you literally took a time machine to the Late Cretaceous and started filming, you couldn’t get a result that seemed more realistic, let’s say to a documentary-watching child, than these CGI dinosaurs on their CGI planet seem. So, in the sense of passing that child’s Turing Test, you might argue, the problem of bringing back the dinosaurs has now been solved.

If you … err … really want to be cheered up, you can follow up with Dinosaur Apocalypse, also narrated by Attenborough, where you can (again, as if you were there) watch the dinosaurs being drowned and burned alive in their billions when the asteroid hits. We’d still be scurrying under rocks, were it not for that lucky event that only a monster could’ve called lucky at the time.


Several people asked me to comment on the recent savage investor review against the quantum computing startup IonQ. The review amusingly mixed together every imaginable line of criticism, with every imaginable degree of reasonableness from 0% to 100%. Like, quantum computing is impossible even in theory, and (in the very next sentence) other companies are much closer to realizing quantum computing than IonQ is. See also IonQ’s response to the criticism, and this post by the indefatigable Gil Kalai.

Is it, err, OK if I sit this one out for now? There’s probably, like, actually an already-existing machine learning model where, if you trained it on all of my previous quantum computing posts, it would know exactly what to say about this.

Donate to protect women’s rights: a call to my fellow creepy, gross, misogynist nerdbros

May 4th, 2022

So, I’d been planning a fun post for today about the DALL-E image-generating AI model, and in particular, a brief new preprint about DALL-E’s capabilities by Ernest Davis, Gary Marcus, and myself. We wrote this preprint as a sort of “adversarial collaboration”: Ernie and Gary started out deeply skeptical of DALL-E, while I was impressed bordering on awestruck. I was pleasantly surprised that we nevertheless managed to produce a text that we all agreed on.

Not for the first time, though, world events have derailed my plans. The most important part of today’s post is this:

For the next week, I, Scott Aaronson, will personally match all reader donations to Fund Texas Choice—a group that helps women in Texas travel to out-of-state health clinics, for reasons that are neither your business nor mine—up to a total of $5,000.

To show my seriousness, I’ve already donated $1,000. Just let me know how much you’ve donated in the comments section!

The first reason for this donation drive is that, perhaps like many of you, I stayed up hours last night reading Alito’s leaked decision in a state of abject terror. I saw how the logic of the decision, consistent and impeccable on its own terms, is one by which the Supreme Court’s five theocrats could now proceed to unravel the whole of modernity. I saw how this court, unchecked by our broken democratic system, can now permanently enshrine the will of a radical minority, perhaps unless and until the United States is plunged into a second Civil War.

Anyway, that’s the first reason for the donation drive. The second reason is to thank Shtetl-Optimized’s commenters for their … err, consistently generous and thought-provoking contributions. Let’s take, for example, this comment on last week’s admittedly rather silly post, from an anonymous individual who calls herself “Feminist Bitch,” and who was enraged that it took me a full day to process one of the great political cataclysms of our lifetimes and publicly react to it:

OF COURSE. Not a word about Roe v. Wade being overturned, but we get a pseudo-intellectual rationalist-tier rant about whatever’s bumping around Scott’s mind right now. Women’s most basic reproductive rights are being curtailed AS WE SPEAK and not a peep from Scott, eh? Even though in our state (Texas) there are already laws ON THE BOOKS that will criminalize abortion as soon as the alt-right fascists in our Supreme Court give the go-ahead. If you cared one lick about your female students and colleagues, Scott, you’d be posting about the Supreme Court and helping feminist causes, not posting your “memes.” But we all know Scott doesn’t give a shit about women. He’d rather stand up for creepy nerdbros and their right to harass women than women’s right to control their own fucking bodies. Typical Scott.

If you want, you can read all of Feminist Bitch’s further thoughts about my failings, with my every attempt to explain and justify myself met with further contempt. No doubt my well-meaning friends of both sexes would counsel me to ignore her. Alas, from my infamous ordeal of late 2014, I know that with her every word, Feminist Bitch speaks for thousands, and the knowledge eats at me day and night.

It’s often said that “the right looks for converts, while the left looks only for heretics.” Has Feminist Bitch ever stopped to think about how our civilization reached its current terrifying predicament—how Trump won in 2016, how the Supreme Court got packed with extremists who represent a mere 25% of the country, how Putin and Erdogan and Orban and Bolsonaro and all the rest consolidated their power? Does she think it happened because wokeists like herself reached out too much, made too many inroads among fellow citizens who share some but not all of their values? Would Feminist Bitch say that, if the Democrats want to capitalize on the coming tsunami of outrage about the death of Roe and the shameless lies that enabled it, if they want to sweep to victory in the midterms and enshrine abortion rights into federal law … then their best strategy would be to double down on their condemnations of gross, creepy, smelly, white male nerdbros who all the girls, like, totally hate?

(until, thank God, some of them don’t)

I continue to think that the majority of my readers, of all races and sexes and backgrounds, are reasonable and sane. I continue to think the majority of you recoil against hatred and dehumanization of anyone—whether that means women seeking abortions, gays, trans folks, or (gasp!) even white male techbros. In this sad twilight for the United States and for liberal democracy around the world, we the reasonable and sane, we the fans of the Enlightenment, we the Party of Psychological Complexity, have decades of work cut out for us. For now I’ll simply say: I don’t hear from you nearly enough in the comments.

My first-ever attempt to create a meme!

April 27th, 2022

An update on the campaign to defend serious math education in California

April 26th, 2022

Update (April 27): Boaz Barak—Harvard CS professor, longtime friend-of-the-blog, and coauthor of my previous guest post on this topic—has just written an awesome FAQ, providing his personal answers to the most common questions about what I called our “campaign to defend serious math education.” It directly addresses several issues that have already come up in the comments. Check it out!


As you might remember, last December I hosted a guest post about the “California Mathematics Framework” (CMF), which was set to cause radical changes to precollege math in California—e.g., eliminating 8th-grade algebra and making it nearly impossible to take AP Calculus. I linked to an open letter setting out my and my colleagues’ concerns about the CMF. That letter went on to receive more than 1700 signatures from STEM experts in industry and academia from around the US, including recipients of the Nobel Prize, Fields Medal, and Turing Award, as well as a lot of support from college-level instructors in California. 

Following widespread pushback, a new version of the CMF appeared in mid-March. I and others are gratified that the new version significantly softens the opposition to acceleration in high school math and to calculus as a central part of mathematics.  Nonetheless, we’re still concerned that the new version promotes a narrative about data science that’s a recipe for cutting kids off from any chance at earning a 4-year college degree in STEM fields (including, ironically, in data science itself).

To that end, some of my Californian colleagues have issued a new statement today on behalf of academic staff at 4-year colleges in California, aimed at clearing away the fog on how mathematics is related to data science. I strongly encourage my readers on the academic staff at 4-year colleges in California to sign this commonsense statement, which has already been signed by over 250 people (including, notably, at least 50 from Stanford, home of two CMF authors).

As a public service announcement, I’d also like to bring to wider awareness Section 18533 of the California Education Code, which provides a process for submitting written statements to the California State Board of Education (SBE) about errors, objections, and concerns in curricular frameworks such as the CMF.

The SBE is scheduled to vote on the CMF in mid-July, and their remaining meeting before then is on May 18-19 according to this site, so it is really at the May meeting that concerns need to be aired.  Section 18533 requires submissions to be written (yes, snail mail) and postmarked at least 10 days before the SBE meeting. So to make your voice heard by the SBE, please send your written concern by certified mail (for tracking, but not requiring signature for delivery), no later than Friday May 6, to State Board of Education, c/o Executive Secretary of the State Board of Education, 1430 N Street, Room 5111, Sacramento, CA 95814, complemented by an email submission to sbe@cde.ca.gov and mathframework@cde.ca.gov.

On form versus meaning

April 24th, 2022

There is a fundamental difference between form and meaning. Form is the physical structure of something, while meaning is the interpretation or concept that is attached to that form. For example, the form of a chair is its physical structure – four legs, a seat, and a back. The meaning of a chair is that it is something you can sit on.

This distinction is important when considering whether or not an AI system can be trained to learn semantic meaning. AI systems are capable of learning and understanding the form of data, but they are not able to attach meaning to that data. In other words, AI systems can learn to identify patterns, but they cannot understand the concepts behind those patterns.

For example, an AI system might be able to learn that a certain type of data is typically associated with the concept of “chair.” However, the AI system would not be able to understand what a chair is or why it is used. In this way, we can see that an AI system trained on form can never learn semantic meaning.

–GPT3, when I gave it the prompt “Write an essay proving that an AI system trained on form can never learn semantic meaning” 😃

Back

April 23rd, 2022

Thanks to everyone who asked whether I’m OK! Yeah, I’ve been living, loving, learning, teaching, worrying, procrastinating, just not blogging.


Last week, Takashi Yamakawa and Mark Zhandry posted a preprint to the arXiv, “Verifiable Quantum Advantage without Structure,” that represents some of the most exciting progress in quantum complexity theory in years. I wish I’d thought of it. tl;dr they show that relative to a random oracle (!), there’s an NP search problem that quantum computers can solve exponentially faster than classical ones. And yet this is 100% consistent with the Aaronson-Ambainis Conjecture!


A student brought my attention to Quantle, a variant of Wordle where you need to guess a true equation involving 1-qubit quantum states and unitary transformations. It’s really well-done! Possibly the best quantum game I’ve seen.
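For readers who haven’t played with 1-qubit states before, here’s a minimal sketch (my own illustration, not taken from Quantle itself) of the kind of “true equation” the game deals in: the Hadamard gate H maps |0⟩ to the equal superposition |+⟩ = (|0⟩+|1⟩)/√2, which is easy to verify numerically.

```python
import numpy as np

# Hadamard gate and computational basis state |0>
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])

# |+> = (|0> + |1>) / sqrt(2)
ket_plus = np.array([1.0, 1.0]) / np.sqrt(2)

result = H @ ket0

# H|0> = |+>, and the result is still a normalized state
assert np.allclose(result, ket_plus)
assert np.isclose(np.linalg.norm(result), 1.0)
```

Any such candidate equation can be checked the same way: build the unitary as a 2×2 matrix, apply it to the state vector, and compare.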


Last month, Microsoft announced on the web that it had achieved an experimental breakthrough in topological quantum computing: not quite the creation of a topological qubit, but some of the underlying physics required for that. This followed their needing to retract their previous claim of such a breakthrough, due to the criticisms of Sergey Frolov and others. One imagines that they would’ve taken far greater care this time around. Unfortunately, a research paper doesn’t seem to be available yet. Anyone with further details is welcome to chime in.


Woohoo! Maximum flow, maximum bipartite matching, matrix scaling, and isotonic regression on posets (among many others)—all algorithmic problems that I was familiar with way back in the 1990s—are now solvable in nearly-linear time, thanks to a breakthrough by Chen et al.! Many undergraduate algorithms courses will need to be updated.
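For anyone who’d like a concrete reminder of the problem at the heart of that result: here is a minimal Edmonds–Karp sketch of classical maximum flow (the textbook O(VE²) approach, emphatically not the new nearly-linear-time algorithm, which is far more involved). The graph and names here are my own toy example.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest paths found by BFS.
    capacity: dict-of-dicts, capacity[u][v] = capacity of edge u->v."""
    # Build a residual graph, adding zero-capacity reverse edges
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in capacity:
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS from source for a shortest augmenting path
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: flow is maximum
        # Bottleneck capacity along the path
        path_flow, v = float('inf'), sink
        while parent[v] is not None:
            path_flow = min(path_flow, residual[parent[v]][v])
            v = parent[v]
        # Push flow: decrease forward residuals, increase reverse ones
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= path_flow
            residual[v][u] += path_flow
            v = u
        flow += path_flow

# Toy network: s->a->t and s->b->t, plus a cross edge a->b
graph = {'s': {'a': 3, 'b': 2}, 'a': {'t': 2, 'b': 1}, 'b': {'t': 3}, 't': {}}
print(max_flow(graph, 's', 't'))  # 5
```

The breakthrough is that problems like this one, long stuck at superlinear bounds, now admit algorithms running in nearly-linear time in the input size.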


For those interested, Steve Hsu recorded a podcast with me where I talk about quantum complexity theory.

Nothing non-obvious to say…

February 24th, 2022

… but these antiwar protesters in St. Petersburg know that they’re all going to be arrested and are doing it anyway.

Meanwhile, I just spent an hour giving Lily, my 9-year-old, a crash course on geopolitics, including WWII, the Cold War, the formation of NATO, Article 5, nuclear deterrence, economic sanctions, the breakup of the USSR, Ukraine, the Baltic Republics, and the prospects now for WWIII. Her comment at the end was that from now on she’s going to refer to Putin as “Poopin,” in the hope that that shames him into changing course.

Update (March 1): A longtime Shtetl-Optimized reader has a friend who’s trying to raise funds to get her family out of Ukraine. See here if you’d like to help.