Archive for the ‘Announcements’ Category

A low-tech solution

Tuesday, July 19th, 2022

Thanks so much to everyone who offered help and support as this blog’s comment section endured the weirdest, most motivated and sophisticated troll attack in its 17-year history. For a week, a parade of self-assured commenters showed up to demand that I explain and defend my personal hygiene, private thoughts, sexual preferences, and behavior around female students (and, absurdly, to cajole me into taking my family on a specific Disney cruise ship). In many cases, the troll or trolls appropriated the names and email addresses of real academics, imitating them so convincingly that those academics’ closest colleagues told me they were confident it was really them. And when some trolls finally “outed” themselves, I had no way to know whether that was just another chapter in the trolling campaign. It was enough to precipitate an epistemic crisis, where one actively doubts the authenticity of just about every piece of text.

The irony isn’t lost on me that I’ve endured this just as I’m starting my year-long gig at OpenAI, to think, among other things, about the potential avenues for misuse of Large Language Models like GPT-3, and what theoretical computer science could contribute to mitigating them. To say this episode has given me a more vivid understanding of the risks would be an understatement.

But why didn’t I just block and ignore the trolls immediately? Why did I bother engaging?

At least a hundred people asked some variant of this question, and the answer is this. For most of my professional life, this blog has been my forum, where anyone in the world could show up to raise any issue they wanted, as if we were tunic-wearing philosophers in the Athenian agora. I prided myself on my refusal to take the coward’s way out and ignore anything—even, especially, severe personal criticism. I’d witnessed how Jon Stewart, let’s say, would night after night completely eviscerate George W. Bush, his policies and worldview and way of speaking and justifications and lies, and then Bush would just continue the next day, totally oblivious, never deigning to rebut any of it. And it became a core part of my identity that I’d never be like that. If anyone on earth had a narrative of me where I was an arrogant bigot, a clueless idiot, etc., I’d confront that narrative head-on and refute it—or if I couldn’t, I’d reinvent my whole life. What I’d never do is suffer anyone’s monstrous caricature of me to strut around the Internet unchallenged, as if conceding that only my academic prestige or tenure or power, rather than a reasoned rebuttal, could protect me from the harsh truths that the caricature revealed.

Over the years, of course, I carved out some exceptions: P=NP provers and quantum mechanics deniers enraged that I’d dismissed their world-changing insights. Raving antisemites. Their caricatures of me had no legs in any community I cared about. But if an attack carried the implied backing of the whole modern social-justice movement, of thousands of angry grad students on Twitter, of Slate and Salon and New York Times writers and Wikipedia editors and university DEI offices, then the coward’s way out was closed. The monstrous caricature then loomed directly over me; I could either parry his attacks or die.

With this stance, you might say, the astounding part is not that this blog’s “agora” model eventually broke down, but rather that it survived for so long! I started blogging in October 2005. It took until July 2022 for me to endure a full-scale “social/emotional denial of service attack” (not counting the comment-171 affair). Now that I have, though, it’s obvious even to me that the old way is no longer tenable.

So what’s the solution? Some of you liked the idea of requiring registration with real email addresses—but alas, when I tried to implement that, I found that WordPress’s registration system is a mess and I couldn’t see how to make it work. Others liked the idea of moving to Substack, but others actively hated it, and in any case, even if I moved, I’d still have to figure out a comment policy! Still others liked the idea of an army of volunteer moderators. At least ten people volunteered themselves.

On reflection, the following strikes me as most directly addressing the actual problem. I’m hereby establishing the Shtetl-Optimized Committee of Guardians, or SOCG (same acronym as the computational geometry conference 🙂 ). If you’re interested in joining, shoot me an email, or leave a comment on this post with your (real!) email address. I’ll accept members only if I know them in real life, personally or by reputation, or if they have an honorable history on this blog.

For now, the SOCG’s only job is this: whenever I get a comment that gives me a feeling of unease—because, e.g., it seems trollish or nasty or insincere, it asks a too-personal question, or it challenges me to rebut a hostile caricature of myself—I’ll email the comment to the SOCG and ask what to do. I commit to respecting the verdict of those SOCG members who respond, whenever a clear verdict exists. The verdict could be, e.g., “this seems fine,” “if you won’t be able to resist responding then don’t let this appear,” or “email the commenter first to confirm their identity.” And if I simply need reassurance that the commenter’s view of me is false, I’ll seek it from the SOCG before I seek it from the whole world.

Here’s what SOCG members can expect in return: I continue pouring my heart into this subscription-free, ad-free blog, and I credit you for making it possible—publicly if you’re comfortable with your name being listed, privately if not. I buy you a fancy lunch or dinner if we’re ever in the same town.

Eventually, we might move to a model where the SOCG members can log in to WordPress and directly moderate comments themselves. But let’s try it this way first and see if it works.

Choosing a new comment policy

Tuesday, July 12th, 2022

Update (July 13): I was honored to read this post by my friend Boaz Barak.

Update (July 14): By now, comments on this post allegedly from four CS professors — namely, Josh Alman, Aloni Cohen, Rana Hanocka, and Anna Farzindar — as well as from the graduate student “BA,” have been unmasked as the work of an impersonator (or impersonators).

I’ve been the target of a motivated attack-troll (or multiple trolls, but I now believe just one) who knows about the CS community. This might be the single weirdest thing that’s happened to me in 17 years of blogging, surpassing even the legendary Ricoh printer episode of 2007. It obviously underscores the need for a new, stricter comment policy, which is what this whole post was about.


Yesterday and today, both my work and my enjoyment of the James Webb images were interrupted by an anonymous troll, who used the Shtetl-Optimized comment section to heap libelous abuse on me—derailing an anodyne quantum computing discussion to opine at length about how I’m a disgusting creep who surely, probably, maybe has lewd thoughts about his female students. Unwisely or not, I allowed it all to appear, and replied to all of it. I had a few reasons: I wanted to prove that I’m now strong enough to withstand bullying that might once have driven me to suicide. I wanted, frankly, many readers to come to my defense (thanks to those who did!). I at least wanted readers to see firsthand what I now regularly deal with: the emotional price of maintaining this blog. Most of all, I wanted my feminist, social-justice-supporting readers to either explicitly endorse or (hopefully) explicitly repudiate the unambiguous harassment that was now being gleefully committed in their name.

Then, though, the same commenter upped the ante further, by heaping misogynistic abuse on my wife Dana—while still, ludicrously and incongruously, cloaking themselves in the rhetoric of social justice. Yes: apparently the woke, feminist thing to do is now to rate female computer scientists on their looks.

Let me be blunt: I cannot continue to write Shtetl-Optimized while dealing with regular harassment of me and my family. At the same time, I’m also determined not to “surrender to the terrorists.” So, I’m weighing the following options:

  • Close comments except to commenters who provide a real identity—e.g., a full real name, a matching email address, a website.
  • Move to Substack, and then allow only commenters who’ve signed up.
  • Hire someone to pre-screen comments for me, and delete ones that are abusive or harassing (to me or others) before I even see them. (Any volunteers??)
  • Make the comment sections readers-only, eliminating any expectation that I’ll participate.

One thing that’s clear is that the status quo will not continue. I can’t “just delete” harassing or abusive comments, because the trolls have gotten too good at triggering me, and they will continue to weaponize my openness and my ethic of responding to all possible arguments against me.

So, regular readers: what do you prefer?

Because I couldn’t not post

Friday, June 24th, 2022

In 1973, the US Supreme Court enshrined the right to abortion—considered by me and ~95% of everyone I know to be a basic pillar of modernity—in such a way that the right could be overturned only if its opponents could somehow gain permanent minority rule, and thereby disregard the will of three-quarters of Americans. So now, half a century later, that’s precisely what they’ve done. Because Ruth Bader Ginsburg didn’t live three more weeks, we’re now faced with a civilizational crisis, with tens of millions of liberals and moderates in the red states now under the authority of a social contract that they never signed. With this backwards leap, Curtis Yarvin’s notion that “Cthulhu only ever swims leftward” stands as decimated by events as any thesis has ever been. I wonder whether Yarvin is happy to have been so thoroughly refuted.

Most obviously for me, the continued viability of Texas as a place for science, for research, for technology companies, is now in severe doubt. Already this year, our 50-member CS department at UT Austin has had faculty members leave, and faculty candidates turn us down, with abortion being the stated reason, and I expect that to accelerate. Just last night my wife, Dana Moshkovitz, presented a proposal at the STOC business meeting to host STOC’2024 at a beautiful family-friendly resort outside Austin. The proposal failed, in part because of the argument that, if a pregnant STOC attendee faced a life-threatening medical condition, Texas doctors might choose to let her die, or the attendee might be charged with murder for having a miscarriage. In other words: Texas (and indeed, half the US) will apparently soon be like Donetsk or North Korea, dangerous for Blue Americans to visit even for just a few days. To my fellow Texans, I say: if you find that hyperbolic, understand that this is how the blue part of the country now sees you. Understand that only a restoration of the previous social contract can reverse it.

Of course, this destruction of everything some of us have tried to build in science in Texas is happening despite the fact that 47-48% of Texans actually vote Democratic. It’s happening despite the fact that, if Blue Americans wanted to stop it, the obvious way to do so would be to move to Austin and Houston (and the other blue enclaves of red states) in droves, and exert their electoral power. In other words, to do precisely what Dana and I did. But can I urge others to do the same with a straight face?

As far as I can tell, the only hope at this point of averting a cold Civil War is if, against all odds, there’s a Democratic landslide in Congress, sufficient to get the right to abortion enshrined into federal law. Given the ways both the House and the Senate are stacked against Democrats, I don’t expect that anytime soon, but I’ll work for it—and will do so even if many of the people I’m working with despise me for other reasons. I will match reader donations to Democratic PACs and Congressional campaigns (not necessarily the same ones, though feel free to advocate for your favorites), announced in the comment section of this post, up to a limit of $10,000.

OpenAI!

Friday, June 17th, 2022

I have some exciting news (for me, anyway). Starting next week, I’ll be going on leave from UT Austin for one year, to work at OpenAI. They’re the creators of the astonishing GPT-3 and DALL-E2, which have not only endlessly entertained me and my kids, but recalibrated my understanding of what, for better and worse, the world is going to look like for the rest of our lives. Working with an amazing team at OpenAI, including Jan Leike, John Schulman, and Ilya Sutskever, my job will be to think about the theoretical foundations of AI safety and alignment. What, if anything, can computational complexity contribute to a principled understanding of how to get an AI to do what we want and not do what we don’t want?

Yeah, I don’t know the answer either. That’s why I’ve got a whole year to try to figure it out! One thing I know for sure, though, is that I’m interested both in the short-term, where new ideas are now quickly testable, and where the misuse of AI for spambots, surveillance, propaganda, and other nefarious purposes is already a major societal concern, and the long-term, where one might worry about what happens once AIs surpass human abilities across nearly every domain. (And all the points in between: we might be in for a long, wild ride.) When you start reading about AI safety, it’s striking how there are two separate communities—one mostly worried about machine learning perpetuating racial and gender biases, and the other mostly worried about superhuman AI turning the planet into goo—who not only don’t work together, but are at each other’s throats, with each accusing the other of totally missing the point. I persist, however, in the possibly-naïve belief that these are merely two extremes along a single continuum of AI worries. By figuring out how to align AI with human values today—constantly confronting our theoretical ideas with reality—we can develop knowledge that will give us a better shot at aligning it with human values tomorrow.

For family reasons, I’ll be doing this work mostly from home, in Texas, though traveling from time to time to OpenAI’s office in San Francisco. I’ll also spend 30% of my time continuing to run the Quantum Information Center at UT Austin and working with my students and postdocs. At the end of the year, I plan to go back to full-time teaching, writing, and thinking about quantum stuff, which remains my main intellectual love in life, even as AI—the field where I started, as a PhD student, before I switched to quantum computing—has been taking over the world in ways that none of us can ignore.

Maybe fittingly, this new direction in my career had its origins here on Shtetl-Optimized. Several commenters, including Max Ra and Matt Putz, asked me point-blank what it would take to induce me to work on AI alignment. Treating it as an amusing hypothetical, I replied that it wasn’t mostly about money for me, and that:

The central thing would be finding an actual potentially-answerable technical question around AI alignment, even just a small one, that piqued my interest and that I felt like I had an unusual angle on. In general, I have an absolutely terrible track record at working on topics because I abstractly feel like I “should” work on them. My entire scientific career has basically just been letting myself get nerd-sniped by one puzzle after the next.

Anyway, Jan Leike at OpenAI saw this exchange and wrote to ask whether I was serious in my interest. Oh shoot! Was I? After intensive conversations with Jan, others at OpenAI, and others in the broader AI safety world, I finally concluded that I was.

I’ve obviously got my work cut out for me, just to catch up to what’s already been done in the field. I’ve actually been in the Bay Area all week, meeting with numerous AI safety people (and, of course, complexity and quantum people), carrying a stack of technical papers on AI safety everywhere I go. I’ve been struck by how, when I talk to AI safety experts, they’re not only not dismissive about the potential relevance of complexity theory, they’re more gung-ho about it than I am! They want to talk about whether, say, IP=PSPACE, or MIP=NEXP, or the PCP theorem could provide key insights about how we could verify the behavior of a powerful AI. (Short answer: maybe, on some level! But, err, more work would need to be done.)

How did this complexitophilic state of affairs come about? That brings me to another wrinkle in the story. Traditionally, students follow in the footsteps of their professors. But in trying to bring complexity theory into AI safety, I’m actually following in the footsteps of my student: Paul Christiano, one of the greatest undergrads I worked with in my nine years at MIT, the student whose course project turned into the Aaronson-Christiano quantum money paper. After MIT, Paul did a PhD in quantum computing at Berkeley, with my own former adviser Umesh Vazirani, while also working part-time on AI safety. Paul then left quantum computing to work on AI safety full-time—indeed, along with others such as Dario Amodei, he helped start the safety group at OpenAI. Paul has since left to found his own AI safety organization, the Alignment Research Center (ARC), although he remains on good terms with the OpenAI folks. Paul is largely responsible for bringing complexity theory intuitions and analogies into AI safety—for example, through the “AI safety via debate” paper and the Iterated Amplification paper. I’m grateful for Paul’s guidance and encouragement—as well as that of the others now working in this intersection, like Geoffrey Irving and Elizabeth Barnes—as I start this new chapter.

So, what projects will I actually work on at OpenAI? Yeah, I’ve been spending the past week trying to figure that out. I still don’t know, but a few possibilities have emerged. First, I might work out a general theory of sample complexity and so forth for learning in dangerous environments—i.e., learning where making the wrong query might kill you. Second, I might work on explainability and interpretability for machine learning: given a deep network that produced a particular output, what do we even mean by an “explanation” for “why” it produced that output? What can we say about the computational complexity of finding that explanation? Third, I might work on the ability of weaker agents to verify the behavior of stronger ones. Of course, if P≠NP, then the gap between the difficulty of solving a problem and the difficulty of recognizing a solution can sometimes be enormous. And indeed, even in empirical machine learning, there’s typically a gap between the difficulty of generating objects (say, cat pictures) and the difficulty of discriminating between them and other objects, the latter being easier. But this gap typically isn’t exponential, as is conjectured for NP-complete problems: it’s much smaller than that. And counterintuitively, we can then turn around and use the generators to improve the discriminators. How can we understand this abstractly? Are there model scenarios in complexity theory where we can prove that something similar happens? How far can we amplify the generator/discriminator gap—for example, by using interactive protocols, or debates between competing AIs?
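To make the generator/discriminator point concrete, here’s a minimal toy sketch of the standard GAN-style training loop, the simplest empirical setting where generators and discriminators sharpen each other: the discriminator improves precisely because it’s trained against the generator’s ever-better fakes, and vice versa. This is purely illustrative—the toy 1-D data, the tiny networks, and every hyperparameter below are made up for the example and have nothing to do with any particular OpenAI project.

    # Toy GAN loop (illustrative only): D is trained against G's current fakes,
    # and G is trained to fool the updated D, so each side's progress is exactly
    # what improves the other. All sizes and learning rates are arbitrary.
    import torch
    import torch.nn as nn

    def real_data(n):
        # "real" objects: samples from a Gaussian with mean 2 and std 0.5
        return torch.randn(n, 1) * 0.5 + 2.0

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator (outputs a logit)

    opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        # Train D to tell real samples from G's current fakes.
        real = real_data(64)
        fake = G(torch.randn(64, 8)).detach()
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
        opt_D.zero_grad()
        d_loss.backward()
        opt_D.step()

        # Train G to fool the (now slightly stronger) D.
        g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
        opt_G.zero_grad()
        g_loss.backward()
        opt_G.step()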

OpenAI, of course, has the word “open” right in its name, and a founding mission “to ensure that artificial general intelligence benefits all of humanity.” But it’s also a for-profit enterprise, with investors and paying customers and serious competitors. So throughout the year, don’t expect me to share any proprietary information—that’s not my interest anyway, even if I hadn’t signed an NDA. But do expect me to blog my general thoughts about AI safety as they develop, and to solicit feedback from readers.

In the past, I’ve often been skeptical about the prospects for superintelligent AI becoming self-aware and destroying the world anytime soon (see, for example, my 2008 post The Singularity Is Far). While I was aware since 2005 or so of the AI-risk community; and of its leader and prophet, Eliezer Yudkowsky; and of Eliezer’s exhortations for people to drop everything else they’re doing and work on AI risk, as the biggest issue facing humanity, I … kept the whole thing at arm’s length. Even supposing I agreed that this was a huge thing to worry about, I asked, what on earth do you want me to do about it today? We know so little about a future superintelligent AI and how it would behave that any actions we took today would likely be useless or counterproductive.

Over the past 15 years, though, my and Eliezer’s views underwent a dramatic and ironic reversal. If you read Eliezer’s “litany of doom” from two weeks ago, you’ll see that he’s now resigned and fatalistic: because his early warnings weren’t heeded, he argues, humanity is almost certainly doomed and an unaligned AI will soon destroy the world. He says that there are basically no promising directions in AI safety research: for any alignment strategy anyone points out, Eliezer can trivially refute it by explaining how (e.g.) the AI would be wise to the plan, and would pretend to go along with whatever we wanted from it while secretly plotting against us.

The weird part is, just as Eliezer became more and more pessimistic about the prospects for getting anywhere on AI alignment, I’ve become more and more optimistic. Part of my optimism is because people like Paul Christiano have laid foundations for a meaty mathematical theory: much like the Web (or quantum computing theory) in 1992, it’s still in a ridiculously primitive stage, but even my limited imagination now suffices to see how much more could be built there. An even greater part of my optimism is because we now live in a world with GPT-3, DALL-E2, and other systems that, while they clearly aren’t AGIs, are powerful enough that worrying about AGIs has come to seem more like prudence than like science fiction. And we can finally test our intuitions against the realities of these systems, which (outside of mathematics) is pretty much the only way human beings have ever succeeded at anything.

I didn’t predict that machine learning models this impressive would exist by 2022. Most of you probably didn’t predict it. For godsakes, Eliezer Yudkowsky didn’t predict it. But it’s happened. And to my mind, one of the defining virtues of science is that, when empirical reality gives you a clear shock, you update and adapt, rather than expending your intelligence to come up with clever reasons why it doesn’t matter or doesn’t count.

Anyway, so that’s the plan! If I can figure out a way to save the galaxy, I will, but I’ve set my goals slightly lower, at learning some new things and doing some interesting research and writing some papers about it and enjoying a break from teaching. Wish me a non-negligible success probability!


Update (June 18): To respond to a couple criticisms that I’ve seen elsewhere on social media…

Can the rationalists sneer at me for waiting to get involved with this subject until it had become sufficiently “respectable,” “mainstream,” and “high-status”? I suppose they can, if that’s their inclination. I suppose I should be grateful that so many of them chose to respond instead with messages of congratulations and encouragement. Yes, I plead guilty to keeping this subject at arm’s length until I could point to GPT-3 and DALL-E2 and the other dramatic advances of the past few years to justify the reality of the topic to anyone who might criticize me. It feels internally like I had principled reasons for this: I can think of almost no examples of research programs that succeeded over decades even in the teeth of opposition from the scientific mainstream. If so, then arguably the best time to get involved with a “fringe” scientific topic is when and only when you can foresee a path to it becoming the scientific mainstream. At any rate, that’s what I did with quantum computing, as a teenager in the mid-1990s. It’s what many scientists of the 1930s did with the prospect of nuclear chain reactions. And if I’d optimized for getting the right answer earlier, I might’ve had to weaken the filters and let in a bunch of dubious worries that would’ve paralyzed me. But I admit the possibility of self-serving bias here.

Should you worry that OpenAI is just hiring me to be able to say “look, we have Scott Aaronson working on the problem,” rather than actually caring about what its safety researchers come up with? I mean, I can’t prove that you shouldn’t worry about that. In the end, whatever work I do on the topic will have to speak for itself. For whatever it’s worth, though, I was impressed by the OpenAI folks’ detailed, open-ended engagement with these questions when I met them—sort of like how it might look if they actually believed what they said about wanting to get this right for the world. I wouldn’t have gotten involved otherwise.

An understandable failing?

Sunday, May 29th, 2022

I hereby precommit that this will be my last post, for a long time, around the twin themes of (1) the horribleness in the United States and the world, and (2) my desperate attempts to reason with various online commenters who hold me personally complicit in all this horribleness. I should really focus my creativity more on actually fixing the world’s horribleness, than on seeking out every random social-media mudslinger who blames me for it, shouldn’t I? Still, though, isn’t undue obsession with the latter a pretty ordinary human failing, a pretty understandable one?

So anyway, if you’re one of the thousands of readers who come here simply to learn more about quantum computing and computational complexity, rather than to try to provoke me into mounting a public defense of my own existence (which defense will then, ironically but inevitably, stimulate even more attacks that need to be defended against) … well, either scroll down to the very end of this post, or wait for the next post.


Thanks so much to all my readers who donated to Fund Texas Choice. As promised, I’ve personally given them a total of $4,106.28, to match the donations that came in by the deadline. I’d encourage people to continue donating anyway, while for my part I’ll probably run some more charity matching campaigns soon. These things are addictive, like pulling the lever of a slot machine, but where the rewards go to making the world an infinitesimal amount more consistent with your values.


Of course, now there’s a brand-new atrocity to shame my adopted state of Texas before the world. While the Texas government will go to extraordinary lengths to protect unborn children, the world has now witnessed 19 of its born children consigned to gruesome deaths, as the “good guys with guns” waited outside and prevented parents from entering the classrooms where their children were being shot. I have nothing original to add to the global outpourings of rage and grief. Forget about the statistical frequency of these events: I know perfectly well that the risk from car crashes and home accidents is orders-of-magnitude greater. Think about it this way: the United States is now known to the world as “the country that can’t or won’t do anything to stop its children from semi-regularly being gunned down in classrooms,” not even measures that virtually every other comparable country on earth has successfully taken. It’s become the symbol of national decline, dysfunction, and failure. If so, then the stakes here could fairly be called existential ones—not because of its direct effects on child life expectancy or GDP or any other index of collective well-being that you can define and measure, but rather, because a country that lacks the will to solve this will be judged by the world, and probably accurately, as lacking the will to solve anything else.


In return for the untold thousands of hours I’ve poured into this blog, which has never once had advertising or asked for subscriptions, my reward has been years of vilification by sneerers and trolls. Some of the haters even compare me to Elliot Rodger and other aggrieved mass shooters. And I mean: yes, it’s true that I was bullied and miserable for years. It’s true that Elliot Rodger, Salvador Ramos (the Uvalde shooter), and most other mass shooters were also bullied and miserable for years. But, Scott-haters, if we’re being intellectually honest about this, we might say that the similarities between the mass shooter story and the Scott Aaronson story end at a certain point not very long after that. We might say: it’s not just that Aaronson didn’t respond by hurting anybody—rather, it’s that his response loudly affirmed the values of the Enlightenment, meaning like, the whole package, from individual autonomy to science and reason to the rejection of sexism and racism to everything in between. Affirmed it in a manner that’s not secretly about popularity (demonstrably so, because it doesn’t get popularity), affirmed it via self-questioning methods intellectually honest enough that they’d probably still have converged on the right answer even in situations where it’s now obvious that almost everyone around you would’ve been converging on the wrong answer, like (say) Nazi Germany or the antebellum South.

I’ve been to the valley of darkness. While there, I decided that the only “revenge” against the bullies that was possible or desirable was to do something with my life, to achieve something in science that at least some bullies might envy, while also starting a loving family and giving more than most to help strangers on the Internet and whatever good cause comes to my attention and so on. And after 25 years of effort, some people might say I’ve sort of achieved the “revenge” as I’d then defined it. And they might further say: if you could get every school shooter to redefine “revenge” as “becoming another Scott Aaronson,” that would be, you know, like, a step upwards. An improvement.


And let this be the final word on the matter that I ever utter in all my days, to the thousands of SneerClubbers and Twitter randos who pursue this particular line of attack against Scott Aaronson (yes, we do mean the thousands: enough that it feels to its recipient like the entire earth, yet is actually less than 0.01% of the earth).

We see what Scott did with his life, when subjected for a decade to forms of psychological pressure that are infamous for causing young males to lash out violently. What would you have done with your life?


A couple weeks ago, when the trolling attacks were arriving minute by minute, I toyed with the idea of permanently shutting down this blog. What’s the point? I asked myself. Back in 2005, the open Internet was fun; now it’s a charred battle zone. Why not restrict conversation to my academic colleagues and friends? Haven’t I done enough for a public that gives me so much grief? I was dissuaded by many messages of support from loyal readers. Thank you so much.


If anyone needs something to cheer them up, you should really watch Prehistoric Planet, narrated by an excellent, 96-year-old David Attenborough. Maybe 35 years from now, people will believe dinosaurs looked or acted somewhat differently from these portrayals, just like they believe somewhat differently now from when I was a kid. On the other hand, if you literally took a time machine to the Late Cretaceous and started filming, you couldn’t get a result that seemed more realistic, let’s say to a documentary-watching child, than these CGI dinosaurs on their CGI planet seem. So, in the sense of passing that child’s Turing Test, you might argue, the problem of bringing back the dinosaurs has now been solved.

If you … err … really want to be cheered up, you can follow up with Dinosaur Apocalypse, also narrated by Attenborough, where you can (again, as if you were there) watch the dinosaurs being drowned and burned alive in their billions when the asteroid hits. We’d still be scurrying under rocks, were it not for that lucky event that only a monster could’ve called lucky at the time.


Several people asked me to comment on the recent savage investor review of the quantum computing startup IonQ. The review amusingly mixed together every imaginable line of criticism, with every imaginable degree of reasonableness from 0% to 100%. Like, quantum computing is impossible even in theory, and (in the very next sentence) other companies are much closer to realizing quantum computing than IonQ is. See also IonQ’s response to the criticism, as well as this post by the indefatigable Gil Kalai.

Is it, err, OK if I sit this one out for now? There’s probably, like, actually an already-existing machine learning model where, if you trained it on all of my previous quantum computing posts, it would know exactly what to say about this.

An update on the campaign to defend serious math education in California

Tuesday, April 26th, 2022

Update (April 27): Boaz Barak—Harvard CS professor, longtime friend-of-the-blog, and coauthor of my previous guest post on this topic—has just written an awesome FAQ, providing his personal answers to the most common questions about what I called our “campaign to defend serious math education.” It directly addresses several issues that have already come up in the comments. Check it out!


As you might remember, last December I hosted a guest post about the “California Mathematics Framework” (CMF), which was set to cause radical changes to precollege math in California—e.g., eliminating 8th-grade algebra and making it nearly impossible to take AP Calculus. I linked to an open letter setting out my and my colleagues’ concerns about the CMF. That letter went on to receive more than 1700 signatures from STEM experts in industry and academia from around the US, including recipients of the Nobel Prize, Fields Medal, and Turing Award, as well as a lot of support from college-level instructors in California. 

Following widespread pushback, a new version of the CMF appeared in mid-March. I and others are gratified that the new version significantly softens the opposition to acceleration in high school math and to calculus as a central part of mathematics.  Nonetheless, we’re still concerned that the new version promotes a narrative about data science that’s a recipe for cutting kids off from any chance at earning a 4-year college degree in STEM fields (including, ironically, in data science itself).

To that end, some of my Californian colleagues have issued a new statement today on behalf of academic staff at 4-year colleges in California, aimed at clearing away the fog on how mathematics is related to data science. I strongly encourage my readers on the academic staff at 4-year colleges in California to sign this commonsense statement, which has already been signed by over 250 people (including, notably, at least 50 from Stanford, home of two CMF authors).

As a public service announcement, I’d also like to bring to wider awareness Section 18533 of the California Education Code, which provides a process for submitting written statements to the California State Board of Education (SBE) about errors, objections, and concerns in curricular frameworks such as the CMF.

The SBE is scheduled to vote on the CMF in mid-July, and their remaining meeting before then is on May 18-19 according to this site, so it is really at the May meeting that concerns need to be aired.  Section 18533 requires submissions to be written (yes, snail mail) and postmarked at least 10 days before the SBE meeting. So to make your voice heard by the SBE, please send your written concern by certified mail (for tracking, but not requiring signature for delivery), no later than Friday May 6, to State Board of Education, c/o Executive Secretary of the State Board of Education, 1430 N Street, Room 5111, Sacramento, CA 95814, complemented by an email submission to sbe@cde.ca.gov and mathframework@cde.ca.gov.

Back

Saturday, April 23rd, 2022

Thanks to everyone who asked whether I’m OK! Yeah, I’ve been living, loving, learning, teaching, worrying, procrastinating, just not blogging.


Last week, Takashi Yamakawa and Mark Zhandry posted a preprint to the arXiv, “Verifiable Quantum Advantage without Structure,” that represents some of the most exciting progress in quantum complexity theory in years. I wish I’d thought of it. tl;dr they show that relative to a random oracle (!), there’s an NP search problem that quantum computers can solve exponentially faster than classical ones. And yet this is 100% consistent with the Aaronson-Ambainis Conjecture!


A student brought my attention to Quantle, a variant of Wordle where you need to guess a true equation involving 1-qubit quantum states and unitary transformations. It’s really well-done! Possibly the best quantum game I’ve seen.


Last month, Microsoft announced on the web that it had achieved an experimental breakthrough in topological quantum computing: not quite the creation of a topological qubit, but some of the underlying physics required for that. This followed their needing to retract their previous claim of such a breakthrough, due to the criticisms of Sergey Frolov and others. One imagines that they would’ve taken far greater care this time around. Unfortunately, a research paper doesn’t seem to be available yet. Anyone with further details is welcome to chime in.


Woohoo! Maximum flow, maximum bipartite matching, matrix scaling, and isotonic regression on posets (among many others)—all algorithmic problems that I was familiar with way back in the 1990s—are now solvable in nearly-linear time, thanks to a breakthrough by Chen et al.! Many undergraduate algorithms courses will need to be updated.


For those interested, Steve Hsu recorded a podcast with me where I talk about quantum complexity theory.

Scott Aaronson Speculation Grant WINNERS!

Friday, February 4th, 2022

Two weeks ago, I announced on this blog that, thanks to the remarkable generosity of Jaan Tallinn, and the Speculation Grants program of the Survival and Flourishing Fund that Jaan founded, I had $200,000 to give away to charitable organizations of my choice. So, inspired by what Scott Alexander had done, I invited the readers of Shtetl-Optimized to pitch their charities, mentioning only some general areas of interest to me (e.g., advanced math education at the precollege level, climate change mitigation, pandemic preparedness, endangered species conservation, and any good causes that would enrage the people who attack me on Twitter).

I’m grateful to have gotten more than twenty well-thought-out pitches; you can read a subset of them in the comment thread. Now, having studied them all, I’ve decided—as I hadn’t at the start—to use my entire allotment to make as strong a statement as I can about a single cause: namely, subject-matter passion and excellence in precollege STEM education.

I’ll be directing funds to some shockingly cash-starved math camps, math circles, coding outreach programs, magnet schools, and enrichment programs, in Maine and Oregon and England and Ghana and Ethiopia and Jamaica. The programs I’ve chosen target a variety of ability levels, not merely the “mathematical elite.” Several explicitly focus on minority and other underserved populations. But they share a goal of raising every student they work with as high as possible, rather than pushing the students down to fit some standardized curriculum.

Language like that ought to be meaningless boilerplate, but alas, it no longer is. We live in a time when the state of California, in a misguided pursuit of “modernization” and “equity,” is poised to eliminate 8th-grade algebra, make it nearly impossible for high-school seniors to take AP Calculus, and shunt as many students as possible from serious mathematical engagement into a “data science pathway” that in practice might teach little more than how to fill in spreadsheets. (This watering-down effort now itself looks liable to be watered down—but only because of a furious pushback from parents and STEM professionals, pushback in which I’m proud that this blog played a small role.) We live in a time when elite universities are racing to eliminate the SAT—thus, for all their high-minded rhetoric, effectively slamming the door on thousands of nerdy kids from poor or immigrant backgrounds who know how to think, but not how to shine in a college admissions popularity pageant. We live in a time when America’s legendary STEM magnet high schools, from Thomas Jefferson in Virginia to Bronx Science to Lowell in San Francisco, rather than being celebrated as the national treasures that they are, or better yet replicated, are bitterly attacked as “elitist” (even while competitive sports and music programs are not similarly attacked)—and are now being forcibly “demagnetized” by bureaucrats, made all but indistinguishable from other high schools, over the desperate pleas of their students, parents, and alumni.

And—alright, fine, on a global scale, arresting climate change is surely a higher-priority issue than protecting the intellectual horizons of a few teenage STEM nerds. The survival of liberal democracy is a higher-priority issue. Pandemic preparedness, poverty, malnutrition are higher-priority issues. Some of my friends strongly believe that the danger of AI becoming super-powerful and taking over the world is the highest-priority issue … and truthfully, with this week’s announcements of AlphaCode and OpenAI’s theorem prover, which achieve human-competitive performance in elite programming and math competitions respectively, I can’t confidently declare that they’re wrong.

On the other hand, when you think about the astronomical returns on every penny that was invested in setting a teenage Ramanujan or Einstein or Turing or Sofya Kovalevskaya or Norman Borlaug or Mario Molina onto their trajectories in life … and the comically tiny budgets of the world-leading programs that aim to nurture the next Ramanujans, to the point where $10,000 often seems like a windfall to those programs … well, you might come to the conclusion that the “protecting nerds” thing actually isn’t that far down the global priority list! Like, it probably cracks the top ten.

And there’s more to it than that. There’s a reason beyond parochialism, it dawned on me, why individual charities tend to specialize in wildlife conservation in Ecuador or deworming in Swaziland or some other little domain, rather than simply casting around for the highest-priority cause on earth. Expertise matters—since one wants to make, not only good judgments about which stuff to support, but good judgments that most others can’t or haven’t made. In my case, it would seem sensible to leverage the fact that I’m Scott Aaronson. I’ve spent much of my career in math/CS education and outreach—mostly, of course, at the university level, but by god did I personally experience the good and the bad in nearly every form of precollege STEM education! I’m pretty confident in my ability to distinguish the two, and for whatever I don’t know, I have close friends in the area who I trust.

There’s also a practical issue: in order for me to fund something, the recipient has to fill out a somewhat time-consuming application to SFF. If I’d added, say, another $20,000 drop into the bucket of global health or sustainability or whatever, there’s no guarantee that the intended recipients of my largesse would even notice, or care enough to go through the application process if they did. With STEM education, by contrast, holy crap! I’ve got an inbox full of Shtetl-Optimized readers explaining how their little math program is an intellectual oasis that’s changed the lives of hundreds of middle-schoolers in their region, and how $20,000 would mean the difference between their program continuing or not. That’s someone who I trust to fill out the form.

Without further ado, then, here are the first-ever Scott Aaronson Speculation Grants:

  • $57,000 for Canada/USA Mathcamp, which changed my life when I attended it as a 15-year-old in 1996, and which I returned to as a lecturer in 2008. The funds will be used for COVID testing to allow Mathcamp to resume in-person this summer, and perhaps scholarships and off-season events as well.
  • $30,000 for AddisCoder, which has had spectacular success teaching computer science to high-school students in Ethiopia, placing some of its alumni at elite universities in the US, to help them expand to a new “JamCoders” program in Jamaica. These programs were founded by UC Berkeley’s amazing Jelani Nelson, also with involvement from friend and Shtetl-Optimized semi-regular Boaz Barak.
  • $30,000 for the Maine School of Science and Mathematics, which seems to offer a curriculum comparable to those of Thomas Jefferson, Bronx Science, or the nation’s other elite magnet high schools, but (1) on a shoestring budget and (2) in rural Maine. I hadn’t even heard of MSSM before Alex Altair, an alum and Shtetl-Optimized reader, told me about it, but now I couldn’t be prouder to support it.
  • $30,000 for the Eugene Math Circle, which provides a math enrichment lifeline to kids in Oregon, and whose funding was just cut. This donation will keep the program alive for another year.
  • $13,000 for the Summer Science Program, which this summer will offer research experiences to high-school juniors in astrophysics, biochemistry, and genomics.
  • $10,000 for the MISE Foundation, which provides math enrichment for the top middle- and high-school students in Ghana.
  • $10,000 for Number Champions, which provides one-on-one coaching to kids in the UK who struggle with math.
  • $10,000 for Bridge to Enter Advanced Mathematics (BEAM), which runs math summer programs in New York, Los Angeles, and elsewhere for underserved populations.
  • $10,000 for Powderhouse, an innovative lab school being founded in Somerville, MA.

While working on this, it crossed my mind that, on my deathbed, I might be at least as happy about having directed funds to efforts like these as about any of my research or teaching.

To the applicants who weren’t chosen: I’m sorry, as many of you had wonderful projects too! As I said in the earlier post, you remain warmly invited to apply to SFF, and to make your pitch to the other Speculators and/or the main SFF committee.

Needless to say, anyone who feels inspired should add to my (or rather, SFF’s) modest contributions to these STEM programs. My sense is that, while $200k can go eye-poppingly far in this area, it still hasn’t come close to exhausting even the lowest-hanging fruit.

Also needless to say, the opinions in this post are my own and are not necessarily shared by SFF or by the organizations I’m supporting. The latter are welcome to disagree with me as long as they keep up their great work!

Huge thanks again to Jaan, to SFF, to my SFF contact Andrew Critch, to everyone (whether chosen or not) who participated in this contest, and to everyone who’s putting in work to broaden kids’ intellectual horizons or otherwise make the world a little less horrible.

Win a Scott Aaronson Speculation Grant!

Thursday, January 20th, 2022

Exciting news, everyone! Jaan Tallinn, who many of you might recognize as a co-creator of Skype, tech enthusiast, and philanthropist, graciously invited me, along with a bunch of other nerds, to join the new Speculation Grants program of the Survival and Flourishing Fund (SFF). In plain language, that means that Jaan is giving me $200,000 to distribute to charitable organizations in any way I see fit—though ideally, my choices will have something to do with the survival and flourishing of our planet and civilization.

(If all goes well, this blog post will actually lead to a lot more than just $200,000 in donations, because it will inspire applications to SFF that can then be funded by other “Speculators” or by SFF’s usual process.)

Thinking about how to handle the responsibility of this amazing and unexpected gift, I decided that I couldn’t possibly improve on what Scott Alexander did with his personal grants program on Astral Codex Ten. Thus: I hereby invite the readers of Shtetl-Optimized to pitch registered charities (which might or might not be their own)—especially, charities that are relatively small, unknown, and unappreciated, yet that would resonate strongly with someone who thinks the way I do. Feel free to renominate (i.e., bring back to my attention) charities that were mentioned when I asked a similar question after winning $250,000 from the ACM Prize in Computing.

If you’re interested, there’s a two-step process this time:

Step 1 is to make your pitch to me, either by a comment on this post or by email to me, depending on whether you’d prefer the pitch to be public or private. Let’s set a deadline for this step of Thursday, January 27, 2022 (i.e., one week from now). Your pitch can be extremely short, like 1 paragraph, although I might ask you followup questions. After January 27, I’ll then take one of two actions in response: I’ll either

(a) commit a specified portion of my $200,000 to your charity, if the charity formally applies to SFF, and if the charity isn’t excluded for some unexpected reason (5 sexual harassment lawsuits against its founders or whatever), and if one of my fellow “Speculators” doesn’t fund your charity before I do … or else I’ll

(b) not commit, in which case your charity can still apply for funding from SFF! One of the other Speculators might fund it, or it might be funded by the “ordinary” SFF process.

Step 2, which cannot be skipped, is then to have your charity submit a formal application to SFF. The application form isn’t too bad. But if the charity isn’t your own, it would help enormously if you at least knew someone at the charity, so you could tell them to apply to SFF. Again, Step 2 can be taken regardless of the outcome of Step 1.

The one big rule is that anything you suggest has to be a registered, tax-exempt charity in either the US or the UK. I won’t be distributing funds myself, but only advising SFF how to do so, and this is SFF’s rule, not mine. So alas, no political advocacy groups and no individuals. Donating to groups outside the US and UK is apparently possible but difficult.

While I’m not putting any restrictions on the scope, let me list a few examples of areas of interest to me.

  • Advanced math and science education at the precollege level: gifted programs, summer camps, online resources, or anything, really, that aims to ensure that the next Ramanujan or von Neumann isn’t lost to the world.
  • Conservation of endangered species.
  • Undervalued approaches to dealing with the climate catastrophe (including new approaches to nuclear energy, geoengineering, and carbon capture and storage … or even, e.g., studies of the effects of rising CO2 on cognition and how to mitigate them).
  • Undervalued approaches to preventing or mitigating future pandemics—basically, anything dirt-cheap that we wish had been done before covid.
  • Almost anything that Scott Alexander might have funded if he’d had more money.
  • Anything that would enrage the SneerClubbers or those who attack me on Twitter, by doing stuff that even they would have to acknowledge makes the world better, but that does so via people, organizations, and means that they despise.

Two examples of areas that I don’t plan to focus on are:

  • AI-risk and other “strongly rationalist-flavored” organizations (these are already well-covered by others at SFF, so that I don’t expect to have an advantage), and
  • quantum computing research (this is already funded by a zillion government agencies, companies, and venture capitalists).

Anyway, thanks so much to Jaan and to SFF for giving me this incredible opportunity, and I look forward to seeing what y’all come up with!

Note: Any other philanthropists who read this blog, and who’d like to add to the amount, are more than welcome to do so!

My values, howled into the wind

Sunday, December 19th, 2021

I’m about to leave for a family vacation—our first such since before the pandemic, one planned and paid for literally the day before the news of Omicron broke. On the negative side, staring at the case-count graphs that are just now going vertical, I estimate a ~25% chance that at least one of us will get Omicron on this trip. On the positive side, I estimate a ~60% chance that in the next 6 months, at least one of us would’ve gotten Omicron or some other variant even without this trip—so maybe it’s just as well if we get it now, when we’re vaxxed to the maxx and ready and school and university are out.

If, however, I do end this trip dead in an ICU, I wouldn’t want to do so without having clearly set out my values for posterity. So with that in mind: in the comments of my previous post, someone asked me why I identify as a liberal or a progressive, if I passionately support educational practices like tracking, ability grouping, acceleration, and (especially) encouraging kids to learn advanced math whenever they’re ready for it. (Indeed, that might be my single stablest political view, having been held, for recognizably similar reasons, since I was about 5.)

Incidentally, that previous post was guest-written by my colleagues Edith Cohen and Boaz Barak, and linked to an open letter that now has almost 1500 signatories. Our goal was, and is, to fight the imminent dumbing-down of precollege math education in the United States, spearheaded by the so-called “California Mathematics Framework.” In our joint efforts, we’ve been careful with every word—making sure to maintain the assent of our entire list of signatories, to attract broad support, to stay narrowly focused on the issue at hand, and to bend over backwards to concede as much as we could. Perhaps because of those cautions, we—amazingly—got some actual traction, reaching people in government (such as Rep. Ro Khanna, D – Silicon Valley) and technology leaders, and forcing the “no one’s allowed to take Algebra in 8th grade” faction to respond to us.

This was disorienting to me. On this blog, I’m used just to howling into the wind, having some agree, some disagree, some take to Twitter to denounce me, but in any case, having no effect of any kind on the real world.

So let me return to howling into the wind. And return to the question of what I “am” in ideology-space, which doesn’t have an obvious answer.

It’s like, what do you call someone who’s absolutely terrified about global warming, and who thinks the best response would’ve been (and actually, still is) a historic surge in nuclear energy, possibly with geoengineering to tide us over?

… who wants to end world hunger … and do it using GMO crops?

… who wants to smash systems of entrenched privilege in college admissions … and believes that the SAT and other standardized tests are the best tools ever invented for that purpose?

… who feels a personal distaste for free markets, for the triviality of what they so often elevate and the depth of what they let languish, but tolerates them because they’ve done more than anything else to lift up the world’s poor?

… who’s happiest when telling the truth for the cause of social justice … but who, if told to lie for the cause of social justice, will probably choose silence or even, if pushed hard enough, truth?

… who wants to legalize marijuana and psychedelics, and also legalize all the promising treatments currently languishing in FDA approval hell?

… who feels little attraction to the truth-claims of the world’s ancient religions, except insofar as they sometimes serve as prophylactics against newer and now even more virulent religions?

… who thinks the covid response of the CDC, FDA, and other authorities was a historic disgrace—not because it infringed on the personal liberties of antivaxxers or anything like that, but on the contrary, because it was weak, timid, bureaucratic, and slow, where it should’ve been like that of a general at war?

… who thinks the Nazi Holocaust was even worse than the mainstream holds it to be, because in addition to the staggering, one-lifetime-isn’t-enough-to-internalize-it human tragedy, the Holocaust also sent up into smoke whatever cultural process had just produced Einstein, von Neumann, Bohr, Szilard, Born, Meitner, Wigner, Haber, Pauli, Cantor, Hausdorff, Ulam, Tarski, Erdös, and Noether, and with it, one of the wellsprings of our technological civilization?

… who supports free speech, to the point of proudly tolerating views that really, actually disgust them at their workplace, university, or online forum?

… who believes in patriotism, the police, the rule of law, to the extent that they don’t understand why all the enablers of the January 6 insurrection, up to and including Trump, aren’t currently facing trial for treason against the United States?

… who’s (of course) disgusted to the core by Trump and everything he represents, but who’s also disgusted by the elite virtue-signalling hypocrisy that made the rise of a Trump-like backlash figure predictable?

… who not only supports abortion rights, but also looks forward to a near future when parents, if they choose, are free to use embryo selection to make their children happier, smarter, healthier, and free of life-crippling diseases (unless the “bioethicists” destroy that future, as a previous generation of Deep Thinkers destroyed our nuclear future)?

… who, when reading about the 1960s Sexual Revolution, instinctively sides with free-loving hippies and against the scolds … even if today’s scolds are themselves former hippies, or intellectual descendants thereof, who now clothe their denunciations of other people’s gross, creepy sexual desires in the garb of feminism and social justice?

What, finally, do you call someone whose image of an ideal world might include a young Black woman wearing a hijab, an old Orthodox man with black hat and sidecurls, a broad-shouldered white guy from the backwoods of Alabama, and a trans woman with purple hair, face tattoos and a nose ring … all of them standing in front of a blackboard and arguing about what would happen if Alice and Bob jumped into opposite ends of a wormhole?

Do you call such a person “liberal,” “progressive,” “center-left,” “centrist,” “Pinkerite,” “technocratic,” “neoliberal,” “libertarian-ish,” “classical liberal”? Why not simply call them “correct”? 🙂