Archive for the ‘Announcements’ Category

“If you’re not a woke communist, you have nothing to fear,” they claimed

Saturday, February 8th, 2025

Part of me feels bad not to have written for weeks about quantum error-correction or BQP or QMA or even the new Austin-based startup that launched a “quantum computing dating app” (which, before anyone asks, is 100% as gimmicky and pointless as it sounds).

But the truth is that, even if you cared narrowly about quantum computing, there would be no bigger story right now than the fate of American science as a whole, which for the past couple weeks has had a knife to its throat.

Last week, after I blogged about the freeze in all American federal science funding (which has since been lifted by a judge’s order), a Trump-supporting commenter named Kyle had this to say:

No, these funding cuts are not permanent. He is only cutting funds until his staff can identify which money is going to the communists and the wokes. If you aren’t a woke or a communist, you have nothing to fear.

Read that one more time: “If you aren’t a woke or a communist, you have nothing to fear.”

Can you predict what happened barely a week later? Science magazine now reports that the Trump/Musk/DOGE administration is planning to cut the National Science Foundation’s annual budget from $9 billion to only $3 billion (Biden, by contrast, had proposed an increase to $10 billion). Other brilliant ideas under discussion, according to the article, are to use AI to evaluate the grant proposals (!), and to shift the little NSF funding that remains from universities to private companies.

To be clear: in the United States, NSF is the only government agency whose central mission is curiosity-driven basic research—not that other agencies like DOE or NIH or NOAA, which also fund basic research, are safe from the chopping block either.

Maybe Congress, where support for basic science has long been bipartisan, will at some point grow some balls and push back on this. If not, though: does anyone seriously believe that you can cut the NSF’s budget by two-thirds while targeting only “woke communism”? That this won’t decimate the global preeminence of American universities in math, physics, computer science, astronomy, genetics, neuroscience, and more—preeminence that took a century to build?

Or does anyone think that I, for example, am a “woke communist”? I, the old-fashioned Enlightenment liberal who repeatedly risked his reputation to criticize “woke communism,” who the “woke communists” denounced when they noticed him at all, and who narrowly survived a major woke cancellation attempt a decade ago? Alas, I doubt any of that will save me: I presumably won’t be able to get NSF grants either under this new regime. Nor will my hundreds of brilliant academic colleagues, who’ve done what they can to make sure the center of quantum computing research remains in America rather than China or anywhere else.

I of course have no hope that the “Kyles” of the world will ever apologize to me for their prediction, their promise, being so dramatically wrong. But here’s my plea to Elon Musk, J. D. Vance, Joe Lonsdale, Curtis Yarvin, the DOGE boys, and all the readers of this blog who are connected to their circle: please prove me wrong, and prove Kyle right.

Please preserve and increase the NSF’s budget, after you’ve cleansed it of “woke communism” as you see fit. For all I care, add a line item to the budget for studying how to build rockets that are even bigger, louder, and more phallic.

But if you won’t save the NSF and the other basic research agencies—well hey, you’re the ones who now control the world’s nuclear-armed superpower, not me. But don’t you dare bullshit me about how you did all this so that merit-based science could once again flourish, like in the days of Newton and Gauss, finally free from meddling bureaucrats and woke diversity hires. You’d then just be another in history’s endless litany of conquering bullies, destroying what they can’t understand, no more interesting than all the previous bullies.

The American science funding catastrophe

Thursday, January 30th, 2025

It’s been almost impossible to get reliable information this week, but here’s what my sources are telling me:

There is still a complete freeze on money being disbursed from the US National Science Foundation. Well, there’s total chaos in the federal government much more broadly, a lot of it more immediately consequential than the science freeze, but I’ll stick for now to my little corner of the universe.

The funding freeze has continued today, despite the fact that Trump supposedly rescinded it yesterday after a mass backlash. Basically, program directors remain in a state of confusion, paralysis, and fear. Where laws passed by Congress order them to do one thing, but the new Executive Orders seem to order the opposite, they’re simply doing nothing, waiting for clarification, and hoping to preserve their jobs.

Hopefully the funding will restart in a matter of days, after NSF and other agencies go through and cancel any expense that can be construed as DEI-related. Hopefully this will be like the short-lived Muslim travel ban of 2017: a “shock-and-awe” authoritarian diktat that thrills the base but quickly melts on contact with the reality of how our civilization works.

The alternative is painful to contemplate. If the current freeze drags on for months, tens of thousands of grad students and postdocs will no longer get stipends, and will be forced to quit. Basic science in the US will essentially grind to a halt—and even if it eventually restarts, an entire cohort of young physicists, mathematicians, and biologists will have been lost, while China and other countries race ahead in those fields.

Also, even if the funding does restart, the NSF and other federal agencies are now under an indefinite hiring freeze. If not quickly lifted, this will shrink these agencies and cripple their ability to carry out their missions.

If you voted for Trump, because you wanted to take a hammer to the woke deep state or whatever, then please understand: you may or may not have realized you were voting for this, exactly, but this is what you’ve gotten. In place of professionals who you dislike and who are sometimes systematically wrong, the American spaceship is now being piloted by drunken baboons, mashing the controls to see what happens. I hope you like the result.

Meanwhile, to anyone inside or outside the NSF who has more information about this rapidly-evolving crisis: I strongly encourage you to share whatever you know in the comments section. Or get in touch with me by email. I’ll of course respect all wishes for anonymity, and I won’t share anything without permission. But you now have a chance—some might even say an enviable chance—to put your loyalty to science and your country above your fear of a bully.

Update: By request, you can also contact me at ScottAaronson.49 on the encrypted messaging app Signal.

Another update: Maybe I should’ve expected this, but people are now sending me Signal messages to ask quantum mechanics questions or share their views on random topics! Should’ve added: I’m specifically interested in on-the-ground intel, from anyone who has it, about the current freeze in American science funding.

Yet another update: Terry Tao discusses the NSF funding crisis in terms of mean field theory.

The mini-singularity

Monday, January 20th, 2025

Err, happy MLK Day!

This week represents the convergence of so many plotlines that, if it were the season finale of some streaming show, I’d feel like the writers had too many balls in the air. For the benefit of the tiny part of the world that cares what I think, I offer the following comments.


My view of Trump is the same as it’s been for a decade—that he’s a con man, a criminal, and the most dangerous internal threat the US has ever faced. I think Congress and Merrick Garland deserve eternal shame for not moving aggressively to bar Trump from office and then prosecute him for insurrection—that this was a catastrophic failure of our system, one for which we’ll now suffer the consequences. If this time Trump got 52% of some swing state rather than 48%, if the “zeitgeist” or the “vibes” have shifted, if the “Resistance” is so weary that it’s barely bothering to show up, if Bezos and Zuckerberg and Musk and even Sam Altman now find it expedient to placate the tyrant rather than standing up for what previously appeared to be their principles—well, I don’t see how any of that affects how I ought to feel.

All the same, I have no plans to flee the United States or anything, just like I didn’t the last time. I’ll even permit myself pleasure when the crazed strongman takes actions that I happen to agree with (like pushing the tottering Ayatollah regime toward its well-deserved end). And then I’ll vote for Enlightenment values (or the nearest available approximation) in 2026 and 2028, assuming the country survives until then.


The second plotline is the ceasefire in Gaza, and the beginning of the release of the Israeli hostages, in exchange for thousands of Palestinian prisoners. I have all the mixed emotions you might expect. I’m terrified about the precedent this reinforces and about the many mass-murderers it will free—as I was terrified in 2011 by the Gilad Shalit deal, the one that released Sinwar and thereby set the stage for October 7. Certainly World War II didn’t end with the Nazis marching triumphantly around Berlin, guns in the air, and vowing to repeat their conquest of Europe at the earliest opportunity. All the same, it’s not my place to be more Zionist than Netanyahu, or than the vast majority of the Israeli public that supported the deal. I’m obviously thrilled to see the hostages return, and even slightly touched by the ethic that would move heaven and earth to save these specific people, almost every consideration of game theory and utilitarianism be damned. I take solace that we’re not quite returning to the situation of October 6, since Hamas, Hezbollah, and Iran itself have all been severely degraded (and the Assad regime no longer exists). This is no longer 1944, when you could slaughter 1200 Jews without paying any price for it: that was the original promise of the State of Israel. All the same, I fear that bloodshed will continue from here until the Singularity, unless majorities on both sides choose coexistence—partition, the two-state solution, call it whatever you will. And that’s primarily a question of culture, and the education of children.


The third plotline was the end of TikTok, quickly followed by its (temporary?) return on Trump’s order. As far as I can tell, Instagram, Twitter/X, and TikTok have all been net negatives for the world; it would’ve been far better if none of them had been invented. But, OK, our society allows many things that are plausibly net-negative, like sports betting and Cheetos. In this case, however, the US Supreme Court ruled 9-0 (!!) that Congress has a legitimate interest in keeping Chinese Communist Party spyware off 170 million Americans’ phones—and that there’s no First Amendment concern that overrides this security interest, since the TikTok ban isn’t targeting speech on the basis of its content. I found the court’s argument convincing. I hope TikTok goes dark 90 days from now—or, second-best, that it gets sold to some entity that’s merely bad in the normal ways and not a hostile foreign power.


The fourth plotline is the still-ongoing devastation of much of Los Angeles. I heard from friends at Caltech and elsewhere who had to evacuate their homes—but at least they had homes to return to, as those in Altadena and the Palisades didn’t. It’s a sign of the times that even a disaster of this magnitude now brings only partisan bickering: was the cause climate change, reshaping the entire planet in terrifying ways, just like all those experts have been warning for decades? Or was it staggering lack of preparation from the California and LA governments? My own answers to these questions are “yes” and “yes.”

Maybe I’ll briefly highlight the role of the utilitarianism versus deontology debate. According to this article from back in October, widely shared once the fires started, the US Forest Service halted controlled burns in California because it lacked the manpower, but also this:

“I think the Forest Service is worried about the risk of something bad happening [with a prescribed burn]. And they’re willing to trade that risk — which they will be blamed for — for increased risks on wildfires,” Wara said. In the event of a wildfire, “if something bad happens, they’re much less likely to be blamed because they can point the finger at Mother Nature.”

We saw something similar with the refusal to allow challenge trials for the COVID vaccines, which could’ve moved the approval date up by months and saved millions of lives. Humans are really bad at trolley problems, at weighing a concrete, immediate risk against a diffuse future risk that might be orders of magnitude worse. (Come to think of it, Israel’s repeated hostage deals are another example—though that one has the defense that it demonstrates the lengths to which the state will go to protect its people.)


Oh, and on top of all the other plotlines, today—January 20th—is my daughter’s 12th birthday. Happy birthday Lily!!

Podcasts!

Wednesday, December 4th, 2024

Update (Dec. 9): For those who still haven’t gotten enough, check out a 1-hour Zoom panel discussion about quantum algorithms, featuring yours truly along with my distinguished colleagues Eddie Farhi, Aram Harrow, and Andrew Childs, moderated by Barry Sanders, as part of the QTML’2024 conference held in Melbourne (although, it being Thanksgiving week, none of the four panelists were actually there in person). Part of the panel devolves into a long debate between me and Eddie about how interesting quantum algorithms are if they don’t achieve speedups over classical algorithms, and whether some quantum algorithms papers mislead people by not clearly addressing the speedup question (you get one guess as to which side I took). I resolved going in to keep my comments as civil and polite as possible—you can judge for yourself how well I succeeded! Thanks very much to Barry and the other QTML organizers for making this happen.


Do you like watching me spout about AI alignment, watermarking, my time at OpenAI, the P versus NP problem, quantum computing, consciousness, Penrose’s views on physics and uncomputability, university culture, wokeness, free speech, my academic trajectory, and much more, despite my slightly spastic demeanor and my many verbal infelicities? Then holy crap are you in luck today! Here’s 2.5 hours of me talking to former professional poker players (and now wonderful Austin-based friends) Liv Boeree and her husband Igor Kurganov about all of those topics. (Or 1.25 hours if you watch at 2x speed, as I strongly recommend.)

But that’s not all! Here I am talking to Harvard’s Hrvoje Kukina, in a much shorter (45-minute) podcast focused on quantum computing, cosmological bounds on information processing, and the idea of the universe as a computer.

Last but not least, here I am in an hour-long podcast (this one audio-only) with longtime friend Kelly Weinersmith and her co-host Daniel Whiteson, talking about quantum computing.

Enjoy!

Thanksgiving

Thursday, November 28th, 2024

I’m thankful to the thousands of readers of this blog.  Well, not the few who submit troll comments from multiple pseudonymous handles, but the 99.9% who don’t. I’m thankful that they’ve stayed here even when events (as they do more and more often) send me into a spiral of doomscrolling and just subsisting hour-to-hour—when I’m left literally without words for weeks.

I’m thankful for Thanksgiving itself.  As I often try to explain to non-Americans (and to my Israeli-born wife), it’s not primarily about the turkey but rather about the sides: the stuffing, the mashed sweet potatoes with melted marshmallows, the cranberry jello mold.  The pumpkin pie is good too.

I’m thankful that we seem to be on the threshold of getting to see the birth of fault-tolerant quantum computing, nearly thirty years after it was first theorized.

I’m thankful that there’s now an explicit construction of pseudorandom unitaries — and that, with further improvement, this would lead to a Razborov-Rudich natural proofs barrier for the quantum circuit complexity of unitaries, explaining for the first time why we don’t have superpolynomial lower bounds for that quantity.

I’m thankful that there’s been recent progress on QMA versus QCMA (that is, quantum versus classical proofs), with a full classical oracle separation now possibly in sight.

I’m thankful that, of the problems I cared about 25 years ago — the maximum gap between classical and quantum query complexities of total Boolean functions, relativized BQP versus the polynomial hierarchy, the collision problem, making quantum computations classically verifiable — there’s now been progress, if not a full solution, on almost all of them. And yet I’m thankful as well that lots of great problems remain open.

I’m thankful that the presidential election wasn’t all that close (by contemporary US standards, it was a “landslide,” 50%-48.4%).  Had it been a nail-biter, not only would I fear violence and the total breakdown of our constitutional order, I’d kick myself that I hadn’t done more to change the outcome.  As it is, there’s no denying that a plurality of Americans actually chose this, and now they’re going to get it good and hard.

I’m thankful that, while I absolutely do see Trump’s return as a disaster for the country and for civilization, it’s not a 100% unmitigated disaster.  The lying chaos monster will occasionally rage for things I support rather than things I oppose.  And if he actually plunges the country into another Great Depression through tariffs, mass deportations, and the like, hopefully that will make it easier to repudiate his legacy in 2028.

I’m thankful that, whatever Jews around the world have had to endure over the past year — both the physical attacks and the moral gaslighting that it’s all our fault — we’ve already endured much worse on both fronts, not once but countless times over 3000 years, and this is excellent Bayesian evidence that we’ll survive the latest onslaught as well.

I’m thankful that my family remains together, and healthy. I’m thankful to have an 11-year-old who’s a striking wavy-haired blonde who dances and does gymnastics (how did that happen?) and wants to be an astrophysicist, as well as a 7-year-old who now often beats me in chess and loves to solve systems of two linear equations in two unknowns.

I’m thankful that, compared to what I imagined my life would be as an 11-year-old, my life is probably in the 50th percentile or higher.  I haven’t saved the world, but I haven’t flamed out either.  Even if I do nothing else from this point, I have a stack of writings and results that I’m proud of. And I fully intend to do something else from this point.

I’m thankful that the still-most-powerful nation on earth, the one where I live, is … well, more aligned with good than any other global superpower in the miserable pageant of human history has been.  I’m thankful to live in the first superpower in history that has some error-correction machinery built in, some ability to repudiate its past sins (and hopefully its present sins, in the future).  I’m thankful to live in the first superpower that has toleration of Jews and other religious minorities built in as a basic principle, with the possible exception of the Persian Empire under Cyrus.

I’m thankful that all eight of my great-grandparents came to the US in 1905, back when Jewish mass immigration was still allowed.  Of course there’s a selection effect here: if they hadn’t made it, I wouldn’t be here to ponder it.  Still, it seems appropriate to express gratitude for the fact of existing, whatever metaphysical difficulties might inhere in that act.

I’m thankful that there’s now a ceasefire between Israel and Lebanon that Israel’s government saw fit to agree to.  While I fear that this will go the way of all previous ceasefires — Hezbollah “obeys” until it feels ready to strike again, so then Israel invades Lebanon again, then more civilians die, then there’s another ceasefire, rinse and repeat, etc. — the possibility always remains that this time will be the charm, for all people on both sides who want peace.

I’m thankful that our laws of physics are so constructed that G, c, and ℏ, three constants that are relatively easy to measure, can be combined to tell us the fundamental units of length and time, even though those units — the Planck time, 10⁻⁴³ seconds, and the Planck length, 10⁻³³ centimeters — are themselves far beyond the reach of any foreseeable technology, and are to atoms as atoms are to the solar system.
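
For concreteness, here’s the dimensional-analysis arithmetic in a few lines of Python (up to dimensionless factors, √(ℏG/c³) and √(ℏG/c⁵) are the only combinations of the three constants with units of length and time):

    import math

    G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    c    = 2.998e8     # speed of light, m/s
    hbar = 1.055e-34   # reduced Planck constant, J*s

    planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m, i.e. ~10^-33 cm
    planck_time   = math.sqrt(hbar * G / c**5)   # ~5.4e-44 s

    print(f"Planck length: {planck_length:.2e} m, Planck time: {planck_time:.2e} s")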

I’m thankful that, almost thirty years after I could have and should have, I’ve now finally learned the proof of the irrationality of π.

I’m thankful that, if I could go back in time to my 14-year-old self, I could tell him firstly, that female heterosexual attraction to men is a real phenomenon in the world, and secondly, that it would sometimes fixate on him (the future him, that is) in particular.

I’m thankful for red grapefruit, golden mangos, seedless watermelons, young coconuts (meat and water), mangosteen, figs, dates, and even prunes.  Basically, fruit is awesome, the more so after whatever selective breeding and genetic engineering humans have done to it.

I’m thankful for Futurama, and for the ability to stream every episode of it in order, as Dana, the kids, and I have been doing together all fall.  I’m thankful that both of my kids love it as much as I do—in which case, how far from my values and worldview could they possibly be? Even if civilization is destroyed, it will have created 100 episodes of something this far out on the Pareto frontier of lowbrow humor, serious intellectual content, and emotional depth for a future civilization to discover.  In short: “good news, everyone!”

Steven Rudich (1961-2024)

Saturday, November 2nd, 2024

I was sure my next post would be about the election—the sword of Damocles hanging over the United States and civilization as a whole. Instead, I have sad news, but also news that brings memories of warmth, humor, and complexity-theoretic insight.

Steven Rudich—professor at Carnegie Mellon, central figure of theoretical computer science since the 1990s, and a kindred spirit and friend—has died at the too-early age of 63. While I interacted with him much more seldom than I wish I had, it would be no exaggeration to call him one of the biggest influences on my life and career.

I first became aware of Steve at age 17, when I read the Natural Proofs paper that he coauthored with Razborov. I was sitting in the basement computer room at Telluride House at Cornell, and still recall the feeling of awe that came over me with every page. This one paper changed my scientific worldview. It expanded my conception of what the P versus NP problem was about and what theoretical computer science could even do—showing how it could turn in on itself, explain its own difficulties in proving problems hard in terms of the truth of those same problems’ hardness, and thereby transmute defeat into victory. I may have been bowled over by the paper’s rhetoric as much as by its results: it was like, you’re allowed to write that way?

I was nearly as impressed by Steve’s PhD thesis, which was full of proofs that gave off the appearance of being handwavy, “just phoning it in,” but were in reality completely rigorous. The result that excited me the most said that, if a certain strange combinatorial conjecture was true, then there was essentially no hope of proving that P≠NP∩coNP relative to a random oracle with probability 1. I played around with the combinatorial conjecture but couldn’t make headway on it; a year or two later, I was excited when I met Clifford Smyth and he told me that he, Kahn, and Saks had just proved it. Rudich’s conjecture directly inspired me to work on what later became the Aaronson-Ambainis Conjecture, which is still unproved, but which if true, similarly implies that there’s no hope of proving P≠BQP relative to a random oracle with probability 1.

When I applied to CS PhD programs in 1999, I wrote about how I wanted to sing the ideas of theoretical computer science from the rooftops—just like Steven Rudich had done, with the celebrated Andrew’s Leap summer program that he’d started at Carnegie Mellon. (How many other models were there? Indeed, how many other models are there today?) I was then honored beyond words when Steve called me on the phone, before anyone else had, and made an hourlong pitch for me to become his student. “You’re what I call a ‘prefab’,” he said. “You already have the mindset that I try to instill in students by the end of their PhDs.” I didn’t have much self-confidence then, which is why I can still quote Steve’s words a quarter-century later. In the ensuing years, when (as often) I doubted myself, I’d think back to that phone call with Steve, and my burning desire to be what he apparently thought I was.

Alas, when I arrived in Pittsburgh for CMU’s visit weekend, I saw Steve holding court in front of a small crowd of students, dispensing wisdom and doing magic tricks. I was miffed that he never noticed or acknowledged me: had he already changed his mind about me, lost interest? It was only later that I learned that Steve was going blind at the time, and literally hadn’t seen me.

In any case, while I came within a hair of accepting CMU’s offer, in the end I chose Berkeley. I wasn’t yet 100% sure that I wanted to do quantum computing (as opposed to AI or classical complexity theory), but the lure of the Bay Area, of the storied CS theory group where Steve himself had studied, and of Steve’s academic sibling Umesh Vazirani proved too great.

Full of regrets about the road not taken, I was glad that, in the summer between undergrad and PhD, I got to attend the PCMI summer school on computational complexity at the Institute for Advanced Study in Princeton, where Steve gave a spectacular series of lectures. By that point, Steve was almost fully blind. He put transparencies up, sometimes upside-down until the audience corrected him, and then lectured about them entirely from memory. He said that doing CS theory sightless was a new, more conceptual experience for him.

Even in his new condition, Steve’s showmanship hadn’t left him; he held the audience spellbound as few academics do. And in a special lecture on “how to give talks,” he spilled his secrets.

“What the speaker imagines the audience is thinking,” read one slide. And then, inside the thought bubbles: “MORE! HARDER! FASTER! … Ahhhhh yes, QED! Truth is beauty.”

“What the audience is actually thinking,” read the next slide, below which: “When is this over? I need to pee. Can I get a date with the person next to me?” (And this was before smartphones.) And yet, Steve explained, rather than resenting the many demands on the audience’s attention, a good speaker would break through, meet people where they were, just as he was doing right then.

I listened, took mental notes, resolved to practice this stuff. I reflected that, even if my shtick only ever became 10% as funny or fluid as Steve’s, I’d still come out way ahead.

It’s possible that the last time I saw Steve was in 2007, when I visited Carnegie Mellon to give a talk about algebrization, a new barrier to solving P vs. NP (and other central problems of complexity theory) that Avi Wigderson and I had recently discovered. When I started writing the algebrization paper, I very consciously modeled it after the Natural Proofs paper; the one wouldn’t have been thinkable without the other. So you can imagine how much it meant to me when Steve liked algebrization—when, even though he couldn’t see my slides, he got enough from the spoken part of the talk to burst with “conceptual” questions and comments.

Steve didn’t merely peel back the mystery of P vs. NP, insofar as anyone has. He did it with exuberance and showmanship and humor and joy and kindness. I won’t forget him.


I’ve written here only about the tiniest sliver of Steve’s life: namely, the sliver where it intersected mine. I wish that sliver were a hundred times bigger, so that there’d be a hundred times more to write. But CS theory, and CS more broadly, are communities. When I posted about Steve’s passing on Facebook, I got inundated by comments from friends of mine who (as it turned out) had taken Steve’s courses, or TA’d for him, or attended Andrew’s Leap, or otherwise knew him, and on whom he’d left a permanent impression—and I hadn’t even known any of this.

So I’ll end this post with a request: please share your Rudich stories in the comments! I’d especially love specific recollections of his jokes, advice, insights, or witticisms. We now live in a world where, even in the teeth of the likelihood that P≠NP, powerful algorithms running in massive datacenters nevertheless try to replicate the magic of human intelligence, by compressing and predicting all the text on the public Internet. I don’t know where this is going, but I can’t imagine that it would hurt for the emerging global hive-mind to know more about Steven Rudich.


In Support of SB 1047

Wednesday, September 4th, 2024

I’ve finished my two-year leave at OpenAI, and returned to being just a normal (normal?) professor, quantum complexity theorist, and blogger. Despite the huge drama at OpenAI that coincided with my time there, including the departures of most of the people I worked with in the former Superalignment team, I’m incredibly grateful to OpenAI for giving me an opportunity to learn and witness history, and even to contribute here and there, though I wish I could’ve done more.

Over the next few months, I plan to blog my thoughts and reflections about the current moment in AI safety, inspired by my OpenAI experience. You can be certain that I’ll be doing this only as myself, not as a representative of any organization. Unlike some former OpenAI folks, I was never offered equity in the company or asked to sign any non-disparagement agreement. OpenAI retains no power over me, at least as long as I don’t share confidential information (which of course I won’t, not that I know much!).

I’m going to kick off this blog series, today, by defending a position that differs from the official position of my former employer. Namely, I’m offering my strong support for California’s SB 1047, a first-of-its-kind AI safety regulation written by California State Senator Scott Wiener, then extensively revised through consultations with pretty much every faction of the AI community. AI leaders like Geoffrey Hinton, Yoshua Bengio, and Stuart Russell are for the bill, as is Elon Musk (for whatever that’s worth), and Anthropic now says that the bill’s “benefits likely outweigh its costs.” Meanwhile, Facebook, OpenAI, and basically the entire VC industry are against the bill, while California Democrats like Nancy Pelosi and Zoe Lofgren have also come out against it for whatever reasons.

The bill has passed the California State Assembly by a margin of 48-16, having previously passed the State Senate by 32-1. It’s now on Governor Gavin Newsom’s desk, and it’s basically up to him whether it becomes law or not. I understand that supporters and opponents are both lobbying him hard.

People much more engaged than me have already laid out, accessibly and in immense detail, exactly what the current bill does and the arguments for and against. Try for example:

  • For a very basic explainer, this in TechCrunch
  • This by Kelsey Piper, and this by Kelsey Piper, Sigal Samuel, and Dylan Matthews in Vox
  • This by Zvi Mowshowitz (Zvi has also written a great deal else about SB 1047, strongly in support)

Briefly: given the ferocity of the debate about it, SB 1047 does remarkably little. It says that if you spend more than $100 million to train a model, you need to notify the government and submit a safety plan. It establishes whistleblower protections for people at AI companies to raise safety concerns. And, if a company fails to take reasonable precautions and its AI then causes catastrophic harm, it says that the company can be sued (which was presumably already true, but the bill makes it extra clear). And … unless I’m badly mistaken, those are the main things in it!

While the bill is mild, opponents are on a full scare campaign saying that it will strangle the AI revolution in its crib, put American AI development under the control of Luddite bureaucrats, and force companies out of California. They say that it will discourage startups, even though the whole point of the $100 million provision is to target only the big players (like Google, Meta, OpenAI, and Anthropic) while leaving small startups free to innovate.

The only steelman that makes sense to me, for why many tech leaders are against the bill, is the idea that it’s a stalking horse. On this view, the bill’s actual contents are irrelevant. What matters is simply that, once you’ve granted the principle that people worried about AI-caused catastrophes get a seat at the table—once there’s any legislative acknowledgment of the validity of their concerns—they’re going to take a mile rather than an inch, and kill the whole AI industry.

Notice that the exact same slippery-slope argument could be deployed against any AI regulation whatsoever. In other words, if someone opposes SB 1047 on these grounds, then they’d presumably oppose any attempt to regulate AI—either because they reject the whole premise that creating entities with humanlike intelligence is a risky endeavor, and/or because they’re hardcore libertarians who never want government to intervene in the market for any reason, not even if the literal fate of the planet was at stake.

Having said that, there’s one specific objection that needs to be dealt with. OpenAI, and Sam Altman in particular, say that they oppose SB 1047 simply because AI regulation should be handled at the federal rather than the state level. The supporters’ response is simply: yeah, everyone agrees that’s what should happen, but given the dysfunction in Congress, there’s essentially no chance of it anytime soon. And California suffices, since Google, OpenAI, Anthropic, and virtually every other AI company is either based in California or does many things subject to California law. So, some California legislators decided to do something. On this issue as on others, it seems to me that anyone who’s serious about a problem doesn’t get to reject a positive step that’s on offer, in favor of a utopian solution that isn’t on offer.

I should also stress that, in order to support SB 1047, you don’t need to be a Yudkowskyan doomer, primarily worried about hard AGI takeoffs and recursive self-improvement and the like. For that matter, if you are such a doomer, SB 1047 might seem basically irrelevant to you (apart from its unknowable second- and third-order effects): a piece of tissue paper in the path of an approaching tank. The world where AI regulation like SB 1047 makes the most difference is the world where the dangers of AI creep up on humans gradually, so that there’s enough time for governments to respond incrementally, as they did with previous technologies.

If you agree with this, it wouldn’t hurt to contact Governor Newsom’s office. For all its nerdy and abstruse trappings, this is, in the end, a kind of battle that ought to be familiar and comfortable for any Democrat: the kind with, on one side, most of the public (according to polls) and also hundreds of the top scientific experts, and on the other side, individuals and companies who all coincidentally have strong financial stakes in being left unregulated. This seems to me like a hinge of history where small interventions could have outsized effects.

New comment policy

Monday, July 15th, 2024

Update (July 24): Remember the quest that Adam Yedidia and I started in 2016, to find the smallest n such that the value of the nth Busy Beaver number can be proven independent of the axioms of ZF set theory? We managed to show that BB(8000) was independent. This was later improved to BB(745) by Stefan O’Rear and Johannes Riebel. Well, today Rohan Ridenour writes to tell me that he’s achieved a further improvement to BB(643). Awesome!


With yesterday’s My Prayer, for the first time I can remember in two decades of blogging, I put up a new post with the comments section completely turned off. I did so because I knew my nerves couldn’t handle a triumphant interrogation from Trumpist commenters about whether, in the wake of their Messiah’s (near-)blood sacrifice on behalf of the Nation, I’d at last acquiesce to the dissolution of America’s constitutional republic and its replacement by the dawning order: one where all elections are fraudulent unless the MAGA candidate wins, and where anything the leader does (including, e.g., jailing his opponents) is automatically immune from prosecution. I couldn’t handle it, but at the same time, and in stark contrast to the many who attack from my left, I also didn’t care what they thought of me.

With hindsight, turning off comments yesterday might be the single best moderation decision I ever made. I still got feedback on what I’d written, on Facebook and by email and text message and in person. But this more filtered feedback was … thoughtful. Incredibly, it lowered the stress that I was feeling rather than raising it even higher.

For context, I should explain that over the past couple years, one or more trolls have developed a particularly vicious strategy against me. Below my every blog post, even the most anodyne, a “new” pseudonymous commenter shows up to question me about the post topic, in what initially looks like a curious, good-faith way. So I engage, because I’m Scott Aaronson and that’s what I do; that’s a large part of the value I can offer the world.

Then, only once a conversation is underway does the troll gradually ratchet up the level of crazy, invariably ending at some place tailor-made to distress me (for example: vaccines are poisonous, death to Jews and Israel, I don’t understand basic quantum mechanics or computer science, I’m a misogynist monster, my childhood bullies were justified and right). Of course, as soon as I’ve confirmed the pattern, I send further comments straight to the trash. But the troll then follows up with many emails taunting me for not engaging further, packed with farcical accusations and misreadings for me to rebut and other bait.

Basically, I’m now consistently subjected to denial-of-service attacks against my open approach to the world. Or perhaps I’ve simply been schooled in why most people with audiences of thousands or more don’t maintain comment sections where, by default, they answer everyone! And yet it’s become painfully clear that, as long as I maintain a quasi-open comment section, I’ll feel guilty if I don’t answer everyone.


So without further ado, I hereby announce my new comment policy. Henceforth all comments to Shtetl-Optimized will be treated, by default, as personal missives to me—with no expectation either that they’ll appear on the blog or that I’ll reply to them.

At my leisure and discretion, and in consultation with the Shtetl-Optimized Committee of Guardians, I’ll put on the blog a curated selection of comments that I judge to be particularly interesting or to move the topic forward, and I’ll do my best to answer those. But it will be more like Letters to the Editor. Anyone who feels unjustly censored is welcome to the rest of the Internet.

The new policy starts now, in the comment section of this post. To the many who’ve asked me for this over the years, you’re welcome!

BusyBeaver(5) is now known to be 47,176,870

Tuesday, July 2nd, 2024

The news these days feels apocalyptic to me—as if we’re living through, if not the last days of humanity, then surely the last days of liberal democracy on earth.

All the more reason to ignore all of that, then, and blog instead about the notorious Busy Beaver function! Because holy moly, what news have I got today. For lovers of this super-rapidly-growing sequence of integers, I’m honored to announce the biggest Busy Beaver development that there’s been since 1983, when I slept in a crib and you booted up your computer using a 5.25-inch floppy. That was the year when Allen Brady determined that BusyBeaver(4) was equal to 107. (Tibor Radó, who invented the Busy Beaver function in the 1960s, quickly proved with his student Shen Lin that the first three values were 1, 6, and 21 respectively. The fourth value was harder.)

Only now, after an additional 41 years, do we know the fifth Busy Beaver value. Today, an international collaboration called bbchallenge is announcing that it’s determined, and even formally verified using the Coq proof system, that BB(5) is equal to 47,176,870—the value that’s been conjectured since 1990, when Heiner Marxen and Jürgen Buntrock discovered a 5-state Turing machine that runs for exactly 47,176,870 steps before halting, when started on a blank tape. The new bbchallenge achievement is to prove that all 5-state Turing machines that run for more steps than 47,176,870, actually run forever—or in other words, that 47,176,870 is the maximum finite number of steps for which any 5-state Turing machine can run. That’s what it means for BB(5) to equal 47,176,870.

For more on this story, see Ben Brubaker’s superb article in Quanta magazine, or bbchallenge’s own announcement. For more background on the Busy Beaver function, see my 2020 survey, or my 2017 big numbers lecture, or my 1999 big numbers essay, or the Googology Wiki page, or Pascal Michel’s survey.


The difficulty in pinning down BB(5) was not just that there are a lot of 5-state Turing machines (16,679,880,978,201 of them to be precise, although symmetries reduce the effective number). The real difficulty is, how do you prove that some given machine runs forever? If a Turing machine halts, you can prove that by simply running it on your laptop until halting (at least if it halts after a “mere” ~47 million steps, which is child’s play). If, on the other hand, the machine runs forever, via some never-repeating infinite pattern rather than a simple infinite loop, then how do you prove that? You need to find a mathematical reason why it can’t halt, and there’s no systematic method for finding such reasons—that was the great discovery of Gödel and Turing nearly a century ago.
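
Incidentally, here’s a quick sanity check on where that count comes from, assuming the standard convention that each of the 5 states × 2 read symbols gets a transition, which is either HALT or one of 2 writes × 2 moves × 5 next states:

    n = 5                                  # number of states
    choices_per_slot = 2 * 2 * n + 1       # write bit * move direction * next state, plus HALT
    assert choices_per_slot ** (2 * n) == 16_679_880_978_201  # 21^10 machines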

More precisely, the Busy Beaver function grows faster than any function that can be computed, and we know that because if a systematic method existed to compute arbitrary BB(n) values, then we could use that method to determine whether a given Turing machine halts (if the machine has n states, just check whether it runs for more than BB(n) steps; if it does, it must run forever). This is the famous halting problem, which Turing proved to be unsolvable by finite means. The Busy Beaver function is Turing-uncomputability made flesh, a finite function that scrapes the edge of infinity.
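
To spell out that reduction, here’s a minimal Python sketch; BB is a hypothetical oracle for Busy Beaver values, and the whole point is that no such computable function can exist:

    def would_halt(simulate, n_states, BB):
        # simulate(t) -> True iff the n-state machine, started on a blank
        # tape, halts within t steps. (This part is genuinely computable.)
        # BB(n) -> the n-th Busy Beaver value (hypothetical oracle).
        return simulate(BB(n_states))  # no halt within BB(n) steps means no halt, ever

Since Turing showed that no finite procedure can solve the halting problem, no computable BB can exist either.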

There’s also a more prosaic issue. Proofs that particular Turing machines run forever tend to be mind-numbingly tedious. Even supposing you’ve found such a “proof,” why should other people trust it, if they don’t want to spend days staring at the outputs of your custom-written software?

And so for decades, a few hobbyists picked away at the BB(5) problem. One, who goes by the handle “Skelet”, managed to reduce the problem to 43 holdout machines whose halting status was still undetermined. Or maybe only 25, depending who you asked? (And were we really sure about the machines outside those 43?)

The bbchallenge collaboration improved on the situation in two ways. First, it demanded that every proof of non-halting be vetted carefully. Going beyond that original mandate, a participant named “mxdys” later upped the standard to fully machine-verifiable certificates in Coq for every non-halting machine, so that there could no longer be any serious question of correctness. (This, in turn, was done via “deciders,” programs that were crafted to recognize a specific type of parameterized behavior.) Second, the collaboration used an online forum and a Discord server to organize the effort, so that everyone knew what had been done and what remained to be done.

Despite this, it was far from obvious a priori that the collaboration would succeed. What if, for example, one of the 43 (or however many) Turing machines in the holdout set turned out to encode the Goldbach Conjecture, or one of the other great unsolved problems of number theory? Then the final determination of BB(5) would need to await the resolution of that problem. (We do know, incidentally, that there’s a 27-state Turing machine that encodes Goldbach.)

But apparently the collaboration got lucky. Coq proofs of non-halting were eventually found for all the 5-state holdout machines.

As a sad sidenote, Allen Brady, who determined the value of BB(4), apparently died just a few days before the BB(5) proof was complete. He was doubtful that BB(5) would ever be known. The reason, he wrote in 1988, was that “Nature has probably embedded among the five-state holdout machines one or more problems as illusive as the Goldbach Conjecture. Or, in other terms, there will likely be nonstopping recursive patterns which are beyond our powers of recognition.”


Maybe I should say a little at this point about what the 5-state Busy Beaver—i.e., the Marxen-Buntrock Turing machine that we now know to be the champion—actually does. Interpreted in English, the machine iterates a certain integer function g, which is defined by

  • g(x) = (5x+18)/3 if x = 0 (mod 3),
  • g(x) = (5x+22)/3 if x = 1 (mod 3),
  • g(x) = HALT if x = 2 (mod 3).

Starting from x=0, the machine computes g(0), g(g(0)), g(g(g(0))), and so forth, halting if and only if it ever reaches … well, HALT. The machine runs for millions of steps because it so happens that this iteration eventually reaches HALT, but only after a while:

0 → 6 → 16 → 34 → 64 → 114 → 196 → 334 → 564 → 946 → 1584 → 2646 → 4416 → 7366 → 12284 → HALT.

(And also, at each iteration, the machine runs for a number of steps that grows like the square of the number x.)
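
If you’d like to check that trajectory yourself, here are a few lines of Python that iterate g directly (the Turing machine itself, as just noted, spends a number of steps roughly quadratic in x on each application of g, which is where the ~47 million total comes from):

    def g(x):
        # the integer map iterated by the Marxen-Buntrock champion machine
        if x % 3 == 0:
            return (5 * x + 18) // 3
        elif x % 3 == 1:
            return (5 * x + 22) // 3
        else:
            return None  # x = 2 (mod 3): HALT

    x, trajectory = 0, ["0"]
    while x is not None:
        x = g(x)
        trajectory.append("HALT" if x is None else str(x))
    print(" → ".join(trajectory))  # reproduces the sequence above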

Some readers might be reminded of the Collatz Conjecture, the famous unsolved problem about whether, if you repeatedly replace a positive integer x by x/2 if x is even or 3x+1 if x is odd, you’ll always eventually reach x=1. As Scott Alexander would say, this is not a coincidence because nothing is ever a coincidence. (Especially not in math!)


It’s a fair question whether humans will ever know the value of BB(6). Pavel Kropitz discovered, a couple years ago, that BB(6) is at least 10^10^10^10^10^10^10^10^10^10^10^10^10^10^10 (i.e., an exponential tower of fifteen 10s). Obviously Kropitz didn’t actually run a 6-state Turing machine for that number of steps until halting! Instead he understood what the machine did—and it turned out to apply an iterative process similar to the g function above, but this time involving an exponential function. And the process could be proven to halt after ~15 rounds of exponentiation.

Meanwhile Tristan Stérin, who coordinated the bbchallenge effort, tells me that a 6-state machine was recently discovered that “iterates the Collatz-like map {3x/2, (3x-1)/2} from the number 8 and halts if and only if the number of odd terms ever gets bigger than twice the number of even terms.” This shows that, in order to determine the value of BB(6), one would first need to prove or disprove the Collatz-like conjecture that that never happens.
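
Here’s a sketch of that iteration. Tristan’s description doesn’t say which branch applies to which parity, but the assignment below (even x to 3x/2, odd x to (3x−1)/2) is the only one that keeps every term an integer:

    def step(x):
        # Collatz-like map from Stérin's description; the parity assignment
        # is my inference, forced by integrality
        return 3 * x // 2 if x % 2 == 0 else (3 * x - 1) // 2

    x, odd, even = 8, 0, 0
    for _ in range(10_000):
        odd, even = odd + (x % 2), even + 1 - (x % 2)
        if odd > 2 * even:
            print("halted!")  # conjecturally this never happens
            break
        x = step(x)
    else:
        print(f"no halt after 10,000 terms; x now has {len(str(x))} digits")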

Basically, if and when artificial superintelligences take over the world, they can worry about the value of BB(6). And then God can worry about the value of BB(7).


I first learned about the BB function in 1996, when I was 15 years old, from a book called The New Turing Omnibus by A. K. Dewdney.  From what I gather, Dewdney would go on to become a nutty 9/11 truther.  But that’s irrelevant to the story.  What matters was that his book provided my first exposure to many of the key concepts of computer science, and probably played a role in my becoming a theoretical computer scientist at all.

And of all the concepts in Dewdney’s book, the one I liked the most was the Busy Beaver function. What a simple function! You could easily explain its definition to Archimedes, or Gauss, or any of the other great mathematicians of the past. And yet, by using it, you could name definite positive integers (BB(10), for example) incomprehensibly larger than any that they could name.

It was from Dewdney that I learned that the first four Busy Beaver numbers were the unthreatening-looking 1, 6, 21, and 107 … but then that the fifth value was already unknown (!!), and at any rate at least 47,176,870. I clearly remember wondering whether BB(5) would ever be known for certain, and even whether I might be the one to determine it. That was almost two-thirds of my life ago.

As things developed, I played no role whatsoever in the determination of BB(5) … except for this. Tristan Stérin tells me that reading my survey article, The Busy Beaver Frontier, was what inspired him to start and lead the bbchallenge collaboration that finally cracked the problem. It’s hard to express how gratified that makes me.


Why care about determining particular values of the Busy Beaver function? Isn’t this just a recreational programming exercise, analogous to code golf, rather than serious mathematical research?

I like to answer that question with another question: why care about humans landing on the moon, or Mars? Those otherwise somewhat arbitrary goals, you might say, serve as a hard-to-fake gauge of human progress against the vastness of the cosmos. In the same way, the quest to determine the Busy Beaver numbers is one concrete measure of human progress against the vastness of the arithmetical cosmos, a vastness that we learned from Gödel and Turing won’t succumb to any fixed procedure. The Busy Beaver numbers are just … there, Platonically, as surely as 13 was prime long before the first caveman tried to arrange 13 rocks into a nontrivial rectangle and failed. And yet we might never know the sixth of these numbers and only today learned the fifth.

Anyway, huge congratulations to the bbchallenge team on their accomplishment. At a terrifying time for the world, I’m happy that, whatever happens, at least I lived to see this.

Openness on OpenAI

Monday, May 20th, 2024

I am, of course, sad that Jan Leike and Ilya Sutskever, the two central people who recruited me to OpenAI and then served as my “bosses” there—two people for whom I developed tremendous admiration—have both now resigned from the company. Ilya’s resignation followed the board drama six months ago, but Jan’s resignation last week came as a shock to me and others. The Superalignment team, which Jan and Ilya led and which I was part of, is being split up and merged into other teams at OpenAI.

See here for Ilya’s parting statement, and here for Jan’s. See here for Zvi Mowshowitz’s perspective and summary of reporting on these events. For additional takes, see pretty much the entire rest of the nerd Internet.

As for me? My two-year leave at OpenAI was scheduled to end this summer anyway. It seems pretty clear that I ought to spend my remaining months at OpenAI simply doing my best for AI safety—for example, by shepherding watermarking toward deployment. After a long delay, I’m gratified that interest in watermarking has spiked recently, not only within OpenAI and other companies but among legislative bodies in the US and Europe.

And afterwards? I’ll certainly continue thinking about how AI is changing the world and how (if at all) we can steer its development to avoid catastrophes, because how could I not think about that? I spent 15 years mostly avoiding the subject, and that now seems like a huge mistake, and probably like enough of that mistake for one lifetime.

So I’ll continue looking for juicy open problems in complexity theory that are motivated by interpretability, or scalable oversight, or dangerous capability evaluations, or other aspects of AI safety—I’ve already identified a few such problems! And without giving up on quantum computing (because how could I?), I expect to reorient at least some of my academic work toward problems at the interface of theoretical computer science and AI safety, and to recruit students who want to work on those problems, and to apply for grants about them. And I’ll presumably continue giving talks about this stuff, and doing podcasts and panels and so on—anyway, as long as people keep asking me to!

And I’ll be open to future sabbaticals or consulting arrangements with AI organizations, like the one I’ve done at OpenAI. But I expect that my main identity will always be as an academic. Certainly I never want to be in a position where I have to speak for an organization rather than myself, or censor what I can say in public about the central problems I’m working on, or sign a nondisparagement agreement or anything of the kind.

I can tell you this: in two years at OpenAI, hanging out at the office and meeting the leadership and rank-and-file engineers, I never once found a smoke-filled room where they laugh at all the rubes who take the talk about “safety” and “alignment” seriously. While my interactions were admittedly skewed toward safetyists, the OpenAI folks I met were invariably smart and earnest and dead serious about the mission of getting AI right for humankind.

It’s more than fair for outsiders to ask whether that’s enough, whether even good intentions can survive bad incentives. It’s likewise fair of them to ask: what fraction of compute and other resources ought to be set aside for alignment research? What exactly should OpenAI do on alignment going forward? What should governments force them and other AI companies to do? What should employees and ex-employees be allowed, or encouraged, to share publicly?

I don’t know the answers to these questions, but if you do, feel free to tell me in the comments!