Rowena He

December 20th, 2023

This fall, I’m honored to have made a new friend: the noted Chinese dissident scholar Rowena He, currently a Research Fellow at the Civitas Institute at UT Austin, and formerly of Harvard, the Institute for Advanced Study at Princeton, the National Humanities Center, and other fine places. I was connected to Rowena by the Harvard computer scientist Harry Lewis.

But let’s cut to the chase, as Rowena tends to do in every conversation. As a teenage girl in Guangdong, Rowena eagerly participated in the pro-democracy protests of 1989, the ones that tragically culminated in the Tiananmen Square massacre. Since then, she’s devoted her life to documenting and preserving the memory of what happened, fighting its deliberate erasure from the consciousness of future generations of Chinese. You can read some of her efforts in her first book, Tiananmen Exiles: Voices of the Struggle for Democracy in China (one of the Asia Society’s top 5 China books of 2014). She’s now spending her time at UT writing a second book.

Unsurprisingly, Rowena’s life’s project has not (to put it mildly) sat well with the Chinese authorities. From 2019, she had a history professorship at the Chinese University of Hong Kong, where she could be close to her research material and to those who needed to hear her message—and where she was involved in the pro-democracy protests that convulsed Hong Kong that year. Alas, you might remember the grim outcome of those protests. Following Hong Kong’s authoritarian takeover, in October of this year, Rowena was denied a visa to return to Hong Kong, and then fired from CUHK because she’d been denied a visa—events that were covered fairly widely in the press. Learning about the downfall of academic freedom in Hong Kong was particularly poignant for me, given that I lived in Hong Kong when I was 13 years old, in some of the last years before the handover to China (1994-1995), and my family knew many people there who were trying to get out—to Canada, Australia, anywhere—correctly fearing what eventually came to pass.

But this is all still relatively dry information that wouldn’t have prepared me for the experience of meeting Rowena in person. Probably more than anyone else I’ve had occasion to meet, Rowena is basically the living embodiment of what it means to sacrifice everything for abstract ideals of freedom and justice. Many academics posture that way; to spend a couple of hours with Rowena is to understand the real deal. You can talk to her about trivialities—food, work habits, how she’s settling into Austin—and she’ll answer, but before too long, the emotion will rise in her voice and she’ll be back to telling you how the protesting students didn’t want to overthrow the Chinese government, but only to help improve it. As if you, too, were a CCP bureaucrat who might imprison her if the truth turned out otherwise. Or she’ll talk about how, when she was depressed, only the faces of the students in Hong Kong who crowded her lectures gave her the will to keep living; or about what she learned by reading the letters that Lin Zhao, a dissident under Mao, wrote in blood in a Chinese jail before she was executed.

This post has a practical purpose. Since her exile from China, Rowena has spent basically her entire life moving from place to place, with no permanent position and no financial security. In the US—a huge country full of people who share Rowena’s goal of exposing the lies of the CCP—there must be an excellent university, think tank, or institute that would offer a permanent position to possibly the world’s preeminent historian of Tiananmen and of the Chinese democracy movement. Though the readership of this blog is heavily skewed toward STEM, maybe that institute is yours. If it is, please get in touch with Rowena. And then I could say this blog had served a useful purpose, even if everything else I wrote for two decades was for naught.

On being wrong about AI

December 13th, 2023

Update (Dec. 17): Some of you might enjoy a 3-hour podcast I recently did with Lawrence Krauss, which was uploaded to YouTube just yesterday. The first hour is about my life and especially childhood (!); the second hour’s about quantum computing; the third hour’s about computational complexity, computability, and AI safety.


I’m being attacked on Twitter for … no, none of the things you think. This time it’s some rationalist AI doomers, ridiculing me for a podcast I did with Eliezer Yudkowsky way back in 2009, one that I knew even then was a piss-poor performance on my part. The rationalists are reminding the world that I said back then that, while I knew of no principle to rule out superhuman AI, I was radically uncertain of how long it would take—my “uncertainty was in the exponent,” as I put it—and that for all I knew, it was plausibly thousands of years. When Eliezer expressed incredulity, I doubled down on the statement.

I was wrong, of course, not to contemplate more seriously the prospect that AI might enter a civilization-altering trajectory, not merely eventually but within the next decade. In this case, I don’t need to be reminded about my wrongness. I go over it every day, asking myself what I should have done differently.

If I were to mount a defense of my past self, it would look something like this:

  1. Eliezer himself didn’t believe that staggering advances in AI were going to happen the way they did, by pure scaling of neural networks. He seems to have thought someone was going to discover a revolutionary “key” to AI. That didn’t happen; you might say I was right to be skeptical of it. On the other hand, the scaling of neural networks led to better and better capabilities in a way that neither of us expected.
  2. For that matter, hardly anyone predicted the staggering, civilization-altering trajectory of neural network performance from roughly 2012 onwards. Not even most AI experts predicted it (and having taken a bunch of AI courses between 1998 and 2003, I was well aware of that). The few who did predict what ended up happening, notably Ray Kurzweil, made lots of other confident predictions (e.g., the Singularity around 2045) that seemed so absurdly precise as to rule out the possibility that they were using any sound methodology.
  3. Even with hindsight, I don’t know of any principle by which I should’ve predicted what happened. Indeed, we still don’t understand why deep learning works, in any way that would let us predict which capabilities will emerge at which scale. The progress has been almost entirely empirical.
  4. Once I saw the empirical case that a generative AI revolution was imminent—sometime during the pandemic—I updated, hard. I accepted what’s turned into a two-year position at OpenAI, thinking about what theoretical computer science can do for AI safety. I endured people, on this blog and elsewhere, confidently ridiculing me for not understanding that GPT-3 was just a stochastic parrot, no different from ELIZA in the 1960s, and that nothing of interest had changed. I didn’t try to invent convoluted reasons why it didn’t matter or count, or why my earlier skepticism had been right all along.
  5. It’s still not clear where things are headed. Many of my academic colleagues express confidence that large language models, for all their impressiveness, will soon hit a plateau as we run out of Internet to use as training data. Sure, LLMs might automate most white-collar work, saying more about the drudgery of such work than about the power of AI, but they’ll never touch the highest reaches of human creativity, which generate ideas that are fundamentally new rather than throwing the old ideas into a statistical blender. Are these colleagues right? I don’t know.
  6. (Added) In 2014, I was seized by the thought that it should now be possible to build a vastly better chatbot than “Eugene Goostman” (which was basically another ELIZA), by training the chatbot on all the text on the Internet. I wondered why the experts weren’t already trying that, and figured there was probably some good reason that I didn’t know.

Having failed to foresee the generative AI revolution a decade ago, how should I fix myself? Emotionally, I want to become even more radically uncertain. If fate is a terrifying monster, which will leap at me with bared fangs the instant I venture any guess, perhaps I should curl into a ball and say nothing about the future, except that the laws of math and physics will probably continue to hold, there will still be war between Israel and Palestine, and people online will still be angry at each other and at me.

But here’s the problem: in saying “for all I know, human-level AI might take thousands of years,” I thought I was being radically uncertain already. I was explaining that there was no trend you could knowably, reliably project into the future such that you’d end up with human-level AI by roughly such-and-such time. And in a sense, I was right. The trouble, with hindsight, was that I placed the burden of proof only on those saying a dramatic change would happen, not on those saying it wouldn’t. Note that this is the same mistake most of the world made with COVID in early 2020.

I would sum up the lesson thus: one must never use radical ignorance as an excuse to default, in practice, to the guess that everything will stay basically the same. Live long enough, and you see that year to year and decade to decade, everything doesn’t stay the same, even though most days and weeks it seems to.

The hard part is that, as soon as you venture a particular way in which the world might radically change—for example, that a bat virus spreading in Wuhan might shut down civilization, or Hamas might attempt a second Holocaust while the vaunted IDF is missing in action and half the world cheers Hamas, or a gangster-like TV personality might threaten American democracy more severely than did the Civil War, or a neural network trained on all the text on the Internet might straightaway start conversing more intelligently than most humans—say that all the prerequisites for one of these events seem to be in place, and you’ll face, not merely disagreement, but ridicule. You’ll face serenely self-confident people who call the entire existing order of the world as witness to your wrongness. That’s the part that stings.

Perhaps the wisest course for me would be to admit that I’m not and have never been a prognosticator, Bayesian or otherwise—and then stay consistent in my refusal, rather than constantly getting talked into making predictions that I’ll later regret. I should say: I’m just someone who likes to draw conclusions validly from premises, and explore ideas, and clarify possible scenarios, and rage against obvious injustices, and not have people hate me (although I usually fail at the last).


The rationalist AI doomers also dislike that, in their understanding, I recently expressed a “p(doom)” (i.e., a probability of superintelligent AI destroying all humans) of “merely” 2%. The doomers’ probabilities, by contrast, tend to range between 10% and 95%—that’s why they’re called “doomers”!

In case you’re wondering, I arrived at my 2% figure via a rigorous Bayesian methodology, of taking the geometric mean of what my rationalist friends might consider to be sane (~50%) and what all my other friends might consider to be sane (~0.1% if you got them to entertain the question at all?), thereby ensuring that both camps would sneer at me equally.
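(For anyone who wants to check the tongue-in-cheek arithmetic: the geometric mean does land where claimed, since sqrt(0.5 × 0.001) = sqrt(0.0005) ≈ 0.022, which rounds to 2%.)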

If you read my post, though, the main thing that interested me was not to give a number, but just to unsettle people’s confidence that they even understand what should count as “AI doom.” As I put it last week on the other Scott’s blog:

To set the record straight: I once gave a ~2% probability for the classic AGI-doom paperclip-maximizer-like scenario. I have a much higher probability for an existential catastrophe in which AI is causally involved in one way or another — there are many possible existential catastrophes (nuclear war, pandemics, runaway climate change…), and many bad people who would cause or fail to prevent them, and I expect AI will soon be involved in just about everything people do! But making a firm prediction would require hashing out what it means for AI to play a “critical causal role” in the catastrophe — for example, did Facebook play a “critical causal role” in Trump’s victory in 2016? I’d say it’s still not obvious, but in any case, Facebook was far from the only factor.

This is not a minor point. That AI will be a central force shaping our lives now seems certain. Our new, changed world will have many dangers, among them that all humans might die. Then again, human extinction has already been on the table since at least 1945, and outside the “paperclip maximizer”—which strikes me as just one class of scenario among many—AI will presumably be far from the only force shaping the world, and chains of historical causation will still presumably be complicated even when they pass through AIs.

I have a dark vision of humanity’s final day, with the Internet (or whatever succeeds it) full of thinkpieces like:

  • Yes, We’re All About to Die. But Don’t Blame AI, Blame Capitalism
  • Who Decided to Launch the Missiles: Was It President Boebert, Kim Jong Un, or AdvisorBot-4?
  • Why Slowing Down AI Development Wouldn’t Have Helped

Here’s what I want to know in the comments section. Did you foresee the current generative AI boom, say back in 2010? If you did, what was your secret? If you didn’t, how (if at all) do you now feel you should’ve been thinking differently? Feel free also to give your p(doom), under any definition of the concept, so long as you clarify which one.

Weird but cavity-free

December 8th, 2023

Over at Astral Codex Ten, the other Scott A. blogs in detail about a genetically engineered mouth bacterium that metabolizes sugar into alcohol rather than acid, thereby (assuming it works as intended) ending dental cavities forever. Despite good results in trials with hundreds of people, this bacterium has spent decades in FDA approval hell. It’s in the news because Lantern Bioworks, a startup founded by rationalists, is now trying again to legalize it.

Just another weird idea that will never see the light of day, I’d think … if I didn’t have these bacteria in my mouth right now.

Here’s how it happened: I’d read earlier about these bacteria, and was venting to a rationalist of my acquaintance about the blankfaces who keep that and a thousand other medical advances from ever reaching the public, and who sleep soundly at night, congratulating themselves for their rigor in enforcing nonsensical rules.

“Are you serious?” the rationalist asked me. “I know the people in Berkeley who can get you into the clinical trial for this.”

This was my moment of decision. If I agreed to put unapproved bacteria into my mouth on my next trip to Berkeley, I could live my beliefs and possibly never get cavities again … but on the other hand, friends and colleagues would think I was weird when I told them.

Then again, I mused, four years ago most people would think you were weird if you said that a pneumonia spreading at a seafood market in Wuhan was about to ignite a global pandemic, and also that chatbots were about to go from ELIZA-like jokes to the technological powerhouses transforming civilization.

And so it was that I found myself brushing a salty, milky-white substance onto my teeth. That was last month. I … haven’t had any cavities since, for what it’s worth? Nor have I felt drunk, despite the ever-so-slightly elevated ethanol in my system. Then again, I’m not even 100% sure that the bacteria took, given that (I confess) the germy substance strongly triggered my gag reflex.

Anyway, read the other Scott’s post, and then ask yourself: will you try this, once you can? If not, is it just because it seems too weird?

Update: See a Hacker News thread where the merits of this new treatment are debated.

Staggering toward quantum fault-tolerance

December 7th, 2023

Happy Hanukkah! I’m returning to Austin from a Bay Area trip that included the annual Q2B (Quantum 2 Business) conference. This year, for the first time, I opened the conference, with a talk on “The Future of Quantum Supremacy Experiments,” rather than closing it with my usual ask-me-anything session.


The biggest talk at Q2B this year was yesterday’s announcement, by a Harvard/MIT/QuEra team led by Misha Lukin and Vlad Vuletic, that they’ve demonstrated “useful” quantum error-correction, for some definition of “useful,” in neutral atoms (see here for the Nature paper). To drill down a bit into what they did:

  • They ran experiments with up to 280 physical qubits, which simulated up to 48 logical qubits.
  • They demonstrated surface codes of varying sizes as well as color codes.
  • They performed over 200 two-qubit transversal gates on their encoded logical qubits.
  • They did a couple of demonstrations, including the creation and verification of an encoded GHZ state and (more impressively) an encoded IQP circuit, whose outputs were validated using the Linear Cross-Entropy Benchmark (LXEB); a toy sketch of how such a score is computed appears just after this list.
  • Crucially, they showed that in their system, the use of logically encoded qubits produced a modest “net gain” in success probability compared to not using encoding, consistent with theoretical expectations (though see below for the caveats). With a 48-qubit encoded IQP circuit with a few hundred gates, for example, they achieved an LXEB score of 1.1, compared to a record of ~1.01 for unencoded physical qubits.
  • At least with their GHZ demonstration and with a particular decoding strategy (about which more later), they showed that their success probability improves with increasing code size.
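Since the LXEB numbers are doing a lot of work in the bullet points above, here’s a minimal sketch of how such a score is computed. This is my own toy illustration, using one common convention for the benchmark (2^n times the average ideal probability of the sampled strings), which may differ in details from the exact figure of merit in the paper. Under this convention, uniform random sampling scores about 1, so scores like 1.1 versus 1.01 are statements about how much of the ideal distribution’s signal survives the noise.

```python
import numpy as np

# Toy sketch of the linear cross-entropy benchmark (my own illustration,
# under one common convention, not necessarily the exact one in the paper):
# score a batch of measured bitstrings by the average ideal probability of
# the strings obtained, rescaled by 2^n. Uniform random sampling then scores
# about 1, and sampling from the ideal output distribution scores higher.

def lxeb_score(ideal_probs: dict, samples: list, n_qubits: int) -> float:
    """ideal_probs maps each n-bit string to its ideal output probability;
    samples are the bitstrings actually measured on the device."""
    avg_p = np.mean([ideal_probs.get(s, 0.0) for s in samples])
    return (2 ** n_qubits) * avg_p

# Tiny example with n = 2: a uniform sampler scores exactly 1.0.
ideal = {"00": 0.5, "01": 0.3, "10": 0.15, "11": 0.05}
uniform_samples = ["00", "01", "10", "11"] * 250
print(lxeb_score(ideal, uniform_samples, n_qubits=2))   # 1.0
faithful_samples = ["00"] * 500 + ["01"] * 300 + ["10"] * 150 + ["11"] * 50
print(lxeb_score(ideal, faithful_samples, n_qubits=2))  # 1.46
```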

Here are what I currently understand to be the limitations of the work:

  • They didn’t directly demonstrate applying a universal set of 2- or 3-qubit gates to their logical qubits. This is because they were limited to transversal gates, and the Eastin-Knill Theorem shows that transversal gates can’t be universal. On the other hand, they were able to simulate up to 48 CCZ gates, which do yield universality, by using magic initial states.
  • They didn’t demonstrate the “full error-correction cycle” on encoded qubits, where you’d first correct errors and then proceed to apply more logical gates to the corrected qubits. For now it’s basically just: prepare encoded qubits, then apply transversal gates, then measure, and use the encoding to deal with any errors.
  • With their GHZ demonstration, they needed to use what they call “correlated decoding,” where the code blocks are decoded in conjunction with each other rather than separately, in order to get good results.
  • With their IQP demonstration, they needed to postselect on the event that no errors occurred (!!), which happened about 0.1% of the time with their largest circuits. This just further underscores that they haven’t yet demonstrated a full error-correction cycle.
  • They don’t claim to have demonstrated quantum supremacy with their logical qubits—i.e., nothing that’s too hard to simulate using a classical computer. (On the other hand, if they can really do 48-qubit encoded IQP circuits with hundreds of gates, then a convincing demonstration of encoded quantum supremacy seems like it should follow in short order.)

As always, experts are strongly urged to correct anything I got wrong.

I should mention that this might not be the first experiment to get a net gain from the use of a quantum error-correcting code: Google might or might not have gotten one in an experiment that they reported in a Nature paper from February of this year (for discussion, see a comment by Robin). In any case, though, the Google experiment just encoded the qubits and measured them, rather than applying hundreds of logical gates to the encoded qubits. Quantinuum also previously reported an experiment that at any rate got very close to net gain (again see the comments for discussion).

Assuming the result stands, I think it’s plausibly the top experimental quantum computing advance of 2023 (coming in just under the deadline!). We clearly still have a long way to go until “actually useful” fault-tolerant QC, which might require thousands of logical qubits and millions of logical gates. But this is already beyond what I expected to be done this year, and (to use the AI doomers’ lingo) it “moves my timelines forward” for quantum fault-tolerance. It should now be possible, among other milestones, to perform the first demonstrations of Shor’s factoring algorithm with logically encoded qubits (though still to factor tiny numbers, of course). I’m slightly curious to see how Gil Kalai and the other quantum computing skeptics wiggle their way out now, though I’m absolutely certain they’ll find a way! Anyway, huge congratulations to the Harvard/MIT/QuEra team for their achievement.


In other QC news, IBM got a lot of press for announcing a 1000-qubit superconducting chip a few days ago, although I don’t yet know what two-qubit gate fidelities they’re able to achieve. Anyone with more details is encouraged to chime in.


Yes, I’m well-aware that 60 Minutes recently ran a segment on quantum computing, featuring the often-in-error-but-never-in-doubt Michio Kaku. I wasn’t planning to watch it unless events force me to.


Do any of you have strong opinions on whether, once my current contract with OpenAI is over, I should focus my research efforts more on quantum computing or on AI safety?

On the one hand: I’m now completely convinced that AI will transform civilization and daily life in a much deeper way and on a shorter timescale than QC will — and that’s assuming full fault-tolerant QCs eventually get built, which I’m actually somewhat optimistic about (a bit more than I was last week!). I’d like to contribute if I can to helping the transition to an AI-centric world go well for humanity.

On the other hand: in quantum computing, I feel like I’ve somehow been able to correct the factual misconceptions of 99.99999% of people, and this is a central source of self-confidence about the value I can contribute to the world. In AI, by contrast, I feel like at least a thousand times more people understand everything I do, and this causes serious self-doubt about the value and uniqueness of whatever I can contribute.


Update (Dec. 8): A different talk on the Harvard/MIT/QuEra work—not the one I missed at Q2B—is now on YouTube.

More Updates!

November 26th, 2023

Yet Another Update (Dec. 5): For those who still haven’t had enough of me, check me out on Curt Jaimungal’s Theories of Everything Podcast, talking about … err, computational complexity, the halting problem, the time hierarchy theorem, free will, Newcomb’s Paradox, the no-cloning theorem, interpretations of quantum mechanics, Wolfram, Penrose, AI, superdeterminism, consciousness, integrated information theory, and whatever the hell else Curt asks me about. I strongly recommend watching the video at 2x speed to smooth over my verbal infelicities.

In answer to a criticism I’ve received: I agree that it would’ve been better for me, in this podcast, to describe Wolfram’s “computational irreducibility” as simply “the phenomenon where you can’t predict a computation faster than by running it,” rather than also describing it as a “discrete analog of chaos / sensitive dependence on initial conditions.” (The two generally co-occur in the systems Wolfram talks about, but are not identical.)

On the other hand: no, I do not recognize that Wolfram deserves credit for giving a new name (“computational irreducibility”) to a thing that was already well-understood in the relevant fields.  This is particularly true given that

(1) the earlier understanding of the halting problem and the time hierarchy theorem was rigorous, giving us clear criteria for proving when computations can be sped up and when they can’t be, and

(2) Wolfram replaced it with handwaving (“well, I can’t see how this process could be predicted faster than by running it, so let’s assume that it can’t be”).

In other words, the earlier understanding was not only decades before Wolfram, it was superior.
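To make “rigorous” concrete, here is one standard form of the deterministic time hierarchy theorem (quoted from memory, so check a textbook for the exact hypotheses): for any time-constructible function f, TIME(o(f(n)/log f(n))) is strictly contained in TIME(f(n)). In other words, there provably exist problems solvable in time f(n) that cannot be solved appreciably faster, with no appeals to intuition required.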

It would be as if I announced my new “Principle of Spacetime Being Like A Floppy Trampoline That’s Bent By Gravity,” and then demanded credit because even though Einstein anticipated some aspects of my principle with his complicated and confusing equations, my version was easier for the layperson to intuitively understand.

I’ll reopen the comments on this post, but only for comments on my Theories of Everything podcast.


Another Update (Dec. 1): Quanta Magazine now has a 20-minute explainer video on Boolean circuits, Turing machines, and the P versus NP problem, featuring yours truly. If you already know these topics, you’re unlikely to learn anything new, but if you don’t know them, I found this to be a beautifully produced introduction with top-notch visuals. Better yet—and unusually for this sort of production—everything I saw looked entirely accurate, except that (1) the video never explains the difference between Turing machines and circuits (i.e., between uniform and non-uniform computation), and (2) the video also never clarifies where the rough identities “polynomial = efficient” and “exponential = inefficient” hold or fail to hold.


For the many friends who’ve asked me to comment on the OpenAI drama: while there are many things I can’t say in public, I can say I feel relieved and happy that OpenAI still exists. This is simply because, when I think of what a world-leading AI effort could look like, many of the plausible alternatives strike me as much worse than OpenAI, a company full of thoughtful, earnest people who are at least asking the right questions about the ethics of their creations, and who—the real proof that they’re my kind of people—are racked with self-doubts (as the world has now spectacularly witnessed). Maybe I’ll write more about the ethics of self-doubt in a future post.

For now, the narrative that I see endlessly repeated in the press is that last week’s events represented a resounding victory for the “capitalists” and “businesspeople” and “accelerationists” over the “effective altruists” and “safetyists” and “AI doomers,” or even that the latter are now utterly discredited, raw egg dripping from their faces. I see two overwhelming problems with that narrative. The first problem is that the old board never actually said that it was firing Sam Altman for reasons of AI safety—e.g., that he was moving too quickly to release models that might endanger humanity. If the board had said anything like that, and if it had laid out a case, I feel sure the whole subsequent conversation would’ve looked different—at the very least, the conversation among OpenAI’s employees, which proved decisive to the outcome. The second problem with the capitalists vs. doomers narrative is that Sam Altman and Greg Brockman and the new board members are also big believers in AI safety, and conceivably even “doomers” by the standards of most of the world. Yes, there are differences between their views and those of Ilya Sutskever and Adam D’Angelo and Helen Toner and Tasha McCauley (as, for that matter, there are differences within each group), but you have to drill deeper to articulate those differences.

In short, it seems to me that we never actually got a clean test of the question that most AI safetyists are obsessed with: namely, whether or not OpenAI (or any other similarly constituted organization) has, or could be expected to have, a working “off switch”—whether, for example, it could actually close itself down, competition and profits be damned, if enough of its leaders or employees became convinced that the fate of humanity depended on its doing so. I don’t know the answer to that question, but what I do know is that you don’t know either! If there’s to be a decisive test, then it remains for the future. In the meantime, I find it far from obvious what will be the long-term effect of last week’s upheavals on AI safety or the development of AI more generally. For godsakes, I couldn’t even predict what was going to happen from hour to hour, let alone the aftershocks years from now.


Since I wrote a month ago about my quantum computing colleague Aharon Brodutch, whose niece, nephews, and sister-in-law were kidnapped by Hamas, I should share my joy and relief that the Brodutch family was released today as part of the hostage deal. While it played approximately zero role in the release, I feel honored to have been able to host a Shtetl-Optimized guest post by Aharon’s brother Avihai. Meanwhile, over 180 hostages remain in Gaza. Like much of the world, I fervently hope for a ceasefire—so long as it includes the release of all hostages and the end of Hamas’s ability to repeat the Oct. 7 pogrom.


Greta Thunberg is now chanting to “crush Zionism” — i.e., taking time away from saving civilization to ensure that half the world’s remaining Jews will be either dead or stateless in the civilization she saves. Those of us who once admired Greta, and experience her new turn as a stab to the gut, might be tempted to drive SUVs, fly business class, and fire up wood-burning stoves just to spite her and everyone on earth who thinks as she does.

The impulse should be resisted. A much better response would be to redouble our efforts to solve the climate crisis via nuclear power, carbon capture and sequestration, geoengineering, cap-and-trade, and other effective methods that violate Greta’s scruples and for which she and her friends will receive and deserve no credit.

(On Facebook, a friend replied that an even better response would be to “refuse to let people that we don’t like influence our actions, and instead pursue the best course of action as if they didn’t exist at all.” My reply was simply that I need a response that I can actually implement!)

Updates!

November 18th, 2023

No, I don’t know what happened with Sam Altman, beyond what’s being reported all over the world’s press, which I’ve been reading along with everyone else. Ilya Sutskever does know, and I talk to Ilya nearly every week. But I know Ilya well enough to know that whatever he’d tell me about this, he’d also tell the world. It feels weird to be so close to the biggest news story on the planet, and yet at the same time so far from it. My current contract with OpenAI is set to expire this summer. Until then, and afterwards, I remain just as interested in figuring out what theoretical computer science can contribute to AI safety as I was yesterday morning.

My friend, theoretical computer science colleague, and now OpenAI colleague Boaz Barak has coauthored a paper giving a general class of attacks against watermarking methods for large language models—100% consistent with the kinds of attacks we already knew about and were resigned to, but still good to spell out at a formal level. I hope to write more about it in the future.

Here’s a recent interview with me in Politico, touching on quantum computing, AI, and more.

And if that’s not enough of me, here’s a recent podcast that I did with Theo Jaffee, touching on quantum computing, P vs. NP, AI alignment, David Deutsch, and Twitter.

Whatever feelings anyone has about it, the new University of Austin (not to be confused with the University of Texas at Austin, where I work) is officially launching. And they’re hiring! People who are interested in STEM positions there should contact David Ruth.

I forgot to link to it when it came out more than a month ago—a lot has happened in the meantime!—but Dalzell et al. put up a phenomenal 337-page survey of quantum algorithms, focusing relentlessly on the crucial question of whether there’s actually an end-to-end speedup over the best known classical algorithm for each given task. In countless situations where I would just scream “no, the hypesters are lying to you, this is BS,” Dalzell et al. take dozens of polite, careful, and highly technical pages to spell out why.

Besides AI intrigue, this past week might be remembered for a major breakthrough in classical complexity theory: solving arbitrary compression problems via a nonuniform algorithm (i.e., a family of Boolean circuits) that takes only 2^{4n/5} time, rather than the 2^n time that would be needed for brute force. See this paper by Hirahara, Ilango, and Williams, as well as this independent one by Mazor and Pass.

New travel/podcast/speaking policy

November 15th, 2023

I’ve been drowning in both quantum-computing-related and AI-related talks, interviews, podcasts, panels, and so on. These activities have all but taken over my days, leaving virtually no time for the actual research (especially once one factors in time for family, and time for getting depressed on social media). I’ve let things reach this point partly because I really do love talking about things that interest me, but partly also because I never learned how to say no. I have no choice but to cut back.

So, the purpose of this post is for me to link people to it whenever I get a new request. From now on, I agree only under the following conditions:

  1. For travel: you reimburse all travel costs. I don’t have to go through a lengthy reimbursement process, but can simply forward you my receipts, with no time limit for doing so.
  2. You don’t require me to upload my slides in advance, or provide readings or other “extra” materials. (Title and abstract a week or two before the talk are reasonable.)
  3. You don’t require me to schedule a “practice session” or “orientation session” before the main event.
  4. For podcasts and virtual talks: you don’t require me to set up any special equipment (including headphones or special cameras), or install any special software.
  5. If you’re a for-profit company: you compensate me for the time.
  6. For podcasts and virtual talks: unless specified otherwise, I am in Austin, TX, in the US Central time zone. You email me a reminder the day before with the time in US Central, and the link. Otherwise I won’t be held responsible in the likely event that we get it wrong.

The Tragedy of SBF

November 6th, 2023

So, Sam Bankman-Fried has been found guilty on all counts, after the jury deliberated for just a few hours. His former inner circle all pointed fingers at him, in exchange for immunity or reduced sentences, and their testimony doomed him. The most dramatic was the testimony of Caroline Ellison, the CEO of Alameda Research (to which FTX gave customer deposits) and SBF’s sometime girlfriend. The testimony of Adam Yedidia, my former MIT student, whom Shtetl-Optimized readers might remember for our paper proving that the value of the 8000th Busy Beaver number is independent of the axioms of set theory, also played a significant role. (According to news reports, Adam testified about confronting SBF during a tennis match over $8 billion in missing customer deposits.)

Just before the trial, I read Michael Lewis’s much-discussed book about what happened, Going Infinite. In the press, Lewis has generally been savaged for getting too close to SBF and for painting too sympathetic a portrait of him. The central problem, many reviewers explained, is that Lewis started working on the book six months before the collapse of FTX—when it still seemed to nearly everyone, including Lewis, that SBF was a hero rather than a villain. Thus, Going Infinite reads like a tale of triumph that unexpectedly veers at the end into tragedy, rather than the book Lewis obviously should’ve written: a tragedy from the start.

Me? I thought Going Infinite was great. And it was great partly because of, rather than in spite of, Lewis not knowing how the story would turn out when he entered it. The resulting document makes a compelling case for the radical contingency and uncertainty of the world—appropriate given that the subject, SBF, differed from those around him in large part by seeing everything probabilistically all the time (infamously, including ethics).

In other contexts, serious commentators love to warn against writing “Whig history,” the kind where knowledge of the outcome colors the whole. With the SBF saga, though, there seems to be a selective amnesia, where all the respectable people now always knew that FTX—and indeed, cryptocurrency, utilitarianism, and Effective Altruism in their entirety—were all giant scams from the beginning. Even if they took no actions based on that knowledge. Even if the top crypto traders and investors, who could’ve rescued or made fortunes by figuring out that FTX was on the verge of collapse, didn’t. Even if, when people were rightly suspicious about FTX, it still mostly wasn’t for the right reasons.

Going Infinite takes the radical view that what insiders and financial experts didn’t know at the time, the narrative mostly shouldn’t know either. It should show things the way they seemed then, so that readers can honestly ponder the question: faced with this evidence, when would I have figured it out?


Even if Michael Lewis is by far the most sympathetic person to have written about SBF post-collapse, he still doesn’t defend him, not really. He paints a picture of someone who could totally, absolutely have committed the crimes for which he’s now been duly convicted. But—and this was the central revelation for me—Lewis also makes it clear that SBF didn’t have to.

With only “minor” changes, that is, SBF could still be running a multibillion-dollar cryptocurrency empire to this day, without lying, stealing, or fraud, and without the whole thing being especially vulnerable to collapse. He could have donated his billions to pandemic prevention and AI risk and stopping Trump. He conceivably even could’ve done more good, in one or more of those ways, than anyone else in the world was doing. He didn’t, but he came “close.” The tragedy is all the greater, some people might even say that SBF’s culpability (or the rage we should feel at him, or at fate) is all the greater, because of how close he came.

I’m not a believer in historical determinism. I’ve argued before on this blog that if Yitzhak Rabin hadn’t been killed—if he’d walked down the staircase a little differently, if he’d survived the gunshot—there would likely now be peace between Israel and Palestine. For that matter: if Hitler hadn’t been born, if he’d been accepted to art school, if he’d been shot while running between trenches in WWI, there would probably have been no WWII, and with near-certainty no Holocaust. Likewise, if not for certain contingent political developments of the 1970s (especially, the turn away from nuclear power), the world wouldn’t now face the climate crisis.

Maybe there’s an arc of the universe that bends toward horribleness. Or maybe someone has to occupy the freakishly horrible branches of the wavefunction, and that someone happens to be you and me. Or maybe the freakishly improbable good (for example, the availability of Winston Churchill and Alan Turing to win WWII) actually balances out the freakishly improbable bad in the celestial accounting, if only we could examine the books. Whatever the case, again and again civilization’s worst catastrophes were at least proximately caused by seemingly minor events that could have turned out differently.

But what’s the argument that FTX, Alameda, and SBF’s planet-sized philanthropic mission “could have” succeeded? It rests on three planks:

First, FTX was actually a profitable business till the end. It brought in hundreds of millions per year—meaning fees, not speculative investments—and could’ve continued doing so more-or-less indefinitely. That’s why even FTX’s executives were shocked when FTX became unable to honor customer withdrawals: FTX made plenty of money, so where the hell did it all go?

Second: we now have the answer to that mystery. John Ray, the grizzled CEO who managed FTX’s bankruptcy, has successfully recovered more than 90% of the customer funds that went missing in 2022! The recovery was complicated, enormously, by Ray’s refusal to accept help from former FTX executives, but ultimately the money was still there, stashed under the virtual equivalent of random sofa cushions.

Yes, the funds had been illegally stolen from FTX customer deposits—according to trial testimony, at SBF’s personal direction. Yes, the funds had then been invested in thousands of places—incredibly, with no one person or spreadsheet or anything really keeping track. Yes, in the crucial week, FTX was unable to locate the funds in time to cover customer withdrawals. But holy crap, the rockets’ red glare, the bombs bursting in air—the money was still there! Which means: if FTX had just had better accounting (!), the entire collapse might not have happened. This is a crucial part of the story that’s gotten lost, which is why I’m calling so much attention to it now. It’s a part that I imagine should be taught in accounting courses from now till the end of time. (“This double-entry bookkeeping might seem unsexy, but someday it could mean the difference between you remaining the most sought-after wunderkind-philanthropist in the world, and you spending the rest of your life in prison…”)

Third, SBF really was a committed utilitarian, as he apparently remains today. As a small example, he became a vegan after my former student Adam Yedidia argued him into it, even though giving up chicken was extremely hard for him. None of it was an act. It was not a cynical front for crime, or for the desire to live in luxury (something SBF really, truly seems not to have cared about, although he indulged those around him who did). When I blogged about SBF last fall, I mused that I’d wished I’d met him back when he was an undergrad at MIT and I was a professor there, so that I could’ve tried to convince him to be more risk-averse: for example, to treat utility as logarithmic rather than linear in money. To my surprise, I got bitterly attacked for writing that: supposedly, by blaming a “merely technical” failure, I was excusing SBF’s far more important moral failure.

But reading Lewis confirmed for me that it really was all part of the same package. (See also here for Sarah Constantin’s careful explanation of SBF’s failure to understand the rationale for the Kelly betting criterion, and how many of his later errors were downstream of that.) Not once but over and over, SBF considers hypotheticals of the form “if this coin lands heads then the earth gets multiplied by three, while if it lands tails then the earth gets destroyed”—and always, every time, he chooses to flip the coin. SBF was so committed to double-or-nothing that he’d take what he saw as a positive-expected-utility gamble even when his customers’ savings were on the line, even when all the future good he could do for the planet as well as the reputation of Effective Altruism were on the line, even when his own life and freedom were on the line.
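To make the contrast concrete, here’s a toy simulation, entirely my own illustration rather than anything from Lewis’s book or SBF’s actual reasoning: an agent who maximizes expected wealth happily goes all-in, over and over, on a coin flip that triples their money on heads and wipes them out on tails, since each flip multiplies expected wealth by 1.5. An agent who maximizes expected log-wealth (the Kelly prescription) refuses to stake everything, because the typical outcome of the all-in strategy is ruin.

```python
import random

# Toy illustration (mine, not from Lewis or SBF): go all-in, ten times in a
# row, on a flip that triples your stake with probability 1/2 and zeroes it
# otherwise. Expected wealth after 10 flips is 1.5**10 ≈ 58, but the chance
# of surviving all 10 flips is only 0.5**10 ≈ 0.001, so almost every run
# ends in ruin. A Kelly / log-utility bettor, who maximizes E[log(wealth)],
# would never stake the whole bankroll.

def all_in_gambler(n_flips: int = 10, start: float = 1.0) -> float:
    wealth = start
    for _ in range(n_flips):
        wealth = wealth * 3 if random.random() < 0.5 else 0.0
    return wealth

trials = [all_in_gambler() for _ in range(100_000)]
mean_wealth = sum(trials) / len(trials)               # close to 1.5**10 ≈ 58
ruined = sum(w == 0.0 for w in trials) / len(trials)  # ≈ 0.999

print(f"mean wealth ≈ {mean_wealth:.1f}, fraction ruined ≈ {ruined:.3f}")
```

The mean is propped up by the vanishingly few runs in which the gambler wins every single flip; in all the others, there is nothing left.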

On the one hand, you have to give that level of devotion to a principle its grudging due. On the other hand, if “the Gambler’s Ruin fallacy is not a fallacy” is so central to someone’s worldview, then how shocked should we be when he ends up … well, in Gambler’s Ruin?

The relevance is that, if SBF’s success and downfall alike came from truly believing what he said, then I’m plausibly correct that this whole story would’ve played out differently, had he believed something slightly different. And given the role of serendipitous conversations in SBF’s life (e.g., one meeting with William MacAskill making him an Effective Altruist, one conversation with Adam Yedidia making him a vegan), I find it plausible that a single conversation might’ve set him on the path to a less brittle, more fault-tolerant utilitarianism.


Going Infinite shows signs of being finished in a hurry, in time for the trial. Sometimes big parts of the story seem skipped over without comment; we land without warning in a later part and have to reorient ourselves. There’s almost nothing about the apparent rampant stimulant use at FTX and the role it might have played, nor does Lewis ever directly address the truth or falsehood of the central criminal charge against SBF (namely, that he ordered his subordinates to move customer deposits from FTX’s control to Alameda’s). Rather, the book has the feeling of a series of magazine articles, as Lewis alights on one interesting topic after the next: the betting games that Jane Street uses to pick interns (SBF discovered that he excelled at those games, unfortunately for him and for the world). The design process (such as it was) for FTX’s never-built Bahamian headquarters. The musings of FTX’s in-house psychotherapist, George Lerner. The constant struggles of SBF’s personal scheduler to locate SBF, get his attention, and predict where he might go next.

When it comes to explaining cryptocurrency, Lewis amusingly punts entirely, commenting that the reader has surely already read countless “blockchain 101” explainers that seemed to make sense at the time but didn’t really stick, and that in any case, SBF himself (by his own admission) barely understood crypto even as he started trading it by the billions.

Anyway, what vignettes we do get are so vividly written that they’ll clearly be a central part of the documentary record of this episode—as anyone who’d read any of Lewis’s previous books could’ve predicted.

And for anyone who accuses me or Lewis of excusing SBF: while I can’t speak for Lewis, I don’t even excuse myself. For the past 15 years, I should have paid more attention to cryptocurrency, to the incredible ease (in hindsight!) with which almost anyone could’ve ridden this speculative bubble in order to direct billions of dollars toward the salvation of the human race. If I wasn’t going to try it myself, then at least I should’ve paid attention to who else in my wide social circle was trying it. Who knows, maybe I could’ve discovered something about the extreme financial, moral, and legal risks those people were taking on, and then I could’ve screamed at them to turn the ship and avoid those risks. Instead, I spent the time proving quantum complexity theorems, and raising my kids, and teaching courses, and arguing with commenters on this blog. I was too selfish to enter the world of crypto billionaires.

The floorboard test

October 30th, 2023

Last night a colleague sent me a gracious message, wishing for the safe return of the hostages and expressing disgust over the antisemites in my comment section. I wanted to share my reply.


You have no idea how much this means to me.

I’ve just been shaking with anger after an exchange with the latest antisemite to email me. After I asked her whether she really wished for my family and friends in Israel to be murdered, she said that if I “read a fucking book that’s not about computers,” I would understand that “violence is the language of the oppressed.”

The experience of the last few weeks has radicalized me like nothing else in life. I’m not the same person as I was in September. My priorities are not the same. 48% of Americans aged 18-24 now say that they sympathize with Hamas more than Israel. Not with the Palestinian people, with Hamas. That’s nearly half of the next generation of my own country that might want me and my loved ones to be slaughtered.

I feel like the last thread connecting me to my previous life are the people like you, who write to me with kindness and understanding, and who make me think: there are Gentiles who would’ve hidden me under the floorboards when the SS showed up.

Be well.
—Scott

Bring the Brodutch family home

October 21st, 2023

Another Update (Oct. 27): At Boaz Barak’s Windows on Theory blog, you can now find a petition signed by 63 prize-winning mathematicians and computer scientists—one guess which one is alphabetically first—asking that the kidnapped Israeli children be returned home. I feel confident that the pleas of Fields Medalists and Turing Award laureates will be what finally makes Hamas see the light.


Update: Every time another antisemite writes to me to excuse, justify, or celebrate Hamas’s orgy of murder and kidnapping, I make another donation to the Jewish Federations of North America to help Israeli terror victims, listing the antisemite’s name or alias in the “in honor of” field. By request, I’m sharing the link in case anyone else is also interested to donate.


Aharon Brodutch is a quantum computing researcher who I’ve known for nearly a decade. He’s worked at the Institute for Quantum Computing in Waterloo, the University of Toronto, his own startup company, and most recently IonQ. He’s thought about quantum discord, the one-clean-qubit model, weak measurements, and other topics that have long been of interest on this blog. He’s also on the paper giving an adaptive attack against Wiesner’s quantum money scheme—an application of the Elitzur-Vaidman bomb tester so simple and beautiful that I teach it in my undergrad Intro to Quantum Information Science class.

Yesterday I learned that Aharon’s sister-in-law Hagar, his niece Ofri, and his two nephews Yuval and Uriah were kidnapped by Hamas. Like Jews around the world, I’ve spent the last two weeks endlessly learning the names, faces, and life stories of hundreds of Israeli civilians who were murdered or kidnapped—and yet this news, directly affecting a colleague of mine, still managed to hit me in the gut.

I’m gratified that much of the world shares my revulsion at Hamas’s pogrom—the worst violence against Jews since the Holocaust—and joins me in wishing for the safe return of the 200 hostages as well as the destruction of Hamas, and its replacement by a governing authority that actually cares about the welfare of the Palestinian people. I’m glad that even many who call themselves “anti-Israel” or “anti-Zionist” have the basic human decency to be disgusted by Hamas. Some of the most touching messages of love and support that I got came from my Iranian friends.

All the same, for a whole week, my inbox and my blog moderation queue have been filling up with missives from people who profess to be thrilled, delighted, exhilarated by what Hamas did. They tell me that the young people at the Nova music festival had it coming, and that they hope Hamas burns the settler-colonialist Zionist entity to the ground. While some of these people praise Adolf Hitler, others parrot social-justice slogans. One of these lovely correspondents claimed that virtually all of his academic colleagues in history and social science share his attitudes, and said I had no right to lecture him as a mere computer scientist.

Meanwhile, as quantum computing founder David Deutsch has documented on his Twitter, in cities and university campuses around the world, posters with the names and faces of the children kidnapped by Hamas—just the names and faces of the kidnapped children (!)—are being torn down by anti-Israel activists. The cognitive dissonance involved in such an act is astounding, but also deeply informative about the millennia-old forces at work here.

One way I’ve been coping with this is, every time a Jew-hater emails me, I make another donation to help the victims in Israel, specifying that my donation is being made in the Jew-hater’s name. But another way to cope is simply to use this blog to make what’s at stake visceral and explicit to my readers. I got in touch with Aharon, and he asked me to share the guest post below, written by his brother Avihai. I said it was the least I could do. –Scott Aaronson


Guest Post by Avihai Brodutch

My name is Avihai Brodutch. My wife Hagar, along with our three children Ofri, Yuval, and Uriah, are being held hostage by Hamas. I want to share this message with people around the world: Children should never be involved in war. My wife and family should not be held hostage and they need to be released immediately.

Here’s my story:

I am an Israeli from Kfar Aza. My wife and I chose to build our home close to the border with Gaza, hoping for peace and relying on the Israeli government to protect our children. It was a beautiful home. Hagar, the love of my life, spent her entire life in Kibbutz Gvulot near the border. Our daughter Ofri, who is 10 years old, is an amazing, fun-loving girl who brings joy to everyone around her. Our son Yuval, 8, is smart, kind, and loving. And our youngest, Uriah, is the cutest little rascal. He is four and a half years old.  All four of them are in the hands of Hamas, and I hope they are at least together.

On October 7th, our family’s life was shattered by a brutal attack. Hamas terrorists infiltrated Kfar Aza early in the morning while I was away from home. Security alerts are common in the kibbutz, and we all thought this one was no different until Hagar heard a knock on the door and saw the neighbor’s 4-year-old girl, Avigail, covered in blood. Both her parents had been murdered, and Hagar took Avigail in. She locked the door, and they all hid in the house. Soon, the entire kibbutz was filled with the sounds of bullets and bombs.

I maintained contact with Hagar, who informed me that she had secured the door and was hiding with the children. We communicated quietly through text messages until she messaged, “they are coming in.” At that point, we lost communication, and I was convinced that I had lost my wife and three children. I do not want to describe the images that raced through my mind. A day later, I received word that a neighbor had witnessed them being captured and taken to Gaza. My family was alive, and this was the happiest news I’ve ever received. However, I knew they were far from being safe.

I am asking all the governments in the world: do the right thing and help bring my family back to safety. This is not controversial; it is obvious to every human that the first priority should be bringing the families back home.