Quantum fault-tolerance milestones dropping like atoms

September 10th, 2024

Update: I’d been wavering—should I vote for the terrifying lunatic, ranting about trans criminal illegal aliens cooking cat meat, or for the nice woman constantly making faces as though the lunatic was completely cracking her up? But when the woman explicitly came out in favor of AI and quantum computing research … that really sealed the deal for me.


Between roughly 2001 and 2018, I was happy to have done some nice things in quantum computing theory, from the quantum lower bound for the collision problem to the invention of shadow tomography.  I hope that’s not the end of it.  QC research brought me about as much pleasure as anything in life did.  So I hope my tired brain can be revved up a few more times, between now and whenever advances in AI or my failing health or the collapse of civilization makes the issue moot. If not, though, there are still many other quantum activities to fill my days: teaching (to which I’ve returned after two years), advising my students and postdocs, popular writing and podcasts and consulting, and of course, learning about the latest advances in quantum computing so I can share them with you, my loyal readers.

On that note, what a time it is in QC!  Basically, one experimental milestone after another that people have talked about since the 1990s is finally being achieved, to the point where it’s become hard to keep up with it all. Briefly though:

A couple weeks ago, the Google group announced an experiment that achieved net gain from the use of Kitaev’s surface code, using 101 physical qubits to encode 1 logical qubit. The headline result here is that, in line with theory, they see the performance improve as they pass to larger codes with more physical qubits and higher distance. Their best demonstrated code has a distance of 7, which is enough to get “beyond break-even” (their logical qubit lasts more than twice as long as the underlying physical qubits), and is also enough that any future improvements to the hardware will get amplified a lot. With superconducting qubits, one is (alas) still limited by how many one can cram onto a single chip. On paper, though, they say that scaling the same setup to a distance-27 code with ~1500 physical qubits would get them down to an error rate of 10^-6, good enough to be a building block in a future fault-tolerant QC. They also report correlated bursts of errors that come about once per hour, from a still-unknown source that appears not to be cosmic rays. I hope it’s not Gil Kalai in the next room.
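To see how that distance-27 extrapolation works: below threshold, each increase of the code distance by 2 suppresses the logical error rate by a roughly constant factor. Here’s a back-of-the-envelope sketch in Python; the starting error rate and suppression factor below are illustrative placeholders in the right ballpark, not Google’s exact figures.

```python
# A back-of-the-envelope sketch (NOT Google's actual analysis).  In a
# surface code operating below threshold, growing the code distance by 2
# suppresses the logical error rate by a roughly constant factor Lambda:
#     p_L(d) ~ p_L(d0) / Lambda ** ((d - d0) / 2)
# The two numbers below are illustrative placeholders, chosen only to
# show how a distance-7 code extrapolates to larger distances.

P_D7 = 1.4e-3   # assumed logical error rate per cycle at distance 7
LAM  = 2.0      # assumed suppression factor per distance-2 increase

def logical_error_rate(d, d0=7, p0=P_D7, lam=LAM):
    """Extrapolated logical error rate per cycle at code distance d."""
    return p0 / lam ** ((d - d0) / 2)

for d in (7, 11, 17, 27):
    print(f"distance {d:2d}: ~{logical_error_rate(d):.1e} per cycle")
```

With these placeholder inputs, distance 27 lands around 1.4e-6 per cycle, which is the order of magnitude Google projects; the real point is just that the suppression compounds exponentially in the distance.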

Separately, just this morning, Microsoft and Quantinuum announced that they entangled 12 logical qubits on a 56-physical-qubit trapped-ion processor, building on earlier work that I blogged about in April. They did this by applying a depth-3 logical circuit with 12 logical CNOT gates, to prepare a cat state. They report a 0.2% error rate when they do this, which is 11x better than they would’ve gotten without using error-correction. (Craig Gidney, in the comments, says that these results still involve postselection.)
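For concreteness, a “cat state” (or GHZ state) on n qubits is the superposition (|00…0⟩ + |11…1⟩)/√2. Here’s a toy NumPy statevector sketch that prepares a 12-qubit cat state with one Hadamard and a chain of CNOTs (this illustrates only the ideal mathematics, not Quantinuum’s logical circuit or its parallelized depth-3 layout):

```python
import numpy as np

def apply_1q(state, U, q):
    """Apply single-qubit gate U to qubit q of a statevector of shape (2,)*n."""
    state = np.moveaxis(state, q, 0)
    state = np.tensordot(U, state, axes=([1], [0]))
    return np.moveaxis(state, 0, q)

def apply_cnot(state, control, target):
    """Flip the target qubit on the control=1 branch of the statevector."""
    state = np.moveaxis(state, (control, target), (0, 1)).copy()
    state[1] = state[1][::-1]
    return np.moveaxis(state, (0, 1), (control, target))

n = 12
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = np.zeros((2,) * n, dtype=complex)
state[(0,) * n] = 1.0                  # start in |00...0>
state = apply_1q(state, H, 0)          # put qubit 0 in superposition
for i in range(n - 1):                 # entangle down a CNOT chain
    state = apply_cnot(state, i, i + 1)

amps = state.reshape(-1)
print(abs(amps[0]) ** 2, abs(amps[-1]) ** 2)   # each ~0.5; all others 0
```

Measuring any one qubit of the resulting state collapses all twelve to the same value, which is what makes cat states a natural benchmark for logical entanglement.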

The Microsoft/Quantinuum group also did what they called a “chemistry simulation” involving 13 physical qubits. The latter involved “only” 2 logical qubits and 4 logical gates, but 3 of those gates were non-Clifford, which is the hard kind when one is doing error-correction, since such gates can’t be applied transversally in codes like these. (CNOT, by contrast, is a Clifford gate.)
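To unpack the Clifford/non-Clifford distinction: a gate is Clifford if conjugating any Pauli operator by it yields another Pauli operator, up to phase. Here’s a small NumPy sketch of that definition for single-qubit gates, confirming that the phase gate S and the Hadamard are Clifford while T is not (CNOT passes the analogous two-qubit test):

```python
# A small illustration of the definition (a sketch, not how real QEC
# software checks Cliffordness).  A single-qubit unitary U is Clifford
# iff U P U† is again a Pauli matrix, up to a phase, for P in {X, Z}.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = (I2, X, Y, Z)

def is_pauli_up_to_phase(M):
    """True iff M equals c*Q for some Pauli Q and unit-modulus c."""
    for Q in PAULIS:
        c = np.trace(Q.conj().T @ M) / 2       # projection onto Q
        if np.isclose(abs(c), 1) and np.allclose(M, c * Q):
            return True
    return False

def is_clifford_1q(U):
    """Check that U conjugates the Pauli generators X, Z to Paulis."""
    return all(is_pauli_up_to_phase(U @ P @ U.conj().T) for P in (X, Z))

S = np.diag([1, 1j])                           # phase gate: Clifford
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: Clifford
T = np.diag([1, np.exp(1j * np.pi / 4)])       # T gate: NOT Clifford

print(is_clifford_1q(S), is_clifford_1q(H), is_clifford_1q(T))
# prints: True True False
```

T fails because T X T† = (X+Y)/√2, which is not a Pauli; that failure is exactly why non-Clifford gates need extra machinery in transversal error-correction schemes.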

Apart from the fact that Google is using superconducting qubits while Microsoft/Quantinuum are using trapped ions, the two results are incomparable in terms of what they demonstrate. Google is just scaling up a single logical qubit, but showing (crucially) that their error rate decreases with increasing size and distance. Microsoft and Quantinuum are sticking with “small” logical qubits with insufficient distance, but they’re showing that they can apply logical circuits that entangle up to 12 of these qubits.

Microsoft also announced today a new collaboration with the startup company Atom Computing, headquartered near Quantinuum in Colorado, which is trying to build neutral-atom QCs (like QuEra in Boston). Over the past few years, Microsoft’s quantum group has decisively switched from a strategy of “topological qubits or bust” to a strategy of “anything that works,” although they assure me that they also remain committed to the topological approach.

Anyway, happy to hear in the comments from anyone who knows more details, or wants to correct me on any particular, or has questions which I or others can try our best to answer.

Let me end by sticking my neck out. If hardware progress continues at the rate we’ve seen for the past year or two, then I find it hard to understand why we won’t have useful fault-tolerant QCs within the next decade. (And now to retreat my neck a bit: the “if” clause in that sentence is important and non-removable!)

In Support of SB 1047

September 4th, 2024

I’ve finished my two-year leave at OpenAI, and returned to being just a normal (normal?) professor, quantum complexity theorist, and blogger. Despite the huge drama at OpenAI that coincided with my time there, including the departures of most of the people I worked with in the former Superalignment team, I’m incredibly grateful to OpenAI for giving me an opportunity to learn and witness history, and even to contribute here and there, though I wish I could’ve done more.

Over the next few months, I plan to blog my thoughts and reflections about the current moment in AI safety, inspired by my OpenAI experience. You can be certain that I’ll be doing this only as myself, not as a representative of any organization. Unlike some former OpenAI folks, I was never offered equity in the company or asked to sign any non-disparagement agreement. OpenAI retains no power over me, at least as long as I don’t share confidential information (which of course I won’t, not that I know much!).

I’m going to kick off this blog series, today, by defending a position that differs from the official position of my former employer. Namely, I’m offering my strong support for California’s SB 1047, a first-of-its-kind AI safety regulation written by California State Senator Scott Wiener, then extensively revised through consultations with pretty much every faction of the AI community. AI leaders like Geoffrey Hinton, Yoshua Bengio, and Stuart Russell are for the bill, as is Elon Musk (for whatever that’s worth), and Anthropic now says that the bill’s “benefits likely outweigh its costs.” Meanwhile, Facebook, OpenAI, and basically the entire VC industry are against the bill, while California Democrats like Nancy Pelosi and Zoe Lofgren have also come out against it for whatever reasons.

The bill has passed the California State Assembly by a margin of 48-16, having previously passed the State Senate by 32-1. It’s now on Governor Gavin Newsom’s desk, and it’s basically up to him whether it becomes law or not. I understand that supporters and opponents are both lobbying him hard.

People much more engaged than me have already laid out, accessibly and in immense detail, exactly what the current bill does and the arguments for and against. Try for example:

  • For a very basic explainer, this in TechCrunch
  • This by Kelsey Piper, and this by Kelsey Piper, Sigal Samuel, and Dylan Matthews in Vox
  • This by Zvi Mowshowitz (Zvi has also written a great deal else about SB 1047, strongly in support)

Briefly: given the ferocity of the debate about it, SB 1047 does remarkably little. It says that if you spend more than $100 million to train a model, you need to notify the government and submit a safety plan. It establishes whistleblower protections for people at AI companies to raise safety concerns. And, if a company fails to take reasonable precautions and its AI then causes catastrophic harm, it says that the company can be sued (which was presumably already true, but the bill makes it extra clear). And … unless I’m badly mistaken, those are the main things in it!

While the bill is mild, opponents are on a full scare campaign saying that it will strangle the AI revolution in its crib, put American AI development under the control of Luddite bureaucrats, and force companies out of California. They say that it will discourage startups, even though the whole point of the $100 million provision is to target only the big players (like Google, Meta, OpenAI, and Anthropic) while leaving small startups free to innovate.

The only steelman that makes sense to me, for why many tech leaders are against the bill, is the idea that it’s a stalking horse. On this view, the bill’s actual contents are irrelevant. What matters is simply that, once people worried about AI-caused catastrophes are granted a seat at the table (that is, once legislation acknowledges the validity of their concerns at all), they’re going to take a mile rather than an inch, and kill the whole AI industry.

Notice that the exact same slippery-slope argument could be deployed against any AI regulation whatsoever. In other words, if someone opposes SB 1047 on these grounds, then they’d presumably oppose any attempt to regulate AI—either because they reject the whole premise that creating entities with humanlike intelligence is a risky endeavor, and/or because they’re hardcore libertarians who never want government to intervene in the market for any reason, not even if the literal fate of the planet was at stake.

Having said that, there’s one specific objection that needs to be dealt with. OpenAI, and Sam Altman in particular, say that they oppose SB 1047 simply because AI regulation should be handled at the federal rather than the state level. The supporters’ response is simply: yeah, everyone agrees that’s what should happen, but given the dysfunction in Congress, there’s essentially no chance of it anytime soon. And California suffices, since Google, OpenAI, Anthropic, and virtually every other AI company is either based in California or does many things subject to California law. So, some California legislators decided to do something. On this issue as on others, it seems to me that anyone who’s serious about a problem doesn’t get to reject a positive step that’s on offer, in favor of a utopian solution that isn’t on offer.

I should also stress that, in order to support SB 1047, you don’t need to be a Yudkowskyan doomer, primarily worried about hard AGI takeoffs and recursive self-improvement and the like. For that matter, if you are such a doomer, SB 1047 might seem basically irrelevant to you (apart from its unknowable second- and third-order effects): a piece of tissue paper in the path of an approaching tank. The world where AI regulation like SB 1047 makes the most difference is the world where the dangers of AI creep up on humans gradually, so that there’s enough time for governments to respond incrementally, as they did with previous technologies.

If you agree with this, it wouldn’t hurt to contact Governor Newsom’s office. For all its nerdy and abstruse trappings, this is, in the end, a kind of battle that ought to be familiar and comfortable for any Democrat: the kind with, on one side, most of the public (according to polls) and also hundreds of the top scientific experts, and on the other side, individuals and companies who all coincidentally have strong financial stakes in being left unregulated. This seems to me like a hinge of history where small interventions could have outsized effects.

Book Review: “2040” by Pedro Domingos

September 1st, 2024

Pedro Domingos is a computer scientist at the University of Washington.  I’ve known him for years as a guy who’d confidently explain to me why I was wrong about everything from physics to CS to politics … but then, for some reason, ask to meet with me again.  Over the past 6 or 7 years, Pedro has become notorious in the CS world as a right-wing bomb-thrower on what I still call Twitter—one who, fortunately for Pedro, is protected by his tenure at UW. He’s also known for a popular book on machine learning called The Master Algorithm, which I probably should’ve read but didn’t.

Now Pedro has released a short satirical novel, entitled 2040.  The novel centers around a presidential election between:

  • The Democratic candidate, “Chief Raging Bull,” an angry activist with 1/1024 Native American ancestry (as proven by a DNA test, the Chief proudly boasts) who wants to dissolve the United States and return it to its Native inhabitants, and
  • The Republican candidate, “PresiBot,” a chatbot with a frequently-malfunctioning robotic “body.” While this premise would’ve come off as comic science fiction five years ago, PresiBot now seems like it could plausibly be built using existing LLMs.

This is all in a near-future whose economy has been transformed (and to some extent hollowed out) by AI, and whose populace is controlled and manipulated by “Happinet,” a giant San Francisco tech company that parodies Google and/or Meta.

I should clarify that the protagonists, the ones we’re supposed to root for, are the founders of the startup company that built PresiBot—that is, people who are trying to put the US under the control of a frequently-glitching piece of software that’s also a Republican. For some readers, this alone might be a dealbreaker. But as I already knew Pedro’s ideological convictions, I felt like I had fair warning.

As I read the first couple chapters, my main worry was that I was about to endure an entire novel constructed out of tweet-like witticisms. But my appreciation for what Pedro was doing grew the more I read.

[Warning: Spoilers follow]

To my mind, the emotional core of the novel comes near the end, after PresiBot creator Ethan Burnswagger gets cancelled for a remark that’s judged racially insensitive. Exiled and fired from his own company, Ethan wanders around 2040 San Francisco, and meets working-class and homeless people who are doing their best to cope with the changes AI has wrought on civilization. This gives him the crucial idea to upgrade PresiBot into a crowdsourced entity that continuously channels the American popular will. Citizens watching PresiBot will register their second-by-second opinions on what it should say or do, and PresiBot will use its vast AI powers to make decisions incorporating their feedback. (How will the bot, once elected, handle classified intelligence briefings? One of many questions left unanswered here.) Pedro is at his best when, rather than taking potshots at the libs, he’s honestly trying to contemplate how AI is going to change regular people’s lives in the coming decades.

As for the novel’s politics? I mean, you might complain that Pedro stacks the deck too far in the AI candidate’s favor, thereby spoiling the novel’s central thought experiment, by making the AI’s opponent a human who literally wants to end the United States, killing or expelling most of its inhabitants. Worse, the Republican party that actually exists in our reality—i.e., the one dominated by Trump and his conspiratorial revenge fantasies—is simply dissolved by authorial fiat and replaced by a moderate, centrist party of Pedro’s dreams, a party so open-minded it would even nominate an AI.

Having said all that: I confess I enjoyed “2040.” The plot is tightly constructed, the dialogue crackles (certainly for a CS professor writing a first novel), the satire at least provokes chuckles, and at just 215 pages, the action moves.

“The Right Side of History”

August 16th, 2024

This morning I was pondering one of the anti-Israel protesters’ favorite phrases—I promise, out of broad philosophical curiosity rather than just parochial concern for my extended family’s survival.

“We’re on the right side of history. Don’t put yourself on the wrong side by opposing us.”

Why do the protesters believe they shouldn’t face legal or academic sanction for having blockaded university campuses, barricaded themselves in buildings, shut down traffic, or vandalized Jewish institutions? Because, just like the abolitionists and Civil Rights marchers and South African anti-apartheid heroes, they’re on the right side of history. Surely the rules and regulations of the present are of little concern next to the vindication of future generations?

The main purpose of this post is not to adjudicate whether their claim is true or false, but to grapple with something much more basic: what kind of claim are they even making, and who is its intended audience?

One reading of “we’re on the right side of history” is that it’s just a fancy way to say “we’re right and you’re wrong.” In which case, fair enough! Few people passionately believe themselves to be wrong.

But there’s a difficulty: if you truly believe your side to be right, then you should believe it’s right win or lose. For example, an anti-Zionist should say that, even if Israel continues existing, and even if everyone else on the planet comes to support it, still eliminating Israel would’ve been the right choice. Conversely, a Zionist should say that if Israel is destroyed and the whole rest of the world celebrates its destruction forevermore—well then, the whole world is wrong. (That, famously, is more-or-less what the Jews did say, each time Israel and Judah were crushed in antiquity.)

OK, but if the added clause “of history” is doing anything in the phrase “the right side of history,” that extra thing would appear to be an empirical prediction. The protesters are saying: “just like the entire world looks back with disgust at John Calhoun, Bull Connor, and other defenders of slavery and then segregation, so too will the world look back with disgust at anyone who defends Israel now.”

Maybe this is paired with a theory about the arc of the moral universe bending toward justice: “we’ll win the future and then look back with disgust on you, and we’ll be correct to do so, because morality inherently progresses over time.” Or maybe it has merely the character of a social threat: “we’ll win the future and then look back with disgust on you, so regardless of whether we’ll be right or wrong, you’d better switch to our side if you know what’s good for you.”

Either way, the claim of winning the future is now the kind of thing that could be wagered about in a prediction market. And, in essence, the Right-Side-of-History people are claiming to be able to improve on today’s consensus estimate: to have a hot morality tip that beats the odds. But this means that they face the same problem as anyone who claims it’s knowable that, let’s say, a certain stock will increase a thousandfold. Namely: if it’s so certain, then why hasn’t the price shot up already?

The protesters and their supporters have several possible answers. Many boil down to saying that most people—because they need to hold down a job, earn a living, etc.—make all sorts of craven compromises, preventing them from saying what they know in their hearts to be true. But idealistic college students, who are free from such burdens, are virtually always right.

Does that sound like a strawman? Then recall the comedian Sarah Silverman’s famous question from eight years ago:

PLEASE tell me which times throughout history protests from college campuses got it wrong. List them for me

Crucially, lots of people happily took Silverman up on her challenge. They pointed out that, in the Sixties and Seventies, thousands of college students, with the enthusiastic support of many of their professors, marched for Ho Chi Minh, Mao, Castro, Che Guevara, Pol Pot, and every other murderous left-wing tyrant to sport a green uniform and rifle. Few today would claim that these students correctly identified the Right Side of History, despite the students’ certainty that they’d done so.

(There were also, of course, moderate protesters, who merely opposed America’s war conduct—just like there are moderate protesters now who merely want Israel to end its Gaza campaign rather than its existence. But then as now, the revolutionaries sucked up much of the oxygen, and the moderates rarely disowned them.)

What’s really going on, we might say, is reference class tennis. Implicitly or explicitly, the anti-Israel protesters are aligning themselves with Gandhi and MLK and Nelson Mandela and every other celebrated resister of colonialism and apartheid throughout history. They ask: what are the chances that all those heroes were right, and we’re the first ones to be wrong?

The trouble is that someone else could just as well ask: what are the chances that Hamas is the first group in history to be morally justified in burning Jews alive in their homes … even though the Assyrians, Babylonians, Romans, Crusaders, Inquisitors, Cossacks, Nazis, and every other group that did similar things to the Jews over 3000 years is now acknowledged by nearly every educated person to have perpetrated an unimaginable evil? What are the chances that, with Israel’s establishment in 1948, this millennia-old moral arc of Western civilization suddenly reversed its polarity?

We should admit from the outset that such a reversal is possible. No one, no matter how much cruelty they’ve endured, deserves a free pass, and there are certainly many cases where victims turned into victimizers. Still, one could ask: shouldn’t the burden be on those who claim that today’s campaign against Jewish self-determination is history’s first justified one?

It’s like, if I were a different person, born to different parents in a different part of the world, maybe I’d chant for Israel’s destruction with the best of them. Even then, though, I feel like the above considerations would keep me awake at night, would terrify me that maybe I’d picked the wrong side, or at least that the truth was more complicated. The certainty implied by the “right side of history” claim is the one part I don’t understand, as far as I try to stretch my sympathetic imagination.


For all that, I, too, have been moved by rhetorical appeals to “stand on the right side of history”—say, for the cause of Ukraine, or slowing down climate change, or saving endangered species, or defeating Trump. Thinking it over, this has happened when I felt sure of which side was right (and would ultimately be seen to be right), but inertia or laziness or inattention or whatever else prevented me from taking action.

When does this happen for me? As far as I can tell, the principles of the Enlightenment, of reason and liberty and progress and the flourishing of sentient life, have been on the right side of every conflict in human history. My abstract commitment to those principles doesn’t always tell me which side of the controversy du jour is correct, but whenever it does, that’s all I ever need cognitively; the rest is “just” motivation and emotion.

(Amusingly, I expect some people to say that my “reason and Enlightenment” heuristic is vacuous, that it works only because I define those ideals to be the ones that pick the right side. Meanwhile, I expect others to say that the heuristic is wrong and to offer counterexamples.)

Anyway, maybe this generalizes. Sure, a call to “stand on the right side of history” could do nontrivial work, but only in the same way that a call to buy Bitcoin in 2011 could—namely, for those who’ve already concluded that buying Bitcoin is a golden opportunity, but haven’t yet gotten around to buying it. Such a call does nothing for anyone who’s already considered the question and come down on the opposite side of it. The abuse of “arc of the moral universe” rhetoric—i.e., the calling down of history’s judgment in favor of X, even though you know full well that your listeners see themselves as having consulted history’s judgment just as earnestly as you did, and gotten back not(X) instead—yeah, that’s risen to be one of my biggest pet peeves. If I ever slip up and indulge in it, please tell me and I’ll stop.

My Reading Burden

August 14th, 2024

Want some honesty about how I (mis)spend my time? These days, my daily routine includes reading all of the following:

Many of these materials contain lists of links to other articles, or tweet threads, some of which then take me hours to read in themselves. This is not counting podcasts or movies or TV shows.

While I read unusually quickly, I’d estimate that my reading burden is now at eight hours per day, seven days per week. I haven’t finished reading by the time my kids are back from school or day camp. Now let’s add in my actual job (or two jobs, although the OpenAI one is ending this month, and I start teaching again in two weeks). Add in answering emails (including from fans and advice-seekers), giving lectures, meeting grad students and undergrads, doing Zoom calls, filling out forms, consulting, going on podcasts, reviewing papers, taking care of my kids, eating, shopping, personal hygiene.

As often as not, when the day is done, it’s not just that I’ve achieved nothing of lasting value—it’s that I’ve never even gotten started on research, writing, or any long-term projects. This contrasts with my twenties, when obsessively working on research problems and writing up the results could easily fill my day.

The solution seems obvious: stop reading so much. Cut back to a few hours per day, tops. But it’s hard. The rapid scale-up of AI is a once-in-the-history-of-civilization story that I feel astounded to be living through and compelled to follow, and just keeping up with the highlights is almost a full-time job in itself. The threat to democracy from Trump, Putin, Xi, Maduro, and the world’s other authoritarians is another story that I feel unable to look away from.

Since October 7, though, the once-again-precarious situation of Jews everywhere on earth has become, on top of everything else it is, the #1 drain on my time. It would be one thing if I limited myself to thoughtful analyses, but I can easily lose hours per day doomscrolling through the infinite firehose of strident anti-Zionism (and often, simple unconcealed Jew-hatred) that one finds for example on Twitter, Facebook, and the comment sections of Washington Post articles. Every time someone calls the “Zios” land-stealing baby-killers who deserve to die, my brain insists that they’re addressing me personally. So I stop to ponder the psychology of each individual commenter before moving on to the next, struggle to see the world from their eyes. Would explaining the complex realities of the conflict change this person’s mind? What about introducing them to my friends and relatives in Israel who never knew any other home and want nothing but peace, coexistence, and a two-state solution?

I naturally can’t say that all this compulsive reading makes me happy or fulfilled. Worse yet, I can’t even say it makes me feel more informed. What I suppose it does make me feel is … excused. If so much is being written daily about the biggest controversies in the world, then how can I be blamed for reading it rather than doing anything new?

At the risk of adding even more to the terrifying torrent of words, I’d like to hear from anyone who ever struggled with a similar reading addiction, and successfully overcame it. What worked for you?


Update (Aug. 15): Thanks so much for the advice, everyone! I figured this would be the perfect day to put some of your wisdom into practice, and finally go on a reading fast and embark on some serious work. So of course, this is the day that Tablet and The Free Press had to drop possibly the best pieces in their respective histories: namely, a gargantuan profile of the Oculus and Anduril founder Palmer Luckey, and an interview with an anonymous Palestinian who, against huge odds, landed a successful tech career and a group of friends in Israel, but who’s now being called “traitor” by other Palestinians for condemning the October 7 massacre and who fears for his life. Both of these articles could be made into big-budget feature films—I’m friggin serious. But the more immediate task is to get this anonymous Palestinian hero out of harm’s way while there’s still time.

And as for my reading fast, there’s always tomorrow.

My pontificatiest AI podcast ever!

August 11th, 2024

Back in May, I had the honor (nay, honour) to speak at HowTheLightGetsIn, an ideas festival held annually in Hay-on-Wye on the English/Welsh border. It was my first time in that part of the UK, and I loved it. There was an immense amount of mud on the festival grounds due to rain, and there were many ideas presented at the talks and panels that I vociferously disagreed with (but isn’t that the point?).

At some point, Alexis Papazoglou, an interviewer with the Institute of Art and Ideas, ambushed me while I was trudging through the mud and sat me down for a half-hour interview about AI that I’d only vaguely understood was going to take place. That interview is now up on YouTube. I strongly recommend listening at 2x speed: you’ll save yourself fifteen minutes, I’ll sound smarter, my verbal infelicities will be less noticeable, what’s not to like?

I was totally unprepared and wearing a wrinkled t-shirt, but I dutifully sat in the beautiful chair arranged for me and shot the breeze about AI. The result is actually one of the recorded AI conversations I’m happiest with, the one that might convey the most of my worldview per minute. Topics include:

  • My guesses about where AI is going
  • How I respond to skeptics of AI
  • The views of Roger Penrose and where I part ways from him
  • The relevance (or not) of the quantum No-Cloning Theorem to the hard problem of consciousness
  • Whether and how AI will take over the world
  • An overview of AI safety research, including interpretability and dangerous capability evaluations
  • My work on watermarking for OpenAI

Last night I watched the video with my 7-year-old son. His comment: “I understood it, and it kept my brain busy, but it wasn’t really fun.” But hey, at least my son didn’t accuse me of being so dense I don’t even understand that “an AI is just a program,” like many commenters on YouTube did! My YouTube critics, in general, were helpful in reassuring me that I wasn’t just arguing with strawmen in this interview (is there even such a thing as a strawman position in philosophy and AI?). Of course the critics would’ve been more helpful still if they’d, y’know, counterargued, rather than just calling me “really shallow,” “superficial,” an “arrogant poser,” a “robot,” a “chattering technologist,” “lying through his teeth,” and “enmeshed in so many faulty assumptions.” Watch and decide for yourself!

Meanwhile, there’s already a second video on YouTube, entitled Philosopher reacts to ‘OpenAI expert Scott Aaronson on consciousness, quantum physics, and AI safety.’   So I opened the video, terrified that I was about to be torn a new asshole. But no, this philosopher just replays the whole interview, occasionally pausing it to interject comments like “yes, really interesting, I agree, Scott makes a great point here.”


Update: You can also watch the same interviewer grill General David Petraeus, at the same event in the same overly large chairs.

My “Never-Trump From Here to Eternity” FAQ

July 30th, 2024

Q1: Who will you be voting for in November?

A: Kamala Harris (and mainstream Democrats all down the ballot), of course.

Q2: Of course?

A: If the alternative is Trump, I would’ve voted for Biden’s rotting corpse. Or for Hunter Biden. Or for…

Q3: Why can’t you see this is just your Trump Derangement Syndrome talking?

A: Look, my basic moral commitments remain pretty much as they’ve been since childhood. Namely, that I’m on the side of reason, Enlightenment, scientific and technological progress, secular government, pragmatism, democracy, individual liberty, justice, intellectual honesty, an American-led peaceful world order, preservation of the natural world, mitigation of existential risks, and human flourishing. (Crazy and radical, I know.)

Only when choosing between candidates who all espouse such values, do I even get the luxury of judging them on any lower-order bits. Sadly, I don’t have that luxury today. Trump’s values, such as they are, would seem to be “America First,” protectionism, vengeance, humiliation of enemies, winning at all costs, authoritarianism, the veneration of foreign autocrats, and the veneration of himself. No amount of squinting can ever reconcile those with the values I listed before.

Q4: Is that all that’s wrong with him?

A: No, there are also the lies, and worst of all the “Big Lie.” Trump is the first president in US history to incite a mob to try to overturn the results of an election. He was serious! He very nearly succeeded, and probably would have, had Mike Pence been someone else. It’s now inarguable that Trump rejects the basic rules of our system, or “accepts” them only when he wins. We’re numb from having heard it so many times, but it’s a big deal, as big a deal as the Civil War was.

Q5: Oh, so this is about your precious “democracy.” Why do you care? Haven’t you of all people learned that the masses are mostly idiots and bullies, who don’t deserve power? As Curtis Yarvin keeps trying to explain to you, instead of “democracy,” you should want a benevolent king or dictator-CEO, who could offer a privileged position to the competent scientists like yourself.

A: Yeah, so how many examples does history furnish where that worked out well? I suppose you might make a partial case for Napoleon, or Ataturk? More to the point: even if benevolent, science-and-reason-loving authoritarian strongmen are possible in theory, do you really expect me to believe that Trump could be one of them? I still love how Scott Alexander put it in 2016:

Can anyone honestly say that Trump or his movement promote epistemic virtue? That in the long-term, we’ll be glad that we encouraged this sort of thing, that we gave it power and attention and all the nutrients it needed to grow? That the road to whatever vision of a just and rational society we imagine, something quiet and austere with a lot of old-growth trees and Greek-looking columns, runs through LOCK HER UP?

I don’t like having to vote for the lesser of two evils. But at least I feel like I know who it is.

Q6: But what about J. D. Vance? He got his start in Silicon Valley, was championed by Peter Thiel, and is obviously highly intelligent. Doesn’t he seem like someone who might listen to and empower tech nerds like yourself?

A: Who can say what J. D. Vance believes? Here are a few choice quotes of his from eight years ago:

I’m obviously outraged at Trump’s rhetoric, and I worry most of all about how welcome Muslim citizens feel in their own country. But I also think that people have always believed crazy shit (I remember a poll from a few years back suggesting that a near majority of democratic voters blame ‘the Jews’ for the financial crisis). And there have always been demagogues willing to exploit the people who believe crazy shit.

The more white people feel like voting for trump, the more black people will suffer. I really believe that.

[Trump is] just a bad man. A morally reprehensible human being.

To get from that to being Trump’s running mate is a Simone-Biles-like feat of moral acrobatics. Vance reminds me of the famous saying by L. Ron Hubbard from his pre-Dianetics days: “If a man really wants to make a million dollars, the best way would be to start his own religion.” (And I feel like Harris’s whole campaign strategy should just be to replay Vance’s earlier musings in wall-to-wall ads while emphasizing her agreement with them.) No, Vance is not someone I trust to share my values, if he has values at all.

Q7: What about the other side’s values, or lack thereof? I mean, don’t you care that the whole Democratic establishment—including Harris—colluded to cover up that Biden was senile and cognitively unfit to be president now, let alone for another term?

A: Look, we’ve all seen what happens as a relative gets old. It’s gradual. It’s hard for anyone to say at which specific moment they can no longer drive a car, or be President of the United States, or whatever. This means that I don’t necessarily read evil intent into the attempts to cover up Biden’s decline—merely an epic, catastrophic failure of foresight. That failure of foresight itself would’ve been a huge deal in normal circumstances, but these are not normal circumstances—not if you believe, as I do, that the alternative is the beginning of the end of a 250-year-old democratic experiment.

Q8: Oh stop being so melodramatic. What terrible thing happened to you because of Trump’s first term? Did you lose your job? Did fascist goons rough you up in the street?

A: Well, my Iranian PhD student came close to having his visa revoked, and it became all but impossible to recruit PhD students from China. That sucked, since I care about my students’ welfare like I care about my own. Also, the downfall of Roe v. Wade, which enabled Texas’ draconian new abortion laws, made it much harder for us to recruit faculty at UT Austin. But I doubt any of that will impress you. “Go recruit American students,” you’ll say. “Go recruit conservative faculty who are fine with abortion being banned.”

The real issue is that Trump was severely restrained in his first term, by being surrounded by people who (even if, in many cases, they started out loyal to him) were also somewhat sane and valued the survival of the Republic. Alas, he learned from that, and he won’t repeat that mistake the next time.

Q9: Why do you care so much about Trump’s lies? Don’t you realize that all politicians lie?

A: Yes, but there are importantly different kinds of lies. There are white lies. There are scheming, 20-dimensional Machiavellian lies, like a secret agent’s cover story (or is that only in fiction?). There are the farcical, desperate, ever-shifting lies of the murderer to the police detective or the cheating undergrad to the professor. And then there are the lies of bullies and mob bosses and populist autocrats, which are special and worse.

These last, call them power-lies, are distinguished by the fact that they aren’t even helped by plausibility. Often, as with conspiracy theories (which strongly overlap with power-lies), the more absurd the better. Obama was born in Kenya. Trump’s crowd was the biggest in history. The 2020 election was stolen by a shadowy conspiracy involving George Soros and Dominion and Venezuela.

The central goal of a power-lie is just to demonstrate your power to coerce others into repeating it, much like with the Party making Winston Smith affirm 2+2=5, or Petruchio making Katharina call the sun the moon in The Taming of the Shrew. A closely-related goal is as a loyalty test for your own retinue.

It’s Trump’s embrace of the power-lie that puts him beyond the pale for me.

Q10: But Scott, we haven’t even played our “Trump” card yet. Starting on October 7, 2023, did you not witness thousands of your supposed allies, the educated secular progressives on “the right side of history,” cheer the sadistic mass-murder of Jews—or at least, make endless excuses for those who did? Did this not destabilize your entire worldview? Will you actually vote for a party half of which seems at peace with the prospect of your family members’ physical annihilation? Or will you finally see who your real friends now are: Arkansas MAGA hillbillies who pray for your people’s survival?

A: Ah, this is your first slash that’s actually drawn blood. I won’t pretend that the takeover of part of the US progressive coalition by literal Hamasniks hasn’t been one of the most terrifying experiences of my life. Yes, if I had to be ruled by either (a) a corrupt authoritarian demagogue or (b) an idiot college student chanting for “Intifada Revolution,” I’d be paralyzed. So it’s lucky that I don’t face that choice! I get to vote, once more, for a rather boring mainstream Democrat—alongside at least 70% of American Jews. The idea of Harris as an antisemite would be ludicrous even if she didn’t have a Jewish husband or wasn’t strongly considering a pro-Israel Jew as her running mate.

Q11: Sure, Kamala Harris might mouth all the right platitudes about Israel having a right to defend itself, but she’ll constantly pressure Israel to make concessions to Hamas and Hezbollah. She’ll turn a blind eye to Iran’s imminent nuclearization. Why don’t you stay up at night worrying that, if you vote for a useful idiot like her, you’ll have Israel’s annihilation and a second Holocaust on your conscience forever?

A: Look, oftentimes—whenever, for example, I’m spending hours reading anti-Zionists on Twitter—I feel like there’s no limit to how intensely Zionist I am. On reflection, though, there is a limit. Namely, I’m not going to be more Zionist than the vast majority of my Israeli friends and colleagues—the ones who served in the IDF, who in some cases did reserve duty in Gaza, who prop up the Israeli economy with their taxes, and who will face the consequences of whatever happens more directly than I will. With few exceptions, these friends despise the Trump/Bibi alliance with white-hot rage, and they desperately want more moderate leadership in both countries.

Q12: Suppose I concede that Kamala is OK on Israel. We both know that she’s not the future of the Democratic Party, any more than Biden is. The future is what we all saw on campuses this spring. “Houthis Houthis make us proud, turn another ship around.” How can you vote for a party whose rising generation seems to want you and your family dead?

A: Let me ask you something. When Trump won in 2016, did that check the power of the campus radicals? Or as Scott Alexander prophesied at the time, did it energize and embolden them like nothing else, by dramatically confirming their theology of a planet held hostage by the bullying, misogynistic rich white males? I fundamentally reject your premise that, if I’m terrified of crazy left-wing extremists, then a good response is to vote for the craziest right-wing extremists I can find, in hopes that the two will somehow cancel each other out. Instead I should support a coherent Enlightenment alternative to radicalism, or the closest thing to that available.

Q13: Even leaving aside Israel, how can you not be terrified by what the Left has become? Which side denounced you on social media a decade ago, as a misogynist monster who wanted all women to be his sex slaves? Which side tried to ruin your life and career? Did we, the online rightists, do that? No. We did not. We did nothing worse to you than bemusedly tell you to man up, grow a pair, and stop pleading for sympathy from feminists who will hate you no matter what.

A: I’ll answer with a little digression. Back in 2017, when Kamala Harris was in the Senate, her office invited me to DC to meet with them to provide advice about the National Quantum Initiative Act, which Kamala was then spearheading. Kamala herself sent regrets that she couldn’t meet me, because she had to be at the Kavanaugh hearings. I have (nerdy, male) friends who did meet her about tech policy and came away with positive impressions.

And, I dunno, does that sound like someone who wants me dead for the crime of having been born a nerdy heterosexual male? Or having awkwardly and ineptly asked women on dates, including the one who became my wife? OK, maybe Amanda Marcotte wants me dead for those crimes. Maybe Arthur Chu does (is he still around?). Good that they’re not running for president then.

Q14: Let me try one more time to show you how much your own party hates you. Which side has been at constant war against the SAT and other standardized tests, and merit-based college admissions, and gifted programs, and academic tracking and acceleration, and STEM magnet schools, and every single other measure by which future young Scott Aaronsons (and Saket Agrawals) might achieve their dreams in life? Has that been our side, or theirs?

A: To be honest, I haven’t seen the Trump or Harris campaigns take any position on any of these issues. Even if they did, there’s very little that the federal government can do: these battles happen in individual states and cities and counties and universities. So I’ll vote for Harris while continuing to advocate for what I think is right in education policy.

Q15: Can you not see that Kamala Harris is a vapid, power-seeking bureaucratic machine—that she has no fixed principles at all? For godsakes, she all but condemned Biden as a racist in the 2020 primary, then agreed to serve as his running mate!

A: I mean, she surely has more principles than Vance does. As far as I can tell, for example, she’s genuinely for abortion rights (as I am). Even if she believed in nothing, though, better a cardboard cutout on which values I recognize are written, than a flesh-and-blood person shouting values that horrify me.

Q16: What, if anything, could Republicans do to get you to vote for them?

A: Reject all nutty conspiracy theories. Fully, 100% commit to the peaceful transfer of power. Acknowledge the empirical reality of human-caused climate change, and the need for both technological and legislative measures to slow it and mitigate its impacts. Support abortion rights, or at least a European-style compromise on abortion. Republicans can keep the anti-wokeness stuff, which actually seems to have become their defining issue. If they do all that, and also the Democrats are taken over by frothing radicals who want to annihilate the state of Israel and abolish the police … that’s, uh, probably the point when I start voting Republican.

Q17: Aha, so you now admit that there exist conceivable circumstances that would cause you to vote Republican! In that case, why did you style yourself “Never-Trump From Here to Eternity”?

A: Tell you what, the day the Republicans (and Trump himself?) repudiate authoritarianism and start respecting election outcomes, is the day I’ll admit my title was hyperbolic.

Q18: In the meantime, will you at least treat us Trump supporters with civility and respect?

A: Not only does civil disagreement not compromise any of my values, it is a value to which I think we should all aspire. And to whatever extent I’ve fallen short of that ideal—even when baited into it—I’m sorry and I’ll try to do better. Certainly, age and experience have taught me that there’s hardly anyone so far gone that I can’t find something on which I agree with them, while disagreeing with most of the rest of the world.

New comment policy

July 15th, 2024

Update (July 24): Remember the quest that Adam Yedidia and I started in 2016, to find the smallest n such that the value of the nth Busy Beaver number can be proven independent of the axioms of ZF set theory? We managed to show that BB(8000) was independent. This was later improved to BB(745) by Stefan O’Rear and Johannes Riebel. Well, today Rohan Ridenour writes to tell me that he’s achieved a further improvement to BB(643). Awesome!
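To make the objects in these independence results concrete: for tiny numbers of states, the Busy Beaver numbers can simply be computed by brute force. The sketch below enumerates every 2-state, 2-symbol Turing machine (under the standard Radó convention, where the halting transition still writes a symbol and moves the head, and counts as a step), runs each from an all-zero tape, and recovers the known champions: S(2) = 6 steps and BB(2) = 4 ones. The step cap of 200 is an assumption that happens to be safe here, since it's known that no halting 2-state machine runs longer than 6 steps.

```python
from itertools import product

def run(machine, max_steps=200):
    """Run a 2-symbol Turing machine from an all-zero tape.
    Return (steps, ones) if it halts within the cap, else None."""
    tape = {}
    pos, state, steps = 0, 0, 0
    while steps < max_steps:
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
        steps += 1
        if nxt == 'H':  # halting transition: counted as a step
            return steps, sum(tape.values())
        state = nxt
    return None  # did not halt within the cap

# Each (state, symbol) entry: write 0/1, move L/R, go to state A(0), B(1), or Halt
entries = list(product((0, 1), ('L', 'R'), (0, 1, 'H')))

best_steps = best_ones = 0
for tA0, tA1, tB0, tB1 in product(entries, repeat=4):  # 12^4 = 20736 machines
    machine = {(0, 0): tA0, (0, 1): tA1, (1, 0): tB0, (1, 1): tB1}
    result = run(machine)
    if result:
        steps, ones = result
        best_steps = max(best_steps, steps)
        best_ones = max(best_ones, ones)

print(best_steps, best_ones)  # → 6 4
```

Already at 5 states this brute-force approach required a massive collaborative effort (the recent BB(5) = 47,176,870 determination), and at 643 states the value is provably beyond the reach of ZF itself.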


With yesterday’s My Prayer, for the first time I can remember in two decades of blogging, I put up a new post with the comments section completely turned off. I did so because I knew my nerves couldn’t handle a triumphant interrogation from Trumpist commenters about whether, in the wake of their Messiah’s (near-)blood sacrifice on behalf of the Nation, I’d at last acquiesce to the dissolution of America’s constitutional republic and its replacement by the dawning order: one where all elections are fraudulent unless the MAGA candidate wins, and where anything the leader does (including, e.g., jailing his opponents) is automatically immune from prosecution. I couldn’t handle it, but at the same time, and in stark contrast to the many who attack from my left, I also didn’t care what they thought of me.

With hindsight, turning off comments yesterday might be the single best moderation decision I ever made. I still got feedback on what I’d written, on Facebook and by email and text message and in person. But this more filtered feedback was … thoughtful. Incredibly, it lowered the stress that I was feeling rather than raising it even higher.

For context, I should explain that over the past couple years, one or more trolls have developed a particularly vicious strategy against me. Below my every blog post, even the most anodyne, a “new” pseudonymous commenter shows up to question me about the post topic, in what initially looks like a curious, good-faith way. So I engage, because I’m Scott Aaronson and that’s what I do; that’s a large part of the value I can offer the world.

Then, only once a conversation is underway does the troll gradually ratchet up the level of crazy, invariably ending at some place tailor-made to distress me (for example: vaccines are poisonous, death to Jews and Israel, I don’t understand basic quantum mechanics or computer science, I’m a misogynist monster, my childhood bullies were justified and right). Of course, as soon as I’ve confirmed the pattern, I send further comments straight to the trash. But the troll then follows up with many emails taunting me for not engaging further, packed with farcical accusations and misreadings for me to rebut and other bait.

Basically, I’m now consistently subjected to denial-of-service attacks against my open approach to the world. Or perhaps I’ve simply been schooled in why most people with audiences of thousands or more don’t maintain comment sections where, by default, they answer everyone! And yet it’s become painfully clear that, as long as I maintain a quasi-open comment section, I’ll feel guilty if I don’t answer everyone.


So without further ado, I hereby announce my new comment policy. Henceforth all comments to Shtetl-Optimized will be treated, by default, as personal missives to me—with no expectation either that they’ll appear on the blog or that I’ll reply to them.

At my leisure and discretion, and in consultation with the Shtetl-Optimized Committee of Guardians, I’ll put on the blog a curated selection of comments that I judge to be particularly interesting or to move the topic forward, and I’ll do my best to answer those. But it will be more like Letters to the Editor. Anyone who feels unjustly censored is welcome to the rest of the Internet.

The new policy starts now, in the comment section of this post. To the many who’ve asked me for this over the years, you’re welcome!

My Prayer

July 14th, 2024

It is the duty of good people, always and everywhere, to condemn, reject, and disavow the use of political violence.

Even or especially when evildoers would celebrate the use of political violence against us.

It is our duty always to tell the truth, always to play by the rules — even when evil triumphs by lying, by sneeringly flouting every rule.

It appears to be an iron law of Fate that whenever good tries to steal a victory by evil means, it fails. This law is so infallible that any good that tries to circumvent it thereby becomes evil.

When Sam Bankman-Fried tries to save the world using financial fraud — he fails. Only the selfish succeed through fraud.

When kind, nerdy men, in celibate desperation, try to get women to bed using “Game” and other underhanded tactics — they fail. Only the smirking bullies get women that way.

Quantum mechanics is false, because its Born Rule speaks of randomness.

But randomness can’t explain why a bullet aimed at a destroyer of American democracy must inevitably miss by inches, while a bullet aimed at JFK or RFK or MLK or Gandhi or Rabin must inevitably meet its target.

Yet for all that, over the millennia, good has made actual progress. Slavery has been banished to the shadows. Children survive to adulthood. Sometimes altruists become billionaires, or billionaires altruists. Sometimes the good guy gets the girl.

Good has progressed not by lucky breaks — for good never gets lucky breaks — but only because the principles of good are superior.

There’s a kind of cosmic solace that could be offered even to the Jewish mother in the gas chamber watching her children take their last breaths, though the mother could be forgiven for rejecting it.

The solace is that good will triumph — if not in the next four years, then in the four years after that.

Or if not in four, then in a hundred.

Or if not in a hundred, then in a thousand.

Or if not in the entire history of life on this planet, then on a different planet.

Or if not in this universe, then in a different universe.

Let us commit to fighting for good using good methods only. Fate has decreed in any case that, for us, those are the only methods that work.

Let us commit to use good methods only even if it means failure, heartbreak, despair, the destruction of democratic institutions and ecosystems multiplied by a thousand or a billion or any other constant — with the triumph of good only in the asymptotic limit.

Good will triumph, when it does, only because its principles are superior.

Endnote: I’ve gotten some pushback for this prayer from one of my scientific colleagues … specifically, for the part of the prayer where I deny the universal validity of the Born rule. And yet a less inflammatory way of putting the same point would simply be: I am not a universal Bayesian. There are places where my personal utility calculations do a worst-case analysis rather than averaging over possible futures for the world.

Endnote 2: It is one thing to say, never engage in political violence because the expected utility will come out negative. I’m saying something even stronger than that. Namely, even if the expected utility comes out positive, throw away the whole framework of being an expected-utility maximizer before you throw away the commitment never to endorse political violence. There’s a class of moral decisions for which you’re allowed to use, even commendable for using, expected-utility calculations, and this is outside that class.

Endnote 3: If you thought that Trump’s base was devoted before, now that the MAGA Christ-figure has sacrificed his flesh — or come within a few inches of doing so — on behalf of the Nation, they will go to the ends of the earth for him, as much as any followers did for any ruler in human history. Now the only questions, assuming Trump wins (as he presumably will), are where he chooses to take his flock, and what emerges in the aftermath for what we currently call the United States. I urge my left-leaning American friends to look into second passports. Buckle up, and may we all be here to talk about it on the other end.

Quantum developments!

July 11th, 2024

Perhaps like the poor current President of the United States, I can feel myself fading, my memory and verbal facility and attention to detail failing me, even while there’s so much left to do to battle the nonsense in the world. I started my career on an accelerated schedule—going to college at 15, finishing my PhD at 22, etc. etc.—and the decline is (alas) also hitting me early, at the ripe age of 43.

Nevertheless, I do seem to remember that this was once primarily a quantum computing blog, and that I was known to the world as a quantum computing theorist. And exciting things continue to happen in quantum computing…


First, a company in the UK called Oxford Ionics has announced that it now has a system of trapped-ion qubits in which it’s prepared two-qubit maximally entangled states with 99.97% fidelity. If true, this seems extremely good. Indeed, it seems better than the numbers from bigger trapped-ion efforts, and quite close to the ~99.99% that you’d want for quantum fault-tolerance. But maybe there’s a catch? Will they not be able to maintain this kind of fidelity when doing a long sequence of programmable two-qubit gates on dozens of qubits? Can the other trapped-ion efforts actually achieve similar fidelities in head-to-head comparisons? Anyway, I was surprised to see how little attention the paper got on SciRate. I look forward to hearing from experts in the comment section.
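To see why the gap between 99.97% and 99.99% matters so much, here's a back-of-the-envelope sketch. In surface-code-style fault tolerance, the logical error rate is suppressed roughly as p_logical ~ A·(p/p_th)^((d+1)/2) for physical error rate p below the threshold p_th and code distance d. The constants below (p_th = 1%, prefactor A = 0.1) are illustrative assumptions, not numbers from the Oxford Ionics paper:

```python
def logical_error_rate(p, d, p_th=1e-2, A=0.1):
    """Rough below-threshold scaling of the logical error rate
    for a distance-d code, with illustrative constants."""
    return A * (p / p_th) ** ((d + 1) / 2)

for fidelity in (0.9997, 0.9999):
    p = 1 - fidelity  # physical two-qubit error rate
    for d in (7, 27):
        print(f"F={fidelity}, d={d}: p_logical ~ {logical_error_rate(p, d):.1e}")
```

The point is the exponential leverage: because the suppression exponent grows with distance, every factor-of-a-few improvement in physical fidelity gets amplified enormously at large code distances.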


Second, I almost forgot … but last week Quantinuum announced that it’s done a better quantum supremacy experiment based on Random Circuit Sampling with 56 qubits—similar to what Google and USTC did in 2019-2020, but this time using 2-qubit gates with 99.84% fidelities (rather than merely ~99.5%). This should set a new standard for those looking to simulate these things using tensor network methods.
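For a sense of why that fidelity difference matters for Random Circuit Sampling: to first order, the total circuit fidelity (which is roughly what the linear cross-entropy benchmark estimates) is just the product of the individual gate fidelities. The gate count below (56 qubits, 20 layers, ~28 two-qubit gates per layer) is an illustrative assumption, not Quantinuum's actual circuit:

```python
def circuit_fidelity(gate_fidelity, n_two_qubit_gates):
    """First-order estimate: total fidelity is the product of gate fidelities."""
    return gate_fidelity ** n_two_qubit_gates

n_gates = 28 * 20  # ~560 two-qubit gates (assumed, for illustration)
for f in (0.995, 0.9984):
    print(f"gate fidelity {f}: circuit fidelity ~ {circuit_fidelity(f, n_gates):.3f}")
```

With these assumed gate counts, 99.5% per-gate fidelity leaves a total signal of only a few percent, while 99.84% keeps it around 40%: an enormous difference in how hard the output distribution is to distinguish from noise, and hence in how hard the experiment is to spoof classically.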


Third, a new paper by Schuster, Haferkamp, and Huang gives a major advance on k-designs and pseudorandom unitaries. Roughly speaking, the paper shows that even in one dimension, a random n-qubit quantum circuit, with alternating brickwork layers of 2-qubit gates, forms a “k-design” after only O(k polylog k log n) layers of gates. Well, modulo one caveat: the “random circuit” isn’t from the most natural ensemble, but has to have some of its 2-qubit gates set to the identity, namely those that straddle certain contiguous blocks of log n qubits. This seems like a purely technical issue—how could randomizing those straddling gates make the mixing behavior worse?—but future work will be needed to address it. Notably, the new upper bound is off from the best-possible k layers by only logarithmic factors. (For those tuning in from home: a k-design informally means a collection of n-qubit unitaries such that, from the perspective of degree-k polynomials, choosing a unitary randomly from the collection looks the same as choosing randomly among all n-qubit unitary transformations—i.e., from the Haar measure.)
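One standard way to quantify closeness to a k-design is the frame potential F_k = E_{U,V}|tr(U†V)|^{2k}: for the Haar measure on U(d) it equals k! whenever k ≤ d, and this value lower-bounds the frame potential of every ensemble, with equality (up to error) being the defining property of a k-design. The toy sketch below (my own illustration, not from the paper) estimates the frame potential of Haar-random single-qubit unitaries by Monte Carlo, sampling via QR decomposition of complex Ginibre matrices:

```python
import numpy as np

def haar_unitary(d, rng):
    """Sample a Haar-random d x d unitary via QR of a complex
    Ginibre matrix, with phases fixed so the law is exactly Haar."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    lam = np.diag(r) / np.abs(np.diag(r))
    return q * lam

def frame_potential(unitaries, k):
    """Monte Carlo estimate of F_k = E_{U,V} |tr(U^dag V)|^{2k}."""
    total, count = 0.0, 0
    for i, u in enumerate(unitaries):
        for j, v in enumerate(unitaries):
            if i == j:
                continue
            total += np.abs(np.trace(u.conj().T @ v)) ** (2 * k)
            count += 1
    return total / count

rng = np.random.default_rng(0)
ensemble = [haar_unitary(2, rng) for _ in range(200)]
print(frame_potential(ensemble, 1))  # ~ 1  (Haar value 1! = 1)
print(frame_potential(ensemble, 2))  # ~ 2  (Haar value 2! = 2)
```

The content of the Schuster–Haferkamp–Huang result is that shallow brickwork circuits already drive quantities like this down to (near) the Haar value, in depth only polylogarithmic in n for fixed k.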

Anyway, even in my current decrepit state, I can see that such a result would have implications for … well, all sorts of things that quantum computing and information theorists care about. Again I welcome any comments from experts!


Incidentally, congratulations to Peter Shor for winning the Shannon Award!