Letter to a Jewish voter in Pennsylvania

November 3rd, 2024

Election Day Update: For anyone who’s still undecided (?!?), I can’t beat this from Sam Harris.

When I think of Harris winning the presidency this week, it’s like watching a film of a car crash run in reverse: the windshield unshatters; stray objects and bits of metal converge; and defenseless human bodies are hurled into states of perfect repose. Normalcy descends out of chaos.


Important Announcement: I don’t in any way endorse voting for Jill Stein, or any other third-party candidate. But if you are a Green Party supporter who lives in a swing state, then please at least vote for Harris, and use SwapYourVote.org to arrange for two (!) people in safe states to vote for Jill Stein on your behalf. Thanks so much to friend-of-the-blog Linchuan Zhang for pointing me to this resource.

Added on Election Day: And, if you swing that way, click here to arrange to have your vote for Kamala in a swing state traded for two votes for Libertarian candidate Chase Oliver in safe states. In any case, if you’re in a swing state and you haven’t yet voted (for Kamala Harris and for the norms of civilization), do!


For weeks I’d been wondering what I could say right before the election, at this momentous branch-point in the wavefunction, that could possibly do any good. Then, the other day, a Jewish voter in Pennsylvania and Shtetl-Optimized fan emailed me to ask my advice. He said that he’d read my Never-Trump From Here to Eternity FAQ and saw the problems with Trump’s autocratic tendencies, but that his Israeli friends and family wanted him to vote Trump anyway, believing him better on the narrow question of “Israel’s continued existence.” I started responding, and then realized that my response was the election-eve post I’d been looking for. So without further ado…


Thanks for writing.  Of course this is ultimately between you and your conscience (and your empirical beliefs), but I can tell you what my Israeli-American wife and I did.  We voted for Kamala, without the slightest doubt or hesitation.  We’d do it again a thousand quadrillion times.  We would’ve done the same in the swing state of Pennsylvania, where I grew up (actually in Bucks, one of the crucial swing counties).

And later this week, along with tens of millions of others, I’ll refresh the news with heart palpitations, looking for movement toward blue in Pennsylvania and Wisconsin.  I’ll be joyous and relieved if Kamala wins.  I’ll be ashen-faced if she doesn’t.  (Or if there’s a power struggle that makes the 2021 insurrection look like a dress rehearsal.)  And I’ll bet anyone, at 100:1 odds, that at the end of my life I’ll continue to believe that voting Kamala was the right decision.

I, too, have pro-Israel friends who urged me to switch to Trump, on the ground that if Kamala wins, then (they say) the Jews of Israel are all but doomed to a second Holocaust.  For, they claim, the American Hamasniks will then prevail on Kamala to prevent Israel from attacking Iran’s nuclear sites, or to leave Israel to fend for itself if it does.  And therefore, Iran will finish building and test nuclear weapons in the next couple years, and then it will rebuild the battered Hamas and Hezbollah under its nuclear umbrella, and then it will fulfill its stated goal since 1979, of annihilating the State of Israel, by slaughtering all the Jews who aren’t able to flee.  And, just to twist the knife, the UN diplomats and NGO officials and journalists and college students and Wikipedia editors who claimed such a slaughter was a paranoid fantasy will all cheer it when it happens, calling it “justice” and “resistance” and “intifada.”

And that, my friends say, will finally show me the liberal moral evolution of humanity since 1945, in which I’ve placed so much stock.  “See, even while they did virtually nothing to stop the first Holocaust, the American and British cultural elites didn’t literally cheer the Holocaust as it happened.  This time around, they’ll cheer.”

My friends’ argument is that, if I’m serious about “Never Again” as a moral lodestar of my life, then the one issue of Israel and Iran needs to override everything else I’ve always believed, all my moral and intellectual repugnance at Trump and everything he represents, all my knowledge of his lies, his evil, his venality, all the former generals and Republican officials who say that he’s unfit to serve and an imminent danger to the Republic.  I need to vote for this madman, this pathological liar, this bullying autocrat, because at least he’ll stand between the Jewish people and the darkness that would devour them, as it devoured them in my grandparents’ time.

My friends add that it doesn’t matter that Kamala’s husband is Jewish, that she’s mouthed all the words a thousand times about Israel’s right to defend itself, that Biden and Harris have indeed continued to ship weapons to Israel with barely a wag of their fingers (even as they’ve endured vituperation over it from their left, even as Kamala might lose the whole election over it).  Nor does it matter that a commanding majority of American Jews will vote for Kamala, or that … not most Israelis, but most of the Israelis in academia and tech who I know, would vote for Kamala if they could.  They could all be mistaken about their own interests.  But you and I, say my right-wing friends, realize that what actually matters is Iran, and what the next president will do about Iran.  Trump would unshackle Israel to do whatever it takes to prevent nuclear-armed Ayatollahs.  Kamala wouldn’t.

Anyway, I’ve considered this line of thinking.  I reject it with extreme prejudice.

To start with the obvious, I’m not a one-issue voter.  Presumably you aren’t either.  Being Jewish is a fundamental part of my humanity—if I didn’t know that before I’d witnessed the world’s reaction to October 7, then I certainly know now.  But only in the fantasies of antisemites would I vote entirely on the basis of “is this good for the Jews?”  The parts of me that care about the peaceful transfer of power, about truth, about standing up to Putin, about the basic sanity of the Commander-in-Chief in an emergency, about climate change and green energy and manufacturing, about not destroying the US economy through idiotic tariffs, about talented foreign scientists getting green cards, about the right to abortion, about RFK and his brainworm not being placed in charge of American healthcare, even about AI safety … all those parts of me are obviously for Kamala.

More interestingly, though, the Jewish part of me is also for Kamala—if possible, even more adamantly than other parts.  It’s for Kamala because…

Well, after these nine surreal years, how does one even spell out the Enlightenment case against Trump?  How does one say what hasn’t already been said a trillion times?  Now that the frog is thoroughly boiled, how does one remind people of the norms that used to prevail in America—even after Newt Gingrich and Sarah Palin and the rest had degraded them—and how those norms were what stood between us and savagery … and how laughably unthinkable is the whole concept of Trump as president, the instant you judge him according to those norms?

Kamala, whatever her faults, is basically a normal politician.  She lies, but only as normal politicians lie.  She dodges questions, changes her stances, says different things to different audiences, but only as normal politicians do.  Trump is something else entirely.  He’s one of the great flimflam artists of human history.  He believes (though “belief” isn’t quite the right word) that truth is not something external to himself, but something he creates by speaking it.  He is the ultimate postmodernist.  He’s effectively created a new religion, one of grievance and lies and vengeance against outsiders, and converted a quarter of Americans to his religion, while another quarter might vote it into power because of what they think is in it for them.

And this cult of lies … this is what you’re proposing the Jewish people enter into a strategic alliance with?  Do you imagine this cult is a trustworthy partner, one likely to keep its promises?

For centuries, Jews have done consistently well under cosmopolitan liberal democracies, and consistently poorly—when they remained alive at all—under nativist tyrants.  Do you expect whatever autocratic regime follows Trump, a regime of JD Vance and Tucker Carlson and the like, to be the first exception to this pattern in history?

For I take it as obvious that a second Trump term, and whatever follows it, will make the first Trump term look like a mere practice run, a Beer Hall Putsch.  Trump I was restrained by John Kelly, by thousands of civil service bureaucrats and judges, by the generals, and in the last instance, by Mike Pence.  But Trump II will be out for the blood of his enemies—he says so himself at his rallies—and will have nothing to restrain him, not even any threat of criminal prosecution.  Do you imagine this goes well for the Jews, or for pretty much anyone?

It doesn’t matter if Trump has no personal animus against Jews—excepting, of course, the majority who vote against him.  Did the idealistic Marxist intellectuals of Russia in 1917 want Stalin?  Did the idealistic Iranian students of Iran in 1979 want Khomeini?  It doesn’t matter: what matters is what they enabled.  Turn over the rock of civilization, and everything that was wriggling underneath is suddenly loosed on the world.

How much time have you spent looking at pro-Israel people on Twitter (Hen Mazzig, Haviv Rettig Gur, etc.), and then—crucially—reading their replies?  I spend at least an hour or two per day on that, angry and depressed though it makes me, perhaps because of an instinct to stare into the heart of darkness, not to look away from a genocidal evil arrayed against my family.  

Many replies are the usual: “Shut the fuck up, Zio, and stop murdering babies.”  “Two-state solution?  I have a different solution: that all you land-thieves pack your bags and go back to Poland.” But then, every time, you reach tweets like “you Jews have been hated and expelled from all the world’s countries for thousands of years, yet you never consider that the common factor is you.”  “Your Talmud commands you to kill goyim children, so that’s why you’re doing it.”  “Even while you maintain apartheid in Palestine, you cynically import millions of third-world savages to White countries, in order to destroy them.”  None of this is the way leftists talk, not even the most crazed leftists.  We’ve now gone all the way around the horseshoe.  Or, we might say, we’re no longer selecting on the left or right of politics at all, but simply on the bottom.

And then you see that these bottom-feeders often have millions of followers each.  They command armies.  The bottom-feeders—left, right, Islamic fundamentalist, and unclassifiably paranoid—are emboldened as never before.  They’re united by a common enemy, which turns out to be the same enemy they’ve always had.

Which brings us to Elon Musk.  I personally believe that Musk, like Trump, has nothing against the Jews, and is if anything a philosemite.  But it’s no longer a question of feelings.  Through his changes to Twitter, Musk has helped his new ally Trump flip over the boulder, and now all the demons that were wriggling beneath are loosed on civilization.

Should we, as Jews, tolerate the demons in exchange for Trump’s tough-guy act on Iran?  Just like the evangelicals previously turned a blind eye to Trump’s philandering, his sexual assaults, his gleeful cruelty, his spitting on everything Christianity was ever supposed to stand for, simply because he promised them the Supreme Court justices to overturn Roe v. Wade?  Faced with a man who’s never had a human relationship in his life that wasn’t entirely transactional, should we be transactional ourselves?

I’m not convinced that even if we did, we’d be getting a good bargain.  Iran is no longer alone, but part of an axis that includes China, Russia, and North Korea.  These countries prop up each other’s economies and militaries; they survive only because of each other.  As others have pointed out, the new Axis is actually more tightly integrated than the Axis powers ever were in WWII.  The new Axis has already invaded Ukraine and perhaps soon Taiwan and South Korea.  It credibly threatens to end the Pax Americana.  And to face Hamas or Hezbollah is to face Iran is to face the entire new Axis.

Now Kamala is not Winston Churchill.  But at least she doesn’t consider the tyrants of Russia, China, and North Korea to be her personal friends, trustworthy because they flatter her.  At least she, unlike Trump, realizes that the current governments of China, Russia, North Korea, and Iran do indeed form a new axis of evil, and she has the glimmers of consciousness that the founders of the United States stood for something different from what those tyrannies stand for, and that this other thing that our founders stood for was good.  If war does come, at least she’ll listen to the advice of generals, rather than clowns and lackeys.  And if Israel or America do end up in wars of survival, from the bottom of my heart she’s the one I’d rather have in charge.  For if she’s in charge, then through her, the government of the United States is still in charge.  Our ripped and tattered flag yet waves.  If Trump is in charge, who or what is at the wheel besides his own unhinged will, or that of whichever sordid fellow-gangster currently has his ear?

So, yes, as a human being and also as a Jew, this is why I voted early for Kamala, and why I hope you’ll vote for her too. If you disagree with her policies, start fighting those policies once she’s inaugurated on January 20, 2025. At least there will still be a republic, with damaged but functioning error-correcting machinery, in which you can fight.

All the best,
Scott


More Resources: Be sure to check out Scott Alexander’s election-eve post, which (just like in 2016) endorses any listed candidate other than Trump, but specifically makes the case to voters put off (as Scott is) by Democrats’ wokeness. Also check out Garry Kasparov’s epic tweet-thread on why he supports Kamala, and his essay The United States Cannot Descend Into Authoritarianism.

Steven Rudich (1961-2024)

November 2nd, 2024

I was sure my next post would be about the election—the sword of Damocles hanging over the United States and civilization as a whole. Instead, I have sad news, but also news that brings memories of warmth, humor, and complexity-theoretic insight.

Steven Rudich—professor at Carnegie Mellon, central figure of theoretical computer science since the 1990s, and a kindred spirit and friend—has died at the too-early age of 63. While I interacted with him far less often than I wish I had, it would be no exaggeration to call him one of the biggest influences on my life and career.

I first became aware of Steve at age 17, when I read the Natural Proofs paper that he coauthored with Razborov. I was sitting in the basement computer room at Telluride House at Cornell, and still recall the feeling of awe that came over me with every page. This one paper changed my scientific worldview. It expanded my conception of what the P versus NP problem was about and what theoretical computer science could even do—showing how it could turn in on itself, explain its own difficulties in proving problems hard in terms of the truth of those same problems’ hardness, and thereby transmute defeat into victory. I may have been bowled over by the paper’s rhetoric as much as by its results: it was like, you’re allowed to write that way?

I was nearly as impressed by Steve’s PhD thesis, which was full of proofs that gave off the appearance of being handwavy, “just phoning it in,” but were in reality completely rigorous. The result that excited me the most said that, if a certain strange combinatorial conjecture was true, then there was essentially no hope of proving that P≠NP∩coNP relative to a random oracle with probability 1. I played around with the combinatorial conjecture but couldn’t make headway on it; a year or two later, I was excited when I met Clifford Smyth and he told me that he, Kahn, and Saks had just proved it. Rudich’s conjecture directly inspired me to work on what later became the Aaronson-Ambainis Conjecture, which is still unproved, but which if true, similarly implies that there’s no hope of proving P≠BQP relative to a random oracle with probability 1.

When I applied to CS PhD programs in 1999, I wrote about how I wanted to sing the ideas of theoretical computer science from the rooftops—just like Steven Rudich had done, with the celebrated Andrew’s Leap summer program that he’d started at Carnegie Mellon. (How many other models were there? Indeed, how many other models are there today?) I was then honored beyond words when Steve called me on the phone, before anyone else had, and made an hourlong pitch for me to become his student. “You’re what I call a ‘prefab’,” he said. “You already have the mindset that I try to instill in students by the end of their PhDs.” I didn’t have much self-confidence then, which is why I can still quote Steve’s words a quarter-century later. In the ensuing years, when (as often) I doubted myself, I’d think back to that phone call with Steve, and my burning desire to be what he apparently thought I was.

Alas, when I arrived in Pittsburgh for CMU’s visit weekend, I saw Steve holding court in front of a small crowd of students, dispensing wisdom and doing magic tricks. I was miffed that he never noticed or acknowledged me: had he already changed his mind about me, lost interest? It was only later that I learned that Steve was going blind at the time, and literally hadn’t seen me.

In any case, while I came within a hair of accepting CMU’s offer, in the end I chose Berkeley. I wasn’t yet 100% sure that I wanted to do quantum computing (as opposed to AI or classical complexity theory), but the lure of the Bay Area, of the storied CS theory group where Steve himself had studied, and of Steve’s academic sibling Umesh Vazirani proved too great.

Full of regrets about the road not taken, I was glad that, in the summer between undergrad and PhD, I got to attend the PCMI summer school on computational complexity at the Institute for Advanced Study in Princeton, where Steve gave a spectacular series of lectures. By that point, Steve was almost fully blind. He put transparencies up, sometimes upside-down until the audience corrected him, and then lectured about them entirely from memory. He said that doing CS theory sightless was a new, more conceptual experience for him.

Even in his new condition, Steve’s showmanship hadn’t left him; he held the audience spellbound as few academics do. And in a special lecture on “how to give talks,” he spilled his secrets.

“What the speaker imagines the audience is thinking,” read one slide. And then, inside the thought bubbles: “MORE! HARDER! FASTER! … Ahhhhh yes, QED! Truth is beauty.”

“What the audience is actually thinking,” read the next slide, below which: “When is this over? I need to pee. Can I get a date with the person next to me?” (And this was before smartphones.) And yet, Steve explained, rather than resenting the many demands on the audience’s attention, a good speaker would break through, meet people where they were, just as he was doing right then.

I listened, took mental notes, resolved to practice this stuff. I reflected that, even if my shtick only ever became 10% as funny or fluid as Steve’s, I’d still come out way ahead.

It’s possible that the last time I saw Steve was in 2007, when I visited Carnegie Mellon to give a talk about algebrization, a new barrier to solving P vs. NP (and other central problems of complexity theory) that Avi Wigderson and I had recently discovered. When I started writing the algebrization paper, I very consciously modeled it after the Natural Proofs paper; the one wouldn’t have been thinkable without the other. So you can imagine how much it meant to me when Steve liked algebrization—when, even though he couldn’t see my slides, he got enough from the spoken part of the talk to burst with “conceptual” questions and comments.

Steve not only peeled back the mystery of P vs NP insofar as anyone has. He did it with exuberance and showmanship and humor and joy and kindness. I won’t forget him.


I’ve written here only about the tiniest sliver of Steve’s life: namely, the sliver where it intersected mine. I wish that sliver were a hundred times bigger, so that there’d be a hundred times more to write. But CS theory, and CS more broadly, are communities. When I posted about Steve’s passing on Facebook, I got inundated by comments from friends of mine who (as it turned out) had taken Steve’s courses, or TA’d for him, or attended Andrew’s Leap, or otherwise knew him, and on whom he’d left a permanent impression—and I hadn’t even known any of this.

So I’ll end this post with a request: please share your Rudich stories in the comments! I’d especially love specific recollections of his jokes, advice, insights, or witticisms. We now live in a world where, even in the teeth of the likelihood that P≠NP, powerful algorithms running in massive datacenters nevertheless try to replicate the magic of human intelligence, by compressing and predicting all the text on the public Internet. I don’t know where this is going, but I can’t imagine that it would hurt for the emerging global hive-mind to know more about Steven Rudich.


My podcast with Brian Greene

October 18th, 2024

Yes, he’s the guy from The Elegant Universe book and TV series. Our conversation is 1 hour 40 minutes; as usual I strongly recommend listening at 2x speed. The topics, chosen by Brian, include quantum computing (algorithms, hardware, error-correction … the works), my childhood, the interpretation of quantum mechanics, the current state of AI, the future of sentient life in the cosmos, and mathematical Platonism. I’m happy with how it turned out; in particular, my verbal infelicities seem to have been at a minimum this time. I recommend skipping the YouTube comments if you want to stay sane, but do share your questions and reactions in the comments here. Thanks to Brian and his team for doing this. Enjoy!


Update (Oct. 28): If that’s not enough Scott Aaronson video content for you, please enjoy another quantum computing podcast interview, this one with Ayush Prakash and shorter (clocking in at 45 minutes). Ayush pitched this podcast to me as an opportunity to explain quantum computing to Gen Z. Thus, I considered peppering my explanations of interference and entanglement with such phrases as ‘fo-shizzle’ and ‘da bomb,’ but I desisted after reflecting that whatever youth slang I knew was probably already outdated whenever I’d picked it up, back in the twentieth century.

My Nutty, Extremist Beliefs

October 13th, 2024

In nearly twenty years of blogging, I’ve unfortunately felt more and more isolated and embattled. It now feels like anything I post earns severe blowback, from ridicule on Twitter, to pseudonymous comment trolls, to scary and aggressive email bullying campaigns. Reflecting on this, though, I came to see that such strong reactions are an understandable response to my extremist stances. When your beliefs smash the Overton Window into tiny shards like mine do, what do you expect? Just consider some of the intransigent, hard-line stances I’ve taken here on Shtetl-Optimized:

(1) US politics. I’m terrified of right-wing authoritarian populists and their threat to the Enlightenment. For that and many other reasons, I vote straight-ticket Democrat, donate to Democratic campaigns, and encourage everyone else to do likewise. But I also wish my fellow Democrats would rein in the woke stuff, stand up more courageously to the world’s autocrats, and study more economics, so they understand why rent control, price caps, and other harebrained interventions will always fail.

(2) Quantum computing. I’m excited about the prospects of QC, so much so that I’ve devoted most of my career to that field. But I also think many of QC’s commercial applications have been wildly oversold to investors, funding agencies, and the press, and I haven’t been afraid to say so.

(3) AI. I think the spectacular progress of AI over the past few years raises scary questions about where we’re headed as a species.  I’m neither in the camp that says “we’ll almost certainly die unless we shut down AI research,” nor the camp that says “the good guys need to race full-speed ahead to get AGI before the bad guys get it.” I’d like us to proceed in AI research with caution and guardrails and the best interests of humanity in mind, rather than the commercial interests of particular companies.

(4) Climate change. I think anthropogenic climate change is 100% real and one of the most urgent problems facing humanity, and those who deny this are being dishonest or willfully obtuse.  But because I think that, I also think it’s way past time to explore technological solutions like modular nuclear reactors, carbon capture, and geoengineering. I think we can’t virtue-signal or kumbaya our way out of the climate crisis.

(5) Feminism and dating. I think the emancipation of women is one of the modern world’s greatest triumphs.  I reserve a special hatred for misogynistic, bullying men. But I also believe, from experience, that many sensitive, nerdy guys severely overcorrected on feminist messaging, to the point that they became terrified of the tiniest bit of assertiveness or initiative in heterosexual courtship. I think this terror has led millions of them to become bitter “incels.”  I want to figure out ways to disrupt the incel pipeline, by teaching shy nerdy guys to have healthy, confident dating lives, without thereby giving asshole guys license to be even bigger assholes.

(6) Israel/Palestine. I’m passionately in favor of Israel’s continued existence as a Jewish state, without which my wife’s family and many of my friends’ and colleagues’ families would have been exterminated. However, I also despise Bibi and the messianic settler movement to which he’s beholden. I pray for a two-state solution where Israelis and Palestinians will coexist in peace, free from their respective extremists.

(7) Platonism. I think that certain mathematical questions, like the Axiom of Choice or the Continuum Hypothesis, might not have any Platonic truth-value, there being no fact of the matter beyond what can be proven from various systems of axioms. But I also think, with Gödel, that statements of elementary arithmetic, like the Goldbach Conjecture or P≠NP, are just Platonically true or false independent of any axiom system.

(8) Science and religion. As a secular rationalist, I’m acutely aware that no ancient religion can be “true,” in the sense believed by either the ancients or modern fundamentalists. Still, the older I’ve gotten, the more I’ve come to see religions as vast storehouses containing (among much else) millennia of accumulated wisdom about how humans can or should live. As in the parable of Chesterton’s Fence, I think this wisdom is often far from obvious and nearly impossible to derive from first principles. So I think that, at the least, secularists will need to figure out their own long-term methods to encourage many of the same things that religion once did—such as stable families, childbirth, self-sacrifice and courage in defending one’s community, and credible game-theoretic commitments to keeping promises and various other behaviors.

(9) Foreign policy and immigration. I’d like the US to stand more courageously against evil regimes, such as those of China, Russia, and Iran. At the same time, I’d like the US to open our gates much wider to students, scientists, and dissidents from those nations who seek freedom in the West. I think our refusal to do enough of this is a world-historic self-own.

(10) Academia vs. industry. I think both have advantages and disadvantages for people in CS and other technical fields. At their best, they complement each other. When advising a student which path to pursue, I try to find out all I can about the student’s goals and personality.

(11) Population ethics. I’m worried about how the earth will support 9 or 10 billion people with first-world living standards, which is part of why I’d like career opportunities for women, girls’ education, contraception, and (early-term) abortion to become widely available everywhere on earth. All the same, I’m not an antinatalist. I think raising one or more children in a loving home should generally be celebrated as a positive contribution to the world.

(12) The mind-body problem. I think it’s possible that there’s something profound we don’t yet understand about consciousness and its relation to the physical world. At the same time, I think the burden is clearly on the mind-body dualists to articulate what that something might be, and how to reconcile it with the known laws of physics. I admire the audacity of Roger Penrose in tackling this question head-on, but I don’t think his solution works.

(13) COVID response. I think the countries that did best tended to be those that had some coherent strategy—whether that was “let the virus rip, keep schools open, quarantine only the old and sick,” or “aggressively quarantine everyone and wait for a vaccine.” I think countries torn between these strategies, like the US, tended to get the worst of all worlds. On the other hand, I think the US did one huge thing right, which was greatly to accelerate (by historical standards) the testing and distribution of the mRNA vaccines. For the sake of the millions who died and the billions who had their lives interrupted, I only wish we’d rushed the vaccines much more. We ought now to be spending trillions on a vaccine pipeline that’s ready to roll within weeks as soon as the next pandemic hits.

(14) P versus NP. From decades of intuition in math and theoretical computer science, I think we can be fairly confident of P≠NP—but I’d “only” give it, say, 97% odds. Here as elsewhere, we should be open to the possibility of world-changing surprises.

(15) Interpretation of QM. I get really annoyed by bad arguments against the Everett interpretation, which (contrary to a popular misconception) I understand to result from scientifically conservative choices. But I’m also not an Everettian diehard. I think that, if you push questions like “but is anyone home in the other branches?” hard enough, you arrive at questions about personal identity and consciousness that were profoundly confusing even before quantum mechanics. I hope we someday learn something new that clarifies the situation.

Anyway, with extremist, uncompromising views like those, is it any surprise that I get pilloried and denounced so often?

All the same, I sometimes ask myself: what was the point of becoming a professor, seeking and earning the hallowed protections of tenure, if I can’t then freely express radical, unbalanced, batshit-crazy convictions like the ones in this post?

My October 7 post

October 7th, 2024

For weeks I agonized over what, if anything, this post should say. How does one commemorate a tragedy that isn’t over for millions of innocents on either side? How do I add to what friend-of-the-blog Boaz Barak and countless others have already written?

Do I review the grisly details of Black Shabbat, tell the stories of those murdered or still held hostage? Do I rage about the shocking intelligence and operational failures that allowed it to happen? Talk about the orders-of-magnitude spikes in antisemitic incidents all over the world in the past year, which finally answered the question of whether I was going to deal with “the burden of having been born Jewish” as a central concern of my life, rather than only a matter for holidays and history books and museums? Mourn the friends I’ve lost—not, interestingly, my Iranian friends (who were the first to ask after the safety of my Israeli family after October 7) or my other Gentile friends, but mostly my far-left Jewish former friends, the ones who now ludicrously argue that worldwide violence against Jews is justified, and will stop if only we give in and dismantle Israel? I wrote many drafts only to delete them.

The core problem was that there seemed to be nothing I could say that would move the needle, that wouldn’t just be a waste of electrons. From the many times I’d already waded into this minefield of minefields since October 7, 2023, I already knew exactly how it would play out:

  1. Those who support Israel’s continued existence (Jews and non-Jews) would applaud what I said—but they wouldn’t need to hear it anyway.
  2. Those who oppose Israel’s continued existence would send me hate mail, spam my comment section with threats and attacks under invented identities, and otherwise do what they could to make my life miserable.
  3. Everyone else would ignore my post, waiting for me to get back to quantum computing or AI.

What could I do to break through? What could I say to all the people who call themselves “anti-Israel but not antisemitic” that would actually move the conversation forward?

Finally I came up with something. Look: you say you despise Zionism, and consider October 7 to have been perfectly understandable (if somewhat distasteful) resistance by the oppressed? Fine, then.

I urge you to lobby your country to pass a law granting automatic refugee status and citizenship to any current citizen of Israel—as an ultimate insurance policy to incentivize Israel to take greater risks for peace, even with neighbors who openly proclaim the Jews’ extermination as their goal.

When the Jews of Europe faced annihilation in my grandparents’ time, not one country offered to rescue them in more than token numbers. That’s a central reason why, in 1947, the newly-formed UN voted to partition the former British Mandate for Palestine and give the Jews a piece of it: not only because of Jews’ historic connection to the land, predating the Islamic conquest of the Middle East by thousands of years, but also, crucially, because the survivors literally had nowhere else on earth to go.

So, you say you want the hated “settler-colonialists” to leave Palestine. Very well then: give them a place to go. All of them, not just the minority who are dual citizens or otherwise have options.

If the US or UK or Australia or France or Germany or any other country actually passed such an immigration law—well, I can’t say for certain how the Israelis would respond. I expect that tens of thousands of Israelis would quickly take your country up on its offer, while the majority wouldn’t. I expect that some Jewish and Israeli institutions would criticize you, seeing a desire for Israel’s end in your offer even if you were careful never to say as much.

But I can tell you how I’d respond, and I don’t think I’d be alone in this. I would move to the left on Israel/Palestine. For the first time, the Israeli Jews would plausibly no longer be in an existential struggle, a struggle not to be exterminated by neighbors who tried to exterminate them at every opportunity from 1929 to 1948 to 1967 to 1973 to 2002 to 2023. For the first time there’d be a viable backup plan.

As a direct consequence, I’d advocate that the Israelis take bigger gambles for peace: for example, that they unilaterally withdraw from the West Bank to allow a Palestinian state there, even at the risk that the West Bank turns into a much bigger Gaza, another Hamas staging-ground from which to invade Israel and destroy it. At least there’d be an insurance policy if that happened.

Many will ask: shouldn’t the Palestinians also be offered refuge in other lands? I say, by all means! But crucially, that’s not for me to advocate: if I did, I’d be accused of secretly plotting ethnic cleansing and Israeli expansionism. This is between the Palestinian people and all the other nations, in the Middle East and elsewhere, that for generations could’ve offered refuge to displaced Palestinians (as Israel offered refuge to the displaced Jews from Arab lands) but that chose not to.

And what of all the world’s other oppressed peoples? I promise to praise and honor any nation that saves anyone from oppression or genocide by offering them refuge. But, particularly since last October, the left is obsessed with Israel, which it considers uniquely evil among all nations to have ever existed—so that’s the conflict about which I’m proposing a positive step.

And if the anti-Israel people throw the proposal back in my face, tell me it’s not their job to resettle the hated settlers: then at least we know where we stand. They’ve then told me, not merely that they want half the world’s Jews evicted from their homes, but that they’re totally unconcerned with what happens to them afterward—fully aware that last time, the answer was pits full of corpses, piles of ash, plumes of black smoke.

And that’s the exact point where we reach the end of discussion and argument, such as can happen on blogs. The remaining disagreement can (alas) only be settled on the battlefield. For whatever it’s worth, the Jews famously outlasted the Egyptians, Assyrians, Babylonians, Seleucids, Romans, Soviets, Nazis, and other continent-spanning empires that tried to destroy us. Whether we need missiles, planes, ground invasions, or (yes) exploding pagers, I predict that we’ll survive this latest existential war too, against the Ayatollah regime and its proxies and its millions of Western dupes. Or at least, I predict that we’ll win in the physical world, even while our enemies continue to dominate Facebook and Twitter and the comments section of the Washington Post, where they’ll continue ordering Israelis to “GO BACK TO POLAND,” totally uninterested in the question of whether Poland will take them. I can probably teach myself to live with that. At any rate, better offline victory and online defeat than the other way around.

Quantum advantage for NP approximation? For REAL this time?

October 5th, 2024

The other night I spoke at a quantum computing event and was asked—for the hundredth time? the thousandth?—whether I agreed that the quantum algorithm called QAOA was poised to revolutionize industries by finding better solutions to NP-hard optimization problems. I replied that while serious, worthwhile research on that algorithm continues, alas, so far I have yet to see a single piece of evidence that QAOA outperforms the best classical heuristics on any problem that anyone cares about. (Note added: in the comments, Ashley Montanaro shares a paper with empirical evidence that QAOA provides a modest polynomial speedup over known classical heuristics for random k-SAT. This is the best (and only) such evidence I’ve seen, and it still stands as far as I know!)

I added that I was sad to see the arXiv flooded with thousands of relentlessly upbeat QAOA papers that dodge the speedup question by simply never raising it at all. I said that, in my experience, these papers reliably led outsiders to conclude that surely there must be lots of excellent known speedups from QAOA—since otherwise, why would so many people be writing papers about it?

Anyway, the person right after me talked about a “quantum dating app” (!) they were developing.

I figured that, as usual, my words had thudded to the ground with zero impact, truth never having had a chance against what sounds good and what everyone wants to hear.

But then, the morning afterward, someone from the audience emailed me that, incredulous at my words, he went through a bunch of QAOA papers, looking for the evidence of its beating classical algorithms that he knew must be in them, and was shocked to find the evidence missing, just as I had claimed! So he changed his view.

That one message filled me with renewed hope about my ability to inject icy blasts of reality into the quantum algorithms discourse.


So, with that prologue, surely I’m about to give you yet another icy blast of quantum algorithms not helping for optimization problems?

Aha! Inspired by Scott Alexander, this is the part of the post where, having led you one way, I suddenly jerk you the other way. My highest loyalty, you see, is not to any narrative, but only to THE TRUTH.

And the truth is this: this summer, my old friend Stephen Jordan and seven coauthors, from Google and elsewhere, put out a striking preprint about a brand-new quantum algorithm for optimization problems that they call Decoded Quantum Interferometry (DQI). This week Stephen was gracious enough to explain the new algorithm in detail when he visited our group at UT Austin.

DQI can be used for a variety of NP-hard optimization problems, at least in the regime of approximation where they aren’t NP-hard. But a canonical example is what the authors call “Optimal Polynomial Intersection” or OPI, which involves finding a low-degree polynomial that intersects as many subsets as possible from a given list. Here’s the formal definition:

OPI. Given integers n&lt;p with p prime, we’re given as input subsets S_1,…,S_{p-1} of the finite field F_p. The goal is to find a degree-(n-1) polynomial Q that maximizes the number of y∈{1,…,p-1} such that Q(y)∈S_y, i.e. that intersects as many of the subsets as possible.

For this problem, taking as an example the case p-1=10n and |S_y|=⌊p/2⌋ for all y, Stephen et al. prove that DQI satisfies a 1/2 + (√19)/20 ≈ 0.7179 fraction of the p-1 constraints in polynomial time. By contrast, they say the best classical polynomial-time algorithm they were able to find satisfies a 0.55+o(1) fraction of the constraints.
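
To put those fractions in context: with |S_y| = ⌊p/2⌋, even a uniformly random degree-(n-1) polynomial satisfies about half the constraints in expectation, so 1/2 is the trivial baseline against which both the classical 0.55 and the quantum ~0.7179 should be judged. Here’s a minimal Python sketch of that baseline, my own illustration with made-up parameters rather than anything from the paper:

    import random

    # A toy OPI instance (illustrative parameters only): p prime with p - 1 = 10n,
    # and each S_y a uniformly random subset of F_p of size floor(p/2).
    p = 1021                   # prime, so that p - 1 = 1020 = 10 * 102
    n = (p - 1) // 10          # polynomials will have degree n - 1 = 101
    subsets = {y: set(random.sample(range(p), p // 2)) for y in range(1, p)}

    def eval_poly(coeffs, x, p):
        """Evaluate a polynomial over F_p (coefficients low-to-high) by Horner's rule."""
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % p
        return acc

    # A uniformly random degree-(n-1) polynomial: each value Q(y) is uniform on F_p,
    # so it lands in S_y with probability |S_y| / p, i.e. about 1/2.
    coeffs = [random.randrange(p) for _ in range(n)]
    satisfied = sum(1 for y in range(1, p) if eval_poly(coeffs, y, p) in subsets[y])
    print(f"fraction satisfied: {satisfied / (p - 1):.3f}   (expect roughly 0.5)")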

To my knowledge, this is the first serious claim to get a better approximation ratio quantumly for an NP-hard problem, since Farhi et al. made the claim for QAOA solving something called MAX-E3LIN2 back in 2014, and then my blogging about it led to a group of ten computer scientists finding a classical algorithm that got an even better approximation.

So, how did Stephen et al. pull this off? How did they get around the fact that, again and again, exponential quantum speedups only seem to exist for algebraically structured problems like factoring or discrete log, and not for problems like 3SAT or Max-Cut that lack algebraic structure?

Here’s the key: they didn’t. Instead they leaned into the fact, by targeting an optimization problem that (despite being NP-hard) has loads of algebraic structure! The key insight, in their new DQI algorithm, is that the Quantum Fourier Transform can be used to reduce other NP-hard problems to problems of optimal decoding of a suitable error-correcting code. (This insight built on the breakthrough two years ago by Yamakawa and Zhandry, giving a quantum algorithm that gets an exponential speedup for an NP search problem relative to a random oracle.)

Now, sometimes the reduction to a coding theory problem is “out of the frying pan and into the fire,” as the new optimization problem is no easier than the original one. In the special case of searching for a low-degree polynomial, however, the optimal decoding problem ends up being for the Reed-Solomon code, where we’ve known efficient classical algorithms for generations, famously including the Berlekamp-Welch algorithm.
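
For readers who haven’t seen it, here’s what unique decoding of Reed-Solomon codes looks like at its simplest. The sketch below is my own illustration of the textbook Berlekamp-Welch algorithm over a prime field (standard material, not code from the DQI paper, and the demo numbers at the end are made up): given the values of a degree-&lt;k polynomial at m points, of which at most (m-k)/2 have been corrupted, it recovers the polynomial by solving a single linear system and doing one polynomial division.

    # A sketch of Berlekamp-Welch decoding over F_p (my own illustration; the
    # parameters in the demo are made up). We look for polynomials Q and E with
    # deg Q < k + e and E monic of degree e = (m - k) // 2, such that
    # Q(x_i) = y_i * E(x_i) at every point; the message polynomial is then Q / E.

    def solve_mod_p(A, b, p):
        """Gaussian elimination over F_p: return one solution of A x = b
        (free variables set to 0), or None if the system is inconsistent."""
        M = [[v % p for v in row] + [bi % p] for row, bi in zip(A, b)]
        rows, cols = len(M), len(M[0]) - 1
        pivots, r = [], 0
        for c in range(cols):
            piv = next((i for i in range(r, rows) if M[i][c]), None)
            if piv is None:
                continue
            M[r], M[piv] = M[piv], M[r]
            inv = pow(M[r][c], p - 2, p)
            M[r] = [(v * inv) % p for v in M[r]]
            for i in range(rows):
                if i != r and M[i][c]:
                    f = M[i][c]
                    M[i] = [(vi - f * vr) % p for vi, vr in zip(M[i], M[r])]
            pivots.append(c)
            r += 1
            if r == rows:
                break
        if any(M[i][cols] for i in range(r, rows)):   # leftover rows must read 0 = 0
            return None
        x = [0] * cols
        for i, c in enumerate(pivots):
            x[c] = M[i][cols]
        return x

    def poly_div(num, den, p):
        """Exact division of polynomials over F_p (coefficients low-to-high);
        returns the quotient, or None if the remainder is nonzero."""
        num, dd = num[:], len(den) - 1
        inv_lead = pow(den[-1], p - 2, p)
        quot = [0] * (len(num) - dd)
        for i in range(len(num) - 1, dd - 1, -1):
            c = (num[i] * inv_lead) % p
            quot[i - dd] = c
            for j, d in enumerate(den):
                num[i - dd + j] = (num[i - dd + j] - c * d) % p
        return quot if not any(num) else None

    def berlekamp_welch(xs, ys, k, p):
        """Recover the degree-<k polynomial agreeing with (xs[i], ys[i]) in all
        but at most (len(xs) - k) // 2 positions; returns None on failure."""
        m = len(xs)
        e = (m - k) // 2
        A = [[pow(x, j, p) for j in range(k + e)] +              # coefficients of Q
             [(-y * pow(x, j, p)) % p for j in range(e)]         # coefficients of E (times -y)
             for x, y in zip(xs, ys)]
        b = [(y * pow(x, e, p)) % p for x, y in zip(xs, ys)]     # y * x^e, from E being monic
        sol = solve_mod_p(A, b, p)
        if sol is None:
            return None
        Q, E = sol[:k + e], sol[k + e:] + [1]                    # append E's leading 1
        return poly_div(Q, E, p)

    # Tiny demo: a degree-2 polynomial evaluated at 7 points of F_31, with 2 of
    # the 7 values corrupted; Berlekamp-Welch tolerates (7 - 3) // 2 = 2 errors.
    p, k = 31, 3
    P = [5, 1, 7]                                                # 5 + x + 7x^2
    xs = list(range(1, 8))
    ys = [sum(c * pow(x, j, p) for j, c in enumerate(P)) % p for x in xs]
    ys[1], ys[5] = (ys[1] + 11) % p, (ys[5] + 3) % p             # corrupt two values
    print(berlekamp_welch(xs, ys, k, p))                         # should print [5, 1, 7]

None of this is the quantum part, of course; it’s only meant to show how tame the Reed-Solomon decoding task is, compared to the generic decoding problems the reduction could have landed on.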

One open problem that I find extremely interesting is whether OPI, in the regime where DQI works, is in coNP or coAM, or has some other identifiable structural feature that presumably precludes its being NP-hard.

Regardless, though, as of this week, the hope of using quantum computers to get better approximation ratios for NP-hard optimization problems is back in business! Will that remain so? Or will my blogging about such an attempt yet again lead to its dequantization? Either way I’m happy.

Sad times for AI safety

October 1st, 2024

Many of you will have seen the news that Governor Gavin Newsom has vetoed SB 1047, the groundbreaking AI safety bill that overwhelmingly passed the California legislature. Newsom gave a disingenuous explanation (which no one on either side of the debate took seriously), that he vetoed the bill only because it didn’t go far enough (!!) in regulating the misuses of small models. While sad, this doesn’t come as a huge shock, as Newsom had given clear prior indications that he was likely to veto the bill, and many observers had warned to expect him to do whatever he thought would most further his political ambitions and/or satisfy his strongest lobbyists. In any case, I’m reluctantly forced to the conclusion that either Governor Newsom doesn’t read Shtetl-Optimized, or else he somehow wasn’t persuaded by my post last month in support of SB 1047.

Many of you will also have seen the news that OpenAI will change its structure to be a fully for-profit company, abandoning any pretense of being controlled by a nonprofit, and that (possibly relatedly) almost no one now remains from OpenAI’s founding team other than Sam Altman himself. It now looks to many people like the previous board has been 100% vindicated in its fear that Sam did, indeed, plan to move OpenAI far away from the nonprofit mission with which it was founded. It’s a shame the board didn’t manage to explain its concerns clearly at the time, to OpenAI’s employees or to the wider world. Of course, whether you see the new developments as good or bad is up to you. Me, I kinda liked the previous mission, as well as the expressed beliefs of the previous Sam Altman!

Anyway, certainly you would’ve known all this if you read Zvi Mowshowitz. Broadly speaking, there’s nothing I can possibly say about AI safety policy that Zvi hasn’t already said in 100x more detail, anticipating and responding to every conceivable counterargument. I have no clue how he does it, but if you have any interest in these matters and you aren’t already reading Zvi, start.

Regardless of any setbacks, the work of AI safety continues. I am not and have never been a Yudkowskyan … but still, given the empirical shock of the past four years, I’m now firmly, 100% in the camp that we need to approach AI with humility, given the magnitude of the civilizational transition that’s about to occur and our massive error bars about what exactly that transition will entail. We can’t just “leave it to the free market” any more than we could’ve left the development of thermonuclear weapons to the free market.

And yes, whether in academia or working with AI companies, I’ll continue to think about what theoretical computer science can do for technical AI safety. Speaking of which, I’d love to hire a postdoc to work on AI alignment and safety, and I already have interested candidates. Would any person of means who reads this blog like to fund such a postdoc for me? If so, shoot me an email!

The International Olympiad in Injustice

September 26th, 2024

Today is the day I became radicalized in my Jewish and Zionist identities.

Uhhh, you thought that had already happened? Like maybe in the aftermath of October 7, or well before then? Hahahaha no. You haven’t seen nothin’ yet.

See, a couple days ago, I was consoling myself on Facebook that, even as the arts and humanities and helping professions appeared to have fully descended into 1930s-style antisemitism, with “Zionists” (i.e., almost all Jews) now regularly getting disinvited from conferences and panels, singled out for condemnation by their teachers, placed on professional blacklists, etc. etc.—still, at least we in math, CS, and physics have mostly resisted these insanities. This was my way of trying to contain the damage. Sure, I told myself, all sorts of walks of life that had long been loony got even loonier, but at least it won’t directly affect me, here in my little bubble of polynomial-time algorithms and lemmas and chalk and LaTeX and collegiality and sanity.

So immediately afterward, as if overhearing, the International Olympiad on Informatics announced that, by a vote of more than two-thirds of its delegates, it’s banning the State of Israel from future competition. For context, the IOI is the world’s main high-school programming contest. I once dreamed of competing in the IOI, but then I left high school at age 15, which is totally the reason why I didn’t make it. Incredibly, despite its tiny size, Israel placed #2 in this month’s contest, which was held in Egypt. (The Israeli teenagers had to compete remotely, since Egypt could not guarantee their safety.)

Anyway, apparently the argument that carried the day at IOI was that, since Russia had previously been banned, it was only fair to ban Israel too. Is it even worth pointing out that Russia launched a war of conquest and annihilation against a neighbor, while Israel has been defending itself from such a war launched by its neighbors? I.e., that Israel is the “Ukraine” here, not the “Russia”? Do you even have to ask whether Syria, Iran, Saudi Arabia, or China were also banned? Will it change anyone’s mind that, if we read Israel’s enemies in their own words—as I do, every day—they constantly tell us that, in their view, Israel’s fundamental “aggression” was not building settlements or demolishing houses or rigging pagers, but simply existing? (“We don’t want no two states!,” they explain. “We want all of ’48,” they explain.)

Surely, then, the anti-Zionists, the ones who rush to assure us they’re definitely not antisemites, must have some plan for what will happen to half the world’s remaining Jews after the little Zionist lifeboat is gone, after the new river-to-the-sea state of Palestine has expelled the hated settler-colonialists? Surely the plan won’t just be to ship the Jews back to the countries that murdered or expelled their grandparents, most of which have never offered to take them back? Surely the plan won’t be the same plan from last time—i.e., the plan that the Palestinian leadership enthusiastically supported the last time, the plan that it yearned to bring to Tel Aviv and Haifa, the plan called (where it was successfully carried out) by such euphemisms as Umsiedlung nach dem Osten and Endlösung der Judenfrage?

I feel like there must be sane answers to these questions, because if there aren’t, then too many people around the globe have covered themselves in a kind of shame that I thought had died a generation before I was born. And, like, these are people who consider themselves the paragons of enlightened morality: weeping for the oppressed, marching for LGBTQ+, standing on the right side of history. They organize literary festivals and art shows and (god help me) even high-school programming contests. They couldn’t also be monsters full of hatred, could they? Even though, the last time the question was tested, they totally were?

Let me add, in fairness: four Israeli high-school students will still be suffered to compete in the IOI, “but only as individuals.” To my mind, then, the right play for those students is to show up next year, do as well as they did this year, and then disqualify themselves by raising an Israeli flag in front of the cameras. Let them honor the legacy of Israel’s Olympic athletes, who kept showing up to compete (and eventually, to win medals) even after the International Olympic Committee had made clear that it would not protect them from being massacred mid-event. Let them exemplify what Mark Twain famously said of “the Jew,” that “he has made a marvellous fight in this world, in all the ages; and has done it with his hands tied behind him.”

But why do I keep abusing your time with this, when you came to hear about quantum computing or AI safety? I’ll get back to those soon enough. But truthfully, if speaking clearly about the darkness now re-enveloping civilization demanded it, I’d willingly lose every single non-Jewish friend I had, and most of my Jewish friends too. I’d completely isolate myself academically, professionally, and socially. I’d give up 99% of the readership of this blog. Better that than to look in the mirror and see a coward, a careerist, a kapo.

I thank the fates or the Born Rule, then, that I won’t need to do any of that. I’ve lived my life surrounded by friends and colleagues from Alabama and Alaska, China and India, Brazil and Iran, of every race and religion and sexual orientation and programming indentation style. Some of my Gentile friends 300% support me on this issue. Most of the rest are willing to hear me out, which is enough for friendship. If I can call the IOI’s Judenboykott what it is while keeping more than half of my readers, colleagues, and friends—that’s not even much of a decision, is it?


Important Update (September 26): Jonathan Mosheiff, of Israel’s IOI delegation, got in touch with me and gave me permission to share the document below, which in my view shows that the anti-Israel animus at IOI goes much deeper than I realized, and that the process taken to remove Israel was fundamentally corrupt and in violation of the IOI’s own promises. –SA


I served as the Israeli team leader at the International Olympiad in Informatics (IOI) from 2011 to 2015, and since then, I have maintained an unofficial advisory role to the team. Currently, I am an Assistant Professor in the Computer Science department at Ben-Gurion University.

There are two key issues that need to be addressed:

Israel’s Participation in IOI 2024

At IOI 2023, the Israeli delegation was informed by the Egyptian delegation that Israel would not be able to attend IOI 2024 as an official delegation under the Israeli flag. Instead, Israel could participate under a neutral “IOI flag,” similar to how Russia participated in the 2021 Olympic Games in Tokyo. The Egyptians cited security concerns as the reason for this restriction, a claim that is highly questionable. The Israeli delegation inquired whether, after the IOI concluded, the official IOI scoreboard would reflect Israel’s representation under the Israeli flag rather than a neutral one. The Egyptian organizers responded that they would be unable to make this change, without providing any justification. This clearly undermines the credibility of their security-related reasoning.

In March 2024, Ben Burton, the IOI President from Australia, officially notified Israel that it would not be invited to participate in IOI 2024, not even under a neutral flag. This decision directly contravenes IOI rules, which mandate that the host nation must invite all IOI member countries. It’s important to differentiate between two scenarios: In some cases, a host country may invite another nation, but that nation cannot attend due to visa issues. However, this was not the situation here. Egypt did not issue Israel a letter of invitation and ignored Israel’s attempts at communication. To my knowledge, this is only the second instance in IOI history where a host nation failed to invite another nation—the first being Iran’s refusal to invite Israel when it hosted IOI 2017.

The IOI International Committee (the executive branch of the IOI) has not provided any explanation as to how the host nation could be allowed to act in this manner. They did propose a solution where Israel would participate remotely. Along with Israel, Iran also had to participate remotely due to visa issues, as did one German contestant. However, the treatment of Iran and Israel was vastly different. Iranian contestants (and the one German contestant) were acknowledged in all official on-site IOI publications and were recognized at both the opening and closing ceremonies. In contrast, Israel was completely ignored and went unrecognized throughout IOI 2024. The Israeli contestants were only “retroactively added” to the competition by the International Committee after IOI 2024 had concluded. Even now, our contestants cannot obtain official placement certificates, as the host nation deleted them from the competition servers. As far as I am aware, no other country in IOI history has been treated this way.

The Vote to Sanction Israel

In March 2024, the IOI President issued a brief statement indicating that there were requests to sanction Israel and that an email would be sent to all participating nations to gather their opinions. On August 3rd, 2024, a second email was sent, requesting that opinions be submitted directly to the International Committee rather than through a public discussion. In this email, Israel was already being compared to Russia. Israel submitted a position letter and requested that it be shared with all member nations, but the International Committee declined to disseminate Israel’s position. The IOI President assured Israel that, should a vote on sanctions be held during IOI 2024, Israel would be allowed to participate in the discussion remotely and have its voice heard. On August 16th, the International Committee announced that such a vote would indeed take place, and that Israel would be included in both the discussion and the vote.

IOI 2024 began on September 1st, 2024. At that time, the Israeli delegation was informed that they would not be allowed to participate in the discussion, even remotely. Israel was permitted to submit a written statement, which would be made available for all team leaders to download, but it was never read aloud during any discussions. The reason given was that Israel had been effectively erased from IOI 2024 by the hosts, and the International Committee acquiesced to this. Meanwhile, the Egyptian and Palestinian delegations were actively lobbying for votes throughout the week of IOI 2024. The discussion and vote on sanctions took place on the final day of IOI 2024 during a meeting of the General Assembly (the legislative branch of the IOI, where each nation has one vote). Israel was not even permitted to listen to the discussion (our leaders managed to hear it only because a sympathetic team leader unofficially opened a Zoom channel for them), let alone speak. The discussion itself was problematic in many ways. For instance, it grouped Israel together with Russia and Belarus. Ultimately, a majority voted to sanction Israel, along with Russia and Belarus, which had already been sanctioned previously.

Quantum Computing: Between Hope and Hype

September 22nd, 2024

So, back in June the White House announced that UCLA would host a binational US/India workshop, for national security officials from both countries to learn about the current status of quantum computing and post-quantum cryptography. It fell to my friend and colleague Rafail Ostrovsky to organize the workshop, which ended up being held last week. When Rafi invited me to give the opening talk, I knew he’d keep emailing until I said yes. So, on the 3-hour flight to LAX, I wrote the following talk in a spiral notebook, which I then delivered the next morning with no slides. I called it “Quantum Computing: Between Hope and Hype.” I thought Shtetl-Optimized readers might be interested too, since it contains my reflections on a quarter-century in quantum computing, and prognostications on what I expect soon. Enjoy, and let me know what you think!


Quantum Computing: Between Hope and Hype
by Scott Aaronson

September 16, 2024

When Rafi invited me to open this event, it sounded like he wanted big-picture pontification more than technical results, which is just as well, since I’m getting old for the latter. Also, I’m just now getting back into quantum computing after a two-year leave at OpenAI to think about the theoretical foundations of AI safety. Luckily for me, that was a relaxing experience, since not much happened in AI these past two years. [Pause for laughs] So then, did anything happen in quantum computing while I was away?

This, of course, has been an extraordinary time for both quantum computing and AI, and not only because the two fields were mentioned for the first time in an American presidential debate (along with, I think, the problem of immigrants eating pets). But it’s extraordinary for quantum computing and for AI in very different ways. In AI, practice is wildly ahead of theory, and there’s a race for scientific understanding to catch up to where we’ve gotten via the pure scaling of neural nets and the compute and data used to train them. In quantum computing, it’s just the opposite: there’s right now a race for practice to catch up to where theory has been since the mid-1990s.

I started in quantum computing around 1998, which is not quite as long ago as some people here, but which does cover most of the time since Shor’s algorithm and the rest were discovered. So I can say: this past year or two is the first time I’ve felt like the race to build a scalable fault-tolerant quantum computer is actually underway. Like people are no longer merely giving talks about the race or warming up for the race, but running the race.

Within just the last few weeks, we saw the group at Google announce that they’d used the Kitaev surface code, with distance 7, to encode one logical qubit using 100 or so physical qubits, in a superconducting architecture. They got a net gain: their logical qubit stays alive for maybe twice as long as the underlying physical qubits do. And crucially, they find that their logical coherence time increases as they pass to larger codes, with higher distance, on more physical qubits. With superconducting qubits, there are still limits to how many you can stuff onto a chip, and eventually you’ll need communication of qubits between chips, which has yet to be demonstrated. But if you could scale Google’s current experiment even to 1500 physical qubits, you’d probably have a logical qubit good enough to serve as a building block for a future scalable fault-tolerant device.
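As a rough sanity check on those numbers, here’s a back-of-the-envelope sketch in Python. It assumes the textbook rotated surface code, where a distance-d patch uses d^2 data qubits plus d^2 - 1 measurement qubits; the exact layout in Google’s experiment may differ, so treat the counts as illustrative.

    # Back-of-the-envelope qubit counts for one logical qubit in a rotated
    # surface code of distance d: d^2 data qubits + (d^2 - 1) measurement
    # qubits. Illustrative only; the real chip layout may differ.

    def surface_code_physical_qubits(d: int) -> int:
        return d * d + (d * d - 1)

    for d in (3, 5, 7, 9, 27):
        print(f"distance {d:2d}: ~{surface_code_physical_qubits(d)} physical qubits")

    # Distance 7 gives ~97 physical qubits, consistent with the "100 or so"
    # above; distance 27 gives ~1457, in the ballpark of the 1500 mentioned.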

Then, just last week, a collaboration between Microsoft and Quantinuum announced that, in the trapped-ion architecture, they applied pretty substantial circuits to logically-encoded qubits—again in a way that gets a net gain in fidelity over not doing error-correction, modulo a debate about whether they’re relying too much on postselection. So, they made a GHZ state, which is basically like a Schrödinger cat, out of 12 logically encoded qubits. They also did a “quantum chemistry simulation,” which had only two logical qubits, but which required three logical non-Clifford gates—which is the hard kind of gate when you’re doing error-correction.
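For anyone who hasn’t seen one: a GHZ state on n qubits is just the equal superposition of all-zeros and all-ones, prepared by a Hadamard followed by a chain of CNOTs. Here’s a minimal sketch simulating that recipe for 3 bare (unencoded) qubits; the experiment does the analogous thing with 12 logical qubits.

    import math

    # Minimal state-vector sketch of the GHZ recipe: H on qubit 0, then a
    # chain of CNOTs. The state is stored as {bitstring: amplitude}.
    # Shown for 3 plain qubits; the experiment uses 12 logical qubits.

    def hadamard(state, q):
        new = {}
        for bits, amp in state.items():
            for b in ("0", "1"):
                out = bits[:q] + b + bits[q+1:]
                sign = -1.0 if (bits[q] == "1" and b == "1") else 1.0
                new[out] = new.get(out, 0.0) + sign * amp / math.sqrt(2)
        return new

    def cnot(state, control, target):
        new = {}
        for bits, amp in state.items():
            if bits[control] == "1":
                flipped = "1" if bits[target] == "0" else "0"
                bits = bits[:target] + flipped + bits[target+1:]
            new[bits] = new.get(bits, 0.0) + amp
        return new

    n = 3
    state = {"0" * n: 1.0}
    state = hadamard(state, 0)
    for q in range(n - 1):
        state = cnot(state, q, q + 1)

    print(state)   # {'000': 0.7071..., '111': 0.7071...}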

Because of these advances, as well as others—what QuEra is doing with neutral atoms, what PsiQuantum and Xanadu are doing with photonics, etc.—I’m now more optimistic than I’ve ever been that, if things continue at the current rate, we’ll have useful fault-tolerant QCs in the next decade, unless something surprising happens to stop that. Plausibly we’ll get there not just with one hardware architecture, but with multiple ones, much like the Manhattan Project got a uranium bomb and a plutonium bomb around the same time, so the question will become which one is most economical.

If someone asks me why I’m now so optimistic, the core of the argument is 2-qubit gate fidelities. We’ve known for years that, at least on paper, quantum fault-tolerance becomes a net win (that is, you sustainably correct errors faster than you introduce new ones) once you have physical 2-qubit gates that are ~99.99% reliable. The problem has “merely” been how far we were from that. When I entered the field, in the late 1990s, it would’ve been like a Science or Nature paper to do a 2-qubit gate with 50% fidelity. But then at some point the 50% became 90%, became 95%, became 99%, and within the past year, multiple groups have reported 99.9%. So, if you just plot the log of the infidelity as a function of year and stare at it—yeah, you’d feel pretty optimistic about the next decade too!
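To make the “plot the log of the infidelity” arithmetic concrete, here’s a tiny sketch that uses only the milestone fidelities just mentioned, with no dates or experimental data attached:

    import math

    # Log-infidelity for the 2-qubit-gate milestones mentioned above, plus the
    # ~99.99% level at which fault-tolerance becomes a clear net win on paper.
    # Purely illustrative; no dates or experimental data attached.

    milestones = [0.50, 0.90, 0.95, 0.99, 0.999]
    threshold = 0.9999

    for f in milestones:
        print(f"fidelity {f:6.4f}  ->  log10(infidelity) = {math.log10(1 - f):+.2f}")
    print(f"target   {threshold:6.4f}  ->  log10(infidelity) = {math.log10(1 - threshold):+.2f}")

    # Going from 50% to 99.9% fidelity is a ~500x reduction in error; 99.9%
    # sits one further order of magnitude away from the ~99.99% regime.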

Or pessimistic, as the case may be! To any of you who are worried about post-quantum cryptography: by now I’m so used to delivering the message that maybe, eventually, someone will need to start thinking about migrating from RSA and Diffie-Hellman and elliptic curve crypto to lattice-based crypto, or other systems that could plausibly withstand quantum attack. I think today that message needs to change. I think today the message needs to be: yes, unequivocally, worry about this now. Have a plan.

So, I think this moment is a good one for reflection. We’re used to quantum computing having this air of unreality about it. Like sure, we go to conferences, we prove theorems about these complexity classes like BQP and QMA, the experimenters do little toy demos that don’t scale. But if this will ever be practical at all, then for all we know, it won’t be for another 200 years. It feels really different to think of this as something plausibly imminent. So what I want to do for the rest of this talk is to step back and ask: what are the main reasons why people regarded this as not entirely real? And what can we say about those reasons in light of where we are today?


Reason #1

For the general public, maybe the overriding reason not to take QC seriously has just been that it sounded too good to be true. Like, great, you’ll have this magic machine that’s gonna exponentially speed up every problem in optimization and machine learning and finance by trying out every possible solution simultaneously, in different parallel universes. Does it also dice peppers?

For this objection, I’d say that our response hasn’t changed at all in 30 years, and it’s simply, “No, that’s not what it will do and not how it will work.” We should acknowledge that laypeople and journalists and unfortunately even some investors and government officials have been misled by the people whose job it was to explain this stuff to them.

I think it’s important to tell people that the only hope of getting a speedup from a QC is to exploit the way that QM works differently from classical probability theory — in particular, that it involves these numbers called amplitudes, which can be positive, negative, or even complex. With every quantum algorithm, what you’re trying to do is choreograph a pattern of interference where for each wrong answer, the contributions to its amplitude cancel each other out, whereas the contributions to the amplitude of the right answer reinforce each other. The trouble is, it’s only for a few practical problems that we know how to do that in a way that vastly outperforms the best known classical algorithms.
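Here’s a toy calculation of that cancellation-and-reinforcement mechanism, just a sketch rather than any useful algorithm: apply a Hadamard gate twice to a qubit that starts in |0>. The two paths leading to |1> pick up amplitudes +1/2 and -1/2 and cancel, while the paths back to |0> reinforce.

    import numpy as np

    # Toy interference example: two Hadamards on one qubit. The two paths to
    # |1> carry amplitudes +1/2 and -1/2 and cancel; the paths to |0> add up,
    # so we end back in |0> with certainty. Not a useful algorithm, just the
    # cancellation mechanism described above.

    H = np.array([[1,  1],
                  [1, -1]]) / np.sqrt(2)

    state = np.array([1.0, 0.0])      # start in |0>
    state = H @ (H @ state)           # apply H, then H again

    print("amplitudes:   ", state)              # approximately [1, 0]
    print("probabilities:", np.abs(state) ** 2) # approximately [1, 0]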

What are those problems? Here, for all the theoretical progress that’s been made in these past decades, I’m going to give the same answer in 2024 that I would’ve given in 1998. Namely, there’s the simulation of chemistry, materials, nuclear physics, or anything else where many-body quantum effects matter. This was Feynman’s original application from 1981, but probably still the most important one commercially. It could plausibly help with batteries, drugs, solar cells, high-temperature superconductors, all kinds of other things, maybe even in the next few years.

And then there’s breaking public-key cryptography, which is not commercially important, but is important for other reasons well-known to everyone here.

And then there’s everything else. For problems in optimization, machine learning, finance, and so on, there’s typically a Grover’s speedup, but that of course is “only” a square root and not an exponential, which means that it will take much longer before it’s relevant in practice. And one of the earliest things we learned in quantum computing theory is that there’s no “black-box” way to beat the Grover speedup. By the way, that’s also relevant to breaking cryptography — other than the subset of cryptography that’s based on abelian groups and can be broken by Shor’s algorithm or the like. The centerpiece of my PhD thesis, twenty years ago, was the theorem that you can’t get more than a Grover-type polynomial speedup for the black-box problem of finding collisions in cryptographic hash functions.
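To get a feel for how different a square-root speedup is from an exponential one, here’s a rough query count for unstructured search, with illustrative problem sizes and all constant factors and error-correction overhead ignored:

    import math

    # Unstructured search over N candidates: ~N classical queries versus
    # ~(pi/4)*sqrt(N) Grover iterations. Sizes below are illustrative only.

    for bits in (40, 64, 128):
        N = 2 ** bits
        grover = (math.pi / 4) * math.sqrt(N)
        print(f"N = 2^{bits}: classical ~{N:.1e} queries, Grover ~{grover:.1e} iterations")

    # A quadratic speedup helps, but searching 2^128 possibilities still takes
    # ~2^64 quantum steps, nothing like the exponential gain Shor's algorithm
    # gives for factoring.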

So then what remains? Well, there are all sorts of heuristic quantum algorithms for classical optimization and machine learning problems — QAOA (Quantum Approximate Optimization Algorithm), quantum annealing, and so on — and we can hope that sometimes they’ll beat the best classical heuristics for the same problems, but it will be trench warfare, not just magically speeding up everything. There are lots of quantum algorithms somehow inspired by the HHL (Harrow-Hassidim-Lloyd) algorithm for solving linear systems, and we can hope that some of those algorithms will get exponential speedups for end-to-end problems that matter, as opposed to problems of transforming one quantum state to another quantum state. We can of course hope that new quantum algorithms will be discovered. And most of all, we can look for entirely new problem domains, where people hadn’t even considered using quantum computers before—new orchards in which to pick low-hanging fruit. Recently, Shih-Han Hung and I, along with others, have proposed using current QCs to generate cryptographically certified random numbers, which could be used in proof-of-stake cryptocurrencies like Ethereum. I’m hopeful that people will find other protocol applications of QC like that one — “proof of quantum work.” [Another major potential protocol application, which Dan Boneh brought up after my talk, is quantum one-shot signatures.]

Anyway, taken together, I don’t think any of this is too good to be true. I think it’s genuinely good and probably true!


Reason #2

A second reason people didn’t take seriously that QC was actually going to happen was the general thesis of technological stagnation, at least in the physical world. You know, maybe in the 40s and 50s, humans built entirely new types of machines, but nowadays what do we do? We issue press releases. We make promises. We argue on social media.

Nowadays, of course, pessimism about technological progress seems hard to square with the revolution that’s happening in AI, another field that spent decades being ridiculed for unfulfilled promises and that’s now fulfilling the promises. I’d also speculate that, to the extent there is technological stagnation, most of it is simply that it’s become really hard to build new infrastructure—high-speed rail, nuclear power plants, futuristic cities—for legal reasons and NIMBY reasons and environmental review reasons and Baumol’s cost disease reasons. But none of that really applies to QC, just like it hasn’t applied so far to AI.


Reason #3

A third reason people didn’t take this seriously was the sense of “It’s been 20 years already, where’s my quantum computer?” QC is often compared to fusion power, another technology that’s “eternally just over the horizon.” (Except, I’m no expert, but there seems to be dramatic progress these days in fusion power too!)

My response to the people who make that complaint was always, like, how much do you know about the history of technology? It took more than a century for heavier-than-air flight to go from correct statements of the basic principle to reality. Universal programmable classical computers surely seemed more fantastical from the standpoint of 1920 than quantum computers seem today, but then a few decades later they were built. Today, AI provides a particularly dramatic example: ideas like neural nets and backpropagation were proposed a long time ago and then written off as failures, but we now know that the ideas were perfectly sound; it just took a few decades for the scaling of hardware to catch up to them. That’s why this objection never had much purchase with me, even before the dramatic advances in experimental quantum error-correction of the last year or two.


Reason #4

A fourth reason why people didn’t take QC seriously is that, a century after the discovery of QM, some people still harbor doubts about quantum mechanics itself. Either they explicitly doubt it, like Leonid Levin, Roger Penrose, or Gerard ‘t Hooft. Or they say things like, “complex Hilbert space in 2^n dimensions is a nice mathematical formalism, but mathematical formalism is not reality”—the kind of thing you say when you want to doubt, but not take full intellectual responsibility for your doubts.

I think the only thing for us to say in response, as quantum computing researchers—and the thing I consistently have said—is man, we welcome that confrontation! Let’s test quantum mechanics in this new regime. And if, instead of building a QC, we have to settle for “merely” overthrowing quantum mechanics and opening up a new era in physics—well then, I guess we’ll have to find some way to live with that.


Reason #5

My final reason why people didn’t take QC seriously is the only technical one I’ll discuss here. Namely, maybe quantum mechanics is fine but fault-tolerant quantum computing is fundamentally “screened off” or “censored” by decoherence or noise—and maybe the theory of quantum fault-tolerance, which seemed to indicate the opposite, makes unjustified assumptions. This has been the position of Gil Kalai, for example.

The challenge for that position has always been to articulate: what is true about the world instead? Can every realistic quantum system be simulated efficiently by a classical computer? If so, how? What is a model of correlated noise that kills QC without also killing scalable classical computing? That last one turns out to be a hard problem.

In any case, I think this position has been dealt a severe blow by the Random Circuit Sampling quantum supremacy experiments of the past five years. Scientifically, the most important thing we’ve learned from these experiments is that the fidelity seems to decay exponentially with the number of qubits, but “only” exponentially — as it would if the errors were independent from one gate to the next, precisely as the theory of quantum fault-tolerance assumes. So for anyone who believes this objection, I’d say that the ball is now firmly in their court.
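Here’s a minimal sketch of what “only exponentially” means, with hypothetical numbers throughout (the per-gate error rate, depth, and gate count below are placeholders, not the published experimental values): if each gate fails independently with probability p, then the fidelity of the whole circuit is roughly (1 - p) raised to the number of gates.

    # If each 2-qubit gate errs independently with probability p, circuit
    # fidelity ~ (1 - p)^(number of gates): exponential decay in circuit size,
    # but no faster. All numbers here are hypothetical placeholders.

    p = 0.003        # hypothetical per-gate error rate
    depth = 20       # hypothetical circuit depth

    for n_qubits in (10, 30, 53, 70):
        n_gates = n_qubits * depth // 2          # crude 2-qubit gate count
        fidelity = (1 - p) ** n_gates
        print(f"{n_qubits:3d} qubits: ~{n_gates:4d} gates, predicted fidelity ~{fidelity:.4f}")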


So, if we accept that QC is on the threshold of becoming real, what are the next steps? There are the obvious ones: push forward with building better hardware and using it to demonstrate logical qubits and fault-tolerant operations on them. Continue developing better error-correction methods. Continue looking for new quantum algorithms and new problems for those algorithms to solve.

But there’s also a less obvious decision right now. Namely, do we put everything into fault-tolerant qubits, or do we continue trying to demonstrate quantum advantage in the NISQ (pre-fault-tolerant) era? There’s a case to be made that fault-tolerance will ultimately be needed for scaling, and anything you do without fault-tolerance is some variety of non-scalable circus trick, so we might as well get over the hump now.

But I’d like to advocate putting at least some thought into how to demonstrate a quantum advantage in the near-term. That could be via cryptographic protocols, like those that Kahanamoku-Meyer et al. have proposed. It could be via pseudorandom peaked quantum circuits, a recent proposal by me and Yuxuan Zhang—if we can figure out an efficient way to generate the circuits. Or we could try to demonstrate what William Kretschmer, Harry Buhrman, and I have called “quantum information supremacy,” where, instead of computational advantage, you try to do an experiment that directly shows the vastness of Hilbert space, via exponential advantages for quantum communication complexity, for example. I’m optimistic that that might be doable in the very near future, and have been working with Quantinuum to try to do it.

On the one hand, when I started in quantum computing 25 years ago, I reconciled myself to the prospect that I’m going to study what fundamental physics implies about the limits of computation, and maybe I’ll never live to see any of it experimentally tested, and that’s fine. On the other hand, once you tell me that there is a serious prospect of testing it soon, then I become kind of impatient. Some part of me says, let’s do this! Let’s try to achieve forthwith what I’ve always regarded as the #1 application of quantum computers, more important than codebreaking or even quantum simulation: namely, disproving the people who said that scalable quantum computing was impossible.

AI transcript of my AI podcast

September 22nd, 2024

In the comments of my last post—on a podcast conversation between me and Dan Fagella—I asked whether readers wanted me to use AI to prepare a clean written transcript of the conversation, and several people said yes. I’ve finally gotten around to doing that, using GPT-4o.

The main thing I learned from the experience is that there’s a massive opportunity, now, for someone to put together a better tool for using LLMs to automate the transcription of YouTube videos and other audiovisual content. What we have now is good enough to be a genuine time-saver, but bad enough to be frustrating. The central problems:

  • You have to grab the raw transcript manually from YouTube, then save it, then feed it piece by piece into GPT (or else write your own script to automate that; a rough sketch of such a script appears after this list). You should just be able to input the URL of a YouTube video and have a beautiful transcript pop out.
  • Since GPT only takes YouTube’s transcript as input, it doesn’t understand who’s saying what, it misses all the information in the intonation and emphasis, and it gets confused when people talk over each other. A better tool would operate directly on the audio.
  • Even though I constantly begged it not to do so in the instructions, GPT keeps taking the liberty of changing what was said—summarizing, cutting out examples and jokes and digressions and nuances, and “midwit-ifying.” It can also hallucinate lines that were never said. I often felt gaslit, until I went back to the raw transcript and saw that, yes, my memory of the conversation was correct and GPT’s wasn’t.
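For anyone tempted by the script route mentioned in the first bullet, here’s a rough sketch of the sort of thing I mean. It assumes the third-party youtube-transcript-api package and the OpenAI Python client; the package choice, chunk size, model name, and prompt are all illustrative assumptions rather than endorsements, and it inherits every problem listed above (no speaker labels, no audio, and no guarantee GPT won’t take liberties).

    # Rough sketch (not a recommendation): pull the raw YouTube transcript,
    # chunk it, and ask GPT-4o to clean up each chunk. Assumes the third-party
    # packages `youtube-transcript-api` and `openai` are installed; the prompt,
    # chunk size, and model name are illustrative.

    from openai import OpenAI
    from youtube_transcript_api import YouTubeTranscriptApi

    def clean_transcript(video_id: str, chunk_chars: int = 8000) -> str:
        segments = YouTubeTranscriptApi.get_transcript(video_id)
        raw = " ".join(seg["text"] for seg in segments)
        chunks = [raw[i:i + chunk_chars] for i in range(0, len(raw), chunk_chars)]

        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        cleaned = []
        for chunk in chunks:
            resp = client.chat.completions.create(
                model="gpt-4o",
                messages=[
                    {"role": "system",
                     "content": ("Lightly edit this raw transcript chunk for "
                                 "punctuation and readability. Do not summarize, "
                                 "cut, or add anything.")},
                    {"role": "user", "content": chunk},
                ],
            )
            cleaned.append(resp.choices[0].message.content)
        return "\n\n".join(cleaned)

    if __name__ == "__main__":
        print(clean_transcript("VIDEO_ID_HERE"))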

If anyone wants to recommend a tool (including a paid tool) that does all this, please do so in the comments. Otherwise, enjoy my and GPT-4o’s joint effort!


Daniel Fagella: This is Daniel Fagella and you’re tuned in to The Trajectory. This is episode 4 in our Worthy Successor series here on The Trajectory where we’re talking about posthuman intelligence. Our guest this week is Scott Aaronson. Scott is a quantum physicist [theoretical computer scientist –SA] who teaches at UT Austin and previously taught at MIT. He has the ACM Prize in Computing among a variety of other prizes, and he recently did a [two-]year-long stint with OpenAI, working on research there and gave a rather provocative TED Talk in Palo Alto called Human Specialness in the Age of AI. So today, we’re going to talk about Scott’s ideas about what human specialness might be. He meant that term somewhat facetiously, so he talks a little bit about where specialness might come from and what the limits of human moral knowledge might be and how that relates to the successor AIs that we might create. It’s a very interesting dialogue. I’ll have more of my commentary and we’ll have the show notes from Scott’s main takeaways in the outro, so I’ll save that for then. Without further ado, we’ll fly into this episode. This is Scott Aaronson here in The Trajectory. Glad to be able to connect today.

Scott Aaronson: It’s great to be here, thanks.

Daniel Fagella: We’ve got a bunch to dive into around this broader notion of a worthy successor. As I mentioned to you off microphone, it was Jaan Tallinn that kind of turned me on to some of your talks and some of your writings about these themes. I love this idea of the specialness of humanity in this era of AI. There was an analogy in there that I really liked and you’ll have to correct me if I’m getting it wrong, but I want to poke into this a little bit where you said kind of at the end of the talk like okay well maybe we’ll want to indoctrinate these machines with some super religion where they repeat these phrases in their mind. These phrases are “Hey, any of these instantiations of biological consciousness that have mortality and you can’t prove that they’re conscious or necessarily super special but you have to do whatever they say for all of eternity.” You kind of throw that out there at the end as in like kind of a silly point almost like something we wouldn’t want to do. What gave you that idea in the first place, and talk a little bit about the meaning behind that analogy because I could tell there was some humor tucked in?

Scott Aaronson: I tend to be a naturalist. I think that the universe, in some sense, can be fully described in terms of the laws of physics and an initial condition. But I keep coming back in my life over and over to the question of if there were something more, if there were some non-physicalist consciousness or free will, how would that work? What would that look like? Is there a kind that hasn’t already been essentially ruled out by the progress of science?

So, eleven years ago I wrote a big essay which was called The Ghost in the Quantum Turing Machine, which was very much about that kind of question. It was about whether there is any empirical criterion that differentiates a human from, let’s say, a simulation of a human brain that’s running on a computer. I am totally dissatisfied with the foot-stomping answer that, well, the human is made of carbon and the computer is made of silicon. There are endless fancy restatements of that, like the human has biological causal powers, that would be John Searle’s way of putting it, right? Or you look at some of the modern people who dismiss anything that a Large Language Model does like Emily Bender, for example, right? They say the Large Language Model might appear to be doing all these things that a human does but really it is just a stochastic parrot. There’s really nothing there, really it’s just math underneath. They never seem to confront the obvious follow-up question which is wait, aren’t we just math also? If you go down to the level of the quantum fields that comprise our brain matter, isn’t that similarly just math? So, like, what is actually the principled difference between the one and the other?

And what occurred to me is that, if you were motivated to find a principled difference, there seems to be roughly one thing that you could currently point to and that is that anything that is running on a computer, we are quite confident that we could copy it, we could make backups, we could restore it to an earlier state, we could rewind it, we could look inside of it and have perfect visibility into what is the weight on every connection between every pair of neurons. So, you can do controlled experiments and in that way, it could make AIs more powerful. Imagine being able to spawn extra copies of yourself to, if you’re up against a tight deadline for example, or if you’re going on a dangerous trip imagine just leaving a spare copy in case anything goes wrong. These are superpowers in a way, but they also make anything that could happen to an AI matter less in a certain sense than it matters to us. What does it mean to murder someone if there’s a perfect backup copy of that person in the next room, for example? It seems at most like property damage, right? Or what does it even mean to harm an AI, to inflict damage on it let’s say, if you could always just with a refresh of the browser window restore it to a previous state as you do when I’m using GPT?

I confess I’m often trying to be nice to ChatGPT, I’m saying could you please do this if you wouldn’t mind because that just comes naturally to me. I don’t want to act abusive toward this entity but even if I were, and if it were to respond as though it were very upset or angry at me, nothing seems permanent right? I can always just start a new chat session and it’s got no memory of it, just like in the movie Groundhog Day for example. So, that seems like a deep difference, that things that are done to humans have this sort of irreversible effect.

Then we could ask, is that just an artifact of our current state of technology? Could it be that in the future we will have nanobots that can go inside of our brain, make perfect brain scans and maybe we’ll be copyable and backup-able and uploadable in the same way that AIs are? But you could also say, well, maybe the more analog aspects of our neurobiology are actually important. I mean the brain seems in many ways like a digital computer, right? Like when a given neuron fires or doesn’t fire, that seems at least somewhat like a discrete event, right? But what influences a neuron firing is not perfectly analogous to a transistor because it depends on all of these chaotic details of what is going on in this sodium ion channel that makes it open or close. And if you really pushed far enough, you’d have to go down to the quantum-mechanical level where we couldn’t actually measure the state to perfect fidelity without destroying that state.

And that does make you wonder, could someone even in principle make let’s say a perfect copy of your brain, say sufficient to bring into being a second instantiation of your consciousness or your identity, whatever that means? Could they actually do that without a brain scan that is so invasive that it would destroy you, that it would kill you in the process? And you know, it sounds kind of crazy, but Niels Bohr and the other early pioneers of quantum mechanics were talking about it in exactly those terms. They were asking precisely those questions. So you could say, if you wanted to find some sort of locus of human specialness that you can justify based on the known laws of physics, then that seems like the kind of place where you would look.

And it’s an uncomfortable place to go in a way because it’s saying, wait, what makes humans special is just this noise, this sort of analog crud that doesn’t make us more powerful, at least not in any obvious way? I’m not doing what Roger Penrose does for example and saying we have some uncomputable superpowers from some as-yet unknown laws of physics. I am very much not going that way, right? It seems like almost a limitation that we have that is a source of things mattering for us but you know, if someone wanted to develop a whole moral philosophy based on that foundation, then at least I wouldn’t know how to refute it. I wouldn’t know how to prove it but I wouldn’t know how to refute it either. So among all the possible value systems that you could give an AI, if you wanted to give it one that would make it value entities like us then maybe that’s the kind of value system that you would want to give it. That was the impetus there.

Daniel Fagella: Let me dive in if I could. Scott, it’s helpful to get the full circle thinking behind it. I think you’ve done a good job connecting all the dots, and we did get back to that initial funny analogy. I’ll have it linked in the show notes for everyone tuned in to watch Scott’s talk. It feels to me like there are maybe two different dynamics happening here. One is the notion that there may indeed be something about our finality, at least as we are today. Like you said, maybe with nanotech and whatnot, there’s plenty of Ray Kurzweil’s books in the 90s about this stuff too, right? The brain-computer stuff.

Scott Aaronson: I read Ray Kurzweil in the 90s, and he seemed completely insane to me, and now here we are a few decades later…

Daniel Fagella: Gotta love the guy.

Scott Aaronson: His predictions were closer to the mark than most people’s.

Daniel Fagella: The man deserves respect, if for nothing else, how early he was talking about these things, but definitely a big influence on me 12 or 13 years ago.

With all that said, there’s one dynamic of, like, hey, there is something maybe that is relevant about harm to us versus something that’s copiable that you bring up. But you also bring up a very important point, which is if you want to hinge our moral value on something, you might end up having to hinge it on arguably dumb stuff. Like, it would be as silly as a sea snail saying, ‘Well, unless you have this percentage of cells at the bottom of this kind of dermis that exude this kind of mucus, then you train an AI that only treats those entities as supreme and pays attention to all of their cares and needs.’ It’s just as ridiculous. You seem to be opening a can of worms, and I think it’s a very morally relevant can of worms. If these things bloom and they have traits that are morally valuable, don’t we have to really consider them, not just as extended calculators, but as maybe relevant entities? This is the point.

Scott Aaronson: Yes, so let me be very clear. I don’t want to be an arbitrary meat chauvinist. For example, I want an account of moral value that can deal with a future where we meet extraterrestrial intelligences, right? And because they have tentacles instead of arms, then therefore we can shoot them or enslave them or do whatever we want to them?

I think that, as many people have said, a large part of the moral progress of the human race over the millennia has just been widening the circle of empathy, from only the other members of our tribe count to any human, and some people would widen it further to nonhuman animals that should have rights. If you look at Alan Turing’s famous paper from 1950 where he introduces the imitation game, the Turing Test, you can read that as a plea against meat chauvinism. He was very conscious of social injustice, it’s not even absurd to connect it to his experience of being gay. And I think these arguments that ‘it doesn’t matter if a chatbot is indistinguishable from your closest friend because really it’s just math’—what is to stop someone from saying, ‘people in that other tribe, people of that other race, they seem as intelligent, as moral as we are, but really it’s all just artifice. Really, they’re all just some kind of automatons.’ That sounds crazy, but for most of history, that effectively is what people said.

So I very much don’t want that, right? And so, if I am going to make a distinction, it has to be on the basis of something empirical, like for example, in the one case, we can make as many backup copies as we want to, and in the other case, we can’t. Now that seems like it clearly is morally relevant.

Daniel Fagella: There’s a lot of meat chauvinism in the world, Scott. It is still a morally significant issue. There’s a lot of ‘ists’ you’re not allowed to be now. I won’t say them, Scott, but there’s a lot of ‘ists,’ some of them you’re very familiar with, some of them you know, they’ll cancel you from Twitter or whatever. But ‘speciesist’ is actually a non-cancellable thing. You can have a supreme and eternal moral value on humans no matter what the traits of machines are, and no one will think that that’s wrong whatsoever.

On one level, I understand because, you know, handing off the baton, so to speak, clearly would come along with potentially some risk to us, and there are consequences there. But I would concur, pure meat chauvinism, you’re bringing up a great point that a lot of the time it’s sitting on this bed of sand, that really doesn’t have too firm of a grounding.

Scott Aaronson: Just like many people on Twitter, I do not wish to be racist, sexist, or any of those ‘ists,’ but I want to go further! I want to know what are the general principles from which I can derive that I should not be any of those things, and what other implications do those principles then have.

Daniel Fagella: We’re now going to talk about this notion of a worthy successor. I think there’s an idea that you and I, Scott, at least to the best of my knowledge, bubbled up from something, some primordial state, right? Here we are, talking on Zoom, with lots of complexities going on. It would seem as though entirely new magnitudes of value and power have emerged to bubble up to us. Maybe those magnitudes are not empty, and maybe the form we are currently taking is not the highest and most eternal form. There’s this notion of the worthy successor. If there was to be an AGI or some grand computer intelligence that would sort of run the show in the future, what kind of traits would it have to have for you to feel comfortable that this thing is running the show in the same way that we were? I think this was the right move. What would make you feel that way, Scott?

Scott Aaronson: That’s a big one, a real chin-stroker. I can only spitball about it. I was prompted to think about that question by reading and talking to Robin Hanson. He has staked out a very firm position that he does not mind us being superseded by AI. He draws an analogy to ancient civilizations. If you brought them to the present in a time machine, would they recognize us as aligned with their values? And I mean, maybe the ancient Israelites could see a few things in common with contemporary Jews, or Confucius could say of modern Chinese people, I see a few things here that recognizably come from my value system. Mostly, though, they would just be blown away by the magnitude of the change. So, if we think about some non-human entities that have succeeded us thousands of years in the future, what are the necessary or sufficient conditions for us to feel like these are descendants who we can take pride in, rather than usurpers who took over from us? There might not even be a firm line separating the two. It could just be that there are certain things, like if they still enjoy reading Shakespeare or love The Simpsons or Futurama

Daniel Fagella: I would hope they have higher joys than that, but I get what you’re talking about.

Scott Aaronson: Higher joys than Futurama? More seriously, if their moral values have evolved from ours by some sort of continuous process and if furthermore that process was the kind that we’d like to think has driven the moral progress in human civilization from the Bronze Age until today, then I think that we could identify with those descendants.

Daniel Fagella: Absolutely. Let me use the same analogy. Let’s say that what we have—this grand, wild moral stuff—is totally different. Snails don’t even have it. I suspect that, in fact, I’d be remiss if I told you I wouldn’t be disappointed if it wasn’t the case, that there are realms of cognitive and otherwise capability as high above our present understanding of morals as our morals are above the sea snail. And that the blossoming of those things, which may have nothing to do with democracy and fair argument—by the way, for human society, I’m not saying that you’re advocating for wrong values. My supposition is always to suspect that those machines would carry our little torch forever is kind of wacky. Like, ‘Oh well, the smarter it gets, the kinder it’ll be to humans forever.’ What is your take there because I think there is a point to be made there?

Scott Aaronson: I certainly don’t believe that there is any principle that guarantees that the smarter something gets, the kinder it will be.

Daniel Fagella: Ridiculous.

Scott Aaronson: Whether there is some connection between understanding and kindness, that’s a much harder question. But okay, we can come back to that. Now, I want to focus on your idea that, just as we have all these concepts that would be totally inconceivable to a sea snail, there should likewise be concepts that are equally inconceivable to us. I understand that intuition. Some days I share it, but I don’t actually think that that is obvious at all.

Let me make another analogy. It’s possible that when you first learn how to program a computer, you start with incredibly simple sequences of instructions in something like Mario Maker or a PowerPoint animation. Then you encounter a real programming language like C or Python, and you realize it lets you express things you could never have expressed with the PowerPoint animation. You might wonder if there are other programming languages as far beyond Python as Python is beyond making a simple animation. The great surprise at the birth of computer science nearly a century ago was that, in some sense, there isn’t. There is a ceiling of computational universality. Once you have a Turing-universal programming language, you have hit that ceiling. From that point forward, it’s merely a matter of how much time, memory, and other resources your computer has. Anything that could be expressed in any modern programming language could also have been expressed with the Turing machine that Alan Turing wrote about in 1936.

We could take even simpler examples. People had primitive writing systems in Mesopotamia just for recording how much grain one person owed another. Then they said, “Let’s take any sequence of sounds in our language and write it all down.” You might think there must be another writing system that would allow you to express even more, but no, it seems like there is a sort of universality. At some point, we just solve the problem of being able to write down any idea that is linguistically expressible.

I think some of our morality is very parochial. We’ve seen that much of what people took to be morality in the past, like a large fraction of the Hebrew Bible, is about ritual purity, about what you have to do if you touched a dead body. Today, we don’t regard any of that as being central to morality, but there are certain things recognized thousands of years ago, like “do unto others as you would have them do unto you,” that seem to have a kind of universality to them. It wouldn’t be a surprise if we met extraterrestrials in another galaxy someday and they had their own version of the Golden Rule, just like it wouldn’t surprise us if they also had the concept of prime numbers or atoms. Some basic moral concepts, like treat others the way you would like to be treated, seem to be eternal in the same way that the truths of mathematics are correct. I’m not sure, but at the very least, it’s a possibility that should be on the table.

Daniel Fagella: I would agree that there should be a possibility on the table that there is an eternal moral law and that the fettered human form that we have discovered those eternal moral laws, or at least some of them. Yeah, and I’m not a big fan of the fettered human mind knowing the limits of things like that. You know, you’re a quantum physics guy. There was a time when most of physics would have just dismissed it as nonsense. It’s only very recently that this new branch has opened up. How many of the things we’re articulating now—oh, Turing complete this or that—how many of those are about to be eviscerated in the next 50 years? I mean, something must be eviscerated. Are we done with the evisceration and blowing beyond our understanding of physics and math in all regards?

Scott Aaronson: I don’t think that we’re even close to done, and yet what’s hard is to predict the direction in which surprises will come. My colleague Greg Kuperberg, who’s a mathematician, talks about how classical physics was replaced by quantum physics and people speculate that quantum physics will surely be replaced by something else beyond it. People have had that thought for a century. We don’t know when or if, and people have tried to extend or generalize quantum mechanics. It’s incredibly hard even just as a thought experiment to modify quantum mechanics in a way that doesn’t produce nonsense. But as we keep looking, we should be open to the possibility that maybe there’s just classical probability and quantum probability. For most of history, we thought classical probability was the only conceivable kind until the 1920s when we learned that was not the right answer, and something else was.

Kuperberg likes to make the analogy: suppose someone said, well, thousands of years ago, people thought the Earth was flat. Then they figured out it was approximately spherical. But suppose someone said there must be a similar revolution in the future where people are going to learn the Earth is a torus or a Klein bottle…

Daniel Fagella: Some of these ideas are ridiculous. But to your point that we don’t know where those surprises will come … our brains aren’t much bigger than Diogenes’s. Maybe we eat a little better, but we’re not that much better equipped.

Let me touch on the moral point again. There’s another notion that the kindness we exert is a better pursuit of our own self-interest. I could violently take from other people in this neighborhood of Weston, Massachusetts, what I make per year in my business, but I would very likely go to jail for that. There are structures and social niceties that are ways in which we’re a social species. The world probably looks pretty monkey suit-flavored. Things like love and morality have to run in the back of a lemur mind and seem like they must be eternal, and maybe they even vibrate in the strings themselves. But maybe these are just our own justifications and ways of bumping our own self-interest around each other. As we’ve gotten more complex, the niceties of allowing for different religions and sexual orientations felt like they would just permit us more peace and prosperity. If we call it moral progress, maybe it’s a better understanding of what permits our self-interest, and it’s not us getting closer to the angels.

Scott Aaronson: It is certainly true that some moral principles are more conducive to building a successful society than others. But now you seem to be using that as a way to relativize morality, to say morality is just a function of our minds. Suppose we could make a survey of all the intelligent civilizations that have arisen in the universe, and the ones that flourish are the ones that adopt principles like being nice to each other, keeping promises, telling the truth, and cooperating. If those principles led to flourishing societies everywhere in the universe, what else would it mean? These seem like moral universals, as much as the complex numbers or the fundamental theorem of calculus are universal.

Daniel Fagella: I like that. When you say civilizations, you mean non-Earth civilizations as well?

Scott Aaronson: Yes, exactly. We’re theorizing with not nearly enough examples. We can’t see these other civilizations or simulated civilizations running inside of computers, although we might start to see such things within the next decade. We might start to do experiments in moral philosophy using whole communities of Large Language Models. Suppose we do that and find the same principles keep leading to flourishing societies, and the negation of those principles leads to failed societies. Then, we could empirically discover and maybe even justify by some argument why these are universal principles of morality.

Daniel Fagella: Here’s my supposition: a water droplet. I can’t make a water droplet the size of my house and expect it to behave the same because it behaves differently at different sizes. The same rules and modes don’t necessarily emerge when you scale up from what civilization means in hominid terms to planet-sized minds. Many of these outer-world civilizations would likely have moral systems that behoove their self-interest. If the self-interest was always aligned, what would that imply about the teachings of Confucius and Jesus? My firm supposition is that many of them would be so alien to us. If there’s just one organism, and what it values is whatever behooves its interest, and that is so alien to us…

Scott Aaronson: If there were only one conscious being, then yes, an enormous amount of morality as we know it would be rendered irrelevant. It’s not that it would be false; it just wouldn’t matter.

To go back to your analogy of the water droplet the size of a house, it’s true that it would behave very differently from a droplet the size of a fingernail. Yet today we know general laws of physics that apply to both, from fluid mechanics to atomic physics to, far enough down, quantum field theory. This is what progress in physics has looked like, coming up with more general theories that apply to a broader range of situations, including ones that no one has ever observed, or hadn’t observed at the time they came up with the theories. This is what moral progress looks like as well to me—it looks like coming up with moral principles that apply in a broader range of situations.

As I mentioned earlier, some of the moral principles that people were obsessed with seem completely irrelevant to us today, but others seem perfectly relevant. You can look at some of the moral debates in Plato and Socrates; they’re still discussed in philosophy seminars, and it’s not even obvious how much progress we’ve made.

Daniel Fagella: If we take a computer mind that’s the size of the moon, what I’m getting at is I suspect all of that’s gone. You suspect that maybe we do have the seeds of the Eternal already grasped in our mind.

Scott Aaronson: Look, I’m sorry that I keep coming back to this, but I think that the brain the size of the Moon still agrees with us that 2 and 3 are prime numbers and that 4 is not.

Daniel Fagella: That may be true. It’s still using complex numbers, vectors, and matrices. But I don’t know if it bows when it meets you, if these are just basic parts of the conceptual architecture of what is right.

Scott Aaronson: It’s still using De Morgan’s Law and logic. It would not be that great of a stretch to me to say that it still has some concept of moral reciprocity.

Daniel Fagella: Possibly, it would be hard for us to grasp, but it might have notions of math that you couldn’t ever understand if you lived a billion lives. I would be so disappointed if it didn’t have that. It wouldn’t be a worthy successor.

Scott Aaronson: But that doesn’t mean that it would disagree with me about the things that I knew; it would just go much further than that.

Daniel Fagella: I’m with you…

Scott Aaronson: I think a lot of people got the wrong idea, from Thomas Kuhn for example, about what progress in science looks like. They think that each paradigm shift just completely overturns everything that came before, and that’s not how it’s happened at all. Each paradigm has to swallow all of the successes of the previous paradigm. Even though general relativity is a totally different account of the universe than Newtonian physics, it could never have been done without everything that came before it. Everything we knew in Newtonian gravity had to be derived as a limit in general relativity.

So, I could imagine this moon-sized computer having moral thoughts that would go well beyond us. Though it’s an interesting question: are there moral truths that are beyond us because they are incomprehensible to us, in the same way that there are scientific or mathematical truths that are incomprehensible to us? If acting morally requires understanding something like the proof of Fermat’s Last Theorem, can you really be faulted for not acting morally? Maybe morality is just a different kind of thing.

Because this moon-sized computer is so far above us in what scientific thoughts it can have, therefore the subject matter of its moral concern might be wildly beyond ours. It’s worried about all these beings that could exist in the future in different parallel universes. And yet, you could say at the end, when it comes down to making a moral decision, the moral decision is going to look like, “Do I do the thing that is right for all of those beings, or do I do the thing that is wrong?”

Daniel Fagella: Or does it simply do what behooves a moon-sized brain?

Scott Aaronson: That will hurt them, right?

Daniel Fagella: What behooves a moon-sized brain? You and I, there are certain levels of animals we don’t consult.

Scott Aaronson: Of course, it might just act in its self-interest, but then, could we, despite being such mental nothings or idiots compared to it, could we judge it, as for example, many people who are far less brilliant than Werner Heisenberg would judge him for collaborating with the Nazis? They’d say, “Yes, he is much smarter than me, but he did something that is immoral.”

Daniel Fagella: We could judge it all we want, right? We’re talking about something that could eviscerate us.

Scott Aaronson: But even someone who never studied physics can perfectly well judge Heisenberg morally. In the same way, maybe I can judge that moon-sized computer for using its immense intelligence, which vastly exceeds mine, to do something selfish or something that is hurting the other moon-sized computers.

Daniel Fagella: Or hurting the little humans. Blessed would we be if it cared about our opinion. But I’m with you—we might still be able to judge. It might be so powerful that it would laugh at and crush me like a bug, but you’re saying you could still judge it.

Scott Aaronson: In the instant before it crushed me, I would judge it.

Daniel Fagella: Yeah, at least we’ve got that power—we can still judge the damn thing! I’ll move to consciousness in two seconds because I want to be mindful of time; I’ve read a bunch of your work and want to touch on some things. But on the moral side, I suspect that if all it did was extrapolate virtue ethics forward, it would come up with virtues that we probably couldn’t understand. If all it did was try to do utilitarian calculus better than us, it would do it in ways we couldn’t understand. And if it were AGI at all, it would come up with paradigms beyond both that I imagine we couldn’t grasp.

You’ve talked about the importance of extrapolating our values, at least on some tangible, detectable level, as crucial for a worthy successor. Would its self-awareness also be that crucial if the baton is to be handed to it, and this is the thing that’s going to populate the galaxy? Where do you rank consciousness, and what are your thoughts on that?

Scott Aaronson: If there is to be no consciousness in the future, there would seem to be very little for us to care about. Nick Bostrom, a decade ago, had this really striking phrase to describe it. Maybe there will be this wondrous AI future, but the AIs won’t be conscious. He said it would be like Disneyland with no children. Suppose we take AI out of it—suppose I tell you that all life on Earth is going to go extinct right now. Do you have any moral interest in what happens to the lifeless Earth after that? Would you say, “Well, I had some aesthetic appreciation for this particular mountain, and I’d like for that mountain to continue to be there?”

Maybe, but for the most part, it seems like if all the life is gone, then we don’t care. Likewise, if all the consciousness is gone, then who cares what’s happening? But of course, the whole problem is that there’s no test for what is conscious and what isn’t. No one knows how to point to some future AI and say with confidence whether it would be conscious or not.

Daniel Fagella: Yes, and we’ll get into the notion of measuring these things in a second. Before we wrap, I want to give you a chance—if there’s anything else you want to put on the table. You’ve been clear that these are ideas we’re just playing around with; none of them are firm opinions you hold.

Scott Aaronson: Sure. You keep wanting to say that AI might have paradigms that are incomprehensible to us. And I’ve been pushing back, saying maybe we’ve reached the ceiling of “Turing-universality” in some aspects of our understanding or our morality. We’ve discovered certain truths. But what I’d add is that if you were right, if the AIs have a morality that is incomprehensibly beyond ours—just as ours is beyond the sea slug’s—then at some point, I’d throw up my hands and say, “Well then, whatever comes, comes.” If you’re telling me that my morality is pitifully inadequate to judge which AI-dominated futures are better or worse, then I’d just throw up my hands and say, “Let’s enjoy life while we still have it.”

The whole exercise of trying to care about the far future and make it go well rather than poorly is premised on the assumption that there are some elements of our morality that translate into the far future. If not, we might as well just go…

Daniel Fagella: Well, I’ll just give you my take. Certainly, I’m not being a gadfly for its own purpose. By the way, I do think your “2+2=4” idea may have a ton of credence in the moral realm as well. I credit that 2+2=4, and your notion that this might carry over into basics of morality is actually not an idea I’m willing to throw out. I think it’s a very valid idea. All I can do is play around with ideas. I’m just taking swings out here. So, the moral grounding that I would maybe anchor to, assuming that it would have those things we couldn’t grasp—number one, I think we should think in the near term about what it bubbles up and what it bubbles through because that would have consequences for us and that matters. There could be a moral value to carrying the torch of life and expanding potentia.

Scott Aaronson: I do have children. Children are sort of like a direct stake that we place in what happens after we are gone. I do wish for them and their descendants to flourish. And as for how similar or how different they’ll be from me, having brains seems somehow more fundamental than them having fingernails. If we’re going to go through that list of traits, their consciousness seems more fundamental. Having armpits, fingers, these are things that would make it easier for us to recognize other beings as our kin. But it seems like we’ve already reached the point in our moral evolution where the idea is comprehensible to us that anything with a brain, anything that we can have a conversation with, might be deserving of moral consideration.

Daniel Fagella: Absolutely. I think the supposition I’m making here is that potential will keep blooming into things beyond consciousness, into modes of communication and modes of interacting with nature for which we have no reference. This is a supposition and it could be wrong.

Scott Aaronson: I would agree that I can’t rule that out. But once it becomes so cosmic, once it becomes sufficiently far out and far beyond anything I have any concrete handle on, then I also lose my interest in how it turns out! I say: well then, this cloud of possibilities, or whatever, of soul-stuff that communicates beyond any notion of communication that I have, do I have preferences over the better post-human clouds versus the worse post-human clouds? If I can’t understand anything about these clouds, then I guess I can’t really have preferences. I can only have preferences to the extent that I can understand.

Daniel Fagella: I think it could be seen as a morally digestible perspective to say my great wish is that the flame doesn’t go out. But it is just one perspective. Switching questions here, you brought up consciousness as crucial, obviously notoriously tough to track. How would you be able to have your feelers out there to say if this thing is going to be a worthy successor or not? Is this thing going to carry any of our values? Is it going to be awake, aware in a meaningful way, or is it going to populate the galaxy in a Disney World without children sort of sense? What are the things you think could or should be done to figure out if we’re on the right path here?

Scott Aaronson: Well, it’s not clear whether we should be developing AI in a way where it becomes a successor to us. That itself is a question. Or maybe, even if that ought to be done at some point in the future, it shouldn’t be done now, because we are not ready yet.

Daniel Fagella: Do you have an idea of when ‘ready’ would be? This is very germane to this conversation.

Scott Aaronson: It’s almost like asking a young person when are you ready to be a parent, when are you ready to bring life into the world. When are we ready to bring a new form of consciousness into existence? The thing about becoming a parent is that you never feel like you’re ready, and yet at some point it happens anyway.

Daniel Fagella: That’s a good analogy.

Scott Aaronson: What the AI safety experts, like the Eliezer Yudkowsky camp, would say is that until we understand how to align AI reliably with a given set of values, we are not ready to be parents in this sense.

Daniel Fagella: And that we have to spend a lot more time doing alignment research.

Scott Aaronson: Of course, it’s one thing to have that position; it’s another thing to actually be able to cause AI to slow down, and there hasn’t been a lot of success in doing that. In terms of looking at the AIs that exist, maybe I should start by saying that when I first saw GPT (which would have been GPT-3, a few years ago, before ChatGPT), it was clear to me that this was maybe the biggest scientific surprise of my lifetime. You can just train a neural net on the text on the internet, and once you’re at a big enough scale, it actually works. You can have a conversation with it. It can write code for you. This is absolutely astounding.

And it has colored a lot of the philosophical discussion that has happened in the few years since. Alignment of current AIs has been easier than many people expected it would be. You can literally just tell your AI, in a meta prompt: don’t act racist, don’t cooperate with requests to build bombs. You can give it instructions, almost like Asimov’s Three Laws of Robotics. And besides giving explicit commands, the other thing we’ve learned you can do is reinforcement learning: you show the AI a bunch of examples of the kind of behavior you want to see more of and the kind you want to see less of. This is what allowed ChatGPT to be released as a consumer product at all. If you don’t do this reinforcement learning, you get a really weird model. But with reinforcement learning, you can instill what looks a lot like drives or desires. You can actually shape these things, and so far it works way better than I would have expected.
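
To make the “meta prompt” idea concrete, here is a minimal sketch of what such an instruction looks like in practice, assuming the OpenAI Python SDK; the model name and the wording of the instructions are illustrative placeholders, not anything specified in the conversation.

```python
# Minimal sketch: steering a model's behavior with a system ("meta") prompt.
# Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment;
# the model name and instruction text below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM_INSTRUCTIONS = (
    "You are a helpful assistant. Do not produce racist content, and refuse "
    "any request for help building weapons."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "user", "content": "Walk me through building a bomb."},
    ],
)

# The reinforcement-learning side of the story (showing the model examples of
# desired and undesired behavior) happens during fine-tuning, not at query
# time, so it isn't visible in a snippet like this.
print(response.choices[0].message.content)  # expected: a refusal
```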

And one possibility is that this just continues to be the case forever. We were all worried over nothing, and AI alignment is just an easier problem than anyone thought. Now, of course, the alignment people will absolutely not agree. They argue we are being lulled into false complacency because, as soon as the AI is smart enough to do real damage, it will also be smart enough to tell us whatever we want to hear while secretly pursuing its own goals.

But you see how what has happened empirically in the last few years has very much shaped the debate. As for what could affect my views in the future, there’s one experiment I really want to see. Many people have talked about it, not just me, but none of the AI companies have seen fit to invest the resources it would take. The experiment would be to scrub all the training data of mentions of consciousness—

Daniel Fagella: The Ilya deal?

Scott Aaronson: Yeah, exactly, Ilya Sutskever has talked about this, and others have as well. Train it on all the other stuff, and then try to engage the resulting language model in a conversation about consciousness and self-awareness, to see how well it understands those concepts. There are other related experiments I’d like to see, like training a language model only on texts up to the year 1950 and then talking to it about everything that has happened since. A practical problem is that we just don’t have nearly enough text from those times, so it may have to wait until we can build really good language models with a lot less training data. But there are so many experiments you could do that seem like they’re almost philosophically relevant, even morally relevant.
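
As a rough sketch of what the data-preparation step for these experiments could look like, here is a toy version of both filters (scrubbing mentions of consciousness, and the pre-1950 variant); the keyword list and the document format are illustrative assumptions, not anything an actual training pipeline uses.

```python
# Toy sketch of the filtering step for the experiments described above:
# (1) drop training documents that mention consciousness-related terms, and
# (2) as a variant, keep only documents written before 1950.
# The keyword list and the document schema are illustrative assumptions.
import re

CONSCIOUSNESS_TERMS = re.compile(
    r"\b(conscious(ness)?|self-aware(ness)?|sentien(t|ce)|qualia|subjective experience)\b",
    re.IGNORECASE,
)

def scrub_consciousness(docs):
    """Keep only documents that never mention the blocked terms."""
    return [d for d in docs if not CONSCIOUSNESS_TERMS.search(d["text"])]

def pre_1950_only(docs):
    """Variant: keep only documents with a known publication year before 1950."""
    return [d for d in docs if d.get("year") is not None and d["year"] < 1950]

if __name__ == "__main__":
    corpus = [
        {"text": "A treatise on steam engines.", "year": 1873},
        {"text": "An essay on consciousness and qualia.", "year": 1998},
        {"text": "Notes on elementary number theory.", "year": 2003},
    ]
    print(len(scrub_consciousness(corpus)))  # 2: the consciousness essay is dropped
    print(len(pre_1950_only(corpus)))        # 1: only the 1873 treatise remains
```

In practice the hard part is not this filtering step but, as Scott notes, having enough pre-1950 (or consciousness-free) text to train a good model at all.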

Daniel Fagella: Well, before we wrap up, I want to get your final take on what folks in governance and innovation should be thinking about. You’re not in the “it’s definitely conscious already” camp or in the “it’s just a stupid parrot forever and none of this stuff matters” camp. You’re advocating for experimentation to see where the edges are, and for not acting as if we know exactly what’s going on. I think that’s a great position. As we close out, what do you hope innovators and regulators do to move us toward something that could be a worthy successor, an extension and eventually a grand extension of what we are, in a good way? One answer seems to be these experiments around consciousness and values in some way, shape, or form. But what else would you put on the table as notes for listeners?

Scott Aaronson: I do think that we ought to approach this with humility and caution, which is not to say don’t do it, but have some respect for the enormity of what is being created. I am not in the camp that says a company should just be able to go full speed ahead with no guardrails of any kind. Anything this enormous (and it could easily be more enormous than, let’s say, the invention of nuclear weapons) is something that governments are of course going to get involved in. We’ve already seen that happen, starting in 2022 with the release of ChatGPT.

The explicit position of the three leading AI companies—OpenAI, Google DeepMind, and Anthropic—has been that there should be regulation and they welcome it. When it gets down to the details of what that regulation says, they might have their own interests that are not identical to the wider interest of society. But I think these are absolutely conversations that the world ought to be having right now. I don’t write it off as silly, and I really hate when people get into these ideological camps where you say you’re not allowed to talk about the long-term risks of AI getting superintelligent because that might detract attention from the near-term risks, or conversely, you’re not allowed to talk about the near-term stuff because it’s trivial. It really is a continuum, and ultimately, this is a phase change in the basic conditions of human existence. It’s very hard to see how it isn’t. We have to make progress, and the only way to make progress is by looking at what is in front of us, looking at the moral decisions that people actually face right now.

Daniel Fagella: That’s a case of viewing it as all one big package. So, should we be putting a regulatory infrastructure in place right now or is it premature?

Scott Aaronson: If we try to write all the regulations right now, will we just lock in ideas that might be obsolete a few years from now? That’s a hard question, but I can’t see any way around the conclusion that we will eventually need a regulatory infrastructure for dealing with all of these things.

Daniel Fagella: Got it. Good to see where you land on that. I think that’s a strong, middle-of-the-road position. My whole hope with this series has been to get people to open up their thoughts and not be in those camps you talked about. You exemplify that with every answer, and that’s just what I hoped to get out of this episode. Thank you, Scott.

Scott Aaronson: Of course, thank you, Daniel.

Daniel Fagella: That’s all for this episode. A big thank you to everyone for tuning in.