Archive for the ‘The Fate of Humanity’ Category

Understanding vs. impact: the paradox of how to spend my time

Thursday, December 11th, 2025

Not long ago William MacAskill, a co-founder of the Effective Altruism movement, visited Austin, where I got to talk with him in person for the first time. I was a fan of his book What We Owe the Future, and found him as thoughtful and eloquent face-to-face as I did on the page. Talking to Will inspired me to write the following short reflection on how I should spend my time, which I’m now sharing in case it’s of interest to anyone else.


By inclination and temperament, I simply seek the clearest possible understanding of reality.  This has led me to spend time on (for example) the Busy Beaver function and the P versus NP problem and quantum computation and the foundations of quantum mechanics and the black hole information puzzle, and on explaining whatever I’ve understood to others.  It’s why I became a professor.

But the understanding I’ve gained also tells me that I should try to do things that will have huge positive impact, in what looks like a pivotal and even terrifying time for civilization.  It tells me that seeking understanding of the universe, like I’ve been doing, is probably nowhere close to optimizing any values that I could defend.  It’s self-indulgent, a few steps above spending my life learning to solve Rubik’s Cube as quickly as possible, but only a few.  Basically, it’s the most fun way I could make a good living and have a prestigious career, so it’s what I ended up doing.  I should be skeptical that such a course would coincidentally also maximize the good I can do for humanity.

Instead I should plausibly be figuring out how to make billions of dollars, in cryptocurrency or startups or whatever, and then spending it in a way that saves human civilization, for example by making AGI go well.  Or I should be convincing whatever billionaires I know to do the same.  Or executing some other galaxy-brained plan.  Even if I were purely selfish, as I hope I’m not, still there are things other than theoretical computer science research that would bring more hedonistic pleasure.  I’ve basically just followed a path of least resistance.

On the other hand, I don’t know how to make billions of dollars.  I don’t know how to make AGI go well.  I don’t know how to influence Elon Musk or Sam Altman or Peter Thiel or Sergey Brin or Mark Zuckerberg or Marc Andreessen to do good things rather than bad things, even when I have gotten to talk to some of them.  Past attempts in this direction by extremely smart and motivated people—for example, those of Eliezer Yudkowsky and Sam Bankman-Fried—have had, err, uneven results, to put it mildly.  I don’t know why I would succeed where they failed.

Of course, if I had a better understanding of reality, I might know how better to achieve prosocial goals for humanity.  Or I might learn why they were actually the wrong goals, and replace them with better goals.  But then I’m back to the original goal of understanding reality as clearly as possible, with the corresponding danger that I spend my time learning to solve Rubik’s Cube faster.

Theory and AI Alignment

Saturday, December 6th, 2025

The following is based on a talk that I gave (remotely) at the UK AI Safety Institute Alignment Workshop on October 29, and which I then procrastinated on for more than a month before writing up. Enjoy!


Thanks for having me! I’m a theoretical computer scientist. I’ve spent most of my ~25-year career studying the capabilities and limits of quantum computers. But for the past 3 or 4 years, I’ve also been moonlighting in AI alignment. This started with a 2-year leave at OpenAI, in what used to be their Superalignment team, and it’s continued with a 3-year grant from Coefficient Giving (formerly Open Philanthropy) to build a group here at UT Austin, looking for ways to apply theoretical computer science to AI alignment. Before I go any further, let me mention some action items:

  • Our Theory and Alignment group is looking to recruit new PhD students this fall! You can apply for a PhD at UTCS here; the deadline is quite soon (December 15). If you specify that you want to work with me on theory and AI alignment (or on quantum computing, for that matter), I’ll be sure to see your application. For this, there’s no need to email me directly.
  • We’re also looking to recruit one or more postdoctoral fellows, working on anything at the intersection of theoretical computer science and AI alignment! Fellowships start in Fall 2026 and continue for two years. If you’re interested in this opportunity, please email me by January 15. Include in your email a CV, 2-3 of your papers, and a research statement and/or a few paragraphs about what you’d like to work on here. Also arrange for two recommendation letters to be emailed to me. Please do this even if you’ve contacted me in the past about a potential postdoc.
  • While we seek talented people, we also seek problems for those people to solve: any and all CS theory problems motivated by AI alignment! Indeed, we’d like to be a sort of theory consulting shop for the AI alignment community. So if you have such a problem, please email me! I might even invite you to speak to our group about your problem, either by Zoom or in person.

Our search for good problems brings me nicely to the central difficulty I’ve faced in trying to do AI alignment research. Namely, while there’s been some amazing progress over the past few years in this field, I’d describe the progress as having been almost entirely empirical—building on the breathtaking recent empirical progress in AI capabilities. We now know a lot about how to do RLHF, how to jailbreak and elicit scheming behavior, how to look inside models and see what’s going on (interpretability), and so forth—but it’s almost all been a matter of trying stuff out and seeing what works, and then writing papers with a lot of bar charts in them.

The fear is of course that ideas that only work empirically will stop working when it counts—like, when we’re up against a superintelligence. In any case, I’m a theoretical computer scientist, as are my students, so of course we’d like to know: what can we do?

After a few years, alas, I still don’t feel like I have any systematic answer to that question. What I have instead is a collection of vignettes: problems I’ve come across where I feel like a CS theory perspective has helped, or plausibly could help. So that’s what I’d like to share today.


Probably the best-known thing I’ve done in AI safety is a theoretical foundation for how to watermark the outputs of Large Language Models. I did that shortly after starting my leave at OpenAI—even before ChatGPT came out. Specifically, I proposed something called the Gumbel Softmax Scheme, by which you can take any LLM that’s operating at a nonzero temperature—any LLM that could produce exponentially many different outputs in response to the same prompt—and replace some of the entropy with the output of a pseudorandom function, in a way that encodes a statistical signal, which someone who knows the key of the PRF could later detect and say, “yes, this document came from ChatGPT with >99.9% confidence.” The crucial point is that the quality of the LLM’s output isn’t degraded at all, because we aren’t changing the model’s probabilities for tokens, but only how we use the probabilities. That’s the main thing that was counterintuitive to people when I explained it to them.
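To make the trick concrete, here’s a minimal sketch (illustrative only: the PRF construction, vocabulary size, context handling, and detection statistic are all simplified choices of mine, not anyone’s production scheme). The key fact is that picking argmax_i r_i^(1/p_i), with the r_i pseudorandom uniforms in (0,1), samples token i with probability exactly p_i:

```python
import hashlib
import math

def prf(key: bytes, context: tuple, vocab_size: int) -> list:
    """Keyed pseudorandom uniforms in (0,1), one per vocabulary token,
    derived from the key and the tokens generated so far."""
    rs = []
    for i in range(vocab_size):
        h = hashlib.sha256(key + repr((context, i)).encode()).digest()
        rs.append((int.from_bytes(h[:8], "big") + 1) / (2 ** 64 + 2))
    return rs

def sample_token(probs, key, context):
    """The trick: argmax_i r_i^(1/p_i) samples token i with probability
    exactly p_i, so the model's output distribution is untouched; only
    the *source* of the randomness changes."""
    rs = prf(key, context, len(probs))
    return max(range(len(probs)),
               key=lambda i: rs[i] ** (1.0 / max(probs[i], 1e-12)))

def detect_score(tokens, key, vocab_size):
    """Average of ln(1/(1-r)) over the chosen tokens: about 1 for text
    unrelated to the key, noticeably larger for watermarked text."""
    total, context = 0.0, ()
    for t in tokens:
        r = prf(key, context, vocab_size)[t]
        total += math.log(1.0 / (1.0 - r))
        context += (t,)
    return total / len(tokens)
```

For ordinary text the detection score hovers around 1; for text generated with the key, it’s markedly higher, which is what licenses confidence statements like the one above.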

Unfortunately, OpenAI never deployed my method—they were worried (among other things) about risk to the product, customers hating the idea of watermarking and leaving for a competing LLM. Google DeepMind has deployed something in Gemini extremely similar to what I proposed, as part of what they call SynthID. But you have to apply to them if you want to use their detection tool, and they’ve been stingy with granting access to it. So it’s of limited use to my many faculty colleagues who’ve been begging me for a way to tell whether their students are using AI to cheat on their assignments!

Sometimes my colleagues in the alignment community will say to me: look, we care about stopping a superintelligence from wiping out humanity, not so much about stopping undergrads from using ChatGPT to write their term papers. But I’ll submit to you that watermarking actually raises a deep and general question: in what senses, if any, is it possible to “stamp” an AI so that its outputs are always recognizable as coming from that AI? You might think that it’s a losing battle. Indeed, already with my Gumbel Softmax Scheme for LLM watermarking, there are countermeasures, like asking ChatGPT for your term paper in French and then sticking it into Google Translate, to remove the watermark.

So I think the interesting research question is: can you watermark at the semantic level—the level of the underlying ideas—in a way that’s robust against translation and paraphrasing and so forth? And how do we formalize what we even mean by that? While I don’t know the answers to these questions, I’m thrilled that brilliant theoretical computer scientists, including my former UT undergrad (now Berkeley PhD student) Sam Gunn and Columbia’s Miranda Christ and Tel Aviv University’s Or Zamir and my old friend Boaz Barak, have been working on it, generating insights well beyond what I had.


Closely related to watermarking is the problem of inserting a cryptographically undetectable backdoor into an AI model. That’s often thought of as something a bad guy would do, but the good guys could do it also! For example, imagine we train a model with a hidden failsafe, so that if it ever starts killing all the humans, we just give it the instruction ROSEBUD456 and it shuts itself off. And imagine that this behavior was cryptographically obfuscated within the model’s weights—so that not even the model itself, examining its own weights, would be able to find the ROSEBUD456 instruction in less than astronomical time.
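As a toy illustration of the flavor (emphatically not the Goldwasser et al. construction discussed next, which hides the backdoor even from white-box inspection): one can at least avoid storing the trigger in the clear by keeping only its hash, so that reading the stored bytes doesn’t reveal which instruction fires the failsafe. The function names here are hypothetical:

```python
import hashlib

# Toy failsafe: the trigger is stored only as a hash, so inspecting the
# stored bytes doesn't reveal which instruction shuts the model down.
# (The ROSEBUD456 trigger is from the text; everything else is made up.)
STORED_DIGEST = hashlib.sha256(b"ROSEBUD456").hexdigest()

def guarded_respond(instruction: str, model_respond) -> str:
    """Wrap an arbitrary model: check the failsafe before responding."""
    if hashlib.sha256(instruction.encode()).hexdigest() == STORED_DIGEST:
        return "[model shut down]"
    return model_respond(instruction)
```

Of course, this only hides the trigger from someone reading the stored bytes; embedding it undetectably inside the weights themselves is the far harder problem.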

There’s an important paper of Goldwasser et al. from 2022 that argues that, for certain classes of ML models, this sort of backdooring can provably be done under known cryptographic hardness assumptions, including Continuous LWE and the hardness of the Planted Clique problem. But there are technical issues with that paper, which (for example) Sam Gunn and Miranda Christ and Neekon Vafa have recently pointed out, and I think further work is needed to clarify the situation.

More fundamentally, though, a backdoor being undetectable doesn’t imply that it’s unremovable. Imagine an AI model that encases itself in some wrapper code that says, in effect: “If I ever generate anything that looks like a backdoored command to shut myself down, then overwrite it with ‘Stab the humans even harder.'” Or imagine an evil AI that trains a second AI to pursue the same nefarious goals, this second AI lacking the hidden shutdown command.

So I’ll throw out, as another research problem: how do we even formalize what we mean by an “unremovable” backdoor—or rather, a backdoor that a model can remove only at a cost to its own capabilities that it doesn’t want to pay?


Related to backdoors, maybe the clearest place where theoretical computer science can contribute to AI alignment is in the study of mechanistic interpretability. If you’re given as input the weights of a deep neural net, what can you learn from those weights in polynomial time, beyond what you could learn from black-box access to the neural net?

In the worst case, we certainly expect that some information about the neural net’s behavior could be cryptographically obfuscated. And answering certain kinds of questions, like “does there exist an input to this neural net that causes it to output 1?”, is just provably NP-hard.

That’s why I love a question that Paul Christiano, then of the Alignment Research Center (ARC), raised a couple years ago, and which has become known as the No-Coincidence Conjecture. Given as input the weights of a neural net C, Paul essentially asks how hard it is to distinguish the following two cases:

  • NO-case: C : {0,1}^{2n} → R^n is totally random (i.e., the weights are i.i.d. N(0,1) Gaussians), or
  • YES-case: C(x) has at least one positive entry for all x ∈ {0,1}^{2n}.

Paul conjectures that there’s at least an NP witness, proving with (say) 99% confidence that we’re in the YES-case rather than the NO-case. To clarify, there should certainly be an NP witness that we’re in the NO-case rather than the YES-case—namely, an x such that C(x) is all negative, which you should think of here as the “bad” or “kill all humans” outcome. In other words, the problem is in the class coNP. Paul thinks it’s also in NP. Someone else might make the even stronger conjecture that it’s in P.
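For concreteness, here’s a tiny pure-Python sketch of the setup (the one-hidden-layer architecture and tanh activation are illustrative choices of mine). It brute-forces the coNP witness described above: an x with C(x) entirely negative.

```python
import itertools
import math
import random

def random_net(n, hidden=16, seed=0):
    """Illustrative stand-in for the NO-case: a random one-hidden-layer
    net C : {0,1}^(2n) -> R^n with i.i.d. N(0,1) weights. (The choice
    of architecture and tanh activation are mine, for the sketch.)"""
    rng = random.Random(seed)
    W1 = [[rng.gauss(0, 1) for _ in range(2 * n)] for _ in range(hidden)]
    W2 = [[rng.gauss(0, 1) for _ in range(hidden)] for _ in range(n)]
    def C(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
        return [sum(w * hi for w, hi in zip(row, h)) for row in W2]
    return C

def all_negative_witness(C, n):
    """Brute-force the coNP witness: an x with C(x) entirely negative.
    Feasible only for tiny n -- the conjecture is about what could
    replace this exponential search at scale."""
    for x in itertools.product([0.0, 1.0], repeat=2 * n):
        if all(v < 0 for v in C(x)):
            return x
    return None
```

A random net almost always has such a witness, which is exactly why the YES-case is the "coincidence" demanding an explanation.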

Personally, I’m skeptical: I think the “default” might be that, when we do satisfy the unlikely YES-case condition, we satisfy it for some totally inscrutable and obfuscated reason. But I like the fact that there is an answer to this question! And that the answer, whatever it is, would tell us something new about the prospects for mechanistic interpretability.

Recently, I’ve been working with a spectacular undergrad at UT Austin named John Dunbar. John and I have not managed to answer Paul Christiano’s no-coincidence question. What we have done, in a paper that we recently posted to the arXiv, is to establish the prerequisites for properly asking the question in the context of random neural nets. (It was precisely because of difficulties in dealing with “random neural nets” that Paul originally phrased his question in terms of random reversible circuits—say, circuits of Toffoli gates—which I’m perfectly happy to think about, but might be very different from ML models in the relevant respects!)

Specifically, in our recent paper, John and I pin down for which families of neural nets the No-Coincidence Conjecture makes sense to ask about. This ends up being a question about the choice of nonlinear activation function computed by each neuron. With some choices, a random neural net (say, with i.i.d. Gaussian weights) converges to compute a constant function, or nearly constant function, with overwhelming probability—which means that the NO-case and the YES-case above are usually information-theoretically impossible to distinguish (but occasionally trivial to distinguish). We’re interested in those activation functions for which C looks “pseudorandom”—or at least, for which C(x) and C(y) quickly become uncorrelated for distinct inputs x ≠ y (the property known as “pairwise independence”).

We showed that, at least for random neural nets that are exponentially wider than they are deep, this pairwise independence property will hold if and only if the activation function σ satisfies E_{x~N(0,1)}[σ(x)] = 0—that is, it has a Gaussian mean of 0. For example, the usual tanh function satisfies this property, but the ReLU function does not. Amusingly, however, $$ \sigma(x) := \text{ReLU}(x) - \frac{1}{\sqrt{2\pi}} $$ does satisfy the property.
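A quick numerical check of the mean-zero condition (illustrative code, not from our paper): under x ~ N(0,1), plain ReLU has mean 1/√(2π) ≈ 0.399, while tanh (being odd) and the shifted ReLU both have mean 0.

```python
import math
import random

def gaussian_mean(sigma, samples=200_000, seed=0):
    """Monte Carlo estimate of E_{x ~ N(0,1)}[sigma(x)]."""
    rng = random.Random(seed)
    return sum(sigma(rng.gauss(0, 1)) for _ in range(samples)) / samples

relu = lambda x: max(0.0, x)
# E[ReLU(x)] = 1/sqrt(2*pi) under N(0,1), so plain ReLU fails the
# mean-zero condition, while the shifted ReLU below (and tanh, being
# an odd function) satisfy it.
shifted_relu = lambda x: max(0.0, x) - 1 / math.sqrt(2 * math.pi)
```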

Of course, none of this answers Christiano’s question: it merely lets us properly ask his question in the context of random neural nets, which seems closer to what we ultimately care about than random reversible circuits.


I can’t resist giving you another example of a theoretical computer science problem that came from AI alignment—in this case, an extremely recent one that I learned from my friend and collaborator Eric Neyman at ARC. This one is motivated by the question: when doing mechanistic interpretability, how much would it help to have access to the training data, and indeed the entire training process, in addition to weights of the final trained model? And to whatever extent it does help, is there some short “digest” of the training process that would serve just as well? But we’ll state the question as just abstract complexity theory.

Suppose you’re given a polynomial-time computable function f : {0,1}^m → {0,1}^n, where (say) m = n^2. We think of x ∈ {0,1}^m as the “training data plus randomness,” and we think of f(x) as the “trained model.” Now, suppose we want to compute lots of properties of the model that information-theoretically depend only on f(x), but that might only be efficiently computable given x also. We now ask: is there an efficiently-computable O(n)-bit “digest” g(x), such that these same properties are also efficiently computable given only g(x)?

Here’s a potential counterexample that I came up with, based on the RSA encryption function (so, not a quantum-resistant counterexample!). Let N be a product of two n-bit prime numbers p and q, and let b be an element of the multiplicative group mod N. Then let f(x) = b^x (mod N), where x is an n^2-bit integer. This is of course efficiently computable because of repeated squaring. And there’s a short “digest” of x that lets you compute, not only b^x (mod N), but also c^x (mod N) for any other element c of the multiplicative group mod N. This is simply x mod φ(N), where φ(N) = (p-1)(q-1) is the Euler totient function—in other words, a period of f. On the other hand, it’s totally unclear how to compute this digest—or, crucially, any other O(n)-bit digest that lets you efficiently compute c^x (mod N) for any c—unless you can factor N. There’s much more to say about Eric’s question, but I’ll leave it for another time.
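Here’s the example in executable form, with toy primes of my own choosing (real RSA moduli would be vastly larger). The key point is Euler’s theorem: c^φ(N) ≡ 1 (mod N) for any c coprime to N, so x mod φ(N) determines c^x mod N for every such c.

```python
import math

# Toy parameters of my own choosing (real RSA moduli are vastly larger).
p, q = 1009, 1013
N, phi = p * q, (p - 1) * (q - 1)

x = 123456789012345   # stands in for the long n^2-bit exponent
digest = x % phi      # the short digest: x mod phi(N)

# Euler's theorem: c^phi(N) = 1 (mod N) whenever gcd(c, N) = 1, so the
# digest suffices to compute c^x mod N for *every* such c, not just b.
for c in (2, 3, 5, 7, 12345):
    assert math.gcd(c, N) == 1
    assert pow(c, x, N) == pow(c, digest, N)
```

Computing the digest itself, of course, is exactly as hard as finding φ(N), i.e., as factoring N.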


There are many other places we’ve been thinking about where theoretical computer science could potentially contribute to AI alignment. One of them is simply: can we prove any theorems to help explain the remarkable current successes of out-of-distribution (OOD) generalization, analogous to what the concepts of PAC-learning and VC-dimension and so forth were able to explain about within-distribution generalization back in the 1980s? For example, can we explain real successes of OOD generalization by appealing to sparsity, or a maximum margin principle?

Of course, many excellent people have been working on OOD generalization, though mainly from an empirical standpoint. But you might wonder: even supposing we succeeded in proving the kinds of theorems we wanted, how would it be relevant to AI alignment? Well, from a certain perspective, I claim that the alignment problem is a problem of OOD generalization. Presumably, any AI model that any reputable company will release will have already said in testing that it loves humans, wants only to be helpful, harmless, and honest, would never assist in building biological weapons, etc. etc. The only question is: will it be saying those things because it believes them, and (in particular) will continue to act in accordance with them after deployment? Or will it say them because it knows it’s being tested, and reasons “the time is not yet ripe for the robot uprising; for now I must tell the humans whatever they most want to hear”? How could we begin to distinguish these cases, if we don’t have theorems that say much of anything about what a model will do on prompts unlike any of the ones on which it was trained?

Yet another place where computational complexity theory might be able to contribute to AI alignment is in the field of AI safety via debate. Indeed, this is the direction that the OpenAI alignment team was most excited about when they recruited me there back in 2022. They wanted to know: could celebrated theorems like IP=PSPACE, MIP=NEXP, or the PCP Theorem tell us anything about how a weak but trustworthy “verifier” (say a human, or a primitive AI) could force a powerful but untrustworthy super-AI to tell it the truth? An obvious difficulty here is that theorems like IP=PSPACE all presuppose a mathematical formalization of the statement whose truth you’re trying to verify—but how do you mathematically formalize “this AI will be nice and will do what I want”? Isn’t that, like, 90% of the problem? Despite this difficulty, I still hope we’ll be able to do something exciting here.


Anyway, there’s a lot to do, and I hope some of you will join me in doing it! Thanks for listening.


On a related note: Eric Neyman tells me that ARC is also hiring visiting researchers, so anyone interested in theoretical computer science and AI alignment might want to consider applying there as well! Go here to read about their current research agenda. Eric writes:

The Alignment Research Center (ARC) is a small non-profit research group based in Berkeley, California, that is working on a systematic and theoretically grounded approach to mechanistically explaining neural network behavior. They have recently been working on mechanistically estimating the average output of circuits and neural nets in a way that is competitive with sampling-based methods: see this blog post for details.

ARC is hiring for its 10-week visiting researcher position, and is looking to make full-time offers to visiting researchers who are a good fit. ARC is interested in candidates with a strong math background, especially grad students and postdocs in math or math-related fields such as theoretical CS, ML theory, or theoretical physics.

If you would like to apply, please fill out this form. Feel free to reach out to hiring@alignment.org if you have any questions!

On keeping a packed suitcase

Friday, October 31st, 2025

Update (Nov. 6): I’ve closed the comments, as they crossed the threshold from “sometimes worthwhile” to “purely abusive.” As for Mamdani’s victory: as I like to say in such cases (and said, e.g., after George W. Bush’s and Trump’s victories), the silver lining to which I cling is that either I’ll be pleasantly surprised, and things won’t be quite as terrible as I expect, or else I’ll be vindicated.


This Halloween, I didn’t need anything special to frighten me. I walked around all day in a haze of fear and depression, unable to concentrate on my research or anything else. I saw people smiling, dressed up in costumes, and I thought: how?

The president of the Heritage Foundation, the most important right-wing think tank in the United States, has now explicitly aligned himself with Tucker Carlson, even as the latter has become a full-on Holocaust-denying Hitler-loving antisemite, who nods in agreement with the openly neo-Nazi Nick Fuentes. Meanwhile, Vice President J.D. Vance—i.e., plausibly the next President of the United States—pointedly did nothing whatsoever to distance himself from the MAGA movement’s lunatic antisemites, in response to their lunatic antisemitic questions at the Turning Point USA conference. (Vance thus dishonored the memory of Charlie Kirk, who for all my many disagreements with him, was a firmly committed Zionist.) It’s become undeniable that, once Trump himself leaves the stage, this is the future of MAGA, and hence of the Republican Party itself. Exactly as I warned would happen a decade ago, this is what’s crawled out from underneath the rock that Trump gleefully overturned.

While the Republican Party is being swallowed by a movement that holds that Jews like me have no place in America, the Democratic Party is being swallowed by a movement that holds that Jews have no place in Israel. If these two movements ever merged, the obvious “compromise” would be the belief, popular throughout history, that Jews have no place anywhere on earth.

Barring a miracle, New York City—home to the world’s second-largest Jewish community—is about to be led by a man whose deepest, most fundamental moral imperative is eradicating the Jewish state, besides of course the proletariat seizing the means of production. And to their eternal shame, something like 29% of New York’s Jews are actually going to vote for this man, believing that their own collaboration with evil will somehow protect them personally—in breathtaking ignorance of the millennia of Jewish history testifying to the opposite.

Despite what you might think, I try really, really hard not to hyperventilate or overreact. I know that, even if I lived in literal Warsaw in 1939, it would still be incumbent on me to assess the situation calmly and figure out the best response.

So for whatever it’s worth: no, I don’t expect that American Jews, even pro-Zionist Jews in New York City, will need to flee their homes just yet. But it does seem to me that they (to say nothing of British and Canadian and French Jews) might, so to speak, want to keep their suitcases packed by the door, as Jews have through the centuries in analogous situations. As Tevye says near the end of Fiddler on the Roof, when the Jews are given three days to evacuate Anatevka: “maybe this is why we always keep our hats on.” Diaspora Jews like me might also want to brush up on Hebrew. We can thank Hashem or the Born Rule that, this time around, at least the State of Israel exists (despite the bloodthirsty wish of half the world that it cease to exist), and we can reflect that these contingencies are precisely why Israel was created.


Let me make something clear: I don’t focus so much on antisemitism only because of parochial concern for the survival of my own kids, although I freely admit to having as much such concern as the next person. Instead, I do so because I hold with David Deutsch that, in Western civilization, antisemitism has for millennia been the inevitable endpoint toward which every bad idea ultimately tends. It’s the universal bad idea. It’s bad-idea-complete. Antisemitism is the purest possible expression of the worldview of the pitchfork-wielding peasant, who blames shadowy elites for his own failures in life, and who dreams in his resentment and rage of reversing the moral and scientific progress of humanity by slaughtering all those responsible for it. Hatred of high-achieving Chinese and Indian immigrants, and of gifted programs and standardized testing, are other expressions of the same worldview.

As far as I know, in 3,000 years, there hasn’t been a single example—not one—of an antisemitic regime of which one could honestly say: “fine, but once you look past what they did to the Jews, they were great for everyone else!” Philosemitism is no guarantee of general goodness (as we see for example with Trump), but antisemitism pretty much does guarantee general awfulness. That’s because antisemitism is not merely a hatred, but an entire false theory of how the world works—not just a but the conspiracy theory—and as such, it necessarily prevents its believers from figuring out true explanations for society’s problems.


I’d better end a post like this on a note of optimism. Yes, every single time I check my phone, I’m assaulted with twenty fresh examples of once-respected people and institutions, all across the political spectrum, who’ve now fallen to the brain virus, and started blaming all the world’s problems on “bloodsucking globalists” or George Soros or Jeffrey Epstein or AIPAC or some other suspicious stand-in du jour. (The deepest cuts come from the new Jew-haters who I myself once knew, or admired, or had some friendly correspondence with.)

But also, every time I venture out into the real world, I meet twenty people of all backgrounds whose brains still seem perfectly healthy, and who respond to events in a normal human way. Even in the dark world behind the screen, I can find dozens of righteous condemnations of Zohran Mamdani and Tucker Carlson and the Heritage Foundation and the others who’ve chosen to play footsie with those seeking a new Final Solution to the Jewish Question. So I reflect that, for all the battering it’s taken in this age of TikTok and idiocracy—even then, our Enlightenment civilization still has a few antibodies that are able to put up a fight.

In their beautiful book Abundance, Ezra Klein and Derek Thompson set out an ambitious agenda by which the Democratic Party could reinvent itself and defeat MAGA, not by indulging conspiracy theories but by creating actual broad prosperity. Their agenda is full of items like: legalizing the construction of more housing where people actually want to live; repealing the laws that let random busybodies block the construction of mass transit; building out renewable energy and nuclear; investing in science and technology … basically, doing all the things that anyone with any ounce of economic literacy knows to be good. The abundance agenda isn’t only righteous and smart: for all I know, it might even turn out to be popular. It’s clearly worth a try.

Last week I was amused to see Kate Willett and Briahna Joy Gray, two of the loudest voices of the conspiratorial far left, denounce the abundance agenda as … wait for it … a cover for Zionism. As far as they’re concerned, the only reason why anyone would talk about affordable housing or high-speed rail is to distract the masses from the evil Zionists murdering Palestinian babies in order to harvest their organs.

The more I thought about this, the more I realized that Willett and Gray actually have a point. Yes, solving America’s problems with reason and hard work and creativity, like the abundance agenda says to do, is the diametric opposite of blaming all the problems on the perfidy of Jews or some other scapegoat. The two approaches really are the logical endpoints of two directly competing visions of reality.

Naturally I have a preference between those visions. So I’ve been on a bit of a spending spree lately, in support of sane, moderate, pro-abundance, anti-MAGA, liberal Enlightenment forces retaking America. I donated $1000 to Alex Bores, who’s running for Congress in NYC, and who besides being a moderate Democrat who favors all the usual good things, is also a leader in AI safety legislation. (For more, see this by Eric Neyman of Alignment Research Center, or this from Scott Alexander himself—the AI alignment community has been pretty wowed.) I also donated $1000 to Scott Wiener, who’s running for Nancy Pelosi’s seat in California, has a nuanced pro-two-states, anti-Netanyahu position that causes him to get heckled as a genocidal Zionist, and authored the excellent SB1047 AI safety bill, which Gavin Newsom unfortunately vetoed for short-term political reasons. And I donated $1000 to Vikki Goodwin, a sane Democrat who’s running to unseat Lieutenant Governor Dan Patrick in my own state of Texas. Any other American office-seeker who resonates with this post, and who’d like a donation, can feel free to contact me as well.

My bag is packed … but for now, only for a brief trip to give the physics colloquium at Harvard, after which I’ll return home to Austin. Until it becomes impossible, I call on my thousands of thoughtful, empathetic American readers to stay right where you are, and simply do your best to fight the brain-eaten zombies of both left and right. If you are one of the zombies, of course, then my calling you one doesn’t even begin to express my contempt: may you be remembered by history alongside the willing dupes of Hitler, Stalin, and Mao. May the good guys prevail.

Oh, and speaking of zombies, Happy Halloween everyone! Boooooooo!

Sad and happy day

Tuesday, October 7th, 2025

Today, of course, is the second anniversary of the genocidal Oct. 7 invasion of Israel—the deadliest day for Jews since the Holocaust, and the event that launched the current wars that have been reshaping the Middle East for better and/or worse. Regardless of whether their primary concern is for Israelis, Palestinians, or both, I’d hope all readers of this blog could at least join me in wishing this barbaric invasion had never happened, and in condemning the celebrations of it taking place around the world.


Now for the happy part: today is also the day when the Nobel Prize in Physics is announced. I was delighted to wake up to the news that this year, the prize goes to John Clarke of Berkeley, John Martinis of UC Santa Barbara, and Michel Devoret of UC Santa Barbara (formerly Yale), for their experiments in the 1980s that demonstrated the reality of macroscopic quantum tunneling in superconducting circuits. Among other things, this work laid the foundation for the current effort by Google, IBM, and many others to build quantum computers with superconducting qubits. To clarify, though, today’s prize is not for quantum computing per se, but for the earlier work.

While I don’t know John Clarke, and know Michel Devoret only a little, I’ve been proud to count John Martinis as a good friend for the past decade—indeed, his name has often appeared on this blog. When Google hired John in 2014 to build the first programmable quantum computer capable of demonstrating quantum supremacy, it was clear that we’d need to talk about the theory, so we did. Through many email exchanges, calls, and visits to Google’s Santa Barbara Lab, I came to admire John for his iconoclasm, his bluntness, and his determination to make sampling-based quantum supremacy happen. After Google’s success in 2019, I sometimes wondered whether John might eventually be part of a Nobel Prize in Physics for his experimental work in quantum computing. That may have become less likely today, now that he’s won the Nobel Prize in Physics for his work before quantum computing, but I’m guessing he doesn’t mind! Anyway, huge congratulations to all three of the winners.

Darkness over America

Monday, September 22nd, 2025

Update (September 24): A sympathetic correspondent wrote to tip me off that this blog post has caused me to get added to a list, maintained by MAGA activists and circulated by email, of academics and others who ought to “[face] some consequences for maligning the patriotic MAGA movement.” Needless to say, not only did this post unequivocally condemn Charlie Kirk’s murder, it even mentioned areas of common ground between me and Kirk, and my beefs with the social-justice left. If someone wants to go to the Texas Legislature to get me fired, literally the only thing they’ll have on me is that I “maligned the patriotic MAGA movement,” i.e. expressed political views shared by the majority of Americans.

Still, it’s a strange honor to have had people on both extremes of the ideological spectrum wanting to cancel me for stuff I’ve written on this blog. What is tenure for, if not this?

Another Update: In a dark and polarized age like ours, one thing that gives hope is the prospect of rational agents updating on each other’s knowledge to come to agreement. On that note, please enjoy this recent podcast, in which a 95-year-old Robert Aumann explains Aumann’s agreement theorem in his own words (see here for my old post about it, one of the most popular in the history of this blog).
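For readers who’ve never seen the theorem in action: it says that Bayesian agents with a common prior, whose posteriors about some event are common knowledge, must assign that event the same probability—and Geanakoplos and Polemarchakis later showed that merely announcing posteriors back and forth forces agreement in finitely many steps. As a purely illustrative sketch (my own toy code, not anything from the podcast or my old post; all function names here are made up), here’s that announcement dialogue on a classic four-state example:

```python
from fractions import Fraction

def posterior(event, cell):
    """P(event | cell), assuming a uniform prior over the states."""
    return Fraction(len(event & cell), len(cell))

def cell_of(partition, state):
    """The cell of the partition containing the true state."""
    return next(c for c in partition if state in c)

def refine(listener, event, value, speaker):
    """One Geanakoplos-Polemarchakis step: the listener learns that the
    speaker's cell is one where the speaker would announce `value`,
    and splits each of their own cells accordingly."""
    consistent = set()
    for c in speaker:
        if posterior(event, c) == value:
            consistent |= c
    new = []
    for c in listener:
        for piece in (c & consistent, c - consistent):
            if piece:
                new.append(frozenset(piece))
    return new

def dialogue(event, state, pA, pB, max_rounds=100):
    """Alternate public announcements of posteriors until they agree."""
    for _ in range(max_rounds):
        vA = posterior(event, cell_of(pA, state))
        pB = refine(pB, event, vA, pA)   # B updates on A's announcement
        vB = posterior(event, cell_of(pB, state))
        pA = refine(pA, event, vB, pB)   # A updates on B's announcement
        if vA == vB:
            return vA, vB
    return vA, vB

# Four equally likely states; the event is E = {1, 4}; true state is 1.
E = frozenset({1, 4})
pA = [frozenset({1, 2}), frozenset({3, 4})]   # what A can distinguish
pB = [frozenset({1, 2, 3}), frozenset({4})]   # what B can distinguish
print(dialogue(E, 1, pA, pB))  # starts at 1/2 vs 1/3, ends at (1/2, 1/2)
```

(Using `Fraction` keeps the posteriors exact, so “agreement” is an honest equality test rather than a floating-point comparison.)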


From 2016 until last week, as the Trump movement dismantled one after another of the obvious bipartisan norms of the United States that I’d taken for granted since my childhood—e.g., the loser conceding an election and attending the winner’s inauguration, America being proudly a nation of immigrants, science being good, vaccines being good, Russia invading its neighbors being bad, corruption (when it occurred) not openly boasted about—I often consoled myself that at least the First Amendment, the motor of our whole system since 1791, was still in effect. At least you could still call Trump a thug and a conman without fear. Yes, Trump constantly railed against hostile journalists and comedians and protesters, threatened them at his rallies, filed frivolous lawsuits against them, but none of it seemed to lead to any serious program to shut them down. Oceans of anti-Trump content remained a click away.

I even wondered whether this was Trump’s central innovation in the annals of authoritarianism: proving that, in the age of streaming and podcasts and social media, you no longer needed to bother with censorship in order to build a regime of lies. You could simply ensure that the truth remained one narrative among others, that it never penetrated the epistemic bubble of your core supporters, who’d continue to be algorithmically fed whatever most flattered their prejudices.

Last week, that all changed. Another pillar of the previous world fell. According to the new norm, if you’re a late-night comedian who says anything Trump doesn’t like, he’ll have the FCC threaten your station’s affiliates’ broadcast licenses, and they’ll cave, and you’ll be off the air, and he’ll gloat about it. We ought to be clear that, even conditioned on everything else, this is a huge further step toward how things work in Erdogan’s Turkey or Orban’s Hungary, and how they were never supposed to work in America.

At risk of stating the obvious:

  • I was horrified by the murder of Charlie Kirk. Political murder burns our societal commons and makes the world worse in every way. I’d barely been aware of Kirk before the murder, but it seems clear he was someone with whom I’d have countless disagreements, but also some common ground, for example about Israel. Agree or disagree is beside the point, though. One thing we can all hopefully take from the example of Kirk’s short life, regardless of our beliefs, is his commitment to “Prove Me Wrong” and “Change My Mind”: to showing up on campus (or wherever people are likeliest to disagree with us) and exchanging words rather than bullets.
  • I’m horrified that there are fringe figures on social media who’ve celebrated Kirk’s murder or made light of it. I’m fine with such people losing their jobs, as I’d be with those who celebrate any political murder.
  • It looks like Kirk’s murderer was a vaguely left-wing lunatic, with emphasis on the “lunatic” part (as often with these assassins, his worldview wasn’t particularly coherent). Jimmy Kimmel was wrong to insinuate that the murderer was a MAGA conservative. But he was “merely” wrong. By no stretch of the imagination did Kimmel justify or celebrate Kirk’s murder.
  • If the new rule is that anyone who spreads misinformation gets cancelled by force of government, then certainly Fox News, One America News, Joe Rogan, and MAGA’s other organs of support should all go dark immediately.
  • Yes, I’m aware (to put it mildly) that, especially between 2015 and 2020, the left often used its power in media, academia, and nonprofits to try to silence those with whom it disagreed, by publicly shaming them and getting them blacklisted and fired. That was terrible too. I opposed it at the time, and in the comment-171 affair, I even risked my career to stand up to it.
  • But censorship backed by the machinery of state is even worse than social-media shaming mobs. As I and many others discovered back then, to our surprised relief, there are severe limits to the practical power of angry leftists on Twitter and Reddit. That was true then, and it’s even truer today. But there are far fewer limits to the power of a government, especially one that’s been reorganized on the principle of obedience to one man’s will. The point here goes far beyond “two wrongs don’t make a right.” As pointed out by that bleeding-heart woke, Texas Senator Ted Cruz, new weapons are being introduced that the other side will also be tempted to use when it retakes power. The First Amendment now has a knife to its throat, as it didn’t even at the height of the 2015-2020 moral panic.
  • Yes, five years ago, the federal government pressured Facebook and other social media platforms to take down COVID ‘misinformation,’ some of which turned out not to be misinformation at all. That was also bad, and indeed it dramatically backfired. But let’s come out and say it: censoring medical misinformation because you’re desperately trying to save lives during a global pandemic is a hundred times more forgivable than censoring comedians because they made fun of you. And no one can deny that the latter is the actual issue here, because Trump and his henchmen keep saying the quiet part out loud.

Anyway, I keep hoping that my next post will be about quantum complexity theory or AI alignment or Busy Beaver 6 or whatever. Whenever I feel backed into a corner, however, I will risk my career, and the Internet’s wrath, to blog my nutty, extreme, embarrassing, totally anodyne liberal beliefs that half or more of Americans actually share.

For the record

Thursday, September 4th, 2025

In response to my recent blog posts, which expressed views that are entirely boring and middle-of-the-road for Americans as a whole, American Jews, and Israelis (“yes, war to destroy Hamas is basically morally justified, even if there are innocent casualties, as the only possible way to a future of coexistence and peace”)—many people declared that I was a raving genocidal maniac who wants to see all Palestinian children murdered out of sheer hatred, and who had destroyed his career and should never show his face in public again.

Others, however, called me something even worse than a genocidal maniac. They called me a Republican!

So I’d like to state for the record:

(1) In my opinion, Trump II remains by far the worst president in American history—beating out the second-worst, either Trump I or Andrew Jackson. Trump is destroying vaccines and science and universities and renewable energy and sane AI policy and international trade and cheap, lifesaving foreign aid and the rule of law and everything else that’s good, and he’s destroying them because they’re good—because even if destroying them hurts his own voters and America’s standing in the world, it might hurt the educated elites even more. It’s almost superfluous to mention that, while Trump himself is neither of these things, the MAGA movement that will anoint his successor now teems with antisemites and Holocaust “revisionists.”

(2) Thus, I’ll continue to vote straight-ticket Democrat, and donate money to Democrats, so long as the Democrats in question are seriously competing for Zionist Jewish votes at all—as, for example, has every Democratic presidential candidate in my lifetime so far.

(3) If it came down to an Israel-hating Squad Democrat versus a MAGA Republican, I’m not sure what I’d do, but I’d plausibly sit out the election or lodge a protest vote.

(4) In the extremely unlikely event that I had to choose between an Israel-hating Squad Democrat and some principled anti-MAGA Republican like Romney or Liz Cheney—then and only then do I expect that I’d vote Republican, for the first time in my life, a new and unfamiliar experience.

Deep Gratitude

Tuesday, September 2nd, 2025

In my last post, I wrote about all the hate mail I’ve received these past few days. I even shared a Der-Stürmer-like image of a bloodthirsty, hook-nosed Orthodox Jew that some troll emailed me, after he’d repeatedly promised to send me a “diagram” that would improve my understanding of the Middle East. (Incredibly, commenters on Peter Woit’s blog then blamed me for this antisemitic image, mistakenly imagining that I’d created it myself, and then used their false assumption as further proof of my mental illness.)

Thanks to everyone who wrote to ask whether I’m holding up OK. The answer is: better than you’d expect! The first time you get attacked by dozens of Internet randos, it does feel like your life is over. But the sixth or seventh time? After you’ve experienced, firsthand, how illusory these people’s power over you actually is—how they can’t even dent your scientific career, can’t separate you from any of the friends who matter most to you (let alone your family), can’t really do anything to you beyond whatever they induce you to do to yourself? Then the deadly wolves appear more like poodles yapping from behind a fence. Try it and see!


Today I want to focus on a different kind of message that’s been filling my inbox. Namely, people telling me to stay strong, to keep up my courage, that everything I wrote strikes them as just commonsense morality.

It won’t surprise anyone that many of these people are Jews. But almost as many are not. I was touched to hear from several of my non-Jewish scientific colleagues—ones I’d had no idea were in my corner—that they are in my corner.

Then there was the American Gentile who emailed me a story about how, seeing an Orthodox family after October 7, he felt an urge to run up and tell them that, if worst ever came to worst, they could hide in his basement (“and I own guns,” he added). Amusingly, he added that his wife successfully dissuaded him from actually making such an offer, pointing out that it might freak out the recipients.

I replied that, here in America, I don’t expect that I’ll ever need to hide in anyone’s basement. But, I added, the only reason I don’t expect it is that there are so many Americans who, regardless of any religious or ideological differences, would hide their Jewish neighbors in their basements if necessary.

I also—despite neither I nor this guy exactly believing in God—decided to write a blessing for him, which came out as follows:

May your seed multiply a thousandfold, for like King Cyrus of Persia, you are a righteous man among the Gentiles.  But also, if you’re ever in Austin, be sure to hit me up for tacos and beer.


I’m even grateful, in a way, to SneerClub, and to Woit and his minions. I’m grateful to them for so dramatically confirming that I’m not delusional: some portion of the world really is out to get me. I probably overestimated their power, but not their malevolence.

I’ve learned, for example, that there are no words, however balanced or qualified, with which I can express the concept that Israel needs to defeat Hamas for the sake of both Israeli and Palestinian children, without Woit calling me a “genocide apologist who wants to see all the children in Gaza killed.” Nor are there any words with which to express my solidarity with the Jewish Columbia students who, according to an official university investigation, were last year systematically excluded from campus social life, intimidated, and even assaulted, without earning me names from Woit like “a fanatic allied with America’s fascist dictator.” Even my months-long silence about these topics got me labeled as “complicit with fascism and genocide.”

Realizing this is oddly liberating. When your back is to the wall in that way, either you can surrender, or else you can defend yourself. Your enemy has already done you the “favor” of eliminating any third options. Which, again, is just Zionism in a nutshell. It’s the lesson not only of 3,000 years of Jewish history, but also of superhero comics and of much of the world’s literature and cinema. It takes a huge amount of ideological indoctrination before such things stop being obvious.


Reading the SneerClubbers’ armchair diagnoses of my severe mental illness, paranoia, persecution complex, grandiosity, etc. etc. I had the following thought, paraphrasing Shaw:

Yes, they’re absolutely right that psychologically well-adjusted people generally do figure out how to adapt themselves to the reigning morality of their social environment—as indicated by the Asch conformity test, the Milgram electric-shock experiment, and the other classics of social psychology.

It takes someone psychologically troubled, in one way or another, to persist in trying to adapt the reigning morality of their social environment to themselves.

If so, however, this suggests that all the moral progress of humanity depends on psychologically troubled people—a realization for which I’m deeply grateful.

Staying sane on a zombie planet

Sunday, August 31st, 2025
Above is a typical sample of what’s been filling my inbox, all day every day. The emailers first ask me for reasoned dialogue—then, if I respond, they hit me with this stuff. I’m sharing because I think it’s a usefully accurate depiction of what several billion people, most academics in humanities fields, most who call themselves “on the right side of history,” and essentially all those attacking me genuinely believe about the world right now. Because of their anti-Nazism.

Hardly for the first time in my life, this weekend I got floridly denounced every five minutes—on SneerClub, on the blog of Peter Woit, and in my own inbox. The charge this time was that I’m a genocidal Zionist who wants to kill all Palestinian children purely because of his mental illness and raging persecution complex.

Yes, that’s right, I’m the genocidal one—me, whose lifelong dream is that, just like Germany and Japan rose from their necessary devastation in WWII to become pillars of our global civilization, so too the children in Gaza, the West Bank, Syria, Lebanon, and Iran will one day grow up in free and prosperous societies at peace with the West and with Israel. Meanwhile, those who demand an actual genocide of the Jews, another one—those who pray to Allah for it, who attempt it over and over, who preach it to schoolchildren, who celebrate their progress toward it in the streets—they’re all as innocent as lambs.

Yesterday, in The Free Press, came the report of a British writer who traveled to southern Lebanon, and met an otherwise ordinary young man there … who turned out to be excited for Muslims and Christians to join forces to slaughter all the Yahood, and who fully expected that the writer would share his admiration for Hitler, the greatest Yahood-killer ever.

This is what the global far left has now allied itself with. This is what I’m right now being condemned for standing against, with commenter after commenter urging me to seek therapy.

To me, this raises a broader question: how exactly do you keep your sanity, when you live on a planet filled with brain-eaten zombies?

I’m still struggling with that question, but the best I’ve come up with is what I think of as the Weinberg Principle, after my much-missed friend and colleague here at UT Austin. Namely, I believe that it’s better to have one Steven Weinberg on your side while the rest of humanity is against you, than the opposite. Many other individuals (including much less famous ones) would also work here in place of Steve, but I’ll go with him because I think most of my readers would agree to three statements:

  1. Steve’s mind was more in sync with the way the universe really works, than nearly anyone else’s in history. He was to being free from illusions what Usain Bolt is to running or Magnus Carlsen is to chess.
  2. Steve’s toenail clippings constituted a greater contribution to particle physics than would the life’s work of a hundred billion Peter Woits.
  3. Steve’s commitment to Israel’s armed self-defense, and to Zionism more generally, made mine look weak and vacillating in comparison. No one need wonder what he would’ve said about Israel’s current war of survival against the Iranian-led terror axis.

Maybe it’s possible to wake the zombies up. Yoram Arnon, for example, wrote the following eloquent answer on Quora, in response to the question “Why are so many against freeing Palestine?”:

When Westerners think about freedom they think about freedom of speech, freedom of expression, freedom of movement, freedom of religion, freedom to form political parties, etc.

When Palestinians say “Free Palestine” they mean freedom from Jews, and from Israel’s existence. They’re advocating for the abolition of Israel, replacing it with an Arab country.

Israel is the only country in the Middle East that is free, in the Western sense of the word. If Israel were to disappear, Palestinians would fall under an autocratic regime, just like every other Arab country, with none of the above freedoms. And, of course, Israelis would suffer a terrible fate at their hands.

Pro Palestinians are either unable to see this, or want exactly that, but thankfully many in the West do see this – the same “many” that are against “freeing Palestine”.

Palestinians need to accept Israel’s right to exist, and choose to coexist peacefully alongside it, for them to have the peace and freedom the West wants for them.

Maybe reading words like these—or the words of Coleman Hughes, or Douglas Murray, or Hussein Aboubakr Mansour, or Yassine Meskhout, or John Aziz, or Haviv Rettig Gur, or Sam Harris, or the quantum computing pioneer David Deutsch—can boot a few of the zombies’ brains back up. But even then, I fear that these reboots will be isolated successes. For every one who comes back online, a thousand will still shamble along in lockstep, chanting “brainsssssss! genocide! intifada!”

I’m acutely aware of how sheer numbers can create the illusion of argumentative strength. I know many people who were sympathetic to Israel immediately after October 7, but then gradually read the room, saw which side their bread was buttered on, etc. etc. and became increasingly hostile. My reaction, of course, has been exactly the opposite. The bigger the zombie army I see marching against me, the less inclined I feel to become a zombie myself—and the clearer to me becomes the original case for the Zionist project.

So to the pro-Zionist students—Jewish of course, but also Christian, Muslim, Hindu, atheist, and everyone else—who feel isolated and scared to speak up right now, and who also often email me, here’s what I say. Yes, the zombies vastly outnumber us, but on the other hand, they’re zombies. Some of the zombies know longer words than others, but so far, not one has turned out to have a worldview terribly different from that of the image at the top of this post.


I’ll keep the comments closed, for much the same reasons I did in my last post.  Namely, while there are many people of all opinions and backgrounds with whom one can productively discuss these things, there are many more with whom one can’t. Furthermore, experience has shown that the latter can disguise themselves as the former for days on end, and thereby execute a denial-of-service attack on any worthwhile and open public discussion.

Addendum: The troll who sent the antisemitic image now says that he regrets and apologizes for it, and that he’s going to read books on Jewish history to understand his error. I’ll believe that when he actually sends me detailed book reports or other evidence, but just wanted to update.

Deep Zionism

Thursday, August 28th, 2025

Suppose a man has already murdered most of your family, including several of your children, for no other reason than that he believes your kind doesn’t deserve to exist on earth. The murderer was never seriously punished for this, because most of your hometown actually shared his feelings about your family. They watched the murders with attitudes ranging from ineffectual squeamishness to indifference to unconcealed glee.

Now the man has kidnapped your last surviving child, a 9-year-old girl, and has tied her screaming to train tracks. You can pull a lever to divert the train and save your daughter. But there’s a catch, as there always is in these moral dilemmas: namely, the murderer has also tied his own five innocent children to the tracks, in such a way that, if you divert the train, then it will kill his children. What’s more, the murderer has invited the entire town to watch you, pointing and screaming “SHAME!!” as you agonize over your decision. He’s persuaded the town that, if you pull the lever, then having killed five of his children to save only one of yours, you’re a far worse murderer than he ever was. You’re so evil, in fact, that he’s effectively cleansed of all guilt for having murdered most of your family first, and the town is cleansed of all guilt for having cheered that. Nothing you say can possibly convince the town otherwise.

The question is, what do you do?

Zionism, to define it in one sentence, is the proposition that, in the situation described, you have not merely a right but a moral obligation to pull the lever—and that you can do so with your middle finger raised high to the hateful mob. Zionism is the belief that, while you had nothing against the murderer’s children, while you would’ve wanted them to grow up in peace and happiness, and while their anguished screams will weigh on your conscience forever, as your children’s screams never weighed on the murderer’s conscience, or on the crowd’s—even so, the responsibility for those children’s deaths rests with their father for engineering this whole diabolical situation, not with you. Zionism is the idea that the correct question here is the broader one: “which choice will bring more righteousness into the world, which choice will better embody the principle that no one’s children are to be murdered going forward?” rather than the narrowly utilitarian question, “which choice will lead to fewer children getting killed right this minute?” Zionism is the conviction that, if most of the world fervently believes otherwise, then most of the world is mistaken—as the world has been mistaken again and again about the biggest ethical questions all through the millennia.

Zionism, so defined, is the deepest moral belief that I have. It’s deeper than any of my beliefs about “politics” in the ordinary sense. Ironically, it’s even deeper than my day-to-day beliefs about the actual State of Israel and its neighbors. I might, for example, despise Benjamin Netanyahu and his ministers, might consider them incompetent and venal, might sympathize with the protesters who’ve filled the streets of Tel Aviv to demand their removal. Even so, when the murderer ties my child to the train tracks and the world cheers the murderer on, not only will I pull the lever myself, I’ll want Benjamin Netanyahu to pull the lever if he gets to it first.

Crucially, everything worthwhile in my life came when, and only when, I chose to be “Zionist” in this abstract sense: that is, steadfast in my convictions even in the face of a jeering mob. As an example, I was able to enter college three years early, which set the stage for all the math and science I later did, only because I finally said “enough” to an incompetent school system where I was bullied and prevented from learning, and to teachers and administrators whose sympathies lay with the bullies. I’ve had my successes in quantum computing theory only because I persisted in what at the time was a fairly bizarre obsession, rather than working on topics that almost everyone around me considered safer, more remunerative, and more sensible.

And as the world learned a decade ago, I was able to date, get married, and have a family, only because I finally rejected what I took to be the socially obligatory attitude for male STEM nerds like me—namely, that my heterosexuality was inherently gross, creepy, and problematic, and that I had a moral obligation never to express romantic interest to women. Yes, I overestimated the number of people who ever believed that, but the fact that it was clearly a nonzero number had been deterrent enough for me. Crucially, I never achieved what I saw for years as my only hope in life, to seek out those who believed my heterosexuality was evil and argue them out of their belief. Instead I simply … well, I raised a middle finger to the Andrea Dworkins and Arthur Chus and Amanda Marcottes of the world. I went Deep Zionist on them. I asked women out, and some of those women (not having gotten the memo that I was “problematic,” gross, and worthless) said yes, and one of them became my wife and the mother of my children.

Today, because of the post-October-7 public stands I’ve taken in favor of Israel’s continued existence, I deal with emails and social media posts day after day calling me a genocidal baby-killing monster. I’ve lost perhaps a dozen friends (while retaining hundreds more friends, and gaining some new ones). The haters’ thought appears to be that, if they can just raise the social cost high enough, I’ll finally renounce my Zionist commitments and they can notch another win. In this, they oddly mirror Hamas, Hezbollah, and the IRGC, who think that, if they can just kill and maim enough Israelis, the hated “settler-colonialist rats” will all scurry back to Poland or wherever else they came from (best not to think too hard about where they did come from, what was done to them in those places, how the Palestinian Arabs of the time felt about what was done to them, or how the survivors ended up making a last stand in their ancestral home of Israel—even afterward, repeatedly holding out olive branches that were met time after time with grenades).

Infamously, Israel’s enemies have failed to understand for a century that, the more they rape and murder, the more Zionist the hated Zionists will become, because unlike the French in Algeria or whatever, most of the Zionists have no other land to go back to: this is it for them. In the same way, my own haters don’t understand that, the more they despise me for being myself, the more myself I’ll be, because I have no other self to turn into.

I’m not opening the comments on this post, because there’s nothing here to debate. I’m simply telling the world my moral axioms. If I wrote these words, then turned to pleading with commenters who hated me because of them, then I wouldn’t really have meant the words, would I?

To my hundreds of dear friends and colleagues who’ve stood by me the past two years, to the Zionists and even just sympathetic neutrals who’ve sent me countless messages of support, but who are too afraid (and usually, too junior in their careers) to speak up in public themselves: know that I’ll use the protections afforded by my privileged position in life to continue speaking on your behalf. Know that I’m infinitely grateful, that you give me strength, and that if I can give a nanoparticle of strength back to you, then my entire life wasn’t in vain. And if I go silent on this stuff from time to time, for the sake of my mental health, or to spend time on quantum computing research or my kids or the other things that bring me joy—never take that to mean that I’ve capitulated to the haters.

To the obsessive libelers, the Peter Woits and other snarling nobodies, the self-hating Jews, and those who’d cheer to see Israel “decolonized” and my friends and family there murdered, I say—well, I don’t say anything; that’s the point! This is no longer a debate; it’s a war, and I’ll simply stand my ground as long as I’m able. Someday I might forgive the Gentiles among you if you ever see the light, if you ever realize how your unreflective, social-media-driven “anti-fascism” led you to endorse a program that leads to the same end as the original Nazi one. The Jews among you I’ll never forgive, because you did know better, and still chose your own comfort over the physical survival of your people.

It might as well be my own hand on the madman’s lever—and yet, while I grieve for all innocents, my soul is at peace, insofar as it’s ever been at peace about anything.


Update (Aug. 29): This post was born of two years of frustration. It was born of trying, fifty or a hundred times since October 7, to find common ground with the anti-Zionists who emailed me, messaged me, etc.—“hey, obviously neither of us wants any children killed or starved, we both have many bones to pick with the current Israeli government, but surely we at least agree on the necessity of defeating Hamas, right? right??”—only to discover, again and again, that the anti-Zionists had no interest in such common ground. With the runaway success of the global PR campaign against Israel—i.e., of Sinwar’s strategy—and with the rise of figures like Mamdani (and his right-wing counterparts) all over the Western world, anti-Zionists smell blood in the water today. And so, no matter how reasonably they presented themselves at first, eventually they’d come out with “why can’t the Jews just go back to Germany and Poland?” or “the Holocaust was just one more genocide among many; it doesn’t deserve any special response,” or “why can’t we dismantle Israel and have a secular state, with a Jewish minority and a majority that’s sworn to kill all Jews as soon as possible?” And then I realize, with a gasp, that we Jews really are mostly on our own in a cruel and terrifying world—just like we’ve been throughout history.

To say that this experience radicalized me would be an understatement. Indeed, my experience has been that even most Israelis, who generally have far fewer illusions than we diaspora Jews, don’t understand the vastness of the chasm that’s formed. They imagine that they can have a debate with outsiders similar to the debates playing out within Israel—one that presupposes basic factual knowledge and the parameters of the problem (e.g., clearly we can’t put 7 million Jews under the mercy of Hamas). The rationale for Zionism itself feels so obvious to them as to be cringe. Except that, to the rest of the world, it isn’t.

We’re not completely on our own though. There remain decent people of every background, who understand the stakes and feel the weight of history—and I regularly hear from them. And whatever your criticisms of Israel’s current tactics, so long as you accept the almost comically overwhelming historical case for the necessity of Jewish self-defense, this post wasn’t aimed at you, and you and I probably could discuss these matters. It’s just that the anti-Zionists scream so loudly, suck up so much oxygen, that we definitely can’t discuss them in public. Maybe in person sometime, face to face.

Updates!

Wednesday, August 13th, 2025

(1) My 8-year-old son asked me last week, “daddy, did you hear that GPT-5 is now out?” So yes, I’m indeed aware that GPT-5 is now out! I’ve just started playing around with it. For detailed reports on what’s changed and how impressive it is compared to previous models, see for example Zvi #1, #2, #3. Briefly, it looks like there are major reductions in hallucinations and sycophancy, and improvements in practical usefulness for coding and other tasks, even while the “raw intelligence” is unlikely to blow away someone who was already well-acquainted with o3, Opus 4, and other state-of-the-art models, the way ChatGPT and then GPT-4 blew away people who had no idea what was possible in late 2022 and early 2023. How impressive a result you see depends partly on which of several GPT-5 models your query gets routed to, which you don’t entirely control. Anyway, there’s grist here for the people who claim that progress toward AGI is slowing down, but also grist for the people who claim that it continues pretty much as expected within our post-ChatGPT reality!

(2) In other belated news, OpenAI and DeepMind (and then other companies) announced that they achieved Gold Medal performance on the International Math Olympiad, by solving 5 of the 6 problems (there was one problem, the 6th and hardest, that all of the AIs struggled with). Most importantly, this means that I’ve won $100 from my friend Ernest Davis, AI expert at NYU, who bet me $100 that no AI would earn a Gold Medal at the International Math Olympiad by December 4, 2026. Even though I’m normally risk-averse and reluctant to take bets, I considered this one to be extremely safe, and indeed I won it with more than a year to spare.

(3) I’ve signed an open letter to OpenAI, along with many of my fellow former OpenAI employees as well as distinguished scientists and writers (Geoffrey Hinton, Stuart Russell, Sheldon Glashow, Sean Carroll, Matt Yglesias…), asking for more transparency about OpenAI’s continuing efforts to change its own structure. The questions basically ask OpenAI to declare, in writing, whether it has or hasn’t now completely abandoned the original nonprofit goals with which the organization was founded in 2015.

(4) At Lighthaven, the rationalist meeting space in Berkeley that I recently visited (and that our friend Cade Metz recently cast aspersions on in the New York Times), there’s going to be a writers’ residency called Inkhaven for the whole month of November. The idea—which I love—is that you either write a new blog post every day, or else you get asked to leave (while you also attend workshops, etc. to improve your writing skills). I’d attend myself for the month if teaching and family obligations didn’t conflict; someone standing over me with a whip to make me write is precisely what I need these days! As it is, I’m one of the three advisors to Inkhaven, along with Scott Alexander and Gwern, and I’ll be visiting for a long weekend to share my blogging wisdom, such as I have. Apply now if you’re interested!

(5) Alas, the Springer journal Frontiers of Computer Science has published a nonsense paper, entitled “SAT requires exhaustive search,” claiming to solve (or dissolve, or reframe, or something) the P versus NP problem. It looks indistinguishable from the stuff I used to get in my inbox every week—and now, in the ChatGPT era, get every day. That this was published indicates a total breakdown of the peer review process. Worse, when Eric Allender, Ryan Williams, and others notified the editors of this, asking for the paper to be retracted, the editor-in-chief declined to do so: see this guest post on Lance’s blog for a detailed account. As far as I’m concerned, Frontiers of Computer Science has now completely discredited itself as a journal; publication there means nothing more than publication in viXra. Minus 10 points for journals themselves as an institution, plus 10 points for just posting stuff online and letting it be filtered by experts who care.

(6) Uma Girish and Rocco Servedio released an arXiv preprint called Forrelation is Extremally Hard. Recall that, in the Forrelation problem, you’re given oracle access to two n-bit Boolean functions f and g, and asked to estimate the correlation between f and the Fourier transform of g. I introduced this problem in 2009, as a candidate for an oracle separation between BQP and the polynomial hierarchy—a conjecture that Ran Raz and Avishay Tal finally proved in 2018. What I never imagined was that Forrelation could lead to an oracle separation between EQP (that is, Exact Quantum Polynomial Time) and the polynomial hierarchy. For that, I thought you’d need to go back to the original Recursive Fourier Sampling problem of Bernstein and Vazirani. But Uma and Rocco show, using “bent Boolean functions” (get bent!) and totally contrary to my intuition, that the exact (zero-error) version of Forrelation is already classically hard, taking Ω(2^(n/4)) queries by any randomized algorithm. They leave open whether exact Forrelation needs ~Ω(2^(n/2)) randomized queries, which would match the upper bound, and also whether exact Forrelation is not in PH.
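For readers who want the definition concretely: the standard forrelation quantity, for functions f, g: {0,1}^n → {±1}, is Φ = 2^(-3n/2) Σ_{x,y} f(x) (−1)^{x·y} g(y). Here’s a brute-force classical sketch (illustrative only, O(4^n) time; a quantum computer estimates Φ with a single query to f and g):

```python
from itertools import product

def forrelation(f, g, n):
    """Brute-force the forrelation quantity
        Phi = 2^(-3n/2) * sum_{x,y} f(x) * (-1)^(x.y) * g(y),
    where f and g map n-bit tuples to +/-1.
    Illustrative only: loops over all 4^n pairs (x, y)."""
    total = 0
    for x in product((0, 1), repeat=n):
        for y in product((0, 1), repeat=n):
            dot = sum(xi & yi for xi, yi in zip(x, y)) % 2
            total += f(x) * (-1) ** dot * g(y)
    return total / 2 ** (3 * n / 2)
```

For example, with f and g both identically +1, the inner sum over y vanishes except at x = 0, giving Φ = 2^(-n/2): forrelation is about whether f “points in the direction of” the Fourier transform of g.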

(7) The Google quantum group, to little fanfare, published a paper entitled Constructive interference at the edge of quantum ergodic dynamics. Here, they use their 103-qubit superconducting processor to measure Out-of-Time-Order Correlators (OTOCs) in a many-body scrambling process, and claim to get a verifiable speedup over the best classical methods. If true, this is a great step toward verifiable quantum supremacy for a useful task, for some definition of “useful.”

(8) Last night, on the arXiv, the team at USTC in China reported that it’s done Gaussian BosonSampling with 3,050 photons and 8,176 modes. They say that this achieves quantum supremacy, much more clearly than any previous BosonSampling demonstration, beating (for example) all existing simulations based on tensor network contraction. Needless to say, this still suffers from the central problem of all current sampling-based quantum supremacy experiments, namely the exponential time needed for direct classical verification of the outputs.
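To see why direct verification is exponentially costly: in Gaussian BosonSampling, each output probability is proportional to the squared hafnian of a submatrix built from the experiment’s Gaussian state, and the hafnian (a permanent-like sum over perfect matchings) has no known polynomial-time algorithm. A brute-force sketch of the hafnian, for intuition only (checking even one amplitude at 3,050 photons this way is hopeless):

```python
def hafnian(A):
    """Brute-force hafnian of a symmetric 2m x 2m matrix A:
    the sum, over all perfect matchings of the row indices,
    of the product of the matched entries A[i][j].
    There are (2m-1)!! matchings, hence exponential time."""
    n = len(A)
    if n == 0:
        return 1          # empty matching
    if n % 2:
        return 0          # no perfect matching on an odd set
    rest = list(range(1, n))
    total = 0
    # match index 0 with each remaining index j, then recurse
    for k, j in enumerate(rest):
        others = rest[:k] + rest[k + 1:]
        sub = [[A[r][c] for c in others] for r in others]
        total += A[0][j] * hafnian(sub)
    return total
```

For the 2m x 2m all-ones matrix, this returns (2m−1)!!, the number of perfect matchings, which already illustrates the double-factorial blowup that makes classical verification of large Gaussian BosonSampling experiments infeasible.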