Archive for the ‘Nerd Interest’ Category

Remarks at UT on the Pentagon/Anthropic situation

Tuesday, March 10th, 2026

Last Thursday, my friend and colleague Sam Baker, in UT Austin’s English department, convened an “emergency panel” here about the developing Pentagon/Anthropic situation, and asked me to speak at it. Even though the situation has continued to develop since then, I thought my prepared remarks for the panel might be of interest. At the bottom, I include a few additional thoughts.


Hi! I’m Scott Aaronson! I teach CS here at UT. While my background is in quantum computing, I’ve spent the past four years dabbling in AI alignment. I did a two-year leave at OpenAI, in their now-defunct Superalignment team. I joined back when OpenAI’s line was “we’re a little nonprofit, doing all this in the greater interest of humanity, and we’d dissolve ourselves before we raced to build an AI that we thought would be dangerous.” I know Sam Altman, and many other current and former OpenAI people. I also know Dario Amodei—in fact, I knew Dario well before Anthropic existed. Despite that, I don’t actually feel like I have deep insight into the current situation with Anthropic and the Pentagon that you wouldn’t get by reading the news, or (especially) reading commentators like Zvi Mowshowitz, Kelsey Piper, Scott Alexander, and Dean Ball. But since I was asked to comment, I’ll try.

The first point I’ll make: the administration’s line, to the extent they’ve had a consistent line, is basically that they needed to cut off Anthropic because Anthropic is a bunch of woke, America-hating, leftist radicals. I think that, if you actually know the Anthropic people, that characterization is pretty laughable. Unless by “woke,” what the administration meant was “having any principles at all, beyond blind deference to authority, and sticking to them.”

I mean, Anthropic only got into this situation in the first place because it was more eager than the other AI companies to support US national security, by providing a version of Claude that could be used on classified networks. So they signed a contract with the Pentagon, and that contract had certain restrictions in it, which the Pentagon read and agreed to … until they decided that they no longer agreed.

That brings me to my second point. The Pentagon regularly signs contracts with private firms that limit what the Pentagon can do in various ways. That’s why they’re called military contract-ors. So anyone who claims it’s totally unprecedented for Anthropic to try to restrict what the government can do with Anthropic’s private property—I think that person is either misinformed or else trying to misinform.

The third point. If the Pentagon felt that it couldn’t abide a private company telling it what is or isn’t an appropriate military use of current AI, then the Pentagon was totally within its rights to cancel its contract with Anthropic, and find a different contractor (like OpenAI…) that would play ball. So it’s crucial for everyone here to understand that that’s not all that the Pentagon did. Instead they said: because Anthropic dared to stand up to us, we’re going to designate them a Supply Chain Risk—a designation that was previously reserved for foreign nation-state adversaries, and that, incredibly, hasn’t been applied to DeepSeek or other Chinese AI companies that arguably do present such risks. So basically, they threatened to destroy Anthropic, by making it horrendously complicated for any companies that do business with the government—i.e., just about all companies—also to do business with Anthropic.

Either that, the Pentagon threatened, or we’ll invoke the Defense Production Act to effectively nationalize Anthropic—i.e., we’ll just commandeer their intellectual property and use it for whatever we want, despite Anthropic’s refusal. You get that? Claude is both a supply chain risk that’s too dangerous for the military to use, and somehow also so crucial to the supply chain that we, the military, need to commandeer it.

To me, this is the authoritarian part of what the Pentagon is doing (with the inconsistency being part of the authoritarianism; who but a dictator gets to impose his will on two directly contradictory grounds?). It’s the part that goes against the free-market principles that our whole economy is built on, and the freedom of speech and conscience that our whole civilization is built on. And I think this will ultimately damage US national security, by preventing other American AI companies from wanting to work on defense going forward.

That brings me to the fourth point, about OpenAI. While this was going down, Sam Altman posted online that he agreed with Anthropic’s red lines: LLMs should not be used for killing people with no human in the kill chain, and they also shouldn’t be used for mass surveillance of US citizens. I thought, that’s great! The frontier AI labs are sticking together when the chips are down, rather than infighting.

But then, just a few hours after the Pentagon designated Anthropic a supply chain risk, OpenAI announced that it had reached a deal with the Pentagon. Huh?!? If they have the same red lines, then why can one of them reach a deal while the other can’t?

The experts’ best guess seems to be this: Anthropic said, yes, using AI to kill people autonomously or to surveil US citizens should already be illegal, but we insist on putting those things in the contract to be extra-double-sure. Whereas OpenAI said, the Pentagon can use our models for “all lawful purposes”—this was the language that the Pentagon had insisted on. And, continued OpenAI, we interpret “all lawful purposes” to mean that they can’t cross these red lines. But if it turns out we’re wrong about that … well, that’s not our problem! That’s between the Pentagon and the courts, or whatever.

Again, we don’t fully know, because most of the relevant contracts haven’t been made public, but that’s an inference from reading between the lines of what has been made public.

Back in 2023-2024, when there was the Battle of the Board, then the battle over changing OpenAI’s governance structure, etc., some people formed a certain view of Sam: that he would say all the good and prosocial and responsible things, even while doing whichever thing maximized revenue. I’ll leave it to you to decide whether last week’s events are consistent with that view.

OK, fifth and final point. I remember 15-20 years ago, talking to Eliezer Yudkowsky and others terrified about AI. They said, this is the biggest issue facing the world. It’s not safe for anyone to build because it could turn against us, or even before that, the military could commandeer it or whatever. And I and others were like, dude, you guys obviously read too much science fiction!

And now here we are. Not only are we living in a science-fiction story, I’d say we’re living in a particularly hackneyed one. I mean, the military brass marching into a top AI lab and telling the nerds, “tough luck, we own your AI now”? Couldn’t reality have been a little more creative than that?

The point is, given the developments of the past couple weeks, I think we now need to retire forever the argument against future AI scenarios that goes, “sorry, that sounds too much like a science-fiction plot.” As has been said, you’d best get used to science fiction because you’re living in one!


Updates and Further Thoughts: Of course I’ve seen that Anthropic has now filed a lawsuit to block the Pentagon from designating it a supply chain risk, arguing that both its free speech and due process rights were violated. I hope their lawsuit succeeds; it’s hard for me to imagine how it wouldn’t.

The fact that I’m, obviously, on Anthropic’s side of this particular dispute doesn’t mean that I’ll always be on Anthropic’s side. Here as elsewhere, it’s crucial not to outsource your conscience to anyone.

Zvi makes an extremely pertinent comparison:

[In shutting down Starlink over Ukraine,] Elon Musk actively did the exact thing [the Pentagon is] accusing Anthropic of maybe doing. He made a strategic decision of national security at the highest level as a private citizen, in the middle of an active military operation in an existential defensive shooting war, based on his own read of the situation. Like, seriously, what the actual fuck.

Eventually we bought those services in a contract. We didn’t seize them. We didn’t arrest Musk. Because a contract is a contract is a contract, and your private property is your private property, until Musk decides yours don’t count.

Another key quote in Zvi’s piece, from Gregory Allen:

And here’s the thing. I spent so much of my life in the Department of Defense trying to convince Silicon Valley companies, “Hey, come on in, the water is fine, the defense contracting market, you know, you can have a good life here, just dip your toe in the water”.

And what the Department of Defense has just said is, “Any company that dips their toe in the water, we reserve the right to grab their ankle, pull them all the way in at any time”. And that is such a disincentive to even getting started in working with the DoD.

Lastly, I’d like to address the most common counterargument against Anthropic’s position—as expressed for example by Noah Smith, or in the comments of my previous post on this. The argument goes roughly like so:

You, nerds, are the ones who’ve been screaming for years about AI being potentially existentially dangerous! So then, did you seriously expect to stay in control of the technology? If it’s really as dangerous and important as you say, then of course the military was going to step in at some point and commandeer your new toy, just like it would if you were building a nuclear weapon.

Two immediate responses:

  1. Even in WWII, in one of the most desperate circumstances in human history, the US government didn’t force a single scientist at gunpoint to build nuclear weapons for them. The scientists did so voluntarily, based on their own considered moral judgment at the time (even if some later came to regret their involvement).
  2. Even if I considered it “inevitable” that relatively thoughtful and principled people, like Dario Amodei, would lose control over the future to gleeful barbarians like Pete Hegseth, it still wouldn’t mean I couldn’t complain when it happened. This is still a free country, isn’t it?

The time I didn’t meet Jeffrey Epstein

Sunday, February 1st, 2026

Last night, I was taken aback to discover that my name appears in the Epstein Files, in 26 different documents. This is despite the fact that I met Jeffrey Epstein a grand total of zero times, and had zero email or any other contact with him … which is more (less) than some of my colleagues can say.

The bulk of the correspondence involves Epstein wanting to arrange a meeting with me and Seth Lloyd back in 2010, via an intermediary named Charles Harper, about funding a research project on “Cryptography in Nature.”

Searching my inbox, I found that this Charles Harper did indeed contact me in May 2010, and that I then met him at the S&S Deli in Cambridge (plausible, although I have zero recollection of this meeting—only of the deli). Harper then sent me a detailed follow-up email about his proposed Cryptography in Nature project, naming Jeffrey Epstein for the first time as the project’s funder, and adding: “perhaps you will know Jeffrey and his background and situation.”

For whatever reason, I forwarded this email to my parents, brother, and then-fiancee Dana. My brother then found and shared a news article about Epstein’s prostitution conviction, adding to a different article that I had found and shared. (At that time, like many others, I’d probably vaguely heard of Epstein, but he didn’t have 0.1% the infamy that he has now.) Then my mom wrote the following: “be careful not to get sucked up in the slime-machine going on here! Since you don’t care that much about money, they can’t buy you at least.”

It appears from emails that Charles Harper tried again later that summer to arrange a meeting between me and Epstein, but that I took my mom’s advice and largely blew him off, and no such meeting ever happened. Amazingly, I then forgot entirely that any of this had occurred until last night. By way of explanation, some business/finance dude trying to interest me in half-baked ideas involving quantum, AI, cryptography, etc., often dangling the prospect of funding for my students and postdocs, shows up in my life like every month. Most of their world-changing initiatives go nowhere for one reason or another. There really wasn’t much reason to think further about this, until Epstein had become history’s most notorious sex criminal, which (again) wouldn’t happen until years later, after I’d forgotten.

It gets better, though. In the Epstein Files, one also finds a November 2010 letter from Charles Harper to Epstein about organizing a conference on the same Cryptography in Nature topic, which includes the following idea about me:

Scott Aaronson was born on May 21st, 1981. He will be 30 in 2011. The conference could follow a theme of: “hurry to think together with Scott Aaronson while he is still in his 20s and not yet a pitiful over-the-hill geezer in his 30s.” This offers another nice opportunity for celebration.

I see no indication that any such conference ever happened; in any case, I didn’t get invited to one!

On my Facebook, some friends are joking that “it tracks that someone into teenage girls might think Scott Aaronson was a hot property in his nubile 20s, who would get old and boring in his 30s”—and that maybe Epstein was less sexist about such matters than everyone assumes. I replied that I wished I could say the proposition that I’d gradually get slower and more senile through the 2010s and 2020s was entirely false.

But the best comment was that I’ve been incredibly lucky to have such an astute family. If only Bill Gates and Larry Summers had had my mom to go to for advice, they could’ve saved themselves a lot of grief.

Scott A. on Scott A. on Scott A.

Sunday, January 18th, 2026

Scott Alexander has put up one of his greatest posts ever, a 10,000-word eulogy to Dilbert creator Scott Adams, of which I would’ve been happy to read 40,000 words more. In it, Alexander trains a microscope on Adams’ tragic flaws as a thinker and human being, but he adds:

In case it’s not obvious, I loved Scott Adams.

Partly this is because we’re too similar for me to hate him without hating myself.

And:

Adams was my teacher in a more literal way too. He published several annotated collections, books where he would present comics along with an explanation of exactly what he was doing in each place, why some things were funny and others weren’t, and how you could one day be as funny as him. Ten year old Scott devoured these … objectively my joke posts get the most likes and retweets of anything I write, and I owe much of my skill in the genre to cramming Adams’ advice into a malleable immature brain.

When I first heard the news that Scott Adams had succumbed to cancer, I posted something infinitely more trivial on my Facebook. I simply said:

Scott Adams (who reigned for decades as the #1 Scott A. of the Internet, with Alexander as #2 and me as at most #3) was a hateful asshole, a nihilist, and a crank. And yet, even when reading the obituaries that explain what an asshole, nihilist, and crank he was, I laugh whenever they quote him.

Inspired by Scott Alexander, I’d like now to try again, to say something more substantial. As Scott Alexander points out, Scott Adams’ most fundamental belief—the through-line that runs not only through Dilbert but through all his books and blog posts and podcasts—was that the world is ruled by idiots. The pointy-haired boss always wins, spouting about synergy and the true essence of leadership, and the nerdy Dilberts always lose. Trying to change minds by rational argument is a fool’s errand, as “master persuaders” and skilled hypnotists will forever run rings around you. He, Scott Adams, is cleverer than everyone else, among other things because he realizes all this—but even he is powerless to change it.

Or as Adams put it in The Dilbert Principle:

It’s useless to expect rational behavior from the people you work with, or anybody else for that matter. If you can come to peace with the fact that you’re surrounded by idiots, you’ll realize that resistance is futile, your tension will dissipate, and you can sit back and have a good laugh at the expense of others.

The thing is, if your life philosophy is that the world is ruled by idiots, and that confident charlatans will always beat earnest nerds, you’re … often going to be vindicated by events. Adams was famously vindicated back in 2015, when he predicted Trump’s victory in the 2016 election (since Trump, you see, was a “master persuader”), before any other mainstream commentator thought that Trump even had a serious chance of winning the Republican nomination.

But if you adopt this worldview, you’re also often going to be wrong—as Adams was in countless other confident predictions (see Scott Alexander’s post for examples), to say nothing of his scientific or moral views.

My first hint that the creator of Dilbert was not a reliable thinker came when I learned of his smugly dismissive view of science. One of the earliest Shtetl-Optimized posts, way back in 2006, was entitled Scott A., disbeliever in Darwinism. At that time, Adams’ crypto-creationism struck me as just some bizarre, inexplicable deviation. I’m no longer confused about it: Scott Alexander’s eulogy shows just how much deeper the crankishness went, how Adams also gobbled up medical misinformation, placed his own cockamamie ideas about gravity on par with general relativity, etc. etc. Yet Alexander succeeds in reconciling all this with Adams’ achievements: it all follows from the starting axiom that the world is ruled by morons, and that he, Scott Adams, is the only one clever enough to see through it all.


Is my epistemology any different? Do I not also look out on the world, and see idiots and con-men and pointy-haired bosses in every direction? Well, not everywhere. At any rate, I see far fewer of them in the hard sciences.

This seems like a good time to say something that’s been a subtext of Shtetl-Optimized for 20 years, but that Scott Alexander has inspired me to make text.

My whole worldview starts from the observation that science works. Not perfectly, of course—working in academic science for nearly 30 years, I’ve had a close-up view of the flaws—but the motor runs. On a planet full of pointy-haired bosses and imposters and frauds, science nevertheless took us in a few centuries from wretchedness and superstition to walking on the moon and knowing the age of the universe and the code of life.

This is the point where people always say: that’s all well and good, but you can’t derive ought from is, and science, for all its undoubted successes, tells us nothing about what to value or how to live our lives.

To which I reply: that’s true in a narrow sense, but it dramatically understates how far you can get from the “science works” observation.

As one example, you can infer that the people worth listening to are the people who speak and write clearly, who carefully distinguish what they know from what they don’t, who sometimes change their minds when presented with opposing views and at any rate give counterarguments—i.e., who exemplify the values that make science work. The political systems worth following are the ones that test their ideas against experience, that have built-in error-correction mechanisms, that promote people based on ability rather than loyalty—the same things that make scientific institutions work, insofar as they do work. And of course, if the scientists who study X are nearly unanimous in saying that a certain policy toward X would be terrible, then we’d better have a damned good reason to pursue the policy anyway. This still leaves a wide range of moral and political views on the table, but it rules out virtually every kind of populism, authoritarianism, and fundamentalism.

Incidentally, this principle—that one’s whole moral and philosophical worldview should grow out of the seed of science working—is why, from an early age, I’ve reacted to every kind of postmodernism as I would to venomous snakes. Whenever someone tells me that science is just another narrative, a cultural construct, a facade for elite power-seeking, etc., to me they might as well be O’Brien from 1984, in the climactic scene where he tortures Winston Smith into agreeing that 2+2=5, and that the stars are just tiny dots a few miles away if the Party says they are. Once you can believe absurdities, you can justify atrocities.

Scott Adams’ life is interesting to me in that it shows exactly how far it’s possible to get without internalizing this. Yes, you can notice that the pointy-haired boss is full of crap. You can make fun of the boss. If you’re unusually good at making fun of him, you might even become a rich, famous, celebrated cartoonist. But you’re never going to figure out any ways of doing things that are systematically better than the pointy-haired boss’s ways, or even recognize the ways that others have found. You’ll be in error far more often than in doubt. You might even die of prostate cancer earlier than necessary, because you listened to medical crackpots and relied on ivermectin, turning to radiation and other established treatments only after having lost crucial time.


Scott Adams was hardly the first great artist to have tragic moral flaws, or to cause millions of his fans to ask whether they could separate the artist from the art. But I think he provides one of the cleanest examples where the greatness and the flaws sprang from the same source: namely, overgeneralization from the correct observation that “the world is full of idiots,” in a way that leaves basically no room even for Darwin or Einstein, and so inevitably curdles over time into crankishness, bitterness, and arrogance. May we laugh at Scott Adams’ cartoons and may we learn from his errors, both of which are now permanent parts of the world’s heritage.

Understanding vs. impact: the paradox of how to spend my time

Thursday, December 11th, 2025

Not long ago William MacAskill, a co-founder of the Effective Altruism movement, visited Austin, where I got to talk with him in person for the first time. I was a fan of his book What We Owe the Future, and found him as thoughtful and eloquent face-to-face as I did on the page. Talking to Will inspired me to write the following short reflection on how I should spend my time, which I’m now sharing in case it’s of interest to anyone else.


By inclination and temperament, I simply seek the clearest possible understanding of reality.  This has led me to spend time on (for example) the Busy Beaver function and the P versus NP problem and quantum computation and the foundations of quantum mechanics and the black hole information puzzle, and on explaining whatever I’ve understood to others.  It’s why I became a professor.

But the understanding I’ve gained also tells me that I should try to do things that will have huge positive impact, in what looks like a pivotal and even terrifying time for civilization.  It tells me that seeking understanding of the universe, like I’ve been doing, is probably nowhere close to optimizing any values that I could defend.  It’s self-indulgent, a few steps above spending my life learning to solve Rubik’s Cube as quickly as possible, but only a few.  Basically, it’s the most fun way I could make a good living and have a prestigious career, so it’s what I ended up doing.  I should be skeptical that such a course would coincidentally also maximize the good I can do for humanity.

Instead I should plausibly be figuring out how to make billions of dollars, in cryptocurrency or startups or whatever, and then spending it in a way that saves human civilization, for example by making AGI go well.  Or I should be convincing whatever billionaires I know to do the same.  Or executing some other galaxy-brained plan.  Even if I were purely selfish, as I hope I’m not, still there are things other than theoretical computer science research that would bring more hedonistic pleasure.  I’ve basically just followed a path of least resistance.

On the other hand, I don’t know how to make billions of dollars.  I don’t know how to make AGI go well.  I don’t know how to influence Elon Musk or Sam Altman or Peter Thiel or Sergey Brin or Mark Zuckerberg or Marc Andreessen to do good things rather than bad things, even when I have gotten to talk to some of them.  Past attempts in this direction by extremely smart and motivated people—for example, those of Eliezer Yudkowsky and Sam Bankman-Fried—have had, err, uneven results, to put it mildly.  I don’t know why I would succeed where they failed.

Of course, if I had a better understanding of reality, I might know how better to achieve prosocial goals for humanity.  Or I might learn why they were actually the wrong goals, and replace them with better goals.  But then I’m back to the original goal of understanding reality as clearly as possible, with the corresponding danger that I spend my time learning to solve Rubik’s Cube faster.

The QMA Singularity

Saturday, September 27th, 2025

Update (Sep. 29): Since this post has now gone semi-viral on X, Hacker News, etc., with people arguing about how trivial or nontrivial was GPT5’s “discovery,” it seems worthwhile to say something that was implicit in the post.

Namely, GPT5-Thinking’s suggestion of a function to use “should have” been obvious to us. It would have been obvious to us had we known more, or had we spent more time studying the literature or asking experts.

The point is, anyone engaged in mathematical research knows that an AI that can “merely” fill in the insights that “should’ve been” obvious to you is a really huge freaking deal! It speeds up the actual discovery process, as opposed to the process of writing LaTeX or preparing the bibliography or whatever. This post gave one tiny example of what I’m sure will soon be thousands.

I should also add that, since this post went up, a commenter named Phillip Harris proposed a better function to use than GPT-5’s: det(I-E) rather than Tr[(I-E)^{-1}]. While we’re still checking details, not only do we think this works, we think it simplifies our argument and solves one of our open problems. So it seems human supremacy has been restored, at least for now!
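To spell out the linear algebra behind that swap (my gloss, not anything from the paper or the comment): for a Hermitian matrix, the determinant packages the same eigenvalue information as the trace of the inverse, but as a product rather than a sum:

$$ \det(I - E(\theta)) = \prod_{i=1}^{N} \left(1 - \lambda_i(\theta)\right). $$

So det(I-E(θ)) approaches 0 exactly when some eigenvalue approaches 1, just as Tr[(I-E(θ))^{-1}] blows up there; the determinant has the further advantage of being a polynomial, rather than a rational function, of the matrix entries, which plausibly accounts for the simplification.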


A couple days ago, Freek Witteveen of CWI and I posted a paper to the arXiv called “Limits to black-box amplification in QMA.” Let me share the abstract:

We study the limitations of black-box amplification in the quantum complexity class QMA. Amplification is known to boost any inverse-polynomial gap between completeness and soundness to exponentially small error, and a recent result (Jeffery and Witteveen, 2025) shows that completeness can in fact be amplified to be doubly exponentially close to 1. We prove that this is optimal for black-box procedures: we provide a quantum oracle relative to which no QMA verification procedure using polynomial resources can achieve completeness closer to 1 than doubly exponential, or a soundness which is super-exponentially small. This is proven by using techniques from complex approximation theory, to make the oracle separation from (Aaronson, 2008), between QMA and QMA with perfect completeness, quantitative.

You can also check out my PowerPoint slides here.

To explain the context: QMA, or Quantum Merlin Arthur, is the canonical quantum version of NP. It’s the class of all decision problems for which, if the answer is “yes,” then Merlin can send Arthur a quantum witness state that causes him to accept with probability at least 2/3 (after a polynomial-time quantum computation), while if the answer is “no,” then regardless of what witness Merlin sends, Arthur accepts with probability at most 1/3. Here, as usual in complexity theory, the constants 2/3 and 1/3 are just conventions, which can be replaced (for example) by 1-2^{-n} and 2^{-n} using amplification.
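To give a feel for where amplification comes from, here is the bare classical intuition: a majority vote over independent runs. The snippet below is my illustration, not the actual QMA amplification procedure, which needs more care (a cheating Merlin could entangle his witnesses across the repeated copies; the theorem still goes through, but that takes proof).

    from math import comb

    def majority_error(p_correct, k):
        """Probability that a majority vote over k independent runs errs,
        when each run is independently correct with probability p_correct.
        (For odd k, the majority errs iff at most (k-1)/2 runs are correct.)"""
        return sum(comb(k, i) * p_correct**i * (1 - p_correct)**(k - i)
                   for i in range(k // 2 + 1))

    for k in [1, 11, 51, 101]:  # odd numbers of repetitions
        print(k, majority_error(2/3, k))
    # k=1 gives 1/3 and k=11 gives about 0.12; the error keeps falling like
    # 2^(-Omega(k)), so poly(n) repetitions get the error down to 2^(-n).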

A longstanding open problem about QMA—not the biggest problem, but arguably the most annoying—has been whether the 2/3 can be replaced by 1, as it can be for classical MA for example. In other words, does QMA = QMA₁, where QMA₁ is the subclass of QMA that admits protocols with “perfect completeness”? In 2008, I used real analysis to show that there’s a quantum oracle relative to which QMA ≠ QMA₁, which means that any proof of QMA = QMA₁ would need to use “quantumly nonrelativizing techniques” (not at all an insuperable barrier, but at least we learned something about why the problem is nontrivial).

Then came a bombshell: in June, Freek Witteveen and longtime friend-of-the-blog Stacey Jeffery released a paper showing that any QMA protocol can be amplified, in a black-box manner, to have completeness error that’s doubly exponentially small, 1/exp(exp(n)). They did this via a method I never would’ve thought of, wherein a probability of acceptance is encoded via the amplitudes of a quantum state that decrease in a geometric series. QMA, it turned out, was an old friend that still had surprises up its sleeve after a quarter-century.

In August, we had Freek speak about this breakthrough by Zoom in our quantum group meeting at UT Austin. Later that day, I asked Freek whether their new protocol was the best you could hope to do with black-box techniques, or whether for example one could amplify the completeness error to be triply exponentially small, 1/exp(exp(exp(n))). About a week later, Freek and I had a full proof written down that, using black-box techniques, doubly-exponentially small completeness error is the best you can do. In other words: we showed that, when one makes my 2008 QMA ≠ QMA1 quantum oracle separation quantitative, one gets a lower bound that precisely matches Freek and Stacey’s protocol.

All this will, I hope, interest and excite aficionados of quantum complexity classes, while others might have very little reason to care.

But here’s a reason why other people might care. This is the first paper I’ve ever put out for which a key technical step in the proof of the main result came from AI—specifically, from GPT5-Thinking. Here was the situation: we had an N×N Hermitian matrix E(θ) (where, say, N=2^n), each of whose entries was a poly(n)-degree trigonometric polynomial in a real parameter θ. We needed to study the largest eigenvalue of E(θ), as θ varied from 0 to 1, to show that this λ_max(E(θ)) couldn’t start out close to 0 but then spend a long time “hanging out” ridiculously close to 1, like 1/exp(exp(exp(n))) close for example.

Given a week or two to try out ideas and search the literature, I’m pretty sure that Freek and I could’ve solved this problem ourselves. Instead, though, I simply asked GPT5-Thinking. After five minutes, it gave me something confident, plausible-looking, and (I could tell) wrong. But rather than laughing at the silly AI like a skeptic might do, I told GPT5 how I knew it was wrong. It thought some more, apologized, and tried again, and gave me something better. So it went for a few iterations, much like interacting with a grad student or colleague. Within a half hour, it had suggested looking at the function

$$ Tr[(I-E(\theta))^{-1}] = \sum_{i=1}^N \frac{1}{1-\lambda_i(\theta)}. $$

It pointed out, correctly, that this was a rational function in θ of controllable degree, that happened to encode the relevant information about how close the largest eigenvalue λ_max(E(θ)) is to 1. And this … worked, as we could easily check ourselves with no AI assistance. And I mean, maybe GPT5 had seen this or a similar construction somewhere in its training data. But there’s not the slightest doubt that, if a student had given it to me, I would’ve called it clever. Obvious with hindsight, but many such ideas are.
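As a quick sanity check on that identity (my own numerical illustration, nothing from the paper), you can build a Hermitian matrix with a chosen spectrum and watch the trace of the inverse blow up as the top eigenvalue approaches 1:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 8

    # Choose eigenvalues in [0, 1), with the largest pinned very close to 1.
    lam = np.sort(rng.uniform(0, 0.5, size=N))
    lam[-1] = 1 - 1e-9

    # Conjugate by a random unitary to get a dense Hermitian E with spectrum lam.
    Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
    E = Q @ np.diag(lam) @ Q.conj().T

    print(np.trace(np.linalg.inv(np.eye(N) - E)).real)  # ~1e9, dominated by 1/(1 - lam[-1])
    print(np.sum(1 / (1 - lam)))                        # agrees, up to floating-point error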

I had tried similar problems a year ago, with the then-new GPT reasoning models, but I didn’t get results that were nearly as good. Now, in September 2025, I’m here to tell you that AI has finally come for what my experience tells me is the most quintessentially human of all human intellectual activities: namely, proving oracle separations between quantum complexity classes. Right now, it almost certainly can’t write the whole research paper (at least if you want it to be correct and good), but it can help you get unstuck if you otherwise know what you’re doing, which you might call a sweet spot. Who knows how long this state of affairs will last? I guess I should be grateful that I have tenure.

BusyBeaver(6) is really quite large

Saturday, June 28th, 2025

For overdetermined reasons, I’ve lately found the world an increasingly terrifying and depressing place. It’s gotten harder and harder to concentrate on research, or even popular science writing. Every so often, though, something breaks through that wakes my inner child, reminds me of why I fell in love with research thirty years ago, and helps me forget about the triumphantly strutting factions working to destroy everything I value.

Back in 2022, I reported an exciting advance in BusyBeaverology: namely, whereas we previously knew merely that BB(6) > 10^{36,534}, Pavel Kropitz managed to show that

$$ BB(6) \gt {}^{15}10. $$

For those tuning in from home, here BB(6) is the 6th Busy Beaver number, i.e. the maximum number of steps that a 6-state Turing machine with a {0,1} alphabet can take before halting, when run on an initially all-0 input tape. Also, the left-superscript means tetration, or iterated exponentiation: for example, ^{15}10 means 10 to the 10 to the 10 and so on 15 times.
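For concreteness, here is tetration as a few lines of Python (a toy of mine, obviously only runnable for tiny inputs):

    def tetrate(base, height):
        """Left-superscript tetration ^{height}base: a power tower of height copies of base."""
        result = 1
        for _ in range(height):
            result = base ** result
        return result

    print(tetrate(10, 2))  # 10**10 = 10,000,000,000
    print(tetrate(2, 4))   # 2**2**2**2 = 65,536
    # tetrate(10, 15), the lower bound on BB(6) above, is already far too
    # large to evaluate, store, or physically represent in any way.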

By comparison, last year the international “BBchallenge” team determined that BB(5) is “merely” 47,176,870 (see also Quanta magazine’s superb feature article on that milestone). So, between 5 and 6 is where the Busy Beaver function makes its leap, from the millions to beyond the bounds of observable reality.

But if you thought that was the end of the BB(6) story, think again! Eleven days ago, Tristan Sterin, who organized the BBchallenge team, emailed to tell me that a team member with the handle “mxdys” improved the BB(6) bound yet further, to

$$ BB(6) \gt {}^{10,000,000}10 $$

(i.e., 10 to the 10 to the 10 and so on 10 million times), with a correctness proof in Coq. Then, three days ago, Tristan wrote again to say that mxdys has improved the bound again, to

$$ BB(6) \gt ^{^{{^9}2}2}2 $$

I.e., BB(6) is at least 2 tetrated to the 2 tetrated to the 2 tetrated to the 9. So in particular, BB(6) is at least 2 pentated to the 5, where pentation is iterated tetration, i.e. the operation that is to tetration as tetration is to exponentiation, exponentiation is to multiplication, and multiplication is to addition.
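Continuing the toy code from above, pentation is just one more rung up the same ladder (again only runnable for the tiniest inputs):

    def tetrate(base, height):
        result = 1
        for _ in range(height):
            result = base ** result
        return result

    def pentate(base, height):
        """Iterated tetration: each step feeds the previous result back in as the tower height."""
        result = 1
        for _ in range(height):
            result = tetrate(base, result)
        return result

    print(pentate(2, 2))  # tetrate(2, 2) = 4
    print(pentate(2, 3))  # tetrate(2, 4) = 65,536
    # pentate(2, 4) = tetrate(2, 65536), a tower of 65,536 twos; the new
    # bound above says BB(6) exceeds pentate(2, 5).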

Last week, when we “merely” knew that BB(6) > ^{10,000,000}10, I talked to a journalist who asked me to give an intuitive sense of how big such a number is. So I said, imagine you had ^{10,000,000}10 grains of sand. Then you could … well, uh … you could fill about ^{10,000,000}10 copies of the observable universe with that sand. I hope that helps people visualize it!

The journalist also asked: have these new discoveries about BB(6) caused me to rethink any broader beliefs about the Busy Beaver function? And I mean, yes and no: it was always completely within the realm of possibility that BB(6) would already be, not some puny little thing like 10^{36,534}, but way out in iteration land. Now that we know for sure that it is, though, maybe I ought to conjecture that the value of BB(n) becomes independent of the ZFC axioms of set theory already when n is 7 or 8 or 9, rather than when it’s 20 or 30 or whatever. (Currently, we know that BB(n) becomes independent of ZFC by the time n reaches 643.)


Unrelated Update: I’m just now returning to the US from STOC’2025 in Prague, where I saw lots of old friends and learned many interesting new things, again helping to distract me from the state of the world! Maybe I’ll write about some of those things in a future post. For now, though, anyone who’s interested in my STOC plenary lecture, entitled “The Status of Quantum Speedups,” can check out the PowerPoint slides here.

Guess I’m A Rationalist Now

Monday, June 9th, 2025

A week ago I attended LessOnline, a rationalist blogging conference featuring many people I’ve known for years—Scott Alexander, Eliezer Yudkowsky, Zvi Mowshowitz, Sarah Constantin, Carl Feynman—as well as people I’ve known only online and was delighted to meet in person, like Joe Carlsmith and Jacob Falkovich and Daniel Reeves. The conference was at Lighthaven, a bewildering maze of passageways, meeting-rooms, sleeping quarters, gardens, and vines off Telegraph Avenue in Berkeley, which has recently emerged as the nerd Shangri-La, or Galt’s Gulch, or Shire, or whatever. I did two events at this year’s LessOnline: a conversation with Nate Soares about the Orthogonality Thesis, and an ask-me-anything session about quantum computing and theoretical computer science (no new ground there for regular consumers of my content).

What I’ll remember most from LessOnline is not the sessions, mine or others’, but the unending conversation among hundreds of people all over the grounds, which took place in parallel with the sessions and before and after them, from morning till night (and through the night, apparently, though I’ve gotten too old for that). It felt like a single conversational archipelago, the largest in which I’ve ever taken part, and the conference’s real point. (Attendees were exhorted, in the opening session, to skip as many sessions as possible in favor of intense small-group conversations—not only because it was better but also because the session rooms were too small.)

Within the conversational blob, just making my way from one building to another could take hours. My mean free path was approximately five feet, before someone would notice my nametag and stop me with a question. Here was my favorite opener:

“You’re Scott Aaronson?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?”

“Yes,” I replied, not bothering to correct the “physicist” part.

One night, I walked up to Scott Alexander, who, sitting on the ground with his large bald head and a blanket he was using as a robe, resembled a monk. “Are you enjoying yourself?” he asked.

I replied, “you know, after all these years of being coy about it, I think I’m finally ready to become a Rationalist. Is there, like, an initiation ritual or something?”

Scott said, “Oh, you were already initiated a decade ago; you just didn’t realize it at the time.” Then he corrected himself: “two decades ago.”

The first thing I did, after coming out as a Rationalist, was to get into a heated argument with Other Scott A., Joe Carlsmith, and other fellow-Rationalists about the ideas I set out twelve years ago in my Ghost in the Quantum Turing Machine essay. Briefly, my argument was that the irreversibility and ephemerality of biological life, which contrasts with the copyability, rewindability, etc. of programs running on digital computers, and which can ultimately be traced back to microscopic details of the universe’s initial state, subject to the No-Cloning Theorem of quantum mechanics, which then get chaotically amplified during brain activity … might be a clue to a deeper layer of the world, one that we understand about as well as the ancient Greeks understood Newtonian physics, but which is the layer where mysteries like free will and consciousness will ultimately need to be addressed.

I got into this argument partly because it came up, but partly also because this seemed like the biggest conflict between my beliefs and the consensus of my fellow Rationalists. Maybe part of me wanted to demonstrate that my intellectual independence remained intact—sort of like a newspaper that gets bought out by a tycoon, and then immediately runs an investigation into the tycoon’s corruption, as well as his diaper fetish, just to prove it can.

The funny thing, though, is that all my beliefs are the same as they were before. I’m still a computer scientist, an academic, a straight-ticket Democratic voter, a liberal Zionist, a Jew, etc. (all identities, incidentally, well-enough represented at LessOnline that I don’t even think I was the unique attendee in the intersection of them all).

Given how much I resonate with what the Rationalists are trying to do, why did it take me so long to identify as one?

Firstly, while 15 years ago I shared the Rationalists’ interests, sensibility, and outlook, and their stances on most issues, I also found them bizarrely, inexplicably obsessed with the question of whether AI would soon become superhumanly powerful and change the basic conditions of life on earth, and with how to make the AI transition go well. Why that, as opposed to all the other sci-fi scenarios one could worry about, not to mention all the nearer-term risks to humanity?

Suffice it to say that empirical developments have since caused me to withdraw my objection. Sometimes weird people are weird merely because they see the future sooner than others. Indeed, it seems to me that the biggest thing the Rationalists got wrong about AI was to underestimate how soon the revolution would happen, and to overestimate how many new ideas would be needed for it (mostly, as we now know, it just took lots more compute and training data). Now that I, too, spend some of my time working on AI alignment, I was able to use LessOnline in part for research meetings with colleagues.

A second reason I didn’t identify with the Rationalists was cultural: they were, and are, centrally a bunch of twentysomethings who “work” at an ever-changing list of Berkeley- and San-Francisco-based “orgs” of their own invention, and who live in group houses where they explore their exotic sexualities, gender identities, and fetishes, sometimes with the aid of psychedelics. I, by contrast, am a straight, monogamous, middle-aged tenured professor, married to another such professor and raising two kids who go to normal schools. Hanging out with the Rationalists always makes me feel older and younger at the same time.

So what changed? For one thing, with the march of time, a significant fraction of Rationalists now have marriages, children, or both—indeed, a highlight of LessOnline was the many adorable toddlers running around the Lighthaven campus. Rationalists are successfully reproducing! Some because of explicit pronatalist ideology, or because they were persuaded by Bryan Caplan’s arguments in Selfish Reasons to Have More Kids. But others simply because of the same impulses that led their ancestors to do the same for eons. And perhaps because, like the Mormons or Amish or Orthodox Jews, but unlike typical secular urbanites, the Rationalists believe in something. For all their fears around AI, they don’t act doomy, but buzz with ideas about how to build a better world for the next generation.

At a LessOnline parenting session, hosted by Julia Wise, I was surrounded by parents who worry about the same things I do: how do we raise our kids to be independent and agentic yet socialized and reasonably well-behaved, technologically savvy yet not droolingly addicted to iPad games? What schooling options will let them accelerate in math, save them from the crushing monotony that we experienced? How much of our own lives should we sacrifice on the altar of our kids’ “enrichment,” versus trusting Judith Rich Harris that such efforts quickly hit a point of diminishing returns?

A third reason I didn’t identify with the Rationalists was, frankly, that they gave off some (not all) of the vibes of a cult, with Eliezer as guru. Eliezer writes in parables and koans. He teaches that the fate of life on earth hangs in the balance, that the select few who understand the stakes have the terrible burden of steering the future. Taking what Rationalists call the “outside view,” how good is the track record for this sort of thing?

OK, but what did I actually see at Lighthaven? I saw something that seemed to resemble a cult only insofar as the Beatniks, the Bloomsbury Group, the early Royal Society, or any other community that believed in something did. When Eliezer himself—the bearded, cap-wearing Moses who led the nerds from bondage to their Promised Land in Berkeley—showed up, he was argued with like anyone else. Eliezer has in any case largely passed his staff to a new generation: Nate Soares and Zvi Mowshowitz have found new and, in various ways, better ways of talking about AI risk; Scott Alexander has for the last decade written the blog that’s the community’s intellectual center; figures from Kelsey Piper to Jacob Falkovich to Aella have taken Rationalism in new directions, from mainstream political engagement to the … err … statistical analysis of orgies.

I’ll say this, though, on the naysayers’ side: it’s really hard to make dancing to AI-generated pop songs about Bayes’ theorem and Tarski’s definition of truth not feel cringe, as I can now attest from experience.

The cult thing brings me to the deepest reason I hesitated for so long to identify as a Rationalist: namely, I was scared that if I did, people whose approval I craved (including my academic colleagues, but also just randos on the Internet) would sneer at me. For years, I searched for some way of explaining this community’s appeal so reasonable that it would silence the sneers.

It took years of psychological struggle, and (frankly) solidifying my own place in the world, to follow the true path, which of course is not to give a shit what some haters think of my life choices. Consider: five years ago, it felt obvious to me that the entire Rationalist community might be about to implode, under existential threat from Cade Metz’s New York Times article, as well as RationalWiki and SneerClub and all the others laughing at the Rationalists and accusing them of every evil. Yet last week at LessOnline, I saw a community that’s never been thriving more, with a beautiful real-world campus, excellent writers on every topic who felt like this was the place to be, and even a crop of kids. How many of the sneerers are living such fulfilled lives? To judge from their own angry, depressed self-disclosures, probably not many.

But are the sneerers right that, even if the Rationalists are enjoying their own lives, they’re making other people’s lives miserable? Are they closet far-right monarchists, like Curtis Yarvin? I liked how The New Yorker put it in its recent, long and (to my mind) devastating profile of Yarvin:

The most generous engagement with Yarvin’s ideas has come from bloggers associated with the rationalist movement, which prides itself on weighing evidence for even seemingly far-fetched claims. Their formidable patience, however, has also worn thin. “He never addressed me as an equal, only as a brainwashed person,” Scott Aaronson, an eminent computer scientist, said of their conversations. “He seemed to think that if he just gave me one more reading assignment about happy slaves singing or one more monologue about F.D.R., I’d finally see the light.”

The closest to right-wing politics that I witnessed at LessOnline was a session, with Kelsey Piper and current and former congressional staffers, about the prospects for moderate Democrats to articulate a pro-abundance agenda that would resonate with the public and finally defeat MAGA.

But surely the Rationalists are incels, bitter that they can’t get laid? Again, the closest I saw was a session where Jacob Falkovich helped a standing-room-only crowd of mostly male nerds confront their fears around dating and understand women better, with Rationalist women eagerly volunteering to answer questions about their perspective. Gross, right? (Also, for those already in relationships, Eliezer’s primary consort and former couples therapist Gretta Duleba did a session on relationship conflict.)

So, yes, when it comes to the Rationalists, I’m going to believe my own lying eyes over the charges of the sneerers. The sneerers can even say about me, in their favorite formulation, that I’ve “gone mask off,” confirmed the horrible things they’ve always suspected. Yes, the mask is off—and beneath the mask is the same person I always was, who has an inordinate fondness for the Busy Beaver function and the complexity class BQP/qpoly, and who uses too many filler words and moves his hands too much, and who strongly supports the Enlightenment, and who once feared that his best shot at happiness in life would be to earn women’s pity rather than their contempt. Incorrectly, as I’m glad to report. From my nebbishy nadir to the present, a central thing that’s changed is that, from my family to my academic colleagues to the Rationalist community to my blog readers, I finally found some people who want what I have to sell.


Unrelated Announcements:

My replies to comments on this post might be light, as I’ll be accompanying my daughter on a school trip to the Galapagos Islands!

A few weeks ago, I was “ambushed” into leading a session on philosophy and theoretical computer science at UT Austin. (I.e., asked to show up for the session, but thought I’d just be a participant rather than the main event.) The session was then recorded and placed on YouTube—and surprisingly, given the circumstances, some people seemed to like it!

Friend-of-the-blog Alon Rosen has asked me to announce a call for nominations for a new theoretical computer science prize, in memory of my former professor (and fellow TCS blogger) Luca Trevisan, who was lost to the world too soon.

And one more: Mahdi Cheraghchi has asked me to announce the STOC’2025 online poster session, registration deadline June 12; see here for more. Incidentally, I’ll be at STOC in Prague to give a plenary on quantum algorithms; I look forward to meeting any readers who are there!

Toward a non-constant cancellation function

Tuesday, February 11th, 2025

It now seems the switch of Cancel Culture has only two settings:

  1. everything is cancellable—including giving intellectual arguments against specific DEI policies, or teaching students about a Chinese filler word (“ne-ge”) that sounds a little like the N-word, or else
  2. nothing is cancellable—not even tweeting “normalize Indian hate” and “I was racist before it was cool,” shortly before getting empowered to remake the US federal government.

How could we possibly draw any line between these two extremes? Wouldn’t that require … judgment? Common sense? Consideration of the facts of individual cases?

I, of course, survived attempted cancellation by a large online mob a decade ago, led by well-known figures such as Amanda Marcotte and Arthur Chu. Though it was terrifying at the time—it felt like my career and even my life were over—I daresay that, here in 2025, not many people would still condemn me for trying to have the heartfelt conversation I did about nerds, feminism, and dating, deep in the comments section of this blog. My side has now conclusively “won” that battle. The once-terrifying commissars of the People’s Republic of Woke, who delighted in trying to ruin me, are now bound and chained, as whooping soldiers of the MAGA Empire drag them by their hair to the torture dungeons.

And this is … not at all the outcome I wanted? It’s a possible outcome that I foresaw in 2014, and was desperately trying to help prevent, through fostering open dialogue between shy male nerds and feminists? I’m now, if anything, more terrified for my little tribe of pro-Enlightenment, science-loving nerds than I was under the woke regime? Speaking of switches with only two settings.

Anyway, with whatever moral authority this experience vests in me, I’d like to suggest that, in future cancellation controversies, the central questions ought to include the following:

  1. What did the accused person actually say or do? Disregarding all confident online discourse about what that “type” of person normally does, or wants to do.
  2. Is there a wider context that often gets cut from social media posts, but that, as soon as you know it, makes the incident seem either better or worse?
  3. How long ago was the offense: more like thirty years or like last week?
  4. Was the person in a radically different condition than they are now—e.g., were they very young, or undergoing a mental health episode, or reacting to a fresh traumatic incident, or drunk or high?
  5. Were the relevant cultural norms different when the offense happened? Did countless others say or do the same thing, and if so, are they also at risk of cancellation?
  6. What’s reasonable to infer about what the person actually believes? What do they want to have happen to whichever group they offended? What would they do to the group given unlimited power? Have they explicitly stated answers to these questions, either before or after the incident? Have they taken real-world actions by which we could judge their answers as either sincere or insincere?
  7. If we don’t cancel this person, what are we being asked to tolerate? Just that they get to keep teaching and publishing views that many people find objectionable? Or that they get to impose their objectionable views on an entire academic department, university, company, organization, or government?
  8. If we agree that the person said something genuinely bad, did they apologize or express regret? Or, if what they said got confused with something bad, did they rush to clarify and disclaim the bad interpretation?
  9. Did they not only refuse to clarify or apologize, but do the opposite? That is, did they express glee about what they were able to get away with, or make light of the suffering or “tears” of their target group?

People can debate how to weigh these considerations, though I personally put enormous weight on 8 and 9, what you could call the “clarification vs. glee axis.” I have nearly unlimited charity for people willing to have a good-faith moral conversation with the world, and nearly unlimited contempt for people who mock the request for such a conversation.

The sad part is that, in practice, the criteria for cancellation have tended instead to be things like:

  • Is the target giving off signals of shame, distress, and embarrassment—thereby putting blood in the water and encouraging us to take bigger bites?
  • Do we, the mob, have the power to cancel this person? Does the person’s reputation and livelihood depend on organizations that care what we think, that would respond to pressure from us?

The trouble with these questions is that, not only are their answers not positively correlated with which people deserve to be cancelled, they’re negatively correlated. This is precisely how you get the phenomenon of the left-wing circular firing squad, which destroys the poor schmucks capable of shame even while the shameless, the proud racists and pussy-grabbers, go completely unpunished. Surely we can do better than that.

Luca Trevisan (1971-2024)

Wednesday, June 19th, 2024

(See here for Boaz Barak’s obituary, and here for Lance Fortnow’s—they cover different aspects of Luca’s legacy from each other and from this post. Also, click here to register for a free online TCS4All talk that Luca was scheduled to give, and that will now be given in his memory, this Monday at 3:30pm Eastern time.)


Luca Trevisan, one of the world’s leading theoretical computer scientists, has succumbed to cancer in Italy, at only 52 years old. I was privileged to know Luca for a quarter-century, first as my complexity theory and cryptography professor at UC Berkeley and as a member of my dissertation committee, and then as a friend and colleague and fellow CS theory blogger.

I regret that I learned of the seriousness of Luca’s condition only a few days ago. So yesterday morning I wrote him a farewell email, under the impression that, while he was now in hospice care, he had at least a few more weeks. Alas, he probably never saw it. So I’m hereby making the email into a memorial post, with small changes mostly to protect people’s privacy.


Dear Luca,

Dana, the kids, and I were traveling in Israel for the past two weeks, when I received the shocking and sad news that this might be my last chance to write to you.

At risk of stating the obvious — you had a very large and positive effect on my life and career.  Starting with the complexity theory summer school at the Institute for Advanced Study in 2000, which was the first time we met and also the first time I really experienced the glories of complexity at full blast.  And then continuing at Berkeley, TA’ing your algorithms class, which you had to cancel on 9/11 (although students still somehow showed up for office hours lugging their CLRS books…), and dealing with that student who obviously cheated on the midterm although I had stupidly given back to her the evidence that would prove it.

And then your graduate complexity course, where I was very proud to get 100% on your exam, having handwritten it on a train while everyone else used LaTeX (which, embarrassingly, I was still learning).  I was a bit less proud to present the Razborov-Rudich paper to the class, and to get questions from you that proved that I understood it less thoroughly than I thought.  I emerged from your course far better prepared to do complexity theory than when I entered it.

Later I took your cryptography course, where I came to you afterwards one day to point out that, with a quantum computer, you could pull out big Fourier coefficients without all the bother of the Goldreich-Levin theorem.  And you said sure, but then you would need a quantum computer.  Over 20 years later, Goldreich and Levin (and you?) can say with satisfaction that we still don’t have that scalable quantum computer … but we’re much, much closer, I swear!

I still feel bad about the theory lunch talk I gave in 2003, on my complexity-theoretic version of Aumann’s agreement theorem, where I used you and Umesh as characters instead of Alice and Bob, and which then led to unintended references to “Luca’s posterior” (probability distribution, I meant).

I also feel bad about delaying so long the completion of my PhD thesis, until well after I’d started my postdoc in Princeton, so that my former officemate needed to meet you on a street corner in San Francisco to sign the signature page the night before the deadline.

But then a few years later, when Avi and I did the algebrization paper, the fact that you seemed to like it mattered more to me than just about anything else.

Thank you for the excellent dinner when I met you some years ago in Rome.  Thank you for the Trevisan-Tulsiani-Vadhan paper, which answered a question we had about BosonSampling (and you probably didn’t even know you were doing quantum computing when you wrote that paper!).  Thank you for your blog.  Thank you for everything you did for me.

I always enjoyed your dry humor, much of which might sadly be lost to time, unless others wrote it down or it’s on YouTube or something. Two examples spring to my mind across the decades:

  • “From my previous lecture, you may have gotten the impression that everything in derandomization is due to Nisan and Wigderson, but this is not the case: Avi has been working with other people as well.”
  • After I’d explained that I’d be spending a semester in Jerusalem to work with Avi, despite (at that time) knowing only the most rudimentary Hebrew, such as how to say “please” and “excuse me”: “you mean there are words in Hebrew for ‘please’ and ‘excuse me’?”

Speaking of which, my current trip to Israel has given me many opportunities to reflect on mortality — for all the obvious war-related reasons of course, but also because while we were here, we unexpectedly had to attend two shivas of people in our social circle who died during our trip, one of them from cancer.  And we learned about a close friend whose stepson has a brain tumor and might or might not make it.  Cancer is a bitch.

Anyway, there’s much more I could write, but I imagine you’re getting flooded with emails right now from all the people whose lives you’ve touched, so I won’t take up more of your time.  You’ve made a real difference to the world, to theoretical computer science, and to your friends and colleagues, one that many people would envy.

Best,
Scott

Never go to “Planet Word” in Washington DC

Friday, March 15th, 2024

In fact, don’t try to take kids to Washington DC if you can possibly avoid it.

This is my public service announcement. This is the value I feel I can add to the world today.

Dana and I decided to take the kids to DC for spring break. The trip, alas, has been hell—a constant struggle against logistical failures. The first days were mostly spent sitting in traffic or searching for phantom parking spaces. (So then we switched to the Metro, and promptly got lost, and had our Metro cards rejected by the machines.) Or, at crowded cafes, I spent the time searching for a table so my starving kids could eat—and then, when I finally found one, a smug and self-assured woman evicted us because she was “going to” sit there, and my kids had to see that their dad could not provide for their basic needs, and that woman will never face any consequence for what she did.

Anyway, this afternoon, utterly frazzled and stressed and defeated, we entered “Planet Word,” a museum about language. Sounds pretty good, right? Except my soon-to-be 7-year-old son got bored by numerous exhibits that weren’t for him. So I told him he could lead the way and find any exhibit he liked.

Finally my son found an exhibit that fascinated him, one where he could weigh plastic fruits on a balancing scale. He was engrossed by it, he was learning, he was asking questions, I reflected that maybe the trip wasn’t a total loss … and that’s when a museum employee pointed at us, and screamed at us to leave the room, because “this exhibit was sold out.”

The room was actually almost empty (!). No one had stopped us from entering the room. No one else was waiting to use the balancing scale. There was no sign to warn us we were doing anything wrong. I would’ve paid them hundreds of dollars in that moment if only we could stay. My son didn’t understand why he was suddenly treated as a delinquent. He then wanted to leave the whole museum, and so did I. The day was ruined for us.

Mustering my courage to do something uncharacteristic for me, I complained at the front desk. They sneered and snickered at me, basically told me to go to hell. Looking deeply into their dumb, blank expressions, I realized that I had as much chance of any comprehension or sympathy as I’d have from a warthog. It’s true that, on the scale of all the injustices in the history of the world, this one surely didn’t crack the top quadrillion. But for me, in that moment, it came to stand for all the others. Which has always been my main weakness as a person, that injustice affects me in that way.

Speaking of which, there was one part of the DC trip that went exactly as it was supposed to. That was our visit to the United States Holocaust Memorial Museum. Why? Because I feel like that museum, unlike all the rest, tells me the truth about the nature of the world that I was born into—and seeing the truth is perversely comforting. I was born into a world that right now, every day, is filled with protesters screaming for my death, for my family’s death—and this is accepted as normal, and those protesters sleep soundly at night, congratulating themselves for their progressivism and enlightenment. And thinking about those protesters, and their predecessors 80 years ago who perpetrated the Holocaust or who stood by and let it happen, is the only thing that really puts blankfaced museum employees into perspective for me. Like, of course a world with the former is also going to have the latter—and I should count myself immeasurably lucky if the latter is all I have to deal with, if the empty-skulled and the soul-dead can only ruin my vacation and lack the power to murder my family.

And to anyone who reached the end of this post and who feels like it was an unwelcome imposition on their time: I’m sorry. But the truth is, posts like this are why I started this blog and why I continue it. If I’ve ever imparted any interesting information or ideas, that’s a byproduct that I’m thrilled about. But I’m cursed to be someone who wakes up every morning, walks around every day, and goes to sleep every night crushed by the weight of the world’s injustice, and outside of technical subjects, the only thing that’s ever motivated me to write is that words are the only justice available to me.