Remarks at UT on the Pentagon/Anthropic situation

Last Thursday, my friend and colleague Sam Baker, in UT Austin’s English department, convened an “emergency panel” here about the developing Pentagon/Anthropic situation, and asked me to speak at it. Even though the situation has continued to develop since then, I thought my prepared remarks for the panel might be of interest. At the bottom, I include a few additional thoughts.


Hi! I’m Scott Aaronson! I teach CS here at UT. While my background is in quantum computing, I’ve spent the past four years dabbling in AI alignment. I did a two-year leave at OpenAI, in their now-defunct Superalignment team. I joined back when OpenAI’s line was “we’re a little nonprofit, doing all this in the greater interest of humanity, and we’d dissolve ourselves before we raced to build an AI that we thought would be dangerous.” I know Sam Altman, and many other current and former OpenAI people. I also know Dario Amodei—in fact, I knew Dario well before Anthropic existed. Despite that, I don’t actually feel like I have deep insight into the current situation with Anthropic and the Pentagon that you wouldn’t get by reading the news, or (especially) reading commentators like Zvi Mowshowitz, Kelsey Piper, Scott Alexander, and Dean Ball. But since I was asked to comment, I’ll try.

The first point I’ll make: the administration’s line, to the extent they’ve had a consistent line, is basically that they needed to cut off Anthropic because Anthropic is a bunch of woke, America-hating, leftist radicals. I think that, if you actually know the Anthropic people, that characterization is pretty laughable. Unless by “woke,” what the administration meant was “having any principles at all, beyond blind deference to authority, and sticking to them.”

I mean, Anthropic only got into this situation in the first place because it was more eager than the other AI companies to support US national security, by providing a version of Claude that could be used on classified networks. So they signed a contract with the Pentagon, and that contract had certain restrictions in it, which the Pentagon read and agreed to … until they decided that they no longer agreed.

That brings me to my second point. The Pentagon regularly signs contracts with private firms that limit what the Pentagon can do in various ways. That’s why they’re called military contract-ors. So anyone who claims it’s totally unprecedented for Anthropic to try to restrict what the government can do with Anthropic’s private property—I think that person is either misinformed or else trying to misinform.

The third point. If the Pentagon felt that it couldn’t abide a private company telling it what is or isn’t an appropriate military use of current AI, then the Pentagon was totally within its rights to cancel its contract with Anthropic, and find a different contractor (like OpenAI…) that would play ball. So it’s crucial for everyone here to understand that that’s not all that the Pentagon did. Instead they said: because Anthropic dared to stand up to us, we’re going to designate them a Supply Chain Risk—a designation that was previously reserved for foreign nation-state adversaries, and that, incredibly, hasn’t been applied to DeepSeek or other Chinese AI companies that arguably do present such risks. So basically, they threatened to destroy Anthropic, by making it horrendously complicated for any companies that do business with the government—i.e., just about all companies—also to do business with Anthropic.

Either that, the Pentagon threatened, or we’ll invoke the Defense Production Act to effectively nationalize Anthropic—i.e., we’ll just commandeer their intellectual property and use it for whatever we want, despite Anthropic’s refusal. You get that? Claude is both a supply chain risk that’s too dangerous for the military to use, and somehow also so crucial to the supply chain that we, the military, need to commandeer it.

To me, this is the authoritarian part of what the Pentagon is doing (with the inconsistency being part of the authoritarianism; who but a dictator gets to impose his will on two directly contradictory grounds?). It’s the part that goes against the free-market principles that our whole economy is built on, and the freedom of speech and conscience that our whole civilization is built on. And I think this will ultimately damage US national security, by preventing other American AI companies from wanting to work on defense going forward.

That brings me to the fourth point, about OpenAI. While this was going down, Sam Altman posted online that he agreed with Anthropic’s red lines: LLMs should not be used for killing people with no human in the kill chain, and they also shouldn’t be used for mass surveillance of US citizens. I thought, that’s great! The frontier AI labs are sticking together when the chips are down, rather than infighting.

But then, just a few hours after the Pentagon designated Anthropic a supply chain risk, OpenAI announced that it had reached a deal with the Pentagon. Huh?!? If they have the same red lines, then why can one of them reach a deal while the other can’t?

The experts’ best guess seems to be this: Anthropic said, yes, using AI to kill people autonomously or to surveil US citizens should already be illegal, but we insist on putting those things in the contract to be extra-double-sure. Whereas OpenAI said, the Pentagon can use our models for “all lawful purposes”—this was the language that the Pentagon had insisted on. And, continued OpenAI, we interpret “all lawful purposes” to mean that they can’t cross these red lines. But if it turns out we’re wrong about that … well, that’s not our problem! That’s between the Pentagon and the courts, or whatever.

Again, we don’t fully know, because most of the relevant contracts haven’t been made public, but that’s an inference from reading between the lines of what has been made public.

Back in 2023–2024, when there was the Battle of the Board, then the battle over changing OpenAI’s governance structure, etc., some people formed a certain view of Sam: that he would say all the good and prosocial and responsible things even while he did whichever thing maximized revenue. I’ll leave it to you whether last week’s events are consistent with that view.

OK, fifth and final point. I remember, 15–20 years ago, talking to Eliezer Yudkowsky and others who were terrified about AI. They said, this is the biggest issue facing the world. It’s not safe for anyone to build, because it could turn against us—or even before that, the military could commandeer it or whatever. And I and others were like, dude, you guys obviously read too much science fiction!

And now here we are. Not only are we living in a science-fiction story, I’d say we’re living in a particularly hackneyed one. I mean, the military brass marching into a top AI lab and telling the nerds, “tough luck, we own your AI now”? Couldn’t reality have been a little more creative than that?

The point is, given the developments of the past couple weeks, I think we now need to retire forever the argument against future AI scenarios that goes, “sorry, that sounds too much like a science-fiction plot.” As has been said, you’d best get used to science fiction because you’re living in one!


Updates and Further Thoughts: Of course I’ve seen that Anthropic has now filed a lawsuit to block the Pentagon from designating it a supply chain risk, arguing that both its free speech and due process rights were violated. I hope their lawsuit succeeds; it’s hard for me to imagine how it wouldn’t.

The fact that I’m, obviously, on Anthropic’s side of this particular dispute doesn’t mean that I’ll always be on Anthropic’s side. Here as elsewhere, it’s crucial not to outsource your conscience to anyone.

Zvi makes an extremely pertinent comparison:

[In shutting down Starlink over Ukraine,] Elon Musk actively did the exact thing [the Pentagon is] accusing Anthropic of maybe doing. He made a strategic decision of national security at the highest level as a private citizen, in the middle of an active military operation in an existential defensive shooting war, based on his own read of the situation. Like, seriously, what the actual fuck.

Eventually we bought those services in a contract. We didn’t seize them. We didn’t arrest Musk. Because a contract is a contract is a contract, and your private property is your private property, until Musk decides yours don’t count.

Another key quote in Zvi’s piece, from Gregory Allen:

And here’s the thing. I spent so much of my life in the Department of Defense trying to convince Silicon Valley companies, “Hey, come on in, the water is fine, the defense contracting market, you know, you can have a good life here, just dip your toe in the water”.

And what the Department of Defense has just said is, “Any company that dips their toe in the water, we reserve the right to grab their ankle, pull them all the way in at any time”. And that is such a disincentive to even getting started in working with the DoD.

Lastly, I’d like to address the most common counterargument against Anthropic’s position—as expressed for example by Noah Smith, or in the comments of my previous post on this. The argument goes roughly like so:

You, nerds, are the ones who’ve been screaming for years about AI being potentially existentially dangerous! So then, did you seriously expect to stay in control of the technology? If it’s really as dangerous and important as you say, then of course the military was going to step in at some point and commandeer your new toy, just like it would if you were building a nuclear weapon.

Two immediate responses:

  1. Even in WWII, in one of the most desperate circumstances in human history, the US government didn’t force a single scientist at gunpoint to build nuclear weapons for them. The scientists did so voluntarily, based on their own considered moral judgment at the time (even if some later came to regret their involvement).
  2. Even if I considered it “inevitable” that relatively thoughtful and principled people, like Dario Amodei, would lose control over the future to gleeful barbarians like Pete Hegseth, it still wouldn’t mean I couldn’t complain when it happened. This is still a free country, isn’t it?
