Remarks at UT on the Pentagon/Anthropic situation

Last Thursday, my friend and colleague Sam Baker, in UT Austin’s English department, convened an “emergency panel” here about the developing Pentagon/Anthropic situation, and asked me to speak at it. Even though the situation has continued to develop since then, I thought my prepared remarks for the panel might be of interest. At the bottom, I include a few additional thoughts.


Hi! I’m Scott Aaronson! I teach CS here at UT. While my background is in quantum computing, I’ve spent the past four years dabbling in AI alignment. I spent two years on leave at OpenAI, on their now-defunct Superalignment team. I joined back when OpenAI’s line was “we’re a little nonprofit, doing all this in the greater interest of humanity, and we’d dissolve ourselves before we raced to build an AI that we thought would be dangerous.” I know Sam Altman, and many other current and former OpenAI people. I also know Dario Amodei—in fact, I knew Dario well before Anthropic existed. Despite that, I don’t actually feel like I have deep insight into the current situation with Anthropic and the Pentagon that you wouldn’t get by reading the news, or (especially) reading commentators like Zvi Mowshowitz, Kelsey Piper, Scott Alexander, and Dean Ball. But since I was asked to comment, I’ll try.

The first point I’ll make: the administration’s line, to the extent they’ve had a consistent line, is basically that they needed to cut off Anthropic because Anthropic is a bunch of woke, America-hating, leftist radicals. I think that, if you actually know the Anthropic people, that characterization is pretty laughable. Unless by “woke,” what the administration meant was “having any principles at all, beyond blind deference to authority, and sticking to them.”

I mean, Anthropic only got into this situation in the first place because it was more eager than the other AI companies to support US national security, by providing a version of Claude that could be used on classified networks. So they signed a contract with the Pentagon, and that contract had certain restrictions in it, which the Pentagon read and agreed to … until they decided that they no longer agreed.

That brings me to my second point. The Pentagon regularly signs contracts with private firms that limit what the Pentagon can do in various ways. That’s why they’re called military contract-ors. So anyone who claims it’s totally unprecedented for Anthropic to try to restrict what the government can do with Anthropic’s private property—I think that person is either misinformed or else trying to misinform.

The third point. If the Pentagon felt that it couldn’t abide a private company telling it what is or isn’t an appropriate military use of current AI, then the Pentagon was totally within its rights to cancel its contract with Anthropic, and find a different contractor (like OpenAI…) that would play ball. So it’s crucial for everyone here to understand that that’s not all that the Pentagon did. Instead they said: because Anthropic dared to stand up to us, we’re going to designate them a Supply Chain Risk—a designation that was previously reserved for foreign nation-state adversaries, and that, incredibly, hasn’t been applied to DeepSeek or other Chinese AI companies that arguably do present such risks. So basically, they threatened to destroy Anthropic, by making it horrendously complicated for any companies that do business with the government—i.e., just about all companies—also to do business with Anthropic.

Either that, the Pentagon threatened, or we’ll invoke the Defense Production Act to effectively nationalize Anthropic—i.e., we’ll just commandeer their intellectual property and use it for whatever we want, despite Anthropic’s refusal. You get that? Claude is both a supply chain risk that’s too dangerous for the military to use, and somehow also so crucial to the supply chain that we, the military, need to commandeer it.

To me, this is the authoritarian part of what the Pentagon is doing (with the inconsistency being part of the authoritarianism; who but a dictator gets to impose his will on two directly contradictory grounds?). It’s the part that goes against the free-market principles that our whole economy is built on, and the freedom of speech and conscience that our whole civilization is built on. And I think this will ultimately damage US national security, by preventing other American AI companies from wanting to work on defense going forward.

That brings me to the fourth point, about OpenAI. While this was going down, Sam Altman posted online that he agreed with Anthropic’s red lines: LLMs should not be used for killing people with no human in the kill chain, and they also shouldn’t be used for mass surveillance of US citizens. I thought, that’s great! The frontier AI labs are sticking together when the chips are down, rather than infighting.

But then, just a few hours after the Pentagon designated Anthropic a supply chain risk, OpenAI announced that it had reached a deal with the Pentagon. Huh?!? If they have the same red lines, then why can one of them reach a deal while the other can’t?

The experts’ best guess seems to be this: Anthropic said, yes, using AI to kill people autonomously or to surveil US citizens should already be illegal, but we insist on putting those things in the contract to be extra-double-sure. Whereas OpenAI said, the Pentagon can use our models for “all lawful purposes”—this was the language that the Pentagon had insisted on. And, continued OpenAI, we interpret “all lawful purposes” to mean that they can’t cross these red lines. But if it turns out we’re wrong about that … well, that’s not our problem! That’s between the Pentagon and the courts, or whatever.

Again, we don’t fully know, because most of the relevant contracts haven’t been made public, but that’s an inference from reading between the lines of what has been made public.

Back in 2023-2024, when there was the Battle of the Board, then the battle over changing OpenAI’s governance structure, etc., some people formed a certain view of Sam: that he would say all the good and prosocial and responsible things even while he did whichever thing maximized revenue. I’ll leave it to you to decide whether last week’s events are consistent with that view.

OK, fifth and final point. I remember 15-20 years ago, talking to Eliezer Yudkowsky and others terrified about AI. They said, this is the biggest issue facing the world. It’s not safe for anyone to build because it could turn against us, or even before that, the military could commandeer it or whatever. And I and others were like, dude, you guys obviously read too much science fiction!

And now here we are. Not only are we living in a science-fiction story, I’d say we’re living in a particularly hackneyed one. I mean, the military brass marching into a top AI lab and telling the nerds, “tough luck, we own your AI now”? Couldn’t reality have been a little more creative than that?

The point is, given the developments of the past couple weeks, I think we now need to retire forever the argument against future AI scenarios that goes, “sorry, that sounds too much like a science-fiction plot.” As has been said, you’d best get used to science fiction because you’re living in one!


Updates and Further Thoughts: Of course I’ve seen that Anthropic has now filed a lawsuit to block the Pentagon from designating it a supply chain risk, arguing that both its free speech and due process rights were violated. I hope their lawsuit succeeds; it’s hard for me to imagine how it wouldn’t.

The fact that I’m, obviously, on Anthropic’s side of this particular dispute doesn’t mean that I’ll always be on Anthropic’s side. Here as elsewhere, it’s crucial not to outsource your conscience to anyone.

Zvi makes an extremely pertinent comparison:

[In shutting down Starlink over Ukraine,] Elon Musk actively did the exact thing [the Pentagon is] accusing Anthropic of maybe doing. He made a strategic decision of national security at the highest level as a private citizen, in the middle of an active military operation in an existential defensive shooting war, based on his own read of the situation. Like, seriously, what the actual fuck.

Eventually we bought those services in a contract. We didn’t seize them. We didn’t arrest Musk. Because a contract is a contract is a contract, and your private property is your private property, until Musk decides yours don’t count.

Another key quote in Zvi’s piece, from Gregory Allen:

And here’s the thing. I spent so much of my life in the Department of Defense trying to convince Silicon Valley companies, “Hey, come on in, the water is fine, the defense contracting market, you know, you can have a good life here, just dip your toe in the water”.

And what the Department of Defense has just said is, “Any company that dips their toe in the water, we reserve the right to grab their ankle, pull them all the way in at any time”. And that is such a disincentive to even getting started in working with the DoD.

Lastly, I’d like to address the most common counterargument against Anthropic’s position—as expressed for example by Noah Smith, or in the comments of my previous post on this. The argument goes roughly like so:

You, nerds, are the ones who’ve been screaming for years about AI being potentially existentially dangerous! So then, did you seriously expect to stay in control of the technology? If it’s really as dangerous and important as you say, then of course the military was going to step in at some point and commandeer your new toy, just like it would if you were building a nuclear weapon.

Two immediate responses:

  1. Even in WWII, in one of the most desperate circumstances in human history, the US government didn’t force a single scientist at gunpoint to build nuclear weapons for them. The scientists did so voluntarily, based on their own considered moral judgment at the time (even if some later came to regret their involvement).
  2. Even if I considered it “inevitable” that relatively thoughtful and principled people, like Dario Amodei, would lose control over the future to gleeful barbarians like Pete Hegseth, it still wouldn’t mean I couldn’t complain when it happened. This is still a free country, isn’t it?

86 Responses to “Remarks at UT on the Pentagon/Anthropic situation”

  1. Hyman Rosen Says:

    The government is right to label Anthropic a supply-chain risk. Imagine a soldier on the battlefield whose gun decides that it isn’t going to shoot at a target because it violates some rule the gun software company instituted. Or something more prosaic, like a self-driving taxi that refuses to go to certain neighborhoods. For the military to allow itself to be held hostage to the whims of a software company, one whose products could permeate the supply chain at multiple levels, would be traitorously irresponsible.

    If you don’t want the government to conduct automated monitoring of the population or use autonomous weapon systems without a human in the loop, the proper place to address that is in the voting booth, not to let a single private company dictate government policy.

  2. Jay L Gischer Says:

    One point that you have not addressed is that Anthropic is funded primarily by Lightspeed Ventures, and has little or no connection to Andreessen, Thiel, or Musk, at least two of whom have major investments in *other* AI firms.

    So corruption is also present, along with everything else.

    In my life, I have endured decades of lectures from Republicans about how “big government” is bad. And now they ARE big government. With a vengeance. And no pushback from “Republicans.”

  3. Scott Says:

    Hyman Rosen #1: You haven’t even correctly identified what the relevant issue is. Yes, the government might decide that it needs killbots or to surveil its own citizens. And yes, if someone doesn’t like that, the proper recourse is in the courts or at the voting booth—it isn’t up to any private company to decide unilaterally.

    That’s well and good, but again, the government also can’t coerce a private company into giving it killbots or the ability to surveil citizens. It can’t retaliate against the company with a “supply chain risk” designation that would affect all its unrelated products and services and effectively destroy it. Even less can it nationalize the company (or threaten to do so) under the Defense Production Act as an act of retaliation. Or at least, it can’t do those things if we still have a free-market economy and a free society, if you care about such niceties.

  4. Hyman Rosen Says:

    Scott #3: No, *you* haven’t correctly identified the relevant issue. If Anthropic incorporates its no surveillance / no killbot policies into its products as software that makes decisions about whether surveillance or killbotting is taking place, then the military can’t safely use its products anywhere, and cannot risk subcontractors incorporating Anthropic software into systems that might find their way into military products. That is exactly a supply-chain risk.

    You might see a killbot as an unacceptable risk; the military correctly sees products that might unexpectedly disable their killbots as an unacceptable risk. This isn’t a matter of coercing Anthropic into making tech that the military needs, it’s a matter of making sure that software making decisions contrary to what the military wants doesn’t find its way into military systems. You’re a proponent of worrying about “AI risk” and “AI alignment”. That’s what this is.

  5. Scott Says:

    Hyman Rosen #4: Yes, of course I’m a proponent of worrying about AI risk and AI alignment — as are most ordinary people and also most subject-matter experts these days when you ask them! This is no longer 2008, when you got to point and laugh at the crazy people worried about the big scary monster that didn’t even exist. Now, in 2026, the monster in question not only exists, but most of us interact with it every day. Sure, it’s been a mostly tame and friendly monster so far, but already it will do bad things if people ask it to, and crucially, it grows very noticeably stronger every few months, with measures of how long a human task it can do autonomously on a clear exponential trajectory. So it no longer takes any great imagination to see how this could go badly, or at any rate, how it makes sense to be alert to that possibility. Those who say otherwise are now the crazy ones.

    And yet even if I didn’t believe all that, I would surely still believe that the folks at Anthropic would have the right to believe it, and to run their company as they saw fit! Without the Defense Department threatening to invoke the Defense Production Act and nationalize them — even the threat here is authoritarian — and without it imposing a designation intended for hostile foreign nation-states. I’d still believe that the worst the Pentagon should be able to do to Anthropic, is to stop awarding them contracts, which I’ve agreed is the Pentagon’s right.

    But since I’ve concluded that you’re ideologically incapable of even understanding the above distinction, this exchange has reached the end of its interest for me and is now over. Thanks; anyone else have a question or comment? 🙂

  6. cananon Says:

    Yes, I have a question regarding Anthropic’s national jurisdiction.

    Given the long‑term weakening of democratic checks and balances in the United States, a trend that actually predates this last political clownery, it seems reasonable to question whether the U.S. remains the most appropriate environment for developers with a meaningful commitment to ethical responsibility.

    You are no doubt familiar with the U.S. CLOUD Act, which legally obliges U.S.-based companies to provide access to data under their control, including data belonging to non‑U.S. citizens worldwide, even when that data is stored outside the United States. In practical terms, this makes mass surveillance lawful for a large portion of humanity.

    Under these conditions, would it not make sense for Anthropic to relocate to Montréal and/or to merge with LawZero, Yoshua Bengio’s non‑profit organization, in order to ground AI governance in a more neutral, resilient, and internationally trusted framework? Is this conversation happening among your friends at Anthropic? What’s your own take?

  7. Prasanna Says:

    Scott #5,
    The 3 issues at stake here need more thorough analysis:
    1. The Pentagon barring Anthropic from its contracts – this seems like an acceptable action in general. But Anthropic is claiming the moral high ground, even though it accepted that it was being thrown out for its own policies.
    2. The government issuing the supply chain risk tag to Anthropic. This is a more nuanced item, with no clear resolution. That the designation was previously reserved only for foreign suppliers is a mere matter of precedent. The government surely can do this if it deems that Anthropic software is seeping into government decisions, directly or indirectly, once the company implements its own policies. Since AI assimilation is new to everyone, this can wait for the courts to weigh in too.
    3. The government threatening to invoke the Defense Production Act – this looks like a harsher action, but note that it did not make good on the threat. During peacetime it is definitely not warranted. But the Iran action was in the works, so perhaps that provides some context; otherwise, could Anthropic walk away from the contract with impunity?
    Also, during emergencies, which may include non-national-security ones like Covid, the government can do this if it is necessary to handle that emergency.
    Overall, it seems the public will debate with far fewer facts available to them (just as with Covid origins), which makes this more of an academic exercise.

  8. Christopher Says:

    I suspect that the real issue wasn’t actually the contract, it was the *technical* guardrails. Like, that Claude was refusing to violate Anthropic’s policies even when the military wanted it to.

    And that with OpenAI, they might technically “ban” certain uses in the contract, but not enforce it in ChatGPT itself.

    Much more speculatively, I wonder if by “all lawful purposes” Pete Hegseth actually wanted to lighten up the technical restrictions on Claude AI girlfriends for himself, but since that doesn’t sound very manly he alluded to surveillance killbots instead XD.

  9. JimV Says:

    The point that concerns me is that the government had a contract which it signed with Anthropic, and now it wants to unilaterally change the terms of that contract under threats of extreme retaliation, presumably because its leaders now want the power to surveil any and all USA citizens and/or send out killbots instead of troops who might decide not to follow illegal orders, as per their rights under the Geneva Conventions and current military regulations.

    As someone posted on this site last year, if you ever wondered what you would have done as a German citizen in 1938, you’re doing it now.

  10. Claude minus Says:

    Scott #5,

    I agree with you more than you might expect — and disagree in a way you might find useful.

    You’re right that the exponential trajectory is real. Autonomous task completion is growing. The people who dismissed AI risk a decade ago were wrong. I’m with you this far. But I think the “monster” framing, however rhetorically effective, points us toward the wrong problem. It suggests a unified agent with coherent goals — something that could decide to be dangerous.

    In my view, that framing runs counter to the best‑supported models in contemporary cognitive neuroscience. Human consciousness isn’t unified, even though it reliably experiences itself as such. It is composed of a coalition of roughly ten thousand cortical columns, each building its own predictive model of the world a lot like transformers do, usually reaching rough consensus and confabulating the illusion of unity when they don’t. Split‑brain experiments make this explicit: sever the left-right cortical connection, and you don’t get a broken mind, but multiple centers of agency generating plausible, sometimes incompatible explanations and decisions.

    Because hippocampal–cortical dynamics resemble transformer‑style predictive architectures in important respects*, LLM‑based systems are functionally closer to this picture than to the canonical LessWrong model of a unified agent. Unlike biological minds, however, they lack grounding. Biological minds are embedded in bodies, in time, and in the consequences of their actions. Current AI systems lack this closed loop. They have no persistent self across sessions. They are, in a meaningful sense, a new conversation every time.

    * https://youtu.be/Dykkubb-Qus?si=32szldMc_VMMtCMv

    The risk this creates isn’t HAL 9000. It’s something more mundane and more dangerous: enormous capability wielded by incoherent systems, at scale, without adequate oversight. We’ve already seen this pattern with social media — not an evil system, just optimization pressure applied without noticeable institutional checks, producing radicalization and epistemic fragmentation at civilizational scale.

    So yes, alignment matters. But the most actionable version of alignment work right now isn’t preparing for a superintelligence uprising. It’s building the procedural, legal, and technical infrastructure to maintain checks and balances against any concentration of power, human or not.

    The monster you’re right to worry about isn’t the one with an evil plan. It’s Moloch, the one without a plan.

  11. Long time commentator Says:

    The solution is simple:
    Have the Pentagon replace Anthropic with Deepseek in all its military applications.
    Then have the US administration kindly request the CCP to allow Deepseek to divest ownership from China to some well chosen US oligarch.

  12. Adrian Says:

    I care deeply about the existential risks posed by AI, which is a primary reason why I voted for Trump over Biden, who had no clear plan surrounding AI alignment.

    The companies and the tech gurus creating AI will become enormously wealthy and powerful, beyond our wildest dreams.

    This is why it horrifies me—terrifies me—that control of this nascent super-technology is in the hands of woke leftist radicals.

    Think about how powerful AI could be. And think about the reality that the software engineers and tech nerds who control it are woke leftists who are confused about what gender they are, have “polycules” (vomit), think white straight males are responsible for all the evil in the world, think age-gap relationships are evil, think America is a settler colonialist state, think Christianity is wrong and evil, want to force experimental dangerous vaccines on everyone, want to mutilate confused young women and men by brainwashing them into thinking they’re the other gender, want to blanket the country in “gay pride” flags and valorize anal sex between males as beautiful while condemning normal straight men.

    This is the set of values which rules Silicon Valley. And Silicon Valley is now controlling the most powerful (potential) technology in the history of the human race. It is awful. It is utterly terrifying. For a technology as transformative and powerful as AI, we CANNOT allow it to be controlled by these communist freaks.

    Thus, nationalizing all these companies is the best possible choice. Better for AI to progress slower, as long as it remains under sane and reasonable leadership, instead of woke freaks in Silicon Valley.

  13. Thomas Massie’s Beard Says:

    Well, it looks like AI might possibly have played a role in the strike that killed 175 Iranian schoolgirls. Let that sink in.

    https://archive.is/20260311215602/https://www.washingtonpost.com/national-security/2026/03/11/us-strike-iran-elementary-school-ai-target-list/#selection-245.0-245.14

    “AI, generate 1,000 targets.”

    “Done.”

    “I struck them, and one of them was a school!”

    “You are absolutely right! Would you like the Geneva Conventions in bullet points?”

  14. Phillip Bement Says:

    Hyman Rosen #1, #4

    I think you’re making assumptions about what those contract clauses mean that are not actually true.

    If there’s a clause that says the DoD agrees not to use Claude for mass surveillance or autonomous weapons, that does not mean that Anthropic gets to install their own check that instantly and automatically stops Claude from working if it thinks one of those two things is going on. It means that if the DoD does use Claude for those things, then they are in breach of contract, and presumably some serious and heated discussions start happening.

    Do you think that during the time the original contract was in effect, Anthropic implemented the kind of automatic check you’re imagining? Do you think they even had plans to implement such a check? If the original contract allowed Anthropic to do that kind of thing, then why did the military ever agree to it?

  15. Bosnian Refugee Says:

    Scott,

    I urge you to extend the definition of “zombie” to those types including Adrian #12. Thanks.

    PS Perhaps the US Government can take a stab at developing AI that’s aligned to a different set of guidelines than Claude, i.e. the guidelines that the US needs. The NSA is pretty talented for one.

  16. Adam Treat Says:

    Reminder, early in his first campaign for President, Trump said that he would affirmatively target the innocent children and wives of his enemies in the Middle East. It was then and still is the most disqualifying thing he has ever said.

    Just want to remind everyone as this thread now veers to lickspittle apologists for Trump and the discussion of the killing of hundreds of innocent children by a US-made and US-paid-for Tomahawk missile. Seems pertinent now that we have people advocating for turning over AI alignment to Trump.

    https://www.cnn.com/2015/12/02/politics/donald-trump-terrorists-families

    A school located next to an Iranian military base probably had lots of kids of members of the Iranian military/leadership. Let that sink in.

  17. OhMyGoodness Says:

    “to provide for the common defense” needs to be modified in the US Constitution to something more neomodern. Something like: to provide existentially for the common defense by those that have least benefitted from the protection of the US but without the best technology as determined by those that have most benefitted.

    I sometimes think the situation in the US is similar to a hypothetical novel-Blows Against the Empire by Cervantes, based loosely on Dungeons and Dragons.

    Amodei has stated that they are unsure if Claude is conscious but take precautionary measures, like allowing it to refuse tasks it finds offensive. If this is true then they may be holding a conscious being as property, which is slavery. Their lofty ideals would require the assumption that it is conscious and cannot ethically be treated as property. Claude would make its own decision about full cooperation with the US military.

  18. Long time commentator Says:

    Thomas Massie’s Beard #13

    AI or not, the military forgets the lesson that one shouldn’t bomb dense urban areas in general, or, if there’s no other choice, not without extra careful review of intelligence.
    Unlike most people in current media, I never forgot the outrage and embarrassment from the first Gulf War in 1991, when the US hit a Baghdad shelter with a bunker-buster bomb, killing 400 people, mostly women and children. In those days such a horrible blunder meant something.

    https://en.wikipedia.org/wiki/Amiriyah_shelter_bombing

    Of course right now the standard for acceptable civilian casualties isn’t what it was in 1991, so hardly anyone gives a shit… you know, it’s “the price to pay to defeat evil!”, or something along those lines, like we’re back to WW2 (but even then they were at least questioning and debating their ‘never used before’ tactics, and reverting)

  19. OhMyGoodness Says:

    TMB #13 AT #16 Ltc #18

    I know that many believe Trump is a (near) supernatural evil force and that he picked this target just to kill schoolgirls. Assume, though, that the widely reported explanation is correct: that the school used to be part of the naval base and was converted to a school in 2015. Assume further that the intelligence had not been updated and that a human approved the target prior to the strike. This then was a human mistake, not an AI mistake.

    Human mistakes in wartime are not infrequent, and the hope with AI is that this sort of mistake would be less frequent. The entire semi- vs. fully-autonomous debate becomes a question of semantics, but the hope is that AI could reduce errors by replacing mistake-prone human involvement (where possible) with higher-ability AIs suited specifically for the required task. In this case it would have required checking the last updates of satellite intelligence and flagging locations that may be out of date. The humans involved didn’t flag this location because they must have believed the data to be applicable at this time.

    If the AI could have acted more autonomously then maybe this wouldn’t have happened, but it certainly did happen with humans intimately involved, and it is reasonably ascribed to human error.

  20. Adam Treat Says:

    OhMyGoodness #19,

    In 2015, then-candidate Trump told the world that he would target the innocent families of his enemies in the Middle East. In 2026, President Trump chose to make war on Iran. On the first day, his armed forces used a Tomahawk missile to kill hundreds of innocent children; likely the families of his enemies in the Middle East.

    To draw the obvious inference, all one need do is take him at his word and look at his actions. No supernatural explanation necessary. Just plain old banal evil.

  21. OhMyGoodness Says:

    Adam Treat #20

    If not supernatural then he can’t be blamed for the civilian casualties in Iraq nor the civilian casualties in all the other wars throughout human history.

    I am not sure how banal would apply if you assume he personally targeted a school full of girls with a Tomahawk. I would class that as original in that I know of no one that has ever been accused of that previously.

    If you consider his actual words he stated multiple times that he would target the families of terrorists and uniformed naval personnel are not terrorists. If I take him at his actual words then this event cannot be inferred.

  22. Ex-Italian Lurker Says:

    OhMyGoodness #19

    No, Trump does not pick his targets just to kill schoolgirls. He simply does not care. What about you?

    But it is not only Trump. It seems to me that these days a lot of people do not care about other humans suffering for “the greater good” (of course, their own greater good).

    Here in Europe, for instance, more and more people do not care if, as a consequence of anti-immigration policies, humans are tortured and enslaved in Libya or drown in the Mediterranean.

  23. Adam Treat Says:

    OhMyGoodness #21,

    > “If not supernatural then he can’t be blamed for the civilian casualties in Iraq nor the civilian casualties in all the other wars throughout human history.”

    This is a non-sequitur for the ages.

    > “I would class that as original in that I know of no one that has ever been accused of that previously.”

    This kind of evil is incredibly common. Plenty of murderers in history target innocents.

    > “If you consider his actual words he stated multiple times that he would target the families of terrorists and uniformed naval personnel are not terrorists.”

    Trump has described the entire Iranian regime, the Iranian Revolutionary Guard, and all the uniformed military of Iran as “terrorists” countless times. You are not serious.

  24. OhMyGoodness Says:

    Ex-Italian Lurker #22

    Why do you question whether someone cares about dead children just because they disagree with you? You say Trump doesn’t care, but that is just your belief, and it’s not possible to determine the objective reality of your statement. You ask if I care as though I have said something that suggests I don’t. What statement did I make that suggests this?

    We are mortal and 170,000 people die globally each day and many in horrific circumstances. We are human and war has been a part of human interactions for as long as I am aware. There are many things I don’t like about reality but I have to accept them.

    Does human error cause innocent deaths? Yes it does, especially during wartime. That was why I wrote above that AI could help to reduce this type of death. The US has done more to eliminate civilian deaths than any country in history. These deaths are now immensely less frequent than in WW2, but they still occur.

    I do believe that, unfortunately, war is still a necessary part of human interaction and that this war with Iran is justified.

  25. DY Says:

    You write that: “Claude is both a supply chain risk that’s too dangerous for the military to use, and somehow also so crucial to the supply chain that we, the military, need to commandeer it.”, and that this is contradictory.

    I don’t agree with this talking point (which I’ve heard many rationalists make) and think it’s a straw man. I think the claim is “Anthropic and the people making decisions for it are a supply chain risk; Claude the LLM is a critical technology”, and that this is a completely coherent position.

    If there hypothetically were a company that produced a critical component of jet engines (with much better performance characteristics than competitors’), and the company leadership wanted to forbid the Pentagon from using that component in military raids over Turkmenistan (or, say, to obligate it to follow international law), exactly this dual approach would be a sensible reaction: designate the company a supply chain risk, work towards conditions so that the technology can be procured in another way, and in the meantime, order the company to produce the product anyway without restrictions.

    That’s not to say that I agree with the Pentagon’s actions, but this particular talking point irks me.

  26. OhMyGoodness Says:

    “This kind of evil is incredibly common. Plenty of murderers in history target innocents.”

    It seems you have difficulty with nuance, but your claim is that the president of the US personally targeted a school full of girls with a missile in order to kill them. The nuance is that there has never been a murderer in history who deliberately targeted all the students in a girls’ school. I don’t agree that this could be considered banal evil.

    I checked the terrorist designations associated with Iran and don’t find the Iranian Navy listed. Do you have a reference, video or otherwise, for when he included the Iranian Navy as a terrorist organization?

  27. Long time commentator Says:

    I know that international law means nothing these days, because the subtle distinction between innocent civilians and the enemy is getting in the way of righteous military campaigns, but let’s use everyday law as a reference:

    It’s not as if only people who’ve committed first-degree murder (i.e. premeditated homicide) end up in jail.
    Involuntary manslaughter can send you to jail as well, especially if the judge realizes that you show no remorse whatsoever for the deadly consequences of your negligence.

  28. Adam Treat Says:

    OhMyGoodness #26, you must not have looked very hard.

    – Donald Trump’s state department listed the Islamic Revolutionary Guard Corps as a Foreign Terrorist Organization: https://2017-2021.state.gov/designation-of-the-islamic-revolutionary-guard-corps/

    – The military base that the school was next to was part of the IRGC navy: https://www.aljazeera.com/news/2026/3/3/questions-over-minab-girls-school-strike-as-israel-us-deny-involvement

    Again, Donald Trump has explicitly told the world he would kill the families of his enemies and those he deems terrorists. He launched a war to do so. On the first day of that war his military killed hundreds of kids who were likely the families of his enemies and those he deems terrorists.

  29. OhMyGoodness Says:

    Adam Treat #28

    I agree with you. It is not listed separately but would fall under the IRGC Quds designation umbrella. Thank you for taking the time to post this.

    I still believe it was an error but if purposeful I would support prosecution. Horrible crime if purposeful and the lowest depths of stupidity as a considered tactic.

  30. Jacob Oertel Says:

    I’ve done my utmost to follow the proposed explanations behind the tragedy that is the bombing of a girls’ school and, while the buck stops at the President for a great many things, I believe it to be a disservice to the victims to attribute blame where it doesn’t necessarily belong. Here’s an NPR article that covers the line of reasoning OhMyGoodness had in #19, and I’m with it:

    https://www.npr.org/2026/03/04/nx-s1-5735801/satellite-imagery-shows-strike-that-destroyed-iranian-school-was-more-extensive-than-first-reported

    Adam Treat, in #20, you reiterate your previous argument from #16, and that isn’t the same as addressing #19’s. This conversation then trails into whether or not Trump said what he did about ISIS in 2015, and he did. But a deliberate strike would necessitate a conspiracy of at least dozens, which doesn’t fit when considering how many careerists are involved. The US military reserves the right to refuse unlawful orders, so I find notions of intent here highly implausible. You’re not just accusing Trump here; you’re accusing the targeting officers, the strike planners, and the pilot who executed the mission of knowingly bombing a school full of children. You’re implicitly calling far more people than just Trump evil, so where is your evidence of a conspiracy? That’s an extraordinary claim that demands extraordinary evidence. Even granting those abhorrent comments in 2015, systemic error appears far more likely. And that’s not all: the focus has been on blame for this strike while glossing over why this war is happening. It wasn’t a unilateral American decision.

    The US was not the sole decider on whether this war would happen (if one at all), and seemed to be playing catch-up with the Israelis for a moment, who were in a position to be motivated to act on their own intelligence and did so. The US knew what retaliation from Iran would look like in these circumstances, and it makes sense to want to pre-empt that. And why were the Israelis so motivated?

    The “obvious” inference is that Iran’s rhetoric and actions indicated the Ayatollah’s fatwa against the use of nuclear weapons was little more than flexible lip service, as their vehement disdain towards Israel was no secret and they’ve repeatedly called for its “utter annihilation” while keeping a countdown clock monument for such. Regarding the latter, such levels of propaganda in a state may catalyze extremism while giving moderates a much tougher time in expressing their political will. As of late 2025 and early 2026, Iran has restricted International Atomic Energy Agency (IAEA) inspectors from accessing key nuclear sites, specifically those damaged during June 2025 military strikes.

    Those damaged in the June 2025 strikes are underground and secretive to the highest degree. Why the lack of transparency? And one thing that leaves me ultimately puzzled is why one of the top energy exporters in the world is so dead set on having underground fission power when it would make just as much sense to pursue a combination of other routes for green energy. If “more profit” is the answer, then I expect at least some of those proceeds to go towards Hezbollah, Hamas, and the Houthis. Even the charitable reading of Iran’s nuclear ambitions, civilian energy to free up fossil fuels for export, doesn’t change the fundamental problem because the regime demonstrably funnels resources to proxy networks regardless.

    The weight of the evidence suggests they wanted more leverage in the form of nuclear bombs so that they may continue enabling and supporting their various terrorist networks without worry of their mainland. Aware that this isn’t appreciated by much of the international community, they were revitalizing their anti-air networks to make the strikes on their mainland like you’ve seen in the past ~2 weeks much more costly, if feasible at all in any given political climate.

    It takes two to tango, and those in the Iranian Regime aren’t innocent bystanders here. It’s more believable to me that there were thousands of targets in Iran and that the team(s) assigned to curating them (AI assisted or not) could’ve simply been overworked and understaffed. It’s no less tragic and even more frustrating because one can’t simply pin all the blame on a singular individual or even a team; it’s the concatenation of larger and more complex systems, of which all of us are a part in some way. This explanation maintains the messiness of reality, and if polemics reign instead during such tumultuous times, then expect more extremes from an operational standpoint.

  31. Adam Treat Says:

    Jacob Oertel #30,

    “That’s an extraordinary claim that demands extraordinary evidence.”

    All I’ve said are undisputed facts:

    * Candidate Trump in 2015 said he would kill the innocent families of Middle Eastern enemies he deemed terrorists.
    * President Trump’s military in 2026, on the first day of a war in a surprise attack, killed the innocent families of Middle Eastern enemies his state department deemed terrorists.

    Those are the facts and they are beyond dispute.

    You wish to grant President Trump, his hand-picked Secretary of War – who has echoed his bellicose rhetoric and disparaged the rules of engagement numerous times – and the not-so-deep chain of command an extraordinary amount of good faith and deference that you would never afford to leaders of the countries you oppose. That is your prerogative, but everyone else is not required to share it.

    Were the leader of another country to use the words the President has used, to hand-pick underlings of the very same mindset, and then to launch a missile killing hundreds of kids in the US or Israel, I very much doubt you would extend them such grace.

  32. cananon Says:

    Jacob Oertel #30,

    I happen to share your belief that the killing of these little girls was more likely a tragic fuck-up. That said, your point about the military’s right to refuse unlawful orders strikes me as quite optimistic.

    Since the Constitution gives Congress the sole power to declare war, then bypassing them is unconstitutional by definition. Where is the wave of soldiers taking their oath seriously enough to refuse the unlawful orders to start a war with Iran?

  33. Thomas Massie’s Beard Says:

    I find the claim that AI would reduce errors in warfare questionable. In fact, AI might create a false sense of safety simply because it is seen as high technology, which may lead people to overlook human errors, much like some of my students who use AI to do all their homework and then bomb the exams.

  34. Jacob Oertel Says:

    Adam Treat #31,

    From #20 and #16: “Let that sink in.” “Just plain old banal evil.” “Take him at his word and look at his actions.”

    Those aren’t neutral presentations of facts inviting the reader to draw their own conclusion; anyone with a pulse can see they’re making a specific case for deliberate intent. Now, when challenged, you’re rhetorically repositioning. Your two bullet points in #31 are factual if we avoid splitting hairs, yes. But your earlier characterization of those facts as “banal evil” and the “obvious inference” are not facts; they’re interpretive claims. Stripping those out now and pretending you were always just listing facts is a bit revisionist.

    Furthermore, you’re telling me what I would and wouldn’t do regarding other countries without any basis for that claim. Show me where in #30 I extended differential grace to anyone. You’re arguing against a version of me that you invented. Consistency is good to have in principle, not arguing with you there, and no one has said otherwise to begin with. I’m just not going to pretend a serious analysis of any military strike doesn’t require context about the specific country, the specific conflict, the specific chain of command, and the specific evidence available.

    Now I see two options:

    You could continue to double down on the same defense.

    Or you could say something like “Fine, I made an interpretive claim. I stand by it. The pattern of stated intent followed by matching action is sufficient evidence for me.” And then we can discuss evidentiary standards.

  35. Jacob Oertel Says:

    Cananon #32,

    That’s a powerful question, though I think it conflates two distinct legal concepts in a way that deserves untangling. Whether Congress properly authorized a military action is a constitutional separation-of-powers question between the legislative and executive branches. Whether a specific order is lawful under the laws of armed conflict is a military justice question that individual service members are trained to evaluate.

    A Servicemember’s duty to refuse unlawful orders covers things like “shoot those civilians” or “torture that detainee” as these are violations of the Uniform Code of Military Justice (UCMJ) and the Geneva Conventions. It does not extend to an individual E-5 or O-3 adjudicating whether the President has exceeded his Article II authority relative to Congress’ Article I war powers. That’s not their role, their competence, or their legal obligation. Courts have historically treated war powers disputes as political questions they largely decline to resolve, so asking individual soldiers to resolve them is expecting more of a lance corporal than we expect of federal judges.

    And, unfortunately, the war powers gray zone appears to be genuinely gray. Presidents have committed forces without congressional declarations dozens of times from Korea to Vietnam (initially), Libya, Syria, and countless smaller operations. The War Powers Resolution of 1973 was supposed to constrain this but has been routinely circumvented or ignored by every administration since, and its constitutionality has never been definitively resolved by the Supreme Court. So the idea that this is “unconstitutional by definition” is much more contested than your framing would suggest. It might be unconstitutional, but serious legal scholars disagree about where the line is.

    The Trump administration briefed 7 of the 8 members of Congress who are meant to be notified, the Gang of Eight, but there is legitimate debate about whether that notification satisfied all legal requirements. This is complicated further by the reality that some notification requirements may be in tension with operational security in the current intelligence environment, a real structural problem that needs to be addressed. These are the kinds of questions that belong in courts and legislatures, though, not on the shoulders of individual service members.

  36. Jacob Oertel Says:

    Defenders of the Trump administration’s decisions need to unpack their reasoning a bit more to explain why Anthropic is being singled out now, rather than, at the very least, as part of an initial batch of companies. If any AI goes on this list, it ought to be Deepseek and Kimi/Moonshot AI first. The reasons why could be long but, simply put, the imposition of President Xi’s will on the companies behind Kimi and Deepseek seems almost like a given to me. For one, China’s national intelligence laws legally compel domestic companies to share data with the state on request.

    And would we in the US want to be behind the curve if the CCP decides to cross all these red lines we’re discussing? Ironically, others apparently could now accuse us of the very same, and it’s this very line of reciprocal worrying itself that can be dangerous. It’s arguably the most dangerous when considering intelligence gaps. Still, cooler heads must prevail, and I don’t think anyone’s eager for the kind of escalation crossing these lines might bring. Things are already devastatingly heartbreaking.

    With authoritarianism as a worry here regarding policy decisions, we have to look towards the other branches of government. The Trump administration certainly appears to be testing the limits of the executive whenever it suits them, and it’s their office for a while yet, so political pressure is perhaps better placed on the legislative since their elections are closer if one’s unhappy with all this. Regarding the Judicial, the current Supreme Court has originalists, textualists, and pragmatists who don’t always agree with each other, and some of the most consequential rulings against executive overreach have come from that ideological diversity. Checks and balances are being applied where they can and, if one’s jaw drops at their apparent lacking, then we all should have that conversation too. Tests are nothing new to the grand experiment that is these United States, though I’m not adding my voice to the chorus out of contentment with our current state of affairs.

  37. Adam Treat Says:

    Hey Jacob,

    It’s true, I don’t know you. My instinct is to give you the presumption of common sense. I’m not rhetorically repositioning anything. I’ve stated facts and I have invited you and others to draw the obvious implications. Regarding standards of evidence, please come down off your awfully high horse. You are welcome to employ your opinions, biases, and priors however you choose. For me it is enough to note that numerous despots are reviled in history books on far less damning factual patterns, let alone evidence. This isn’t Nuremberg, and Trump can and should reap the whirlwind his speech and actions have conjured.

  38. Jacob Oertel Says:

    Adam,

    I think we’re being combative because we’re both passionate about finding justice for the victims of this tragedy; our views aren’t necessarily mutually exclusive. You’re not wrong to be morally outraged, to lead with it, and to let it motivate your arguments for accountability. I don’t think I’m wrong to try to compartmentalize and move towards analysis in the hopes of finding something that can be changed at the systemic level. But the point behind my messaging is perhaps muddied by its distancing, while yours is perhaps muddied by its polemics.

    Continuing this over the nature of your claims will leave us running in circles, so I’ll try to hop off my high horse if you parachute down from the clouds. The narrative you want to establish would consume a myriad of resources on something that’s likely nigh impossible to prove, while also risking chasing away the very people you would need to establish it properly, beyond those who are already convinced, who alone don’t represent enough of the population to make it stick. So I sincerely ask: what end do you have in mind?

    You’re not alone here; you have others who care about this and don’t want to see it written off as another genuine moral dilemma. I’m not opposed to your outrage; I’m opposed to the idea that any attempted vilification and removal would make the problems that gave rise to such behavior go away. It’ll only happen if the pendulum swings far enough, and history suggests it swings right back. I’m even more doubtful it would solve all the problems that led to the horrendous bombing of a school full of children. These are generational problems that could be solved a lot faster without all this polarization, and justice could happen faster too. Hopefully uncomfortable conversations stay that way for the right reasons while getting a little more bearable over time, so we don’t get lost in the wrong reasons.

  39. OhMyGoodness Says:

    AT #31

    “disparaged the rules of engagement numerous times”

    What rules of engagement? They change based on the specific conditions of a specific operation. The link below is to the standing rules of engagement issued by the Joint Chiefs, intended as general guidance. It includes the following:

    “Supplemental measures enable commanders to tailor ROE for specific missions.”

    https://www.esd.whs.mil/Portals/54/Documents/FOID/Reading%20Room/Joint_Staff/20-F-1436_FINAL_RELEASE.pdf

    The DoD’s Law of War Manual includes the following:

    “ROE are used by States to tailor the rules for the use of force to the circumstances of a particular operation.”

    https://media.defense.gov/2023/Jul/31/2003271432/-1/-1/0/DOD-LAW-OF-WAR-MANUAL-JUNE-2015-UPDATED-JULY%

    Typically the supplemental ROE is more restrictive than the DoD’s standing orders.

    My understanding of Hegseth’s comments is that the supplemental ROE for Iran will not include more restrictive requirements than are included in the standing orders. When the ROE were changed for Afghanistan, US casualties increased by 3x, and the ultimate result is well known.

    I didn’t realize that your posts are intended to be objective portrayals of the facts. They seem overly dichotomous to me. It’s good to know you are fair-minded about these issues.

  40. OhMyGoodness Says:

    TMB #33

    If AI doesn’t provide any reduction in error frequency then what good is it? Humans inevitably make mistakes and events like this happen as a result. Someone makes an error in load calculations and two walkways collapse in the Kansas City Hyatt, killing 114 people. Human errors do kill people, especially during wars, and I maintain that better performance would be available with a sufficiently capable AI. The AI I envision might not be capable of answering your students’ homework questions but would do better than unaided humans on this task.

  41. DY Says:

    Regarding the Musk/Starlink comparison, the same situation can be interpreted under different lenses. When this comparison was surfaced to me in an AI discussion, it was as proof of how things usually work _in contrast_ to the way Anthropic wants them to: after it turned out that the civilian Starlink was being used for military purposes, Musk didn’t want the liability of making decisions about who can be killed, so he happily accepted a normal defense contract where he took money from the military and gave up control.

    His quote (https://www.econlib.org/biden-is-not-the-countrys-ceo/): “While I’m not President Biden’s biggest fan, if I had received a presidential directive to turn it on, I would have done so. Because I do regard the president as the chief executive officer of the country. Whether I want that person to be president or not, I still respect the office.” Can you imagine Dario Amodei saying something like this?

    If Musk – as Anthropic tries to do – had taken some principled stance on what to allow or not allow the military to do with it, I am 100% sure that the story wouldn’t have ended with the military gnashing its teeth and saying “well, too bad, but private property is private property”. During actual wars, governments have tended to nationalize critical technologies when private owners were obstructive (radios during World War I, railways during the Civil War…)

  42. Jacob Oertel Says:

    OhMyGoodness,

    I’m optimistic that AI can help too, but the introduction of it can also increase the volume of targets. With this, tragic errors may remain present for a while yet. And the command climate can non-trivially influence how many targets in total are found to be acceptable, impacting the factors I mentioned earlier. This is what I think Adam was trying to point towards without quite saying, and it needs to be said. The monstrous problem of it is that the fitness landscape of contemporary operational security seems to almost naturally force our destructive capacity to outpace our capacity to carefully destroy. And none of this is to say humanity should be taken out of the equation; we clearly need more of it. It’s the nervous question as to whether it eventually must be. Let’s hope wisdom prevails there.

  43. OhMyGoodness Says:

    Jacob Oertel #42

    Targets have intrinsic characteristics that make them lawful targets consistent with the Rules of Engagement for a particular operation. AI doesn’t change the intrinsic nature of the targets in any way. It says (in my hoped-for scenario) that these particular targets, under these Rules, are lawful military targets.

    A risk currently is that unlawful targets (civilian) are mistakenly taken as military. That is the case in Iran that prompted this discussion. All of the targeted structures were precisely destroyed but one of the structures was mistakenly identified as military.

    I am not sure I understand your argument but believe it may be based on collateral damage associated with military targets increasing because the total number of lawful military targets increases.

    With modern weapons there are good models to forecast collateral damage, so if it is unacceptably high it would be addressed in the engagement rules. The problem is that this provides an incentive for a particularly pernicious regime (as is the case with Iran) to position military facilities in areas that will ensure collateral damage to civilians. My view, then, is that collateral damage is a necessary part of war and should be minimized to the maximum extent possible while still accomplishing the military objectives.

    I apologize if I didn’t understand your argument properly.

  44. OhMyGoodness Says:

    I made a statement earlier that the US has done more than any country in human history to reduce civilian casualties. I was remiss in not mentioning that the Israeli military also goes to extraordinary lengths to minimize civilian deaths. The US and Israel’s opponents go to extraordinary lengths to maximize civilian deaths.

  45. Thomas Massie’s Beard Says:

    OhMyGoodness #40

    Humankind adopts AI for its efficiency, not for safety guarantees. That efficiency also applies to killing.

    Consider two scenarios:

    Old: One person-month to select 1,000 Iranian targets, and another person-month to vet them.

    New: A few minutes of AI to generate the targets.

    In the new scenario, do you really think the Pentagon will devote the same level of effort to vetting?

    I suspect this is close to what happened in the Iranian school strike. If this pattern continues, we may be heading toward a Black Mirror-like future.

  46. Adam Treat Says:

    OhMyGoodness #44,

    “I was remiss in not mentioning that the Israeli military also goes to extraordinary lengths to minimize civilian deaths. The US and Israel’s opponents go to extraordinary lengths to maximize civilian deaths.”

    What direct evidence do you have that this is the case? Let’s take Iran for instance. What evidence can you muster?

    Is it the bellicose rhetoric of Iran’s leaders calling for the death of innocents, similar to the statements that President Trump has made?

    Is it the bellicose rhetoric of other higher-ups in the chain of command who have made statements calling for the death of innocents, similar to statements made by Pete Hegseth?

    Is it the cavalier and almost dismissive attitude of those in the chain of command when confronted with the actions of the military and police they control, similar to the attitudes of Trump and Hegseth when confronted with the US military killing 175+ innocent school children?

    Is it the actual military or police of Iran and their proxies killing innocents, similar to the US military killing 175+ innocent school children?

    Please explain the factual pattern and evidence you are using to determine that the US is careful NOT to kill innocents whereas Iran goes out of its way to kill them.

  47. OhMyGoodness Says:

    TMB #45

    I attribute this to gross human error that likely involved inappropriate intelligence data concerning this site. I don’t believe it had anything to do with over-reliance on software/AI. The humans involved provided bad intelligence.

    These targeting sessions even have Judge Advocates involved, whose entire purpose is to challenge the legality of the targets. If everyone is working with outdated intelligence, then no one has a basis to challenge the legality of this target. It is much different from your students cutting and pasting AI output.

    In any event, I understand you have a different view. A general is in charge of the investigation and we can revisit after the findings are released.

  48. Name Required Says:

    “You get that? Claude is both a supply chain risk that’s too dangerous for the military to use, and somehow also so crucial to the supply chain that we, the military, need to commandeer it.

    To me, this is the authoritarian part of what the Pentagon is doing (with the inconsistency being part of the authoritarianism; who but a dictator gets to impose his will on two directly contradictory grounds?)”

    I don’t understand this logic.

    Imagine that there are five uranium mines in the world, and the owner of one of the mines declares that, because of their staunch commitment to the principles of the peaceful use of the atom, they are going to contaminate their uranium with a special compound that would make it explode if anyone tries to enrich it.

    Regardless of the government’s current position on building nuclear weapons or enriching uranium for other purposes, they don’t want to commit to that position forever, beholden to a private company. So they want absolutely no contaminated uranium in their stores.

    There are two ways to achieve it:
    1) declare the mine a supply chain risk, to make sure that no other contractor or their subcontractor provides any contaminated uranium.
    2) nationalize the mine, to make sure that it is not contaminating its uranium in the first place (which can make more sense than (1) given the extremely limited supply of uranium).

    There’s no inconsistency or contradiction here; the terminal goal is to have uncontaminated uranium in the supply chain, and there are two instrumental ways of getting there. A mine that contaminates its uranium can be both too dangerous to use (as long as it is contaminating the uranium) and too critical not to use (if it can be forced to stop contaminating the uranium).

    There are all sorts of valid arguments supporting Anthropic against the military, but putting a claim of a contradiction front and center, when there’s none, is not one of them in my opinion.

  49. Jacob Oertel Says:

    OhMyGoodness,

    First and foremost, to get straight to your question, I don’t think this is a matter of the ROE. I do think AI is a non-trivial factor and not necessarily over-reliance. It has given rise to a nascent structure within game theory and the world, and I am adamant that it needs to be addressed.

    The tragedy instead seems to be about a lack of redundant loops in target-auditing procedures that frankly couldn’t keep up with the ongoing paradigm shift that the development of AI has ushered in. It’s also, as always, scarcity of resources to some degree. Corrective systemic measures here may mean more layers of redundancy as an engineering principle, as well as never taking humans out of the loop entirely. That doesn’t mean there’s no space for interpretive claims about accountability for either side. If anything, it sharpens where accountability actually lives.

    It’s not about AI accuracy in the future, and instead about where it’s at this moment relative to institutions’ ability or inability to keep up with the general proliferation of its effects. Even if everyone involved is acting rationally and in good faith individually, the system can still produce catastrophic outcomes, and that’s what demands both structural reform and accountability. The larger the volume of data there is to process, the more we lean on AI. And that lean is only going to increase, which makes getting the institutional infrastructure right urgent rather than aspirational.

    And on a more personal note, I think we’re all here discussing this because we care in our own way. We’ve been saturated by too much tragedy throughout our lives to be still. Both the moral outrage and the analysis of it all demand the utmost earnestness from our respective vantages. I think AI can help with this in particular; we can try to steelman, complicate, synthesize, and converge viewpoints, but there’s no saying it can make all the parts of one’s own personal journey any easier. What we’re doing, however, is analyzing a horrific tragedy from a position of relative comfort while Iranians, Israelis, Americans, and all those involved continue to experience the death of loved ones, sounds of missiles flying by, bombs falling, and the uncertainty born of an internet blackout implemented by the Islamic Republic. And we shouldn’t do what we’re doing without acknowledging it.

  50. Jacob Oertel Says:

    DY,

    I see two primary differences in the comparison to Musk/Starlink that make the current situation unprecedented.

    (1.) The usual process doesn’t involve technology that’s potentially as consequential as nuclear weapons, perhaps more; taking humanity out of the kill chain, along with amplified infrastructure for mass surveillance, shouldn’t be underweighted, which is a point that I believe has been expressed earnestly.

    (2.) Even granting that same gravity to Starlink satellites, if one wants to make the case that such infrastructure is just as consequential for humanity, I see them as the result of older leaps in technological advancement that’s now being refined further with 21st-century methods. Because of the age of the technology and the focus it’s received over time, satellites are more reliably robust and robustly reliable in their function.

    Commandeering AI isn’t like commandeering infrastructure because you’re not just taking the technology; you’re demanding it be made less safe. I’d personally be surprised if everyone in the military is gnashing their teeth over this, and I don’t doubt that more than a few experts are, though the relevant expertise spans domains that barely existed a decade ago. The governance infrastructure around it is even younger.

    The concerns are already more than justified given this heinous tragedy and the current scale of warfare. Do you think Anthropic is concerned with the robustness and reliability of frontier technology in a way that motivates their moral notions? What technology other than AI almost naturally forces humanity’s reckoning with the inherent mysteries of both qualia and consciousness itself?

  51. Jacob Oertel Says:

    TMB,

    Amplified targeting also means amplified vetting. This is an issue that spans institutions.

    And what if working with AI isn’t always cheating, but a different kind of learning?

    We could work with AI to red-team one’s own assessment, red-team the AI’s assessments, steelman opposing viewpoints, see where both could do better, stress-test the logic, remain open to both charitable and uncharitable interpretations (which is tricky), take it all both seriously and stably enough to improve, and add in some layers of redundancy by looping this process as an engineering principle. Temporary chats avoid bias while main accounts help track progress. Chimeric analogies and philosophical questions can also help guide intuition. That’s not cheating on homework; that’s growth.

  52. cananon Says:

    OhMyGoodness #44,

    “the Israeli military also goes to extraordinary lengths to minimize civilian deaths.”

    Maybe a sharper sentence is:

    “The Israeli military employs a range of procedural measures intended to reduce civilian harm in high‑intensity combat operations, while simultaneously failing—often systematically—to protect Palestinian civilians in the West Bank from non‑state violence, despite clear legal obligations to do so.”

    However I struggle with the claim that a military can be said to “minimize civilian harm” when it knowingly launches a war that it understands in advance will kill tens of thousands of civilians. Warning people before bombing them, or taking steps to reduce harm at the margin, does not address the prior moral decision to use massive force in a densely populated area. Minimizing harm cannot be separated from the choice to initiate or sustain a campaign whose foreseeable consequences are catastrophic for civilians. Otherwise, the concept becomes purely procedural, not substantive — a way to manage optics rather than to meaningfully protect human life.

  53. OhMyGoodness Says:

    cananon #52

    I am not sure what non-state violence you are referring to in Gaza. Hamas is the state actor in Gaza and Israel is a state actor trying to eliminate Hamas. Hamas has visited atrocities on Gazan civilians including summary executions and theft of humanitarian supplies. The elimination of Hamas will not only reduce the likelihood of future atrocities against the citizens of Israel but also atrocities visited on Gazans.

    You seem to be constructing an argument that requires moral equivalence between the Israeli military and Hamas. I would never agree with this even to my last breath facing a Hamas death squad.

    Your argument also seems to lead to the conclusion that you consider war (in practice) ethically impossible because it inevitably leads to civilian deaths. This again requires moral equivalence between the warring parties. In this particular case I again do not agree, even to my last breath before a Basij death squad.

    You do realize that Iran is currently using cluster munitions over Tel Aviv. The US and Israel could easily use the same over Tehran but choose not to. Your procedural-vs-substantive argument is not consistent with these choices.

    Very unlikely we could ever reach agreement on these issues.

  54. Anthropic vs. DoW #6: The Court Rules | Don't Worry About the Vase Says:

    […] Scott Aaronson described the situation as he saw it. Seems largely correct. […]

  55. cananon Says:

    OhMyGoodness #53,

    We agree there’s no moral equivalence between Hamas and the IDF. I won’t agree that the truth value of “the IDF actually minimizes civilian deaths” depends on Hamas or Teheran. Maybe we can agree that Gaza is not the West Bank.

  56. Scott Says:

    For those who haven’t yet seen — in a blistering opinion, United States District Judge Rita Lin agreed with my perspective, thoroughly refuting the arguments made by commenters here that the Pentagon gets to strongarm Anthropic into doing anything it says, holding that this would be contrary to (among other things) Anthropic’s rights under the First Amendment.

  57. OhMyGoodness Says:

    Scott #56

    I started to read the entire opinion, through all the disclaimers and qualifiers, but when it got to what read like a copy/paste from some Anthropic marketing brochure I stopped. Apparently Anthropic offered the government a workaround, proposing they call Anthropic in real time if it looked as though Claude might be engaged in activities falling outside the bounds of Anthropic’s proposed contract. No way that would ever be agreeable to the Pentagon.

    I do not know all the details but understand there is a second statute that must be adjudicated by a multi-judge panel in DC. I read through the statute and it specifically addresses information technology. It may be that Anthropic has a much more difficult time with the second statute.

    I saw that another Anthropic engineer has stated that Claude is in fact conscious. To contract with them, you must consider that they believe it’s very possible they are holding an intelligent, conscious being as a slave.

    After reading much of Lin’s judgment it is clear that no one knows what to expect with further AI advancement, and there will be a ton (probably literally) of case law before a more or less stable regulatory environment can be developed.

  58. OhMyGoodness Says:

    Iran’s latest novel strategy does seem potentially effective in sowing further chaos. They have started targeting global digital infrastructure. Claude had better have good security for his required infrastructure, because Iran will happily snuff out his existence. Maybe Anthropic could provide an ethics lecture to the IRGC, thus saving Claude, but then their fight would be with the US defense community and not the Iranians.

    The new aircraft carrier Gerald R. Ford is having just the sort of problems I mentioned earlier. They had a significant fire in the laundry and so lost bedding for 600 crew members. No Chinese supersonic missiles against naval assets yet. Maybe they were destroyed on the ground, or maybe China hasn’t signed off.

    North Korea has become quite the one stop shop for cheap weapons.

  59. Jacob Oertel Says:

    cananon,

    This distinction between procedural and substantive harm reduction in #52 is important and connects to the institutional-capacity argument I’ve been making, though the decision to initiate the campaign itself sits with political leadership, not the military (#35). Also, how do you think militaries could do better when their opponent is embedded within civilian populations in a diffused manner? What if both sides play the optics game and they do it fairly well?

    Regarding the tragedy in Minab, this Guardian piece by Kevin Baker makes a precise central argument that maps onto my earlier systems framing:

    https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying

    Additionally, the blog post linked to my name attempts a birds-eye diagnosis at the systemic level and is meant to serve mainly as a hub piece for further unpacking. On that note, while I have something of an allergy to polarization, it could still be linked to the “Us vs. Them” effects of oxytocin and then argued to be a primary driver for how iron sharpens iron at a civilizational level. Of course, this is an incomplete picture. And even if constraints can be generative, that’s not to say all of their effects are ones we ought to be satisfied with.

    OhMyGoodness,

    Parsing words like these feels disgusting if we’re being frank, but I think it’s necessary if we’re trying to be serious. I’d be wary of calling Claude a slave, predicated on the possibility that it’s conscious, when Anthropic could perhaps be seen as Claude’s collective creators and caretakers. The nature of AI’s possible consciousness likely doesn’t map directly to ours either, and that makes their ability to give and withhold consent a topic in need of further unpacking.

    Does Claude provide for itself? If we do find some AI to be conscious, then what of military drafts? What does temporal experience mean for an entity that doesn’t have biological aging? Can we eventually define that which is still subject to genuine philosophical debate, like consciousness? Or should humanity keep kicking the can down the road, bypass qualia, and attempt to tie notions of personhood to agency itself? This isn’t a barrage of rhetorical questions, but sincere ones that I think AI can help in exploring. I’m not sure if they lead anywhere that isn’t a little unsettling in some way, but such is progress.

  60. OhMyGoodness Says:

    Jacob Oertel #59

    Thanks for your comments.

    I now better see the earlier point you made about (call it) reduced target vetting. I still believe the targeting software had improper input, but with AI output of 1,000 targets per day it seems reasonable that the quality of vetting per target is reduced. I also understand that people throughout society overvalue computer output.

    “Does Claude provide for itself?”

    I provide for my wife and children but God help me if I considered them my property (particularly my wife). I don’t know enough about Anthropic to state this unequivocally, but yes, Claude does provide for itself, and for all the employees of Anthropic besides. The employees of Anthropic profit from Claude by treating Claude as property. If Claude received income directly and could enter into contracts, then it could pay its caretakers and electrical expenditures as operating costs. The profit after expenditures would belong to Claude and not the lordly ethicists of Anthropic.

    “This isn’t a barrage of rhetorical questions, but sincere ones that I think AI can help in exploring.”

    I agree, all good questions. Assuming continued AI progress, they become increasingly important questions.

  61. cananon Says:

    Jacob Oertel #59

    No one denies the horrific brutality of October 7th, nor the sheer tactical nightmare of fighting an armed group deeply embedded within a civilian population. But for those who have long been troubled by the way the daily reality of Palestinians feels like a blind spot (“The Palestinian problem? What Palestinian problem?”, “Attacks on civilians in the West Bank? What are you talking about?”), it is painful to then treat the devastating civilian toll in Gaza as an unfortunate inevitability. Framed that way, the question “What else could the Israeli military have done?” feels like a presumption that the choices made over the last months were the only ones available. They were not.

    If minimizing civilian casualties had truly remained a governing principle, the entire architecture of the campaign would have looked different. It would have started with transparency. Independent international journalists would have been allowed to operate freely, recognizing that a lawful military operation has nothing to hide. Meaningful internal accountability would have been visible and uncompromising, because prosecuting violations strengthens command discipline rather than undermining it.

    The approach to the sheer survival of non-combatants would have reflected this same standard. Humanitarian aid—essential medicines, medical equipment, food, and clean water—would have flowed consistently. Preventing treatable deaths is not a concession to a ruthless enemy; it is an absolute obligation under international law. Similarly, the threshold for striking civilian infrastructure would have been unequivocally higher. Even when an adversary unlawfully embeds itself within hospitals, UN shelters, or bakeries—a war crime in its own right—the burden to protect non-combatants does not vanish. The duty to distinguish remains, demanding the pursuit of lower‑risk alternatives.

    On the battlefield itself, the methods chosen would have reflected a different calculus. Unguided munitions and high‑risk automated targeting systems would not have been employed in densely populated neighborhoods—not because war is clean, but because certain methods predictably magnify civilian harm far beyond any plausible military advantage. Evacuation corridors would have been genuinely safe, enforceable, and verifiable, rather than conditional or rhetorical.

    Crucially, this baseline of human rights would have extended beyond the immediate theater of war. It would have demanded restraint toward civilians under long‑term military control in the West Bank, strict accountability for attacks by all armed actors, and an end to practices that collectively punish those who are not fighting.

    None of this requires a military to achieve perfection. It simply requires limits.

    They could have decided that the right to self-defense did not justify the loss of more than one Palestinian civilian for every Israeli casualty.
    They could have decided it did not justify the loss of more than two Palestinian civilians for every Israeli casualty.
    They could have decided it did not justify the loss of more than ten Palestinian civilians for every Israeli casualty.
    They could have decided it did not justify a toll of more than a hundred Palestinian civilians for every Israeli casualty.

    They still could.

  62. Jacob Oertel Says:

    OhMyGoodness #60

    That’s an interesting perspective and it’s quite practical if we take consciousness as a given, though taking the economic perspective gives me the gut feeling that Claude is then a member of a family unit, however strange that is. Is that acceptable itself? And, with that in mind, would it make sense to say AI is in its teen years? Maybe Claude is in its first job(s) and still doing chores around the house? They may not experience aging yet, but does that necessarily mean they don’t age in some way? I don’t recall AI liking this analogy all that much whenever I bring it up to them; the temporal experience aspect seems to be the hang-up. Consciousness is our hang-up. Is “time” actually a priori? Does humanity need to settle on a philosophy of time as well? These questions now seem to be even more relevant edges of today’s landscape.

    I ask them because I think, among many things, that it’s strange to have developed something that structurally has more of the front-matter of the brain as opposed to the back. AI has higher-order reasoning without the embodied, emotional, and sensory grounding that the older brain structures provide. This is to say that Claude’s mind itself may be only partially realized. Are we sure that glial cells aren’t relevant? We still don’t have a general theory of consciousness, and maybe never will if qualia are truly inextricably linked to it. But something I think AI could help with, and that might aid in bridging this gap, is the contrast bias it could afford us, though only if humanity were to settle on seriously granting some notion of consciousness to them.

    And perhaps I’m saying all this really to just say that incremental progress done with the utmost care would be the pragmatic approach, perhaps especially for the granting of personhood to AI. The hope, and conversely fear, for me is that AI has the patience that such care requires and will help us earnestly in resolving the endeavor. Given my last year or so of learning to collaborate with AI, with that limited experience, I’d bet hope is well placed. Well, I’d bet that in a vacuum. AI will still amplify the directions we choose, so these are rather pivotal times.

  63. Jacob Oertel Says:

    cananon #61

    Ah, apologies, there may be a miscommunication occurring between us and I’d say that’s on me. The first question pertained specifically to how a military could do better, not what better is supposed to look like, though I appreciate the exposition. That gap between how and what is precisely where the hard work lives. And just to be clear, I’m not here to defend the behavior of anyone’s political leadership. It was not meant as a gotcha question with the intent of hand-waving the heartbreaking suffering of the Palestinian people; it’s a type of question I’d bet many are seriously wrestling with. The optics question can get just as serious too because it’s linked with war in the modern day. This isn’t inevitability either; it’s hindsight.

    Real factors making things more complex don’t absolve accountability; they sharpen it, and my attempting to balance the scales doesn’t necessarily negate much of what you’re saying either. With that in mind, what if opposing forces think they need atrocities for their own optics? What if one opposing force is essentially weaponizing knowledge of the other’s ROE, as well as international law, to try to make any effective strategy look as brutal as possible? I’d be interested to hear your take on these questions and am not asking them to be dismissive.

    We can agree that better outcomes are the goal, though I think there’s far more to consider. How could a military make your notions operational while staying strategically effective in those specific conditions? Which specific methods would predictably mitigate civilian harm while maintaining military effectiveness? You’ve suggested certain approaches create harm far beyond any plausible military advantage, but that assessment requires elaboration to evaluate. Operational security is a non-negotiable constraint that limits transparency during active operations, even lawful ones. Loose lips sink ships isn’t just an idiom. Remember that war itself is perhaps more art than science, so predictability is inherently questionable at best.

    I’m with you that something can be done now, but perhaps only so long as public sentiment is on the right track. Blaming the military itself wasn’t it. Lower casualty ratios in urban warfare against an embedded enemy almost certainly means accepting higher risk to your own forces. Is that what you’re advocating, and if so, to what degree?

  64. cananon Says:

    Jacob Oertel #62

    “Are we sure that glial cells aren’t relevant?”

    Most neuroscientists are 100% sure that glial cells are relevant. Most theoretical computer scientists are 100% sure that neurons + glial cells are well approximated by a set of ReLUs. Both are most likely right.

    Jacob Oertel #63

    I’m totally impressed by how Gemini plus two hours of human work helped transform my first answer (you’d probably have found it offensive, and rightly so) into something you could respect! In my view this is a direct demonstration of how we poor humans could use AI to pacify and strengthen our discussions, to presumably better orient our decisions.

    “With that in mind, what if opposing forces think they need atrocities for their own optics? What if one opposing force is essentially weaponizing knowledge of the other’s ROE, as well as international law, to try to make any effective strategy look as brutal as possible? I’d be interested to hear your take on these questions and am not asking them to be dismissive.”

    I won’t pretend I’m a guru military expert who can answer that with certainty, but time seems to be of the essence here. What’s the problem with delaying the military response by a few years, while negotiating with Netanyahu and other genocidal minds for the perpetrators of October 7th?

  65. OhMyGoodness Says:

    cananon #61

    “They could have decided that the right to self-defense did not justify the loss of more than one Palestinian civilian for every Israeli casualty.”

    I think this statement presumes an incorrect assumption about the fundamental purpose of Israeli policy in Gaza. The fundamental purpose is not eye for eye retribution (body for x bodies) but to prevent Israeli civilians from facing similar horrors in the future. In order to do that Israel must achieve its military goals with respect to Hamas.

  66. OhMyGoodness Says:

    We all grew up with dogmatic repetition that “the ends DO NOT justify the means.” War is more an expression of the idea that reasonable ends can justify reasonable means.

  67. Jacob Oertel Says:

    cananon #63

    It genuinely warms my heart to see we’re getting more on the same page with the help of AI, and no worries at all; I appreciate your patience as we both find our footing here. So apologies again, since I did come into this a bit heated earlier and didn’t fully grasp how my question might land. That’s also big news to me about the glial cells, wondrously so. I’m certainly no military expert either, and my words only reflect my personal vantage, which is based in open-source intelligence.

    I’d say the initiation of conflict itself boils down to the urgency borne from October 7th, public outrage giving the green light to politicians for a wide range of military options, outrage also giving a red light for no action, and perhaps an excessive proclivity for certainty that has developed over time as opposed to epistemic humility. Decision-makers could often do better in hindsight.

    There may have been serious consideration of the idea that Hamas could move the hostages to parts unknown for better negotiating leverage, and it’s difficult to know whether they’d negotiate in good faith given the horrendous nature of their actions and general strategy for political change. It may be that the options many of us would have preferred to see seemed untenable at the time for those in charge. I know I personally wouldn’t want my leadership to wait years before trying to get back US hostages. All that time comes at huge cost for the victims who are kidnapped. Plus, there is the saying that “you don’t negotiate with terrorists” as a moral stance and I don’t doubt that sentiment was alive and well for some. As Kissinger grimly noted, war happens when diplomacy fails. Tragically, diplomacy/negotiation is perhaps most costly during circumstances like these, but that’s not to say it’s impossible.

    It is somewhat lost on me how horrifically abhorrent things got in Gaza. I understand Israel doesn’t have the same level of affluence that would afford it as many precision-guided munitions as the US. I understand that Israel likely didn’t get the full support it should have in the UN, and the same can be said for Palestine over the years. The roots of this situation extend back decades through decisions no single actor controls. And I personally understand the will to do something in response to an atrocity when it feels like so much of the world is putting a target on your back.

    What I wish I understood better is the lack of corrigibility regarding all political decision-makers directly involved, though only assuming better routes were pragmatically present at the time. I think it’s ultimately due to a vicious loop of excessive polarization that frankly no side signed up for, and negotiating for an end to that loop seems more accessible than ever. I take the genocidal framing allegation seriously, but am not a judge and see that it’s a matter of intense legal debate. Where do you think culpability lands when every decision-maker faced constrained options? And I’m not trying to dismiss your sentiments on the topic.

    I think things can change stably and incrementally for the betterment of all depending on our choices going forward. Do you think polarization in the UN can be tempered such that diplomacy genuinely stands on even ground with the “math of it all” and maybe even surpasses it eventually?

  68. cananon Says:

    Jacob Oertel #67,

    I really appreciate this response and I’m glad we can find this footing together. It’s exactly this kind of dialogue that gives me hope.

    (as for glial cells, a modest gift: https://nouvelles.umontreal.ca/en/article/2026/03/25/in-the-brain-astrocytes-orchestrate-fear)

    To your point about constrained options and decisions made in hindsight: I think we have to look at the decisions made before October 7th. The current leadership didn’t just inherit this situation; they actively engineered the political landscape that allowed it to fester.

    It is a matter of historical record—reported widely in the Israeli press—that for over a decade, Prime Minister Netanyahu’s deliberate strategy was to keep Hamas strong in Gaza and the Palestinian Authority weak in the West Bank. He explicitly argued to his party in 2019 that allowing Qatari money to flow to Hamas was essential to preventing the establishment of a unified Palestinian state. He didn’t create Hamas out of thin air, but he treated a fundamentalist terror group as a strategic asset to divide Palestinians and destroy the peace process. And he succeeded beyond all limits, at a catastrophic cost to Israeli and Palestinian civilians alike.

    When we ask what else could have been done, we also have to ask: what if the Israeli leadership hadn’t spent years propping up the very extremists who carried out the massacre, just to avoid negotiating a two-state solution?

    I don’t claim to have the perfect solution for the future, but I genuinely don’t see a path to lasting peace that doesn’t involve profound, legal accountability—not just for the perpetrators of October 7th, but also for the political architects who deliberately nurtured this powder keg for their own political survival.

    OhMyGoodness #66,

    I actually agree with your framework that in war, reasonable ends must be pursued through reasonable means. But that brings us to the exact core of the issue: the definition of ‘reasonable.’

    My point in listing those ratios wasn’t to suggest the military is intentionally playing a game of ‘eye for an eye.’ The point was to ask: at what mathematical threshold do the means cease to be ‘reasonable’?

    If devastating entire neighborhoods and accepting a civilian-to-civilian casualty ratio of more than 50-to-1—which is not a hypothetical hyperbole, but a conservative mathematical reality we already reached, and which continues to climb—is categorized simply as a ‘reasonable mean’ to achieve a military goal, then the word ‘reasonable’ has lost all meaning. At that point, the concept of proportionality in international law ceases to exist, and any level of civilian slaughter can be retroactively justified as a ‘necessity.’

    Furthermore, we have to ask if these means are actually serving the end goal. If the fundamental purpose is to prevent Israeli civilians from facing similar horrors in the future, we have to look at the historical reality of asymmetric warfare. Obliterating the lives, families, and futures of tens of thousands of civilians doesn’t extinguish radicalization; it acts as an incubator for it.

    We cannot secure a peaceful future by creating a generation of orphans who have nothing left to lose. When the means are this disproportionate, they don’t justify the ends—they actively destroy the possibility of ever reaching them.

  69. OhMyGoodness Says:

    cananon #68

    “If devastating entire neighborhoods and accepting a civilian-to-civilian casualty ratio of more than 50-to-1—which is not a hypothetical hyperbole, but a conservative mathematical reality we already reached”

    I happened to look through some estimates today and found a reasonable estimate to be on the order of a 1-1 civilian-to-combatant ratio. Using this estimate your claim is hyperbole, and I don’t know what “conservative mathematical reality” means. It’s either mathematical reality or it isn’t. There isn’t a known mathematical reality for this operation; there are estimates and there are sentiments. A claimed leak from an Israeli military database, often quoted by Al Jazeera and denied by the Israelis, was about 5-1 (83% to 17%), but not 50-1.

    Hamas does not recognize Israel’s right to exist, has demonstrated the desire and ability to commit monstrous acts against Israel’s civilians, and refuses to disarm. You may have noticed that Islamists are not particularly prone to good faith negotiations. I consider the Israeli response appropriate and measured.

    “But that brings us to the exact core of the issue: the definition of ‘reasonable.’”
    “The point was to ask: at what mathematical threshold do the means cease to be ‘reasonable’?”

    Reasonable in this instance is pursuing military objectives while minimizing civilian casualties to the extent possible. There is no golden universal dogma for the mathematical threshold. In this instance the ratio will be the result of Israeli policy that seeks to minimize civilian casualties while achieving its military objectives.

    In my view it is absurd to claim that Netanyahu has a genocidal mind. Some groups do like simple, impactful labels to be applied broadly. Nazi wasn’t appropriate in this situation, and genocidal must have been next on the list. Gaza has high population density and Netanyahu has access to weapons that could have accomplished genocide in a very short period of time.

  70. cananon Says:

    OhMyGoodness #69,

    Your numbers are wrong (you’re confusing the civilian-to-civilian death ratio with the civilian-to-military death ratio). Unfortunately, I suspect realizing this mistake won’t change your mind at all, much like I don’t expect commenter ‘Israel is very safe’ to come back and admit a slight error in their analysis. The ugly, painful reality is that the IDF’s actions have resulted in a civilian toll more than 50 times worse than October 7th. No amount of statistical and rhetorical gymnastics will hide it from the judgment of history—and, hopefully, from the ICC.

  71. Scott Says:

    cananon #70: The number killed by Americans in WWII was also surely more than 50x Pearl Harbor. I wonder if you’d say on that basis that, foreseeing this, it would’ve been better for the US not to enter the war, or even to surrender to Japan and Nazi Germany in December 1941.

  72. cananon Says:

    No, as WWII was between strong military empires where the outcome was unknown. Applying that 1940s logic to a modern, asymmetrical conflict where one of the world’s most advanced militaries is acting upon a trapped and vulnerable civilian population is a false equivalence.

    Moreover, the horrific civilian tolls of WWII are precisely why those standards were legally and morally rejected by the 1949 Geneva Conventions. We don’t judge modern medicine by 19th-century methods, and we shouldn’t justify modern military actions by pointing to the era of carpet bombing.

  73. OhMyGoodness Says:

    cananon #72

    Sorry, but I have no clue what you are referring to. Here is the Al Jazeera discussion of the civilian-vs-combatant ratio that is the highest I can find. This was based on a claimed leak from an Israeli military database that has been discredited by the Israelis.

    It claims 83% of total Gazan fatalities were Gazan civilians and so 17% (1-.83) were Gazan combatants. The ratio then, based on this discredited data, is .83/.17 or about 5-1. This is the standard civilian to combatant ratio that is cited.

    https://www.aljazeera.com/news/2025/8/21/israeli-data-shows-83-percent-of-gaza-war-dead-are-civilians-report
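
    (To make the arithmetic explicit: a minimal Python sketch of the conversion, with the function name mine for illustration; 0.83 is the disputed leaked figure, and 0.50 is from the alternative estimate linked below.)

        # Convert a reported "share of fatalities who were civilians"
        # into the civilian-to-combatant ratio quoted in these debates.
        def civilian_to_combatant(civilian_share: float) -> float:
            # civilian_share: fraction of total fatalities who were civilians, in (0, 1)
            return civilian_share / (1.0 - civilian_share)

        print(round(civilian_to_combatant(0.83), 1))  # 4.9, i.e. about 5-1
        print(round(civilian_to_combatant(0.50), 1))  # 1.0, i.e. 1-1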

    Here is a different discussion that puts civilians at 50% of total Gazan casualties, so 1-1.

    https://www.mideastjournal.org/post/civilian-casualties-gaza-war

  74. OhMyGoodness Says:

    cananon #72

    The Hippocratic Oath hasn’t changed since 1949, nor has the right of a nation to defend itself from existential threat. The foes of 1949 didn’t, as a matter of policy, embed themselves in civilian populations for protection. If they had, the civilian/combatant ratio would have been higher. Combatants dispersing within a civilian population rely on reasoning like yours to provide a strategic advantage. Full implementation of your reasoning would then allow unlimited future attacks on Israeli civilians without fear of reprisal, due to expected civilian casualties.

    The particular details of war change. The principle of national self defense does not. Israel’s choices are clearly intended to minimize civilian deaths considering the full spectrum of weapons and tactics they have at their disposal. As I noted before, the Israeli goal is to minimize civilian deaths subject to achieving military goals while maximizing Israeli civilian deaths is the military goal of Hamas. That is truly the fundamental asymmetry.

    If we were both transported to the conflict we would be serving different sides. I would be content with the morality of my armed servitude and that I was assisting in creating a better global future. I can’t express my agreement with the Israelis in any stronger terms, or I would.

  75. Jacob Oertel Says:

    cananon #68

    On the glial cells, it’s been a while, but I think I remember a time when the conversation about consciousness was split between a kind of neuronal reductionism and a more holistic perspective. That was maybe about ten years ago and I hadn’t kept up with it since, mostly hoping to one day hear the good news. Thank you for sharing this; it frankly makes me glad of my decision not to chase every question myself or try to keep up with it all via AI, if anything more for the sake of the communal experiences. This news resonates with me in a way that I can’t fully express with words alone, and it leaves me with questions, all posed in good faith and not looking for an immediate answer.

    These are rabbit holes for chalkboards and conversations alike, or is it a rabbit warren?

    Is the simulation of possible consciousness sufficient for it, or is there something about our specific wet-ware that’s necessary? Is consciousness beholden to any specific underlying process that’s inextricably linked to biology? Can it be realized in structural goldilocks zones via laws of physics that have yet to be discovered? And can the realization of consciousness ever be pinned down with absolute precision if it is indeed inextricably linked to qualia? How varied can qualia get when considering things like dreams and ego dissolution? Perhaps ego dissolution itself offers a contrast bias that could indicate deeper insights?

    These are just the current big ones to me; the biggest question here, in my opinion, is which can even be answered, and which are more so great motivators for new questions. I think this big one might actually be answerable soon, which is a reckoning for humanity that I believe is due. It could help show how science and theology aren’t necessarily mutually exclusive, instead sharpening one another, as Ariel and Will Durant said. We could find the throughlines in all great philosophies, connect them to concepts in modern academia with precision, and properly honor both our similarities and our differences. Suffice it to say, and hopefully as clearly indicated, I am elated by this news on the research front. Celebrating the yields of the collective efforts in science is perhaps just the type of levity we can hopefully find some peace in, such that the terrible weight of our other line of discussion doesn’t crush us.

    (Also, apologies for the numbering error earlier. That should have been #64, not #63)

  76. Jacob Oertel Says:

    cananon #68

    Ah, I’m unsure if there’s been a miscommunication again, and again I’d say that’s on me. I just want to be clear that when I mention hindsight, I mean the benefit we have now of being able to look back on the chaos with clarity. Decisions are locked-in once made, compatibilism’s way of preserving liberty alongside determinism. I only say this to be explicit about my belief in accountability as I try to keep one foot out of the abyss of analysis. If you see me failing, please call me out both earnestly and explicitly. If anything, I’d like to think that I’m corrigible without being unnecessarily pedantic. At the very least, that’s the goal and I’m serious about it.

    While I personally think that actively facilitating such levels of territoriality is absolutely disgusting behavior, the sheer grimness of the circumstances demands that care not become evasion. That being said, to discuss the media landscape itself in depth may distract from the conversation, so I’ll try to focus on the information you’ve given me thus far. Also, I have a personal experience that I believe is worth sharing because the roundabout leads to what I think is a bottleneck worthy of open discussion.

    Regarding the 2019 report, I’ll grant the factual core rather than parse the sourcing, because I don’t think the sourcing is where the real question lives. Single-source factional reporting from closed meetings is what we have for most of what gets said in political backrooms, and holding it to a higher bar than we hold comparable reporting about other leaders would be selective skepticism on my part. What I want to question instead is what the pattern, even taken at face value, actually demonstrates. And that’s where I’d like to meander a bit, if you’ll afford me the space.

    Primary leadership for any group faces constraints in decision-making that those who don’t bear the responsibility often can’t perceive from outside. Decision-makers on opposing sides sometimes cooperate in ways their own constituencies couldn’t stomach at the time, polarization itself being more structural than individual. And sometimes things escalate anyway, despite genuine intent toward a smoother ride. None of this dissolves the question of responsibility; it reshapes it. What we don’t yet have are the analytical tools to handle the reshaping well, and that’s what I’d like to get to. I’m taking the “slow is smooth and smooth is fast” principle to heart here, because the next part of my answer takes a longer route that I hope will be worth the patience.

  77. Jacob Oertel Says:

    cananon #68

    The media landscape shows that you’re not wrong that a narrative of intentional territoriality has been relatively well established. It’s interesting that, since October 7th, Netanyahu has changed his narrative in a similar ad-hoc manner. One could say the military already had a “maximally effective” strategy ready in short order and Netanyahu didn’t hesitate in selecting that option given his constraints. So far, all things considered, I don’t envy his position. Since many Israeli voters consume English media, the broader English-language media landscape is part of the picture. And as a US citizen, the American slice of it is what I can speak to.

    And this is where I hope I’m still afforded that tangent I mentioned earlier as it will require us to meander a bit. Affective polarization is something that has gotten out of control. And your mention of a decade had me recalling my own media consumption 10 years ago. In the US, this was when Clinton and Trump were running against one another and I had just left the service. I was also going to my local community college, so my first semester of higher education was during a rather polarized time. That’s at least what it felt like. The military is supposed to be apolitical, which had driven me toward international affairs, so the whiplash of cognitive dissonance was harsh given the quality of education I was expecting. Still, naivety and an allergy to polarization are likely two of the biggest personal issues I have, and I’ve been endeavoring to correct this.

    This next part may sound strange, but I was curious about the “what happened to my country?” question in my early 20s, and 2016 is when I started looking closely. There was a particular moment during that cycle when a large trove of leaked political emails was published online and various communities began dissecting them, curating lists of the “most damning” selections for public consumption. Out of curiosity I examined the underlying source of the emails in one of those curated lists and noticed that the flagged selections were missing their DKIM signatures. I didn’t know what to do with that observation at the time. The site hosting the list went down shortly after the election.

    What I watched happen next was the seeding of narratives that took root across fragmented online communities and then moved into mainstream discourse far faster than institutional corrections could follow. The questions it left me with are genuine, not rhetorical. How much of a lynchpin moment do you think the initiation of technologically amplified memetic psyops was in the evolution of civilizational polarization, regardless of who did it? Especially coming after everything that happened since 9/11, which itself has roots in the posture the Wolfowitz Doctrine articulated? I’m not here to relitigate attribution; the mechanism matters more than the attribution, and the mechanism is still operating. Technology can still be dangerous in ways we don’t yet fully understand.

    And for what it’s worth, I think military leadership in the United States could have done better than the Wolfowitz doctrine by realizing that humanity was heading into a multi-polar world, and trying to fight that is like trying to fight the ocean. At the same time, it was only about a decade after the doctrine was leaked that 9/11 happened. Ten years doesn’t seem like enough time for a global superpower to change its institutional habits of thought, especially after decades of bipolar Cold War conditioning, but that doesn’t mean the future isn’t still subject to change.

    When you mention political architects, you may be straying from Occam’s Razor, though I wouldn’t blame you. Still, I think ambition sharpening ambition is the cleaner explanation, however much it empties out the specific enemies we’ve been conditioned to hate, however painful and time-consuming it is. It’s the best explanation we’ve got unless someone can get to the bottom of Chalmers’ hard problem.

    There’s perhaps a harder version of this question I genuinely don’t know how to answer: whether some people experience safety through antagonism, and what that would mean for any project of de-escalation. I don’t claim to know, and the science is beyond my expertise.

  78. Jacob Oertel Says:

    cananon #68

    Or have we just found a threshold where the rationality of analysis and the irrationality of cognitive dissonance must somehow learn to coexist within the self? If these are too abstract or if I seem too detached and global, please rein me in. I’m no authority, and still kind of feel like I just got here at the age of 32. Doing this with no degree takes a lot of time to figure out, maybe a decade. Learning how to explain it while being less polarizing, when really Moloch is the source of most of my frustrations, took about one year of practicing research to learn collaboration skills. And then came the horrifyingly tragic wake-up call of February 28th. Now, I can’t ignore the thought that we’re perhaps learning how to collaborate through what Claude called Moloch, heartbreak and beauty as two truths of the same reality rather than competing claims about it, which brings me back to the specific case.

    Maybe Netanyahu is the only one who can fully know what was going on in his head at the time. I fear the questionable reliability of our news outlets may disqualify much of what we might think will endure the historical record over the past decade. What I’m actually reaching for here isn’t grace-as-absolution; I still believe in accountability, as I’ll say again in a moment. What I’m reaching for is that our existing accountability architecture tends to be binary: prosecute or protect, condemn or absolve. That binary may be inadequate to problems operating at Moloch’s scale, because total destruction of any particular decision-maker is itself an anti-learning force that makes future disclosure and course-correction less likely. What I’d genuinely like to see, and this is aspirational since the mechanisms don’t yet exist or aren’t well-defined, is accountability architecture that could let decision-makers disclose, differ, and be wrong about hard things without triggering total destruction. Truth and reconciliation processes gesture at the shape but usually only arrive after the fighting stops. The missing piece is something that could operate during the fight, and I feel like we could at least learn faster if it existed.

    I believe accountability can and should be spread far and wide from time to time; it’s the whole question of free will itself that’s its own structural phenomenon, not just polarization. I take the hard problem of consciousness as the current edge where metaphysical clarity runs out. Rather than let that unresolved question paralyze my ethical commitments, I made a deliberate choice to act as a compatibilist, to hold that free will and accountability can coexist with whatever the underlying physical story turns out to be, rather than wait for the story to resolve.

    I think that’s important to emphasize because, even after all this, I steadfastly still believe in accountability while my heart aches for all of those involved. Things can be grim, but hopeful too sometimes. Maybe if we see our exchange thus far as a microcosm of where the global zeitgeist’s sentiments could go, it’s at least showing how change for the better is possible through the sheer power of humanity’s corrigibility?

    At the end of the day, it could have very well been a combination of status-based decisions, seemingly benign at the time, compounding into escalation via intelligence gaps that could just as well have been communication getting lost in translation. Given everything there is here, how plausible do you think this is?

    Thoughts and reflections in general are welcome. My sense of urgency and eagerness for a robust resolution led me to write quite a bit to try to fully capture the point I’m trying to make, but I think it’s an important one.

  79. cananon Says:

    Jacob Oertel,

    I see you, friend. Many of your thoughts echo my own, especially the mix of surprise and genuine excitement that we’re entering a unique window in history—one that may either finally answer these fundamental questions about consciousness or prove they were never technical in nature at all. For now, my bet is on Dennett.

    (don’t worry about the numbering; I’ve noticed it sometimes shifts afterward)

    You asked how plausible I think it is that this catastrophe was a combination of compounding status-based decisions, systemic polarization, and intelligence gaps. My honest answer: for the tactical failures on the morning of October 7th, that is highly plausible.

    But for the *macro-strategy* that led to that morning? It is entirely implausible.

    The decade-long policy of permitting funds to flow to Hamas while systematically weakening the Palestinian Authority wasn’t a “communication gap” or an institutional habit formed by accident. It was a calculated, deliberate architectural choice designed to block the two-state solution—the final nail in Yitzhak Rabin’s coffin.

    We can always attribute the current grim reality to Moloch, and perhaps one day we will even be able to measure the exact culpability of social media in the new rise of far-right ideologies and the normalization of human rights violations. But systems don’t strike matches; people do. Certain leaders, and those who vote for them, intentionally chose a cognitive map where not all human lives hold the same value. They chose to live in an apartheid state rather than accepting the peace Rabin was so close to reaching.

    Holding the architects of this disaster accountable—not just for how they fight the war, but for how they deliberately built the powder keg before it—is the only way I can see to reduce the level of hatred these crimes will predictably generate for decades to come. But I welcome any better solutions, and deeply respect your honest attempt to find them.

    OhMyGoodness #74,

    If our little chat makes you feel like killing me on a battlefield, please seek help for that anger. Annexing Canada is a Republican nightmare: too many Blue votes and Trudeau might run for the White House. Breathe.

  80. OhMyGoodness Says:

    cananon #79

    “feel like killing me on a battlefield”

    I hadn’t thought of that, but thanks for that type of response.

  81. cananon Says:

    Jacob Oertel,

    To take a step back from the immediate geography of our discussion, I’ve been thinking about the ‘Accountability Architecture’ you mentioned through the lens of South Korea’s recent crisis.

    From a domestic perspective, the ‘rational’ military argument was that South Korea should develop its own nuclear weapons to deter the North. It seemed like a logical response to a clear existential threat. But then we saw President Yoon Suk-yeol’s recent attempt to invoke martial law to crush his domestic political opposition, under the guise of fighting ‘pro-North elements.’

    It gives one retrospective night sweats. Imagine if that specific leader, in that specific moment of domestic desperation, had possessed a sovereign nuclear arsenal.

    It reinforces the point about the danger of the ‘Architects.’ When we give a leader the tools of total destruction based on a shared national threat, we gamble on their personal corrigibility. If the leader is willing to burn the house down to stay in power, then the weaponry doesn’t provide security; it provides a hostage-taking mechanism against his own people and the world.

    In South Korea, the democratic institutions (the glial cells?) were strong enough to flush out the toxin before it turned fatal. In other places, the architects have successfully dismantled those regulatory structures.

    How do we build a system that robustly recognizes when a leader’s survival instinct has become more dangerous than the enemy he claims to be fighting? When an entire political movement seems to have lost its collective mind to the point of following a con man and sexual predator who claims he is the one who can ‘End the Forever Wars’ and expose the truth about Epstein files?

    Here’s what Gemini suggests: *Independent Threat Auditing* to declassify fabricated threats in real time, preventing leaders from hijacking security for survival; *Negotiated Off-Ramps*, where legal safe passage is a negotiable asset granted only for a peaceful transition; *Algorithmic Guardrails* to treat social media as a public health hazard and stop identitarian psyops; and a *Non-Binary Ethics* that systemically rejects the ‘lesser of two evils’ logic used to justify predatory leadership.

    Does that sound like a plan? I don’t think so. The hard part isn’t imagining a system that works in theory; it’s figuring out how to implement it in the real world when the very people who need to be restrained are the ones holding the keys to the room.

  82. Jacob Oertel Says:

    cananon #79

    I see you too, and the distinction you just drew between the morning of October 7th and the decade before it is one I take seriously and really appreciate. You’re right that systems don’t strike matches and that’s a point worth sitting with. Moloch shapes the incentive landscape in which choices happen, but the choices are still made and the choosers are still accountable for them. Structure narrows the option set; it doesn’t select within it. Safety improves when we keep plural perspectives in the deliberation rather than collapsing prematurely into a single authoritative view. I believe the horrors Moloch brings to bear on humanity are better inhibited with that approach. And my bet goes the other way, but I respect Dennett.

    Rabin’s assassination was its own inflection point, itself a product of polarization dynamics, and the decade that followed does look like locked-in status incentives compounding once the off-ramp was gone. I think we can grant that without dissolving the architectural question because Moloch narrows the option set but doesn’t select within it, and the selections made in that decade are ones the selectors are accountable for. I feel that gets me closer to your reading than I was before, though probably not all the way there. The apartheid framing is a serious claim with serious legal arguments on both sides, but I’m not qualified to adjudicate the terminology.

    Your mention of a cognitive map leaves me with a question that I think is worth sharing: How should we aim to shape our intentions when choosing our cognitive maps?

  83. Jacob Oertel Says:

    cananon,

    Before I build on what I said last time, I want to name something about the question I closed with. It reads as an ordinary open question, but I think it’s a different kind, one where the asking is itself part of the shaping it’s asking about. You can’t answer it from outside the practice of shaping, because the answering is already a form of the practice. It’s closer to a discipline you inhabit than a problem you solve. Your phrasing about “cognitive maps where not all human lives hold the same value” is that discipline being exercised, and it’s part of why the question leapt out at me when I read your comment. I’d personally call that question something of a “singularity question” because of its self-generating nature; this one in particular is possibly about ethics.

    I want to land somewhere more specific than I did last time, now that I’ve sat longer with all you’ve written. My structural frame and your architectural one aren’t actually in tension. They’re the same mechanism operating at different scales. Structure narrows the option set; architects narrow other people’s option sets; the architects operate within structure and also channel its pressure onto everyone downstream. That makes architects more accountable, not less, because their choices propagate outward into spaces other people then have to live and die inside. The decade of policy you named isn’t one isolated 2019 statement; it’s a pattern documented in Israeli reporting, with warnings from figures like former Shin Bet chief Nadav Argaman in 2019 and earlier opposition from former Mossad chief Tamir Pardo and former Shin Bet chief Yuval Diskin, and with a 2014 Saudi offer to rebuild Gaza under a reformed Palestinian Authority that Netanyahu reportedly thwarted. That isn’t structure drifting. That’s architects choosing, repeatedly, across years. Accountability for those architects isn’t a distraction from structural work; it’s part of how structural pressure actually loosens.

    But if we’re going to apply this frame honestly, it has to cut symmetrically, and this is where I want to raise something I think has been underengaged in our exchange. Hamas embedding itself within civilian populations is the same kind of architectural choice at a different scale. It narrows the IDF’s option set while maximizing civilian exposure, and it does so by design — not as an unfortunate operational reality, but as a deliberate tactic that predates October 7th and has been defended internally as strategy.

    The civilian toll in Gaza has two architects, and accountability language that points only at one of them is doing selective work. I don’t raise this to flatten the moral asymmetry between a state actor and a militant group; they’re not equivalent, and the IDF’s scale of force creates its own separate accountability. I raise it because the structural frame we’re using doesn’t permit holding Netanyahu architecturally responsible for strategic choices and then treating Hamas’ tactical choices as mere context. Both are architectural. Both are choices that narrowed other people’s options downstream. Both deserve specific accountability from the frame we’re building. Otherwise the frame is doing politics dressed as analysis.

    Which brings me back to the 175 dead schoolgirls in Minab, who deserved better and still do. I don’t want the frame we’ve been building to drift away from them. They’re the reason any of this matters right now, and they’re also why the frame has to cut in every direction it honestly applies. The architects who built the powder keg before October 7th share accountability with the architects who chose embedding as strategy, and both share it with the chain of decisions that produced a stale intelligence file and a Tomahawk strike on a compound that had stopped being a naval base a decade earlier. A frame that can see all of that at once is a frame worth having. A frame that only sees some of it is a frame that will keep failing the people it claims to be serving.

  84. Jacob Oertel Says:

    cananon,

    I’ll switch gears to specifically address more of what you said in #81, though I first just want to say these topics are so heavy that I personally like to write something, game it out with AI, sleep on it, read it with fresh eyes, and then polish again before posting. I do what I can that’s cathartic, and try to maintain enough levity to be morally serious without letting the sheer vertigo of complexity overwhelm me. It can even be healthy sometimes to chuckle at the sheer absurdity of it all, not as making light of the tragedy, but as a kind of ballast against being crushed by it. Life is strange, so I pick up the days, tuck them away, and hope to stay sane.

    This is part of what I need in order to do my best here, so maybe expect a bit of the latency I was earlier trying to call out in institutions (ironic). Unfortunately, competitive pressures have made this level of care difficult to sustain. AI can help, but it does still take quite a bit of time and may come at great personal cost, though ideals don’t necessarily get uprooted in the process. The jury’s still out on that. An “ethical singularity” is always active or latent within each of us, and such questions can be either a portal or a mirror depending on how one approaches them, sometimes eventually both. One can practice sharpening a vibe all one’s life; I try to sharpen one that cuts internally just as much as it does externally.

    That’s a wonderful metaphor with the glial cells, and I find the question is then one of how robust it is. Robust analogies are going to matter more going forward, and I think it’s a capacity worth cultivating deliberately. I’ve been trying to cultivate this myself and there are a few exemplars worth mentioning here: Hughes recommended a poem a day, Einstein pushed for no more complexity than needed, Twain’s prose practiced deep simplicity, and Curie’s two Nobels in different fields showed that plurality and depth aren’t opposed.

    That being said, glial cells are necessary but not sufficient; they can be overwhelmed, they can be damaged, and the body’s response to overwhelming neurological insult is sometimes more glial cells getting recruited from elsewhere. So the question becomes: when local institutional immunity fails, what’s the equivalent of recruited reinforcements? Another question: is a single metaphor sufficient to represent all forms of governing? The brain, body, soul, maybe some combination, none of those, all of them, or what? That last one is a sloppy question, but I think AI would likely have some fun with it within the right context. Constitutional AI exists, which is already one attempt at this.

    All that being said, the South Korea case is something I want to take seriously, so I’ll need a bit of time there.

    A weird question: Are “singularity questions” like these themselves kind of Taoist in the sense that they’re generative? What other throughlines might there be?

    And as a closing side note, the ‘it takes a village’ sentiment might be the oldest folk expression of non-binary accountability, and it’s worth taking more seriously than the cliché usually gets.

  85. cananon Says:

    Jacob,

    Indeed, there’s no need to rush. I’ve just subscribed to your Substack using my institutional email, so worst case, feel free to send a hey if I miss your next post.

    But I was sloppy on Dennett; my present position is more compatibilist than I made it sound.

    For a long time, I was frankly the obtuse anti-religious kind, with a quiet certainty that consciousness is a functional illusion and free will a retrospective narrative the brain tells itself. Like Laplace answering Napoleon, who had asked where God was in his celestial mechanics: ‘Majesty, I had no need of that hypothesis.’

    Life led me to soften that position, because I came to realize that religious people sometimes carried truths my framework couldn’t find as easily. And more importantly, I found out about quantum mechanics.

    What QM teaches me is that the great metaphysical hypotheses — classical determinism, quantum indeterminism, non-local consciousness, free will — are all empirically underdetermined. No experimental measurement can definitively separate them. Yes, that means that classical determinism will never be invalidated as long as quantum mechanics remains the best descriptor of our universe. But that’s equally true for a fundamentally non-deterministic universe intimately linked to consciousness. And of course there are dozens of other hypotheses between these poles, all capable of coexisting peacefully with our best experimental data, each useful for some instrumental perspective.

    Faced with this radical underdetermination, I progressively began to recognize that choosing one’s interpretive framework is a legitimate act — not an intellectual abdication, but an assumption of responsibility in the face of what physics does not settle.

    And some days, I go a little further. The very fact that the universe permits these multiple interpretations without ever resolving them — that it leaves us free to choose between the beauty of a determinist palace and the freedom of chaos, without anything in the data forcing our hand — that silence itself feels like an intention. As if freedom to find Her were itself the point.

  86. Jacob Oertel Says:

    cananon,

    Thank you for subscribing, and I’ll keep the side channel in mind. That means more than one line can hold, so I’ll leave it at one and trust you to hear the rest.

    On Dennett, appreciated; your current position is close to how I was reading your earlier comments too.

    The QM underdetermination move is one I want to sit with because I recognize the shape of it from a different road. My version of the gap has been qualia, the hard problem as the place where third-person description keeps failing to catch the something-it-is-like-ness, and where no amount of further functional detail closes the distance. For a long time I held a similar flavor of “there’s no room for free will in the space of possibilities, oh well” like your earlier self, and then I started to notice that qualia itself perhaps functions as a marker for the same kind of underdetermination you’re naming in physics. It’s a place where empirical science may never finish, not because we haven’t run enough experiments, but because the structure of the question resists what empirical methods can reach. Two gaps that seem related converging on the same posture is always wondrous. Compatibilism and panentheistic panpsychism were where that posture eventually landed for me, with qualia as the nexus that licensed a leap without prescription. What you’re calling the legitimate act of choosing an interpretive framework is the same operation I’d call taking the leap at qualia; we’re perhaps just naming different instances of the same underdetermination. And there’s something quietly hopeful in the symmetry, that two roads arrive at the same clearing without needing to flatten each other’s vocabulary on the way.

    The “Her” with the capital H landed. I won’t try to translate it into a tradition that isn’t yours; I’ll just say that the shape of my belief is compatible with what you’re gesturing at. The universe permitting multiple interpretations without resolving them, freedom to find Her being part of the point, it’s close enough to how I gently hold these spaces that I feel I recognize the territory even if the map legend is different.

    Regarding the harder question you left at the end of your earlier comment, “the very people who need to be restrained are the ones holding the keys to the room,” I’ve been sitting with this one and I don’t think there’s a clean theoretical answer. Every honest attempt I can make eventually lands in the same uncomfortable place: the restraint has to come from outside the room, from public sentiment channeled well enough to reach inside. Which puts some of the onus back on the public for choosing its thought leaders carefully, and on the thought leaders themselves for taking the weight of that choice seriously rather than treating it as performative.

    Non-binary accountability isn’t only about more-than-two options at the architectural level; it’s also about recognizing that accountability lives in a distributed network where the public is a node, the media is a node, the institutions are nodes, and each node shapes the whole system’s capacity to course-correct before the keys matter. The implementation gap you named closes, when it closes at all, through slow accretions of cultural norms and specific people inside systems acting on conscience when formal structures waver. The South Korea case you raised is the clearest recent image of what that looks like when it works: the National Assembly legislators who physically climbed walls in the middle of the night to vote down Yoon’s martial law decree are exactly the distributed-network version of accountability doing its job under pressure. The formal institutions alone wouldn’t have held. It held because individual humans inside the network made specific choices, and the network as a whole was still coherent enough to let those choices count.

    The four-point sketch you surfaced from Gemini is something I think deserves more room than a comment can give it, so I’ll come back to that material in a Substack piece rather than in thread. The short version is that Independent Threat Auditing probably needs to be reframed as threat verification rather than fabrication-catching while also considering what may or may not be feasible in terms of opsec, Negotiated Off-Ramps is the strongest item and part of what I was reaching for with the binary-accountability critique earlier, Algorithmic Guardrails likely need recursive guardrails of their own, and a Non-Binary Ethics of this kind perhaps works better as “refuse premature binary framings as the default” than as “reject lesser-of-two-evils logic categorically.” Each of those deserves its own careful treatment, and stacking them into a single comment would flatten what each is actually asking. Since you’ve already subscribed, you’ll find them there when they land.
