In July 2025, the Pentagon awarded contracts worth up to $200 million each to Anthropic, OpenAI, Google, and xAI to supply advanced AI models to the Department of Defense. What happened over the following eight months, culminating in a public confrontation, a supply-chain risk designation, and a federal lawsuit, is the most consequential conflict in the short history of AI governance. It is not primarily a story about weapons or surveillance, though those are the surface issues. It is a story about who gets to set the limits on the most powerful technology in human history: the companies that build it, or the government that deploys it.
The Timeline: How the Fight Actually Unfolded
- July 2025: Pentagon awards contracts worth up to $200 million each to all four major US AI labs, with minimal specificity about use restrictions.
- December 2025: Anthropic begins negotiations with Pentagon on specific use limitations. Anthropic agrees to allow AI for cyber defense, missile defense, and intelligence analysis, but draws three red lines: mass domestic surveillance, fully autonomous weapons, and high-stakes automated decisions without human oversight.
- February 18, 2026: Emil Michael, the Pentagon's Under Secretary of Defense for Research and Engineering, states publicly that the Department needs all AI providers at 'the same baseline,' meaning all lawful uses must be permitted. Michael tells AI companies that DoD's own ethical safeguards must override company safeguards.
- February 26, 2026: Dario Amodei issues a public statement reaffirming Anthropic's red lines. 'I believe deeply in the existential importance of using AI to defend the United States and other democracies. However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.'
- February 27, 2026: Defense Secretary Pete Hegseth gives Anthropic until 5:01 PM ET to capitulate. Anthropic does not. Hegseth designates Anthropic a 'supply-chain risk,' a classification that prohibits new federal contracts and requires existing ones to be wound down over six months.
- February 28 – March 2, 2026: OpenAI signs a classified Pentagon deal, framed as providing 'stronger guardrails' than prior arrangements. The deal explicitly prohibits mass domestic surveillance, the direction of autonomous weapons, and high-stakes automated decisions without human oversight, the same red lines Anthropic held.
- March 4, 2026: Anthropic receives formal supply-chain risk designation letter.
- March 5, 2026: Dario Amodei announces Anthropic will sue the US government, arguing that the designation was retaliation for protected speech. The suit itself is deliberately narrow in scope.
The Core Dispute: What the Pentagon Actually Wanted
The Pentagon's demand was explicit: AI vendors must allow 'all lawful use' of their technology. The phrasing is carefully chosen. The Department of Defense points to its own directive on autonomy in weapon systems (DoD Directive 3000.09), which requires 'appropriate levels of human judgment over the use of force,' as evidence that responsible deployment is already covered by government policy. Pentagon officials argued that Anthropic's company-level safeguards should not override DoD's own policies: that it is the government's prerogative to define what is permissible under its own authority. Anthropic's counterargument was equally clear: the government's definition of 'lawful' is not the same as ethical or safe. 'Lawful' mass domestic surveillance and 'lawful' autonomous weapons targeting are both uses that Anthropic believes fundamentally undermine democratic values, and their legality does not change that.
The OpenAI Deal — Is It Actually Different?
OpenAI's signed agreement includes explicit language prohibiting mass domestic surveillance, the direction of autonomous weapons, and high-stakes automated decisions. On the surface, these are the same red lines Anthropic was defending. The Electronic Frontier Foundation and several legal scholars have called OpenAI's contract language 'weasel words,' noting that qualifiers like 'as appropriate' create ambiguity that shields the government from accountability for violations. The critical technical difference is architectural: OpenAI's deployment is cloud-only, meaning the models run on infrastructure OpenAI operates, which keeps its personnel in the security loop and gives the company some visibility into how the models are used. But intelligence agencies have historically found ways to work around such arrangements, and the enforceability of contract language against classified national security operations is genuinely untested in court.
What This Means for AI Users and Businesses
- Anthropic's Claude is barred from US government use for as long as the supply-chain risk designation stands. Federal agencies must wind down existing Claude deployments within six months. For government contractors and regulated industries that depend on federal approvals, this matters directly.
- The fight revealed that Claude was already deployed on classified government networks. Amodei confirmed that Claude was the first frontier model on classified networks and was 'extensively deployed across the Department of War and other national security agencies.' That deployment is now being unwound.
- The precedent affects every AI company. If the government can designate an AI company a 'supply-chain risk' for maintaining safety guardrails, every company with government contracts faces the same pressure. The incentive for AI labs to accept military use without restrictions has increased significantly.
- Anthropic's legal challenge may reshape AI governance. If the federal lawsuit succeeds, it could establish that AI companies have the right to maintain safety restrictions on their products even when deploying to government clients — a major shift in the current power balance.