Something unprecedented in American military history happened in March 2026: the technology companies that built the AI tools millions of Americans use every day (to write emails, help their kids with homework, and plan vacations) became active participants in US military operations against Iran. The exact nature of AI's role in the conflict remains partially classified, but the public record is damning enough: OpenAI quietly signed a defense contract that prompted a mass user exodus. Anthropic, the company that built Claude specifically to be 'helpful, honest, and harmless', found itself described in MIT Technology Review as 'turbocharging US strikes on Iran'. And the resulting public backlash produced the largest protest against AI companies in history.
The Pentagon-OpenAI Deal: What We Know
In early 2026, OpenAI signed a contract with the US Department of Defense that gave American military planners access to the company's frontier AI capabilities for operational use. The specifics of the contract were not fully disclosed, but reporting indicated it covered intelligence analysis, target prioritization support, and logistics optimization for ongoing military operations, including those related to Iran. The announcement came with minimal advance notice to the public or to OpenAI's user base; many users first learned of it through coverage of the 'QuitChatGPT' or 'QuitGPT' movement that erupted on social media within days.
The Anthropic Situation: The Safety Company Goes to War
Anthropic's involvement is in some ways more jarring than OpenAI's, because Anthropic was founded specifically in response to safety concerns about frontier AI development. Its founding principle was that powerful AI should be developed carefully, with safety and ethics as first-order concerns. The company's Constitutional AI approach, Claude's trained aversion to harmful outputs, and Anthropic's public communications had positioned it as the responsible alternative to OpenAI's more aggressive deployment approach. MIT Technology Review's characterization of Anthropic as 'turbocharging US strikes on Iran', the precise framing used by its AI Hype Index newsletter, cut through that positioning entirely.
- What Anthropic has disclosed: Anthropic has confirmed that it holds government contracts but has declined to specify how Claude is being used in defense contexts. The company maintains that human oversight requirements are built into all deployments.
- The Constitutional AI problem: Anthropic's training approach was specifically designed to make Claude resist harmful requests. Military targeting assistance — even when framed as 'intelligence support' or 'logistics optimization' — sits in clear tension with the values Claude was trained to embody.
- The employee response: Anthropic, like OpenAI, has faced internal employee dissatisfaction over military contracting. Multiple employees have spoken anonymously to journalists about concerns that the company's public safety positioning is inconsistent with its defense contracting decisions.
- What users actually experienced: Claude did not change its behavior for civilian users as a result of military contracting. The concerns are about what separate, government-specific deployments of AI systems are authorized to do — not about the consumer product.
The QuitGPT Movement: When AI Users Organized
The public response to AI military contracting was the most organized consumer AI backlash since ChatGPT launched. The 'QuitChatGPT' hashtag trended on multiple platforms. Users posted screenshots of their account deletion confirmations. Tech workers circulated open letters. Most significantly, a Reuters/Ipsos poll finding that 59% of Americans did not support strikes on Iran gave the backlash mainstream moral grounding: this was not a fringe activist position but a majority American sentiment, and the AI companies were now, effectively, participants in an unpopular war.
- Scale of the boycott: the QuitGPT movement produced meaningful subscription cancellations across multiple AI platforms. The exact numbers have not been publicly disclosed, but social media analysis suggests tens of thousands of active users participated in public account deletions.
- The London protests: demonstrations against AI companies' military ties reached the streets of London, producing what organizers and journalists described as the largest protest against AI companies in history. The protests targeted the offices of AI companies with US military contracts.
- Claude as beneficiary: in an irony not lost on observers, some QuitGPT participants switched to Claude — the Anthropic product — without knowing that Anthropic also had defense contracts. Claude briefly hit number one in the US App Store during the peak of the boycott.
- What the boycott actually achieved: the short answer is not much, immediately. OpenAI's and Anthropic's enterprise and government revenue is structured to be largely insulated from consumer subscription pressure. But the reputational and regulatory implications are longer-term and less predictable.
AI in Combat: What Is Actually Being Used and How
Cutting through the public debate to what AI actually does in the current military context requires distinguishing the most alarming hypothetical uses from the documented current ones. As far as public reporting indicates, the US military's AI deployment in the Iran conflict does not involve AI autonomously selecting targets or launching weapons. What is documented is more mundane but still significant: AI-assisted intelligence analysis, processing of sensor data from surveillance assets, logistics optimization, and communications analysis.
- Intelligence analysis: AI systems that process large volumes of signals intelligence, imagery, and communications data faster than human analysts. This is the clearest documented use — AI accelerating the intelligence analysis pipeline, not replacing human judgment at the point of action.
- Target support analysis: the most controversial documented use. AI systems that analyze potential targets against targeting criteria and international humanitarian law requirements. Human sign-off is required, but the AI generates the analysis that humans review (a simplified sketch of this human-in-the-loop pattern follows this list).
- Logistics and supply chain: AI optimization of military supply chains, fuel logistics, and equipment maintenance scheduling. Less controversial, less headline-grabbing, and probably the highest actual operational value use of AI in military contexts.
- Cyber operations: AI-assisted offensive and defensive cyber operations — generating and detecting malicious code, identifying vulnerabilities, and responding to Iranian cyber attacks on US infrastructure.
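To make the 'human sign-off' distinction concrete, here is a minimal illustrative sketch in Python of what a human-in-the-loop gate means structurally. It describes no disclosed system; every name in it (TargetAnalysis, ReviewDecision, action_authorized) is hypothetical. The structural point is that the model's output is an input to a human decision, never a trigger for action on its own.

```python
# Hypothetical illustration only; this describes no disclosed military system.
# The structural point: AI output is an input to a human decision, and the
# human decision is the only path from analysis to action.

from dataclasses import dataclass
from typing import Optional

@dataclass
class TargetAnalysis:
    """AI-generated assessment that a human analyst must review."""
    subject: str
    model_assessment: str   # the AI's analysis text
    confidence: float       # model-reported confidence, 0.0 to 1.0
    legal_flags: str        # notes against targeting-criteria checks

@dataclass
class ReviewDecision:
    """A named human reviewer's explicit, recorded decision."""
    approved: bool
    reviewer_id: str
    rationale: str

def action_authorized(analysis: TargetAnalysis,
                      decision: Optional[ReviewDecision]) -> bool:
    """Authorize action only on explicit human approval.

    Note what is absent: no branch authorizes action from the model's
    confidence score or assessment alone.
    """
    if decision is None:
        return False          # AI analysis by itself never authorizes anything
    return decision.approved  # authority rests entirely with the human reviewer
```

The controversy in the documented cases is not that this gate is absent; it is that the AI increasingly shapes what the reviewer sees before reaching it, which is why 'human sign-off is required' reassures some observers and not others.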
Pro Tip: For users trying to make informed decisions about which AI platforms to support in light of military contracting, the most reliable sources are each company's published transparency reports and government contract disclosures. OpenAI, Anthropic, and Google all publish some information about government partnerships. Reading these primary sources is more reliable than social media accounts of what these companies are doing militarily, which range from accurate to significantly exaggerated in either direction.