AI Guide · Aditya Kumar Jha · 15 March 2026 · 18 min read

AI in Modern Warfare: How Operation Epic Fury Revealed the Future of Conflict — And What It Means for Every Human on Earth

On February 28, 2026, the US and Israel launched a joint offensive against Iran that military analysts are calling the first AI-native war. 900 strikes in 12 hours. LLMs processing satellite feeds in real time. Autonomous drones navigating GPS-denied environments. Deepfakes flooding social media. This is the complete, unflinching analysis of AI in warfare in March 2026: the technology, the ethics, the geopolitics, and the terrifying future it is accelerating toward.

On the morning of February 28, 2026, Tomahawk missiles began striking Pasteur Street in Tehran. What followed over the next 12 hours was unlike any military campaign in human history: nearly 900 strikes on Iranian targets, executed by a combination of stealth bombers, cruise missiles, and suicide drones, at a tempo that in any previous conflict would have taken days or weeks. US and Israeli commanders did not achieve this scale through traditional military planning cycles. They achieved it because artificial intelligence had compressed the entire decision-making pipeline — from satellite imagery to target identification to weapons release — from days into minutes.

Operation Epic Fury, the codename for the joint US-Israeli offensive against Iran, is being described by military analysts, ethicists, and AI researchers as the defining moment in the history of AI-enabled warfare. Not because it is the first conflict to use AI — AI has been deployed in Ukraine, Gaza, and dozens of smaller operations for years — but because it is the first in which AI systems operated at every layer of conflict simultaneously: intelligence gathering, target generation, drone navigation, logistics coordination, and information warfare. The Iran conflict has done what no conference, white paper, or policy statement could: it has made AI warfare real, visible, and unavoidable in the public consciousness.

The OODA Loop Compressed: What AI Actually Did in This War

Military strategists have long organised decision-making around the OODA loop: Observe, Orient, Decide, Act. In past wars, each phase took hours or days. A commander observed incoming intelligence, oriented through briefings, deliberated with legal and strategic advisors, and finally authorised an action. The entire cycle might span 48 hours for a complex strike operation. AI has compressed this cycle to minutes — and in some cases, to seconds. This compression is not metaphorical. It is architectural.
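
To see what that compression means arithmetically, consider a toy model. The stage timings below are hypothetical assumptions chosen only to mirror the article's contrast between a roughly 48-hour traditional cycle and an AI-assisted cycle measured in minutes; no real system is being described.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    human_minutes: float  # traditional staff process (hypothetical)
    ai_minutes: float     # AI-assisted process (hypothetical)

# Illustrative timings only; they sum to the article's "48 hours vs. minutes".
OODA = [
    Stage("Observe", human_minutes=12 * 60, ai_minutes=2),  # collection and collation
    Stage("Orient",  human_minutes=18 * 60, ai_minutes=3),  # analysis and briefing
    Stage("Decide",  human_minutes=12 * 60, ai_minutes=5),  # legal and strategic review
    Stage("Act",     human_minutes=6 * 60,  ai_minutes=2),  # tasking and weapons release
]

human_total = sum(s.human_minutes for s in OODA)
ai_total = sum(s.ai_minutes for s in OODA)

print(f"Traditional cycle: {human_total / 60:.0f} hours")    # 48 hours
print(f"AI-assisted cycle: {ai_total:.0f} minutes")          # 12 minutes
print(f"Compression factor: {human_total / ai_total:.0f}x")  # 240x
```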

At the intelligence layer, large language models, including Anthropic's Claude deployed within Palantir's Maven Smart System, processed raw data feeds from multiple sources simultaneously: satellite imagery, drone footage spanning hundreds of hours, intercepted communications, and decades of archived intelligence. Work that would previously have occupied a room of analysts for days was distilled into actionable target packages in real time. As one defence industry official described it to The National: Claude 'sits on top of your existing digital infrastructure without complicated integration — it receives raw data feeds from satellites, drones with hundreds of hours of footage plus decades of archived data. It is too much for humans to handle, so you need automated tools that are able to make sense of it.'
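
'Too much for humans to handle' survives a napkin calculation. Every figure in this sketch is an assumption invented for illustration; only the 'hundreds of hours of footage' detail comes from the reporting above.

```python
# Back-of-envelope sketch of the volume problem; all numbers hypothetical.
drone_footage_hours = 300   # "hundreds of hours of footage"
review_speed = 2.0          # assume an analyst can watch at 2x speed
analyst_shift_hours = 8     # one working shift

analyst_hours = drone_footage_hours / review_speed
shifts_needed = analyst_hours / analyst_shift_hours

print(f"{analyst_hours:.0f} analyst-hours to view the footage once")
print(f"= {shifts_needed:.0f} full analyst shifts, before touching "
      "satellite imagery, intercepts, or decades of archives")
```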

At the operational layer, autonomous drones navigated GPS-denied, signal-jammed environments where no human operator could maintain a real-time link. Iran's forces attempted to blind US systems by attacking AWS data centres in the UAE — the infrastructure on which US AI systems depended — but the redundancy built into modern cloud military architectures limited the disruption. At the cyber layer, the US military's first move in the conflict was digital: coordinated space and cyber operations that, according to Chairman of the Joint Chiefs of Staff General Dan Caine, 'effectively disrupted communications and sensor networks across the area of responsibility, leaving the adversary without the ability to see, coordinate, or respond effectively.'

The Targeting Controversy: When AI Recommends Who Dies

The most ethically fraught dimension of AI in the Iran conflict is targeting. US targeting planners used the Maven Smart System, in which AI analyses intelligence data, identifies candidate targets, and prioritises them for human review. In theory, a human makes the final decision on every lethal strike. In practice, critics warn that when AI generates hundreds of targets per hour and human reviewers process them under time pressure and cognitive load, the 'human in the loop' becomes effectively ceremonial.
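
A toy throughput model shows why. Both figures below are hypothetical assumptions; the reporting says only 'hundreds of targets per hour'.

```python
# Why the loop shrinks: reviewer attention per target, all numbers assumed.
targets_per_hour = 300   # "hundreds of targets per hour"
reviewers_on_shift = 5   # assumed size of a review cell

seconds_per_target = (reviewers_on_shift * 3600) / targets_per_hour
print(f"Each target gets {seconds_per_target:.0f} seconds of human attention")
# 60 seconds: enough to click approve, not enough to independently
# re-derive the AI's judgement about who is at the location.
```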

This fear was not theoretical in the Iran conflict. An incident in the city of Minab, where a strike hit a location later found to contain civilians, is being investigated as a potential AI targeting error — what analysts describe as overreliance on agentic AI coordination systems that failed to account for the complexities of a populated urban environment. The incident at Minab has become a flashpoint in the already volatile international debate about lethal autonomous weapon systems. As Craig Jones, a political geographer at Newcastle University, stated in Nature: 'There is no evidence that AI lowers civilian deaths or wrongful targeting decisions, and it may be that the opposite is true.'

The Anthropic-Pentagon Rupture: A Silicon Valley Reckoning

One of the most significant parallel stories to emerge from the Iran conflict is the very public breakdown between Anthropic and the US Department of Defense. Anthropic had a contract with the Pentagon — Claude was embedded in intelligence analysis workflows. But as the Iran offensive began, Anthropic insisted that its AI models not be used for fully autonomous weapons systems or mass surveillance applications. The Trump administration responded by blacklisting Anthropic as a 'supply chain risk', effectively banning the company from direct or indirect government business.

Pentagon spokeswoman Kingsley Wilson's response captured the administration's position with striking bluntness: 'America's warfighters supporting Operation Epic Fury and every mission worldwide will never be held hostage by unelected tech executives and Silicon Valley ideology. We will decide, we will dominate, and we will win.' OpenAI subsequently entered its own agreement with the Pentagon, stipulating that its models would not be used for surveillance or fully autonomous weapons, a constraint the administration appeared more willing to accept from a different company. As of March 5, Anthropic CEO Dario Amodei is reportedly back in talks with the Department of Defense.

This rupture is not merely a corporate dispute. It represents a civilisational question crystallised into a contractual disagreement: who has the right to decide how AI is used in warfare? The companies that built the models? The governments that deploy them? International law? The Iran conflict has made this question urgent in a way that no peacetime policy debate ever could. And it has exposed the startling reality that there is no settled answer — only competing powers, compressed timelines, and systems already in motion.

AI Disinformation: The Invisible Battlefield

The kinetic conflict was only one front of the Iran war. The information front was fought with AI tools that are, in some respects, more democratised and therefore more dangerous. Iran launched a coordinated disinformation campaign using AI-generated deepfake images and videos purporting to show the destruction of US naval bases in Qatar, strikes on Israeli cities, and mass American casualties in Gulf states. The BBC, using Google's SynthID watermark detection, identified at least one widely shared satellite image claiming to show a destroyed US base in Qatar as an AI fabrication built on authentic imagery of a Bahraini base taken in 2025.

A 2024 video of an Israeli strike on a Yemeni port was repurposed as alleged Iranian drone footage from Saudi Arabia, drawing 4.6 million views on X before it was debunked. Video game footage was passed off as real combat. Chatbot-generated text circulated as eyewitness testimony. The pattern is not new — disinformation has always been part of warfare. What is new is the speed, volume, and quality of production. What once required state-level resources and professional actors now requires a consumer-grade AI image generator and a social media account.
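
Watermark checks such as SynthID only work when content carries a watermark; recycled authentic imagery, like the repurposed Bahraini base photo, calls for a different technique. One common open method is perceptual hashing against an archive of known imagery. The sketch below uses the real Python ImageHash library; the file names and the distance threshold are hypothetical assumptions, and this is not a description of the BBC's actual workflow.

```python
# Flagging recycled imagery with perceptual hashes.
# Requires: pip install ImageHash Pillow
from PIL import Image
import imagehash

# Hypothetical file names, for illustration only.
archive_2025 = imagehash.phash(Image.open("bahrain_base_2025.jpg"))
viral_claim = imagehash.phash(Image.open("viral_qatar_strike.jpg"))

# Perceptual hashes survive re-encoding and light editing, so a small
# Hamming distance signals that one image was derived from the other.
distance = archive_2025 - viral_claim
if distance <= 8:  # threshold is a tunable assumption
    print(f"Likely derived from archived imagery (distance={distance})")
else:
    print(f"No match against this archive entry (distance={distance})")
```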

The Drone Arsenal: From Ukraine to Iran

One of the more striking geopolitical elements of the Iran conflict is the role of Ukraine. Ukrainian manufacturers produced approximately 4.5 million drones in 2025 alone, developing low-cost interceptor drones specifically designed to hunt Iranian Shahed-class systems in the process. When the Iran conflict escalated, the US formally requested Ukrainian drone interceptors and technical specialists to help defend Gulf allies against Iranian swarm attacks. It closed a loop that no strategist would have predicted three years earlier: Ukrainian drone warfare expertise, forged in the fields of the Donbas, deployed in the skies over the Persian Gulf.

Iran's own drone program has evolved toward what the JINSA report describes as 'increasing autonomy, longer ranges, and improved precision.' The large, synchronised drone barrages that penetrated to urban areas during the early days of the conflict suggest at least semi-autonomous mission-planning and guidance systems. Admiral Brad Cooper, head of US Central Command, characterised the strategic logic of the drone era with a phrase that has since circulated widely in defence communities: 'We took them back to America, made them better, and fired them right back at Iran.' The future of warfare is not just expensive, high-precision platforms. It is mass-produced, low-cost autonomous systems that can overwhelm defences through sheer volume — and AI is both the manufacturing accelerant and the targeting intelligence.

The Regulation Gap: International Law Is Losing the Race

In the week that Operation Epic Fury began, academics and legal experts met in Geneva to discuss lethal autonomous weapons systems — part of long-running efforts to arrive at an international agreement on lawful uses of AI in warfare. The timing was not ironic; it was illustrative. As diplomats debated, the most consequential deployment of AI in military history was already underway, three thousand kilometres to the east. Political scientist Michael Horowitz of the University of Pennsylvania told Nature: 'Rapid technological development is outpacing slow international discussions. The current failure to regulate AI warfare, or to pause its usage until there is some agreement on lawful usage, seems to suggest potential proliferation of AI warfare is imminent.'

The US has an existing 'Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy' — but declarations are not treaties; they carry no enforcement mechanism, and the administration that launched Operation Epic Fury has shown limited deference to multilateral frameworks. China, watching the conflict closely, issued a warning against what it called 'the unrestricted application of AI by the military, using AI as a tool to violate the sovereignty of other nations' — a statement that reads simultaneously as a principled position and as a nation noting the capabilities gap it intends to close. Russia, meanwhile, has already deployed AI targeting systems in Ukraine and has its own programme of autonomous weapons development well underway.

What Comes Next: The Iran War as Technological Turning Point

The Iran conflict will be studied by military historians, AI ethicists, international lawyers, and political scientists for decades. It has demonstrated several things that were theoretically understood but operationally unproven. First: that AI can sustain an unprecedented operational tempo in a major conflict, enabling strike rates that human decision-making alone could never support. Second: that the 'human in the loop' principle is not robust under the time pressures of AI-enabled combat — the loop shrinks until it becomes nominal. Third: that AI disinformation can operate at a scale, speed, and quality that makes real-time public debunking effectively impossible. Fourth: that the companies building the most capable AI models are now de facto participants in questions of war and peace — and that they have no consensus on what their obligations are.

The Pentagon's FY2026 supplemental budget includes significant new funding for autonomous systems and AI targeting. Defence technology companies — Palantir, Anduril, Shield AI, and others — are expected to expand their roles substantially. The central question for policymakers is no longer whether AI will define future warfare. It is whether the rules governing it will be written before or after the next escalation. And on current trajectory, the rules are losing the race.

Understanding the technologies driving modern AI warfare — large language models, computer vision, autonomous systems, multi-source intelligence fusion — requires exactly the kind of deep, multi-model AI fluency that LumiChats is designed to build. On LumiChats, you access 40+ frontier AI models — including Claude Opus 4.6, GPT-5.4, Gemini 3 Pro, DeepSeek, Grok, Qwen, and Mistral — in a single platform at ₹69 per day or ₹1,199 per month for unlimited use. Whether you are a student of international relations, a journalist, a policy researcher, or a defence analyst, LumiChats gives you the breadth of AI access to explore these questions from every analytical angle simultaneously.

How to Think About This as a Student, Citizen, and Future Professional

It would be easy to read the story of AI in the Iran conflict as something distant — geopolitical, military, happening to governments and soldiers and populations far from your daily life. That reading would be a mistake. The same large language models processing satellite intelligence in the Maven Smart System are the models available to you through consumer AI applications. The same computer vision systems guiding autonomous drones are being applied to medical imaging, factory automation, and retail inventory. The same information warfare techniques flooding the internet with AI-generated disinformation are being applied in elections, corporate reputation attacks, and political polarisation campaigns in every country on earth.

What the Iran conflict makes viscerally clear — in a way that abstract safety discussions do not — is that AI capability and AI governance are not on the same development timeline. The capabilities are running far ahead of the rules, the norms, the legal frameworks, and the philosophical consensus about what humans should and should not allow AI systems to decide. For anyone who will spend their professional life working with AI — which is to say, for almost everyone entering the workforce in 2026 — this is not background noise. It is the defining civilisational condition of your career.

Pro Tip: If you want to understand AI in warfare at a deep level — technical, ethical, legal, and historical — start with three resources: the Nature article on AI in the Iran conflict, the ORF Expert Speak on intelligence in the 2026 Iran war, and Michael Horowitz's work on AI and military power. Then use Claude Opus 4.6 on LumiChats to synthesise these perspectives and ask it hard questions about the ethical frameworks — constitutional AI limitations, just war theory, and the humanitarian law implications of autonomous targeting. There is no better way to build genuine understanding of this issue than deep, multi-source research followed by rigorous AI-assisted analysis.

LumiChats is built for exactly the kind of research this moment demands. Study Mode lets you upload PDFs — government reports, academic papers, conflict analyses — and get page-cited answers from those specific documents, eliminating the hallucination risk that plagues general AI chat when discussing current events. Quiz Hub generates comprehension tests from your uploaded material. Agent Mode runs in-browser Node.js analysis for data-heavy research tasks. Persistent Memory via pgvector remembers your research context across sessions, so a multi-week deep dive into AI ethics and warfare builds continuity rather than resetting every session. 5 million tokens of context per day means you can process entire books and document collections in a single research session. For serious students of AI, geopolitics, and the future, this is the research infrastructure the moment requires.
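
For readers curious what 'persistent memory via pgvector' means mechanically, here is a generic sketch of the pattern: store embeddings of past research notes in Postgres, then retrieve the nearest ones at the start of a new session. This is not LumiChats' actual schema; every table name, column, and dimension below is a hypothetical assumption.

```python
# Generic pgvector memory pattern; not LumiChats' real implementation.
# Requires: pip install "psycopg[binary]" pgvector numpy
import numpy as np
import psycopg
from pgvector.psycopg import register_vector

conn = psycopg.connect("dbname=memory", autocommit=True)
conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
register_vector(conn)  # teaches psycopg the vector type

conn.execute("""
    CREATE TABLE IF NOT EXISTS memories (
        id bigserial PRIMARY KEY,
        content text,
        embedding vector(1536)  -- dimension is an assumption
    )
""")

# A real system would embed the note with an embedding model;
# a random vector stands in so the sketch runs end to end.
note_embedding = np.random.rand(1536)
conn.execute(
    "INSERT INTO memories (content, embedding) VALUES (%s, %s)",
    ("Week 1 notes: Maven Smart System and human-in-the-loop targeting",
     note_embedding),
)

# A later session restores context by cosine distance (the <=> operator).
query_embedding = np.random.rand(1536)
rows = conn.execute(
    "SELECT content FROM memories ORDER BY embedding <=> %s LIMIT 5",
    (query_embedding,),
).fetchall()
print(rows)
```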

Ready to study smarter?

Try LumiChats for ₹69/day

40+ AI models including Claude, GPT-5.4, and Gemini. NCERT Study Mode with page-locked answers. Pay only on days you use it.

Get Started — ₹69/day
