Policy

US Military Confirms Widespread Use of AI Tools in Iran Campaign

Michael Ouroumis · 2 min read

The United States Central Command has publicly confirmed the extensive use of artificial intelligence systems in its ongoing military campaign against Iran, offering the most detailed official account yet of how AI is reshaping modern warfare in real time.

How AI Is Being Deployed

Speaking at a defense technology briefing on March 11, CENTCOM commander Admiral Cooper described AI as a "force multiplier" in Operation Epic Fury. The military is using Palantir's Maven Smart System — which relies in part on Anthropic's Claude AI — to process intelligence data, identify potential targets, and prioritize strike options.

The system's core capability is speed. Tasks that previously required hours or days of analyst work can now be completed in seconds, according to military officials. Since the campaign began, the US has struck more than 2,000 targets, including 1,000 within the first 24 hours — a tempo that would have been difficult to sustain without automated data processing.

The Human-in-the-Loop Question

Military leaders have been careful to stress that AI does not make kill decisions. "Humans always make final decisions on what to shoot, what not to shoot, and when to shoot," Cooper stated. The AI tools serve as a filtering and prioritization layer, surfacing the most relevant intelligence so commanders can act faster.

However, critics argue that the sheer speed of AI-assisted targeting compresses decision timelines to a point where meaningful human oversight becomes difficult. Several members of Congress have called for formal oversight hearings, citing concerns about civilian casualties and the lack of transparency around how these systems weight their recommendations.

Industry Tensions

The deployment has also exposed fault lines between the Pentagon and its AI suppliers. Defense Secretary Pete Hegseth has pushed aggressively to embed AI across combat operations, but Anthropic — whose Claude model underpins parts of Palantir's system — has publicly resisted expanding its tools' use in lethal targeting scenarios. The resulting standoff has become one of the most visible examples of the tension between AI safety commitments and national security demands.

What This Means Going Forward

Operation Epic Fury is now the largest real-world test of AI-assisted warfare in history. Its outcomes will likely shape military AI doctrine, procurement decisions, and international norms for years to come. For the AI industry, it raises urgent questions about dual-use technology and the limits of acceptable-use policies when governments come calling.

As AI capabilities continue to advance, the line between decision support and decision-making will only grow thinner — making the governance frameworks established today all the more consequential.

