The United States Central Command has publicly confirmed the extensive use of artificial intelligence systems in its ongoing military campaign against Iran, offering the most detailed official account yet of how AI is reshaping modern warfare in real time.
How AI Is Being Deployed
Speaking at a defense technology briefing on March 11, CENTCOM commander Admiral Cooper described AI as a "force multiplier" in Operation Epic Fury. The military is using Palantir's Maven Smart System, which relies in part on Anthropic's Claude AI, to process intelligence data, identify potential targets, and prioritize strike options.
The system's core capability is speed. Tasks that previously required hours or days of analyst work can now be completed in seconds, according to military officials. Since the campaign began, the US has struck more than 2,000 targets, including 1,000 within the first 24 hours, a tempo that would have been difficult to sustain without automated data processing.
The Human-in-the-Loop Question
Military leaders have been careful to stress that AI does not make kill decisions. "Humans always make final decisions on what to shoot, what not to shoot, and when to shoot," Cooper stated. The AI tools serve as a filtering and prioritization layer, surfacing the most relevant intelligence so commanders can act faster.
However, critics argue that the sheer speed of AI-assisted targeting compresses decision timelines to a point where meaningful human oversight becomes difficult. Several members of Congress have called for formal oversight hearings, citing concerns about civilian casualties and the lack of transparency around how these systems weight their recommendations.
Industry Tensions
The deployment has also exposed fault lines between the Pentagon and its AI suppliers. Defense Secretary Pete Hegseth has pushed aggressively to embed AI across combat operations, but Anthropic, whose Claude model underpins parts of Palantir's system, has publicly resisted expanding its tools' use in lethal targeting scenarios. The resulting standoff has become one of the most visible examples of the tension between AI safety commitments and national security demands.
What This Means Going Forward
Operation Epic Fury is now the largest real-world test of AI-assisted warfare in history. Its outcomes will likely shape military AI doctrine, procurement decisions, and international norms for years to come. For the AI industry, it raises urgent questions about dual-use technology and the limits of acceptable-use policies when governments come calling.
As AI capabilities continue to advance, the line between decision support and decision-making will only grow thinner, making the governance frameworks established today all the more consequential.