Policy

US Military Confirms Widespread Use of AI Tools in Iran Campaign

Michael Ouroumis · 2 min read

The United States Central Command has publicly confirmed the extensive use of artificial intelligence systems in its ongoing military campaign against Iran, offering the most detailed official account yet of how AI is reshaping modern warfare in real time.

How AI Is Being Deployed

Speaking at a defense technology briefing on March 11, CENTCOM commander Admiral Cooper described AI as a "force multiplier" in Operation Epic Fury. The military is using Palantir's Maven Smart System — which relies in part on Anthropic's Claude AI — to process intelligence data, identify potential targets, and prioritize strike options.

The system's core capability is speed. Tasks that previously required hours or days of analyst work can now be completed in seconds, according to military officials. Since the campaign began, the US has struck more than 2,000 targets, including 1,000 within the first 24 hours — a tempo that would have been difficult to sustain without automated data processing.

The Human-in-the-Loop Question

Military leaders have been careful to stress that AI does not make kill decisions. "Humans always make final decisions on what to shoot, what not to shoot, and when to shoot," Cooper stated. The AI tools serve as a filtering and prioritization layer, surfacing the most relevant intelligence so commanders can act faster.

However, critics argue that the sheer speed of AI-assisted targeting compresses decision timelines to a point where meaningful human oversight becomes difficult. Several members of Congress have called for formal oversight hearings, citing concerns about civilian casualties and the lack of transparency around how these systems weight their recommendations.

Industry Tensions

The deployment has also exposed fault lines between the Pentagon and its AI suppliers. Defense Secretary Pete Hegseth has pushed aggressively to embed AI across combat operations, but Anthropic — whose Claude model underpins parts of Palantir's system — has publicly resisted expanding its tools' use in lethal targeting scenarios. The resulting standoff has become one of the most visible examples of the tension between AI safety commitments and national security demands.

What This Means Going Forward

Operation Epic Fury is now the largest real-world test of AI-assisted warfare in history. Its outcomes will likely shape military AI doctrine, procurement decisions, and international norms for years to come. For the AI industry, it raises urgent questions about dual-use technology and the limits of acceptable-use policies when governments come calling.

As AI capabilities continue to advance, the line between decision support and decision-making will only grow thinner — making the governance frameworks established today all the more consequential.

