
Federal Judge Calls Pentagon's Anthropic Ban an 'Attempt to Cripple' the AI Company

Michael Ouroumis · 2 min read

A federal judge delivered a pointed rebuke of the Pentagon's decision to blacklist AI company Anthropic, suggesting during a San Francisco court hearing on March 24 that the government's actions appear designed to punish the company for its ethical stance on military AI use.

U.S. District Judge Rita Lin did not mince words during the preliminary injunction hearing. "It looks like an attempt to cripple Anthropic," she said, pressing Department of Justice attorney Eric Hamilton on why the AI maker was designated a supply chain risk.

The Core Dispute

The conflict centers on Anthropic's refusal to grant the military unrestricted access to its Claude AI model. The company demanded that the Department of Defense not use Claude for fully autonomous weapons or mass surveillance of Americans. When contract negotiations broke down, the Pentagon designated Anthropic a supply chain risk — effectively banning federal agencies from using its technology.

Hamilton argued that the DOD had "come to worry that Anthropic may in the future take action to sabotage or subvert IT systems," justifying the designation. But Judge Lin appeared unconvinced.

Judge Questions the Government's Logic

Lin suggested the designation was retaliation rather than a legitimate security concern. "If the worry is about operational integrity, DOD could just stop using Claude," she noted. She further questioned the legal threshold, saying a company "can't be designated a supply chain risk for being stubborn and asking annoying questions. That seems a pretty low bar."

The judge also raised concerns about whether Anthropic was "being punished for criticizing the government's contracting position in the press" — a significant First Amendment implication that could shape the broader case.

What Comes Next

Anthropic attorney Michael Mongan urged the court to act quickly, requesting a decision by March 26. Judge Lin concluded the hearing without issuing a ruling but indicated she would decide the motion for a preliminary injunction within days.

The outcome could set a major precedent for the relationship between AI companies and the federal government. If Lin grants the injunction, it would temporarily pause the blacklisting, allowing federal agencies to keep using Anthropic's technology while the full case proceeds.

Broader Implications for AI Policy

The case has drawn significant attention across the tech industry. At its core, it tests whether AI companies can set ethical boundaries on how their products are used by the military — or whether the government can effectively punish them for doing so. With AI leaders like CEO Dario Amodei publicly advocating for responsible deployment, the ruling could influence how other AI firms approach government contracts going forward.

