
Anthropic Takes the Pentagon to Federal Court Over 'Supply Chain Risk' Blacklisting

Michael Ouroumis · 2 min read

A Landmark Hearing in San Francisco

Anthropic appeared before US District Judge Rita Lin in San Francisco on Monday to argue for a preliminary injunction against the Department of Defense and the White House. The hearing marks the most significant courtroom confrontation yet between an AI company and the federal government over the ethical boundaries of military AI deployment.

At issue is the Pentagon's decision to designate Anthropic as a "supply chain risk to national security" — a label historically reserved for foreign adversaries such as Huawei — after the company refused to grant the military unrestricted use of its Claude AI model.

How the Dispute Escalated

The conflict traces back to late February, when President Trump and Defense Secretary Pete Hegseth publicly declared they were severing ties with Anthropic. The Pentagon had demanded the company accept contract terms allowing Claude to be used for "any lawful purpose," including autonomous weapons systems and domestic surveillance programs. Anthropic refused, citing its internal safety policies.

On March 3, the government formally finalized the supply chain risk designation, and on March 9, Anthropic filed two lawsuits in response. One challenges the designation under the Administrative Procedure Act, arguing the Pentagon exceeded its statutory authority. The other raises constitutional claims, alleging the blacklisting violates the First Amendment by retaliating against protected speech and the Fifth Amendment by denying due process.

Judge Lin fast-tracked Monday's hearing from its original April 3 date, signaling the urgency of the matter.

Cracks in the Government's Case

Recent court filings have complicated the government's position. According to a TechCrunch report on March 20, a newly surfaced email shows that on March 4 — one day after the supply chain designation was finalized — a senior Pentagon official wrote to Anthropic CEO Dario Amodei stating the two sides were "very close" on the very issues the government now cites as evidence of a national security threat.

Nearly 150 former federal and state judges submitted an amicus brief raising concerns about the precedent of weaponizing supply chain designations against domestic companies over policy disagreements. AI researchers from OpenAI, Google, and Microsoft also filed briefs in support of Anthropic.

What Comes Next

Judge Lin is expected to rule on the preliminary injunction in the coming days. If granted, the order would pause the supply chain risk label while the underlying lawsuits proceed — potentially restoring federal agencies' ability to use Claude in the interim.

The case has drawn sharp attention from the tech industry, civil liberties groups, and defense analysts alike, as its outcome could set a precedent for how the government regulates AI companies that impose ethical guardrails on their own technology.

