Policy

Anthropic Takes the Pentagon to Federal Court Over 'Supply Chain Risk' Blacklisting

Michael Ouroumis · 2 min read

A Landmark Hearing in San Francisco

Anthropic appeared before US District Judge Rita Lin in San Francisco on Monday to argue for a preliminary injunction against the Department of Defense and the White House. The hearing marks the most significant courtroom confrontation yet between an AI company and the federal government over the ethical boundaries of military AI deployment.

At issue is the Pentagon's decision to designate Anthropic as a "supply chain risk to national security" — a label historically reserved for foreign adversaries such as Huawei — after the company refused to grant the military unrestricted use of its Claude AI model.

How the Dispute Escalated

The conflict traces back to late February, when President Trump and Defense Secretary Pete Hegseth publicly declared they were severing ties with Anthropic. The Pentagon had demanded the company accept contract terms allowing Claude to be used for "any lawful purpose," including autonomous weapons systems and domestic surveillance programs. Anthropic refused, citing its internal safety policies.

On March 3, the government formally finalized the supply chain risk designation, and on March 9, Anthropic filed two lawsuits in response. One challenges the designation under the Administrative Procedure Act, arguing the Pentagon exceeded its statutory authority. The other raises constitutional claims, alleging the blacklisting violates the First Amendment by retaliating against protected speech and the Fifth Amendment by denying due process.

Judge Lin fast-tracked Monday's hearing from its original April 3 date, signaling the urgency of the matter.

Cracks in the Government's Case

Recent court filings have complicated the government's position. According to a TechCrunch report on March 20, a newly surfaced email shows that on March 4 — one day after the supply chain designation was finalized — a senior Pentagon official wrote to Anthropic CEO Dario Amodei stating the two sides were "very close" on the very issues the government now cites as evidence of a national security threat.

Nearly 150 former federal and state judges submitted an amicus brief raising concerns about the precedent of weaponizing supply chain designations against domestic companies over policy disagreements. AI researchers from OpenAI, Google, and Microsoft also filed briefs in support of Anthropic.

What Comes Next

Judge Lin is expected to rule on the preliminary injunction in the coming days. If granted, the order would pause the supply chain risk label while the underlying lawsuits proceed — potentially restoring federal agencies' ability to use Claude in the interim.

The case has drawn sharp attention from the tech industry, civil liberties groups, and defense analysts alike, as its outcome could set a precedent for how the government regulates AI companies that impose ethical guardrails on their own technology.

