Policy

150 Former Judges Back Anthropic in Pentagon Supply Chain Lawsuit

Michael Ouroumis · 2 min read

A coalition of nearly 150 retired federal and state judges filed a sweeping amicus brief on Tuesday backing Anthropic's legal challenge against the Trump administration's decision to label the AI company a national security "supply chain risk" — a designation that has never before been applied to an American company.

The brief, filed in support of Anthropic's lawsuits in federal courts in San Francisco and Washington, argues that the Pentagon "misinterpreted the statute and violated the necessary procedures" when Defense Secretary Pete Hegseth declared Anthropic a supply chain risk in late February.

The Dispute Behind the Lawsuit

The conflict traces back to negotiations between Anthropic and the Department of Defense over the use of its Claude AI models in classified military systems. The Pentagon pushed for access to Claude in "all lawful" use cases. Anthropic refused to budge on two core principles: that its technology must not be used for autonomous weapons, and that it must not be used for mass surveillance of American citizens.

After those talks collapsed, the Trump administration on February 27 ordered federal agencies and military contractors to halt all business with Anthropic, citing national security. Letters formally notifying the company of its supply chain risk designation followed on March 3. On March 9, Anthropic filed suit, calling the moves "unprecedented and unlawful" and alleging the government was retaliating against constitutionally protected speech.

The Judges' Arguments

The amicus brief, signed by judges appointed across both parties, makes clear that the legal community views the Pentagon's action as a significant overreach. "No one is trying to force the Department to contract with Anthropic," the judges wrote. "Anthropic is asking only that it not be punished on its way out the door."

The signatories join a growing list of Anthropic backers that includes Microsoft, tech industry associations, and former senior national security officials who submitted their own brief earlier in the week. On Sunday, the broader technology industry also rallied publicly behind Anthropic, with multiple companies filing supportive statements in the case.

Broader Implications

Legal analysts say the case could set a lasting precedent for how the government can regulate AI companies it views as uncooperative. The "supply chain risk" statute was designed to cut ties with companies posing genuine foreign-linked threats — not as a tool to punish domestic firms for their policy positions, several legal commentators have argued.

Ironically, Anthropic's Claude model has reportedly remained in active use in some government contexts, including an operation targeting Iran, even as the company's designation was being finalized.

The case is moving quickly. A ruling on preliminary injunctive relief could come within weeks, and the outcome may reshape the legal boundaries between AI safety principles and national security contracting for years to come.

By Michael Ouroumis

