
150 Former Judges Back Anthropic in Pentagon Supply Chain Lawsuit

Michael Ouroumis · 2 min read

A coalition of nearly 150 retired federal and state judges filed a sweeping amicus brief on Tuesday backing Anthropic's legal challenge against the Trump administration's decision to label the AI company a national security "supply chain risk" — a designation that has never before been applied to an American company.

The brief, filed in support of Anthropic's lawsuits in federal courts in San Francisco and Washington, argues that the Pentagon "misinterpreted the statute and ignored the necessary procedures" when Defense Secretary Pete Hegseth declared Anthropic a supply chain risk in late February.

The Dispute Behind the Lawsuit

The conflict traces back to negotiations between Anthropic and the Department of Defense over the use of its Claude AI models in classified military systems. The Pentagon pushed for access to Claude in "all lawful" use cases. Anthropic refused to budge on two core principles: that its technology must not be used for autonomous weapons, and that it must not be used for mass surveillance of American citizens.

After those talks collapsed, the Trump administration on February 27 ordered federal agencies and military contractors to halt all business with Anthropic, citing national security. Letters formally notifying the company of its supply chain risk designation followed on March 3. On March 9, Anthropic filed suit, calling the moves "unprecedented and unlawful" and alleging the government was retaliating against constitutionally protected speech.

The Judges' Arguments

The amicus brief, signed by judges appointed across both parties, makes clear that the legal community views the Pentagon's action as a significant overreach. "No one is trying to force the Department to contract with Anthropic," the judges wrote. "Anthropic is asking only that it not be punished on its way out the door."

The signatories join a growing list of Anthropic backers that includes Microsoft, tech industry associations, and former senior national security officials who submitted their own brief earlier in the week. On Sunday, the broader technology industry also rallied publicly behind Anthropic, with multiple companies filing supportive statements in the case.

Broader Implications

Legal analysts say the case could set a lasting precedent for how the government can regulate AI companies it views as uncooperative. Several legal commentators have argued that the "supply chain risk" statute was designed to sever ties with companies posing genuine foreign-linked threats, not to serve as a tool for punishing domestic firms over their policy positions.

Ironically, Anthropic's Claude model has reportedly remained in active use in some government contexts, including an operation targeting Iran, even as the company's designation was being finalized.

The case is moving quickly. A ruling on preliminary injunctive relief could come within weeks, and the outcome may reshape the legal boundaries between AI safety principles and national security contracting for years to come.

By Michael Ouroumis

