A coalition of nearly 150 retired federal and state judges filed a sweeping amicus brief on Tuesday backing Anthropic's legal challenge against the Trump administration's decision to label the AI company a national security "supply chain risk" — a designation that has never before been applied to an American company.
The brief, filed in support of Anthropic's lawsuits in federal courts in San Francisco and Washington, argues that the Pentagon "misinterpreted the statute and violated the necessary procedures" when Defense Secretary Pete Hegseth declared Anthropic a supply chain risk in late February.
The Dispute Behind the Lawsuit
The conflict traces back to negotiations between Anthropic and the Department of Defense over the use of Anthropic's Claude AI models in classified military systems. The Pentagon pushed for access to Claude in "all lawful" use cases. Anthropic refused to budge on two core principles: that its technology must not be used for autonomous weapons, and that it must not be used for mass surveillance of American citizens.
After those talks collapsed, the Trump administration on February 27 ordered federal agencies and military contractors to halt all business with Anthropic, citing national security. Letters formally notifying the company of its supply chain risk designation followed on March 3. On March 9, Anthropic filed suit, calling the moves "unprecedented and unlawful" and alleging the government was retaliating against constitutionally protected speech.
The Judges' Arguments
The amicus brief, signed by judges appointed across both parties, makes clear that the legal community views the Pentagon's action as a significant overreach. "No one is trying to force the Department to contract with Anthropic," the judges wrote. "Anthropic is asking only that it not be punished on its way out the door."
The signatories join a growing list of Anthropic backers that includes Microsoft, tech industry associations, and former senior national security officials who submitted their own brief earlier in the week. On Sunday, the broader technology industry also rallied publicly behind Anthropic, with multiple companies filing supportive statements in the case.
Broader Implications
Legal analysts say the case could set a lasting precedent for how the government can regulate AI companies it views as uncooperative. Several legal commentators have argued that the "supply chain risk" statute was designed to cut ties with companies posing genuine foreign-linked threats, not to serve as a tool for punishing domestic firms over their policy positions.
Ironically, Anthropic's Claude model has reportedly remained in active use in some government contexts, including an operation targeting Iran, even as the company's designation was being finalized.
The case is moving quickly. A ruling on preliminary injunctive relief could come within weeks, and the outcome may reshape the legal boundaries between AI safety principles and national security contracting for years to come.
By Michael Ouroumis



