
Former Federal Judges Back Anthropic as Trump Administration Defends Pentagon Blacklisting in Court

Michael Ouroumis · 2 min read

The legal battle between Anthropic and the U.S. government escalated this week: the Trump administration filed a court brief defending its blacklisting of the AI company, while former federal judges submitted filings supporting Anthropic's challenge.

The case has become the most closely watched AI policy dispute in the country, pitting national security authority against the rights of AI companies to maintain safety restrictions on their own products.

The Government's Defense

In a filing submitted on March 17, government lawyers argued that the Pentagon's designation of Anthropic as a supply chain risk was both justified and lawful. The administration contended that operational urgency justifies swift exclusion of companies from government contracts and that courts must defer to national security assessments made by defense officials.

At the core of the government's argument is a stark position: the Pentagon wants to use Anthropic's Claude AI for "all lawful purposes" and asserts it cannot allow a private company to dictate how its tools are used in a national security context. Government attorneys argued that Anthropic's safety guardrails — which prevent the AI from assisting with autonomous weapons targeting or domestic surveillance — are incompatible with military operational needs.

Former Judges Side With Anthropic

In a notable development the same day, former federal judges submitted amicus briefs raising concerns about the Pentagon's use of the supply chain risk label against Anthropic. The judges argued that the designation process lacked the procedural safeguards required by law, lending weight to Anthropic's claim that the government violated due process.

Background

The dispute began on March 3 when Secretary of War Pete Hegseth designated Anthropic a supply chain risk after the company refused to remove its safety guardrails. Anthropic filed suit on March 9, calling the designation "unprecedented and unlawful" and arguing it violated the company's free speech and due process rights.

Broader Implications

The case raises fundamental questions about whether AI companies can be compelled to remove safety restrictions to serve government customers. A ruling in the government's favor could set a precedent that effectively forces AI labs to choose between maintaining safety policies and accessing lucrative federal contracts.

Conversely, a ruling for Anthropic could establish that AI companies retain the right to set usage boundaries on their products, even when the customer is the U.S. military. The outcome will likely shape how every major AI lab approaches government contracts going forward.

A hearing date has not yet been set, but given the national security dimensions, legal observers expect the case to move through the courts on an expedited timeline.

