Policy

DOJ Calls Anthropic an 'Unacceptable' National Security Risk in Court Filing

Michael Ouroumis · 2 min read

The U.S. government escalated its legal battle with Anthropic on Tuesday, filing a 40-page response in federal court arguing that the AI company poses an "unacceptable risk" to national security and should remain barred from military contracts.

The filing, submitted in U.S. District Court for the Northern District of California, marks the Trump administration's first formal legal response to Anthropic's lawsuit challenging the Pentagon's unprecedented decision to label the Claude chatbot maker a supply chain risk — a designation typically reserved for foreign adversaries like Chinese telecom firms.

The Core Argument

At the heart of the government's case is a striking claim: that Anthropic could weaponize its own technology against the military by disabling or altering its AI models during active combat operations.

"AI systems are acutely vulnerable to manipulation, and Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic — in its discretion — feels that its corporate 'red lines' are being crossed," the filing states.

Government lawyers questioned whether Anthropic could be considered a "trusted partner," arguing that giving the company access to the Department of War's warfighting infrastructure would "introduce unacceptable risk into DoW supply chains."

How the Dispute Began

The confrontation traces back to contract negotiations reportedly worth $200 million. During those talks, Anthropic stipulated that it did not want its AI used for mass surveillance of Americans or integrated into autonomous lethal weapons systems. The Pentagon rejected those conditions, countering that a private company should not dictate how the military uses procured technology.

On March 3, Secretary of War Pete Hegseth designated Anthropic a national security supply chain risk after the company refused to remove those guardrails. Anthropic filed suit six days later.

Broader Implications

The case has drawn significant attention from the legal and tech communities. On March 17, a group of former federal judges filed an amicus brief expressing concern about the Pentagon's use of the supply chain risk designation against a domestic AI company, calling it an overreach of executive authority.

The dispute raises fundamental questions about who controls the ethical boundaries of AI in military applications — the companies that build the technology or the government that deploys it. Department of War officials acknowledged in the filing that the military had continued using Anthropic's technology to help analyze intelligence even as the legal battle unfolded.

What Comes Next

Anthropic has requested an emergency stay of the supply chain risk designation. The court is expected to hear arguments in the coming weeks, with the outcome likely to set a precedent for how the federal government can leverage national security designations against domestic technology companies that impose usage restrictions on their products.

