Policy

Anthropic Files Sworn Declarations Challenging Pentagon Ahead of Critical Tuesday Hearing

Michael Ouroumis · 2 min read

Anthropic escalated its legal battle with the Department of Defense on Friday, filing sworn declarations from two senior executives that directly contradict the Pentagon's claim that the AI company poses an "unacceptable risk to national security." The filings set the stage for a high-stakes hearing on Tuesday, March 24, before Judge Rita Lin in San Francisco.

What the Declarations Say

The two declarants are Sarah Heck, Anthropic's Head of Policy and a former National Security Council official who served under the Obama administration, and Thiyagu Ramasamy, the company's Head of Public Sector.

Heck's declaration takes aim at what she calls a central misrepresentation in the government's case: the assertion that Anthropic demanded veto power over military operations. "At no time during Anthropic's negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role," Heck wrote in her sworn statement.

Ramasamy's declaration challenges the Pentagon's technical claims, explaining that once Anthropic's technology is deployed within a military environment, the company has no remote access or control over how it operates — undermining the government's argument that Anthropic could interfere with active operations.

The 'Very Close' Email

Perhaps the most striking detail in the filings is an email sent on March 4 by Under Secretary Emil Michael to CEO Dario Amodei. In the message, Michael reportedly told Amodei the two sides were "very close" on the two issues now cited as evidence of a national security threat: Anthropic's stance on autonomous weapons and mass surveillance of U.S. citizens.

The timing is notable. The Pentagon formally finalized its supply-chain risk designation against Anthropic on March 3 — just one day before the conciliatory email was sent. That sequence raises the question of whether the designation reflected a genuine security assessment or, as Anthropic contends, an act of political retaliation.

First Amendment at the Center

Anthropic's broader legal argument frames the supply-chain risk designation — the first ever applied to an American company — as government retaliation for the company's publicly stated views on AI safety, in violation of the First Amendment. The Pentagon has rejected that framing, calling Anthropic's refusal to permit all lawful military uses a business decision rather than protected speech.

The government has also raised concerns about Anthropic employing foreign nationals, including Chinese citizens, citing potential risks under China's National Intelligence Law.

What Happens Tuesday

Judge Lin will hear arguments on Anthropic's motion for temporary relief. The outcome could set a precedent for how the government wields supply-chain risk authorities against domestic AI companies — and whether AI firms can set ethical boundaries on military applications without facing federal retaliation.

The case is being closely watched across the tech industry. Microsoft, retired military leaders, and nearly 150 retired federal and state judges have all filed briefs in support of Anthropic's position.

