Policy

Anthropic Files Sworn Declarations Challenging Pentagon Ahead of Critical Tuesday Hearing

Michael Ouroumis · 2 min read
Anthropic escalated its legal battle with the Department of Defense on Friday, filing sworn declarations from two senior executives that directly contradict the Pentagon's claim that the AI company poses an "unacceptable risk to national security." The filings set the stage for a high-stakes hearing on Tuesday, March 24, before Judge Rita Lin in San Francisco.

What the Declarations Say

The two declarants are Sarah Heck, Anthropic's Head of Policy and a former National Security Council official who served under the Obama administration, and Thiyagu Ramasamy, the company's Head of Public Sector.

Heck's declaration takes aim at what she calls a central misrepresentation in the government's case: the assertion that Anthropic demanded veto power over military operations. "At no time during Anthropic's negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role," Heck wrote in her sworn statement.

Ramasamy's declaration challenges the Pentagon's technical claims, explaining that once Anthropic's technology is deployed within a military environment, the company has no remote access or control over how it operates — undermining the government's argument that Anthropic could interfere with active operations.

The 'Very Close' Email

Perhaps the most striking detail in the filings is an email sent on March 4 by Under Secretary Emil Michael to CEO Dario Amodei. In the message, Michael reportedly told Amodei the two sides were "very close" on the two issues now cited as evidence of a national security threat: Anthropic's stance on autonomous weapons and mass surveillance of U.S. citizens.

The timing is notable. The Pentagon formally finalized its supply-chain risk designation against Anthropic on March 3 — just one day before the conciliatory email was sent. This sequence raises questions about whether the designation was a genuine security assessment or, as Anthropic contends, an act of political retaliation.

First Amendment at the Center

Anthropic's broader legal argument frames the supply-chain risk designation — the first ever applied to an American company — as government retaliation for the company's publicly stated views on AI safety, in violation of the First Amendment. The Pentagon has rejected that framing, calling Anthropic's refusal to permit all lawful military uses a business decision rather than protected speech.

The government has also raised concerns about Anthropic employing foreign nationals, including Chinese citizens, citing potential risks under China's National Intelligence Law.

What Happens Tuesday

Judge Lin will hear arguments on Anthropic's motion for temporary relief. The outcome could set a precedent for how the government wields supply-chain risk authorities against domestic AI companies — and whether AI firms can set ethical boundaries on military applications without facing federal retaliation.

The case is being closely watched across the tech industry. Microsoft, retired military leaders, and nearly 150 retired federal and state judges have all filed briefs in support of Anthropic's position.

