Policy

Anthropic vs. Pentagon: Preliminary Injunction Hearing Set for Today

Michael Ouroumis · 2 min read

A courtroom showdown between one of Silicon Valley's leading AI safety companies and the U.S. Department of War reaches a pivotal stage today, as a federal court in San Francisco hears Anthropic's motion for a preliminary injunction in Anthropic PBC v. U.S. Department of War (Case No. 3:26-cv-01996-RFL).

What Triggered the Lawsuit

The dispute began after Anthropic refused to accept the Pentagon's standard "any lawful use" contractual policy — a clause the company argued was incompatible with its AI safety commitments. In response, the Secretary of War issued a Secretarial Determination designating Anthropic a "supply chain risk," prompting federal agencies to discontinue their use of Claude across government operations.

Anthropic then sued, arguing the designation violated the First Amendment, the Administrative Procedure Act, and due process guarantees.

The Government's Counter-Argument

In a 40-page opposition brief filed March 17, the Department of Justice pushed back hard. Attorneys argued that Anthropic could, in their assessment, "attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations" if the company believed its ethical red lines were being crossed.

"The Pentagon deemed that an unacceptable risk to national security," the filing states.

The government's legal team further argued that Anthropic's refusal to accept the contractual term did not constitute protected speech, and that even if a retaliatory motive were assumed, the government would have taken the same action regardless. The Justice Department attorneys also challenged the company's claims of irreparable harm, arguing that Anthropic would not suffer lasting damage before a full ruling on the merits.

Broader Stakes

The case has drawn significant attention across the AI and legal communities. A coalition of over 80 former federal judges previously filed an amicus brief in support of Anthropic's position, warning that the government's approach could have a chilling effect on principled AI governance.

The hearing before Judge Rita F. Lin — scheduled for 1:30 PM Pacific today — is expected to determine whether a preliminary injunction will pause the supply chain risk designation while the case proceeds to full litigation.

Why This Case Matters

At its core, Anthropic v. Department of War is a test of whether an AI company can maintain ethical deployment constraints when contracting with the federal government — and what happens when those constraints conflict with military objectives. The outcome could set lasting precedent for how AI safety commitments interact with national security law.

For the broader AI industry, the case raises uncomfortable questions: Can safety-focused AI companies avoid the gravitational pull of defense contracts without facing regulatory retaliation? And does the government have the authority to effectively ban AI providers that refuse to offer unrestricted access to their models?

Whatever Judge Lin decides today, the case is unlikely to end here. Both sides have signaled they are prepared to take the dispute to the Ninth Circuit if necessary.

