A pivotal courtroom showdown between one of Silicon Valley's leading AI safety companies and the U.S. Department of War reaches a critical milestone today, with a federal court in San Francisco hearing Anthropic's motion for a preliminary injunction in the case Anthropic PBC v. U.S. Department of War (Case No. 3:26-cv-01996-RFL).
What Triggered the Lawsuit
The dispute began after Anthropic refused to accept the Pentagon's standard "any lawful use" contractual policy — a clause the company argued was incompatible with its AI safety commitments. In response, the Secretary of War issued a Secretarial Determination designating Anthropic a "supply chain risk," prompting federal agencies to discontinue their use of Claude across government operations.
Anthropic then sued, arguing the designation violated the First Amendment, the Administrative Procedure Act, and due process guarantees.
The Government's Counter-Argument
In a 40-page opposition brief filed March 17, the Department of Justice pushed back hard. Attorneys argued that Anthropic could, in their assessment, "attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations" if the company believed its ethical red lines were being crossed.
"The Pentagon deemed that an unacceptable risk to national security," the filing states.
The government's legal team further argued that Anthropic's refusal to accept the contractual term did not constitute protected speech, and that even if a retaliatory motive were assumed, the government would have taken the same action regardless. The DOJ attorneys also challenged the company's claims of irreparable harm, arguing that Anthropic would not suffer lasting damage before a full ruling on the merits.
Broader Stakes
The case has drawn significant attention across the AI and legal communities. A coalition of over 80 former federal judges previously filed an amicus brief in support of Anthropic's position, warning that the government's approach could have a chilling effect on principled AI governance.
The hearing before Judge Rita F. Lin — scheduled for 1:30 PM Pacific today — is expected to determine whether a preliminary injunction will pause the supply chain risk designation while the case proceeds to full litigation.
Why This Case Matters
At its core, Anthropic v. Department of War is a test of whether an AI company can maintain ethical deployment constraints when contracting with the federal government — and what happens when those constraints conflict with military objectives. The outcome could set lasting precedent for how AI safety commitments interact with national security law.
For the broader AI industry, the case raises uncomfortable questions: Can safety-focused AI companies avoid the gravitational pull of defense contracts without facing regulatory retaliation? And does the government have the authority to effectively ban AI providers that refuse to offer unrestricted access to their models?
Whatever Judge Lin decides today, the case is unlikely to end here. Both sides have signaled they are prepared to take the dispute to the Ninth Circuit if necessary.