A Landmark Hearing in San Francisco
Anthropic appeared before US District Judge Rita Lin in San Francisco on Monday to argue for a preliminary injunction against the Department of Defense and the White House. The hearing marks the most significant courtroom confrontation yet between an AI company and the federal government over the ethical boundaries of military AI deployment.
At issue is the Pentagon's decision to designate Anthropic as a "supply chain risk to national security" — a label historically reserved for foreign adversaries such as Huawei — after the company refused to grant the military unrestricted use of its Claude AI model.
How the Dispute Escalated
The conflict traces back to late February, when President Trump and Defense Secretary Pete Hegseth publicly declared they were severing ties with Anthropic. The Pentagon had demanded the company accept contract terms allowing Claude to be used for "any lawful purpose," including autonomous weapons systems and domestic surveillance programs. Anthropic refused, citing its internal safety policies.
On March 3, the government formally finalized the supply chain risk designation, and on March 9, Anthropic filed two lawsuits in response. One challenges the designation under the Administrative Procedure Act, arguing the Pentagon exceeded its statutory authority. The other raises constitutional claims, alleging the blacklisting violates the First Amendment by retaliating against protected speech and the Fifth Amendment by denying due process.
Judge Lin fast-tracked Monday's hearing from its original April 3 date, signaling the urgency of the matter.
Cracks in the Government's Case
Recent court filings have complicated the government's position. According to a TechCrunch report on March 20, a newly surfaced email shows that on March 4 — one day after the supply chain designation was finalized — a senior Pentagon official wrote to Anthropic CEO Dario Amodei stating the two sides were "very close" on the very issues the government now cites as evidence of a national security threat.
Nearly 150 former federal and state judges submitted an amicus brief raising concerns about the precedent of weaponizing supply chain designations against domestic companies over policy disagreements. AI researchers from OpenAI, Google, and Microsoft also filed briefs in support of Anthropic.
What Comes Next
Judge Lin is expected to rule on the preliminary injunction in the coming days. If granted, the order would suspend the supply chain risk designation while the underlying lawsuits proceed, potentially restoring federal agencies' ability to use Claude in the interim.
The case has drawn sharp attention from the tech industry, civil liberties groups, and defense analysts alike, as its outcome could set a precedent for how the government regulates AI companies that impose ethical guardrails on their own technology.