The preliminary injunction hearing in Anthropic's lawsuit against the Department of Defense wrapped up on Monday, with Judge Rita Lin of the Northern District of California indicating she would issue a ruling within days. The case, which pits one of AI's most prominent safety-focused labs against the Trump administration, could set a precedent for how the government treats AI companies that refuse military use cases.
Anthropic filed suit earlier this month after the Pentagon designated it a "supply-chain risk" and directed all federal agencies to stop using its Claude models within six months. The designation is normally applied to foreign companies suspected of posing cybersecurity or national security threats — not American firms — and the move generated significant bipartisan backlash from lawmakers and AI researchers alike.
What Each Side Argued
Anthropic argued that the designation was unconstitutional retaliation for the company's decision to set "red lines" on certain military use cases, including mass domestic surveillance and fully autonomous weapons. The company contends the government violated its First and Fifth Amendment rights, and that the executive order directing agencies to abandon Anthropic exceeded presidential authority.
The government's position, articulated by DOJ attorneys representing the Pentagon, was that Anthropic poses an "unacceptable risk to national security" — though specifics were limited in public filings. The administration argued the designation was a legitimate national security determination and that courts should defer to executive branch judgment on such matters.
Live coverage of the hearing, posted to Bluesky by Lawfare's Molly Roberts, suggested a back-and-forth that gave neither side a clear advantage. Judge Lin reportedly pressed both sides on the legal standards for a preliminary injunction, including whether Anthropic could demonstrate irreparable harm.
What's at Stake
The practical stakes are enormous. Anthropic counts the General Services Administration, the Treasury Department, the State Department, and dozens of other federal agencies among its customers. Most have already signaled, publicly or privately, that they plan to stop using Claude in compliance with the administration's order.
Some of Anthropic's biggest private-sector clients, including Microsoft, which integrates Claude into its Azure AI offerings, have made clear they will continue using the models for non-Pentagon work. But the reputational and revenue damage from the federal pullout is already underway.
The preliminary injunction hearing is the first major legal test of whether the Trump administration can use national security mechanisms to punish AI companies for product decisions. The broader implications extend beyond Anthropic: any AI lab operating under contract with federal agencies is watching the case closely.
Industry Reaction
A group of more than 150 former federal judges filed an amicus brief supporting Anthropic's position, arguing that the designation process had been applied in an unprecedented and constitutionally dubious way. Employees from OpenAI and Google also published an open letter expressing concern about the precedent the government's actions could set for AI safety standards industry-wide.
The ruling, expected within days, will determine whether the supply-chain risk designation can remain in force while the case proceeds to a full trial.
By Michael Ouroumis