Policy

Anthropic vs. Pentagon: Hearing Ends, Judge to Rule Within Days

Michael Ouroumis · 3 min read

The preliminary injunction hearing in Anthropic's lawsuit against the Department of Defense wrapped up on Monday, with Judge Rita Lin of the Northern District of California indicating she would issue a ruling within days. The case, which pits one of AI's most prominent safety-focused labs against the Trump administration, could set a precedent for how the government treats AI companies that refuse military use cases.

Anthropic filed suit earlier this month after the Pentagon designated it a "supply-chain risk" and directed all federal agencies to stop using its Claude models within six months. The designation is normally applied to foreign companies suspected of posing cybersecurity or national security threats — not American firms — and the move generated significant bipartisan backlash from lawmakers and AI researchers alike.

What Each Side Argued

Anthropic argued that the designation was unconstitutional retaliation for the company's decision to set "red lines" on certain military use cases, including mass domestic surveillance and fully autonomous weapons. The company contends the government violated its First and Fifth Amendment rights, and that the executive order directing agencies to abandon Anthropic exceeded presidential authority.

The government's position, articulated by DOJ attorneys representing the Pentagon, was that Anthropic poses an "unacceptable risk to national security" — though specifics were limited in public filings. The administration argued the designation was a legitimate national security determination and that courts should defer to executive branch judgment on such matters.

Real-time reporting from the hearing, live-posted to Bluesky by Lawfare's Molly Roberts, indicated a back-and-forth that gave neither side a clear advantage. Judge Lin reportedly pressed both sides on the legal standards for preliminary injunctions, including whether Anthropic could demonstrate irreparable harm.

What's at Stake

The practical stakes are enormous. Anthropic counts the General Services Administration, the Treasury Department, the State Department, and dozens of other federal agencies among its customers. Most have already announced, publicly or privately, plans to stop using Claude following the Trump administration's order.

Some of Anthropic's biggest private-sector clients — including Microsoft, which integrates Claude into its Azure AI offerings — have made clear they're continuing to use the models for non-Pentagon work. But the reputational and revenue damage from the federal pullout is already underway.

The preliminary injunction hearing is the first major legal test of whether the Trump administration can use national security mechanisms to punish AI companies for product decisions. The broader implications extend beyond Anthropic: any AI lab operating under contract with federal agencies is watching the case closely.

Industry Reaction

A group of more than 150 former federal judges filed an amicus brief supporting Anthropic's position, arguing that the designation process had been applied in an unprecedented and constitutionally dubious way. Employees from OpenAI and Google also published an open letter expressing concern about the precedent the government's actions could set for AI safety standards industry-wide.

The ruling, expected within days, will determine whether the supply-chain risk designation can remain in force while the case proceeds to a full trial.


