
Federal Judge Calls Pentagon's Anthropic Ban an 'Attempt to Cripple' the AI Company

Michael Ouroumis · 2 min read

A federal judge delivered a pointed rebuke of the Pentagon's decision to blacklist AI company Anthropic, suggesting during a San Francisco court hearing on March 24 that the government's actions appear designed to punish the company for its ethical stance on military AI use.

U.S. District Judge Rita Lin did not mince words during the preliminary injunction hearing. "It looks like an attempt to cripple Anthropic," she said, pressing Department of Justice attorney Eric Hamilton on why the AI maker was designated a supply chain risk.

The Core Dispute

The conflict centers on Anthropic's refusal to grant the military unrestricted access to its Claude AI model. The company demanded that the Department of Defense not use Claude for fully autonomous weapons or mass surveillance of Americans. When contract negotiations broke down, the Pentagon designated Anthropic a supply chain risk — effectively banning federal agencies from using its technology.

Hamilton argued that the DOD had "come to worry that Anthropic may in the future take action to sabotage or subvert IT systems," justifying the designation. But Judge Lin appeared unconvinced.

Judge Questions the Government's Logic

Lin suggested the designation was retaliation rather than a legitimate security concern. "If the worry is about operational integrity, DOD could just stop using Claude," she noted. She further questioned the legal threshold, saying a company "can't be designated a supply chain risk for being stubborn and asking annoying questions. That seems a pretty low bar."

The judge also raised concerns about whether Anthropic was "being punished for criticizing the government's contracting position in the press" — a significant First Amendment implication that could shape the broader case.

What Comes Next

Anthropic attorney Michael Mongan urged the court to act quickly, requesting a decision by March 26. Judge Lin concluded the hearing without issuing a ruling but indicated she would decide the motion for a preliminary injunction within days.

The outcome could set a major precedent for the relationship between AI companies and the federal government. If Lin grants the injunction, it would temporarily pause the blacklisting and allow Anthropic's technology to be used by government agencies while the full case proceeds.

Broader Implications for AI Policy

The case has drawn significant attention across the tech industry. At its core, it tests whether AI companies can set ethical boundaries on how the military uses their products — or whether the government can effectively punish them for doing so. With Anthropic CEO Dario Amodei and other AI leaders publicly advocating for responsible deployment, the ruling could influence how other AI firms approach government contracts going forward.

