A federal judge delivered a pointed rebuke of the Pentagon's decision to blacklist AI company Anthropic, suggesting during a San Francisco court hearing on March 24 that the government's actions appear designed to punish the company for its ethical stance on military AI use.
U.S. District Judge Rita Lin did not mince words during the preliminary injunction hearing. "It looks like an attempt to cripple Anthropic," she said, pressing Department of Justice attorney Eric Hamilton on why the AI maker was designated a supply chain risk.
The Core Dispute
The conflict centers on Anthropic's refusal to grant the military unrestricted access to its Claude AI model. The company demanded that the Department of Defense not use Claude for fully autonomous weapons or mass surveillance of Americans. When contract negotiations broke down, the Pentagon designated Anthropic a supply chain risk — effectively banning federal agencies from using its technology.
Hamilton argued that the designation was justified because the DOD had "come to worry that Anthropic may in the future take action to sabotage or subvert IT systems." But Judge Lin appeared unconvinced.
Judge Questions the Government's Logic
Lin suggested the designation was retaliation rather than a legitimate security concern. "If the worry is about operational integrity, DOD could just stop using Claude," she noted. She further questioned the legal threshold, saying a company "can't be designated a supply chain risk for being stubborn and asking annoying questions. That seems a pretty low bar."
The judge also raised concerns about whether Anthropic was "being punished for criticizing the government's contracting position in the press" — a significant First Amendment implication that could shape the broader case.
What Comes Next
Anthropic attorney Michael Mongan urged the court to act quickly, requesting a decision by March 26. Judge Lin concluded the hearing without issuing a ruling but indicated she would decide the motion for a preliminary injunction within days.
The outcome could set a major precedent for the relationship between AI companies and the federal government. If Lin grants the injunction, the blacklisting would be paused and government agencies could resume using Anthropic's technology while the full case proceeds.
Broader Implications for AI Policy
The case has drawn significant attention across the tech industry. At its core, it tests whether AI companies can set ethical boundaries on how their products are used by the military — or whether the government can effectively punish them for doing so. With AI leaders like CEO Dario Amodei publicly advocating for responsible deployment, the ruling could influence how other AI firms approach government contracts going forward.