Policy

300+ Google and OpenAI Employees Sign Open Letter Backing Anthropic's Pentagon Stand

Michael Ouroumis · 2 min read

A wave of internal dissent is rippling through Big Tech. More than 300 Google employees and more than 60 OpenAI staffers have signed an open letter urging their companies to follow Anthropic's lead and refuse Pentagon demands for AI-powered mass surveillance and autonomous weapons systems.

What the Letter Says

The letter, published on February 27, calls on Google CEO Sundar Pichai and OpenAI CEO Sam Altman to publicly commit to the same red lines that got Anthropic blacklisted by the Trump administration: no AI for domestic mass surveillance, no autonomous armed drones, and no deployment in systems designed to bypass human oversight in lethal decision-making.

"We joined these companies to build technology that benefits humanity," the letter reads. "Allowing our work to be used for mass surveillance of American citizens or autonomous killing machines crosses a line that cannot be uncrossed."

The Context

The letter lands days after President Trump severed all federal ties with Anthropic after CEO Dario Amodei refused Defense Secretary Pete Hegseth's demands to deploy Claude in classified military programs without safety restrictions. Within hours of the blacklisting, OpenAI announced its own Pentagon deal — a move that drew sharp criticism from safety-focused employees.

Sam Altman himself publicly sided with Anthropic, calling the Pentagon's approach "threatening," but his company still signed the contract. That contradiction is at the heart of the open letter.

Google's Complicated History

For Google employees, the letter echoes Project Maven — the controversial 2018 Pentagon contract that triggered mass resignations and eventually led Google to publish its AI Principles. Those principles explicitly state that Google will not design AI for weapons or surveillance that violates international norms.

Signatories argue that the current Pentagon demands clearly cross those red lines and that Google should say so publicly.

Industry Implications

The letter represents the largest coordinated cross-company employee action in AI since the OpenAI board crisis of 2023. It signals that the workforce building these models is not willing to stay silent as their employers navigate the growing tension between government contracts and safety commitments.

Neither Google nor OpenAI has officially responded to the letter. Anthropic's Dario Amodei thanked the signatories in a brief post, calling the support "meaningful and humbling."

What Happens Next

The practical impact depends on whether leadership responds with policy or silence. Google has its AI Principles. OpenAI just dropped "safety" from its mission statement. The gap between the two companies' positions — and between leadership and rank-and-file employees — has never been wider.

For now, the letter stands as a public record: hundreds of the people actually building frontier AI models want hard limits on how those models are used.
