A wave of internal dissent is rippling through Big Tech. More than 300 Google employees and over 60 OpenAI staffers have signed an open letter urging their companies to follow Anthropic's lead and refuse Pentagon demands for AI-powered mass surveillance and autonomous weapons systems.
What the Letter Says
The letter, published on February 27, calls on Google CEO Sundar Pichai and OpenAI CEO Sam Altman to publicly commit to the same red lines that got Anthropic blacklisted by the Trump administration: no AI for domestic mass surveillance, no autonomous armed drones, and no deployment in systems designed to bypass human oversight in lethal decision-making.
"We joined these companies to build technology that benefits humanity," the letter reads. "Allowing our work to be used for mass surveillance of American citizens or autonomous killing machines crosses a line that cannot be uncrossed."
The Context
The letter lands days after President Trump severed all federal ties with Anthropic, a move that followed CEO Dario Amodei's refusal of Defense Secretary Pete Hegseth's demands to deploy Claude in classified military programs without safety restrictions. Within hours of the blacklisting, OpenAI announced its own Pentagon deal — a move that drew sharp criticism from safety-focused employees.
Sam Altman himself publicly sided with Anthropic, calling the Pentagon's approach "threatening," but his company still signed the contract. That contradiction is at the heart of the open letter.
Google's Complicated History
For Google employees, the letter echoes Project Maven — the controversial 2018 Pentagon contract that triggered mass resignations and eventually led Google to publish its AI Principles. Those principles explicitly state that Google will not design AI for weapons or surveillance that violates international norms.
Signatories argue that the current Pentagon demands clearly cross those red lines and that Google should say so publicly.
Industry Implications
The letter represents the largest coordinated cross-company employee action in AI since the OpenAI board crisis of 2023. It signals that the people building these models are not willing to stay silent as their employers navigate the growing tension between government contracts and safety commitments.
Neither Google nor OpenAI has officially responded to the letter. Anthropic's Dario Amodei thanked the signatories in a brief post, calling the support "meaningful and humbling."
What Happens Next
The practical impact depends on whether leadership responds with policy or silence. Google has its AI Principles. OpenAI just dropped "safety" from its mission statement. The gap between the two companies' positions — and between leadership and rank-and-file employees — has never been wider.
For now, the letter stands as a public record: hundreds of the people actually building frontier AI models want hard limits on how those models are used.