The Department of War announced on May 1 that it has reached agreements with seven leading AI companies to deploy their tools on the U.S. military's classified networks, formalizing what officials describe as a sweeping shift toward an "AI-first fighting force." The list — SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft and Amazon Web Services — pointedly omits Anthropic, the maker of Claude, whose months-long standoff with the Pentagon has now hardened into a procurement freeze.
The agreements cover the Defense Department's Impact Level 6 and Impact Level 7 environments, the cloud accreditation tiers used to host Secret and Top Secret workloads. Functions inside these enclaves include mission planning and weapons targeting — work that has historically been walled off from commercial AI vendors. According to the Pentagon's release, the new contracts will "streamline data synthesis, elevate situational understanding, and augment warfighter decision-making in complex operational environments."
A formal AI-first posture
Department officials cast the deals as the operational expression of Secretary of War Pete Hegseth's push to embed frontier models across military workflows. "These agreements accelerate the transformation toward establishing the United States military as an AI-first fighting force and will strengthen our warfighters' ability to maintain decision superiority across all domains of warfare," the Pentagon said in its statement.
The announcement formalizes deals that had been reported piecemeal over recent months. SpaceX, OpenAI and Google had previously been linked to classified-network engagements, while Nvidia, Microsoft, AWS and Reflection — a comparatively young player focused on agentic systems — were added in this round. Reflection is the most striking newcomer: its inclusion places the startup alongside hyperscalers and the dominant chipmaker as a sanctioned vendor for the most sensitive U.S. defense compute.
The Anthropic gap
Anthropic's absence is the loudest part of the announcement. The company is currently in litigation with the Department of War after being labeled a supply-chain risk last month, a designation the company is challenging in federal court. The Pentagon has been openly seeking alternatives to Anthropic's Claude tools amid disagreements over how guardrails on offensive cyber and lethal-effects use cases should be written.
The practical effect is that, as of today, every other major U.S. frontier-model lab has an authorized path onto classified networks while Anthropic does not. That gap is likely to shape enterprise procurement well beyond defense, since IL6/IL7 accreditation is often treated as a proxy for security maturity in regulated industries.
What it means
For the vendors, the deals translate into privileged access to high-margin defense workloads at a moment when commercial AI growth is being stress-tested by capacity and energy constraints. For Anthropic, the announcement raises the cost of its standoff: every quarter spent outside IL6/IL7 is a quarter of revenue, reference customers and operational telemetry flowing to its competitors. And for the broader market, the Pentagon has now made the implicit explicit — frontier AI is national infrastructure, and access to it will be governed accordingly.