Policy

Pentagon Approves Musk's Grok for Classified Military Systems as Anthropic Faces Ultimatum

Michael Ouroumis · 2 min read

The Department of Defense has granted xAI's Grok model preliminary authorization to operate within classified military systems, while simultaneously issuing Anthropic a 30-day ultimatum to accept revised contract terms or face termination of its existing defense partnerships.

Fast-Track Approval

The authorization, disclosed in a Pentagon procurement filing published Tuesday, allows Grok to process information up to the Secret classification level across several defense applications, including logistics planning and intelligence summarization. The approval bypassed several steps in the standard Authority to Operate process that typically takes 12 to 18 months.

Pentagon spokesperson Lt. Col. Rebecca Torres said the expedited timeline reflected "operational urgency and the maturity of xAI's security architecture," though she declined to specify which programs would use the system.

Anthropic's Deadline

Simultaneously, Anthropic — which has held defense contracts since late 2025 — received notice that it must agree to expanded data-sharing provisions and reduced liability protections within 30 days. Sources familiar with the negotiations said the new terms would require Anthropic to grant the DoD broader access to model internals, including fine-tuning weights and safety evaluation data.

Anthropic has not publicly commented on the ultimatum, though a person close to the company described the terms as "fundamentally incompatible with our security commitments to other customers."

Industry Reaction

The dual announcements sent ripples through the AI defense contracting community. Several former Pentagon officials expressed concern about the compressed approval timeline for Grok, noting that security certifications exist for critical reasons.

"Speed is important, but so is rigor," said Dr. Lisa Sohl, a former Deputy CIO at the Department of Defense. "Cutting corners on classification authority can create vulnerabilities that adversaries are very good at finding." The compressed timeline also echoes OpenAI's recent move to strip safety language from its mission, suggesting a broader pattern in which safety considerations take a back seat to speed.

Broader Implications

The situation highlights the increasingly tangled relationship between Silicon Valley's AI giants and the defense establishment. With AI budgets across the DoD projected to reach $15 billion in fiscal year 2027, the stakes for model providers are enormous — and growing.

Anthropic's response, expected within the month, could reshape how AI companies negotiate the tension between commercial independence and government partnerships. The situation is further complicated by the recent SpaceX-xAI merger, which gives Musk's AI lab access to space-based infrastructure and defense relationships that no competitor can match. The outcome may also set precedent for how defense contracts handle model transparency requirements going forward.
