Policy

Pentagon Approves Musk's Grok for Classified Military Systems as Anthropic Faces Ultimatum

Michael Ouroumis · 2 min read

The Department of Defense has granted xAI's Grok model preliminary authorization to operate within classified military systems, while simultaneously issuing Anthropic a 30-day ultimatum to accept revised contract terms or face termination of its existing defense partnerships.

Fast-Track Approval

The authorization, disclosed in a Pentagon procurement filing published Tuesday, allows Grok to process information up to the Secret classification level across several defense applications, including logistics planning and intelligence summarization. The approval bypassed several steps in the standard Authority to Operate process that typically takes 12 to 18 months.

Pentagon spokesperson Lt. Col. Rebecca Torres said the expedited timeline reflected "operational urgency and the maturity of xAI's security architecture," though she declined to specify which programs would use the system.

Anthropic's Deadline

Simultaneously, Anthropic — which has held defense contracts since late 2025 — received notice that it must agree to expanded data-sharing provisions and reduced liability protections within 30 days. Sources familiar with the negotiations said the new terms would require Anthropic to grant the DoD broader access to model internals, including fine-tuning weights and safety evaluation data.

Anthropic has not publicly commented on the ultimatum, though a person close to the company described the terms as "fundamentally incompatible with our security commitments to other customers."

Industry Reaction

The dual announcements sent ripples through the AI defense contracting community. Several former Pentagon officials expressed concern about the compressed approval timeline for Grok, noting that security certifications exist for critical reasons.

"Speed is important, but so is rigor," said Dr. Lisa Sohl, a former Deputy CIO at the Department of Defense. "Cutting corners on classification authority can create vulnerabilities that adversaries are very good at finding." Critics have drawn a parallel to OpenAI's recent moves to de-emphasize safety in its mission, arguing the compressed timeline fits a broader industry pattern of safety considerations taking a back seat to speed.

Broader Implications

The situation highlights the increasingly tangled relationship between Silicon Valley's AI giants and the defense establishment. With AI budgets across the DoD projected to reach $15 billion in fiscal year 2027, the stakes for model providers are enormous — and growing.

Anthropic's response, expected within the month, could reshape how AI companies negotiate the tension between commercial independence and government partnerships. The situation is further complicated by the recent SpaceX-xAI merger, which gives Musk's AI lab access to space-based infrastructure and defense relationships that no competitor can match. The outcome may also set precedent for how defense contracts handle model transparency requirements going forward.

