Policy

Pentagon CTO Calls Anthropic's Mythos a 'National Security Moment' Even as the Company Stays Blacklisted

Michael Ouroumis · 3 min read

Pentagon Chief Technology Officer Emil Michael told CNBC on Friday that Anthropic remains blacklisted as a Defense Department supply-chain risk, but cast the company's Mythos model as a separate matter the U.S. government cannot ignore. The framing — drawing a line between the company and its most powerful model — landed the same day the Pentagon announced classified-network AI deals with seven of Anthropic's rivals.

"With Anthropic, they're a supply chain risk," Michael said in the interview. He then drew a sharper distinction around the model itself: "The Mythos issue … is a separate national security moment. We have to make sure our networks are hardened up because that model has capabilities that are particular to finding cyber vulnerabilities and patching them."

Anthropic on the outside, Mythos on the inside

The Defense Department on May 1 unveiled deals with OpenAI, Google, Microsoft, Amazon Web Services, Nvidia, SpaceX, and Reflection to deploy AI on its most sensitive classified networks. Anthropic, which has been designated a supply-chain risk after disputes with the Pentagon over use restrictions, was conspicuously absent. According to reporting from earlier in April, the National Security Agency — which sits under the Defense Department — has nonetheless been testing a specialized version of Mythos and benchmarking it against sovereign cyber tools.

Michael's remarks acknowledge that contradiction in plain terms. He described agency access to Mythos as evaluation, not operational deployment, and signaled that more frontier models would be pulled into similar reviews. "We think about the first drop being Mythos, but there's going to be others," he said. "The government's looking for how to work with all of these companies in the coming year so that their capabilities are understood by us first so that we can fix any issues we have in the private and the public sectors."

Why the cybersecurity angle is escalating

The Pentagon's interest in Mythos's offensive and defensive cyber capabilities tracks a broader anxiety inside the U.S. government that frontier AI is starting to compress the timeline between vulnerability discovery and exploitation. If a model can autonomously hunt bugs in widely deployed code, the agency that gets first look at it gains a head start on patching its own systems — and a head start on understanding what an adversary armed with the same model could do.

That logic explains why Mythos can be both "a national security moment" and a product the Pentagon will not buy through its maker. The government is effectively asserting a right to evaluate frontier models independent of a commercial relationship with the developer, a posture that could become standard as more labs ship models with dual-use security capabilities.

Implications for Anthropic and the frontier-model market

For Anthropic, the public split between company and model is a mixed signal. The blacklist remains in place, the company's lawsuits against the Trump administration in San Francisco and Washington are still active, and Trump's order requiring federal agencies to phase out Claude has not been rescinded. But Michael's comments show that even officials publicly opposed to working with Anthropic concede the model's strategic importance — a position that gives the company leverage in any future negotiation.

For the rest of the industry, Michael's "first drop" framing previews a more aggressive government posture toward evaluating frontier capabilities ahead of release. Labs that benefit from being inside the Pentagon's classified-network deals may face similar scrutiny next, especially as cyber-capable models proliferate across OpenAI, Google, and Anthropic. The question for vendors is no longer whether the U.S. government will assess their models, but on what terms — and whether being inside or outside the procurement tent changes the answer.

