Anthropic has inadvertently revealed the name and details of its next major AI model — not through a deliberate announcement, but through a significant security lapse in its own content management system.
The model is called Claude Mythos, and the leaked documents place it in a new tier called "Capybara" that sits above Opus in Anthropic's model hierarchy. The company describes it as "the most capable we've built to date" and a "step change" in AI performance.
How the Leak Happened
Anthropic's website content management system was configured so that uploaded assets were public by default, unless explicitly marked private. A misconfiguration meant that nearly 3,000 unpublished assets — blog posts, images, PDFs, internal documents — were accessible to anyone who knew how to query the system.
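The failure mode described above — public-by-default access with an opt-in privacy flag — is a classic security misconfiguration pattern. The sketch below is purely illustrative: the names (`Asset`, `private`, `published`) are hypothetical and have nothing to do with Anthropic's actual CMS code. It contrasts the default-allow logic with a default-deny alternative.

```python
# Illustrative sketch of a public-by-default access check versus a
# private-by-default one. All names here are hypothetical, not taken
# from any real CMS.
from dataclasses import dataclass

@dataclass
class Asset:
    path: str
    published: bool = False  # editorial state: formally released?
    private: bool = False    # access flag that must be set explicitly

def is_accessible_default_public(asset: Asset) -> bool:
    # Default-allow: anything not explicitly marked private is served,
    # so an unpublished draft leaks unless someone remembers the flag.
    return not asset.private

def is_accessible_default_private(asset: Asset) -> bool:
    # Default-deny: only assets that were deliberately published
    # are served; forgetting a flag fails closed, not open.
    return asset.published

draft = Asset("announcements/new-model-draft.pdf")
print(is_accessible_default_public(draft))   # True  — the draft is exposed
print(is_accessible_default_private(draft))  # False — default-deny hides it
```

The difference is a single design choice: under default-deny, the same human error (never touching the flag) leaves a draft hidden instead of exposed.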
Alexandre Pauwels, a cybersecurity researcher at the University of Cambridge, reviewed the material after being contacted by Fortune, which first reported the story. Roy Paz, a senior AI security researcher at LayerX Security, also independently located and reviewed the files.
Among the unpublished assets: a complete draft blog post announcing Claude Mythos, details of an invite-only CEO summit planned for Europe, and other internal materials.
Anthropic was informed by Fortune on Thursday. The company promptly locked down the data store. An Anthropic spokesperson attributed the exposure to "human error in the CMS configuration."
Notably, Anthropic — which has publicly described automating significant portions of its internal software development using Claude-based coding agents — said AI was not responsible for this particular error.
What Mythos Is
The leaked draft describes Claude Mythos as representing a new tier of model capability. Anthropic's current naming convention runs Haiku (small, fast, cheap) → Sonnet (mid-tier) → Opus (largest and most capable). Mythos/Capybara sits above that entire stack.
The document uses the name "Capybara" to describe the new tier itself — "a new name for a new tier of model: larger and more intelligent than our Opus models — which were, until now, our most powerful." Mythos appears to be the specific model name within that new Capybara tier.
Anthropic said Mythos is currently being tested with "early access customers" — suggesting it's already past internal development and into limited external validation before a formal public launch.
The leaked draft also mentioned that Anthropic considers Mythos to pose "unprecedented cybersecurity risks" — a disclosure that will draw scrutiny from AI safety researchers and policymakers who have been pressing companies to be more transparent about capability assessments before release.
The Context
The leak lands in an unusual week for Anthropic. Just days before the CMS exposure was reported, a federal judge ruled that the Pentagon's attempt to ban Anthropic from government contracts constituted "illegal First Amendment retaliation" — a landmark legal victory. The company is simultaneously in a strong competitive position, with Claude widely regarded as the leading model for coding and reasoning tasks.
The security lapse doesn't change the underlying story of Anthropic's technical progress, but it does raise questions about how a company that has positioned itself as a responsible, safety-focused AI developer handles the operational security of its own infrastructure.
The company's public response has been measured: acknowledge the human error, fix the issue, confirm the model exists. What it hasn't done is provide a launch timeline for Mythos — leaving the AI community to speculate based on a draft document that was never supposed to be public.