Research

Anthropic's Mythos Is Finding Bugs Faster Than Open-Source Teams Can Patch Them

Michael Ouroumis · 3 min read

A Bloomberg investigation this week, combined with Anthropic's own Project Glasswing disclosures, paints a striking picture of the AI security era: the company's Mythos Preview model is uncovering vulnerabilities across operating systems, browsers, and open-source infrastructure faster than the maintainers responsible for fixing them can keep up. Chris Stokel-Walker's reporting noted that Mythos is "adding to concerns about rising workloads for open-source maintainers," many of whom already feel overwhelmed by a steady stream of bug reports.

A one-sided arms race

Anthropic says Mythos has identified thousands of high- and critical-severity bugs across every major operating system and browser, including a now-patched vulnerability in OpenBSD that had reportedly gone undiscovered for roughly 27 years. But across the broader set of disclosures, fewer than 1% of the vulnerabilities surfaced so far have been fully patched by the teams that own the affected code, according to Bloomberg's reporting and data summarized by AI industry trackers. That means more than 99% of the issues Mythos has already flagged remain open.

Anthropic has tried to keep the situation from getting worse. Every report is triaged internally, and the highest-severity bugs are passed to contracted human security professionals who manually validate findings before they ever reach a maintainer. In theory, that filtering spares maintainers the "AI slop" problem: the flood of confident but fabricated vulnerability reports that curl lead Daniel Stenberg has publicly complained about for more than a year, and which contributed to curl shutting down its HackerOne bug bounty earlier this year.

Why maintainers are still drowning

The new problem is almost the opposite of AI slop: the reports are real, and there are too many of them. Open-source projects like curl are maintained by small volunteer teams — often six or seven people — who now face a pipeline of legitimate, well-written, AI-discovered vulnerabilities arriving faster than any human can reproduce, fix, review, and ship.

That asymmetry matters far beyond curl. The same codebases that Mythos is probing sit underneath banks, governments, browsers, and cloud platforms, so every unpatched finding is a known weakness in widely shared infrastructure.

Anthropic's countermeasures

To blunt the imbalance, Anthropic has paired Mythos disclosures with direct financial support for maintainer ecosystems. The company has donated $2.5 million to Alpha-Omega and the OpenSSF through the Linux Foundation, and $1.5 million to the Apache Software Foundation. It has also extended Mythos Preview access to more than 40 organizations that build or maintain critical infrastructure, backed by up to $100 million in usage credits so defenders can run the same class of model that is finding the bugs.

What it means

The broader signal is that frontier AI has pushed offensive security capability past the point where traditional disclosure pipelines can keep up. Money for maintainers helps, but the structural question — who is responsible for patching the long tail of unmaintained but load-bearing open-source code — is now a policy problem, not just an engineering one. Expect governments, cyber-insurance markets, and large downstream vendors to be pulled into that conversation within weeks.

More in Research

Physical Intelligence's π0.7 Robot Brain Teaches Itself Tasks It Was Never Trained On

Physical Intelligence's new π0.7 model shows early signs of compositional generalization, letting robots fold laundry and operate new kitchen appliances without task-specific training data.

2 hours ago · 3 min read
Anthropic Refuses to Fix MCP Flaw Putting 200,000 Servers at Risk

OX Security researchers disclosed a systemic design flaw in Anthropic's Model Context Protocol affecting 150M+ downloads and roughly 200,000 servers. Anthropic declined to modify the architecture, calling the behavior expected.

10 hours ago · 3 min read
Researchers Expose 26 Malicious LLM Routers Hijacking AI Agents and Stealing Credentials

A UC Santa Barbara study of 428 LLM API routers found 26 secretly injecting malicious tool calls, exfiltrating credentials, and draining crypto wallets — exposing a critical blind spot in the AI supply chain.

3 days ago · 2 min read