
Anthropic's Claude Opus Discovers 22 Security Vulnerabilities in Firefox in Just Two Weeks

Michael Ouroumis · 2 min read

Mozilla and Anthropic have revealed the results of an unprecedented AI-driven security audit: over a two-week period in January 2026, Anthropic's Claude Opus 4.6 model discovered 22 security vulnerabilities in Firefox, including 14 rated high severity, plus more than 90 additional bugs across the browser's codebase.

The findings, disclosed in early March through a joint blog post, mark one of the most successful applications of AI to real-world software security to date.

How It Worked

Anthropic's Frontier Red Team deployed Claude Opus 4.6 to systematically analyze Firefox's source code, starting with the JavaScript engine before expanding to other components. Within just twenty minutes of exploration, the model identified its first critical finding: a Use After Free vulnerability in the JavaScript engine.

Over the following two weeks, Claude continued to surface bugs at a pace that would be difficult for human researchers to match. All 22 CVEs and most of the additional bugs have now been fixed, with the majority of patches shipping in Firefox 148.

A Working Exploit

Perhaps most striking was Anthropic's demonstration that Claude could go beyond detection to exploitation. The model generated a working exploit for CVE-2026-2796, one of the patched vulnerabilities. Anthropic noted the exploit only functions in a testing environment with some browser security features intentionally disabled, but the capability itself represents a significant milestone.

Implications for Software Security

The partnership carries implications well beyond Firefox. If AI can find this many serious bugs in a mature, heavily audited codebase like Firefox in just two weeks, the potential for AI-assisted security across the software industry is enormous.

Mozilla has announced it will integrate AI-assisted analysis into its ongoing security workflows, effectively making Claude a permanent part of its vulnerability detection pipeline. This mirrors a broader industry trend toward using AI not just for writing code but for systematically finding flaws in it.

The Double-Edged Sword

The results also raise concerns. The same capabilities that make AI effective at finding and patching vulnerabilities could be used by malicious actors to discover zero-days in software that has not yet been audited. Anthropic acknowledged this tension in its disclosure, noting that responsible disclosure practices become even more critical as AI lowers the barrier to vulnerability discovery.

What Comes Next

OpenAI's Codex Security team has reported similar success, analyzing 1.2 million commits and identifying over 10,500 high-severity issues. Together, these efforts suggest that AI-powered security auditing is quickly moving from experimental to essential.

For organizations maintaining large codebases, the message is clear: AI-assisted security analysis is no longer optional. The bugs are there. The only question is whether defenders or attackers find them first.

