
AI Offensive Cyber Capabilities Are Doubling Every 5.7 Months, Safety Researchers Find

Michael Ouroumis · 2 min read

A new study from AI safety research firm Lyptus Research has found that the offensive cybersecurity capabilities of artificial intelligence systems are improving at an alarming rate: doubling roughly every 5.7 months since 2024, a sharp acceleration from the 9.8-month doubling period measured over the full trend since 2019.

The findings, published on April 5 and based on the METR time-horizon methodology, paint a sobering picture of how quickly AI systems are gaining the ability to autonomously discover and exploit software vulnerabilities.

From 30 Seconds to Three Hours

The study evaluated 291 offensive cybersecurity tasks whose difficulty was grounded in a new baseline study of ten professional security practitioners. Researchers measured how long each task would take a skilled human to complete, then tested how well AI models could solve it.

The results were striking. The time horizon — the difficulty level at which models achieve a 50 percent success rate — grew from roughly 30 seconds with GPT-2 in 2019 to approximately three hours with today's frontier models, Claude Opus 4.6 and GPT-5.3 Codex, when given a two-million-token compute budget.
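The METR-style time horizon can be thought of as the 50 percent point of a logistic curve fit to model success against the log of human completion time. A minimal sketch in Python illustrates the idea; the task results below are invented for illustration and are not Lyptus Research's actual data:

```python
import math

# Hypothetical task results: (human completion time in minutes, model succeeded?)
# Illustrative data only; not the study's actual measurements.
tasks = [
    (0.5, True), (1, True), (2, True), (5, True), (10, True),
    (30, True), (60, True), (120, False), (180, True), (240, False),
    (480, False), (960, False), (1920, False),
]

def fit_time_horizon(tasks, lr=0.1, steps=20000):
    """Fit P(success) = sigmoid(a - b*log2(t)) by gradient ascent on the
    log-likelihood, then solve for the t where P = 0.5 (the time horizon)."""
    a, b = 0.0, 1.0
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for t, ok in tasks:
            x = math.log2(t)
            p = 1 / (1 + math.exp(-(a - b * x)))
            err = (1.0 if ok else 0.0) - p
            grad_a += err        # d(logL)/da
            grad_b += -err * x   # d(logL)/db
        a += lr * grad_a / len(tasks)
        b += lr * grad_b / len(tasks)
    # P = 0.5 exactly when a - b*log2(t) = 0, i.e. t = 2^(a/b)
    return 2 ** (a / b)

horizon = fit_time_horizon(tasks)
print(f"50% time horizon: {horizon:.0f} minutes")
```

Note that the curve is fit across all tasks, so a model can fail some tasks below its horizon (as with the 120-minute task above) and still clear others above it; the horizon summarizes the whole success profile, not a hard cutoff.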

When the budget was increased to ten million tokens, GPT-5.3 Codex pushed that ceiling even further, achieving a 10.5-hour time horizon compared to 3.1 hours at the lower budget. This suggests that the true capability frontier may be significantly higher than standard benchmarks indicate.

Open-Source Models Trailing by Months

The study also found that open-source models consistently lag behind their closed-source counterparts by approximately 5.7 months — roughly one doubling period. While this gap provides a buffer, it also means capabilities that are exclusive to frontier labs today will likely be widely available within half a year.

Why It Matters

The acceleration from a 9.8-month to a 5.7-month doubling rate since 2024 suggests that recent advances in reasoning, agentic tool use, and code generation have disproportionately benefited offensive cyber applications. Tasks that once required hours of human expertise — reconnaissance, vulnerability discovery, exploit crafting — are increasingly within reach of automated systems.
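As a back-of-the-envelope check, the reported figures plug directly into the standard exponential-growth formula. The numbers below come from the article itself; the 12-month extrapolation is purely an illustration of the stated 5.7-month rate, not a claim from the study:

```python
import math

# Figures reported in the article
h_2019_s = 30        # ~30-second time horizon with GPT-2 in 2019
h_now_s = 3 * 3600   # ~3-hour time horizon with today's frontier models

# Total capability growth expressed as number of doublings
doublings = math.log2(h_now_s / h_2019_s)  # log2(360) ≈ 8.5
print(f"{doublings:.1f} doublings from 2019 to today")

def horizon_in(months, current_s=h_now_s, doubling_months=5.7):
    """Extrapolate the time horizon forward at one doubling per 5.7 months."""
    return current_s * 2 ** (months / doubling_months)

print(f"Implied horizon in 12 months: {horizon_in(12) / 3600:.1f} hours")
```

At the post-2024 rate, a year adds a little over two doublings, which would push a 3-hour horizon to roughly 13 hours, consistent in spirit with the 10.5-hour figure already reachable today at the larger compute budget.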

Researchers cautioned that their findings likely underestimate actual progress, since performance jumps significantly when models are given more computational resources. The gap between benchmark results and real-world capability may be wider than previously assumed.

Implications for Defense

The study underscores the urgency of investing in AI-powered defensive cybersecurity tools. As Ledger CTO Charles Guillemet separately warned this week, AI-generated code and increasingly sophisticated malware demand a shift toward formal verification — using mathematical proofs to validate code — rather than relying solely on traditional security audits.

With offensive AI capabilities on this trajectory, the cybersecurity community faces a narrowing window to build defenses that can keep pace. The full dataset is available on GitHub and Hugging Face for independent verification.

The research adds to a growing body of evidence that AI safety evaluations need to account for rapid capability gains, particularly in high-stakes domains where the gap between helpful automation and dangerous exploitation is razor-thin.

