
AI Offensive Cyber Capabilities Are Doubling Every 5.7 Months, Safety Researchers Find

Michael Ouroumis · 2 min read

A new study from AI safety research firm Lyptus Research has found that artificial intelligence offensive cybersecurity capabilities are improving at an alarming rate — doubling roughly every 5.7 months since 2024, a sharp acceleration from the 9.8-month doubling period observed since 2019.
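To see what a 5.7-month doubling period implies, the trend can be sketched as simple exponential growth (illustrative arithmetic only; it assumes the reported trend continues unchanged):

```python
def time_horizon(months_from_now: float, current_hours: float = 3.0,
                 doubling_months: float = 5.7) -> float:
    """Project the 50%-success time horizon, assuming a fixed doubling period."""
    return current_hours * 2 ** (months_from_now / doubling_months)

# One year out, today's ~3-hour horizon would grow to roughly 13 hours:
print(round(time_horizon(12), 1))  # → 12.9
```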

The findings, published on April 5 and based on the METR time-horizon methodology, paint a sobering picture of how quickly AI systems are gaining the ability to autonomously discover and exploit software vulnerabilities.

From 30 Seconds to Three Hours

The researchers evaluated 291 offensive cybersecurity tasks, calibrated against a new human-baseline study with ten professional security practitioners. They measured how long equivalent tasks would take skilled humans to complete, then tested how well AI models could solve them.

The results were striking. The time horizon — the difficulty level at which models achieve a 50 percent success rate — grew from roughly 30 seconds with GPT-2 in 2019 to approximately three hours with today's frontier models, Claude Opus 4.6 and GPT-5.3 Codex, when given a two-million-token compute budget.
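The time-horizon idea can be illustrated with a toy fit: model the probability of success as a logistic function of log task length, then solve for the length at which it crosses 50 percent. This is a minimal sketch of the concept with made-up data, not the study's actual code or methodology:

```python
import math

def fit_logistic(durations_min, successes, lr=0.1, steps=5000):
    """Fit P(success) = sigmoid(a + b*log2(duration)) by plain gradient
    descent on log-loss. A toy sketch of the time-horizon idea."""
    xs = [math.log2(d) for d in durations_min]
    a = b = 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, successes):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += p - y
            gb += (p - y) * x
        a -= lr * ga / len(xs)
        b -= lr * gb / len(xs)
    return a, b

def horizon_50(a, b):
    """Task length (minutes) at which the fitted success rate crosses 50%."""
    return 2.0 ** (-a / b)

# Synthetic results: the model solves short tasks and fails long ones.
durations = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]   # minutes
outcomes  = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
a, b = fit_logistic(durations, outcomes)
# The 50% crossover lands between the longest solved task (16 min)
# and the shortest failed one (32 min).
```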

When the budget was increased to ten million tokens, GPT-5.3 Codex pushed that ceiling even further, achieving a 10.5-hour time horizon compared to 3.1 hours at the lower budget. This suggests that the true capability frontier may be significantly higher than standard benchmarks indicate.
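The two reported data points imply a strong compute payoff. Fitting a power law through them (an illustrative back-of-the-envelope fit, not one the study publishes) gives an exponent of roughly 0.76, meaning the horizon grew about 3.4x for a 5x token increase:

```python
import math

# Reported points: 3.1 h at 2M tokens, 10.5 h at 10M tokens.
h_low, tokens_low = 3.1, 2_000_000
h_high, tokens_high = 10.5, 10_000_000

# Fit horizon ∝ tokens^k through the two points.
k = math.log(h_high / h_low) / math.log(tokens_high / tokens_low)
print(round(k, 2))  # → 0.76
```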

Open-Source Models Trailing by Months

The study also found that open-source models consistently lag behind their closed-source counterparts by approximately 5.7 months — roughly one doubling period. While this gap provides a buffer, it also means capabilities that are exclusive to frontier labs today will likely be widely available within half a year.

Why It Matters

The acceleration from a 9.8-month to a 5.7-month doubling rate since 2024 suggests that recent advances in reasoning, agentic tool use, and code generation have disproportionately benefited offensive cyber applications. Tasks that once required hours of human expertise — reconnaissance, vulnerability discovery, exploit crafting — are increasingly within reach of automated systems.

Researchers cautioned that their findings likely underestimate actual progress, since performance jumps significantly when models are given more computational resources. The gap between benchmark results and real-world capability may be wider than previously assumed.

Implications for Defense

The study underscores the urgency of investing in AI-powered defensive cybersecurity tools. As Ledger CTO Charles Guillemet separately warned this week, AI-generated code and increasingly sophisticated malware demand a shift toward formal verification — using mathematical proofs to validate code — rather than relying solely on traditional security audits.

With offensive AI capabilities on this trajectory, the cybersecurity community faces a narrowing window to build defenses that can keep pace. The full dataset is available on GitHub and Hugging Face for independent verification.

The research adds to a growing body of evidence that AI safety evaluations need to account for rapid capability gains, particularly in high-stakes domains where the gap between helpful automation and dangerous exploitation is razor-thin.

