
Claude AI Autonomously Writes FreeBSD Kernel Exploit in Four Hours, Sparking Security Alarm

Michael Ouroumis · 2 min read

A technical writeup published by security researcher Nicholas Carlini has sent shockwaves through the cybersecurity community. Anthropic's Claude Code autonomously developed two working remote root exploits for a FreeBSD kernel vulnerability — each succeeding on its first attempt after approximately four hours of compute time. No human guided the exploitation process.

The Exploit

The target was CVE-2026-4747, a stack buffer overflow in FreeBSD's RPCSEC_GSS authentication module that was patched on March 26. According to Carlini's writeup, he stepped away from his keyboard and returned to find the AI had solved six distinct technical problems without human assistance — from identifying the vulnerability's exploitable characteristics to crafting a reliable remote code execution chain that achieved root shell access.

Writing a kernel-level remote exploit is among the most technically demanding tasks in offensive security. It requires understanding memory layouts, bypassing kernel protections, and chaining multiple primitives into a reliable payload. Until now, this level of autonomous exploitation was considered firmly beyond machine capability.

MAD Bugs: The Bigger Picture

The FreeBSD exploit is not an isolated demonstration. It is part of MAD Bugs — Month of AI-Discovered Bugs — a research initiative running through the end of April 2026. Using the same Claude-powered pipeline, Carlini has generated over 500 validated high-severity vulnerabilities across multiple open-source codebases, with new zero-day disclosures emerging every few days.

The sheer volume is staggering. Traditional security researchers might find a handful of critical vulnerabilities in a career-defining year. An AI pipeline is now producing them at industrial scale.

Dual-Use Dilemma

The research forces an uncomfortable conversation about AI capabilities in security. On the defensive side, automated vulnerability discovery could dramatically improve the security of open-source software that underpins critical infrastructure. Projects with limited security budgets could benefit enormously from AI-powered auditing.

But the offensive implications are equally clear. If a research-grade setup can produce kernel exploits autonomously, the barrier to sophisticated cyberattacks drops considerably. Nation-state actors, criminal organizations, and even hobbyists could potentially weaponize similar pipelines.

Industry Response

Security professionals are divided. Some argue that responsible disclosure through initiatives like MAD Bugs is the best way to harden the software ecosystem before malicious actors develop similar capabilities independently. Others worry that publicizing these methods provides a roadmap for attackers.

The FreeBSD Foundation confirmed the vulnerability was patched before Carlini's disclosure and praised the responsible coordination. But with hundreds more disclosures in the pipeline, the pressure on open-source maintainers to keep up with AI-speed vulnerability discovery is only beginning.


More in Research

AI Offensive Cyber Capabilities Are Doubling Every 5.7 Months, Safety Researchers Find

A new study from Lyptus Research reveals AI offensive cybersecurity capabilities have been doubling every 5.7 months since 2024, with frontier models now able to complete tasks that take human experts three hours.

1 day ago · 2 min read
Netflix Open-Sources VOID — An AI That Erases Objects From Video and Rewrites the Physics They Left Behind

Netflix releases VOID (Video Object and Interaction Deletion), an open-source AI model that removes objects from video and inpaints physically plausible outcomes. Human testers preferred VOID over Runway 64.8% to 18.4%.

3 days ago · 2 min read
Google Says Quantum Computers Could Crack Bitcoin's Encryption in 9 Minutes — 20x Fewer Qubits Than Thought

New Google research estimates quantum computers could break the elliptic curve cryptography protecting Bitcoin using fewer than 500,000 physical qubits — a 20-fold reduction in resources required. Ethereum faces an even broader structural vulnerability.

5 days ago · 3 min read