A technical writeup published by security researcher Nicholas Carlini has sent shockwaves through the cybersecurity community. Anthropic's Claude Code autonomously developed two working remote root exploits for a FreeBSD kernel vulnerability — each succeeding on its first attempt after approximately four hours of compute time. No human guided the exploitation process.
The Exploit
The target was CVE-2026-4747, a stack buffer overflow in FreeBSD's RPCSEC_GSS authentication module that was patched on March 26. According to Carlini's writeup, he stepped away from his keyboard and returned to find the AI had solved six distinct technical problems without human assistance — from identifying the vulnerability's exploitable characteristics to crafting a reliable remote code execution chain that achieved root shell access.
Writing a kernel-level remote exploit is among the most technically demanding tasks in offensive security. It requires understanding memory layouts, bypassing kernel protections, and chaining multiple primitives into a reliable payload. Until now, this level of autonomous exploitation was considered firmly beyond machine capability.
MAD Bugs: The Bigger Picture
The FreeBSD exploit is not an isolated demonstration. It is part of MAD Bugs — Month of AI-Discovered Bugs — a research initiative running through the end of April 2026. Using the same Claude-powered pipeline, Carlini has generated over 500 validated high-severity vulnerabilities across multiple open-source codebases, with new zero-day disclosures emerging every few days.
The sheer volume is staggering. Traditional security researchers might find a handful of critical vulnerabilities in a career-defining year. An AI pipeline is now producing them at industrial scale.
Dual-Use Dilemma
The research forces an uncomfortable conversation about AI capabilities in security. On the defensive side, automated vulnerability discovery could dramatically improve the security of open-source software that underpins critical infrastructure. Projects with limited security budgets could benefit enormously from AI-powered auditing.
But the offensive implications are equally clear. If a research-grade setup can produce kernel exploits autonomously, the barrier to sophisticated cyberattacks drops considerably. Nation-state actors, criminal organizations, and even hobbyists could weaponize similar pipelines.
Industry Response
Security professionals are divided. Some argue that responsible disclosure through initiatives like MAD Bugs is the best way to harden the software ecosystem before malicious actors develop similar capabilities independently. Others worry that publicizing these methods provides a roadmap for attackers.
The FreeBSD Foundation confirmed the vulnerability was patched before Carlini's disclosure and praised the responsible coordination. But with hundreds more disclosures in the pipeline, the pressure on open-source maintainers to keep up with AI-speed vulnerability discovery is only beginning.