
Linux Kernel Formally Allows AI-Generated Code — With Humans On The Hook

Michael Ouroumis · 2 min read

The Linux kernel project has ended a months-long argument over artificial intelligence by doing something characteristically Torvalds: refusing to ban the tools, refusing to romanticize them, and making humans eat every mistake the machines make. On April 12, kernel maintainers agreed on a formal, project-wide policy that explicitly allows AI-assisted code contributions, provided submitters follow strict new disclosure rules and accept full accountability for what they ship.

Linus Torvalds ultimately cut the debate short, reportedly dismissing calls for an outright ban as "pointless posturing" and framing AI as just another tool in the developer's belt. The decision closes a fight that had been running since at least January, as maintainers wrestled with a flood of low-quality, machine-generated patches — what detractors called "AI slop" — showing up on kernel mailing lists.

What the policy actually says

The new rules permit developers to use systems such as GitHub Copilot while insisting that human contributors remain fully accountable for every line they submit. That includes code quality, license compliance, and any bugs or security problems that emerge downstream. A developer can prompt Copilot for a suggestion, but the moment they add their Signed-off-by line, they are personally attesting to its correctness.

To make the provenance visible, the kernel is introducing a new "Assisted-by" tag for patches that involved AI. The tag is meant to identify which model and which tools were used, giving maintainers and reviewers a clearer view of how a submission was produced. Crucially, AI agents themselves are forbidden from adding Signed-off-by tags — only humans can take the legal step of certifying a patch.

Why this matters beyond the kernel

The Linux kernel is not just another open-source project. Its contribution norms — the Developer Certificate of Origin, the Signed-off-by workflow, the maintainer hierarchy — have been copied across thousands of downstream projects for two decades. When the kernel adopts a stance on AI, it becomes the de facto template for Git-based open-source governance.

The "humans pay for every mistake" framing also sends a clear signal to enterprises now deploying coding agents at scale. As AI-generated patches proliferate across GitHub, GitLab, and internal repos, kernel-style accountability rules give legal and security teams something concrete to point to. Expect the Assisted-by tag, or close cousins of it, to spread quickly.

The middle ground

The most notable aspect of the decision may be how unremarkable it looks in hindsight. Rather than adopting either of the extremes — ban AI contributions outright, or treat them like any other patch — Torvalds and his maintainers picked a middle path: transparency plus human liability. It is a bet that the kernel's decades-old discipline of individual responsibility can absorb a new class of tool without losing its character.

For now, that bet holds. Whether it survives the first major AI-introduced CVE is a different question.

