The Linux kernel project has ended a months-long argument over artificial intelligence by doing something characteristically Torvalds: refusing to ban the tools, refusing to romanticize them, and making humans eat every mistake the machines make. On April 12, kernel maintainers agreed on a formal, project-wide policy that explicitly allows AI-assisted code contributions, provided submitters follow strict new disclosure rules and accept full accountability for what they ship.
Linus Torvalds ultimately cut the debate short, reportedly dismissing calls for an outright ban as "pointless posturing" and framing AI as just another tool in the developer's belt. The decision closes a fight that had been running since at least January, as maintainers wrestled with a flood of low-quality, machine-generated patches — what detractors called "AI slop" — showing up on kernel mailing lists.
What the policy actually says
The new rules permit developers to use systems such as GitHub Copilot while insisting that human contributors remain fully accountable for every line they submit. That includes code quality, licence compliance, and any bugs or security problems that emerge downstream. A developer can prompt Copilot for a suggestion, but the moment they add their Signed-off-by line, they are personally attesting to its correctness.
To make the provenance visible, the kernel is introducing a new "Assisted-by" tag for patches that involved AI. The tag is meant to identify which model and which tools were used, giving maintainers and reviewers a clearer view of how a submission was produced. Crucially, AI agents themselves are forbidden from adding Signed-off-by tags — only humans can take the legal step of certifying a patch.
Why this matters beyond the kernel
The Linux kernel is not just another open-source project. Its contribution norms — the Developer Certificate of Origin, the Signed-off-by workflow, the maintainer hierarchy — have been copied across thousands of downstream projects for two decades. When the kernel adopts a stance on AI, it becomes the de facto template for Git-based open-source governance.
The "humans pay for every mistake" framing also sends a clear signal to enterprises now deploying coding agents at scale. As AI-generated patches proliferate across GitHub, GitLab, and internal repos, kernel-style accountability rules give legal and security teams something concrete to point to. Expect the Assisted-by tag, or close cousins of it, to spread quickly.
The middle ground
The most notable aspect of the decision may be how unremarkable it looks in hindsight. Rather than adopting either extreme, banning AI contributions outright or treating them like any other patch, Torvalds and his maintainers picked a middle path: transparency plus human liability. It is a bet that the kernel's decades-old discipline of individual responsibility can absorb a new class of tool without losing its character.
For now, that bet holds. Whether it survives the first major AI-introduced CVE is a different question.