Industry

Anthropic Leaked Its Own Source Code — Then Accidentally Took Down 8,000 GitHub Repos Trying to Fix It

Michael Ouroumis · 3 min read

Anthropic is having a very bad week. What began as a routine npm package update turned into one of the most embarrassing source code leaks in recent AI history — and the company's attempt to contain the damage managed to make things considerably worse.

How 512,000 Lines of Code Ended Up on npm

On March 31, Anthropic pushed version 2.1.88 of its Claude Code command-line tool. Buried inside the package was a source map file that should never have been included — a debugging artifact that pointed directly to a zip archive on Anthropic's own cloud storage containing the complete TypeScript codebase.
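The failure mode is easy to illustrate. Bundlers typically append a `sourceMappingURL` comment to generated JavaScript, and anyone who downloads the published npm tarball can read it. The following is a minimal sketch (the URL is a made-up placeholder, not Anthropic's actual storage path):

```python
import re

# A published bundle's last line often carries a source-map pointer. If the
# referenced map -- or an archive it points at -- is publicly reachable, the
# original sources can be reconstructed from it.
bundle = """
console.log("hello");
//# sourceMappingURL=https://storage.example.com/builds/cli.js.map
"""

# Extract the sourceMappingURL comment, the same artifact a researcher
# inspecting the tarball would see.
match = re.search(r"^//# sourceMappingURL=(\S+)", bundle, re.MULTILINE)
if match:
    print("source map exposed at:", match.group(1))
```

Stripping such comments (or withholding the map files themselves) from release builds is the standard way to avoid this.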

Security researcher Chaofan Shou was the first to notice and post about it publicly. His post accumulated over 28 million views on X. Within hours, the code was spreading across GitHub. The leaked package contained nearly 2,000 files and over 512,000 lines of code.

Anthropic pulled the version from npm quickly. By that point it no longer mattered; copies were already circulating.

"No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson confirmed. "This was a release packaging issue caused by human error, not a security breach."

The distinction is technically accurate but commercially awkward. The leaked codebase surpassed 84,000 GitHub stars. Spin-off projects — including one called OpenCode — emerged directly from the exposure.

KAIROS: The Always-On Agent Mode Nobody Knew Existed

The leak gave researchers an unusually detailed look at Anthropic's internal roadmap. The most striking discovery was a feature codenamed KAIROS, mentioned over 150 times throughout the codebase.

KAIROS appears to describe an autonomous daemon mode — a version of Claude Code that runs continuously in the background, performing tasks like memory consolidation and context management while the user is doing something else. It's the kind of always-on AI assistant capability that has been discussed in the abstract for years, now confirmed as an active internal project.

The leak also confirmed several internal model codenames: Capybara for a Claude 4.6 variant, Fennec for Opus 4.6, and Numbat for an unreleased model still in testing. Anthropic hadn't publicly disclosed any of these.

The DMCA Response That Targeted Innocent Developers

Anthropic's legal response to the leak created a second, separate controversy. The company filed a DMCA takedown notice asking GitHub to remove repositories containing the leaked source code. The notice was executed against 8,100 repositories.

The problem: that sweep included legitimate forks of Anthropic's own publicly-released Claude Code repository. Developers who had forked the public codebase for entirely unrelated reasons suddenly found their work blocked — not because they had done anything wrong, but because they were caught in the blast radius of an automated takedown system that didn't distinguish between forks of the leaked code and forks of the legitimate public repo.

Boris Cherny, Anthropic's head of Claude Code, acknowledged the error on X: "This was not intentional, we've been working with GitHub to fix it." The company retracted the notices for everything except the one target repository and its 96 direct forks containing the actual leaked code. GitHub restored access to the affected forks.

A Separate Trojan, at the Worst Possible Time

In an unfortunate coincidence, a separate security incident unfolded on the same day. Users who installed or updated Claude Code via npm on March 31 between 00:21 and 03:49 UTC may have pulled a compromised build of the axios HTTP client containing a remote access trojan. Security researchers confirmed this was a supply chain attack on the axios package itself, not an Anthropic-specific failure — but the timing made an already difficult situation harder to communicate clearly.

Users who updated during that window are advised to downgrade to a safe version and rotate all secrets.
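For npm projects, the first check is which axios versions actually landed in the dependency tree. A hedged sketch of that scan over `package-lock.json` (the lockfile layout follows npm v7+; the compromised-version set below is a placeholder, since the real indicators come from the security advisory):

```python
import json

# Placeholder only -- substitute the version list from the actual advisory.
BAD_VERSIONS = {"0.0.0-placeholder"}

def find_axios_versions(lockfile_text: str) -> set:
    """Return every axios version pinned in an npm v7+ package-lock.json."""
    lock = json.loads(lockfile_text)
    found = set()
    # Modern lockfiles list every installed package under "packages",
    # keyed by its node_modules path.
    for path, meta in lock.get("packages", {}).items():
        if path.endswith("node_modules/axios"):
            found.add(meta.get("version", "unknown"))
    return found

# Tiny synthetic lockfile standing in for a real project's.
sample_lock = json.dumps({
    "packages": {
        "": {"name": "demo"},
        "node_modules/axios": {"version": "1.6.8"},
    }
})

versions = find_axios_versions(sample_lock)
print("axios versions found:", versions)
print("compromised versions present:", versions & BAD_VERSIONS)
```

Even if the scan comes back clean, rotating secrets is cheap insurance; a RAT that ran for any length of time may have read credentials the lockfile check cannot detect.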

The Bigger Picture

Anthropic is reportedly planning an IPO. A source code leak at an AI company that sells reliability and safety as core differentiators is more than a PR problem — it raises real questions about internal controls and release process maturity. The fact that the DMCA cleanup then swept up thousands of innocent developers compounds the reputational cost.

The AI coding tools market is one of the most commercially important battlegrounds in enterprise AI right now. Claude Code is a category leader. The question Anthropic now has to answer isn't just how the leak happened — it's whether the cleanup response signals the kind of execution discipline that public market investors and enterprise customers need to see.

