Anthropic had a very bad Tuesday.
When the company pushed Claude Code version 2.1.88, it accidentally included a source map file containing its entire TypeScript codebase — all 512,000+ lines of it. Within hours, the internet had found it, forked it 50,000 times, and turned it into a public treasure hunt for upcoming Anthropic features.
How It Happened
The leak stemmed from a release packaging mistake. When Claude Code's JavaScript bundle was built, the build process included an internal source map file — meant only for debugging — that contained the full TypeScript source code. It's the kind of slip that can happen when shipping fast, and it's rarely catastrophic when it does. This was the rare exception.
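To see why a stray source map is so damaging, consider the format itself. A v3 source map's optional `sourcesContent` field embeds every original file verbatim, so shipping the `.map` file is effectively shipping the source. The sketch below uses made-up file names and contents, not anything from the actual leak:

```typescript
// A v3 source map, trimmed to the fields that matter here. When a bundler
// populates "sourcesContent", the map carries the complete original source.
interface SourceMap {
  version: number;
  sources: string[];          // original file paths
  sourcesContent?: string[];  // full original source, one entry per path
  mappings: string;           // VLQ-encoded position mappings
}

// Hypothetical map for a one-file project.
const exampleMap: SourceMap = {
  version: 3,
  sources: ["src/agent.ts"],
  sourcesContent: ["export const run = () => console.log('hello');"],
  mappings: "AAAA",
};

// Anyone holding the .map can reconstruct every original file:
const recovered = exampleMap.sources.map((path, i) => ({
  path,
  code: exampleMap.sourcesContent?.[i] ?? "(not embedded)",
}));

console.log(recovered[0].path);
console.log(recovered[0].code); // the original TypeScript, verbatim
```

No decompilation or reverse engineering is required — the map is a plain JSON file, which is why a leaked one spreads so fast.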
"Earlier today, a Claude Code release included some internal source code," Anthropic spokesperson Christopher Nulty confirmed to The Verge. "No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."
Anthropic fixed the issue quickly. By then it was too late.
The GitHub Wildfire
A user on X spotted the exposed source map and posted a file containing the code. Within hours, a GitHub repository named "claw-code" hosting the leaked source had accumulated more than 50,000 forks. That number makes retrieval essentially impossible — once code is forked at that scale, it's out.
What People Found
The code contained far more than implementation details. Users who dug into it surfaced several things Anthropic clearly wasn't ready to announce:
The Tamagotchi Pet
Perhaps the most surprising discovery: a virtual pet that "sits beside your input box and reacts to your coding." According to a Reddit post summarizing the findings, the companion appears designed to respond to what you're building — celebrating progress, reacting to errors, keeping developers company during long sessions. It's a charming, distinctly un-corporate product idea from an AI lab known for sober safety research.
KAIROS — The Always-On Agent
More significant from a product standpoint is what users identified as a "KAIROS" feature — code indicating an always-on background agent capability. Rather than responding only when prompted, KAIROS would let Claude run continuously in the background, monitoring tasks, watching for events, and taking action autonomously. That's a meaningful architectural shift from a tool you talk to toward something closer to a teammate that never goes idle.
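Nothing below is from the leaked code — it's a minimal sketch of what "always-on" means architecturally. Instead of one handler invoked per prompt, the agent owns a loop that pulls events from a source and acts without a user in the loop (the event kinds and driver are invented for illustration):

```typescript
// Illustrative only. An always-on agent replaces request/response with a
// loop over an event source; here the source is a finite queue so it ends.
type AgentEvent = { kind: "file_changed" | "test_failed"; detail: string };

async function backgroundAgent(
  nextEvent: () => Promise<AgentEvent | null>, // null = source exhausted
  act: (e: AgentEvent) => Promise<void>,
): Promise<void> {
  for (;;) {
    const event = await nextEvent();
    if (event === null) return; // a real agent would idle and keep waiting
    await act(event);           // autonomous action, no prompt required
  }
}

// Example driver: drain a small queue of hypothetical events.
const queue: AgentEvent[] = [
  { kind: "test_failed", detail: "auth.spec.ts" },
  { kind: "file_changed", detail: "src/index.ts" },
];
const handled: string[] = [];
const done = backgroundAgent(
  async () => queue.shift() ?? null,
  async (e) => { handled.push(`${e.kind}: ${e.detail}`); },
);
done.then(() => console.log(handled.length, "events handled"));
```

The interesting design consequence is that "when to act" moves from the user into the agent's own loop — exactly the shift from tool to teammate the leaked code hints at.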
The Memory Architecture
The leaked code also offered a look at how Claude Code manages memory across sessions — the internal architecture that lets it maintain context about your project, coding style, and previous decisions over time.
A Developer's Candor
And then there was this: a code comment from one of Anthropic's engineers admitting that "the memoization here increases complexity by a lot, and im not sure it really improves performance." A nice reminder that the people building AI are just as prone to leaving honest comments in code as anyone else.
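For readers who haven't hit that trade-off themselves: memoization caches a function's results so repeat calls skip recomputation, and the cache bookkeeping is exactly the complexity the comment laments. A generic sketch — not Anthropic's code — of the pattern and its overhead:

```typescript
// Illustrative memoizer, not from the leaked codebase. The Map, the key
// serialization, and the cache-hit branch are the added complexity; for
// cheap functions, JSON.stringify on every call can cost more than just
// recomputing the result — which is the honest doubt in the leaked comment.
function memoize<A extends unknown[], R>(fn: (...args: A) => R): (...args: A) => R {
  const cache = new Map<string, R>();
  return (...args: A): R => {
    const key = JSON.stringify(args); // serialization overhead on every call
    const hit = cache.get(key);
    if (hit !== undefined) return hit; // caveat: treats undefined as a miss
    const result = fn(...args);
    cache.set(key, result);
    return result;
  };
}

const square = (n: number): number => n * n;
const memoSquare = memoize(square);
console.log(memoSquare(12)); // 144, computed
console.log(memoSquare(12)); // 144, served from cache
```

Whether the cache pays for itself depends entirely on how expensive the wrapped function is — hence the engineer's uncertainty.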
The Security Angle
Gartner AI analyst Arun Chandrasekaran offered a measured assessment: while the leak "poses risks such as providing bad actors with possible outlets to bypass guardrails," its long-term impact may be limited. More likely, he told The Verge, it serves as a "call for action for Anthropic to invest more in processes and tools for better operational maturity."
The guardrail concern is real. Understanding how a safety system is implemented is the first step toward circumventing it. But Anthropic's core model weights — the thing that actually drives Claude's behavior — weren't exposed here. Source code for a CLI tool and a model's safety training are very different things.
What Comes Next
Anthropic will patch its release pipeline. The KAIROS and Tamagotchi features will eventually ship, or not, on Anthropic's own timeline — though now with the world watching. And somewhere, an Anthropic engineer is very carefully reading their own code comments before committing.
The Claude Code leak is already being called one of the more significant accidental open-source moments in recent AI history — not for what it exposed about security, but for what it revealed about where the most safety-conscious lab in AI is quietly heading.