Anthropic has publicly accused three Chinese AI companies — DeepSeek, Moonshot AI, and MiniMax — of running large-scale campaigns to extract Claude's capabilities through a technique known as model distillation. The company says the operations together used approximately 24,000 fake accounts to generate more than 16 million queries.
The Scale of the Operation
According to Anthropic, each company ran its distillation campaign independently but at significant scale:
- DeepSeek — Over 150,000 exchanges designed to extract reasoning patterns
- Moonshot AI — Approximately 3.4 million queries targeting specific knowledge domains
- MiniMax — Around 13 million exchanges, the largest of the three campaigns
Model distillation involves systematically querying a more capable AI system to generate training data for a smaller, cheaper model. By asking carefully crafted questions and collecting the responses, a company can effectively transfer knowledge from the target model to its own systems without bearing the cost of original training.
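In its simplest form, a distillation dataset is just prompt/response pairs captured from the stronger model and reformatted as supervised fine-tuning examples for the smaller one. A minimal sketch of that reformatting step (the record fields and helper names are illustrative, not any specific training API):

```python
import json

def to_sft_record(prompt: str, teacher_response: str) -> str:
    """Format one captured exchange as a JSON-lines fine-tuning example."""
    return json.dumps({"messages": [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": teacher_response},
    ]})

def build_dataset(exchanges):
    """Turn a list of (prompt, response) pairs into JSONL training data."""
    return "\n".join(to_sft_record(p, r) for p, r in exchanges)

# Two captured exchanges become two training records.
data = build_dataset([
    ("Explain quicksort.", "Quicksort partitions the array around a pivot..."),
    ("What is a monad?", "A monad is a structure for sequencing computations..."),
])
```

Each line of the resulting JSONL file is one training example; at the scale alleged here, millions of such records would be accumulated before filtering and fine-tuning.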
How It Works
The attack pattern typically involves creating large numbers of API accounts, then running automated scripts that pose diverse questions across many domains. The responses are collected, filtered for quality, and used as training data for the attacker's own models.
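Stripped of its evasion tricks, the loop described above is straightforward: rotate through accounts, issue diverse prompts, collect responses, and keep only the usable ones. A hedged sketch with a stubbed-out API call (`query_model`, the quality filter, and the round-robin rotation are illustrative assumptions, not details Anthropic has disclosed):

```python
import itertools

def query_model(account: str, prompt: str) -> str:
    """Stub standing in for an API call made under a given account."""
    return f"response to: {prompt}"

def passes_quality_filter(response: str) -> bool:
    """Toy filter: keep only responses above a minimum length."""
    return len(response) > 10

def extract(accounts, prompts):
    """Round-robin prompts across accounts, collecting filtered pairs."""
    dataset = []
    account_cycle = itertools.cycle(accounts)  # rotate accounts to spread load
    for prompt in prompts:
        account = next(account_cycle)
        response = query_model(account, prompt)
        if passes_quality_filter(response):
            dataset.append((prompt, response))
    return dataset

pairs = extract(["acct-1", "acct-2"],
                ["Explain TCP slow start.", "Define entropy."])
```

Spreading the load across many accounts is precisely what makes per-account rate limits ineffective against this pattern, which is why detection has to look at aggregate behavior instead.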
Anthropic says it detected the activity through anomalous usage patterns — accounts generating far more queries than typical users, with patterns consistent with automated extraction rather than genuine usage. The fake accounts used various techniques to evade detection, including rotating IP addresses and mimicking human usage patterns.
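Volume-based detection of this kind can be as simple as comparing each account's query count against the population. A minimal sketch (the median-ratio rule and the threshold are assumptions for illustration; Anthropic has not described its actual detectors):

```python
import statistics

def flag_anomalous_accounts(query_counts, ratio=10.0):
    """Flag accounts whose query volume exceeds `ratio` times the median account's."""
    median = statistics.median(query_counts.values())
    return sorted(acct for acct, n in query_counts.items() if n > ratio * median)

# Three typical users and one automated extractor.
counts = {"alice": 120, "bob": 95, "carol": 110, "bot-7": 48_000}
flagged = flag_anomalous_accounts(counts)  # ['bot-7']
```

Real systems would combine volume with the behavioral signals mentioned above (timing regularity, topic coverage, IP rotation), since sophisticated operators deliberately keep per-account volume low.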
The Broader Problem
Model distillation is an open secret in the AI industry. While the technique is well-understood academically, the scale Anthropic describes — tens of millions of queries across thousands of accounts — represents an industrial-level operation.
The accusations raise questions about intellectual property in AI. Training data, compute costs, and research investment are the primary moats for frontier AI companies; if competitors can extract meaningful capability through API access alone, the economics of building frontier models shift significantly. Notably, one of the accused companies, Moonshot AI, recently open-sourced its Kimi Code terminal agent, raising separate questions about the relationship between distillation and open-source development.
What Happens Next
Anthropic has not disclosed whether it plans legal action. The company has terminated the identified accounts and says it has implemented additional detection measures to prevent similar campaigns.
None of the three accused companies have publicly responded to the allegations. The incident is likely to intensify discussions about API access controls, usage monitoring, and the enforceability of terms of service across international boundaries — particularly as China implements new government review requirements for AI models.