Industry

Anthropic Accuses DeepSeek, Moonshot, and MiniMax of Stealing Claude's Capabilities

Michael Ouroumis · 2 min read

Anthropic has publicly accused three Chinese AI companies — DeepSeek, Moonshot AI, and MiniMax — of conducting massive coordinated attacks to extract Claude's capabilities through a technique known as model distillation. The company says the operations used approximately 24,000 fake accounts to generate over 16 million queries.

The Scale of the Operation

According to Anthropic, each company ran its distillation campaign independently, but all three operated at significant scale.

Model distillation involves systematically querying a more capable AI system to generate training data for a smaller, cheaper model. By asking carefully crafted questions and collecting the responses, a company can effectively transfer knowledge from the target model to its own systems without bearing the cost of original training.
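As a rough illustration of that data-collection loop (not Anthropic's described tooling; `query_teacher` is a hypothetical stand-in for a call to the target model's API):

```python
# Illustrative sketch of the distillation data-collection loop.
# `query_teacher` is a hypothetical placeholder — no real API or
# service is contacted here.

def query_teacher(prompt: str) -> str:
    # Placeholder: a real implementation would call the target
    # model's API and return its text response.
    return f"Detailed answer to: {prompt}"

def build_distillation_dataset(prompts: list[str]) -> list[dict]:
    """Collect (prompt, completion) pairs for fine-tuning a student model."""
    dataset = []
    for prompt in prompts:
        response = query_teacher(prompt)
        dataset.append({"prompt": prompt, "completion": response})
    return dataset

pairs = build_distillation_dataset([
    "Explain binary search.",
    "Summarize the causes of World War I.",
])
```

The collected pairs would then serve as supervised fine-tuning data for the smaller model, transferring the teacher's behavior without any access to its weights.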

How It Works

The attack pattern typically involves creating large numbers of API accounts, then running automated scripts that pose diverse questions across many domains. The responses are collected, filtered for quality, and used as training data for the attacker's own models.
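The quality-filtering step can be sketched as a simple heuristic pass; the refusal markers and length cutoff below are illustrative assumptions, not a documented pipeline:

```python
# Toy quality filter over collected (prompt, completion) pairs:
# drop refusals and very short answers before training.
# Markers and the length cutoff are illustrative assumptions.

REFUSAL_MARKERS = ("i can't help", "i cannot assist", "i'm unable to")

def filter_for_quality(pairs: list[dict], min_length: int = 40) -> list[dict]:
    kept = []
    for pair in pairs:
        text = pair["completion"].strip()
        if len(text) < min_length:
            continue  # too short to carry useful training signal
        if any(marker in text.lower() for marker in REFUSAL_MARKERS):
            continue  # refusals teach the student model nothing
        kept.append(pair)
    return kept
```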

Anthropic says it detected the activity through anomalous usage patterns — accounts generating far more queries than typical users, with patterns consistent with automated extraction rather than genuine usage. The fake accounts used various techniques to evade detection, including rotating IP addresses and mimicking human usage patterns.
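A crude version of that volume-based detection can be sketched as follows; the ten-times-median rule is an illustrative assumption, and a production system would also weigh query timing, content diversity, and IP behavior:

```python
from statistics import median

def flag_anomalous_accounts(query_counts: dict[str, int],
                            factor: float = 10.0) -> list[str]:
    # Flag accounts whose query volume is far above the median count —
    # a toy stand-in for the usage-pattern analysis described above.
    typical = median(query_counts.values())
    return [acct for acct, n in query_counts.items() if n > factor * typical]

usage = {"user_a": 120, "user_b": 95, "user_c": 110, "scraper_7": 50_000}
flag_anomalous_accounts(usage)  # flags "scraper_7"
```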

The Broader Problem

Distilling competitors' models through their APIs is an open secret in the AI industry. While the technique is well understood academically, the scale Anthropic describes — tens of millions of queries across tens of thousands of accounts — represents an industrial-level operation.

The accusations raise questions about intellectual property in AI. Notably, one of the accused companies — Moonshot AI — recently open-sourced its Kimi Code terminal agent, raising separate questions about the relationship between distillation and open-source development. Training data, compute costs, and research investment are the primary moats for frontier AI companies. If competitors can extract meaningful capability through API access alone, the economics of building frontier models shift significantly.

What Happens Next

Anthropic has not disclosed whether it plans legal action. The company has terminated the identified accounts and says it has implemented additional detection measures to prevent similar campaigns.

None of the three accused companies have publicly responded to the allegations. The incident is likely to intensify discussions about API access controls, usage monitoring, and the enforceability of terms of service across international boundaries — particularly as China implements new government review requirements for AI models.
