An AI company at the center of how the world's top AI labs get their training data has been breached — and extortion group Lapsus$ is claiming it has the receipts.
Mercor, a $10 billion AI recruiting platform that helps OpenAI and Anthropic hire scientists, doctors, and lawyers to train AI models, confirmed Tuesday that it was hit by a supply chain cyberattack. The attack vector: LiteLLM, a wildly popular open-source library that developers use to talk to AI models from OpenAI, Anthropic, Google, and dozens of others.
The Attack Chain
The breach traces back to last week, when malicious code was discovered inside a package associated with LiteLLM's open-source project. Security firm Snyk noted the library is downloaded millions of times per day, giving the attackers — a hacking group called TeamPCP — an enormous potential blast radius.
LiteLLM identified and removed the malicious code within hours of discovery and has since overhauled its compliance processes, switching from security compliance startup Delve (which had itself become controversial) to Vanta for certifications. But the damage was already done for some companies in its dependency chain.
Mercor told TechCrunch it was "one of thousands of companies" affected by the LiteLLM compromise.
Lapsus$ Claims the Data
Complicating the picture is Lapsus$, the extortion group known for high-profile breaches of Nvidia, Samsung, Microsoft, and Uber. The group posted a claim on its leak site saying it targeted Mercor and obtained access to the company's data.
The group shared a data sample with TechCrunch that included material referencing Slack conversations, ticketing data, and two videos purportedly showing conversations between Mercor's AI systems and contractors working on its platform.
Lapsus$ claims to hold 4TB of Mercor data. Whether it obtained that data directly through the LiteLLM supply chain attack, or through a secondary intrusion enabled by it, is not yet clear. Mercor's spokesperson declined to answer questions about the connection.
What's at Stake
The potential exposure matters beyond Mercor's own business. The company's contractors — many of them domain experts in specialized fields — work directly on AI model training data for some of the world's most powerful AI systems. Conversations between those contractors and AI systems could contain sensitive information about how AI models are trained, what kinds of tasks they're being trained on, and the responses that shape model behavior.
Mercor processes more than $2 million in daily payouts to contractors, suggesting a large and active contractor base. Whether any of their data was exposed remains unconfirmed.
The Broader Warning
The Mercor breach is the LiteLLM incident's most significant confirmed downstream casualty so far — but likely not the last. With the library downloaded millions of times daily, security teams across the AI industry are still auditing whether their systems were exposed.
Supply chain attacks have become one of the most effective attack vectors in modern cybercrime precisely because they compromise trusted dependencies. In the AI industry, where startups often prioritize speed and rely heavily on open-source infrastructure, the attack surface is vast.
"The open-source AI infrastructure stack is now a primary attack surface," noted one security analyst. Mercor is the proof of concept.



