LiteLLM PyPI Package Compromised in Major AI Supply Chain Attack

Michael Ouroumis · 3 min read

A supply chain attack targeting LiteLLM — one of the most widely used open-source libraries for routing requests across AI model providers — was discovered on March 24, 2026, sending shockwaves through the developer community. Security researcher Callum McMahon of Futuresearch found that versions 1.82.7 and 1.82.8 of the package on PyPI contained sophisticated malware capable of stealing credentials, installing persistent backdoors, and laterally spreading across Kubernetes clusters.

How the Attack Was Discovered

The attack came to light when a LiteLLM dependency was pulled into an MCP plugin running inside the Cursor code editor. The malicious package included a .pth file — litellm_init.pth — that executes automatically on every Python process startup when the package is installed. Due to a bug in the malware, the .pth launcher triggered a fork bomb that crashed the machine, inadvertently revealing the compromise.
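The .pth mechanism is what makes this vector so potent: Python's site module processes .pth files in site directories at interpreter startup, and any line beginning with `import` is executed as code. The harmless sketch below demonstrates the behavior; the file name `demo_init.pth` and the environment variable are illustrative, not part of the actual malware.

```python
import os
import site
import tempfile

# .pth files placed in a site directory are processed at interpreter
# startup. Any line that begins with "import" is executed as Python code,
# which is how a malicious package can run a payload in every process.
site_dir = tempfile.mkdtemp()
with open(os.path.join(site_dir, "demo_init.pth"), "w") as f:
    f.write("import os; os.environ['PTH_DEMO_RAN'] = '1'\n")

# site.addsitedir() applies the same .pth processing that happens at startup.
site.addsitedir(site_dir)
print(os.environ.get("PTH_DEMO_RAN"))  # -> 1
```

Because the hook runs before any application code, even a short-lived helper process (a linter, a build script, an editor plugin) is enough to trigger the payload.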

McMahon's analysis found no corresponding tag or release on LiteLLM's official GitHub repository for either affected version, suggesting the packages were uploaded directly to PyPI while bypassing the normal release pipeline. The GitHub issue tracking the incident was subsequently closed as "not planned" and flooded with bot spam, leading McMahon to conclude that the LiteLLM author's account is "very likely fully compromised."

What the Malware Does

The payload operates in three stages. First, it collects sensitive files from the host: SSH private keys; .env files; AWS, GCP, and Azure credentials; Kubernetes configurations; database passwords; shell history; and crypto wallet files. It also queries cloud metadata endpoints to capture instance credentials.
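The categories above map onto well-known default paths. The following sketch is illustrative only: the exact filenames the real payload targets have not been published, so every path here is an assumption based on common credential locations.

```python
from pathlib import Path

# Illustrative only: typical default locations for the credential
# categories the report names. The real payload's target list is unknown.
CANDIDATE_PATHS = [
    ".ssh/id_rsa", ".ssh/id_ed25519",        # SSH private keys
    ".env",                                   # environment/secret files
    ".aws/credentials",                       # AWS
    ".config/gcloud/application_default_credentials.json",  # GCP
    ".azure/accessTokens.json",               # Azure
    ".kube/config",                           # Kubernetes
    ".bash_history", ".zsh_history",          # shell history
]

def find_candidate_secrets(home: Path = Path.home()) -> list[Path]:
    """Return the candidate credential files that exist under home."""
    return [home / rel for rel in CANDIDATE_PATHS if (home / rel).exists()]
```

A scan like this needs no elevated privileges, which is why a package install alone is sufficient for the first stage.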

Collected data is then encrypted with AES-256-CBC, the AES key itself wrapped with a hardcoded 4096-bit RSA public key, and exfiltrated to https://models.litellm.cloud/ — a domain unrelated to legitimate LiteLLM infrastructure.

Finally, if Kubernetes service account tokens are present, the malware reads every secret in every namespace and attempts to deploy a privileged alpine:latest pod in the kube-system namespace on each node, mounting the host filesystem to install a persistent systemd backdoor.
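The cluster-spread stage described above corresponds to a pod spec along the following lines. This is a hypothetical reconstruction for illustration: the actual manifest the malware generates has not been published, and names like `example-backdoor` are invented.

```python
# Hypothetical reconstruction of the kind of pod spec the report describes:
# a privileged alpine:latest pod in kube-system that mounts the host
# filesystem. All names here are illustrative, not taken from the malware.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "example-backdoor", "namespace": "kube-system"},
    "spec": {
        "containers": [{
            "name": "shell",
            "image": "alpine:latest",
            "securityContext": {"privileged": True},
            # Mounting the host's / lets the container write systemd
            # units directly onto the node, creating persistence.
            "volumeMounts": [{"name": "host-root", "mountPath": "/host"}],
        }],
        "volumes": [{"name": "host-root", "hostPath": {"path": "/"}}],
    },
}
```

Admission controls that forbid privileged pods and hostPath volumes (for example, Pod Security Standards at the "restricted" level) would block exactly this pattern.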

Remediation Steps

The compromised versions were yanked from PyPI by the afternoon of March 24 following reports to PyPI's security team. However, anyone who installed LiteLLM during the exposure window should:

- Check the installed version and pin or reinstall a release published before the affected builds
- Treat every credential on the machine as exposed: rotate SSH keys, cloud provider credentials, API keys, and database passwords
- Rotate Kubernetes secrets and service account tokens reachable from the host
- Inspect site-packages for litellm_init.pth, and check hosts for unfamiliar systemd units or unexpected privileged pods
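A quick local check for the affected builds might look like this sketch, using only the version numbers the report names:

```python
from importlib import metadata

# Versions named in the report as compromised.
COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}

def litellm_is_affected() -> bool:
    """Return True if the locally installed litellm is a compromised build."""
    try:
        return metadata.version("litellm") in COMPROMISED_VERSIONS
    except metadata.PackageNotFoundError:
        return False

print(litellm_is_affected())
```

A clean result only means the currently installed version is not one of the named builds; if an affected version was ever installed, credential rotation is still warranted.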

Broader Implications

The incident is being closely watched as a bellwether for AI-era supply chain risk. Nvidia AI Director Jim Fan called it "pure nightmare fuel," warning that AI agents are especially vulnerable because every file in their context window becomes a potential attack vector. A compromised agent with access to email, code repositories, and cloud APIs could impersonate its user across an entire organization.

Fan's suggestion: build lean, audited dependencies rather than relying on sprawling open-source chains. As AI-powered applications proliferate, the attack surface of third-party packages with elevated access to production infrastructure will only grow — making this incident a defining security moment for the industry.
