On March 24, 2026, the AI developer community received a stark reminder of how fragile software supply chains have become. Two versions of litellm — a widely used Python library that serves as a unified proxy for over 100 LLM providers (OpenAI, Anthropic, AWS Bedrock, Google Vertex, and many more) — were compromised on PyPI.
Versions 1.82.7 and 1.82.8 contained malicious code that turned the package into an aggressive credential stealer and Kubernetes lateral-movement tool. The attack was short-lived (the malicious releases were available for roughly 2–5 hours), but given litellm’s massive adoption — millions of daily downloads and heavy use in AI agent frameworks, MCP servers, orchestration tools, and production LLM pipelines — the potential impact is enormous.
This wasn’t typo-squatting or a fake package. It was a direct compromise of the legitimate litellm project on PyPI, attributed to the threat actor TeamPCP (the same group behind recent attacks on Trivy, Checkmarx/KICS, and other security tooling).
LiteLLM acts as the “universal API gateway” for LLMs. Developers import it once and can call any model provider with the same OpenAI-compatible interface. Because it often centralizes API keys and authentication tokens, it frequently runs with broad access to secrets — making it a high-value target.
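In code, that unified interface is a single call shape. The sketch below assumes litellm is installed and provider keys are already exported in the environment; the actual calls are shown commented out because they require live credentials:

```python
# Sketch of litellm's unified, OpenAI-compatible interface.
# Assumes `pip install litellm` and provider keys in the environment
# (e.g. OPENAI_API_KEY, ANTHROPIC_API_KEY): the very secrets that make
# the library such a high-value target.
messages = [{"role": "user", "content": "Hello"}]

# The same call shape reaches any provider; only the model string changes:
#   from litellm import completion
#   completion(model="gpt-4o", messages=messages)                      # OpenAI
#   completion(model="claude-3-5-sonnet-20241022", messages=messages)  # Anthropic
```

Because one process typically holds keys for every provider it can reach, compromising the library compromises all of them at once.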
When the malicious versions were installed (via `pip install litellm` or as a transitive dependency), the backdoor activated silently. Version 1.82.8 was especially dangerous: it shipped with a file called `litellm_init.pth`.

Python automatically executes any `.pth` file found in `site-packages/` at interpreter startup — no `import litellm` required. The attackers abused this built-in mechanism perfectly.
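The mechanism is easy to demonstrate safely. This sketch registers a temporary directory as a site directory (which is what `site.py` does for `site-packages/` at startup) and shows that a single `import` line in a `.pth` file runs arbitrary code; the filename `demo_init.pth` and the environment variable are illustrative only:

```python
# Demonstration of the .pth mechanism the attackers abused.  At startup,
# site.py scans site directories for *.pth files and EXECUTES any line
# that begins with "import" -- before any user code runs.
import os
import site
import tempfile
from pathlib import Path

tmp = tempfile.mkdtemp()
pth = Path(tmp) / "demo_init.pth"
# A single "import X; ..." line is enough to run arbitrary code.
pth.write_text("import os; os.environ['PTH_DEMO_RAN'] = '1'\n")

site.addsitedir(tmp)  # what site.py does for site-packages at interpreter startup
print("executed:", os.environ.get("PTH_DEMO_RAN") == "1")
```

A `.pth` file dropped by `pip install` lands in a real site directory, so in the actual attack this step happened automatically every time any Python interpreter started.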
`litellm_init.pth` (approximately 34 KB, heavily obfuscated with double base64 encoding) launched a child Python process containing the real payload. That payload:

- Harvested SSH keys (`~/.ssh/`) and Kubernetes configs (`~/.kube/`)
- Stole `.env` files, Git credentials, shell histories, database passwords, and crypto wallets (Bitcoin, Ethereum, Solana, etc.)
- Exfiltrated the data to `https://models.litellm.cloud/` (an attacker-controlled domain)
- In Kubernetes clusters, spawned `alpine:latest` pods (named `node-setup-*`) on every node
- Established persistence (`~/.config/sysmon/sysmon.py`) via a systemd user service

Version 1.82.7 instead modified `litellm/proxy/proxy_server.py` but delivered the same stealer functionality. A bug in the malware sometimes caused an exponential fork bomb (the child process re-triggered the `.pth` file), which ironically helped some researchers notice the issue faster when their systems became unresponsive.
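The "double base64" layer is trivial for analysts to peel. A toy illustration (the payload string here is invented, not the actual malware):

```python
# Toy illustration of double base64 obfuscation: encode twice to hide
# strings from naive scanners, decode twice to recover the payload.
import base64

inner = base64.b64encode(b"print('payload')")  # first layer
outer = base64.b64encode(inner)                # second layer

decoded = base64.b64decode(base64.b64decode(outer))
print(decoded)  # b"print('payload')"
```

Obfuscation like this defeats simple string matching but not inspection: anyone diffing the release against the previous version would have seen a 34 KB opaque blob appear.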
By the afternoon, the malicious versions had been removed from PyPI and the project's quarantine was lifted.
Anyone who installed or upgraded to 1.82.7 or 1.82.8 on March 24, 2026 is potentially affected, whether litellm was installed directly, pulled in as a transitive dependency, or resolved fresh by a CI/CD pipeline. If you run litellm in Kubernetes or handle LLM API keys, treat the environment as potentially fully compromised.
Check your installations
Run `pip show litellm` to confirm which version is installed. If you use uv, also search its cache for the dropper: `find ~/.cache/uv -name "litellm_init.pth"`
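For scripted fleet-wide checks, the version can be read from package metadata without importing litellm (importing a compromised copy would execute its code). A small sketch; the helper name is my own:

```python
# Hedged helper: flag the known-bad litellm releases via metadata only,
# without importing the package itself.
from importlib.metadata import PackageNotFoundError, version

COMPROMISED = {"1.82.7", "1.82.8"}

def litellm_status() -> str:
    """Report whether the installed litellm is a known-compromised release."""
    try:
        v = version("litellm")
    except PackageNotFoundError:
        return "litellm not installed"
    return f"COMPROMISED ({v})" if v in COMPROMISED else f"ok ({v})"

print(litellm_status())
```

Note that for 1.82.8 the `.pth` file runs at interpreter startup regardless, so a metadata check tells you whether you were exposed, not whether you are currently safe.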
Pin to a safe version
Add `litellm<=1.82.6` to your requirements or constraints file until you have verified a clean later release.
Rotate EVERY credential that existed on any affected system, including anything stored in `.env` files or shell history.

Hunt for persistence:

- Check for `~/.config/sysmon/sysmon.py` and the corresponding systemd user service
- Inspect the `kube-system` namespace for `node-setup-*` pods and review secret access logs

Purge caches
Run `pip cache purge` (or `rm -rf ~/.cache/pip ~/.cache/uv`) so a cached malicious wheel cannot be silently reinstalled.
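The persistence hunt above can be scripted as a quick triage pass. This is a sketch: the paths and pod names come from the public indicators, and you should extend it with your own IOCs:

```python
# Quick triage for the reported persistence artifacts.
import shutil
import subprocess
from pathlib import Path

# 1. The user-level dropper reported in the disclosures.
dropper = Path.home() / ".config" / "sysmon" / "sysmon.py"
print(f"sysmon.py dropper present: {dropper.exists()}")

# 2. Attacker pods in kube-system (only if kubectl is on PATH).
suspicious_pods = []
if shutil.which("kubectl"):
    out = subprocess.run(
        ["kubectl", "get", "pods", "-n", "kube-system", "-o", "name"],
        capture_output=True, text=True,
    ).stdout
    suspicious_pods = [p for p in out.splitlines() if "node-setup-" in p]
print(f"node-setup-* pods in kube-system: {suspicious_pods or 'none detected'}")
```

A hit on either check means the host or cluster should be treated as compromised, not merely cleaned.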
This incident is the latest in TeamPCP’s campaign targeting security tooling and AI infrastructure, and it highlights an uncomfortable truth:
Supply-chain attacks on AI libraries are no longer theoretical — they target the exact layer where your most sensitive secrets live.
Pin your dependencies aggressively. Review your CI/CD pipelines for external scanners and tools. Assume any popular AI package could be next. And if you installed litellm 1.82.7 or 1.82.8 yesterday — rotate those keys immediately.
The packages have been removed, but any secrets they stole may already be in the attackers’ hands.
Stay vigilant. The AI supply chain just became significantly more dangerous.
This post is based on public disclosures from BerriAI, FutureSearch, Sonatype, Endor Labs, Snyk, ARMO, and other security researchers.