A Few Hours on PyPI Almost Exposed Everything: The LiteLLM Supply Chain Attack

Two malicious LiteLLM versions hit PyPI for a few hours. They tried to steal your API keys. Here's what happened and what to do about it.

On March 24, 2026, two versions of LiteLLM were published to PyPI. They looked normal. They weren't.

Those versions contained code designed to steal your API keys, database credentials, and SSH keys. They were live for a few hours. Then they were pulled.

A few hours. That's the window between "nothing happened" and "we need to file a breach report."

What LiteLLM Is and Why This Is a Big Deal

If you haven't used LiteLLM, here's the short version: it's an abstraction layer that lets you call OpenAI, Anthropic, and other LLM providers through a single unified interface. Simple. Useful. Widely adopted.

It also typically runs close to your most sensitive systems. Environment variables full of API keys. Database connections. Kubernetes clusters. CI/CD pipelines. The thing that makes LiteLLM useful is exactly what makes it a perfect attack vector: it sits at the center of your AI infrastructure with access to everything.

What Happened

Two malicious versions hit PyPI:

  • litellm==1.82.7
  • litellm==1.82.8

The injected code collected sensitive data from the host machine and sent it to attacker-controlled infrastructure. Both versions were available for only a few hours on March 24 before being identified and removed.

Not affected: older pinned versions, GitHub installs, official Docker images, and LiteLLM Cloud.

This wasn't a widespread catastrophe. But it absolutely could have been.

Are You Affected?

You were at risk if you installed one of those exact versions during the window, or if your build pipeline automatically pulls latest dependencies without version pinning.
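One quick way to find out is to check the installed version against the two known-bad releases. A minimal sketch (the helper names here are illustrative, not part of any official tooling):

```python
from importlib.metadata import PackageNotFoundError, version

# The two versions identified as malicious in this incident
COMPROMISED = {"1.82.7", "1.82.8"}

def is_compromised(installed_version: str) -> bool:
    """True if the given litellm version string is one of the bad releases."""
    return installed_version in COMPROMISED

def check_installed_litellm() -> bool:
    """Check the litellm version in the current environment, if any."""
    try:
        return is_compromised(version("litellm"))
    except PackageNotFoundError:
        return False  # litellm isn't installed here

if __name__ == "__main__":
    if check_installed_litellm():
        print("Compromised litellm version installed. Rotate credentials now.")
    else:
        print("No known-bad litellm version found in this environment.")
```

Run it in every environment that installs litellm: developer machines, CI runners, and production images, not just your laptop.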

If that's you, rotate everything. Now.

  • API keys for every LLM provider
  • Database credentials
  • Cloud access tokens
  • SSH keys

Don't gamble on "it probably didn't run." Assume it did. Rotate and move on.

This Isn't Really About LiteLLM

Here's the part that matters more than the incident itself.

The attackers didn't target your application. They targeted something your application trusts. The trust chain looks like this:

Your App → Dependency → Package Registry → Maintainer → Infrastructure

Break any single link and you're exposed. This is the same category of attack as SolarWinds and the Codecov Bash Uploader breach. The difference: now it's hitting AI infrastructure.

That's a different level of risk.

Why AI Tooling Is a Bigger Target

AI libraries aren't like a random frontend utility package. They sit in the center of your system. They often have access to multiple API providers, internal data pipelines, prompt logs, user data, and cloud credentials.

A compromised AI library becomes a central extraction point. One dependency, access to everything.

Most teams still treat these libraries like "just another package." They're not. They're production infrastructure with privileged access, and they need to be secured like it.

What You Should Do

You don't need a 50-person security team. You need a few non-negotiable habits.

Pin Your Dependencies

If your requirements file says litellm without a version number, you're gambling. Every install pulls whatever is latest on PyPI, including a compromised version that's been live for 30 minutes.

# Don't do this
litellm

# Do this
litellm==1.82.6

Pin versions in requirements.txt, pyproject.toml, Docker builds, and CI/CD pipelines. Everywhere.
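You can enforce this with a small check in CI. A minimal sketch of a scanner that flags any requirement line without an exact `==` pin (the function name is my own, not a standard tool):

```python
def find_unpinned(requirements_text: str) -> list[str]:
    """Return requirement lines that are not pinned with an exact '=='."""
    unpinned = []
    for raw in requirements_text.splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line or line.startswith("-"):  # skip blanks and pip options like -r
            continue
        if "==" not in line:
            unpinned.append(line)
    return unpinned

if __name__ == "__main__":
    reqs = "litellm\nrequests==2.31.0\nhttpx>=0.27\n"
    for name in find_unpinned(reqs):
        print(f"unpinned dependency: {name}")
```

Fail the build when the list is non-empty, and floating dependencies can't sneak back in through a later edit.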

Lock Down Your Build Pipeline

Your pipeline should not pull latest packages blindly, rebuild with floating dependencies, or trust upstream changes automatically. Use cached builds, dependency locking, and image scanning.

If your CI runs pip install -r requirements.txt and those requirements aren't pinned, every build is a roll of the dice.
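Version pins alone don't stop a re-published artifact under the same version number; hash pinning does (this is what `pip install --require-hashes` checks). A minimal sketch of the underlying comparison:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """True only if the downloaded package matches the hash in your lock file."""
    return sha256_of(path) == expected_sha256
```

In practice you'd let pip or a lock-file tool do this for you; the point is that the build compares what it downloaded against a hash you committed, not against whatever the registry serves today.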

Treat AI Infrastructure Like Production Infrastructure

This is the mindset shift. AI tooling is no longer experimental. It's handling real data. It's sitting in production paths. It's holding real secrets.

That means: use a secrets manager instead of environment variables. Limit credential scope to what each service actually needs. Stop dumping every API key into a single .env file that every process can read.
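The scoping idea can be sketched in a few lines. `SecretStore` here is a hypothetical stand-in for a real secrets-manager client (Vault, AWS Secrets Manager, and similar), not an actual API:

```python
class SecretStore:
    """Hypothetical stand-in for a secrets-manager client. Each service is
    handed a store scoped to its own secrets, instead of reading a
    process-wide .env file that holds every credential."""

    def __init__(self, secrets: dict[str, str]):
        self._secrets = dict(secrets)

    def get(self, name: str) -> str:
        if name not in self._secrets:
            # Fail loudly: this service was never granted that credential
            raise KeyError(f"secret {name!r} not provisioned for this service")
        return self._secrets[name]

# The LLM gateway sees only its provider key, nothing else
gateway_secrets = SecretStore({"OPENAI_API_KEY": "sk-example"})
```

With this shape, a compromised dependency running inside the gateway process can steal one key, not your database password, your cloud tokens, and everything else at once.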

Set Up Dependency Monitoring

Tools like Dependabot, Snyk, and Socket can alert you when a dependency publishes a suspicious update. You won't catch every attack this way, but you'll catch the obvious ones, and you'll know about new versions before your pipeline blindly installs them.

The Pattern to Watch

We're going to see more of this. Not less.

AI tooling is becoming central, powerful, widely adopted, and in many organizations, poorly secured. That combination attracts attention.

The LiteLLM incident lasted a few hours. That was enough to potentially expose entire systems. The difference between "nothing happened" and "this is a breach report" comes down to boring habits: pin your dependencies, rotate your secrets, treat your infrastructure seriously.

Do the boring things well, and the next compromised package becomes a non-event instead of an incident.
