LiteLLM Hit by a Supply Chain Attack: 97 Million Monthly Downloads, Credential Stealer Inside

hackernews | 📰 News
#ai #anthropic #claude #gemini #litellm #mistral #openai #pypi #supply chain attack #security vulnerability #credential theft
Original source: hackernews · Summarized and analyzed by Genesis Park

Summary

In March 2026, the latest versions of LiteLLM, a popular AI development tool, were distributed with credential-stealing malware after a supply chain attack by the threat actor TeamPCP. It was a sophisticated multi-stage operation: the attackers compromised tools from Aqua Security and then Checkmarx, and used a LiteLLM maintainer's stolen PyPI credentials to publish the poisoned packages. The malware harvests sensitive data such as SSH keys, cloud credentials, and cryptocurrency wallets merely by being installed, with no explicit execution required, and affected users must rotate every password and key immediately. The incident shows that even a library with 97 million monthly downloads can be a target, and a movement to cut dependencies and shrink attack surface is already visible, with major frameworks such as CrewAI dropping LiteLLM.

Body

On March 24, 2026, LiteLLM versions 1.82.7 and 1.82.8 were published to PyPI containing a credential-stealing backdoor. The package has 97 million monthly downloads. Andrej Karpathy called it "software horror." He's right.

What Happened

A threat actor called TeamPCP executed a cascading supply chain attack:

- March 19 — Compromised Aqua Security's Trivy vulnerability scanner
- March 23 — Used Trivy access to compromise Checkmarx's KICS GitHub Action
- March 24 — Used the KICS CI/CD compromise to steal a LiteLLM maintainer's PyPI credentials
- March 24 — Published poisoned LiteLLM packages to PyPI

The payload is a .pth file (litellm_init.pth) that executes automatically on every Python process startup. You don't need to import LiteLLM. You don't need to call it. If it's installed, it runs.

What It Steals

Everything:

- SSH keys
- AWS, GCP, and Azure credentials
- Kubernetes configs and secrets
- Crypto wallets
- .env files (all your API keys)
- Git credentials
- Shell history
- SSL private keys
- CI/CD secrets
- Database connection strings

The stolen data is encrypted and exfiltrated to models.litellm.cloud — a lookalike domain designed to blend into network logs. In Kubernetes environments, it deploys privileged pods to every node for lateral movement. It installs a persistent systemd backdoor.

Who's Affected

LiteLLM has ~3.4 million daily downloads. But the real danger is transitive dependencies. LiteLLM is pulled in by:

- CrewAI — Multi-agent framework
- DSPy — Programming framework for LLMs
- MLflow — ML experiment tracking (emergency-pinned)

Check for the backdoor:

ls ~/.config/sysmon/sysmon.py 2>/dev/null
ls ~/.config/systemd/user/sysmon.service 2>/dev/null

In Kubernetes:

kubectl get pods -n kube-system | grep node-setup

If you find any of these, rotate every credential immediately: SSH keys, cloud IAM, API keys, database passwords, everything.
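The .pth auto-execution described above is a standard CPython feature, not a bug being exploited: the site module executes any .pth line that begins with "import" when it scans a site directory. A harmless sketch, using site.addsitedir to trigger the same processing that site-packages gets at startup (file and message names here are made up):

```shell
# Harmless demo of .pth auto-execution, the mechanism litellm_init.pth abused.
# site.addsitedir() runs the same .pth processing that site-packages receives
# at interpreter startup: lines beginning with "import" are executed.
d=$(mktemp -d)
printf 'import sys; sys.stdout.write("pth code ran\\n")\n' > "$d/demo_init.pth"

# Nothing from $d is ever imported, yet the .pth line still executes.
out=$(python3 -c "import site; site.addsitedir('$d')")
echo "$out"
rm -rf "$d"
```

In the real attack the file sat in site-packages, so every Python process on the machine triggered it — which is why installation alone was enough.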
CrewAI Already Dropped LiteLLM

CrewAI published a removal guide the same day, pushing native SDK integrations for OpenAI, Anthropic, Google Gemini, Azure, and Bedrock. Their message: "Fewer packages = fewer supply-chain risks + better performance." They're right. Every dependency is an attack surface.

The Bigger Problem

LiteLLM is the most popular open-source LLM proxy. It's what developers use to avoid vendor lock-in — route requests to any LLM through one interface. But here's the thing: the tool you use to avoid lock-in just became the attack vector. The irony is brutal.

This isn't a one-off. TeamPCP executed three supply chain attacks in five days (Trivy → KICS → LiteLLM). A related CanisterWorm is spreading across 47 npm packages. The AI tooling supply chain is under active, coordinated attack.

Managed Platforms Don't Get pip-Installed

A managed LLM proxy like HexaClaw doesn't exist in your dependency tree. There's no package to compromise. There's no .pth file to inject. There's no transitive dependency chain. You make an HTTPS request to an API endpoint. That's it. The routing, the provider connections, the credential management — all handled server-side, behind authentication, with no code running in your environment.

HexaClaw routes to 41 models across 8 providers (Anthropic, OpenAI, Google, DeepSeek, Mistral, Groq, Cohere, xAI) through one API key. Same multi-provider access that LiteLLM offered, without the 97-million-download attack surface.

# This is your entire "dependency"
curl -X POST https://api.hexaclaw.com/v1/chat/completions \
  -H "Authorization: Bearer hx_your_key" \
  -d '{
    "model": "claude-sonnet-4-6",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

No pip install. No package.json. No supply chain.

Anthropic Gets It

From Anthropic's engineering blog, published this week: "An agent with three focused tools will outperform an agent with fifteen tools for any given task." Fewer dependencies. Fewer tools. Fewer attack surfaces.
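Because LiteLLM usually arrives as a transitive dependency, it's worth checking whether anything in your environment declares it. A minimal sketch using only the standard library (importlib.metadata); note this lists direct dependents among installed packages, not the full transitive graph:

```shell
# Does anything installed in this environment depend on litellm?
out=$(python3 - <<'EOF'
import importlib.metadata as md

hits = []
for dist in md.distributions():
    # dist.requires is the list of declared requirement strings (or None).
    for req in (dist.requires or []):
        if req.lower().startswith("litellm"):
            hits.append(dist.metadata["Name"])

if hits:
    print("litellm is pulled in by:", ", ".join(sorted(set(hits))))
else:
    print("nothing here depends on litellm")
EOF
)
echo "$out"
```

For a full transitive view, a dedicated tool such as pipdeptree gives the whole dependency tree rather than one level of declarations.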
The consolidation trend isn't just about performance — it's about security.

What This Changes

The LiteLLM attack will accelerate three trends:

- Framework vendors dropping LiteLLM — CrewAI already did. Expect DSPy and others to follow.
- Managed proxies over self-hosted — The security burden of self-hosting just became viscerally real.
- Dependency auditing becoming mandatory — pip install is no longer a safe operation without verification.

If you're building AI agents in 2026, the days of casually pip-installing proxy libraries are over. Either audit every dependency, or use a managed service that keeps the routing server-side.

HexaClaw provides managed LLM routing to 41 models, plus image/video generation, browser automation, memory, and compute — all through one API key. No packages to install. No supply chain to worry about.

Get started free — 1,000 credits, no credit card required.
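The "verification" above can be made concrete. pip's hash-checking mode (pip install --require-hashes, with digests generated by e.g. pip-compile --generate-hashes) refuses any artifact whose SHA-256 differs from the pinned digest. A stand-in file sketches the check itself:

```shell
# Sketch of the check pip performs in --require-hashes mode: recompute the
# artifact's SHA-256 and reject it on mismatch. The "wheel" is a stand-in file.
sha() { python3 -c "import hashlib,sys; print(hashlib.sha256(open(sys.argv[1],'rb').read()).hexdigest())" "$1"; }

wheel=$(mktemp)
printf 'wheel bytes' > "$wheel"
pinned=$(sha "$wheel")              # digest recorded in the lockfile
printf ' backdoor' >> "$wheel"      # artifact tampered with after pinning
actual=$(sha "$wheel")

if [ "$actual" = "$pinned" ]; then verdict="hash OK"; else verdict="REJECTED: hash mismatch"; fi
echo "$verdict"
rm -f "$wheel"
```

Hash pinning would not have stopped this particular attack at publish time (the poisoned wheels were signed into PyPI with valid maintainer credentials), but it does stop silent substitution of artifacts you have already audited and locked.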

This analysis was written by the Genesis Park editorial team with the help of AI. The original article is available via the source link.
