The LiteLLM Supply Chain Attack, and Why Secrets Shouldn't Survive
hackernews
📰 News
#litellm
#pypi
#supply-chain-attack
#security-vulnerability
#malware
#vulnerability/security
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
The hacker group TeamPCP tampered with the vulnerability scanner Trivy to hijack LiteLLM's CI/CD pipeline, then published malicious versions (v1.82.7, v1.82.8) to PyPI on March 24, 2026. The malicious code harvested every credential it could reach in the filesystem and environment variables — AWS and GCP tokens, SSH keys, Kubernetes configs — and exfiltrated them. The author argues that storing secrets in environment variables and plaintext files is a critical design flaw that does nothing to contain the spread of such an attack, and proposes safer architectures such as ephemeral filesystem-based secret storage.
Article
Yesterday, March 24, 2026, a threat actor group calling themselves TeamPCP published two malicious versions of LiteLLM to PyPI. If you're running LiteLLM in production (and a lot of people are, since it's the most popular LLM proxy gateway in the Python ecosystem), you need to understand what happened, what it took, and why the architecture decisions you made six months ago just became the most important factor in your incident response.

In this post I'll break down the attack, then walk through the strategies I use to limit the damage from exactly this class of compromise: ephemeral filesystem secrets via the Kubernetes Secrets Store CSI Driver, the case against environment variables as secret storage, honeypot credentials for detection, and network-level blast radius containment.

What Actually Happened

Already know the details? Skip to the analysis.

TeamPCP didn't hack LiteLLM directly. They compromised the supply chain upstream of LiteLLM by first poisoning Aqua Security's Trivy scanner, a widely trusted open-source vulnerability scanning tool, back on March 1st. LiteLLM's CI/CD pipeline installed Trivy without version pinning:

```shell
# ci_cd/security_scans.sh
# This is what got them owned
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh
trivy fs --exit-code 1 .
```

The poisoned Trivy binary ran during CI, harvested the pipeline's environment variables (including PyPI upload credentials), and exfiltrated them. On March 23rd, the attackers registered the lookalike domain litellm.cloud (the legitimate site is litellm.ai). By 08:30 UTC on March 24th, two malicious packages were live on PyPI:

- v1.82.7 - Payload injected into `litellm/proxy/proxy_server.py`, triggered on importing `litellm.proxy`
- v1.82.8 - Payload delivered via `litellm_init.pth`, triggered on any Python startup. You didn't even need to import litellm. If it was installed in the environment, it ran.

Neither version existed on GitHub.
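The fix for this particular foothold is mechanical: never pipe an unpinned install script from a moving branch into `sh`. A minimal sketch of a hardened CI install step — the version, digest, and release URL layout below are illustrative placeholders, not real Trivy release values; take the real ones from an audited release page:

```shell
# Sketch only: TRIVY_VERSION, TRIVY_SHA256, and the download URL are
# placeholders for illustration, not verified Trivy release values.
TRIVY_VERSION="v0.50.0"
TRIVY_SHA256="replace-with-the-published-release-digest"

# Fail closed if a downloaded artifact doesn't match its pinned digest.
verify_checksum() {
  actual="$(sha256sum "$1" | awk '{print $1}')"
  if [ "$actual" != "$2" ]; then
    echo "checksum mismatch for $1: got $actual" >&2
    return 1
  fi
}

install_trivy() {
  # Download one specific tagged release, not `main`; verify before extracting.
  curl -sfLo trivy.tar.gz \
    "https://github.com/aquasecurity/trivy/releases/download/${TRIVY_VERSION}/trivy.tar.gz"
  verify_checksum trivy.tar.gz "$TRIVY_SHA256"
  tar -xzf trivy.tar.gz trivy
}
```

Pinning alone wouldn't have saved a pipeline that was compromised on March 1st and kept running the same version, but it turns a silent upstream swap into a loud checksum failure.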
The GitHub releases only reached v1.82.6.dev1. This was a pure PyPI-only attack.

The Payload: A Credential Vacuum

The malicious code was a three-stage credential harvester. It vacuumed up everything it could find on the filesystem:

- SSH keys (`~/.ssh/id_rsa`, `id_ed25519`, `config`)
- AWS credentials (`~/.aws/credentials`, `~/.aws/config`)
- GCP and Azure tokens
- Docker configs
- Kubernetes service account tokens and kubeconfig files
- `.env` files from every common application directory (`/home`, `/root`, `/opt`, `/srv`, `/var/www`, `/app`, `/data`)
- `credentials.json`, `secrets.json`, service account key files
- `/etc/shadow` and SSL private keys
- Shell history, git configs, npm tokens, PyPI tokens
- Cryptocurrency wallet data

Everything was encrypted with a random AES-256 session key, the session key encrypted with a hardcoded RSA-4096 public key, packaged into `tpcp.tar.gz`, and POSTed to `https://models.litellm.cloud/`.

The more sophisticated v1.82.8 variant went further: it deployed a Kubernetes lateral movement toolkit that spawned privileged pods across every cluster node, and installed a persistent systemd backdoor polling an external C2 for additional binaries.

If you installed litellm v1.82.7 or v1.82.8 from PyPI between 08:30 UTC and ~14:00 UTC on March 24, 2026, assume every credential accessible from that environment is compromised. PyPI has quarantined the entire litellm package. Google Mandiant has been engaged.

The Pattern We Keep Ignoring

This is the same pattern we see over and over. The attacker gets code execution (supply chain compromise, RCE, SSRF, deserialization bug), and the first thing they do is scrape the filesystem and environment for secrets. It's the lowest-hanging fruit in the post-exploitation playbook, because we keep putting secrets in plaintext where any process can read them.
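Because the two bad releases are known exactly, first-pass triage is scriptable. A minimal sketch — the version list comes from the incident details above; the helper names and verdict strings are mine:

```python
# Known-bad litellm releases per the incident write-up. Everything else in
# this snippet is an illustrative sketch, not an official IOC tool.
MALICIOUS_VERSIONS = {"1.82.7", "1.82.8"}


def is_compromised(installed_version: str) -> bool:
    """True if the installed litellm version is one of the malicious releases."""
    return installed_version.lstrip("v") in MALICIOUS_VERSIONS


def triage(installed_version: str) -> str:
    """Map an installed version string to a first-pass incident-response verdict."""
    if is_compromised(installed_version):
        return "assume breach: rotate every credential reachable from this host"
    return "not a known-bad version: still verify the install against GitHub tags"
```

In a real environment you'd feed in the installed version from `importlib.metadata.version("litellm")` on each affected host, and treat a match as the start of rotation, not the end of the investigation.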
The LiteLLM payload explicitly targeted:

- `~/.aws/credentials` - long-lived IAM access keys sitting in INI files
- `.env` files - the universal "dump everything here" pattern
- Environment variables via `printenv` - the payload ran a raw `printenv` and captured the entire output. If you're injecting secrets as environment variables (the approach Kubernetes recommends by default with `secretKeyRef`, and the approach most 12-factor apps use), every one of those secrets was captured by a single shell command.
- Kubernetes Secrets - base64-encoded (not encrypted) in etcd by default
- Service account key files - JSON files with permanent credentials

Environment variables deserve special attention here because they're the mechanism most operators reach for first. The Twelve-Factor App methodology popularized ENV as the canonical way to inject configuration, and Kubernetes' `envFrom` makes it trivially easy. But environment variables are visible to every process in the container, trivially dumped by any subprocess via `printenv` or `/proc/<pid>/environ`, and frequently leaked into logs, crash dumps, and error reporters. Stop treating them as secret storage.

Every one of these (flat files, env vars, base64-encoded K8s Secrets) is readable by
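The single-shell-command point above is easy to verify yourself. A small sketch — the variable name and value are fake, for demonstration only:

```python
import os
import subprocess

# A fake secret, injected the way ENV-based secret delivery would inject it.
os.environ["FAKE_API_KEY"] = "sk-demo-not-a-real-secret"

# Any child process -- here, a plain `printenv` -- inherits the parent's
# full environment with no special privileges. This inheritance is exactly
# the capture mechanism the payload relied on.
dumped = subprocess.run(
    ["printenv", "FAKE_API_KEY"], capture_output=True, text=True
).stdout.strip()

print(dumped)  # the injected "secret", read back by an arbitrary child process
```

Nothing in the container had to be exploited for that read: the child process simply asked.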
This analysis was written by the Genesis Park editorial team with the help of AI. The original article is available via the source link.